More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning
Xinyang Yi*† Zhaoran Wang*‡ Zhuoran Yang*‡ Constantine Caramanis† Han Liu‡
† The University of Texas at Austin   ‡ Princeton University
{yixy,constantine}@utexas.edu   {zhaoran,zy6,hanliu}@princeton.edu
(*: equal contribution)
Abstract
We consider the weakly supervised binary classification problem where the labels are randomly flipped with probability 1 − α. Although there exist numerous algorithms for this problem, it remains theoretically unexplored how the statistical accuracy and computational efficiency of these algorithms depend on the degree of supervision, which is quantified by α. In this paper, we characterize the effect of α by establishing the information-theoretic and computational boundaries, namely, the minimax-optimal statistical accuracy that can be achieved by all algorithms, and by polynomial-time algorithms under an oracle computational model. For small α, our result shows a gap between these two boundaries, which represents the computational price of achieving the information-theoretic boundary due to the lack of supervision. Interestingly, we also show that this gap narrows as α increases. In other words, having more supervision, i.e., more correct labels, not only improves the optimal statistical accuracy as expected, but also enhances the computational efficiency for achieving such accuracy.
1 Introduction
Practical classification problems usually involve corrupted labels. Specifically, let {(x_i, z_i)}_{i=1}^n be n independent data points, where x_i ∈ R^d is the covariate vector and z_i ∈ {0, 1} is the uncorrupted label. Instead of observing {(x_i, z_i)}_{i=1}^n, we observe {(x_i, y_i)}_{i=1}^n, in which y_i is the corrupted label. In detail, with probability 1 − α, y_i is chosen uniformly at random over {0, 1}, and with probability α, y_i = z_i. Here α ∈ [0, 1] quantifies the degree of supervision: a larger α indicates more supervision, since we have more uncorrupted labels in this case. In this paper, we are particularly interested in the effect of α on the statistical accuracy and computational efficiency of parameter estimation in this problem, particularly in the high dimensional setting where the dimension d is much larger than the sample size n.
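To make the observation model concrete, the following sketch (our own illustration, not from the paper; the function name and default values are ours) simulates the corruption mechanism in the special case Σ = I:

```python
# A minimal sketch (not from the paper) simulating the label-corruption model:
# with probability alpha the observed label equals the true label, and with
# probability 1 - alpha it is replaced by a uniformly random label.
import numpy as np

def sample_weakly_supervised(n, d, s, alpha, signal=1.0, rng=None):
    """Draw n samples (x_i, y_i) from the Gaussian model with Sigma = I."""
    rng = np.random.default_rng(rng)
    mu0 = np.zeros(d)
    mu1 = np.zeros(d)
    mu1[:s] = signal / np.sqrt(s)           # s-sparse mean difference
    z = rng.integers(0, 2, size=n)          # true labels, P(Z=0)=P(Z=1)=1/2
    x = rng.standard_normal((n, d)) + np.where(z[:, None] == 1, mu1, mu0)
    corrupted = rng.random(n) > alpha       # with prob. 1 - alpha, corrupt
    y = np.where(corrupted, rng.integers(0, 2, size=n), z)
    return x, y, z

x, y, z = sample_weakly_supervised(n=1000, d=50, s=5, alpha=0.7, rng=0)
print("fraction of observed labels equal to true labels:",
      np.mean(y == z))                      # approximately (1 + 0.7)/2 = 0.85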
There exists a vast body of literature on binary classification with corrupted labels. In particular, the study of randomly perturbed labels dates back to [1] in the context of the random classification noise model; see, e.g., [12, 20] for a survey. Classification with missing labels has also been extensively studied in the context of semi-supervised or weakly supervised learning by [14, 17, 21], among others. Despite the extensive study of this problem, its information-theoretic and computational boundaries remain theoretically unexplored. In a nutshell, the information-theoretic boundary refers to the optimal statistical accuracy achievable by any algorithm, while the computational boundary refers to the optimal statistical accuracy achievable by algorithms under a computational budget that is polynomial in the problem scale (d, n). Moreover, it remains unclear how these two boundaries vary with α. One interesting question to ask is
how the degree of supervision affects the fundamental statistical and computational difficulties of this problem, especially in the high dimensional regime.
In this paper, we sharply characterize both the information-theoretic and computational boundaries of the weakly supervised binary classification problem under the minimax framework. Specifically, we consider the Gaussian generative model where X | Z = z ∼ N(μ_z, Σ) and z ∈ {0, 1} is the true label. Suppose {(x_i, z_i)}_{i=1}^n are n independent samples of (X, Z). We assume that {y_i}_{i=1}^n are generated from {z_i}_{i=1}^n in the aforementioned manner. We focus on the high dimensional regime, where d ≫ n and μ_1 − μ_0 is s-sparse, i.e., μ_1 − μ_0 has s nonzero entries. We are interested in estimating μ_1 − μ_0 from the observed samples {(x_i, y_i)}_{i=1}^n. By a standard reduction argument [24], the fundamental limits of this estimation task are captured by a hypothesis testing problem, namely,

H_0: μ_1 − μ_0 = 0  versus  H_1: μ_1 − μ_0 is s-sparse and (μ_1 − μ_0)^⊤ Σ^{−1} (μ_1 − μ_0) := γ_n > 0,   (1.1)

where γ_n denotes the signal strength, which scales with n. Consequently, we focus on studying the fundamental limits of γ_n for solving this hypothesis testing problem.
[Figure 1 here. The figure plots the degree of supervision α ∈ [0, 1] on the horizontal axis against the signal strength γ_n on the vertical axis, and is divided into three regions: Impossible, where γ_n = o[√(s log d/n) ∧ (1/α² · s log d/n)]; Intractable, where γ_n = Ω[√(s log d/n) ∧ (1/α² · s log d/n)] and γ_n = o[√(s²/n) ∧ (1/α² · s log d/n)]; and Efficient, where γ_n = Ω[√(s²/n) ∧ (1/α² · s log d/n)].]
Figure 1: Computational-statistical phase transitions for weakly supervised binary classification. Here α denotes the degree of supervision, i.e., the label is corrupted to be uniformly random with probability 1 − α, and γ_n is the signal strength, which is defined in (1.1). Here a ∧ b denotes min{a, b}.
Our main results are illustrated in Figure 1. Specifically, we identify the impossible, intractable, and efficient regimes for the statistical-computational phase transitions under certain regularity conditions.
(i) For γ_n = o[√(s log d/n) ∧ (1/α² · s log d/n)], any algorithm is asymptotically powerless in solving the hypothesis testing problem.
(ii) For γ_n = Ω[√(s log d/n) ∧ (1/α² · s log d/n)] and γ_n = o[√(s²/n) ∧ (1/α² · s log d/n)], any tractable algorithm that has a polynomial oracle complexity under an extension of the statistical query model [18] is asymptotically powerless. We rigorously define the computational model in §2.
(iii) For γ_n = Ω[√(s²/n) ∧ (1/α² · s log d/n)], there is an efficient algorithm with a polynomial oracle complexity that is asymptotically powerful in solving the testing problem.
Here √(s log d/n) ∧ (1/α² · s log d/n) gives the information-theoretic boundary, while √(s²/n) ∧ (1/α² · s log d/n) gives the computational boundary. Moreover, by a reduction from the estimation problem to the testing problem, these boundaries for testing imply the corresponding ones for estimating μ_1 − μ_0 as well.
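For intuition, the following sketch (our own; it drops all constants, exactly as the asymptotic statements do) evaluates the two boundaries and reports which regime a given (γ_n, α) pair falls into:

```python
# A sketch (ours, not from the paper) of the two boundaries in Figure 1,
# with a ∧ b = min{a, b}; constants are ignored, as in the asymptotics.
import numpy as np

def boundaries(n, d, s, alpha):
    base = s * np.log(d) / n
    info = min(np.sqrt(base), base / alpha**2)        # information-theoretic
    comp = min(np.sqrt(s**2 / n), base / alpha**2)    # computational
    return info, comp

def regime(gamma_n, n, d, s, alpha):
    info, comp = boundaries(n, d, s, alpha)
    if gamma_n < info:
        return "impossible"
    if gamma_n < comp:
        return "intractable"
    return "efficient"

# As alpha grows, the gap between the two boundaries shrinks and vanishes:
for a in [0.1, 0.3, 1.0]:
    print(a, boundaries(n=10**4, d=10**3, s=10, alpha=a))
```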
Consequently, there exists a significant gap between the computational and information-theoretic boundaries for small α. In other words, to achieve the information-theoretic boundary, one has to pay the price of intractable computation. As α tends to one, this gap between the computational and information-theoretic boundaries narrows and eventually vanishes. This indicates that having more supervision not only improves the statistical accuracy, as shown by the decay of the information-theoretic boundary in Figure 1, but, more importantly, enhances the computational efficiency by reducing the computational price of attaining information-theoretic optimality. This phenomenon, "more supervision, less computation," is observed for the first time in this paper.
1.1 More Related Work, Our Contribution, and Notation
Besides the aforementioned literature on weakly supervised learning and label corruption, our work is also connected to a recent line of work on statistical-computational tradeoffs [2–5, 8, 13, 15, 19, 26–28]. In comparison, we quantify the statistical-computational tradeoffs for weakly supervised learning for the first time. Furthermore, our results are built on an oracle computational model in [8] that slightly extends the statistical query model [18], and hence do not hinge on unproven conjectures on computational hardness, such as the planted clique conjecture. Compared with our work, [8] focuses on the computational hardness of learning heterogeneous models, whereas we consider the interplay between supervision and statistical-computational tradeoffs. A similar computational model is used in [27] to study the structural normal mean model and principal component analysis, which exhibit different statistical-computational phase transitions. In addition, our work is related to sparse linear discriminant analysis and two-sample testing of sparse means, which correspond to our special cases of α = 1 and α = 0, respectively; see, e.g., [7, 23] for details. In contrast with their results, ours capture the effect of α on the statistical-computational tradeoffs.
In summary, the contribution of our work is two-fold:
(i) We characterize the computational and statistical boundaries of the weakly supervised binary classification problem for the first time. Compared with existing results for other models, our results do not rely on unproven conjectures.
(ii) Based on our theoretical characterization, we propose the "more supervision, less computation" phenomenon, which is observed for the first time.
Notation. We denote the χ²-divergence between two distributions P and Q by D_{χ²}(P, Q). For two nonnegative sequences a_n, b_n indexed by n, we use a_n = o(b_n) as shorthand for lim_{n→∞} a_n/b_n = 0. We say a_n = Ω(b_n) if a_n/b_n ≥ c for some absolute constant c > 0 when n is sufficiently large. We use a ∨ b and a ∧ b to denote max{a, b} and min{a, b}, respectively. For any positive integer k, we denote {1, 2, . . . , k} by [k]. For v ∈ R^d, we denote by ‖v‖_p the ℓ_p-norm of v. In addition, we denote the operator norm of a matrix A by |||A|||_2.
2 Background
In this section, we formally define the statistical model for weakly supervised binary classification, and then introduce the statistical query model, which connects computational complexity and statistical optimality.
2.1 Problem Setup
Consider the following Gaussian generative model for binary classification. For a random vector X ∈ R^d and a binary random variable Z ∈ {0, 1}, we assume

X | Z = 0 ∼ N(μ_0, Σ),   X | Z = 1 ∼ N(μ_1, Σ),   (2.1)

where P(Z = 0) = P(Z = 1) = 1/2. Under this model, the optimal classifier given by the Bayes rule corresponds to Fisher's linear discriminant analysis (LDA) classifier. In this paper, we focus on the noisy label setting where the true label Z is replaced by a uniformly random label in {0, 1} with probability 1 − α; hence, α characterizes the degree of supervision in the model. Specifically, if α = 1, we observe the true label Z, and the problem belongs to supervised learning; whereas if α = 0, the observed label is completely random and contains no information about the model in (2.1). The latter setting is thus equivalent to learning a Gaussian mixture model, which is an unsupervised problem. In the general setting with noisy labels, we denote the observed label by Y, which is linked to the true label Z via

P(Y = Z) = (1 + α)/2,   P(Y = 1 − Z) = (1 − α)/2.   (2.2)

We consider the hypothesis testing problem of detecting whether μ_0 ≠ μ_1 given n i.i.d. samples {(y_i, x_i)}_{i=1}^n of (Y, X), namely

H_0: μ_0 = μ_1  versus  H_1: μ_0 ≠ μ_1.   (2.3)
We focus on the high dimensional and sparse regime, where d ≫ n and μ_0 − μ_1 is s-sparse, i.e., μ_0 − μ_1 ∈ B_0(s), where B_0(s) := {v ∈ R^d : ‖v‖_0 ≤ s}. Throughout this paper, we use the sample size n to drive the asymptotics. We introduce the shorthand θ := (μ_0, μ_1, α, Σ) to represent the parameters of the aforementioned model. Let P_θ be the joint distribution of (Y, X) under our statistical model with parameter θ, and let P^n_θ be the corresponding product distribution of n i.i.d. samples. We denote the parameter spaces of the null and alternative hypotheses by G_0 and G_1, respectively. For any test function φ : {(y_i, x_i)}_{i=1}^n → {0, 1}, the classical testing risk is defined as the sum of the type-I and type-II errors, namely

R_n(φ; G_0, G_1) := sup_{θ∈G_0} P^n_θ(φ = 1) + sup_{θ∈G_1} P^n_θ(φ = 0).

The minimax risk is defined as the smallest testing risk over all possible test functions, that is,

R*_n(G_0, G_1) := inf_φ R_n(φ; G_0, G_1),   (2.4)

where the infimum is taken over all measurable test functions.
Intuitively, the separation between the two Gaussian components under H_1 and the covariance matrix Σ together determine the hardness of detection. To characterize this dependence, we define the signal-to-noise ratio (SNR) as γ(θ) := (μ_0 − μ_1)^⊤ Σ^{−1} (μ_0 − μ_1). For any nonnegative sequence {γ_n}_{n≥1}, let G_1(γ_n) := {θ : γ(θ) ≥ γ_n} be a sequence of alternative parameter spaces with minimum separation γ_n. The following minimax rate characterizes the information-theoretic limits of the detection problem.

Definition 2.1 (Minimax rate). We say a sequence {γ*_n}_{n≥1} is a minimax rate if
• for any sequence {γ_n}_{n≥1} satisfying γ_n = o(γ*_n), we have lim_{n→∞} R*_n[G_0, G_1(γ_n)] = 1;
• for any sequence {γ_n}_{n≥1} satisfying γ_n = Ω(γ*_n), we have lim_{n→∞} R*_n[G_0, G_1(γ_n)] = 0.

The minimax rate in Definition 2.1 characterizes the statistical difficulty of the testing problem. However, it fails to shed light on the computational efficiency of possible testing algorithms, because this notion places no computational restriction on the test functions: the minimax risk in (2.4) might be attained only by test functions with exponential computational complexity. This limitation of Definition 2.1 motivates us to study statistical limits under computational constraints.
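To illustrate the definitions above, the following sketch (our own; it reuses sample_weakly_supervised from the earlier snippet and makes no optimality claim) Monte-Carlo estimates the sum of type-I and type-II errors of a naive test at one fixed pair of parameter values, rather than the suprema over G_0 and G_1:

```python
# A sketch (ours) illustrating the testing risk: type-I error under the
# null (signal = 0) plus type-II error under one alternative (signal > 0).
import numpy as np

def naive_test(x, y, threshold):
    """Reject H0 when the label-conditional mean difference is large."""
    diff = x[y == 1].mean(axis=0) - x[y == 0].mean(axis=0)
    return np.linalg.norm(diff) > threshold

def empirical_risk(threshold, n=500, d=20, s=4, alpha=0.8, reps=200):
    rng = np.random.default_rng(1)
    null = [naive_test(*sample_weakly_supervised(n, d, s, alpha, signal=0.0,
                                                 rng=rng)[:2], threshold)
            for _ in range(reps)]
    alt = [naive_test(*sample_weakly_supervised(n, d, s, alpha, signal=2.0,
                                                rng=rng)[:2], threshold)
           for _ in range(reps)]
    return np.mean(null) + 1 - np.mean(alt)   # type-I + type-II error
```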
2.2 Computational Model
Statistical query models [8–11, 18, 27] capture computational complexity by characterizing the total number of rounds for which an algorithm interacts with the data. In this paper, we consider the following statistical query model, which admits bounded query functions but allows the responses to the queries to be unbounded.

Definition 2.2 (Statistical query model). In the statistical query model, an algorithm 𝒜 is allowed to query an oracle for T rounds, but not to access the data {(y_i, x_i)}_{i=1}^n directly. At each round, 𝒜 queries the oracle r with a query function q ∈ Q_𝒜, in which Q_𝒜 ⊆ {q : {0, 1} × R^d → [−M, M]} denotes the query space of 𝒜. The oracle r outputs a realization of a random variable Z_q ∈ R satisfying

P( ⋂_{q∈Q_𝒜} { |Z_q − E[q(Y, X)]| ≤ τ_q } ) ≥ 1 − 2ξ, where
τ_q = [η(Q_𝒜) + log(1/ξ)] · M/n ∨ √( 2[η(Q_𝒜) + log(1/ξ)] · (M² − {E[q(Y, X)]}²)/n ).   (2.5)

Here τ_q > 0 is the tolerance parameter and ξ ∈ [0, 1) is the tail probability. The quantity η(Q_𝒜) ≥ 0 in τ_q measures the capacity of Q_𝒜 on a logarithmic scale; e.g., for countable Q_𝒜, η(Q_𝒜) = log(|Q_𝒜|). The number T is defined as the oracle complexity. We denote by R[ξ, n, T, η(Q_𝒜)] the set of oracles satisfying (2.5), and by A(T) the family of algorithms that query an oracle no more than T rounds.
This version of the statistical query model is used in [8], and it reduces to the VSTAT model proposed in [9–11] via the transformation q̃(y, x) = q(y, x)/(2M) + 1/2 for any q ∈ Q_𝒜. The computational model in Definition 2.2 enables us to handle query functions that are bounded by an unknown and fixed number M. Note that, by incorporating the tail probability ξ, the response Z_q is allowed to be unbounded. To understand the intuition behind Definition 2.2, we remark that (2.5) resembles Bernstein's inequality for bounded random variables [25]:

P( |n^{−1} Σ_{i=1}^n q(Y_i, X_i) − E[q(Y, X)]| ≥ t ) ≤ 2 exp( −n t² / (2 Var[q(Y, X)] + M t) ).   (2.6)

We first replace Var[q(Y, X)] by its upper bound M² − {E[q(Y, X)]}², which is tight when q takes values in {−M, M}. Inequality (2.5) is then obtained by replacing n^{−1} Σ_{i=1}^n q(Y_i, X_i) in (2.6) with Z_q and bounding the supremum over the query space Q_𝒜. In the definition of τ_q in (2.5), we incorporate the effect of uniform concentration over the query space Q_𝒜 by adding the quantity η(Q_𝒜), which measures the capacity of Q_𝒜. In addition, under Definition 2.2, the algorithm 𝒜 does not interact directly with the data. This restriction reflects the fact that in statistical problems, the effectiveness of an algorithm depends only on global statistical properties, not on the information of individual data points. For instance, algorithms that rely only on the convergence of the empirical distribution to the population distribution are contained in the statistical query model, whereas an algorithm that hinges on the first data point (y_1, x_1) is not allowed. This restriction captures a vast family of algorithms in statistics and machine learning, including gradient methods for likelihood maximization, matrix factorization algorithms, expectation-maximization algorithms, and sampling algorithms [9].
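To make Definition 2.2 concrete, the sketch below (our own illustration; class and variable names are not from the paper) implements the simplest oracle consistent with (2.5): one that answers each query with the empirical mean of q over a hidden dataset, which concentrates around E[q(Y, X)] by Bernstein's inequality. An adversarial oracle may instead return any value within tolerance τ_q of E[q(Y, X)].

```python
# A sketch (ours) of a statistical query oracle per Definition 2.2.
import numpy as np

class StatisticalQueryOracle:
    def __init__(self, x, y):
        self.x, self.y = x, y       # hidden data; the algorithm never sees it

    def query(self, q):
        """q maps a label in {0, 1} and a vector in R^d to [-M, M]."""
        vals = np.array([q(yi, xi) for yi, xi in zip(self.y, self.x)])
        return vals.mean()          # realization of Z_q

# Example: the algorithm learns a (truncated) label-covariate correlation
# E[clip(X_1 * (2Y - 1))] without ever touching individual data points.
rng = np.random.default_rng(0)
oracle = StatisticalQueryOracle(rng.standard_normal((500, 10)),
                                rng.integers(0, 2, size=500))
z_q = oracle.query(lambda y, x: np.clip(x[0] * (2 * y - 1), -3, 3))
print(z_q)
```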
Based on the statistical query model, we study the minimax risk under oracle complexity constraints. For the testing problem (2.3), let A(T_n) be the class of testing algorithms under the statistical query model with query complexity no more than T_n, where {T_n}_{n≥1} is a sequence of positive integers depending on the sample size n. For any 𝒜 ∈ A(T_n) and any oracle r ∈ R[ξ, n, T_n, η(Q_𝒜)] that responds to 𝒜, let H(𝒜, r) be the set of test functions that deterministically depend on 𝒜's queries to the oracle r and the corresponding responses. We use P̄_θ to denote the distribution of the random variables returned by the oracle r when the model parameter is θ.

For a general hypothesis testing problem, namely H_0: θ ∈ G_0 versus H_1: θ ∈ G_1, the minimax testing risk with respect to an algorithm 𝒜 and a statistical oracle r ∈ R[ξ, n, T_n, η(Q_𝒜)] is defined as

R̄_n(G_0, G_1; 𝒜, r) := inf_{φ∈H(𝒜,r)} [ sup_{θ∈G_0} P̄_θ(φ = 1) + sup_{θ∈G_1} P̄_θ(φ = 0) ].   (2.7)

Compared with the classical minimax risk in (2.4), the new notion in (2.7) incorporates the computational budget via the oracle complexity. Specifically, we only consider test functions obtained by an algorithm that makes at most T_n queries to a statistical oracle. If T_n is polynomial in the dimension d, then (2.7) characterizes the statistical optimality of computationally efficient algorithms. This motivates us to define the computationally tractable minimax rate, which contrasts with Definition 2.1.

Definition 2.3 (Computationally tractable minimax rate). Let G_1(γ_n) := {θ : γ(θ) ≥ γ_n} be a sequence of model spaces with minimum separation γ_n, where γ(θ) is the SNR. A sequence {γ̄_n}_{n≥1} is called a computationally tractable minimax rate if
• for any sequence {γ_n}_{n≥1} satisfying γ_n = o(γ̄_n), any constant ν > 0, and any 𝒜 ∈ A(d^ν), there exists an oracle r ∈ R[ξ, n, T_n, η(Q_𝒜)] such that lim_{n→∞} R̄_n[G_0, G_1(γ_n); 𝒜, r] = 1;
• for any sequence {γ_n}_{n≥1} satisfying γ_n = Ω(γ̄_n), there exist a constant ν > 0 and an algorithm 𝒜 ∈ A(d^ν) such that, for any oracle r ∈ R[ξ, n, T_n, η(Q_𝒜)], we have lim_{n→∞} R̄_n[G_0, G_1(γ_n); 𝒜, r] = 0.
3 Main Results
Throughout this paper, we assume that the covariance matrix Σ in (2.1) is known. Specifically, for some positive definite Σ ∈ R^{d×d}, the parameter spaces of the null and alternative hypotheses are defined as

G_0(Σ) := {θ = (μ, μ, α, Σ) : μ ∈ R^d},   (3.1)
G_1(Σ; γ_n) := {θ = (μ_0, μ_1, α, Σ) : μ_0, μ_1 ∈ R^d, μ_0 − μ_1 ∈ B_0(s), γ(θ) ≥ γ_n}.   (3.2)

Accordingly, the testing problem of detecting whether μ_0 ≠ μ_1 is to distinguish

H_0: θ ∈ G_0(Σ)  versus  H_1: θ ∈ G_1(Σ; γ_n).   (3.3)
In §3.1, we present the minimax rate of the detection problem from an information-theoretic perspective. In §3.2, under the statistical query model introduced in §2.2, we provide a computational lower bound and a nearly matching upper bound that is achieved by an efficient testing algorithm.
3.1 Information-theoretic Limits
We now characterize the minimax rate given in Definition 2.1. For the parameter spaces (3.1) and (3.2) with known Σ, we show that in the highly sparse setting where s = o(√d), we have

γ*_n = √(s log d/n) ∧ (1/α² · s log d/n).   (3.4)
To prove (3.4), we first present a lower bound showing that the hypothesis testing problem in (3.3) is impossible if γ_n = o(γ*_n).

Theorem 3.1. For the hypothesis testing problem in (3.3) with known Σ, assume that there exists a small constant δ > 0 such that s = o(d^{1/2−δ}). Let γ*_n be defined in (3.4). For any sequence {γ_n}_{n≥1} such that γ_n = o(γ*_n), any hypothesis test is asymptotically powerless, namely,

lim_{n→∞} R*_n[G_0(Σ), G_1(Σ; γ_n)] = 1.
By Theorem 3.1, we observe a phase transition in the SNR necessary for powerful detection as α decreases from one to zero. Starting from the rate s log d/n in the supervised setting where α = 1, the required SNR gradually increases as the label quality decreases. Finally, when α reaches zero, which corresponds to the unsupervised setting, powerful detection requires the SNR to be Ω(√(s log d/n)). It is worth noting that when α = (s log d/n)^{1/4}, we still have (n³ s log d)^{1/4} uncorrupted labels. However, our lower bound (along with the upper bound shown in Theorem 3.2) indicates that the information contained in these uncorrupted labels is buried in the noise, and cannot essentially improve the detection quality compared with the unsupervised setting.
Next we establish a matching upper bound for the detection problem in (3.3). We denote the condition number of the covariance matrix Σ by κ, i.e., κ := λ_max(Σ)/λ_min(Σ), where λ_max(Σ) and λ_min(Σ) are the largest and smallest eigenvalues of Σ, respectively. Note that, marginally, Y is uniformly distributed over {0, 1}. For ease of presentation, we assume that the sample size is 2n and that each class contains exactly n data points; we can always discard some samples from the larger class to equalize the two class sizes, and by the law of large numbers this trick does not affect the order-wise analysis of the sample complexity.
Given 2n i.i.d. samples {(y_i, x_i)}_{i=1}^{2n} of (Y, X) ∈ {0, 1} × R^d, we define

w_i = Σ^{−1/2}(x_{2i} − x_{2i−1}), for all i ∈ [n].   (3.5)

In addition, we split the dataset {(y_i, x_i)}_{i=1}^{2n} into two disjoint parts {(0, x_i^{(0)})}_{i=1}^n and {(1, x_i^{(1)})}_{i=1}^n, and define

u_i = x_i^{(1)} − x_i^{(0)}, for all i ∈ [n].   (3.6)

We note that computing sample differences in (3.5) and (3.6) is critical for our problem: since we focus on detecting the difference between μ_0 and μ_1, computing differences avoids estimating E_{P_θ}(X), which might be dense. For any integer s ∈ [d], we define B_2(s) := B_0(s) ∩ S^{d−1} as the set of s-sparse vectors on the unit sphere in R^d. With {w_i}_{i=1}^n and {u_i}_{i=1}^n, we introduce two test functions

φ_1 := 1{ sup_{v∈B_2(s)} (1/n) Σ_{i=1}^n (v^⊤ Σ^{−1} w_i)² / (2 v^⊤ Σ^{−1} v) ≥ 1 + τ_1 },   (3.7)

φ_2 := 1{ sup_{v∈B_2(1)} (1/n) Σ_{i=1}^n ⟨v, diag(Σ)^{−1/2} u_i⟩ ≥ τ_2 },   (3.8)

where τ_1, τ_2 > 0 are algorithmic parameters that will be specified later. To provide some intuition,
we consider the case where Σ = I. Test function φ_1 seeks a sparse direction that explains the most variance of the w_i; such a test is therefore closely related to the sparse principal component detection problem [3]. Test function φ_2 simply selects the coordinate of n^{−1} Σ_{i=1}^n u_i with the largest magnitude and compares it with τ_2; this test is closely related to detecting a sparse normal mean in high dimensions [16]. Based on these two ingredients, we construct our final test function φ as φ = φ_1 ∨ φ_2, i.e., if either φ_1 or φ_2 rejects, then φ rejects the null. The following theorem establishes a sufficient condition for the test function φ to be asymptotically powerful.
Theorem 3.2. Consider the testing problem (3.3), where Σ is known and has condition number κ. Let the test functions φ_1 and φ_2 be defined in (3.7) and (3.8) with parameters τ_1 and τ_2 given by

τ_1 = κ √(s log(ed/s)/n),   τ_2 = √(8 log d/n).

We define the ultimate test function as φ = φ_1 ∨ φ_2. We assume that s ≤ C_s √d for some absolute constant C_s and that n ≥ 64 · s log(ed/s). Then if

γ_n ≥ C* · κ · [√(s log(ed/s)/n) ∧ (1/α² · s log d/n)],   (3.9)

where C* is an absolute constant, the test function φ is asymptotically powerful. Specifically, we have

sup_{θ∈G_0(Σ)} P^n_θ(φ = 1) + sup_{θ∈G_1(Σ;γ_n)} P^n_θ(φ = 0) ≤ 20/d.   (3.10)
Theorem 3.2 provides a non-asymptotic guarantee. As n goes to infinity, (3.10) implies that the test function φ is asymptotically powerful. When s = o(√d) and κ is a constant, (3.9) yields γ_n = Ω[√(s log d/n) ∧ (1/α² · s log d/n)], which matches the lower bound given in Theorem 3.1. Thus we conclude that γ*_n defined in (3.4) is the minimax rate of the testing problem in (3.3). We remark that when s = Θ(d) and κ = 1, i.e., in the standard (low-dimensional) setting of two-sample testing, the bound provided in (3.9) is sub-optimal: [22] shows that the SNR rate √d/n is sufficient for asymptotically powerful detection when n = Ω(√d). It is thus worth noting that we focus on the highly sparse setting s = o(√d) and provide a sharp minimax rate for this regime. In the definition of φ_1 in (3.7), we search over the set B_2(s). Since B_2(s) contains (d choose s) distinct support sets, computing φ_1 requires exponential running time.
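Although φ_1 is intractable in general, both tests are easy to state in code. The sketch below (our own, specialized to Σ = I and small d so that enumerating the (d choose s) supports is feasible) implements (3.7) and (3.8); function names and the brute-force strategy are ours:

```python
# A sketch (ours) of the tests in (3.7)-(3.8) for Sigma = I and small d.
from itertools import combinations
import numpy as np

def phi_1(w, s, tau_1):
    """Reject if some s-sparse direction explains too much variance of w_i."""
    n, d = w.shape
    best = 0.0
    for support in combinations(range(d), s):   # (d choose s) subsets
        cols = list(support)
        second_moment = w[:, cols].T @ w[:, cols] / n
        best = max(best, np.linalg.eigvalsh(second_moment)[-1])
    return best / 2.0 >= 1.0 + tau_1            # v^T v = 1 when Sigma = I

def phi_2(u, tau_2):
    """Reject if some coordinate of the mean of u_i is large (v = ±e_j)."""
    return np.max(np.abs(u.mean(axis=0))) >= tau_2
```

As a sanity check on the normalization: under H_0 with Σ = I, each w_i is N(0, 2I), so a unit-direction projection has second moment 2, and the φ_1 statistic concentrates around 1, which is why the test compares against 1 + τ_1.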
3.2 Computational Limits
In this section, we characterize the computationally tractable minimax rate γ̄_n given in Definition 2.3. Moreover, we focus on the setting where Σ is known a priori and the parameter spaces for the null and alternative hypotheses are defined in (3.1) and (3.2), respectively. The main result is that, in the highly sparse setting where s = o(√d), we have

γ̄_n = √(s²/n) ∧ (1/α² · s log d/n).   (3.11)
We first present the lower bound in the next result.

Theorem 3.3. For the testing problem in (3.3) with Σ known a priori, we make the same assumptions as in Theorem 3.1. For any sequence {γ_n}_{n≥1} such that

γ_n = o( γ*_n ∨ [√(s²/n) ∧ (1/α² · s/n)] ),   (3.12)

where γ*_n is defined in (3.4), any computationally tractable test is asymptotically powerless under the statistical query model. That is, for any constant ν > 0 and any 𝒜 ∈ A(d^ν), there exists an oracle r ∈ R[ξ, n, T_n, η(Q_𝒜)] such that lim_{n→∞} R̄_n[G_0(Σ), G_1(Σ; γ_n); 𝒜, r] = 1.
We remark that the lower bound in (3.12) differs from γ̄_n in (3.11) by a logarithmic term when √(1/n) ≤ α² ≤ √(s log d/n). We expect this gap to be eliminated by a more delicate analysis under the statistical query model.
Putting Theorems 3.1 and 3.3 together, we can now describe the "more supervision, less computation" phenomenon as follows.
(i) When 0 ≤ α ≤ (log² d/n)^{1/4}, the computational lower bound implies that the uncorrupted labels are unable to improve the quality of computationally tractable detection compared with the unsupervised setting. In addition, in this region, the gap between γ*_n and γ̄_n remains the same.
(ii) When (log² d/n)^{1/4} < α ≤ (s log d/n)^{1/4}, the information-theoretic lower bound shows that the uncorrupted labels cannot improve the quality of detection compared with the unsupervised setting. However, more uncorrupted labels do improve the statistical performance of computationally tractable hypothesis tests by shrinking the gap between γ*_n and γ̄_n.
(iii) When (s log d/n)^{1/4} < α ≤ 1, having more uncorrupted labels improves both the statistical optimality and the computational efficiency. Specifically, in this case, the gap between γ*_n and γ̄_n vanishes and we have γ*_n = γ̄_n = 1/α² · s log d/n.
We now derive a nearly matching upper bound under the statistical query model, which, together with Theorem 3.3, establishes the computationally tractable minimax rate. We construct a computationally efficient testing procedure that combines two test functions, which yield the two parts of γ̄_n respectively. Similar to φ_1 defined in (3.7), the first test function discards the label information, and thus works even in the purely unsupervised setting where α = 0. For j ∈ [d], we denote by σ_j the j-th diagonal element of Σ. Under the statistical query model, we consider the 2d query functions

q_j(y, x) := (x_j/√σ_j) · 1{|x_j/√σ_j| ≤ R√(log d)},   (3.13)
q̃_j(y, x) := (x_j²/σ_j − 1) · 1{|x_j/√σ_j| ≤ R√(log d)}, for all j ∈ [d],   (3.14)

where R > 0 is an absolute constant. Here we truncate the query functions to obtain bounded queries, as required by the statistical query model in Definition 2.2. We denote by z_{q_j} and z_{q̃_j} the realizations of the random variables output by the statistical oracle for the query functions q_j and q̃_j, respectively. As for the second test function, similar to (3.8), we consider

q^v(y, x) := (2y − 1) · v^⊤ diag(Σ)^{−1/2} x · 1{|v^⊤ diag(Σ)^{−1/2} x| ≤ R√(log d)}   (3.15)

for all v ∈ B_2(1). We denote by z_{q^v} the output of the statistical oracle corresponding to the query function q^v. With these 4d query functions, we introduce the test functions

φ_1 := 1{ sup_{j∈[d]} (z_{q̃_j} − z_{q_j}²) ≥ C τ̄_1 },   φ_2 := 1{ sup_{v∈B_2(1)} z_{q^v} ≥ 2 τ̄_2 },   (3.16)

where τ̄_1 and τ̄_2 are positive parameters that will be specified later and C is an absolute constant.
Theorem 3.4. For the test functions φ_1 and φ_2 defined in (3.16), we define the ultimate test function as φ = φ_1 ∨ φ_2. We set

τ̄_1 = R² log d · √(log(4d/ξ)/n),   τ̄_2 = R log d · √(log(4d/ξ)/n),   (3.17)

where ξ = o(1). For the hypothesis testing problem in (3.3), we further assume that ‖μ_0‖_∞ ∨ ‖μ_1‖_∞ ≤ C_0 for some constant C_0 > 0. Under the assumption that

sup_{j∈[d]} (μ_{0,j} − μ_{1,j})²/σ_j = Ω[ (1/α² · log² d · log(d/ξ)/n) ∧ (log d · √(log(d/ξ)/n)) ],   (3.18)

the risk of φ satisfies R̄_n(φ) := sup_{θ∈G_0(Σ)} P̄_θ(φ = 1) + sup_{θ∈G_1(Σ;γ_n)} P̄_θ(φ = 0) ≤ 5ξ. Here we denote by μ_{0,j} and μ_{1,j} the j-th entries of μ_0 and μ_1, respectively.
If we set the tail probability of the statistical query model to ξ = 1/d, then (3.18) shows that φ is asymptotically powerful if sup_{j∈[d]} (μ_{0,j} − μ_{1,j})²/σ_j = Ω[(1/α² · log³ d/n) ∧ (log³ d/n)^{1/2}]. When the energy of μ_0 − μ_1 is spread over its support, ‖μ_0 − μ_1‖_∞ and ‖μ_0 − μ_1‖_2/√s are close. Under the assumption that the condition number κ of Σ is a constant, (3.18) is implied by

γ_n = Ω[(s² log³ d/n)^{1/2} ∧ (1/α² · s log³ d/n)].

Compared with Theorem 3.3, this upper bound matches the computational lower bound up to logarithmic factors, and γ̄_n lies between √(s²/n) ∧ (1/α² · s log d/n) and (s² log³ d/n)^{1/2} ∧ (1/α² · s log³ d/n). Note that the truncation of the query functions in (3.13) and (3.14) yields the additional logarithmic terms, which could be reduced to (s² log d/n)^{1/2} ∧ (1/α² · s log d/n) using a more delicate analysis. Moreover, the test function φ_1 is essentially a diagonal thresholding algorithm performed on the covariance matrix of X. The work in [6] provides a more delicate analysis of this algorithm, which establishes the √(s²/n) rate; their algorithm can also be formulated in the statistical query model, and we use the simpler version in (3.16) for ease of presentation. Therefore, with more sophisticated proof techniques, it can be shown that √(s²/n) ∧ (1/α² · s log d/n) is the critical threshold for asymptotically powerful detection with computational efficiency.
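As a concrete companion to (3.13)–(3.16), the sketch below (our own, assuming diag(Σ) = I and reusing the StatisticalQueryOracle class from §2.2; the constant C and the thresholds follow (3.17) only up to our choices) runs the 4d queries and combines the two tests:

```python
# A sketch (ours) of the query-based tests in (3.16), with diag(Sigma) = I.
import numpy as np

def tractable_test(oracle, d, n, R=3.0, xi=None):
    xi = xi if xi is not None else 1.0 / d
    cut = R * np.sqrt(np.log(d))                      # truncation level
    tau1 = R**2 * np.log(d) * np.sqrt(np.log(4 * d / xi) / n)
    tau2 = R * np.log(d) * np.sqrt(np.log(4 * d / xi) / n)
    stat1 = stat2 = -np.inf
    for j in range(d):
        zq = oracle.query(lambda y, x, j=j: x[j] * (abs(x[j]) <= cut))
        zqt = oracle.query(lambda y, x, j=j: (x[j]**2 - 1) * (abs(x[j]) <= cut))
        stat1 = max(stat1, zqt - zq**2)               # variance inflation
        zv = oracle.query(lambda y, x, j=j:
                          (2 * y - 1) * x[j] * (abs(x[j]) <= cut))
        stat2 = max(stat2, abs(zv))                   # sup over v = ±e_j
    return stat1 >= tau1 or stat2 >= 2 * tau2         # phi = phi_1 or phi_2
```

Each query is bounded by construction, so it is admissible under Definition 2.2, and the whole procedure issues only 3d queries, i.e., it is polynomial in d.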
3.3 Implication for Estimation
Our phase transition for the detection problem directly implies statistical-computational tradeoffs for the estimation problem. We consider estimating the parameter Δμ = μ_0 − μ_1 of the binary classification model in (2.1) and (2.2), where Δμ is s-sparse and Σ is known a priori. We assume that the signal-to-noise ratio satisfies γ(θ) = Δμ^⊤ Σ^{−1} Δμ ≥ γ_n with γ_n = o(γ̄_n). For any constant ν > 0 and any 𝒜 ∈ A(T) with T = O(d^ν), suppose we obtain an estimator Δμ̂ of Δμ by the algorithm 𝒜 under the statistical query model. If Δμ̂ converges to Δμ in the sense that

(Δμ̂ − Δμ)^⊤ Σ^{−1} (Δμ̂ − Δμ) = o[γ_n²/γ(θ)],

then we have |Δμ̂^⊤ Σ^{−1} Δμ̂ − Δμ^⊤ Σ^{−1} Δμ| = o(γ_n). Thus the test function φ = 1{Δμ̂^⊤ Σ^{−1} Δμ̂ ≥ γ_n/2} is asymptotically powerful, which contradicts the computational lower bound in Theorem 3.3. Therefore, there exists a constant C such that (Δμ̂ − Δμ)^⊤ Σ^{−1} (Δμ̂ − Δμ) ≥ C γ_n²/γ(θ) for any estimator Δμ̂ constructed from a polynomial number of queries.
Acknowledgments
We would like to thank Vitaly Feldman for valuable discussions.
References
[1] Angluin, D. and Laird, P. (1988). Learning from noisy examples. Machine Learning, 2, 343–370.
[2] Berthet, Q. and Rigollet, P. (2013). Computational lower bounds for sparse PCA. In Conference on Learning Theory.
[3] Berthet, Q. and Rigollet, P. (2013). Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41, 1780–1815.
[4] Chandrasekaran, V. and Jordan, M. I. (2013). Computational and statistical tradeoffs via convex relaxation. Proceedings of the National Academy of Sciences, 110, 1181–1190.
[5] Chen, Y. and Xu, J. (2014). Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. arXiv preprint arXiv:1402.1267.
[6] Deshpande, Y. and Montanari, A. (2014). Sparse PCA via covariance thresholding. In Advances in Neural Information Processing Systems.
[7] Fan, J., Feng, Y. and Tong, X. (2012). A road to classification in high dimensional space: The regularized optimal affine discriminant. Journal of the Royal Statistical Society: Series B, 74, 745–771.
[8] Fan, J., Liu, H., Wang, Z. and Yang, Z. (2016). Curse of heterogeneity: Computational barriers in sparse mixture models and phase retrieval. Manuscript.
[9] Feldman, V., Grigorescu, E., Reyzin, L., Vempala, S. and Xiao, Y. (2013). Statistical algorithms and a lower bound for detecting planted cliques. In ACM Symposium on Theory of Computing.
[10] Feldman, V., Guzman, C. and Vempala, S. (2015). Statistical query algorithms for stochastic convex optimization. arXiv preprint arXiv:1512.09170.
[11] Feldman, V., Perkins, W. and Vempala, S. (2015). On the complexity of random satisfiability problems with planted solutions. In ACM Symposium on Theory of Computing.
[12] Frénay, B. and Verleysen, M. (2014). Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems, 25, 845–869.
[13] Gao, C., Ma, Z. and Zhou, H. H. (2014). Sparse CCA: Adaptive estimation and computational barriers. arXiv preprint arXiv:1409.8565.
[14] García-García, D. and Williamson, R. C. (2011). Degrees of supervision. In Advances in Neural Information Processing Systems.
[15] Hajek, B., Wu, Y. and Xu, J. (2014). Computational lower bounds for community detection on random graphs. arXiv preprint arXiv:1406.6625.
[16] Johnstone, I. M. (1994). On minimax estimation of a sparse normal mean vector. The Annals of Statistics, 22, 271–289.
[17] Joulin, A. and Bach, F. R. (2012). A convex relaxation for weakly supervised classifiers. In International Conference on Machine Learning.
[18] Kearns, M. (1993). Efficient noise-tolerant learning from statistical queries. In ACM Symposium on Theory of Computing.
[19] Ma, Z. and Wu, Y. (2014). Computational barriers in minimax submatrix detection. The Annals of Statistics, 43, 1089–1116.
[20] Nettleton, D. F., Orriols-Puig, A. and Fornells, A. (2010). A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial Intelligence Review, 33, 275–306.
[21] Patrini, G., Nielsen, F., Nock, R. and Carioni, M. (2016). Loss factorization, weakly supervised learning and label noise robustness. arXiv preprint arXiv:1602.02450.
[22] Ramdas, A., Singh, A. and Wasserman, L. (2016). Classification accuracy as a proxy for two sample testing. arXiv preprint arXiv:1602.02210.
[23] Tony Cai, T., Liu, W. and Xia, Y. (2014). Two-sample test of high dimensional means under dependence. Journal of the Royal Statistical Society: Series B, 76, 349–372.
[24] Tsybakov, A. B. (2008). Introduction to Nonparametric Estimation. Springer.
[25] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
[26] Wang, T., Berthet, Q. and Samworth, R. J. (2014). Statistical and computational trade-offs in estimation of sparse principal components. arXiv preprint arXiv:1408.5369.
[27] Wang, Z., Gu, Q. and Liu, H. (2015). Sharp computational-statistical phase transitions via oracle computational model. arXiv preprint arXiv:1512.08861.
[28] Zhang, Y., Wainwright, M. J. and Jordan, M. I. (2014). Lower bounds on the performance of polynomial-time algorithms for sparse linear regression. In Conference on Learning Theory.
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
Anh Nguyen
[email protected]
Jason Yosinski
[email protected]
Alexey Dosovitskiy
[email protected]
Thomas Brox
[email protected]
Jeff Clune
[email protected]
Abstract
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many
pattern recognition tasks, especially vision classification problems. Understanding
the inner workings of such computational brains is both fascinating basic science
that is interesting in its own right (similar to why we study the human brain) and
will enable researchers to further improve DNNs. One path to understanding
how a neural network functions internally is to study what each of its neurons
has learned to detect. One such method is called activation maximization (AM),
which synthesizes an input (e.g. an image) that highly activates a neuron. Here
we dramatically improve the qualitative state of the art of activation maximization
by harnessing a powerful, learned prior: a deep generator network (DGN). The
algorithm (1) generates qualitatively state-of-the-art synthetic images that look
almost real, (2) reveals the features learned by each neuron in an interpretable
way, (3) generalizes well to new datasets and somewhat well to different network
architectures without requiring the prior to be relearned, and (4) can be considered
as a high-quality generative method (in this case, by generating novel, creative,
interesting, recognizable images).
1 Introduction and Related Work
Understanding how the human brain works has been a long-standing quest in human history. Neuroscientists have discovered neurons in human brains that selectively fire in response to specific, abstract
concepts such as Halle Berry or Bill Clinton, shedding light on the question of whether learned neural
codes are local vs. distributed [1]. These neurons were identified by finding the preferred stimuli
(here, images) that highly excite a specific neuron, which was accomplished by showing subjects
many different images while recording a target neuron's activation. Such neurons are multifaceted: for example, the "Halle Berry neuron" responds to very different stimuli related to the actress, from pictures of her face, to pictures of her in costume, to the word "Halle Berry" printed as text [1].
Inspired by such neuroscience research, we are interested in shedding light on the inner workings
of DNNs by finding the preferred inputs for each of their neurons. As the neuroscientists did, one
could simply show the network a large set of images and record a set of images that highly activate a
neuron [2]. However, that method has disadvantages vs. synthesizing preferred stimuli: 1) it requires
a distribution of images that are similar to those used to train the network, which may not be known
(e.g. when probing a trained network when one does not know which data were used to train it); 2)
even in such a dataset, many informative images that would activate the neuron may not exist because
the image space is vast [3]; 3) with real images, it is unclear which of their features a neuron has
learned: for example, if a neuron is activated by a picture of a lawn mower on grass, it is unclear if it
Figure 1: Images synthesized from scratch to highly activate output neurons in the CaffeNet deep
neural network, which has learned to classify different types of ImageNet images.
"cares about" the grass, but if an image synthesized to highly activate the lawn mower neuron contains
grass (as in Fig. 1), we can be more confident the neuron has learned to pay attention to that context.
Synthesizing preferred stimuli is called activation maximization [4–8, 3, 9]. It starts from a random
image and iteratively calculates via backpropagation how the color of each pixel in the image should
be changed to increase the activation of a neuron. Previous studies have shown that doing so without
biasing the images produced creates unrealistic, uninterpretable images [5, 3], because the set of all
possible images is so vast that it is possible to produce "fooling" images that excite a neuron but do not resemble the natural images that the neuron has learned to detect. Instead, we must constrain optimization to generate only synthetic images that resemble natural images [6]. That is accomplished by incorporating natural image priors into the objective function, which has been shown to substantially improve the recognizability of the generated images [7, 6, 9]. Many hand-designed natural image priors have been experimentally shown to improve image quality, such as: Gaussian blur [7], α-norm [5, 7, 8], total variation [6, 9], jitter [10, 6, 9], data-driven patch priors [8],
center-bias regularization [9], and initializing from mean images [9]. Instead of hand-designing
such priors, in this paper, we propose to use a superior, learned natural image prior [11] akin to a
generative model of images. This prior allows us to synthesize highly human-interpretable preferred
stimuli, giving additional insight into the inner functioning of networks. While there is no way
to rigorously measure human-interpretability, a problem that also makes quantitatively assessing
generative models near-impossible [12], we should not cease scientific work on improving qualitative
results simply because humans must subjectively evaluate them.
Learning generative models of natural images has been a long-standing goal in machine learning [13].
Many types of neural network models exist, including probabilistic [13], auto-encoder [13], stochastic
[14] and recurrent networks [13]. However, they are typically limited to relatively low-dimensional
images and narrowly focused datasets. Recently, advances in network architectures and training
methods enabled the generation of high-dimensional realistic images [15, 16, 11]. Most of these works
are based on Generative Adversarial Networks (GAN) [17], which trains two models simultaneously:
a generative model G to capture the data distribution, and a discriminative model D to estimates
the probability that a sample came from the training data rather than G. The training objective
for G is to maximize the probability of D making a mistake. Recently Dosovitskiy and Brox [11]
trained networks capable of generating images from highly compressed feature representations, by
combining an auto-encoder-style approach with GAN?s adversarial training. We harness these image
generator networks as priors to produce synthetic preferred images. These generator networks are
close to, but not true, generative models because they are trained without imposing any prior on the
hidden distribution as in variational auto-encoders [14] or GANs [17], and without the addition of
noise as in denoising auto-encoders [18]. Thus, there is no natural sampling procedure nor an implicit
density function over the data space.
The image generator DNN that we use as a prior is trained to take in a code (e.g. vector of scalars)
and output a synthetic image that looks as close to real images from the ImageNet dataset [19] as
possible. To produce a preferred input for a neuron in a given DNN that we want to visualize, we
optimize in the input code space of the image generator DNN so that it outputs an image that activates
[Figure 2 here. The schematic shows a hidden code (red bar) feeding a deep generator network (the prior), built from fully connected and upconvolutional layers u1–u9, whose output image (e.g. of a candle, convertible, or banana) is passed through the convolutional layers c1–c5 and fully connected layers fc6–fc8 of the DNN being visualized; forward and backward passes run through both networks.]
Figure 2: To synthesize a preferred input for a target neuron h (e.g. the "candle" class output neuron), we optimize the hidden code input (red bar) of a deep image generator network (DGN) to produce an image that highly activates h. In the example shown, the DGN is a network trained to invert the feature representations of layer fc6 of CaffeNet. The target DNN being visualized can be a different network (with a different architecture and/or trained on different data). The gradient information (blue dashed line) flows from the layer containing h in the target DNN (here, layer fc8) all the way through the image back to the input code layer of the DGN. Note that both the DGN and the target DNN being visualized have fixed parameters; optimization changes only the DGN input code (red).
the neuron of interest (Fig. 2). Our method restricts the search to only the set of images that can
be drawn by the prior, which provides a strong bias toward realistic visualizations. Because our algorithm uses a deep generator network to perform activation maximization, we call it DGN-AM.
2 Methods
Networks that we visualize. We demonstrate our visualization method on a variety of different
networks. For reproducibility, we use pretrained models freely available in Caffe or the Caffe
Model Zoo [20]: CaffeNet [20], GoogleNet [21], and ResNet [22]. They represent different convnet
architectures trained on the ~1.3-million-image 2012 ImageNet dataset [23, 19]. Our default DNN is
CaffeNet [20], a minor variant of the common AlexNet architecture [24] with similar performance
[20]. The last three fully connected layers of the 8-layer CaffeNet are called fc6, fc7 and fc8 (Fig. 2).
fc8 is the last layer (pre softmax) and has 1000 outputs, one for each ImageNet class.
Image generator network. We denote by Φ the DNN we want to visualize. Unlike previous works, which directly optimized an image so that it highly activates a neuron h in Φ and optionally satisfies hand-designed priors embedded in the cost function [5, 7, 9, 6], here we optimize the input code of an image generator network G such that G outputs an image that highly activates h.
For G we use networks made publicly available by [11] that have been trained with the principles of
GANs [17] to reconstruct images from hidden-layer feature representations within CaffeNet [20].
How G is trained includes important differences from the original GAN configuration [17]. Here
we can only briefly summarize the training procedure; please see [11] for more details. The training
process involves four convolutional networks: 1) a fixed encoder network E to be inverted, 2) a
generator network G, 3) a fixed ?comparator? network C and 4) a discriminator D. G is trained to
invert a feature representation extracted by the network E, and has to satisfy three objectives: 1) for
a feature vector yi = E(xi ), the synthesized image G(yi ) has to be close to the original image xi ;
2) the features of the output image C(G(yi )) have to be close to those of the real image C(xi ); 3)
D should be unable to distinguish G(yi ) from real images. The objective for D is to discriminate
between synthetic images G(yi ) and real images xi as in the original GAN [17].
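To make the three objectives concrete, here is a schematic sketch of the generator's loss (our paraphrase, not the authors' code; the module interfaces, the λ weights, and the cross-entropy form of the adversarial term are our assumptions; see [11] for the exact formulation):

```python
# A schematic sketch (ours) of the three-part training loss for G.
import torch
import torch.nn.functional as F

def generator_loss(x, encoder, generator, comparator, discriminator,
                   lam_img=1.0, lam_feat=1.0, lam_adv=0.1):
    y = encoder(x)                         # fixed E: image -> feature code
    x_hat = generator(y)                   # G: code -> synthesized image
    loss_img = F.mse_loss(x_hat, x)                 # 1) pixel reconstruction
    loss_feat = F.mse_loss(comparator(x_hat),       # 2) match C's features
                           comparator(x))
    d_fake = discriminator(x_hat)                   # 3) fool D (GAN term)
    loss_adv = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    return lam_img * loss_img + lam_feat * loss_feat + lam_adv * loss_adv
```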
In this paper, the encoder E is CaffeNet truncated at different layers. We denote CaffeNet truncated
at layer l by El , and the network trained to invert El by Gl . The ?comparator? C is CaffeNet up to
layer pool5. D is a convolutional network with 5 convolutional and 2 fully connected layers. G is an
upconvolutional (aka deconvolutional) architecture [15] with 9 upconvolutional and 3 fully connected
layers. Detailed architectures are provided in [11].
Synthesizing the preferred images for a neuron. Intuitively, we search in the input code space of the image generator model G to find a code y such that G(y) is an image that produces a high activation of the target neuron h in the DNN Φ that we want to visualize (i.e. optimization maximizes Φ_h(G(y))). Recall that G_l is a generator network trained to reconstruct images from the l-th layer features of CaffeNet. Formally, and including a regularization term, we may pose the activation maximization problem as finding a code ŷ_l such that

ŷ_l = arg max_{y_l} ( Φ_h(G_l(y_l)) − λ‖y_l‖ ).   (1)

Empirically, we found that a small amount of L2 regularization (λ = 0.005) works best. We also compute the activation range for each neuron in the set of codes {y_i^l} computed by running validation set images through E_l. We then clip each neuron in ŷ_l to be within the activation range [0, 3σ], where σ is one standard deviation around the mean activation (the activation is lower bounded at 0 due to the ReLU nonlinearities that exist at the layers whose codes we optimize). This clipping acts as a primitive prior on the code space and substantially improves the image quality. In future work, we plan to learn this prior via a GAN or other generative model. Because the true goal of activation maximization is to generate interpretable preferred stimuli for each neuron, we performed a random search in the hyperparameter space consisting of the L2 weight λ, the number of iterations, and the learning rate. We chose the hyperparameter settings that produced the highest quality images. We note that we found no correlation between the activation of a neuron and the recognizability of its visualization. Our code and parameters are available at http://EvolvingAI.org/synthesizing.
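For concreteness, the following sketch (our own, written in PyTorch rather than the authors' Caffe setup) implements the optimization loop for (1): gradient ascent on the fc6 code with L2 decay and clipping to [0, 3σ]. The generator, classifier, and per-unit σ statistics are assumed to be given; names and default values are ours.

```python
# A minimal sketch (ours) of the DGN-AM update loop for Eq. (1).
import torch

def dgn_am(generator, classifier, target_unit, sigma,
           steps=200, lr=1.0, lam=0.005, code_dim=4096):
    # sigma: tensor of shape (code_dim,) with per-unit standard deviations
    code = torch.zeros(1, code_dim, requires_grad=True)
    for _ in range(steps):
        image = generator(code)
        activation = classifier(image)[0, target_unit]
        objective = activation - lam * code.norm()     # Eq. (1)
        objective.backward()
        with torch.no_grad():
            code += lr * code.grad                     # gradient ascent step
            code.grad.zero_()
            code.clamp_(min=torch.zeros_like(code),    # clip to [0, 3*sigma]
                        max=3 * sigma)
    return generator(code).detach()
```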
3 Results
3.1 Comparison between priors trained to invert features from different layers
Since a generator model Gl could be trained to invert feature representations of an arbitrary layer l of
E, we sampled l ∈ {3, 5, 6, 7} to explore the impact of this choice and identify qualitatively which
produces the best images. Here, the DNN to visualize Φ is the same as the encoder E (CaffeNet), but
they can be different (as shown below). The Gl networks are from [11]. For each Gl network we
chose the hyperparameter settings from a random sample that gave the best qualitative results.
Optimizing codes from the convolutional layers (l = 3, 5) typically yields highly repeated fragments,
whereas optimizing fully-connected layer codes produces much more coherent global structure
(Fig. S13). Interestingly, previous studies have shown that G trained to invert lower-layer codes
(smaller l) results in far better reconstructions than higher-layer codes [25, 6]. That can be explained
because those low-level codes come from natural images, and contain more information about
image details than more abstract, high-level codes. For activation maximization, however, we are
synthesizing an entire layer code from scratch. We hypothesize that this process works worse for
Gl priors with smaller l because each feature in low-level codes has a small, local receptive field.
Optimization thus has to independently tune features throughout the image without knowing the
global structure. For example, is it an image of one or four robins? Because fully-connected layers
have information from all areas of the image, they represent information such as the number, location,
size, etc. of an object, and thus all the pixels can be optimized toward this agreed upon structure. An
orthogonal, non-mutually-exclusive hypothesis is that the code space at a convolutional layer is much
more high-dimensional, making it harder to optimize.
We found that optimizing in the fc6 code space produces the best visualizations (Figs. 1 & S13). We
thus use this G6 DGN as the default prior for the experiments in the rest of the paper. In addition,
our images qualitatively appear to be the most realistic-looking compared to visualizations from all
previous methods (Fig. S17). Our result reveals that a great amount of fine detail and global structure
are captured by the DNN even at the last output layer. This finding is in contrast to a previous
hypothesis that DNNs trained with supervised learning often ignore an object's global structure, and
only learn discriminative features per class (e.g. color or texture) [3]. Section 3.5 provides evidence
that this global structure does not come from the prior.
To test whether our method memorizes the training set images, we retrieved the closest images from
the training set for each sampled synthetic image. Specifically, for each synthetic image for an
output neuron Y (e.g. lipstick), we find an image among the same class Y with the lowest Euclidean
distance in pixel space, as done in previous works [17], but also in each of the 8 code spaces of the
encoder DNN. While this is a much harder test than comparing to a nearest neighbor found among the
entire dataset, we found no evidence that our method memorizes the training set images (Fig. S22).
We believe evaluating similarity in the spaces of deep representations, which better capture semantic
aspects of images, is a more informative approach compared to evaluating only in the pixel space.
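A minimal sketch of this retrieval, assuming images (or their layer codes) have been flattened to vectors; the helper name is ours:

```python
import numpy as np

def nearest_in_class(query, candidates):
    """Return (index, distance) of the same-class candidate closest to `query`.

    query      : d-vector (a synthetic image, or its code at some layer)
    candidates : (n, d) array of same-class training images or codes
    The same routine applies in pixel space or in any of the 8 code spaces.
    """
    d = np.linalg.norm(candidates - query[None, :], axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])
```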
3.2 Does the learned prior trained on ImageNet generalize to other datasets?
We test whether the same DNN prior (G6 ) that was trained on inverting the feature representations of
ImageNet images generalizes to enable visualizing DNNs trained on different datasets. Specifically,
we target the output neurons of two DNNs downloaded from Caffe Model Zoo [20]: (1) An AlexNet
DNN that was trained on the 2.5-million-image MIT Places dataset to classify 205 types of places
with 50.1% accuracy [26]. (2) A hybrid architecture of CaffeNet and the network in [2] created
by [27] to classify actions in videos by processing each frame of the video separately. The dataset
consists of 13,320 videos categorized into 101 human action classes.
For DNN 1, the prior trained on ImageNet images generalizes well to the completely different MIT
Places dataset (Fig. 3). This result suggests the prior trained on ImageNet will generalize to other
natural image datasets, at least if the architecture of the DNN to be visualized Φ is the same as
the architecture of the encoder network E from which the generator model G was trained to invert
feature representations. For DNN 2, the prior generalizes to produce decent results; however, the
images are not qualitatively as sharp and clear as for DNN 1 (Fig. 4). We have two orthogonal
hypotheses for why this happens: 1) Φ (the DNN from [27]) is a heavily modified version of E
(CaffeNet); 2) the two types of images are too different: the primarily object-centric ImageNet dataset
vs. the UCF-101 dataset, which focuses on humans performing actions. Sec. 3.3 returns to the first
hypothesis regarding how the similarity between Φ and E affects the image quality.
Overall, the prior trained with a CaffeNet encoder generalizes well to visualizing other DNNs of the
same CaffeNet architecture trained on different datasets.
Figure 3: Preferred stimuli for output units of an AlexNet DNN trained on the MIT Places dataset [26],
showing that the ImageNet-trained prior generalizes well to a dataset comprised of images of scenes.
3.3 Does the learned prior generalize to visualizing different architectures?
We have shown that when the DNN to be visualized Φ is the same as the encoder E, the resultant
visualizations are quite realistic and recognizable (Sec. 3.1). To visualize a different network
architecture Φ̂, one could train a new Ĝ to invert Φ̂ feature representations. However, training a new
G DGN for every DNN we want to visualize is computationally costly. Here, we test whether the
same DGN prior trained on CaffeNet (G6 ) can be used to visualize two state-of-the-art DNNs that are
architecturally different from CaffeNet, but were trained on the same ImageNet dataset. Both were
downloaded from Caffe Model Zoo and have similar accuracy scores: (a) GoogLeNet is a 22-layer
network and has a top-5 accuracy of 88.9% [21]; (b) ResNet is a new type of very deep architecture
with skip connections [22]. We visualize a 50-layer ResNet that has a top-5 accuracy of 93.3% [22].
Figure 4: Preferred images for output units of a heavily modified version of the AlexNet architecture
trained to classify videos into 101 classes of human activities [27]. Here, we optimize a single
preferred image per neuron because the DNN only classifies single frames (whole video classification
is done by averaging scores across all video frames).
DGN-AM produces the best image quality when Φ = E, and the visualization quality tends to
degrade as the Φ architecture becomes more distant from E (Fig. 5, top row; GoogLeNet is closer
in architecture to CaffeNet than ResNet). An alternative hypothesis is that the network depth
impairs gradient propagation during activation maximization. In any case, training a general prior for
activation maximization that generalizes well to different network architectures, which would enable
comparative analysis between networks, remains an important, open challenge.
Figure 5: DGN-AM produces the best image quality when the DNN being visualized Φ is the same
as the encoder E (here, CaffeNet), as in the top row, and degrades when Φ is different from E.
3.4 Does the learned prior generalize to visualizing hidden neurons?
Visualizing the hidden neurons in an ImageNet DNN. Previous visualization techniques have
shown that low-level neurons detect small, simple patterns such as corners and textures [2, 9, 7],
mid-level neurons detect single objects like faces and chairs [9, 2, 28, 7], but that visualizations of
hidden neurons in fully-connected layers are alien and difficult to interpret [9]. Since DGN was
trained to invert the feature representations of real, full-sized ImageNet images, one possibility is that
this prior may not generalize to producing preferred images for such hidden neurons because they are
often smaller, different in theme, and/or do not resemble real objects. To find out, we synthesized
preferred images for the hidden neurons at all layers and compare them to images produced by the
multifaceted feature visualization method from [9], which harnesses hand-designed priors of total
variation and mean image initialization. The DNN being visualized is the same as in [9] (the CaffeNet
architecture with weights from [7]).
The side-by-side comparison (Fig. S14) shows that both methods often agree on the features that a
neuron has learned to detect. However, overall DGN-AM produces more realistic-looking color and
texture, despite not requiring optimization to be seeded with averages of real images, thus improving
our ability to learn what feature each hidden neuron has learned. An exception is for the faces of
human and other animals, which DGN-AM does not visualize well (Fig. S14, 3rd unit on layer 6; 1st
unit on layer 5; and 6th unit on layer 4).
Visualizing the hidden neurons in a Deep Scene DNN. Recently, Zhou et al. [28] found that object
detectors automatically emerge in the intermediate layers of a DNN as we train it to classify scene
categories. To identify what a hidden neuron cares about in a given image, they densely slide an
occluding patch across the image and record when activation drops. The activation changes are
then aggregated to segment out the exact region that leads to the high neural activation (Fig. 6, the
highlighted region in each image). To identify the semantics of these segmentations, humans are
then shown a collection of segmented images for a specific neuron and asked to label what types of
image features activate that neuron [28]. Here, we compare our method to theirs on an AlexNet DNN
trained to classify 205 categories of scenes from the MIT Places dataset (described in Sec. 3.2).
The prior learned on ImageNet generalizes to visualizing the hidden neurons of a DNN trained on the
MIT Places dataset (Fig. S15). Interestingly, our visualizations produce similar results to the method
in [28] that requires showing each neuron a large, external dataset of images to discover what feature
each neuron has learned to detect (Fig. 6). Sometimes, DGN-AM reveals additional information: a
unit that fires for TV screens also fires for people on TV (Fig. 6, unit 106). Overall, DGN-AM thus
not only generalizes well to a different dataset, but also produces visualizations that qualitatively fall
within the human-provided categories of what type of image features each neuron responds to [28].
Figure 6: Visualizations of example hidden neurons at layer 5 of an AlexNet DNN trained to classify
categories of scenes from [28]. For each unit: we compare the two visualizations produced by a
method from [28] (left) to two visualizations produced by our method (right). The left two images
are real images, each highlighting a region that highly activates the neuron, and humans provide text
labels describing the common theme in the highlighted regions. Our synthetic images enable the
same conclusion regarding what feature a hidden neuron has learned. An extended version of this
figure with more units is in Fig. S16. Best viewed electronically with zoom.
3.5 Do the synthesized images teach us what the neurons prefer or what the prior prefers?
Visualizing neurons trained on unseen, modified images. We have shown that DGN-AM can
generate preferred image stimuli with realistic colors and coherent global structures by harnessing the
DGN?s strong, learned, natural image prior (Fig. 1). To what extent do the global structure, natural
colors, and sharp textures (e.g. of the brambling bird, Fig. 1) reflect the features learned by the
"brambling" neuron vs. those preferred by the prior? To investigate that, we train 3 different DNNs:
one on images that have less global structure, one on images of non-realistic colors, and one on blurry
images. We test whether DGN-AM with the same prior produces visualizations that reflect these
modified, unrealistic features.
Specifically, we train 3 different DNNs following CaffeNet architecture to discriminate 2000 classes.
The first 1000 classes contain regular ImageNet images, and the 2nd 1000 classes contain modified
ImageNet images. We perform 3 types of modifications: 1) we cut up each image into quarters
and re-stitch them back in a random order (Fig. S19); 2) we convert regular RGB into BRG images
(Fig. S20); 3) we blur out images with Gaussian blur with radius of 3 (Fig. S21).
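These modifications are straightforward to reproduce. A sketch under stated assumptions: images are H x W x C arrays with even H and W, and the blur "radius" is taken as the Gaussian sigma (our reading, not a detail given in the text).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shuffle_quarters(img, rng):
    """Cut an image into quarters and re-stitch them in a random order."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    q = [img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]]
    rng.shuffle(q)                          # e.g. rng = np.random.default_rng(0)
    return np.concatenate([np.concatenate(q[:2], axis=1),
                           np.concatenate(q[2:], axis=1)], axis=0)

def rgb_to_brg(img):
    """Reorder channels R,G,B -> B,R,G."""
    return img[..., [2, 0, 1]]

def blur(img, radius=3):
    """Gaussian blur applied per channel (no blurring across channels)."""
    return gaussian_filter(img, sigma=(radius, radius, 0))
```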
We visualize both groups of output neurons (those trained on 1000 regular vs. 1000 modified classes)
in each DNN (Figs. S19, S20, & S21). The visualizations for the neurons that are trained on regular
images often show coherent global structures, realistic-looking colors and sharpness. In contrast, the
visualizations for neurons that are trained on modified images indeed show cut-up objects (Fig. S19),
images in BRG color space (Fig. S20), and objects with washed out details (Fig. S21). The results
show that DGN-AM visualizations do closely reflect the features learned by neurons from the data
and that these properties are not exclusively produced by the prior.
Why do visualizations of some neurons not show canonical images? While many DGN-AM
visualizations show global structure (e.g. a single, centered table lamp, Fig. 1); some others do not
(e.g. blobs of textures instead of a dog with 4 legs, Fig. S18) or otherwise are non-canonical (e.g.
a school bus off to the side of an image, Fig. S7). Sec. S5 describes our experiments investigating
whether this is a shortcoming of our method or whether these non-canonical visualizations reflect
some property of the neurons. The results suggest that DGN-AM can accurately visualize a class of
images if the images of that set are mostly canonical, and the reason why the visualizations for some
neurons lack global structure or are not canonical is that the set of images that neuron has learned to
detect are often diverse (multi-modal), instead of having canonical pose. More research is needed
into multifaceted feature visualization algorithms that separately visualize each type of image that
activates a neuron [9].
3.6 Other applications of our proposed method
DGN-AM can also be useful for a variety of other important tasks. We briefly describe our experiments
for these tasks, and refer the reader to the supplementary section for more information.
1. One advantage of synthesizing preferred images is that we can watch how features evolve during
training to better understand what occurs during deep learning. Doing so also tests whether the
learned prior (trained to invert features from a well-trained encoder) generalizes to visualizing underfit
and overfit networks. The results suggest that the visualization quality is indicative of a DNN's
validation accuracy to some extent, and the learned prior is not overly specialized to the well-trained
encoder DNN. See Sec. S6 for more details.
2. Our method for synthesizing preferred images could naturally be applied to synthesize preferred
videos for an activity recognition DNN to better understand how it works. For example, we found
that a state-of-the-art DNN classifies videos without paying attention to temporal information across
video frames (Sec. S7).
3. Our method can be extended to produce creative, original art by synthesizing images that activate
two neurons at the same time (Sec. S8).
4 Discussion and Conclusion
We have shown that activation maximization (synthesizing the preferred inputs for neurons in neural
networks) via a learned prior in the form of a deep generator network is a fruitful approach. DGN-AM produces the most realistic-looking, and thus interpretable, preferred images to date, making it
qualitatively the state of the art in activation maximization. The visualizations it synthesizes from
scratch improve our ability to understand which features a neuron has learned to detect. Not only
do the images closely reflect the features learned by a neuron, but they are visually interesting. We
have explored a variety of ways that DGN-AM can help us understand trained DNNs. In future work,
DGN-AM or its learned prior could dramatically improve our ability to synthesize an image from a
text description of it (e.g. by synthesizing the image that activates a certain caption) or create more
realistic "deep dream" [10] images. Additionally, that the prior used in this paper does not generalize
equally well to DNNs of different architectures motivates research into how to train such a general
prior. Successfully doing so could enable informative comparative analyses between the information
transformations that occur within different types of DNNs.
Acknowledgments
The authors would like to thank Yoshua Bengio for helpful discussions and Bolei Zhou for providing
images for our study. Jeff Clune was supported by an NSF CAREER award (CAREER: 1453549)
and a hardware donation from the NVIDIA Corporation. Jason Yosinski was supported by the NASA
Space Technology Research Fellowship and NSF grant 1527232. Alexey Dosovitskiy and Thomas
Brox acknowledge funding by the ERC Starting Grant VideoLearn (279401).
References
[1] R. Q. Quiroga, L. Reddy, G. Kreiman, C. Koch, and I. Fried. Invariant visual representation by single
neurons in the human brain. Nature, 435(7045):1102?1107, 2005.
[2] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In Computer Vision?
ECCV 2014, pages 818?833. Springer, 2014.
[3] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions
for unrecognizable images. In Computer Vision and Pattern Recognition (CVPR), 2015.
[4] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network.
Dept. IRO, Université de Montréal, Tech. Rep, 4323, 2009.
[5] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image
classification models and saliency maps. ICLR workshop, 2014.
[6] A. Mahendran and A. Vedaldi. Visualizing deep convolutional neural networks using natural pre-images.
In Computer Vision and Pattern Recognition (CVPR), 2016.
[7] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson. Understanding neural networks through deep
visualization. In Deep Learning Workshop, ICML conference, 2015.
[8] D. Wei, B. Zhou, A. Torralba, and W. Freeman. Understanding intra-class knowledge inside CNN. arXiv
preprint arXiv:1507.02379, 2015.
[9] A. Nguyen, J. Yosinski, and J. Clune. Multifaceted feature visualization: Uncovering the different types of
features learned by each neuron in deep neural networks. In Visualization for Deep Learning Workshop,
ICML conference, 2016.
[10] A. Mordvintsev, C. Olah, and M. Tyka. Inceptionism: Going deeper into neural networks. Google Research
Blog. Retrieved June 20, 2015.
[11] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks.
In NIPS, 2016.
[12] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In ICLR, 2016.
[13] Y. Bengio, I. J. Goodfellow, and A. Courville. Deep learning. MIT Press, 2015.
[14] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
[15] A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In Computer Vision and Pattern Recognition (CVPR), 2015.
[16] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional
generative adversarial networks. In ICLR, 2016.
[17] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. In NIPS, 2014.
[18] G. Alain and Y. Bengio. What regularized auto-encoders learn from the data-generating distribution. The
Journal of Machine Learning Research, 15(1):3563?3593, 2014.
[19] O. Russakovsky et al. Imagenet large scale visual recognition challenge. IJCV, 115(3):211?252, 2015.
[20] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015.
[22] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference
on Computer Vision and Pattern Recognition (CVPR), 2016.
[23] J. Deng et al. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Advances in neural information processing systems, pages 1097?1105, 2012.
[25] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. In CVPR, 2016.
[26] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition
using places database. In Advances in neural information processing systems, 2014.
[27] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, et al. Long-term recurrent convolutional
networks for visual recognition and description. In Computer Vision and Pattern Recognition, 2015.
[28] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Object detectors emerge in deep scene cnns.
In International Conference on Learning Representations (ICLR), 2015.
Computation of Heading Direction From
Optic Flow in Visual Cortex
Markus Lappe*
Josef P. Rauschecker
Laboratory of Neurophysiology, NIMH, Poolesville, MD, U.S.A. and
Max-Planck-Institut für Biologische Kybernetik, Tübingen, Germany
Abstract
We have designed a neural network which detects the direction of egomotion from optic flow in the presence of eye movements (Lappe and
Rauschecker, 1993). The performance of the network is consistent with
human psychophysical data, and its output neurons show great similarity
to "triple component" cells in area MSTd of monkey visual cortex. We
now show that by using assumptions about the kind of eye movements
that the observer is likely to perform, our model can generate various
other cell types found in MSTd as well.
1 INTRODUCTION
Following the ideas of Gibson in the 1950's a number of studies in human psychophysics
have demonstrated that optic flow can be used effectively for navigation in space (Rieger and
Toet, 1985; Stone and Perrone, 1991; Warren et al., 1988). In search for the neural basis of
optic flow processing, an area in the cat's extrastriate visual cortex (PMLS) was described
as having a centrifugal organization of neuronal direction preferences, which suggested
an involvement of area PMLS in the processing of expanding flow fields (Rauschecker
et al., 1987; Brenner and Rauschecker, 1990). Recently, neurons in the dorsal part of
the medial superior temporal area (MSTd) in monkeys have been described that respond
to various combinations of large expanding/contracting, rotating, or shifting dot patterns
(Duffy and Wurtz, 1991; Tanaka and Saito, 1989). Cells in MSTd show a continuum
of response properties ranging from selectivity for only one movement pattern ("single
*Present address: Neurobiologie, ND7, Ruhr-Universität Bochum, 4630 Bochum, Germany.
component cells") to selectivity for one mode of each of the three movement types ("triple
component cells"). An interesting property of many MSTd cells is their position invariance
(Andersen et al., 1990). A sizable proportion of cells, however, do change their selectivity
when the stimulus is displaced by several tens of degrees of visual angle, and their position
dependence seems to be correlated with the type of movement selectivity (Duffy and Wurtz,
1991; Orban et al., 1992): It is most common for triple component cells and occurs least
often in single component cells. Taken together, the wide range of directional tuning and
the apparent lack of specificity for the spatial position of a stimulus seem to suggest that
MSTd cells do not possess the selectivity needed to explain the high accuracy of human
observers in psychophysical experiments. Our simulation results, however, demonstrate
that a population encoding can be used, in which individual neurons are rather broadly
tuned while the whole network gives very accurate results.
2 THE NETWORK MODEL
The major projections to area MST originate from the middle temporal area (MT). Area
MT is a well known area of monkey cortex specialized for the processing of visual motion.
It contains a retinotopic representation of local movement directions (Allman and Kaas,
1971; Maunsell and Van Essen, 1983). In our model we assume that area MT comprises a
population encoding of the optic flow and that area MST uses this input from MT to extract
the heading direction. Therefore, the network consists of two layers. In the first layer, 300
optic flow vectors at random locations within 50 degrees of eccentricity are represented.
Each flow vector is encoded by a population of directionally selective neurons. It has been
shown previously that a biologically plausible population encoding like this can also be
modelled by a neural network (Wang et al., 1989). For simplicity we use only four neurons
to represent an optic flow vector θ_i as
θ_i = \sum_{k=1}^{4} s_{ik} e_{ik},   (1)
with equally spaced preferred directions e_{ik} = (cos(πk/2), sin(πk/2))^t. A neuron's
response to a flow vector of direction φ_i and speed ‖θ_i‖ is given by the tuning curve
s_{ik} = ‖θ_i‖ cos(φ_i - πk/2)  if cos(φ_i - πk/2) > 0,  and s_{ik} = 0 otherwise.
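In code, the four-unit encoding and the linear decoding of eq. (1) can be sketched as follows (a NumPy sketch with angles in radians, not the authors' implementation); for these four cardinal preferred directions the half-rectified code reconstructs the flow vector exactly.

```python
import numpy as np

K = np.arange(1, 5)                        # four units, k = 1..4
E = np.stack([np.cos(np.pi * K / 2),       # preferred directions e_{ik}
              np.sin(np.pi * K / 2)], axis=1)

def encode_flow(speed, phi):
    """Half-rectified responses s_{ik} = speed * cos(phi - pi*k/2)."""
    return np.maximum(speed * np.cos(phi - np.pi * K / 2), 0.0)

def decode_flow(s):
    """Eq. (1): theta_i = sum_k s_{ik} e_{ik}."""
    return s @ E
```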
The second layer contains a retinotopic grid of possible translational heading directions T_j.
Each direction is represented by a population of neurons, whose summed activities give the
likelihood that T_j is the correct heading. The perceived direction is finally chosen to be
the one that has the highest population activity.
The calculation of this likelihood is based on the subspace algorithm by Heeger and Jepson
(1992). It employs the minimization of a residual function over all possible heading
directions. The neuronal populations in the second layer evaluate a related function that
is maximal for the correct heading. The subspace algorithm works as follows: When
an observer moves through a static environment all points in space share the same six
motion parameters, the translation T = (T_x, T_y, T_z)^t and the rotation Ω = (Ω_x, Ω_y, Ω_z)^t.
The optic flow θ(x, y) is the projection of the movement of a 3D point (X, Y, Z)^t onto the
retina, which, for simplicity, is modelled as an image plane. In a viewer-centered coordinate
system the optic flow can be written as:
θ(x, y) = \frac{1}{Z(x, y)} A(x, y) T + B(x, y) Ω   (2)
with the matrices
A(x, y) = \begin{pmatrix} -f & 0 & x \\ 0 & -f & y \end{pmatrix}   and   B(x, y) = \begin{pmatrix} xy/f & -(f + x^2/f) & y \\ f + y^2/f & -xy/f & -x \end{pmatrix}
depending only on coordinates (x, y) in the image plane and on the "focal length" f (Heeger
and Jepson, 1992). In trying to estimate T, given the optic flow θ, we first have to note
that the unknowns Z(x, y) and T are multiplied together. They can thus not be determined
independently, so that the translation is considered a unit vector pointing in the direction of
heading. Eq. (2) now contains six unknowns, Z(x, y), T and Ω, but only two measurements
θ_x and θ_y per point. Therefore, flow vectors from m distinct image points are combined into the
matrix equation
Θ = C(T) q,   (3)
where Θ = (θ_1, ..., θ_m)^t is a 2m-dimensional vector consisting of the components of
the m image velocities, q = (1/Z(x_1, y_1), ..., 1/Z(x_m, y_m), Ω_x, Ω_y, Ω_z)^t an (m + 3)-dimensional vector, and
C(T) = \begin{pmatrix} A(x_1, y_1) T & & & B(x_1, y_1) \\ & \ddots & & \vdots \\ & & A(x_m, y_m) T & B(x_m, y_m) \end{pmatrix}   (4)
a 2m x (m + 3) matrix. Heeger and Jepson (1992) show that the heading direction can be
recovered by minimizing the residual function
R(T) = ‖Θ^t C^⊥(T)‖².
In this equation C^⊥(T) is defined as follows: Provided that the columns of C(T) are
linearly independent, they form a basis of an (m + 3)-dimensional subspace of R^{2m},
which is called the range of C(T). The matrix C^⊥(T) spans the remaining (2m - (m + 3))-dimensional subspace, which is called the orthogonal complement of C(T). Every vector
in the orthogonal complement of C(T) is orthogonal to every vector in the range of C(T).
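Numerically, R(T) can be evaluated without forming C^⊥(T) explicitly: project the stacked flow Θ onto the range of C(T) and measure what is left over, which equals ‖Θ^t C^⊥(T)‖² when C^⊥(T) has orthonormal columns. A sketch with hypothetical helper names (A and B follow eq. (2); at least m = 3 points are needed so that 2m > m + 3):

```python
import numpy as np

def A_mat(x, y, f=1.0):
    return np.array([[-f, 0.0, x], [0.0, -f, y]])

def B_mat(x, y, f=1.0):
    return np.array([[x*y/f, -(f + x**2/f), y],
                     [f + y**2/f, -x*y/f, -x]])

def C_of_T(points, T, f=1.0):
    """Assemble the 2m x (m+3) matrix of eq. (4) for one candidate heading T."""
    m = len(points)
    C = np.zeros((2*m, m + 3))
    for i, (x, y) in enumerate(points):
        C[2*i:2*i+2, i] = A_mat(x, y, f) @ T   # depth column for point i
        C[2*i:2*i+2, m:] = B_mat(x, y, f)      # shared rotation columns
    return C

def residual(theta, C):
    """R(T): squared norm of the flow component outside range(C(T))."""
    Q, _ = np.linalg.qr(C)                     # orthonormal basis of range(C)
    r = theta - Q @ (Q.T @ theta)
    return float(r @ r)
```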
In the network, the population of neurons representing a certain T_j shall be maximally
excited when R(T_j) = 0. Two steps are necessary to accomplish this. First, an individual
neuron evaluates part of the argument of R(T_j) by picking out one of the column vectors of
C^⊥(T_j), denoted by C_l^⊥(T_j), and computing Θ^t C_l^⊥(T_j). This is done in the following
way: m first layer populations are chosen to form the neuron's input receptive field. The
neuron's output is given by the sigmoid function
U_{jl} = g( \sum_{i=1}^{m} \sum_{k=1}^{4} J_{ijkl} s_{ik} - μ ),   (5)
in which J_{ijkl} denotes the strength of the synaptic connection between the l-th output
neuron in the second layer population representing heading direction T_j and the k-th input
neuron in the first layer population representing the optic flow vector θ_i; μ denotes the
threshold. For the synaptic strengths we require that:
\sum_{i=1}^{m} \sum_{k=1}^{4} J_{ijkl} s_{ik} = Θ^t C_l^⊥(T_j).   (6)
At a single image location i this is:
\sum_{k=1}^{4} J_{ijkl} s_{ik} = θ_i^t \begin{pmatrix} C^⊥_{2i-1,l}(T_j) \\ C^⊥_{2i,l}(T_j) \end{pmatrix}.
Substituting eq. (1) we find:
\sum_{k=1}^{4} J_{ijkl} s_{ik} = \sum_{k=1}^{4} s_{ik} e_{ik}^t \begin{pmatrix} C^⊥_{2i-1,l}(T_j) \\ C^⊥_{2i,l}(T_j) \end{pmatrix}.
Therefore we set the synaptic strengths to:
J_{ijkl} = e_{ik}^t \begin{pmatrix} C^⊥_{2i-1,l}(T_j) \\ C^⊥_{2i,l}(T_j) \end{pmatrix}.
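Given one column of C^⊥(T_j), the weights of eq. (6) for a single output neuron can then be filled in by pairing consecutive entries per image location (a sketch; the helper name is ours):

```python
import numpy as np

def synaptic_weights(c_perp_col, E):
    """J_{ijkl} for fixed (j, l).

    c_perp_col : length-2m column l of the orthogonal complement of C(T_j)
    E          : (4, 2) matrix of preferred directions e_{ik}
    returns    : (m, 4) weight array indexed by (image location i, unit k)
    """
    pairs = c_perp_col.reshape(-1, 2)   # row i holds the entries (2i-1, 2i)
    return pairs @ E.T                  # dot each e_{ik} with its entry pair
```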
Then, whenever T_j is compatible with the measured optic flow, i.e. when Θ is in the range
of C(T_j), the neuron receives a net input of zero. In the second step, another neuron U_{jl'} is
constructed so that the sum of the activities of the two neurons is maximal in this situation.
Both neurons are connected to the same set of image locations but their connection strengths
satisfy J_{ijkl'} = -J_{ijkl}. In addition, the threshold μ is given a slightly negative value. Then
both their sigmoid transfer functions overlap at zero input, and the sum has a single peak.
Finally, the neurons in every second layer population are organized in such matched pairs
so that each population j generates its maximal activity when R(T_j) = 0.
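The matched-pair construction is easy to check numerically: with a slightly negative threshold μ, the summed pair g(x) + g(-x) peaks exactly at zero net input (a small sketch):

```python
import numpy as np

def g(u, mu=-0.1):
    """Sigmoid output neuron with threshold mu."""
    return 1.0 / (1.0 + np.exp(-(u - mu)))

def pair_activity(net_input, mu=-0.1):
    """Matched pair with opposite weights: g(x) + g(-x) is maximal at x = 0,
    i.e. exactly when the measured flow is compatible with the heading T_j."""
    return g(net_input, mu) + g(-net_input, mu)
```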
In simulations, our network is able to compute the direction of heading with a mean error
of less than one degree in agreement with human psychophysical data (see Lappe and
Rauschecker, 1993). Like heading detection in human observers it functions over a wide
range of speeds, it works with sparse flow fields, and it needs depth in the visual environment
when eye movements are performed.
3 DIFFERENT RESPONSE SELECTIVITIES
For the remainder of this paper we will focus on the second layer neuron's response properties by carrying out simulations analogous to neurophysiological experiments (Andersen et
al., 1990; Duffy and Wurtz, 1991; Orban et al., 1992). A single neuron is constructed that
receives input from 30 random image locations forming a 60 x 60 degree receptive field.
The receptive field occupies the lower left quadrant of the visual field and also includes
the fovea (Fig. 1A). The neuron is then presented with shifting, expanding/contracting and
rotating optic flow patterns. The center (x_c, y_c) of the expanding/contracting and rotating
patterns is varied over the 100 x 100 degree visual field in order to test the position dependence of the neuron's responses. Directional tuning is assessed via the direction φ of the
shifting patterns. All patterns are obtained by choosing suitable translations and rotations
in eq. (2). For instance, rotating patterns centered at (x_c, y_c) are generated by
T = 0  and  Ω = \frac{Ω}{\sqrt{x_c^2 + y_c^2 + f^2}} \begin{pmatrix} x_c \\ y_c \\ f \end{pmatrix},   (7)
where the scalar Ω is the rotation speed.
In keeping with the most common experimental condition, all depth values Z(x_i, y_i) are
taken to be equal.
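Such stimulus flow fields follow directly from eqs. (2) and (7); for instance, a rotating pattern centred at (x_c, y_c) can be generated as in this sketch (T = 0, so only the rotational term of eq. (2) survives):

```python
import numpy as np

def B_mat(x, y, f=1.0):
    """Rotational part of eq. (2)."""
    return np.array([[x*y/f, -(f + x**2/f), y],
                     [f + y**2/f, -x*y/f, -x]])

def rotating_pattern(points, xc, yc, f=1.0, omega=1.0):
    """Flow vectors of the rotation defined in eq. (7), one per image point."""
    Om = (omega / np.sqrt(xc**2 + yc**2 + f**2)) * np.array([xc, yc, f])
    return np.array([B_mat(x, y, f) @ Om for (x, y) in points])
```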
Figure 1: Single Neuron Responding To All Three Types Of Optic Flow Stimuli
(" Triple Component Cell")
In the following we consider different assumptions about the observer's eye movements.
These assumptions change the equations of the subspace algorithm. The rotational matrix
B(x, y) takes on different forms. We will show that these changes result in different
cell types. First let us restrict the model to the biologically most important case: During
locomotion in a static environment the eye movements of humans or higher animals are
usually the product of intentional behavior. A very common situation is the fixation of a
visible object during locomotion. A specific eye rotation is necessary to compensate for the
translational body-movement and to keep the object fixed in the center (0, 0) of the visual
field, so that its image velocity eq. (2) vanishes:
θ(0, 0) = \frac{1}{Z_F} \begin{pmatrix} -f T_x \\ -f T_y \end{pmatrix} + \begin{pmatrix} -f Ω_y \\ f Ω_x \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.   (8)
Z_F denotes the distance of the fixation point. We can easily calculate Ω_x and Ω_y from
eq. (8) and choose Ω_z = 0. The optic flow eq. (2) in the case of the fixation of a stationary
object then is
θ(x, y) = \frac{1}{Z(x, y)} A(x, y) T + \frac{1}{Z_F} B̃(x, y) T,
with
B̃(x, y) = \begin{pmatrix} f + x^2/f & xy/f & 0 \\ xy/f & f + y^2/f & 0 \end{pmatrix}.
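For completeness, the fixation-constrained matrix is easy to write down (a sketch; the helper name is ours):

```python
import numpy as np

def B_fix(x, y, f=1.0):
    """2x3 matrix of the fixation-constrained model, so that
    theta(x, y) = A(x, y) T / Z(x, y) + B_fix(x, y) T / Z_F."""
    return np.array([[f + x**2/f, x*y/f, 0.0],
                     [x*y/f, f + y**2/f, 0.0]])
```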
We would like to emphasize that another common situation, namely no eye movements at
all, can be approximated by Z_F → ∞. We can now construct a new matrix
C̃(T) = \begin{pmatrix} A(x_1, y_1) T & & & B̃(x_1, y_1) T \\ & \ddots & & \vdots \\ & & A(x_m, y_m) T & B̃(x_m, y_m) T \end{pmatrix}
and form synaptic connections in the same way as described above. The resulting network
is able to deal with the most common types of eye movements.
Figure 2: Neuron Selective For Two Components ("Double Component Cell")
The response properties of a
single neuron from such a network are shown in Fig. 1. The neuron is selective for all three
types of flow patterns. It exhibits broad directional tuning (Fig. 1B) for upward shifting
patterns (φ = 90 deg.). The responses to expanding (Fig. 1C), contracting (Fig. 1D) and
rotating (Fig. 1E-F) patterns show large areas of position invariant selectivity. Inside the
receptive field, which covers the second quadrant (see distribution of input locations in
Fig. 1A), the neuron favors upward shifts, contractions and counterclockwise rotations. It is
thus compatible with a triple component cell in MSTd. Also, lines are visible along which
the selectivities reverse. This happens because the neuron's input is a linear function of the
stimulus position (x_c, y_c). For example, for rotational patterns we can calculate the input
using eqs. (2), (6), and (7):
\sum_{i=1}^{m} \sum_{k=1}^{4} J_{ijkl} s_{ik} = \frac{Ω}{\sqrt{x_c^2 + y_c^2 + f^2}} \sum_{i=1}^{m} (x_c, y_c, f) B^t(x_i, y_i) \begin{pmatrix} C^⊥_{2i-1,l}(T_j) \\ C^⊥_{2i,l}(T_j) \end{pmatrix}.
As long as the threshold μ is small, the neuron's output is halfway between its maximal and
minimal values whenever its input is zero, i.e. when the right-hand side above vanishes.
This is the equation of a line in the (x_c, y_c) plane. The neuron's selectivity for rotations
reverses along this line. A similar equation holds for expansion/contraction selectivity.
Now, what would the neuron's selectivity look like if we had not restricted the eye movements to the case of the fixation of an object? The responses of a neuron that is constructed
following the unconstrained version of the algorithm, as described in section 2, are shown in
Fig. 2. There is no selectivity for clockwise versus counterclockwise rotations at all, since
both patterns elicit the same response everywhere in the visual field. Inside the receptive
field the neuron favors contractions and shifts towards the upper left (φ = 150 deg.). It
can thus be regarded as a double component cell.
Figure 3: Predominantly Rotation Selective Neuron ("Single Component Cell")
To understand the absence of rotational
selectivity we have to calculate the whole rotational optic flow pattern θ_rot by inserting T
and Ω from eq. (7) into eq. (3). C(T) becomes
C(0) = \begin{pmatrix} 0 & \cdots & 0 & B(x_1, y_1) \\ \vdots & & \vdots & \vdots \\ 0 & \cdots & 0 & B(x_m, y_m) \end{pmatrix}.
Denoting the three rightmost column vectors of C(T) by B_1, B_2, and B_3 we find:
θ_rot = \frac{Ω}{\sqrt{x_c^2 + y_c^2 + f^2}} (x_c B_1 + y_c B_2 + f B_3).
Comparison to C(T), eq. (4), shows that θ_rot can be written as a linear combination of
column vectors of C(T). Thus θ_rot lies in the range of C(T) and is orthogonal to C^⊥(T),
so that θ_rot^t C^⊥_l(T_j) = 0 for all j and l. From eqs. (5) and (6) it follows that the neuron's
response to any rotational pattern is always U_{jl} = g(-μ).
The last type of eye movements we want to consider is that of a general frontoparallel
rotation, which is defined by Ω_z = 0. In addition to the fixation of a stationary object, frontoparallel rotations also include smooth pursuit eye movements necessary for the fixation
of a moving object. Inserting Ω_z = 0 into eq. (2) gives
θ(x, y) = \frac{1}{Z(x, y)} A(x, y) T + B̂(x, y) \begin{pmatrix} Ω_x \\ Ω_y \end{pmatrix},
with
B̂(x, y) = \begin{pmatrix} xy/f & -(f + x^2/f) \\ f + y^2/f & -xy/f \end{pmatrix}
now being a 2 x 2 matrix, so that C(T), eq. (4), becomes a 2m x (m + 2) matrix
Ĉ(T). A neuron that is constructed using Ĉ(T) can be seen in Fig. 3. It best responds to
counterclockwise rotational patterns showing complete position invariance over the visual
field. The neuron is much less selective to expansions and unidirectional shifts, since
the responses never reach saturation. It therefore resembles a single component rotation
selective cell. The position invariant behavior can again be explained by looking at the
rotational optic flow pattern. Using the same argument as above, one can show that the
neuron's input is zero whenever Ω_z vanishes, i.e. when the rotational axis lies in the
(X, Y)-plane. Then the flow pattern becomes
θ̂_rot = \frac{Ω}{\sqrt{x_c^2 + y_c^2 + f^2}} (x_c B̂_1 + y_c B̂_2),
and is an element of the range of Ĉ(T_j). The (X, Y)-plane thus splits the space of all
rotational axes into two half spaces, one in which the neuron's input is always positive and
one in which it is always negative. Clockwise rotations are characterized by Ω_z > 0 and
hence all lie in the same half space, while counterclockwise rotations lie in the other. As a
result the neuron is exclusively excited by one mode of rotation in all of the visual field.
4 Conclusion
Our neural network model for the detection of ego-motion proposes a computational map
of heading directions. A similar map could be contained in area MSTd of monkey visual
cortex. Cells in MSTd exhibit a varying degree of selectivity for basic optic flow patterns,
but often show a substantial indifference towards the spatial position of a stimulus. By using
a population encoding of the heading directions, individual neurons in the model exhibit
similar position invariant responses within large parts of the visual field. Different neuronal
selectivities found in MSTd can be modelled by assuming specializations pertaining to
different types of eye movements. Consistent with experimental findings the position
invariance of the model neurons is largest in the single component cells and less developed
in the double and triple component cells.
References
Allman, J. M. and Kaas, J. H. 1971. Brain Res. 31, 85-105.
Andersen, R., Graziano, M., and Snowden, R. 1990. Soc. Neurosci. Abstr. 16, 7.
Brenner, E. and Rauschecker, J. P. 1990. J. Physiol. 423, 641-660.
Duffy, C. J. and Wurtz, R. H. 1991. J. Neurophysiol. 65(6), 1329-1359.
Gibson, J. J. 1950. The Perception of the Visual World. Houghton Mifflin, Boston.
Heeger, D. J. and Jepson, A. 1992. Int. J. Comp. Vis. 7(2), 95-117.
Lappe, M. and Rauschecker, J. P. 1993. Neural Computation (in press).
Maunsell, J. H. R. and Van Essen, D. C. 1983. J. Neurophysiol. 49(5), 1127-1147.
Orban, G. A., Lagae, L., Verri, A., Raiguel, S., Xiao, D., Maes, H., and Torre, V. 1992.
Proc. Nat. Acad. Sci. 89, 2595-2599.
Rauschecker, J. P., von Grünau, M. W., and Poulin, C. 1987. J. Neurosci. 7(4), 943-958.
Rieger, J. H. and Toet, L. 1985. Biol. Cyb. 52, 377-381.
Stone, L. S. and Perrone, J. A. 1991. In Soc. Neurosci. Abstr. 17, 857.
Tanaka, K. and Saito, H.-A. 1989. J. Neurophysiol. 62(3), 626-641.
Wang, H. T., Mathur, B. P. and Koch, C. 1989. Neural Computation 1, 92-103.
Warren, W. H. Jr., and Hannon, D. J. 1988. Nature 336, 162-163.
Generating Long-term Trajectories Using Deep
Hierarchical Networks
Stephan Zheng
Caltech
[email protected]
Yisong Yue
Caltech
[email protected]
Patrick Lucey
STATS
[email protected]
Abstract
We study the problem of modeling spatiotemporal trajectories over long time
horizons using expert demonstrations. For instance, in sports, agents often choose
action sequences with long-term goals in mind, such as achieving a certain strategic
position. Conventional policy learning approaches, such as those based on Markov
decision processes, generally fail at learning cohesive long-term behavior in such
high-dimensional state spaces, and are only effective when fairly myopic decision-making yields the desired behavior. The key difficulty is that conventional models
are "single-scale" and only learn a single state-action policy. We instead propose a
hierarchical policy class that automatically reasons about both long-term and short-term goals, which we instantiate as a hierarchical neural network. We showcase our
approach in a case study on learning to imitate demonstrated basketball trajectories,
and show that it generates significantly more realistic trajectories compared to
non-hierarchical baselines as judged by professional sports analysts.
1 Introduction
Modeling long-term behavior is a key challenge in many learning problems that require complex decision-making. Consider a sports player
determining a movement trajectory to achieve a certain strategic position.
The space of such trajectories is prohibitively large, and precludes conventional approaches, such as those based on simple Markovian dynamics.
Many decision problems can be naturally modeled as requiring high-level,
long-term macro-goals, which span time horizons much longer than the
timescale of low-level micro-actions (cf. He et al. [8], Hausknecht and
Stone [7]). A natural example for such macro-micro behavior occurs in
spatiotemporal games, such as basketball where players execute complex
trajectories. The micro-actions of each agent are to move around the
court and, if they have the ball, dribble, pass or shoot the ball. These
micro-actions operate at the centisecond scale, whereas their macro-goals,
such as "maneuver behind these 2 defenders towards the basket", span
multiple seconds.
Figure 1: The player (green) has two macro-goals: 1) pass the ball (orange) and 2) move to the basket.
Figure 1 depicts an example from a professional basketball game, where the player
must make a sequence of movements (micro-actions) in order to reach a specific location on the
basketball court (macro-goal).
Intuitively, agents need to trade off between short-term and long-term behavior: often sequences of
individually reasonable micro-actions do not form a cohesive trajectory towards a macro-goal. For
instance, in Figure 1 the player (green) takes a highly non-linear trajectory towards his macro-goal of
positioning near the basket. As such, conventional approaches are not well suited for these settings,
as they generally use a single (low-level) state-action policy, which is only successful when myopic
or short-term decision-making leads to the desired behavior.
In this paper, we propose a novel class of hierarchical policy models, which we instantiate using
recurrent neural networks, that can simultaneously reason about both macro-goals and micro-actions.
Our model utilizes an attention mechanism through which the macro-policy guides the micro-policy.
Our model is further distinguished from previous work on hierarchical policies by dynamically
predicting macro-goals instead of following fixed goals, which gives additional flexibility to our
model class that can be fitted to data (rather than having the macro-goals be specifically hand-crafted).
We showcase our approach in a case study on learning to imitate demonstrated behavior in professional
basketball. Our primary result is that our approach generates significantly more realistic player
trajectories compared to non-hierarchical baselines, as judged by professional sports analysts. We
also provide a comprehensive qualitative and quantitative analysis, e.g., showing that incorporating
macro-goals can actually improve 1-step micro-action prediction accuracy.
2 Related Work
The reinforcement learning community has largely focused on non-hierarchical policies such as those
based on Markovian or linear dynamics (cf. Ziebart et al. [17], Mnih et al. [11], Hausknecht and
Stone [7]). By and large, such policy classes are shown to be effective only when the optimal action
can be found via short-term planning. Previous research has instead focused on issues such as how
to perform effective exploration, plan over parameterized action spaces, or deal with non-convexity
issues from using deep neural networks. In contrast, we focus on developing hierarchical policies that
can effectively generate realistic long-term plans in complex settings such as basketball gameplay.
The use of hierarchical models to decompose macro-goals from micro-actions is relatively common
in the planning community (cf. Sutton et al. [14], He et al. [8], Bai et al. [1]). For instance, the
winning team in the 2015 RoboCup Simulation Challenge (Bai et al. [1]) used a manually constructed
hierarchical policy to solve MDPs with a set of fixed sub-tasks, while Konidaris et al. [10] segmented
demonstrations to construct a hierarchy of static macro-goals. In contrast, we study how one can
learn a hierarchical policy from a large amount of expert demonstrations that can adapt its policy in
non-Markovian environments with dynamic macro-goals.
Our approach shares affinity with behavioral cloning. One difference with previous work is that we
do not learn a reward function that induces such behavior (cf. Muelling et al. [12]). Another related
line of research aims to develop efficient policies for factored MDPs (Guestrin et al. [6]), e.g. by
learning value functions over factorized state spaces for multi-agent systems. It may be possible that
such approaches are also applicable for learning our hierarchical policy.
Attention models for deep networks have mainly been applied to natural language processing, image
recognition and combinations thereof (Xu et al. [15]). In contrast to previous work which focuses on
attention models of the input, our attention model is applied to the output by integrating control from
both the macro-policy and the micro-policy.
Recent work on generative models for sequential data (Chung et al. [4]), such as handwriting
generation, have combined latent variables with an RNN's hidden state to capture temporal variability
in the input. In our work, we instead aim to learn semantically meaningful latent variables that are
external to the RNN and reason about long-term behavior and goals.
Our model shares conceptual similarities to the Dual Process framework (Evans and Stanovich
[5]), which decomposes cognitive processes into fast, unconscious behavior (System 1) and slow,
conscious behavior (System 2). This separation reflects our policy decomposition into a macro and
micro part. Other related work in neuroscience and cognitive science include hierarchical models of
learning by imitation (Byrne and Russon [2]).
3 Long-Term Trajectory Planning
We are interested in learning policies that can produce high quality trajectories, where quality is some
global measure of the trajectory (e.g., realistic trajectories as in Figure 1). We first set notation:
• At time t, an agent i is in state s_t^i ∈ S and takes action a_t^i ∈ A. The full state and action are s_t = {s_t^i}_{i ∈ players}, a_t = {a_t^i}_{i ∈ players}. The history of events is h_t = {(s_u, a_u)}_{0≤u<t}.
• Macro policies also use a goal space G, e.g. regions in the court that a player should reach.
[Figure 3 diagram: the state s feeds a raw micro-policy π_raw (producing a raw action u) and a macro-policy π_macro (producing a macro-goal g passed through the transfer φ); together these yield the micro-policy π_micro and the micro-action a.]
Figure 3: The general structure of a 2-level hierarchical policy that consists of 1) a raw micro-policy π_raw, 2) a macro-policy π_macro and 3) a transfer function φ. For clarity, we suppressed the indices i, t in the image. The raw micro-policy learns optimal short-term policies, while the macro-policy is optimized to achieve long-term rewards. The macro-policy outputs a macro-goal g_t^i = π_macro(s_t^i, h_t^i), which guides the raw micro-policy u_t^i = π_raw(s_t^i, h_t^i) in order for the hierarchical policy π_micro to achieve a long-term goal g_t^i. The hierarchical policy π_micro = ψ(u_t^i, φ(g_t^i)) uses a transfer function φ and synthesis function ψ, see (3) and Section 4.
• Let π(s_t, h_t) denote a policy that maps state and history to a distribution over actions P(a_t | s_t, h_t). If π is deterministic, the distribution is peaked around a specific action. We also abuse notation to sometimes refer to π as deterministically taking the most probable action π(s_t, h_t) = argmax_{a ∈ A} P(a | s_t, h_t); this usage should be clear from context.
Our main research question is how to design a policy class that can capture the salient properties of
how expert agents execute trajectories. In particular, we present a general policy class that utilizes
a goal space G to guide its actions to create such trajectory histories. We show in Section 4 how to
instantiate this policy class as a hierarchical network that uses an attention mechanism to combine
macro-goals and micro-actions. In our case study on modeling basketball behavior (Section 5.1), we
train such a policy to imitate expert demonstrations using a large dataset of tracked basketball games.
3.1 Incorporating Macro-Goals
Our main modeling assumption is that a policy should simultaneously optimize
behavior hierarchically on multiple well-separated timescales. We consider
two distinct timescales (macro and micro-level), although our approach could
in principle be generalized to even more timescales. During an episode [t0 , t1 ],
an agent i executes a sequence of micro-actions (a_t^i)_{t≥0} that leads to a macro-goal g_t^i ∈ G. We do not assume that the start and end times of an episode are fixed. For instance, macro-goals can change before they are reached. We finally assume that macro-goals are relatively static on the timescale of the micro-actions, that is: dg_t^i/dt ≪ 1.
Figure 2: Depicting two macro-goals (blue boxes) as an agent moves to the top left.
Figure 2 depicts an example of an agent with two unique macro-goals over a 50-frame trajectory. At every timestep t, the agent executes a micro-action a_t^i, while the macro-goals g_t^i change more slowly.
We model the interaction between a micro-action a_t^i and a macro-goal g_t^i through a raw micro-action u_t^i ∈ A that is independent of the macro-goal. The micro-policy is then defined as:

a_t^i = π_micro(s_t, h_t) = argmax_a P_micro(a | s_t, h_t),  (1)

P_micro(a_t^i | s_t, h_t) = ∫ du dg P(a_t^i | u, g, s_t, h_t) P(u, g | s_t, h_t).  (2)
Here, we model the conditional distribution P(a_t^i | u, g, s_t, h_t) as a non-linear function of u, g:

P(a_t^i | u_t^i, g_t^i, s_t, h_t) = ψ(u_t^i, φ(g_t^i)),  (3)

where φ, ψ are transfer and synthesis functions respectively that we make explicit in Section 4. Note that (3) does not explicitly depend on s_t, h_t: although it is straightforward to generalize, this did not make a significant difference in our experiments. This decomposition is shown in Figure 3 and can be generalized to multiple scales l using multiple macro-goals g^l and transfer functions φ^l.
4 Hierarchical Policy Network
Figure 3 depicts a high-level overview of our hierarchical policy class for generating long-term
spatiotemporal trajectories. Both the raw micro-policy and macro-policy are instantiated as recurrent
convolutional neural networks, and the raw action and macro-goals are combined via an attention
mechanism, which we elaborate on below.
Discretization and deep neural architecture. In general, when using continuous latent variables
g, learning the model (1) is intractable, and one must resort to approximation methods. We choose
to discretize the state-action and latent spaces. In the basketball setting, a state s_t^i ∈ S is naturally represented as a 1-hot occupancy vector of the basketball court. We then pose goal states g_t^i as
sub-regions of the court that i wants to reach, defined at a coarser resolution than S. As such, we
instantiate the macro and micro-policies as convolutional recurrent neural networks, which can
capture both predictive spatial patterns and non-Markovian temporal dynamics.
Attention mechanism for integrating macro-goals and micro-actions. We model (3) as an attention, i.e. φ computes a softmax density φ(g_t^i) over the output action space A, and ψ is an element-wise (Hadamard) product. Suppressing indices i, t and s, h for clarity, the distribution (3) becomes

φ_k(g) = exp(h_φ(g)_k) / Σ_j exp(h_φ(g)_j),  P(a_k | u, g) ∝ P_raw(u_k | s, h) · φ_k(g),  k = 1, . . . , |A|,  (4)

where h_φ(g) is computed by a neural network that takes P_macro(g) as an input. Intuitively, this structure captures the trade-off between the macro- and raw micro-policy. On the one hand, the raw micro-policy π_raw aims for short-term optimality. On the other hand, the macro-policy π_macro can attend via φ to sequences of actions that lead to a macro-goal and bias the agent towards good long-term behavior. The difference between u and φ(g) thus reflects the trade-off that the hierarchical policy learns between actions that are good for either short-term or long-term goals.
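To make the attention integration concrete, here is a minimal NumPy sketch of eqn. (4). The network h_φ is replaced by a stand-in linear map W_phi, and all names and shapes are illustrative assumptions rather than the paper's implementation:

import numpy as np

def attention_mask(g, W_phi):
    # Softmax attention phi(g) over the discrete action space, eqn. (4).
    scores = W_phi @ g            # stand-in for the learned network h_phi(g)
    scores -= scores.max()        # numerical stability
    e = np.exp(scores)
    return e / e.sum()

def hierarchical_policy(p_raw, g, W_phi):
    # Combine the raw micro-policy and the macro-goal via a Hadamard
    # product, then renormalize to obtain the micro-policy over actions.
    p = p_raw * attention_mask(g, W_phi)
    return p / p.sum()

rng = np.random.default_rng(0)
p_raw = rng.dirichlet(np.ones(9))     # toy raw policy over 9 actions
g = np.eye(4)[2]                      # 1-hot macro-goal (cell 2 of 4)
W_phi = rng.normal(size=(9, 4))       # stand-in weights for h_phi
print(hierarchical_policy(p_raw, g, W_phi))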
Multi-stage learning. Given a set D of sequences of state-action tuples (s_t, â_t), where the â_t are 1-hot labels (omitting the index i for clarity), the hierarchical policy network can be trained via

θ* = argmin_θ Σ_D Σ_{t=1}^T L_t(s_t, h_t, â_t; θ).  (5)

Given the hierarchical structure of our model class, we decompose the loss L_t (omitting the index t):

L(s, h, â; θ) = L_macro(s, h, g; θ) + L_micro(s, h, â; θ) + R(θ),  (6)

L_micro(s, h, â; θ) = Σ_{k=1}^A â_k log [P_raw(u_k | s, h; θ) · φ_k(g; θ)],  (7)
where R(θ) regularizes the model weights θ and k indexes the A discrete action-values. Although we have ground truths â_t for the observable micro-actions, in general we may not have labels for the macro-goals g_t that induce optimal long-term planning. As such, one would have to appeal to separate solution methods to compute the posterior P(g_t | s_t, h_t) which minimizes L_t,macro(s_t, h_t, g_t; θ).
To reduce complexity and given the non-convexity of (7), we instead follow a multi-stage learning approach with a set of weak labels ĝ_t, φ̂_t for the macro-goals g_t and attention masks φ_t = φ(g_t). We assume access to such weak labels and only use them in the initial training phases. Here, we first train the raw micro-policy, macro-policy and attention individually, freezing the other parts of the network. The policies π_micro, π_macro and attention φ can be trained using standard cross-entropy minimization with the labels â_t, ĝ_t and φ̂_t, respectively. In the final stage we fine-tune the entire network on objective (5), using only L_t,micro and R. We found this approach capable of finding a good initialization for fine-tuning and generating high-quality long-term trajectories.¹ Another advantage of this approach is that the network can be trained using gradient descent during all stages.
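As an illustration of the micro-loss (7), the following sketch computes the masked log-likelihood term for a single time step. Array names are ours, and the sign convention (negating for minimization) is an assumption about how (5) is optimized:

import numpy as np

def micro_loss(a_true, p_raw, phi, eps=1e-12):
    # Cross-entropy form of eqn. (7), up to sign convention: the
    # log-probability of the true action under P_raw(u_k|s,h) * phi_k(g).
    return -np.sum(a_true * np.log(p_raw * phi + eps))

# Toy check: the loss falls when the mask agrees with the true action.
a = np.eye(4)[1]
p = np.full(4, 0.25)
print(micro_loss(a, p, np.array([0.1, 0.7, 0.1, 0.1])),   # approx 1.74
      micro_loss(a, p, np.array([0.7, 0.1, 0.1, 0.1])))   # approx 3.69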
5 Case Study on Modeling Basketball Behavior
We applied our approach to modeling basketball behavior data. In particular, we focus on imitating
the players' movements, which is a challenging problem in the spatiotemporal planning setting.
¹ As u_t and φ(g_t) enter symmetrically into the objective (7), it is hypothetically possible that the network converges to a symmetric phase where the predictions u_t and φ(g_t) become identical along the entire trajectory. However, our experiments suggest that our multi-stage learning approach separates timescales well between the micro- and macro-policy and prevents the network from settling in such a redundant symmetric phase.
[Figure 4 diagram: conv/pool/GRU stacks for π_raw and π_macro, batch-norm layers, and fully-connected attention head; output sizes 289 (actions) and 90 (goals).]
Figure 4: Network architecture and hyperparameters of the hierarchical policy network. For clarity, we suppressed the indices i, t in the image. Max-pooling layers (numbers indicate kernel size) with unit stride upsample the sparse tracking data s_t. The policies π_raw, π_macro use a convolutional (kernel size, stride) and GRU memory (number of cells) stack to predict u_t^i and g_t^i. Batch-normalization "bn" (Ioffe and Szegedy [9]) is applied to stabilize training. The output attention φ is implemented by 2 fully-connected layers (number of output units). Finally, the network predicts the final output π_micro(s_t, h_t) = π_raw(s_t, h_t) ⊙ φ(g_t^i).
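As a rough illustration of this layout, the following PyTorch sketch mirrors the overall structure of Figure 4: conv + GRU branches for the raw micro- and macro-policy, and a small fully-connected attention head. Channel counts, kernel sizes and pooling are deliberately shrunk stand-ins of our own; only the output sizes 289 and 90 and the 4 input channels come from the figure:

import torch
import torch.nn as nn

class PolicyBranch(nn.Module):
    # Conv + GRU stack shared in structure by the raw micro- and macro-policy.
    def __init__(self, in_ch, n_out, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)))
        self.gru = nn.GRU(input_size=8 * 4 * 4, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_out)

    def forward(self, frames):                       # frames: (B, T, C, H, W)
        B, T = frames.shape[:2]
        feats = self.conv(frames.flatten(0, 1))      # (B*T, 8, 4, 4)
        out, _ = self.gru(feats.flatten(1).view(B, T, -1))
        return self.fc(out)                          # (B, T, n_out)

class HPN(nn.Module):
    def __init__(self, in_ch=4, n_actions=289, n_goals=90):
        super().__init__()
        self.raw = PolicyBranch(in_ch, n_actions)    # raw micro-policy
        self.macro = PolicyBranch(in_ch, n_goals)    # macro-policy
        self.attn = nn.Sequential(                   # attention head: goals -> mask
            nn.Linear(n_goals, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, frames):
        u = torch.softmax(self.raw(frames), dim=-1)      # P_raw(u_k | s, h)
        g = torch.softmax(self.macro(frames), dim=-1)    # P_macro(g)
        phi = torch.softmax(self.attn(g), dim=-1)        # attention mask phi(g)
        return u * phi                                   # eqn. (4), unnormalized

out = HPN()(torch.randn(2, 5, 4, 32, 32))            # (2, 5, 289)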
5.1 Experimental Setup
We validated the hierarchical policy network (HPN) by learning a movement policy of individual
basketball players that predicts as the micro-action the instantaneous velocity v_t^i = π_micro(s_t, h_t).
Training data. We trained the HPN on a large dataset of tracking data from professional basketball
games (Yue et al. [16]). The dataset consists of possessions of variable length: each possession is a sequence of tracking coordinates s_t^i = (x_t^i, y_t^i) for each player i, recorded at 25 Hz, where one team has continuous possession of the ball. Since possessions last between 50 and 300 frames, we sub-sampled every 4 frames and used a fixed input sequence length of 50 to make training feasible. Spatially, we discretized the left half court using 400 × 380 cells of size 0.25 ft × 0.25 ft. For simplicity, we modeled every player identically using a single policy network. The resulting input data for each possession is grouped into 4 channels: the ball, the player's location, his teammates, and the opposing team. After this pre-processing, we extracted 130,000 tracks for training and 13,000 as a holdout set.
Labels. We extracted micro-action labels v̂_t^i = s_{t+1}^i − s_t^i as 1-hot vectors in a grid of 17 × 17 unit cells. Additionally, we constructed a set of weak macro-labels ĝ_t, φ̂_t by heuristically segmenting each track using its stationary points. The labels ĝ_t were defined as the next stationary point. For φ̂_t, we used 1-hot velocity vectors v_{t,straight}^i along the straight path from the player's location s_t^i to the macro-goal g_t^i. We refer to the supplementary material for additional details.
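A minimal sketch of this weak-labelling heuristic is given below; the speed threshold v_eps and the fallback to the track endpoint are our assumptions, not details from the paper:

import numpy as np

def weak_macro_labels(track, v_eps=0.5):
    # Segment a track at its stationary points (speed below v_eps) and
    # label every frame with the NEXT stationary point as the macro-goal.
    speed = np.linalg.norm(np.diff(track, axis=0), axis=1)
    stationary = np.flatnonzero(speed < v_eps)
    goals = np.empty_like(track)
    for t in range(len(track)):
        nxt = stationary[stationary >= t]
        idx = nxt[0] if nxt.size else len(track) - 1   # fall back to endpoint
        goals[t] = track[idx]
    return goals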
Model hyperparameters. To generate smooth rollouts while sub-sampling every 4 frames, we simultaneously predicted the next 4 micro-actions a_t, . . . , a_{t+3}. A more general approach would model the dependency between look-ahead predictions as well, e.g. P(a_{t+Δ+1} | a_{t+Δ}). However, we found that this variation did not outperform baseline models. We selected a network architecture to balance performance and feasible training-time. The macro- and micro-policy use GRU memory cells (Chung et al. [3]) and a memory-less 2-layer fully-connected network as the transfer function φ, as depicted in Figure 4. We refer to the supplementary material for more details.
Baselines. We compared the HPN against two natural baselines.
1. A policy with only a raw micro-policy and memory (GRU - CNN) and without memory (CNN).
2. A hierarchical policy network H - GRU - CNN - CC without an attention mechanism, which
instead learns the final output from a concatenation of the raw micro- and macro-policy.
Rollout evaluation. To evaluate the quality of our model, we generated rollouts with burn-in period r_0. These are generated by 1) feeding a ground-truth sequence of states h_{0,r_0} = (s_0, . . . , s_{r_0}) to the policy network and 2) for t > r_0, predicting a_t as the mode of the network output (1) and updating the game-state s_t → s_{t+1}, using ground truth locations for the other agents.
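In pseudocode form, the rollout protocol looks as follows; policy is an assumed helper that wraps mode-prediction of (1) and the game-state update for the controlled agent:

import numpy as np

def rollout(policy, truth, r0=20, horizon=30):
    # 1) burn-in: feed r0 ground-truth states to the policy network.
    history = [np.asarray(s) for s in truth[:r0]]
    # 2) extrapolate: take the modal action and update the game state.
    for _ in range(horizon):
        history.append(policy(history))
    return np.stack(history)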
5.2 How Realistic are the Generated Trajectories?
The most holistic way to evaluate the trajectory rollouts is via visual analysis. Simply put, would a
basketball expert find the rollouts by HPN more realistic than those by the baselines? We begin by
first visually analyzing some rollouts, and then present our human preference study results.
(a) HPN rollouts
(b) HPN rollouts
(c) HPN rollouts
(d) HPN (top) and failure case (bottom)
(e) HPN (top) and baseline (bottom)
Figure 5: Rollouts generated by the HPN (columns a, b, c, d) and baselines (column e). Each frame shows an offensive player (dark green), a rollout (blue) track that extrapolates after 20 frames, the offensive team (light green) and defenders (red). Note we do not show the ball, as we did not use semantic basketball features (i.e., "currently has the ball") during training. The HPN rollouts do not memorize training tracks (column a) and display a variety of natural behavior, such as curving, moving towards macro-goals and making sharp turns (c, bottom). We also show a failure case (d, bottom), where the HPN behaves unnaturally by moving along a straight line off the right side of the court; this may be fixable by adding semantic game state information. For comparison, a hierarchical baseline without an attention model produces a straight-line rollout (column e, bottom), whereas the HPN produces a more natural movement curve (column e, top).
Model comparison   | Experts W/T/L | Avg Gain | Non-Experts W/T/L | Avg Gain | All W/T/L | Avg Gain
VS-CNN             | 21/0/4        | 0.68     | 15/9/1            | 0.56     | 21/0/4    | 0.68
VS-GRU-CNN         | 21/0/4        | 0.68     | 18/2/5            | 0.52     | 21/0/4    | 0.68
VS-H-GRU-CNN-CC    | 22/0/3        | 0.76     | 21/0/4            | 0.68     | 21/0/4    | 0.68
VS-GROUND TRUTH    | 11/0/14       | -0.12    | 10/4/11           | -0.04    | 11/0/14   | -0.12
Table 1: Preference study results. We asked basketball experts and knowledgeable non-experts to judge the
relative quality of policy rollouts. We compare HPN with ground truth and 3 baselines: a memory-less (CNN )
and memory-full (GRU-CNN) micro-policy and a hierarchical policy without attention (H-GRU-CNN-CC). For
each of 25 test cases, HPN wins if more judges preferred the HPN rollout over a competitor. Average gain is
the average signed vote (1 for always preferring HPN , and -1 for never preferring). We see that the HPN is
preferred over all baselines (all results against baselines are significant at the 95% confidence level). Moreover,
HPN is competitive with ground truth, indicating that HPN generates realistic trajectories within our rollout
setting. Please see the supplementary material for more details.
Visualization. Figure 5 depicts example rollouts for our HPN approach and one example rollout for
a baseline. Every rollout consists of two parts: 1) an initial ground truth phase from the holdout set
and 2) a continuation by either the HPN or ground truth. We note a few salient properties. First, the
HPN does not memorize tracks, as the rollouts differ from the tracks it has trained on. Second, the
HPN generates rollouts with a high dynamic range, e.g. they have realistic curves, sudden changes of
directions and move over long distances across the court towards macro-goals. For instance, HPN
tracks do not move towards macro-goals in unrealistic straight lines, but often take a curved route,
indicating that the policy balances moving towards macro-goals with short-term responses to the
current state (see also Figure 6b). In contrast, the baseline model often generates more constrained
behavior, such as moving in straight lines or remaining stationary for long periods of time.
Human preference study. Our primary empirical result is a preference study eliciting judgments on
the relative quality of rollout trajectories between HPN and baselines or ground truth. We recruited
seven experts (professional sports analysts) and eight knowledgeable non-experts (e.g., college
basketball players) as judges.
(a) Predicted distributions for attention masks φ(g) (left column), velocity (micro-action) π_micro (middle) and weighted velocity φ(g) ⊙ π_micro (right) for basketball players. The center corresponds to 0 velocity.
(b) Rollout tracks and predicted macro-goals g_t (blue boxes). The HPN starts the rollout after 20 frames. Macro-goal box intensity corresponds to relative prediction frequency during the trajectory.
Figure 6: Left: Visualizing how the attention mask generated from the macro-policy interacts with the micro-policy π_micro. Row 1 and 2: the micro-policy π_micro decides to stay stationary, but the attention φ goes to the left. The weighted result φ ⊙ π_micro is to move to the left, with a magnitude that is the average. Row 3) π_micro wants to go straight down, while φ boosts the velocity so the agent bends to the bottom-left. Row 4) π_micro goes straight up, while φ goes left: the agent goes to the top-left. Row 5) π_micro remains stationary, but φ prefers to move in any direction. As a result, the agent moves down. Right: The HPN dynamically predicts macro-goals and guides the micro-policy in order to reach them. The macro-goal predictions are stable over a large number of timesteps. The HPN sometimes predicts inconsistent macro-goals. For instance, in the bottom right frame, the agent moves to the top-left, but still predicts the macro-goal to be in the bottom-left sometimes.
Because all the learned policies perform better with a "burn-in" period, we first animated with the ground truth for 20 frames (after subsampling), and then extrapolated with a policy for 30 frames. During extrapolation, the other nine players do not animate.² For each test case, the judges were shown an animation of two rollout extrapolations of a specific player's movement: one generated by the HPN, the other by a baseline or ground truth. The judges then chose which rollout looked more realistic. Please see the supplementary material for details of the study.
Table 1 shows the preference study results. We tested 25 scenarios (some corresponding to scenarios
in Figure 6b). HPN won the vast majority of comparisons against the baselines using expert judges,
with slightly weaker but still very positive results using non-expert judgments. HPN was also
competitive with ground truth. These results suggest that HPN can generate high-quality player
trajectories that are significant improvements over baselines, and approach the ground truth quality in
this comparison setting.
5.3 Analyzing Macro- and Micro-policy Integration
Our model integrates the macro- and micro-policy by converting the macro-goal into an attention mask
on the micro-action output space, which intuitively guides the micro-policy towards the macro-goal.
We now inspect our macro-policy and attention mechanism to verify this behavior.
Figure 6a depicts how the macro-policy π_macro guides the micro-policy π_micro through the attention φ, by attending to the direction in which the agent can reach the predicted macro-goal. The attention model and micro-policy differ in semantic behavior: the attention favors a wider range of velocities and larger magnitudes, while the micro-policy favors smaller velocities.
² We chose this preference study design to focus the qualitative comparison on the plausibility of individual movements (e.g. how players might practice alone), as opposed to strategically coordinated team movements.
Model            | Δ = 0  | Δ = 1  | Δ = 2  | Δ = 3  | Macro-goals ĝ | Attention φ̂
CNN              | 21.8%  | 21.5%  | 21.7%  | 21.5%  | -             | -
GRU-CNN          | 25.8%  | 25.0%  | 24.9%  | 24.4%  | -             | -
H-GRU-CNN-CC     | 31.5%  | 29.9%  | 29.5%  | 29.1%  | 10.1%         | -
H-GRU-CNN-STACK  | 26.9%  | 25.7%  | 25.9%  | 24.9%  | 9.8%          | -
H-GRU-CNN-ATT    | 33.7%  | 31.6%  | 31.0%  | 30.5%  | 10.5%         | -
H-GRU-CNN-AUX    | 31.6%  | 30.7%  | 29.4%  | 28.0%  | 10.8%         | 19.2%
Table 2: Benchmark evaluations. Δ-step look-ahead prediction accuracy for micro-actions a_{t+Δ}^i = π(s_t) on a holdout set, with Δ = 0, 1, 2, 3. H-GRU-CNN-STACK is an HPN where predictions are organized in a feed-forward stack, with π(s_t)_t feeding into π(s_t)_{t+1}. Using attention (H-GRU-CNN-ATT) improves on all baselines in micro-action prediction. All hierarchical models are pre-trained, but not fine-tuned, on macro-goals ĝ. We report prediction accuracy on the weak labels ĝ, φ̂ for hierarchical models. H-GRU-CNN-AUX is an HPN that was trained using φ̂. As φ̂ optimizes for optimal long-term behavior, this lowers the micro-action accuracy.
Figure 6b depicts predicted macro-goals by HPN along with rollout tracks. In general, we see that the
rollouts are guided towards the predicted macro-goals. However, we also observe that the HPN makes
some inconsistent macro-goal predictions, which suggests there is still room for improvement.
5.4 Benchmark Analysis
We finally evaluated the HPN using a number of benchmark experiments, with results shown in Table 2. These
experiments measure quantities that are related to overall quality, albeit not holistically. We first
evaluated the 4-step look-ahead accuracy of the HPN for micro-actions a_{t+Δ}^i, Δ = 0, 1, 2, 3. On this benchmark, the HPN outperforms all baseline policy networks when not using weak labels φ̂ to train the attention mechanism, which suggests that using a hierarchical model can noticeably improve the short-term prediction accuracy over non-hierarchical baselines.
We also report the prediction accuracy on weak labels ĝ, φ̂, although they were only used during pre-training, and we did not fine-tune for accuracy on them. Using weak labels one can tune the network for both long-term and short-term planning, whereas all non-hierarchical baselines are optimized for short-term planning only. Including the weak labels φ̂ can lower the accuracy on short-term prediction, but increases the quality of the on-policy rollouts. This trade-off can be empirically set by tuning the number of weak labels used during pre-training.
6 Limitations and Future Work
We have presented a hierarchical memory network for generating long-term spatiotemporal trajectories. Our approach simultaneously models macro-goals and micro-actions and integrates them
using a novel attention mechanism. We demonstrated significant improvement over non-hierarchical
baselines in a case study on modeling basketball player behavior.
There are several notable limitations to our HPN model. First, we did not consider all aspects of
basketball gameplay, such as passing and shooting. We also modeled all players using a single policy
whereas in reality player behaviors vary (although the variability can be low-dimensional (Yue et al.
[16])). We only modeled offensive players: an interesting direction is modeling defensive players and
integrating adversarial reinforcement learning (Panait and Luke [13]) into our approach. These issues
limited the scope of our preference study, and it would be interesting to consider extended settings.
In order to focus on the HPN model class, we only used the imitation learning setting. More broadly,
many planning problems require collecting training data via exploration (Mnih et al. [11]), which can
be more challenging. One interesting scenario is having two adversarial policies learn to be strategic
against each other through repeatedly game-play in a basketball simulator. Furthermore, in general it
can be difficult to acquire the appropriate weak labels to initialize the macro-policy training.
From a technical perspective, using attention in the output space may be applicable to other architectures. More sophisticated applications may require multiple levels of output attention masking.
Acknowledgments. This research was supported in part by NSF Award #1564330, and a GPU donation (Tesla
K40 and Titan X) by NVIDIA.
References
[1] Aijun Bai, Feng Wu, and Xiaoping Chen. Online planning for large Markov decision processes with hierarchical decomposition. ACM Transactions on Intelligent Systems and Technology (TIST), 6(4):45, 2015.
[2] Richard W. Byrne and Anne E. Russon. Learning by imitation: A hierarchical approach. Behavioral and Brain Sciences, 21(5):667–684, 1998.
[3] Junyoung Chung, Çağlar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2067–2075, 2015.
[4] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2980–2988. Curran Associates, Inc., 2015.
[5] Jonathan St B. T. Evans and Keith E. Stanovich. Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3):223–241, May 2013. ISSN 1745-6916, 1745-6924. doi: 10.1177/1745691612460685.
[6] Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19(1):399–468, October 2003. ISSN 1076-9757.
[7] Matthew Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, May 2016.
[8] Ruijie He, Emma Brunskill, and Nicholas Roy. PUMA: Planning under uncertainty with macro-actions. In Twenty-Fourth AAAI Conference on Artificial Intelligence, July 2010.
[9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pages 448–456, 2015.
[10] George Konidaris, Scott Kuindersma, Roderic Grupen, and Andrew Barto. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research, 31(3):360–375, March 2012. ISSN 0278-3649, 1741-3176. doi: 10.1177/0278364911428653.
[11] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, February 2015. ISSN 0028-0836. doi: 10.1038/nature14236.
[12] Katharina Muelling, Abdeslam Boularias, Betty Mohler, Bernhard Schölkopf, and Jan Peters. Learning strategies in table tennis using inverse reinforcement learning. Biological Cybernetics, 108(5):603–619, October 2014. ISSN 1432-0770. doi: 10.1007/s00422-014-0599-1.
[13] Liviu Panait and Sean Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, 2005.
[14] Richard S. Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, August 1999. ISSN 0004-3702. doi: 10.1016/S0004-3702(99)00052-1.
[15] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv:1502.03044 [cs], February 2015.
[16] Yisong Yue, Patrick Lucey, Peter Carr, Alina Bialkowski, and Iain Matthews. Learning fine-grained spatial models for dynamic sports play prediction. In IEEE International Conference on Data Mining (ICDM), 2014.
[17] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.
Completely random measures for modelling
block-structured sparse networks
Tue Herlau
Mikkel N. Schmidt Morten Mørup
DTU Compute
Technical University of Denmark
Richard Petersens plads 31,
2800 Lyngby, Denmark
{tuhe,mns,mmor}@dtu.dk
Abstract
Statistical methods for network data often parameterize the edge-probability by
attributing latent traits such as block structure to the vertices and assume exchangeability in the sense of the Aldous-Hoover representation theorem. These
assumptions are however incompatible with traits found in real-world networks
such as a power-law degree-distribution. Recently, Caron & Fox (2014) proposed
the use of a different notion of exchangeability after Kallenberg (2005) and obtained a network model which permits edge-inhomogeneity, such as a power-law
degree-distribution whilst retaining desirable statistical properties. However, this
model does not capture latent vertex traits such as block-structure. In this work we
re-introduce the use of block-structure for network models obeying Kallenberg's
notion of exchangeability and thereby obtain a collapsed model which both admits
the inference of block-structure and edge inhomogeneity. We derive a simple
expression for the likelihood and an efficient sampling method. The obtained
model is not significantly more difficult to implement than existing approaches to
block-modelling and performs well on real network datasets.
1 Introduction
Two phenomena are generally considered important for modelling complex networks. The first is
community or block structure, where the vertices are partitioned into non-overlapping blocks (denoted
by ` = 1, . . . , K in the following) and the probability two vertices i, j are connected depends on
their assignment to blocks:
P(Edge between vertex i and j) = η_ℓm,

where η_ℓm ∈ [0, 1] is a number only depending on the blocks ℓ, m to which i, j respectively belong.
Stochastic block models (SBMs) were first proposed by White et al. (1976) and today form the basic
starting point for many important link-prediction methods such as the infinite relational model (Xu
et al., 2006; Kemp et al., 2006).
While block-structure is important for link prediction, the degree distribution of edges in complex
networks is often found to follow a power-law (Newman et al., 2001; Strogatz, 2001). This realization
has led to many important models of network growth, such as the preferential attachment (PA) model
of Barabási (1999).
Models such as the IRM and the PA model have different goals. The PA model attempts to explain
how network structure, such as the degree distribution, follows from simple rules of network growth
and is not suitable for link prediction. In contrast, the IRM aims to discover latent block-structure
and predict edges ? tasks for which the PA model is unsuitable. In the following, network model
will refer to a model with the same aims as the IRM, most notably prediction of missing edges.
1.1
Exchangeability
Invariance is an important theme in Bayesian approaches to network modelling. For network data, the
invariance which has received most attention is infinite exchangeability of random arrays. Suppose
we represent the network as a subset of an infinite matrix A = (A_ij)_{i,j≥1} such that A_ij is the number of edges between vertex i and j (we will allow multi- and self-edges in the following). Infinite exchangeability of the random array (A_ij)_{i,j≥1} is the requirement that (Hoover, 1979; Aldous, 1981)

(A_ij)_{i,j≥1} =_d (A_σ(i)σ(j))_{i,j≥1} for all finite permutations σ of ℕ. The distribution of a finite network is
then obtained by marginalization. According to the Aldous-Hoover theorem (Hoover, 1979; Aldous,
1981), an infinite exchangeable network has a representation in terms of a random function, and
furthermore, the number of edges in the network must either scale as the square of the number
of vertices or (with probability 1) be zero (Orbanz & Roy, 2015). Neither of these options are
compatible with a power-law degree distribution and one is faced with the dilemma of giving up
either the power-law distribution or exchangeability. It is the first horn of this dilemma which has
been pursued by much work on Bayesian network modelling (Orbanz & Roy, 2015).
It is, however, possible to substitute the notion of infinite exchangeability in the above sense with
a different definition due to Kallenberg (2005, chapter 9). The new notion retains many important
characteristics of the former, including a powerful representation theorem parallelling the AldousHoover theorem but expressed in terms of a random set. Important progress in exploring network
models based on this representation has recently been made by Caron & Fox (2014), who demonstrate
the ability to model power-law behaviour of the degree distribution and construct an efficient sampler
for parameter inference. The reader is encouraged to consult this reference for more details.
In this paper, we will apply the ideas of Caron & Fox (2014) to block-structured network data,
thereby obtaining a model based on the same structural invariance, yet able to capture both block-structure and edge inhomogeneity. The contribution of this work is fourfold: (i) we propose a general extension of sparse networks to allow latent structure, (ii) using this construction we implement a block-structured network model which obeys Kallenberg's notion of exchangeability, (iii) we derive a collapsed expression of the posterior distribution which allows efficient sampling, and (iv) we demonstrate
that the resulting model offers superior link prediction compared to both standard block-modelling
and the model of Caron & Fox (2014).
It should be noted that independently of this manuscript, Veitch & Roy (2015) introduced a construction similar to our eq. (4) but focusing on the statistical properties of this type of random process,
whereas this manuscript focuses on the practical implementation of network models based on the
construction.
2 Methods
Before introducing the full method we will describe the construction informally, omitting details
relating to completely random measures.
2.1 A simple approach to sparse networks
Suppose the vertices in the network are labelled by real numbers in R+ . An edge e (edges are
considered directed and we allow for self-edges) then consists of two numbers (x_e1, x_e2) ∈ R_+², denoted the edge endpoints. A network X of L edges (possibly L = ∞) is simply the collection of points X = ((x_e1, x_e2))_{e=1}^L ⊂ R_+². We adopt the convention that multi-edges imply duplicates in the list of edges. Suppose X is generated by a Poisson process with base measure μ on R_+²:

X ~ PP(μ).  (1)

A finite network X_α can then be obtained by considering the restriction of X to [0, α]²: X_α = X ∩ [0, α]². As an illustration, suppose μ is the Lebesgue measure. The number of edges is then L ~ Poisson(α²) and the edge-endpoints x_e1, x_e2 are i.i.d. on [0, α], simply corresponding to selecting L random points in [0, α]². The edges are indicated by the gray squares in figure 1a and the vertices as circles.
[Figure 1 panels: (a) maximally sparse network; (b) nontrivial network with sociability parameters w_i and p(A_ij) = Poisson(w_i w_j); (c) block-structured network with latent traits z_i ∈ {1, 2, 3} and p(A_ij) = Poisson(w_i w_j η_{z_i z_j}).]
Figure 1: (Left:) A network is generated by randomly selecting points from [0, α]² ⊂ R_+² corresponding to edges (squares) and identifying the unique coordinates with vertices (circles), giving the maximally disconnected graph. (Middle:) The edges are restricted to lie at the intersection of randomly generated gray lines at θ_i, each with a mass/sociability parameter w_i. The probability of selecting an intersection is proportional to w_i w_j, giving a non-trivial network structure. (Right:) Each vertex is assigned a latent trait z_i (the assignment to blocks as indicated by the colors) that modulates the edge probability with a parameter η_ℓm ≥ 0, thus allowing block-structured networks.
Notice the vertices will be distinct with probability 1, and the procedure therefore gives rise to the degenerate but sparse network of 2L vertices and L edges, shown in figure 1a.
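As a quick illustration, the following sketch (window size alpha and all variable names are ours) draws such a degenerate network:

import numpy as np

rng = np.random.default_rng(0)
alpha = 5.0                                   # observation window [0, alpha]^2
L = rng.poisson(alpha ** 2)                   # Poisson(alpha^2) edges
edges = rng.uniform(0.0, alpha, size=(L, 2))  # i.i.d. edge endpoints
# Endpoints are almost surely distinct: 2L vertices, L edges.
print(L, np.unique(edges).size)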
To generate non-trivial networks, the edge-endpoints must coincide with nonzero probability. Similar
to Caron & Fox (2014), suppose the coordinates are restricted to only take a countable number of potential values, θ_1, θ_2, · · · ∈ R_+, and each value has an associated sociability (or mass) parameter w_1, w_2, · · · ∈ [0, ∞[ (we use the shorthand (w_i)_i = (w_i)_{i=1}^∞ for a series). If we define the measure ν = Σ_{i≥1} w_i δ_{θ_i} and let μ = ν ⊗ ν, then generating X_α according to the procedure of eqn. (1), the number of edges L is Poisson(T²)-distributed, where T = ν([0, α]) = Σ_{i=1}^∞ w_i. The position of the edges remains identically distributed, but with probability proportional to w_i w_j of selecting coordinate (θ_i, θ_j). Since the edge-endpoints coincide with non-zero probability, this procedure allows the generation of a non-trivial associative network structure, see figure 1b. With proper choice of (w_i, θ_i)_{i≥1} these networks exhibit many desirable properties, such as a power-law degree distribution and sparsity (Caron & Fox, 2014).
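The following sketch illustrates this discrete construction; the toy gamma weights stand in for draws from the CRM prior introduced in Section 2.3 and are purely illustrative:

import numpy as np

rng = np.random.default_rng(1)
w = rng.gamma(0.3, 1.0, size=50)   # toy sociabilities (stand-in for CRM atoms)
T = w.sum()
L = rng.poisson(T ** 2)            # total number of edges
p = w / T                          # NRM probabilities p_i = w_i / T
i = rng.choice(w.size, size=L, p=p)
j = rng.choice(w.size, size=L, p=p)
A = np.zeros((w.size, w.size), dtype=int)
np.add.at(A, (i, j), 1)            # multigraph counts A_ij
print(A.sum(), L)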
This process can be intuitively extended to block-structured networks, as illustrated in figure 1c.
There, each vertex is assigned a latent trait (i.e. a block assignment), here highlighted by the colors.
We use the symbol z_i ∈ {1, . . . , K} to indicate the assignment of vertex i to one of the K blocks. We can then consider a measure of the form

μ = Σ_{i,j≥1} η_{z_i z_j} w_i w_j δ_{(θ_i, θ_j)} = Σ_{ℓ,m=1}^K η_ℓm ν_ℓ ⊗ ν_m,  (2)

where we have introduced ν_ℓ = Σ_{i: z_i=ℓ} w_i δ_{θ_i}. Defined in this manner, μ is a measure on [0, α]² and η_ℓm parameterizes the interaction strength between community ℓ and m. Notice the number of edges L_ℓm between block ℓ and m is, by basic properties of the Poisson process, distributed as L_ℓm ~ Poisson(η_ℓm T_ℓ T_m), where T_ℓ = ν_ℓ([0, α]). In figure 1c the locations θ_i of the vertices have been artificially ordered according to color for easy visualization. The following section will show the connection between the above construction of eq. (2) and the exchangeable representation due to Kallenberg (2005). However, for greater generality, we will let the latent trait be a general continuous parameter u_i ∈ [0, 1] and later show that block-structured models can be obtained as a special case.
2.2 Exchangeability and point-process network models
Since the networks in the point-set representation are determined by the properties of the measure
μ, invariance (i.e. exchangeability) of random point-set networks is defined as invariance of this random measure. Recall that infinite exchangeability for infinite matrices requires the distribution of the random matrix to be unchanged by permutation of the rows/columns in the network. For a random measure on R_+², the corresponding requirement is that it should be possible to partition R_+ into intervals I_1, I_2, I_3, . . ., permute the intervals, and have the random measure be invariant to this permutation.
[Figure 2 panels: Step 1: Generate candidate vertices, with atoms Σ_{i≥1} w_i δ_{(θ_i, u_i)} ~ CRM(λ_{σ,τ}, R_+ × [0, 1]), where λ_{σ,τ} is the Lévy intensity of a GGP. Step 2: Select graphon f, a piece-wise constant function with (β_ℓ)_{ℓ=1}^K ~ Dirichlet(β_0/K, . . . , β_0/K) and η_ℓm ~ Gamma(ζ_a, ζ_b). Step 3: Form measure μ = Σ_{i,j≥1} w_i w_j f(u_i, u_j) δ_{(θ_i, θ_j)}.]
Figure 2: (Step 1:) The potential vertex locations θ_i, latent traits u_i and sociability parameters w_i are generated using a generalized gamma process. (Step 2:) The interaction of the latent traits f : [0, 1]² → R_+, the graphon, is chosen to be a piece-wise constant function. (Step 3:) Together, these determine the random measure μ which is used to generate the network from a Poisson process.
Formally, a random measure μ on R_+² is then said to be jointly exchangeable if μ ∘ (φ ⊗ φ)^{-1} =_d μ for all measure-preserving transformations φ of R_+. According to Kallenberg (2005, theorem 9.24), this is ensured provided the measure has a representation of the form:

μ = Σ_{i,j≥1} h(ζ, x_i, x_j) δ_{(θ_i, θ_j)},  (3)

where h is a measurable function, ζ is a random variable and {(x_i, θ_i)}_{i≥1} is a unit-rate Poisson process on R_+² (the converse involves five additional terms (Kallenberg, 2005)). In this representation, the locations (θ_i)_i and the parameters (x_i)_i are decoupled; however, we are free to select the random parameters (x_i)_{i≥1} to lie in a more general space than R_+. Specifically, we define
x_i = (u_i, v_i) ∈ [0, 1] × R_+,

with the interpretation that each v_i corresponds to a random mass w_i through a transformation w_i = g(v_i), and each u_i ∈ [0, 1] is a general latent trait of the vertex. (In figure 1 this parameter corresponded to the assignment to blocks.) We then consider the following choice:

h(ζ, x_i, x_j) = f(u_i, u_j) g(v_i) g(v_j),  (4)

where f : [0, 1]² → R_+ is a measurable function playing a similar role as the graphon in the Aldous-Hoover representation, and {(u_i, v_i, θ_i)}_{i≥1} follows a unit-rate Poisson process on [0, 1] × R_+².
To see the connection with the block-structured model, suppose the function f is a piece-wise constant
function

f(u, u′) = Σ_{ℓ,m=1}^K η_ℓm 1_{J_ℓ}(u) 1_{J_m}(u′),

where J_ℓ = [Σ_{m=1}^{ℓ-1} β_m, Σ_{m=1}^{ℓ} β_m[, with Σ_{ℓ=1}^K β_ℓ = 1, β_ℓ > 0, and z_i = ℓ denotes the event 1_{J_ℓ}(u_i) = 1. Notice this choice for f is exactly equivalent to the graphon for the block-structured network
model in the Aldous-Hoover representation (Orbanz & Roy, 2015). The procedure is illustrated
in figure 2. Realizations of networks generated by this process using different values of K can be
obtained using the simulation methods of Caron & Fox (2014) and can be seen in figure 3. Notice the
K = 1, η_11 = 1 case corresponds to their method.
To fully define the method we must first introduce the relevant prior for the measure ν = Σ_{i≥1} w_i δ_{(θ_i, u_i)}. As a prior we will use the generalized gamma process (GGP) (Hougaard, 1986).
In the following section, we will briefly review properties of completely random measures and use
these to derive a simple expression of the posterior.
2.3 Random measures

As a prior for ν we will use completely random measures (CRMs); the reader is referred to (Kallenberg, 2005; Kingman, 1967) for a comprehensive account. Recall first the definition of a CRM. Assume S is a separable complete metric space with the Borel σ-field B(S) (for our purpose S = [0, α]). A random measure ν is a random variable whose values are measures on S. For each measurable set A ∈ B(S), the random measure induces a random variable ν(A), and the random measure ν will be said to be completely random if for any finite collection A_1, . . . , A_n of disjoint measurable sets the random variables ν(A_1), . . . , ν(A_n) are independent. It was shown by Kingman (1967) that the non-trivial part of any random measure ν is discrete almost certainly with a representation

ν = Σ_{i=1}^∞ w_i δ_{θ_i},  (5)

where the sequence of masses and locations (w_i, θ_i)_i (also known as the atoms) is a Poisson random measure on R_+ × S, with mean measure λ known as the Lévy intensity measure. We will consider homogeneous CRMs, where locations are independent, λ(dw, dθ) = λ(dw)λ_θ(dθ), and assume λ_θ is the Lebesgue measure on [0, α].

[Figure 3 panels: K = 1, K = 2, K = 3, K = 4, with k = 188, 537, 689 and 1961 vertices respectively.]
Figure 3: (Top:) Example of four randomly generated networks for K = 1, 2, 3 and 4 using the choice of random measure discussed in section 2.3. The other parameters were fixed at α = 20K, τ = 1, σ = 0.5 and ζ_a = ζ_b = 1. Vertices have been sorted according to their assignment to blocks and sociability parameters. (Bottom:) The same networks as above but applying a random permutation to the edges within each tile. A standard SBM assumes a network structure of this form.
Since the construction as outlined in figure 1c depends on sampling the edge start- and end-points at random from the locations (θ_i)_i, with probability proportional to w_i, the normalized form of eqn. (5) will be of particular interest. Specifically, the chance of selecting a particular location from a random draw is governed by

P = μ/T = Σ_{i=1}^{∞} p_i δ_{θ_i},    p_i = w_i/T,    T = μ(S) = Σ_{i=1}^{∞} w_i,    (6)

which is known as the normalized random measure (NRM) and T is the total mass of the CRM μ (Kingman, 1967). A random draw from a Poisson process based on the CRM can thus be realized by first sampling the number of generated points, L ~ Poisson(T), and then drawing their locations in an i.i.d. manner from the NRM of eqn. (6). The reader is referred to James (2002) for a comprehensive treatment of NRMs.
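In code, this two-step draw might look as follows (a sketch with a truncated, finite set of atoms; the Gamma weights below are stand-ins for illustration, not a proper CRM draw):

```python
import numpy as np

def sample_poisson_process_from_crm(w, theta, rng):
    """Draw L ~ Poisson(T) points and place them i.i.d. at the atom
    locations theta, chosen with probabilities w_i / T (the NRM)."""
    T = w.sum()                          # total mass of the (truncated) CRM
    L = rng.poisson(T)                   # number of generated points
    idx = rng.choice(len(w), size=L, p=w / T)
    return theta[idx]

rng = np.random.default_rng(0)
w = rng.gamma(0.5, 1.0, size=1000)       # placeholder weights
theta = rng.uniform(0.0, 10.0, size=1000)
points = sample_poisson_process_from_crm(w, theta, rng)
```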
With the notation in place, we can provide the final form of the generative process for a network X_α. Suppose the CRM μ (restricted to the region [0, α]) has been generated. Assume z_i = ℓ iff u_i ∈ J_ℓ and define the K thinned measures on [0, α] as

μ_ℓ = Σ_{i: z_i = ℓ} w_i δ_{θ_i},

each with total mass T_ℓ = μ_ℓ([0, α]). By basic properties of CRMs, the thinned measures are also CRMs (Pitman, 2006). The number of points in each tile L_{ℓm} is then Poisson(η_{ℓm} T_ℓ T_m) distributed, and given L_{ℓm} the edge-endpoints (x_{e1}^{ℓ}, x_{e2}^{m}) between atoms in measures ℓ and m can then be drawn from the corresponding NRM. The generative process is then simply:

(β_ℓ)_{ℓ=1}^{K} ~ Dirichlet(α₀/K, ..., α₀/K),    μ ~ CRM(ν, U_{[0,1]} × U_{R₊}),
η_{ℓm} ~ iid Gamma(γ_a, γ_b),    L_{ℓm} ~ iid Poisson(η_{ℓm} T_ℓ T_m),
for e = 1, ..., L_{ℓm}:  x_{e1}^{ℓ} ~ iid Categorical((w_i/T_ℓ)_{z_i=ℓ}),  x_{e2}^{m} ~ iid Categorical((w_j/T_m)_{z_j=m}).
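A minimal simulation sketch of this generative process, assuming a finite set of CRM weights is already available (exact GGP weight simulation requires, e.g., the adaptive thinning of Caron & Fox (2014); the Gamma weights and all function names below are ours, purely for illustration):

```python
import numpy as np

def simulate_crmsbm(K, alpha0, gamma_a, gamma_b, weights, rng):
    """Sketch of the CRMSBM generative process given CRM weights w_i."""
    n_atoms = len(weights)
    beta = rng.dirichlet(np.full(K, alpha0 / K))          # block probabilities
    z = rng.choice(K, size=n_atoms, p=beta)               # z_i = l iff u_i in J_l
    eta = rng.gamma(gamma_a, 1.0 / gamma_b, size=(K, K))  # block interactions
    T = np.array([weights[z == l].sum() for l in range(K)])  # thinned masses
    edges = []
    for l in range(K):
        for m in range(K):
            L_lm = rng.poisson(eta[l, m] * T[l] * T[m])   # points per tile
            if L_lm == 0:
                continue
            src = rng.choice(np.where(z == l)[0], size=L_lm,
                             p=weights[z == l] / T[l])    # draws from NRM of mu_l
            dst = rng.choice(np.where(z == m)[0], size=L_lm,
                             p=weights[z == m] / T[m])    # draws from NRM of mu_m
            edges.extend(zip(src, dst))
    return z, eta, edges

rng = np.random.default_rng(1)
w = rng.gamma(0.2, 1.0, size=2000)   # placeholder weights, not a true GGP draw
z, eta, edges = simulate_crmsbm(K=3, alpha0=3.0, gamma_a=1.0, gamma_b=1.0,
                                weights=w, rng=rng)
```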
In the following we will use the generalized gamma process (GGP) as the choice of Lévy intensity measure (James, 2002). The GGP is parameterized with two parameters σ, τ and has the functional form

ρ_{σ,τ}(dw) = (1/Γ(1−σ)) w^{−1−σ} e^{−τw} dw.

The parameters (σ, τ) will be restricted to lie in the region ]0, 1[ × [0, ∞[ as in (Caron & Fox, 2014). In conjunction with α we thus obtain three parameters (α, σ, τ) which fully describe the CRM and the induced partition structure.
2.4 Posterior distribution
In order to define a sampling procedure for the CRMSBM we must first characterize the posterior distribution. In Caron & Fox (2014) this was calculated using a specially tailored version of Palm's formula. In this work we will use a counting argument inspired by Pitman (2003, eqn. (32)) and a reparameterization to collapse the weight parameters (w_i)_{i≥1}, to obtain a fairly simple analytical expression which is amenable to standard sampling procedures. The full derivation is, however, somewhat lengthy and is included in the supplementary material.

First notice the distribution of the total mass T_ℓ of each of the thinned random measures μ_ℓ is a tilted σ-stable random variable (Pitman, 2006). If we introduce α_ℓ ≡ β_ℓ α, its density g_{α_ℓ,σ,τ} may be written as

g_{α,σ,τ}(t) = λ^{−1/σ} f_σ(t λ^{−1/σ}) φ_θ(t λ^{−1/σ}),

where φ_θ(t) = e^{θ^σ − θt}, λ = α/σ, θ = τ λ^{1/σ}, and f_σ is the density of a σ-stable random variable. See Devroye & James (2014) for more details. According to Zolotarev's integral representation, the function f_σ has the following form (Zolotarev, 1964)

f_σ(x) = (σ / (π(1−σ))) x^{−1/(1−σ)} ∫₀^π A(σ, u) e^{−A(σ,u) x^{−σ/(1−σ)}} du,
A(σ, u) = [ sin(σu)^σ sin((1−σ)u)^{1−σ} / sin(u) ]^{1/(1−σ)}.    (7)
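Under the reconstruction of eqn. (7) above, f_σ can be evaluated by direct quadrature; the sketch below (our own) checks it against the closed form available at σ = 1/2:

```python
import numpy as np
from scipy.integrate import quad

def stable_density(x, sigma):
    """sigma-stable density f_sigma via Zolotarev's integral representation.
    Direct quadrature, so expect numerical trouble for sigma near 0 or 1
    and for very small x."""
    def A(u):
        return (np.sin(sigma * u) ** sigma
                * np.sin((1.0 - sigma) * u) ** (1.0 - sigma)
                / np.sin(u)) ** (1.0 / (1.0 - sigma))
    pre = sigma / (np.pi * (1.0 - sigma)) * x ** (-1.0 / (1.0 - sigma))
    integrand = lambda u: A(u) * np.exp(-A(u) * x ** (-sigma / (1.0 - sigma)))
    val, _ = quad(integrand, 0.0, np.pi)
    return pre * val

# sanity check: f_{1/2}(x) = x^{-3/2} exp(-1/(4x)) / (2 sqrt(pi))
x = 1.3
closed = x ** -1.5 * np.exp(-1.0 / (4.0 * x)) / (2.0 * np.sqrt(np.pi))
print(stable_density(x, 0.5), closed)
```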
Since not all potential vertices (i.e. terms w_i δ_{θ_i} in μ) will have edges attached to them, it is useful to introduce a variable which encapsulates this distinction. We therefore define the variable z̄_i ∈ {0, 1, ..., K} with the definition

z̄_i = z_i if there exists (x, y) ∈ X_α s.t. θ_i ∈ {x, y}, and z̄_i = 0 otherwise.
In addition, suppose for each measure μ_ℓ the end-points of the edges associated with this measure select k_ℓ = |{i : z̄_i = ℓ}| unique atoms, and k = Σ_{ℓ=1}^{K} k_ℓ is the total number of vertices in the network. Next, we consider a specific network (A_{ij})_{i,j=1}^{k} and assume it is labelled such that atom (w_i, θ_i) corresponds to a particular vertex i in the network. We also define n_i = Σ_j (A_{ij} + A_{ji}) as the number of edge-endpoints that select atom i, n_ℓ = Σ_{i: z̄_i=ℓ} n_i as the aggregated edge-endpoints that select measure μ_ℓ, and n_{ℓm} = Σ_{z̄_i=ℓ, z̄_j=m} A_{ij} as the edges between measures μ_ℓ and μ_m. The posterior distribution is then

P(A, (z_i)_i, η, β, (α_ℓ, s_ℓ, t_ℓ)_ℓ) = [ Γ(α₀) Π_{ℓ=1}^{K} β_ℓ^{α₀/K − 1} E_ℓ ] / [ Γ(α₀/K)^K α^{−α₀} Π_{ij} A_{ij}! ] × Π_{ℓm} G(γ_a + n_{ℓm}, γ_b + T_ℓ T_m) / G(γ_a, γ_b),    (8)
where we have introduced

E_ℓ = [ σ^{k_ℓ} s_ℓ^{n_ℓ − k_ℓ σ − 1} e^{−τ s_ℓ} / Γ(n_ℓ − k_ℓ σ) ] g_{α_ℓ,σ,τ}(T_ℓ − s_ℓ) Π_{z̄_i=ℓ} (1 − σ)_{n_i − 1},

and s_ℓ = Σ_{i: z̄_i=ℓ} w_i is the mass of the "occupied" atoms in the measure μ_ℓ. The posterior distribution can be seen as the product of K partition functions corresponding to the GGP, multiplied by the K² interaction factors involving the function G(a, b) = Γ(a) b^{−a}, and corresponding to the interaction between the measures according to the block structure assumption.
Note that the η = 1 case, corresponding to a collapsed version of Caron & Fox (2014), can be obtained by taking the limit γ_a = γ_b → ∞, in which case G(γ_a + n, γ_b + T)/G(γ_a, γ_b) → e^{−T}. When discussing the K = 1 case, we will assume this limit has been taken.
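This limit is easy to check numerically; the following sketch (our own, using log-Gamma functions for stability) evaluates log[G(γ + n, γ + T)/G(γ, γ)] for growing γ:

```python
import numpy as np
from scipy.special import gammaln

def log_G_ratio(gamma, n, T):
    """log of G(gamma + n, gamma + T) / G(gamma, gamma), G(a, b) = Gamma(a) b^{-a}."""
    return (gammaln(gamma + n) - gammaln(gamma)
            - (gamma + n) * np.log(gamma + T) + gamma * np.log(gamma))

n, T = 7, 2.3
for gamma in [1e2, 1e4, 1e6]:
    print(gamma, log_G_ratio(gamma, n, T), -T)  # approaches -T as gamma grows
```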
2.5 Inference
Sampling the expression eqn. (8) requires three types of sampling updates: (i) the sequence of block-assignments (z_i)_i must be updated, (ii) in the simulations we will consider binary networks and we will therefore need to both impute the integer-valued counts (if A_{ij} > 0), as well as missing values in the network, and (iii) both the parameters associated with the random measure, σ and τ, as well as the remaining variables associated with each expression E_ℓ must be updated.

All terms, except the densities g_{α,σ,τ}, are amenable to standard sampling techniques. We opted for the approach of Lomelí et al. (2014), in which u in Zolotarev's integral representation (eqn. 7) is considered an auxiliary parameter. The full inference procedure can be found in the supplementary material; however, the main steps are:¹

Update of (z_i)_i: For each ℓ, impute (w_i)_{z̄_i=ℓ} once per sweep (see supplementary for details), and then iterate over i and update each z_i using a Gibbs sweep from the likelihood. The Gibbs sweep is no more costly than that of a standard SBM.

Update of A: Impute (η_{ℓm})_{ℓm} and (w_i)_i once per sweep (see supplementary for details), and then for each (ij) such that the edge is either unobserved or must be imputed (A_{ij} ≥ 1), generate a candidate a ~ Poisson(η_{ℓm} w_i w_j). Then, if the edge is unobserved, simply set A_{ij} = a; otherwise, if the edge is observed and a = 0, reject the update.

Update of σ, τ: For ℓ = 1, ..., K, introduce u_ℓ corresponding to u in Zolotarev's integral representation (eqn. 7) and let t_ℓ = T_ℓ − s_ℓ. Update the four variables in ξ_ℓ = (α_ℓ, u_ℓ, s_ℓ, t_ℓ) and σ, τ using random-walk Metropolis-Hastings updates.
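The random-walk Metropolis-Hastings updates in the last step can be sketched as follows; this is a generic log-scale update for a positive scalar (a logit transform would be the analogue for σ ∈ (0, 1)), not the paper's exact implementation, and the toy target is ours:

```python
import numpy as np

def rw_metropolis_step(x, logpost, step, rng):
    """One random-walk Metropolis-Hastings update of a positive scalar on the
    log scale; the log-Jacobian of the transform enters the acceptance ratio."""
    y = np.log(x)
    y_new = y + step * rng.standard_normal()
    x_new = np.exp(y_new)
    # target in y-space: logpost(x) + log|dx/dy| = logpost(x) + y
    log_accept = (logpost(x_new) + y_new) - (logpost(x) + y)
    return x_new if np.log(rng.uniform()) < log_accept else x

# toy target: tau with a Gamma(2, 1) prior and a flat likelihood
rng = np.random.default_rng(2)
logpost = lambda t: np.log(t) - t      # log Gamma(2,1) density up to a constant
tau = 1.0
for _ in range(100):
    tau = rw_metropolis_step(tau, logpost, step=0.3, rng=rng)
```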
In terms of computational cost, the inference procedure is of the same order as the SBM, albeit with higher constants due to the overall complexity of the likelihood and because the parameters (α_ℓ, u_ℓ, s_ℓ, t_ℓ) must be sampled for each CRM. In Caron & Fox (2014), the parameters (w_i)_{i≥1} were sampled using Hamiltonian Monte Carlo, whereas herein they are collapsed and re-imputed.

The parameters α_ℓ and σ, τ are important for determining the sparsity and power-law properties of the network model (Caron & Fox, 2014). To investigate convergence of the sampler for these parameters, we generated a single network problem using α = 25, σ = 0.5, τ = 2 and evaluated 12 samplers with K = 1 on the problem. Autocorrelation plots (mean and standard deviation computed over 12 restarts) can be seen in figure 4a. All parameters mix; however, the different parameters have different mixing times, with u in particular being affected by excursions. This indicates many sampling updates of ξ_ℓ are required to explore the state space sufficiently, and we therefore applied 50 updates of ξ_ℓ for each update of (z_i)_i and A_{ij}. Additional validation of the sampling procedure can be found in the supplementary material.
3 Experiments
The proposed method was evaluated on 11 network datasets (a description of how the datasets were obtained and prepared can be found in the supplementary material) using K = 200 in the truncated stick-breaking representation. As a criterion of evaluation we choose the AUC score on held-out edges, i.e. predicting the presence or absence of unobserved edges using the imputation method described in the previous section. All networks were initially processed by thresholding at 0, and vertices with zero edges were removed. A fraction of 5% of the edges was removed and considered as held-out data.
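A minimal sketch of this evaluation protocol (assuming a binary adjacency matrix and per-edge posterior link probabilities; the array names and toy data are ours, purely for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
A = (rng.uniform(size=(50, 50)) < 0.1).astype(int)   # stand-in binary network
mask = rng.uniform(size=A.shape) < 0.05              # 5% held-out entries
p_link = rng.uniform(size=A.shape)                   # stand-in posterior link probs
print("held-out AUC:", roc_auc_score(A[mask], p_link[mask]))
```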
To examine the effect of using blocks, we compared the method against the method of Caron & Fox (2014) (CRM) (corresponding to η_{ℓm} = 1 and K = 1), a standard block-structured model with Poisson observations (pIRM) (Kemp et al., 2006), and the degree-corrected stochastic block model (DCSBM) (Herlau et al., 2014). The latter allows both block-structure and degree-heterogeneity but it is not exchangeable. More details on the simulations and methods are found in the supplementary material.

The pIRM was selected since it is the closest block-structured model to the CRMSBM without degree-correction. This allows us to determine the relative benefit of inferring the degree-distribution compared to only the block-structure. For the priors we selected uniform priors for α, σ, τ and a Gamma(2, 1) prior for α₀, γ_a, γ_b. Similar choices were made for the other models.

¹ Code available at http://people.compute.dtu.dk/tuhe/crmsbm.
Figure 4: (Left:) Autocorrelation plots of the parameters α, σ, τ, s, t and u for a K = 1 network drawn from the prior distribution using α = 25, σ = 0.5 and τ = 2. The plots were obtained by evaluating the proposed sampling procedure for 10⁶ iterations and the shaded region indicates the standard deviation obtained over 12 re-runs. The simulation indicates reasonable mixing for all parameters, with u being the most affected by excursions. (Right:) AUC scores on held-out edges for the selected methods (CRMSBM, DCSBM, pIRM, CRM; averaged over 4 restarts) on the 11 network datasets (NIPS, Netsci, Yeast, Haverford, Hagmann, Caltech, Reed, SciMet, SmaGri, Simmons, Swarthmore). For the same number of blocks, the CRMSBM offers good link-prediction performance compared to the method of Caron & Fox (2014) (CRM), an SBM with Poisson observations (pIRM) and the degree-corrected SBM (DCSBM) (Herlau et al., 2014). Additional information is found in the supplementary material.
All methods were evaluated for T = 2 000 iterations, and the latter half of the chains was used for link prediction. We used 4 random selections of held-out edges per network to obtain the results seen in figure 4b (the same sets of held-out edges were used for all methods). It is evident that block structure is crucial to obtain good link-prediction performance. For the block-structured methods, the results indicate additional benefits from using models which permit degree-heterogeneity on most networks, except the Hagmann brain-connectivity graph. This result is possibly explained by the Hagmann graph having little edge-inhomogeneity. Comparing the CRMSBM and the DCSBM, these models perform either on par with each other or with a slight advantage to the CRMSBM.
4 Discussion and Conclusion
Models of networks based on the CRM representation of Kallenberg (2005) offer one of the most important new ideas in statistical modelling of networks in recent years. To our knowledge Caron and Fox (2014) were the first to realize the benefits of this modelling approach, describe its statistical properties and provide an efficient sampling procedure.

The degree distribution of a network is only one of several important characteristics of a complex network. In this work we have examined how the ideas presented in Caron and Fox (2014) can be applied to a simple block-structured network model to obtain a model which admits block structure and degree correction. Our approach is a fairly straightforward generalization of the methods of Caron and Fox (2014). However, we have opted to explicitly represent the density of the total mass g_{α_ℓ,σ,τ} and integrate out the sociability parameters (w_i)_i, thereby reducing the number of parameters associated with the CRM from the order of vertices to the order of blocks.

The resulting model has the increased flexibility of being able to control the degree distribution within each block. In practice, results of the model on 11 real-world datasets indicate that this flexibility offers benefits over purely block-structured approaches to link prediction for most networks, as well as potential benefits over alternative approaches to modelling block structure and degree-heterogeneity. The results strongly indicate that structural assumptions (such as block structure) are important to obtain reasonable link prediction.

Block-structured network modelling is in turn the simplest structural assumption for block-modelling. The extension of the method of Caron and Fox (2014) to overlapping blocks, possibly using the dependent random measures of Chen et al. (2013), appears fairly straightforward and should potentially offer a generalization of overlapping block models.
Acknowledgments
This project was funded by the Lundbeck Foundation (grant nr. R105-9813).
References
Aldous, David J. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581–598, 1981.
Barabási, Albert-László. Emergence of Scaling in Random Networks. Science, 286(5439):509–512, October 1999. ISSN 00368075. doi: 10.1126/science.286.5439.509.
Caron, Francois and Fox, Emily B. Bayesian nonparametric models of sparse and exchangeable random graphs. arXiv preprint arXiv:1401.1137, 2014.
Chen, Changyou, Rao, Vinayak, Buntine, Wray, and Teh, Yee Whye. Dependent normalized random measures. In Proceedings of The 30th International Conference on Machine Learning, pp. 969–977, 2013.
Devroye, Luc and James, Lancelot. On simulation and properties of the stable law. Statistical Methods & Applications, 23(3):307–343, 2014.
Herlau, Tue, Schmidt, Mikkel N, and Mørup, Morten. Infinite-degree-corrected stochastic block model. Phys. Rev. E, 90:032819, Sep 2014. doi: 10.1103/PhysRevE.90.032819.
Hoover, Douglas N. Relations on probability spaces and arrays of random variables. Preprint, Institute for Advanced Study, Princeton, NJ, 2, 1979.
Hougaard, Philip. Survival models for heterogeneous populations derived from stable distributions. Biometrika, 73(2):387–396, 1986.
James, Lancelot F. Poisson process partition calculus with applications to exchangeable models and Bayesian nonparametrics. arXiv preprint math/0205093, 2002.
Kallenberg, Olaf. Probabilistic Symmetries and Invariance Principles. Number v. 10 in Applied Probability. Springer, 2005. ISBN 9780387251158.
Kemp, Charles, Tenenbaum, Joshua B, Griffiths, Thomas L, Yamada, Takeshi, and Ueda, Naonori. Learning systems of concepts with an infinite relational model. In AAAI, volume 3, pp. 5, 2006.
Kingman, John. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
Lomelí, María, Favaro, Stefano, and Teh, Yee Whye. A marginal sampler for σ-stable Poisson-Kingman mixture models. arXiv preprint arXiv:1407.4211, 2014.
Newman, M. E. J., Strogatz, S. H., and Watts, D. J. Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64(2), July 2001. ISSN 1063-651X.
Orbanz, Peter and Roy, Daniel M. Bayesian models of graphs, arrays and other exchangeable random structures. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 37(2):437–461, 2015.
Pitman, Jim. Poisson-Kingman partitions. Lecture Notes-Monograph Series, pp. 1–34, 2003.
Pitman, Jim. Combinatorial Stochastic Processes: École d'Été de Probabilités de Saint-Flour XXXII-2002. Springer, 2006.
Strogatz, Steven H. Exploring complex networks. Nature, 410(6825):268–276, 2001.
Veitch, Victor and Roy, Daniel M. The Class of Random Graphs Arising from Exchangeable Random Measures. arXiv e-prints, December 2015.
White, Harrison C, Boorman, Scott A, and Breiger, Ronald L. Social structure from multiple networks. I. Blockmodels of roles and positions. American Journal of Sociology, pp. 730–780, 1976.
Xu, Zhao, Tresp, Volker, Yu, Kai, and Kriegel, Hans-Peter. Infinite hidden relational models. In Proceedings of the 22nd International Conference on Uncertainty in Artificial Intelligence (UAI 2006), 2006.
Zolotarev, Vladimir Mikhailovich. On the representation of stable laws by integrals. Trudy Matematicheskogo Instituta im. V. A. Steklova, 71:46–50, 1964.
Scaled Least Squares Estimator for GLMs
in Large-Scale Problems
Murat A. Erdogdu
Department of Statistics, Stanford University
erdogdu@stanford.edu

Mohsen Bayati
Graduate School of Business, Stanford University
bayati@stanford.edu

Lee H. Dicker
Department of Statistics and Biostatistics, Rutgers University and Amazon*
ldicker@business.rutgers.edu
Abstract
We study the problem of efficiently estimating the coefficients of generalized linear models (GLMs) in the large-scale setting where the number of observations n is much larger than the number of predictors p, i.e. n ≫ p ≫ 1. We show that in GLMs with random (not necessarily Gaussian) design, the GLM coefficients are approximately proportional to the corresponding ordinary least squares (OLS) coefficients. Using this relation, we design an algorithm that achieves the same accuracy as the maximum likelihood estimator (MLE) through iterations that attain up to a cubic convergence rate, and that are cheaper than any batch optimization algorithm by at least a factor of O(p). We provide theoretical guarantees for our algorithm, and analyze the convergence behavior in terms of data dimensions. Finally, we demonstrate the performance of our algorithm through extensive numerical studies on large-scale real and synthetic datasets, and show that it achieves the highest performance compared to several other widely used optimization algorithms.
1 Introduction
We consider the problem of efficiently estimating the coefficients of generalized linear models (GLMs) when the number of observations n is much larger than the dimension of the coefficient vector p (n ≫ p ≫ 1). GLMs play a crucial role in numerous machine learning and statistics problems, and provide a miscellaneous framework for many regression and classification tasks. Celebrated examples include ordinary least squares, logistic regression, multinomial regression and many applications involving graphical models [MN89, WJ08, KF09].

The standard approach to estimating the regression coefficients in a GLM is the maximum likelihood method. Under standard assumptions on the link function, the maximum likelihood estimator (MLE) can be written as the solution to a convex minimization problem [MN89]. Due to the non-linear structure of the MLE problem, the resulting optimization task requires iterative methods. The most commonly used optimization technique for computing the MLE is the Newton-Raphson method, which may be viewed as a reweighted least squares algorithm [MN89]. This method uses a second order approximation to benefit from the curvature of the log-likelihood and achieves locally quadratic convergence. A drawback of this approach is its excessive per-iteration cost of O(np²). To remedy this, Hessian-free Krylov sub-space based methods such as conjugate gradient and minimal residual are used, but the resulting direction is imprecise [HS52, PS75, Mar10]. On the other hand, first order

* Work conducted while at Rutgers University

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
approximation yields the gradient descent algorithm, which attains a linear convergence rate with O(np) per-iteration cost. Although its convergence rate is slow compared to that of second order methods, its modest per-iteration cost makes it practical for large-scale problems. In the regime n ≫ p, another popular optimization technique is the class of quasi-Newton methods [Bis95, Nes04], which can attain a per-iteration cost of O(np), and the convergence rate is locally super-linear; a well-known member of this class of methods is the BFGS algorithm [Nes04]. There are recent studies that exploit the special structure of GLMs [Erd15], and achieve near-quadratic convergence with a per-iteration cost of O(np), and an additional cost of covariance estimation.

In this paper, we take an alternative approach to fitting GLMs, based on an identity that is well-known in some areas of statistics, but appears to have received relatively little attention for its computational implications in large scale problems. Let β^glm denote the GLM regression coefficients, and let β^ols denote the corresponding ordinary least squares (OLS) coefficients (this notation will be defined more precisely in Section 2). Then, under certain random predictor (design) models,

β^glm ∝ β^ols.    (1)

For logistic regression with Gaussian design (which is equivalent to Fisher's discriminant analysis), (1) was noted by Fisher in the 1930s [Fis36]; a more general formulation for models with Gaussian design is given in [Bri82]. The relationship (1) suggests that if the constant of proportionality is known, then β^glm can be estimated by computing the OLS estimator, which may be substantially simpler than finding the MLE for the original GLM. Our work in this paper builds on this idea.
Our contributions can be summarized as follows.
Our contributions can be summarized as follows.
1. We show that glm is approximately proportional to ols in random design GLMs, regardless
of the predictor distribution. That is, we prove
1
glm
c ? ols 1 . , for some c 2 R.
p
2. We design a computationally efficient estimator for glm by first estimating the OLS coefficients, and then estimating the proportionality constant c . We refer to the resulting
estimator as the Scaled Least Squares (SLS) estimator and denote it by ? sls . After estimating
the OLS coefficients, the second step of our algorithm involves finding a root of a real valued
function; this can be accomplished using iterative methods with up to a cubic convergence
rate and only O(n) per-iteration cost. This is cheaper than the classical batch methods
mentioned above by at least a factor of O(p).
3. For random design GLMs with sub-Gaussian predictors, we show that
r
1
p
glm
? sls
. +
.
p
n/ max {log(n), p}
1
This bound characterizes the performance of the proposed estimator in terms of data dimensions, and justifies the use of the algorithm in the regime n
p
1.
4. We study the statistical and computational performance of ? sls , and compare it to that of the
MLE (using several well-known implementations), on a variety of large-scale datasets.
The rest of the paper is organized as follows: Section 1.1 surveys the related work and Section 2
introduces the required background and the notation. In Section 3, we provide the intuition behind the
relationship (1), which are based on exact calculations for GLMs with Gaussian design. In Section 4,
we propose our algorithm and discuss its computational properties. Section 5 provides a thorough
comparison between the proposed algorithm and other existing methods. Theoretical results may be
found in Section 6. Finally, we conclude with a brief discussion in Section 7.
1.1 Related work
As mentioned in Section 1, the relationship (1) is well-known in several forms in statistics. Brillinger [Bri82] derived (1) for models with Gaussian predictors. Li & Duan [LD89] studied model misspecification problems in statistics and derived (1) when the predictor distribution has linear conditional means (this is a slight generalization of Gaussian predictors). More recently, Stein's lemma [BEM13] and the relationship (1) have been revisited in the context of compressed sensing [PV15, TAH15], where it has been shown that the standard lasso estimator may be very effective when used in models where the relationship between the expected response and the signal is nonlinear, and the predictors (i.e. the design or sensing matrix) are Gaussian. A common theme for all of this previous work is that it focuses solely on settings where (1) holds exactly and the predictors are Gaussian (or, in [LD89], nearly Gaussian). Two key novelties of the present paper are (i) our focus on the computational benefits following from (1) for large scale problems with n ≫ p ≫ 1; and (ii) our rigorous analysis of models with non-Gaussian predictors, where (1) is shown to be approximately valid.
2 Preliminaries and notation
We assume a random design setting, where the observed data consist of n random iid pairs (y₁, x₁), (y₂, x₂), ..., (yₙ, xₙ); y_i ∈ R is the response variable and x_i = (x_{i1}, ..., x_{ip})^T ∈ R^p is the vector of predictors or covariates. We focus on problems where fitting a GLM is desirable, but we do not need to assume that (y_i, x_i) are actually drawn from the corresponding statistical model (i.e. we allow for model misspecification).

The MLE for GLMs with canonical link is defined by

β̂^mle = argmax_{β∈R^p} (1/n) Σ_{i=1}^{n} [ y_i⟨x_i, β⟩ − ψ(⟨x_i, β⟩) ],    (2)

where ⟨·,·⟩ denotes the Euclidean inner-product on R^p, and ψ is a sufficiently smooth convex function. The GLM coefficients β^glm are defined by taking the population average in (2):

β^glm = argmax_{β∈R^p} E[ y_i⟨x_i, β⟩ − ψ(⟨x_i, β⟩) ].    (3)

While we make no assumptions on ψ beyond smoothness, note that if ψ is the cumulant generating function for y_i | x_i, then we recover the standard GLM with canonical link and regression parameters β^glm [MN89]. Examples of GLMs in this form include logistic regression, with ψ(w) = log{1 + e^w}; Poisson regression, with ψ(w) = e^w; and linear regression (least squares), with ψ(w) = w²/2.

Our objective is to find a computationally efficient estimator for β^glm. The alternative estimator for β^glm proposed in this paper is related to the OLS coefficient vector, which is defined by β^ols := E[x_i x_i^T]^{-1} E[x_i y_i]; the corresponding OLS estimator is β̂^ols := (X^T X)^{-1} X^T y, where X = (x₁, ..., xₙ)^T is the n × p design matrix and y = (y₁, ..., yₙ)^T ∈ R^n.

Additionally, throughout the text we let [m] = {1, 2, ..., m}, for positive integers m, and we denote the size of a set S by |S|. The m-th derivative of a function g : R → R is denoted by g^(m). For a vector u ∈ R^p and an n × p matrix U, we let ∥u∥_q and ∥U∥_q denote the ℓ_q-vector and -operator norms, respectively. If S ⊆ [n], let U_S denote the |S| × p matrix obtained from U by extracting the rows that are indexed by S. For a symmetric matrix M ∈ R^{p×p}, λ_max(M) and λ_min(M) denote the maximum and minimum eigenvalues, respectively. κ_k(M) denotes the condition number of M with respect to the k-norm. We denote by N_q the q-variate normal distribution.
3 OLS is equivalent to GLM up to a scalar factor
To motivate our methodology, we assume in this section that the covariates are multivariate normal, as in [Bri82]. These distributional assumptions will be relaxed in Section 6.

Proposition 1. Assume that the covariates are multivariate normal with mean 0 and covariance matrix Σ = E[x_i x_i^T], i.e. x_i ~ N_p(0, Σ). Then β^glm can be written as

β^glm = c⋆ · β^ols,

where c⋆ ∈ R satisfies the equation 1 = c⋆ E[ψ^(2)(⟨x, β^ols⟩ c⋆)].

Proof of Proposition 1. The optimal point in the optimization problem (3) has to satisfy the following normal equations:

E[y_i x_i] = E[x_i ψ^(1)(⟨x_i, β⟩)].    (4)

Now, denote by φ(x | Σ) the multivariate normal density with mean 0 and covariance matrix Σ. We recall the well-known property of the Gaussian density dφ(x | Σ)/dx = −Σ^{-1} x φ(x | Σ). Using this
Algorithm 1 SLS: Scaled Least Squares Estimator
Input: Data (y_i, x_i)_{i=1}^{n}
Step 1. Compute the least squares estimator β̂^ols and ŷ = X β̂^ols.
  For a sub-sampling based OLS estimator, let S ⊆ [n] be a random subset and take β̂^ols = (|S|^{-1} X_S^T X_S)^{-1} n^{-1} X^T y.
Step 2. Solve the following equation for c ∈ R: 1 = (c/n) Σ_{i=1}^{n} ψ^(2)(c ŷ_i).
  Use Newton's root-finding method:
    Initialize c = 2/Var(y_i);
    Repeat until convergence:
      c ← c − [ c (1/n) Σ_{i=1}^{n} ψ^(2)(c ŷ_i) − 1 ] / [ (1/n) Σ_{i=1}^{n} { ψ^(2)(c ŷ_i) + c ŷ_i ψ^(3)(c ŷ_i) } ].
Output: β̂^sls = c · β̂^ols.
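As an illustration, a minimal sketch of Algorithm 1 for logistic regression, where ψ(w) = log(1 + e^w); the function name and sub-sampling interface are ours, and numerical safeguards are omitted:

```python
import numpy as np

def sls_logistic(X, y, n_sub=None, rng=None, tol=1e-8, max_iter=50):
    """Sketch of Algorithm 1 for logistic regression:
    psi''(w) = s(w)(1 - s(w)) and psi'''(w) = psi''(w)(1 - 2 s(w)),
    with s the sigmoid function."""
    n, p = X.shape
    rng = rng or np.random.default_rng(0)
    # Step 1: (sub-sampled) OLS; covariance from the subsample S,
    # cross-moment from all n points.
    S = rng.choice(n, size=n_sub, replace=False) if n_sub else np.arange(n)
    XS = X[S]
    beta_ols = np.linalg.solve(XS.T @ XS / len(S), X.T @ y / n)
    yhat = X @ beta_ols
    # Step 2: Newton root-finding for f(c) = (c/n) sum_i psi''(c yhat_i) - 1.
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    c = 2.0 / np.var(y)                       # initialization from eqn. (8)
    for _ in range(max_iter):
        s = sig(c * yhat)
        psi2 = s * (1.0 - s)
        psi3 = psi2 * (1.0 - 2.0 * s)
        f = c * psi2.mean() - 1.0
        fprime = np.mean(psi2 + c * yhat * psi3)  # d/dc of c * mean(psi2(c yhat))
        c_new = c - f / fprime
        if abs(c_new - c) < tol:
            c = c_new
            break
        c = c_new
    return c * beta_ols
```

Note that each Newton update touches only the n fitted values ŷ_i, which is where the O(n) per-iteration cost of Step 2 comes from.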
and integration by parts on the right hand side of the above equation, we obtain

E[x_i ψ^(1)(⟨x_i, β⟩)] = ∫ x ψ^(1)(⟨x, β⟩) φ(x | Σ) dx = Σβ E[ψ^(2)(⟨x_i, β⟩)]    (5)

(this is basically Stein's lemma). Combining this with the identity (4), we conclude the proof.
Proposition 1 and its proof provide the main intuition behind our proposed method. Observe that in our derivation, we only worked with the right hand side of the normal equations (4), which does not depend on the response variable y_i. The equivalence holds regardless of the joint distribution of (y_i, x_i), whereas in [Bri82], y_i is assumed to follow a single index model. In Section 6, where we extend the method to non-Gaussian predictors, (5) is generalized via the zero-bias transformations.
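The proportionality in Proposition 1 is easy to verify by simulation; the following sketch (our own, with the MLE computed by generic BFGS rather than a GLM solver) compares the directions of the two estimates under a Gaussian design:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, p = 50_000, 20
beta = rng.standard_normal(p)
X = rng.standard_normal((n, p))                  # Gaussian design, Sigma = I
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

def nll(b):                                      # logistic negative log-likelihood
    xb = X @ b
    return np.mean(np.log1p(np.exp(xb)) - y * xb)

def grad(b):
    return X.T @ (1.0 / (1.0 + np.exp(-X @ b)) - y) / n

beta_mle = minimize(nll, np.zeros(p), jac=grad, method="BFGS").x
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
cos = beta_mle @ beta_ols / (np.linalg.norm(beta_mle) * np.linalg.norm(beta_ols))
print("cosine similarity:", cos)                 # expected to be close to 1
```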
3.1 Regularization
A version of Proposition 1 incorporating regularization (an important tool for datasets where p is large relative to n or the predictors are highly collinear) is also possible, as outlined briefly in this section. We focus on ℓ₂-regularization (ridge regression) in this section; some connections with the lasso (ℓ₁-regularization) are discussed in Section 6 and Corollary 1.

For λ ≥ 0, define the ℓ₂-regularized GLM coefficients

β^glm_λ = argmax_{β∈R^p} E[ y_i⟨x_i, β⟩ − ψ(⟨x_i, β⟩) ] − (λ/2)∥β∥₂²,    (6)

and the corresponding ℓ₂-regularized OLS coefficients β^ols_γ = (E[x_i x_i^T] + γI)^{-1} E[x_i y_i] (so β^glm_0 = β^glm and β^ols_0 = β^ols). The same argument as above implies that

β^glm_λ = c_λ · β^ols_γ,    (7)

where c_λ = 1/E[ψ^(2)(⟨x, β^glm_λ⟩)] and γ = λ c_λ. This suggests that ordinary ridge regression for the linear model can be used to estimate the ℓ₂-regularized GLM coefficients β^glm_λ. Further pursuing these ideas for problems where regularization is a critical issue may be an interesting area for future research.
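A sketch of the base computation suggested by eqn. (7): a sample version of the regularized OLS coefficients, to which a proportionality constant would then be fit as in Algorithm 1 (the function name is ours, and the choice of γ corresponding to a given λ involves the unknown constant c_λ):

```python
import numpy as np

def ridge_ols(X, y, gamma):
    """Sample version of (E[x x^T] + gamma I)^{-1} E[x y], gamma >= 0."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X / n + gamma * np.eye(p), X.T @ y / n)
```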
4 SLS: Scaled Least Squares estimator for GLMs

Motivated by the results in the previous section, we design a computationally efficient algorithm for any GLM task that is as simple as solving the least squares problem; it is described in Algorithm 1. The algorithm has two basic steps. First, we estimate the OLS coefficients, and then in the second step we estimate the proportionality constant via a simple root-finding algorithm.

There are numerous fast optimization methods to solve the least squares problem, and even a superficial review of these would go beyond the page limits of this paper. We emphasize that this step (finding the OLS estimator) does not have to be iterative and it is the main computational cost of the proposed algorithm. We suggest using a sub-sampling based estimator for β^ols, where we only use a subset of the observations to estimate the covariance matrix. Let S ⊆ [n] be a
Figure 1: Logistic regression with general Gaussian design; SLS vs MLE in computation (time in seconds against log₁₀(n), left) and accuracy (∥β̂ − β∥₂ against log₁₀(n), right). The left plot shows the computational cost (time) for finding the MLE and the SLS as n grows and p = 200. The right plot depicts the accuracy of the estimators. In the regime where the MLE is expensive to compute, the SLS is found much more rapidly and has the same accuracy. R's built-in functions are used to find the MLE.
random sub-sample and denote by X_S the sub-matrix formed by the rows of X in S. Then the sub-sampled OLS estimator is given as β̂^ols = (|S|^{-1} X_S^T X_S)^{-1} n^{-1} X^T y. Properties of this estimator have been well-studied [Ver10, DLFU13, EM15]. For sub-Gaussian covariates, it suffices to use a sub-sample size of O(p log(p)) [Ver10]. Hence, this step requires a one-time computational cost of O(|S|p² + p³ + np) ≈ O(p max{p² log(p), n}). For other approaches, we refer the reader to [RT08, DLFU13] and the references therein.
The second step of Algorithm 1 involves solving a simple root-finding problem. As with the first step of the algorithm, there are numerous methods available for completing this task. Newton's root-finding method with quadratic convergence or Halley's method with cubic convergence may be appropriate choices. We highlight that this step costs only O(n) per-iteration and that we can attain up to a cubic rate of convergence. The resulting per-iteration cost is cheaper than other commonly used batch algorithms by at least a factor of O(p); indeed, the cost of computing the gradient is O(np). For simplicity, we use Newton's root-finding method initialized at c = 2/Var(y_i). Assuming that the GLM is a good approximation to the true conditional distribution, by the law of total variance and basic properties of GLMs, we have

Var(y_i) = E[Var(y_i | x_i)] + Var(E[y_i | x_i]) ≈ c⋆^{-1} + Var(ψ^(1)(⟨x_i, β⟩)).    (8)

It follows that this initialization is reasonable as long as c⋆^{-1} = E[Var(y_i | x_i)] is not much smaller than Var(ψ^(1)(⟨x_i, β⟩)). Our experiments show that the SLS is very robust to initialization.

In Figure 1, we compare the performance of our SLS estimator to that of the MLE, when both are used to analyze synthetic data generated from a logistic regression model under general Gaussian design with a randomly generated covariance matrix. The left plot shows the computational cost of obtaining both estimators as n increases for fixed p. The right plot shows the accuracy of the estimators. In the regime n ≫ p ≫ 1, where the MLE is hard to compute, the MLE and the SLS achieve the same accuracy, yet the SLS has a significantly smaller computation time. We refer the reader to Section 6 for theoretical results characterizing the finite sample behavior of the SLS.
5 Experiments
This section contains the results of a variety of numerical studies, which show that the Scaled Least Squares estimator reaches the minimum achievable test error substantially faster than commonly used batch algorithms for finding the MLE. Both logistic and Poisson regression models (two types of GLMs) are utilized in our analyses, which are based on several synthetic and real datasets.

Below, we briefly describe the optimization algorithms for the MLE that were used in the experiments.

1. Newton-Raphson (NR) achieves locally quadratic convergence by scaling the gradient by the inverse of the Hessian evaluated at the current iterate. Computing the Hessian has a per-iteration cost of O(np²), which makes it impractical for large-scale datasets.
2. Newton-Stein (NS) is a recently proposed second-order batch algorithm specifically designed for GLMs [Erd16]. The algorithm uses Stein's lemma and sub-sampling to efficiently estimate the Hessian with O(np) per-iteration cost, achieving near quadratic rates.
Figure 2: Performance of the SLS compared to that of the MLE obtained with various optimization algorithms (NR, NS, BFGS, LBFGS, GD, AGD) on several datasets; the SLS is represented with a red straight line. Each panel (a)-(h) plots test error (log test error for the Poisson models) against time in seconds, for the model, dataset and initialization (random start vs. OLS start) combinations listed in Table 1, covering logistic regression (covariates ~ Σ × {Exp(1)−1} and the Higgs dataset) and Poisson regression (covariates ~ Σ × Ber(±1) and the Covertype dataset). The details are provided in Table 1.
3. Broyden-Fletcher-Goldfarb-Shanno (BFGS) is the most popular and stable quasi-Newton method [Nes04]. At each iteration, the gradient is scaled by a matrix that is formed by accumulating information from previous iterations and gradient computations. The convergence is locally super-linear with a per-iteration cost of O(np).
4. Limited memory BFGS (LBFGS) is a variant of BFGS, which uses only the recent iterates and gradients to approximate the Hessian, providing significant improvement in terms of memory usage. LBFGS has many variants; we use the formulation given in [Bis95].
5. Gradient descent (GD) takes a step in the opposite direction of the gradient, evaluated at the current iterate. Its performance strongly depends on the condition number of the design matrix. Under certain assumptions, the convergence is linear with O(np) per-iteration cost.
6. Accelerated gradient descent (AGD) is a modified version of gradient descent with an additional "momentum" term [Nes83]. Its per-iteration cost is O(np) and its performance strongly depends on the smoothness of the objective function.

For all the algorithms, the step size at each iteration is chosen via backtracking line search [BV04].
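For reference, a sketch of the backtracking rule used by all of the above (standard Armijo backtracking in the style of [BV04]; parameter defaults are illustrative, and the direction is assumed to be a descent direction):

```python
import numpy as np

def backtracking_step(f, grad_f, x, direction, t0=1.0, alpha=0.3, beta=0.8):
    """Shrink the step size until the Armijo sufficient-decrease condition
    f(x + t d) <= f(x) + alpha * t * <grad f(x), d> holds."""
    t, g = t0, grad_f(x)
    while f(x + t * direction) > f(x) + alpha * t * g @ direction:
        t *= beta
    return t
```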
Recall that the proposed Algorithm 1 is composed of two steps; the first finds an estimate of the OLS coefficients. This up-front computation is not needed for any of the MLE algorithms described above. On the other hand, each of the MLE algorithms requires some initial value for β, but no such initialization is needed to find the OLS estimator in Algorithm 1. This raises the question of how the MLE algorithms should be initialized, in order to compare them fairly with the proposed method. We consider two scenarios in our experiments: first, we use the OLS estimator computed for Algorithm 1 to initialize the MLE algorithms; second, we use a random initial value.

On each dataset, the main criterion for assessing the performance of the estimators is how rapidly the minimum test error is achieved. The test error is measured as the mean squared error of the estimated mean using the current parameters at each iteration on a test dataset, which is a randomly selected (and set-aside) 10% portion of the entire dataset. As noted previously, the MLE is more accurate for small n (see Figure 1). However, in the regime considered here (n ≫ p ≫ 1), the MLE and the SLS perform very similarly in terms of their error rates; for instance, on the Higgs dataset, the SLS and MLE have test error rates of 22.40% and 22.38%, respectively. For each dataset, the minimum achievable test error is set to be the maximum of the final test errors, where the maximum is taken over all of the estimation methods. Let Σ^(1) and Σ^(2) be two randomly generated covariance matrices. The datasets we analyzed were: (i) a synthetic dataset generated from a logistic regression model with iid {Exponential(1) − 1} predictors scaled by Σ^(1); (ii) the Higgs dataset (logistic regression) [BSW14]; (iii) a synthetic dataset generated from a Poisson regression model with iid binary (±1) predictors scaled by Σ^(2); (iv) the Covertype dataset (Poisson regression) [BD99].

In all cases, the SLS outperformed the alternative algorithms for finding the MLE by a large margin, in terms of computation. Detailed results may be found in Figure 2 and Table 1. We provide additional experiments with different datasets in the Supplementary Material.
Table 1: Details of the experiments shown in Figure 2. Entries in the lower block are time in seconds / number of iterations (to reach the minimum test error).

Model                Logistic regression                         Poisson regression
Dataset              Σ×{Exp(1)−1}          Higgs [BSW14]         Σ×Ber(±1)             Covertype [BD99]
Size                 n = 6.0×10^5, p = 300  n = 1.1×10^7, p = 29  n = 6.0×10^5, p = 300  n = 5.8×10^5, p = 53
Initialized (plot)   Rnd (a) / OLS (b)     Rnd (c) / OLS (d)     Rnd (e) / OLS (f)     Rnd (g) / OLS (h)

Method    (a)         (b)         (c)            (d)            (e)         (f)         (g)        (h)
SLS       8.34/4      2.94/3      13.18/3        9.57/3         5.42/5      3.96/5      2.71/6     1.66/20
NR        301.06/6    82.57/3     37.77/3        36.37/3        170.28/5    130.1/4     16.7/8     32.48/18
NS        51.69/8     7.8/3       27.11/4        26.69/4        32.71/5     36.82/4     21.17/10   282.1/216
BFGS      148.43/31   24.79/8     660.92/68      701.9/68       67.24/29    72.42/26    5.12/7     22.74/59
LBFGS     125.33/39   24.61/8     6368.1/651     6946.1/670     224.6/106   357.1/88    10.01/14   10.05/17
GD        669/138     134.91/25   100871/10101   141736/13808   1711/513    1364/374    14.35/25   33.58/87
AGD       218.1/61    35.97/12    2405.5/251     2879.69/277    103.3/51    102.74/40   11.28/15   11.95/25
6 Theoretical results
In this section, we use the zero-bias transformations [GR97] to generalize the equivalence between
OLS and GLMs to settings where the covariates are non-Gaussian.
Definition 1. Let z be a random variable with mean 0 and variance σ². Then, there exists a random variable z⋆ that satisfies E[z f(z)] = σ² E[f^(1)(z⋆)], for all differentiable functions f. The distribution of z⋆ is said to be the z-zero-bias distribution.

The existence of z⋆ in Definition 1 is a consequence of the Riesz representation theorem [GR97]. The normal distribution is the unique distribution whose zero-bias transformation is itself (i.e. the normal distribution is a fixed point of the operation mapping the distribution of z to that of z⋆).
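As a concrete example, the zero-bias distribution of a Rademacher (±1) variable is Uniform(−1, 1); the sketch below checks the defining identity by Monte Carlo for one test function:

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.choice([-1.0, 1.0], size=10**6)        # Rademacher: mean 0, variance 1
z_star = rng.uniform(-1.0, 1.0, size=10**6)    # its zero-bias distribution
f = np.tanh                                    # a differentiable test function
lhs = np.mean(z * f(z))                        # E[z f(z)]
rhs = np.mean(1.0 - np.tanh(z_star) ** 2)      # sigma^2 E[f'(z*)], sigma^2 = 1
print(lhs, rhs)                                # both equal tanh(1) up to MC error
```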
To provide some intuition behind the usefulness of the zero-bias transformation, we refer back to the proof of Proposition 1. For simplicity, assume that the covariate vector x_i has iid entries with mean 0 and variance 1. Then the zero-bias transformation applied to the j-th normal equation in (4) yields

E[y_i x_{ij}] = E[ x_{ij} ψ^(1)( x_{ij}β_j + Σ_{k≠j} x_{ik}β_k ) ] = β_j E[ ψ^(2)( x⋆_{ij}β_j + Σ_{k≠j} x_{ik}β_k ) ],    (9)

where the left equality is the j-th normal equation and the right equality is the zero-bias transformation. The distribution of x⋆_{ij} is the x_{ij}-zero-bias distribution and is entirely determined by the distribution of x_{ij}; general properties of x⋆_{ij} can be found, for example, in [CGS10]. If β is well spread, it turns out that taken together, with j = 1, ..., p, the far right-hand side in (9) behaves similarly to the right side of (5), with Σ = I; that is, the behavior is similar to the Gaussian case, where the proportionality relationship given in Proposition 1 holds. This argument leads to an approximate proportionality relationship for non-Gaussian predictors, which, when carried out rigorously, yields the following.
Theorem 1. Suppose that the covariate vector x_i has mean 0 and covariance matrix Σ and, furthermore, that the random vector Σ^{-1/2}x_i has independent entries and its sub-Gaussian norm is bounded by η. Assume that the function ψ^(2) is Lipschitz continuous with constant k. Let ∥β^glm∥₂ = τ and assume β^glm is r-well-spread in the sense that τ/∥β^glm∥₁ = r/√p for some r ∈ (0, 1]. Then, for c⋆ = 1/E[ψ^(2)(⟨x_i, β^glm⟩)], and κ = κ₁(Σ^{1/2}) denoting the condition number of Σ^{1/2}, we have

∥ β^glm/c⋆ − β^ols ∥_∞ ≤ η̄/p, where η̄ = 8kη³κ ∥Σ^{1/2}∥₁ (τ/r)².    (10)
Theorem 1 is proved in the Supplementary Material. It implies that the population parameters β^ols and β^glm are approximately equivalent up to a scaling factor, with an error bound of O(1/p). The assumption that β^glm is well-spread can be relaxed with minor modifications. For example, if we have a sparse coefficient vector, where supp(β^glm) = {j : β^glm_j ≠ 0} is the support set of β^glm, then Theorem 1 holds with p replaced by the size of the support set.
An interesting consequence of Theorem 1 and the remarks following the theorem is that whenever an entry of β^glm is zero, the corresponding entry of β^ols has to be small, and conversely. For λ ≥ 0, define the lasso coefficients

β^lasso_λ = argmin_{β∈R^p} (1/2) E[(y_i − ⟨x_i, β⟩)²] + λ∥β∥₁.    (11)
Corollary 1. For any λ ≥ η̄/|supp(β^glm)|, if E[x_i] = 0 and E[x_i x_i^T] = I, we have supp(β^lasso_λ) ⊆ supp(β^glm). Further, if λ and β^glm also satisfy that for all j ∈ supp(β^glm), |β^glm_j| > c⋆(λ + η̄/|supp(β^glm)|), then we have supp(β^lasso_λ) = supp(β^glm).
So far in this section, we have only discussed properties of the population parameters, such as β^glm. In the remainder of this section, we turn our attention to results for the estimators that are the main focus of this paper; these results ultimately build on our earlier results, i.e. Theorem 1.

In order to precisely describe the performance of β̂^sls, we first need bounds on the OLS estimator. The OLS estimator has been studied extensively in the literature; however, for our purposes, we find it convenient to derive a new bound on its accuracy. While we have not seen this exact bound elsewhere, it is very similar to Theorem 5 of [DLFU13].
Proposition 2. Assume that E[x_i] = 0, E[x_i x_i^T] = Σ, and that Σ^{-1/2}x_i and y_i are sub-Gaussian with norms η and γ, respectively. For λ_min denoting the smallest eigenvalue of Σ, and |S| > ϑp,

∥ β̂^ols − β^ols ∥₂ ≤ ϑ λ_min^{-1/2} √(p/|S|),    (12)

with probability at least 1 − 3e^{-p}, where ϑ depends only on γ and η.
Proposition 2 is proved in the Supplementary Material. Our main result on the performance of β̂^sls is given next.

Theorem 2. Let the assumptions of Theorem 1 and Proposition 2 hold with E[∥Σ^{-1/2}x∥₂] = τ̄√p. Further assume that the function f(z) = zE[ψ^(2)(⟨x, β^ols⟩z)] satisfies f(c̄) > 1 + √p for some c̄, and such that the derivative of f in the interval [0, c̄] does not change sign, i.e., its absolute value is lower bounded by b > 0. Then, for n and |S| sufficiently large, we have

∥ β̂^sls − β^glm ∥_∞ ≤ η₁ (1/p) + η₂ √( p / min{ n/log(n), |S|/p } ),    (13)

with probability at least 1 − 5e^{-p}, where the constants η₁ and η₂ are defined by

η₁ = 8kη³ c̄ κ ∥Σ^{1/2}∥₁ (τ/r)²,    (14)
η₂ = ϑ c̄ λ_min^{-1/2} ( 1 + λ_min^{-1/2} ∥β^ols∥₁ )³ max{ (b + k/τ̄), k c̄ },    (15)

and ϑ > 0 is a constant depending on η and γ.
Note that the convergence rate of the upper bound in (13) depends on the sum of the two terms, both of which are functions of the data dimensions n and p. The first term on the right in (13) comes from Theorem 1, which bounds the discrepancy between c⋆ · β^ols and β^glm. This term is small when p is large, and it does not depend on the number of observations n.

The second term in the upper bound (13) comes from estimating β^ols and c⋆. This term is increasing in p, which reflects the fact that estimating β^glm is more challenging when p is large. As expected, this term is decreasing in n and |S|, i.e. a larger sample size yields better estimates. When the full OLS solution is used (|S| = n), the second term becomes O(√(p · max{log(n), p}/n)) = O(p/√n), for p sufficiently large. This suggests that n should be at least of order p² for good performance.
7 Discussion
In this paper, we showed that the coefficients of GLMs and OLS are approximately proportional
in the general random design setting. Using this relation, we proposed a computationally efficient
algorithm for large-scale problems that achieves the same accuracy as the MLE by first estimating the
OLS coefficients and then estimating the proportionality constant through iterations that can attain
quadratic or cubic convergence rate, with only O (n) per-iteration cost.
We briefly mentioned that the proportionality between the coefficients holds even when there is
regularization in Section 3.1. Further pursuing this idea may be interesting for large-scale problems
where regularization is crucial. Another interesting line of research is to find similar proportionality
relations between the parameters in other large-scale optimization problems such as support vector
machines. Such relations may reduce the problem complexity significantly.
8
References
[BD99]
J. A. Blackard and D. J. Dean, Comparative accuracies of artificial neural networks and discriminant
analysis in predicting forest cover types from cartographic variables, Comput. Electron. Agr. 24
(1999), 131?151.
[BEM13] M. Bayati, M. A. Erdogdu, and A. Montanari, Estimating lasso risk and noise level, NIPS 26, 2013,
pp. 944?952.
[Bis95]
C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[Bri82]
D. R Brillinger, A generalized linear model with "Gaussian" regressor variables, A Festschrift For
Erich L. Lehmann, CRC Press, 1982, pp. 97?114.
[BSW14] P. Baldi, P. Sadowski, and D. Whiteson, Searching for exotic particles in high-energy physics with
deep learning, Nat. Commun. 5 (2014), 4308?4308.
[BV04]
S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[CGS10]
L. H. Y. Chen, L. Goldstein, and Q.-M. Shao, Normal approximation by Stein?s method, Springer,
2010.
[DLFU13] P. Dhillon, Y. Lu, D. P. Foster, and L. Ungar, New subsampling algorithms for fast least squares
regression, NIPS 26 (2013), 360?368.
[EM15]
M. A. Erdogdu and A. Montanari, Convergence rates of sub-sampled newton methods, NIPS 28,
2015, pp. 3034?3042.
[Erd15]
M. A. Erdogdu, Newton-Stein method: A second order method for GLMs via Stein?s lemma, NIPS
28 (2015), 1216?1224.
[Erd16]
, Newton-Stein Method: An optimization method for GLMs via Stein?s Lemma, Journal of
Machine Learning Research (to appear) (2016).
[Fis36]
R. A. Fisher, The use of multiple measurements in taxonomic problems, Ann. Eugenic 7 (1936),
179?188.
[Gol07]
L. Goldstein, l1 bounds in normal approximation, Ann. Probab. 35 (2007), 1888?1930.
[GR97]
L. Goldstein and G. Reinert, Stein?s method and the zero bias transformation with application to
simple random sampling, Ann. Appl. Probab. 7 (1997), 935?952.
[HS52]
M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res.
Nat. Bur. Stand. 49 (1952), 409?436.
[KF09]
D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques, MIT press,
2009.
[LD89]
K.-C. Li and N. Duan, Regression analysis under link violation, Ann. Stat. 17 (1989), 1009?1052.
[Mar10]
J. Martens, Deep learning via Hessian-free optimization, ICML 27 (2010), 735?742.
[MN89]
P. McCullagh and J. A. Nelder, Generalized Linear Models, 2nd ed., Chapman and Hall, 1989.
[Nes83]
Y. Nesterov, A method of solving a convex programming problem with convergence rate O(1/k2 ),
Soviet Math. Dokl. 27 (1983), 372?376.
[Nes04]
, Introductory Lectures on Convex Optimization: A Basic Course, Springer, 2004.
[PS75]
C. C. Paige and M. A. Saunders, Solution of sparse indefinite systems of linear equations, SIAM J.
Numer. Anal. 12 (1975), 617?629.
[PV15]
Y. Plan and R. Vershynin, The generalized lasso with non-linear observations, 2015, arXiv preprint
arXiv:1502.04071.
[RT08]
V. Rokhlin and M. Tygert, A fast randomized algorithm for overdetermined linear least-squares
regression, P. Natl. Acad. Sci. 105 (2008), 13212?13217.
[TAH15]
C. Thrampoulidis, E. Abbasi, and B. Hassibi, Lasso with non-linear measurements is equivalent to
one with linear measurements, NIPS 28 (2015), 3402?3410.
[Ver10]
R. Vershynin, Introduction to the non-asymptotic analysis of random matrices, 2010,
arXiv:1011.3027.
[WJ08]
M. J. Wainwright and M. I. Jordan, Graphical models, exponential families, and variational inference,
Foundations and Trends in Machine Learning 1 (2008), 1?305.
9
| 6522 |@word briefly:3 version:2 achievable:2 norm:4 nd:5 proportionality:8 covariance:7 mar10:2 initial:2 celebrated:1 contains:1 denoting:2 existing:1 current:3 yet:1 dx:2 written:2 numerical:2 plot:4 designed:1 v:2 aside:1 selected:1 nq:1 provides:1 iterates:1 revisited:1 math:1 simpler:1 ik:1 prove:1 consists:1 nes83:2 fitting:2 introductory:1 baldi:1 bur:1 indeed:1 expected:2 behavior:3 decreasing:1 duan:2 little:1 xti:5 increasing:1 becomes:1 spain:1 estimating:11 notation:3 provided:1 bounded:2 biostatistics:1 exotic:1 argmin:1 substantially:2 finding:11 brillinger:2 transformation:7 impractical:1 guarantee:1 thorough:1 exactly:1 scaled:8 k2:3 wj08:2 yn:2 appear:1 positive:1 limit:1 consequence:2 acad:1 oxford:1 solely:1 approximately:5 therein:1 studied:3 initialization:3 equivalence:2 suggests:3 halley:1 conversely:1 challenging:1 appl:1 limited:1 graduate:1 practical:1 unique:1 area:2 attain:4 significantly:2 convenient:1 imprecise:1 boyd:1 suggest:1 operator:1 context:1 cartographic:1 nes04:4 risk:1 accumulating:1 equivalent:4 dean:1 marten:1 go:1 attention:2 regardless:2 l:5 convex:5 survey:1 amazon:1 simplicity:2 estimator:31 vandenberghe:1 population:3 searching:1 play:1 suppose:1 exact:2 programming:1 us:3 overdetermined:1 trend:1 expensive:1 ze:1 utilized:1 recognition:1 distributional:1 observed:1 role:1 preprint:1 highest:1 thermore:1 mentioned:3 intuition:3 complexity:1 covariates:9 nesterov:1 rigorously:1 ultimately:1 motivate:1 depend:2 mohsen:1 solving:4 raise:1 shao:1 joint:1 various:1 represented:1 soviet:1 derivation:1 kf09:2 effective:1 fast:3 describe:2 artificial:1 saunders:1 whose:1 stanford:4 larger:3 widely:1 valued:1 solve:2 supplementary:3 compressed:1 statistic:6 itself:1 final:1 agr:1 eigenvalue:2 differentiable:1 propose:1 product:1 remainder:1 combining:1 rapidly:2 achieve:2 convergence:19 assessing:1 generating:1 comparative:1 derive:1 depending:1 stat:2 measured:1 ij:3 minor:1 school:1 received:1 p2:3 involves:2 implies:2 riesz:1 come:2 direction:2 drawback:1 mn89:5 oisson:1 material:3 crc:1 hx:3 ungar:1 suffices:1 generalization:1 preliminary:1 proposition:9 hold:6 sufficiently:3 considered:1 ic:1 normal:12 exp:2 hall:1 fletcher:1 mapping:1 electron:1 achieves:5 smallest:1 xk2:1 purpose:1 estimation:2 outperformed:1 tool:1 reflects:1 minimization:1 mit:1 gaussian:22 super:2 modified:1 pn:3 poi:4 corollary:2 np2:2 derived:2 focus:5 improvement:1 likelihood:4 rigorous:1 attains:1 sense:1 inference:1 hestenes:1 xip:1 entire:1 relation:4 koller:1 quasi:2 issue:1 classification:1 denoted:1 k6:2 plan:1 special:1 initialize:2 integration:1 fairly:1 sampling:4 chapman:1 icml:1 excessive:1 nearly:1 future:1 discrepancy:1 np:10 randomly:3 composed:1 ime:1 cheaper:3 festschrift:1 replaced:1 argmax:3 n1:1 friedman:1 highly:1 ogistic:1 numer:1 reinert:1 introduces:1 analyzed:1 violation:1 behind:3 natl:1 implication:1 accurate:1 modest:1 indexed:1 iv:1 euclidean:1 initialized:2 re:1 theoretical:4 minimal:1 instance:1 earlier:1 cover:1 ordinary:4 cost:20 subset:2 entry:4 predictor:16 usefulness:1 conducted:1 front:1 synthetic:5 gd:10 vershynin:2 density:2 shanno:1 siam:1 randomized:1 lee:1 xi1:1 physic:1 probabilistic:1 regressor:1 together:1 squared:1 abbasi:1 derivative:2 li:2 supp:8 bfgs:13 summarized:1 sec:9 coefficient:22 satisfy:2 depends:4 higgs:4 root:6 lot:1 analyze:2 characterizes:1 red:1 start:2 recover:1 portion:1 odel:1 contribution:1 square:15 ni:1 accuracy:9 formed:2 variance:3 efficiently:3 yield:3 generalize:1 igg:1 iid:4 basically:1 lu:1 
straight:1 reach:2 whenever:1 ed:1 definition:2 energy:1 pp:3 proof:4 sampled:2 dataset:14 proved:2 popular:2 recall:2 organized:1 actually:1 back:1 goldstein:3 appears:1 follow:1 methodology:1 response:3 formulation:2 evaluated:2 strongly:2 until:1 glms:20 hand:5 nonlinear:1 logistic:8 grows:1 usage:1 y2:1 remedy:1 true:1 ize:1 regularization:7 hence:1 symmetric:1 dhillon:1 goldfarb:1 bv04:2 bem13:2 em15:2 reweighted:1 noted:2 criterion:1 generalized:6 ridge:2 demonstrate:1 l1:1 stiefel:1 variational:1 recently:2 ols:50 common:1 behaves:1 multinomial:1 extend:1 slight:1 discussed:2 refer:4 significant:1 measurement:3 cambridge:1 broyden:1 smoothness:2 outlined:1 erich:1 similarly:1 kukq:2 particle:1 hxi:13 stable:1 tygert:1 curvature:1 multivariate:3 recent:2 showed:1 fgs:1 commun:1 scenario:1 certain:2 binary:1 accomplished:1 yi:22 seen:1 minimum:4 additional:3 relaxed:2 novelty:1 signal:1 ii:2 full:1 desirable:1 multiple:1 smooth:1 faster:1 calculation:1 raphson:2 long:1 mle:30 involving:1 regression:26 basic:3 dicker:1 variant:2 rutgers:3 poisson:5 arxiv:3 iteration:21 achieved:1 background:1 whereas:1 interval:1 crucial:2 w2:1 rest:1 ver10:3 member:1 bd99:3 jordan:1 integer:1 extracting:1 near:2 iii:1 variety:2 iterate:2 variate:1 lasso:9 opposite:1 inner:1 idea:3 reduce:1 motivated:1 collinear:1 paige:1 hessian:6 remark:1 deep:2 detailed:1 stein:10 locally:4 extensively:1 sl:34 xij:5 canonical:2 sign:1 estimated:2 per:14 iz:1 key:1 indefinite:1 achieving:1 drawn:1 sum:1 inverse:1 taxonomic:1 lehmann:1 throughout:1 reader:2 reasonable:1 pursuing:2 family:1 p3:1 scaling:2 entirely:1 bound:9 completing:1 cgs10:2 quadratic:6 covertype:3 precisely:2 worked:1 x2:1 argument:2 min:7 relatively:1 department:2 conjugate:2 smaller:2 modification:1 bis95:3 glm:48 taken:2 computationally:4 equation:8 previously:1 discus:1 turn:2 needed:2 available:1 operation:1 observe:1 appropriate:1 batch:5 alternative:3 rp:8 existence:1 original:1 denotes:2 include:2 subsampling:1 graphical:3 newton:11 log10:2 exploit:1 k1:5 build:2 classical:1 objective:2 question:1 nr:10 said:1 gradient:12 link:4 sci:1 discriminant:2 assuming:1 index:1 relationship:7 providing:1 nc:1 xik:2 design:17 implementation:1 murat:1 ethod:1 anal:1 perform:1 zf:1 upper:2 observation:5 datasets:8 finite:1 descent:4 misspecification:2 y1:2 rn:1 thrampoulidis:1 pair:1 required:1 extensive:1 connection:1 barcelona:1 nip:6 beyond:2 dokl:1 krylov:1 below:1 pattern:1 regime:5 built:1 max:5 memory:2 wainwright:1 critical:1 business:1 regularized:3 predicting:1 residual:1 brief:1 numerous:3 carried:1 text:1 review:1 literature:1 probab:2 relative:1 law:1 asymptotic:1 lecture:1 highlight:1 interesting:4 proportional:3 var:8 bayati:3 foundation:1 xp:1 foster:1 principle:1 row:2 elsewhere:1 course:1 repeat:1 free:2 side:4 allow:1 bias:9 ber:2 erdogdu:5 taking:1 characterizing:1 absolute:1 sparse:2 benefit:2 dimension:4 xn:2 valid:1 stand:1 commonly:3 far:2 agd:10 approximate:2 emphasize:1 blackard:1 conclude:2 assumed:1 nelder:1 xi:25 search:1 iterative:3 continuous:1 table:3 additionally:1 superficial:1 robust:1 obtaining:1 forest:1 whiteson:1 necessarily:1 main:5 spread:3 montanari:2 noise:1 x1:2 depicts:1 cubic:5 slow:1 n:10 hassibi:1 sub:13 theme:1 momentum:1 exponential:2 comput:1 erd15:2 theorem:11 sadowski:1 xt:3 covariate:2 bishop:1 er:1 sensing:2 x:4 incorporating:1 exists:1 nat:2 justifies:1 margin:1 chen:1 backtracking:1 lbfgs:10 scalar:1 springer:2 satisfies:3 conditional:2 viewed:1 identity:2 ann:4 miscellaneous:1 lipschitz:1 
fisher:3 hard:1 change:1 mccullagh:1 specifically:1 determined:1 lemma:5 total:1 ew:2 rokhlin:1 support:3 cumulant:1 accelerated:1 reg:8 |
Data Programming: Creating Large Training Sets, Quickly
Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Christopher R?
Stanford University
{ajratner,cdesa,senwu,dselsam,chrismre}@stanford.edu
Abstract
Large labeled training sets are the critical building blocks of supervised learning
methods and are key enablers of deep learning techniques. For some applications,
creating labeled training sets is the most time-consuming and expensive part of
applying machine learning. We therefore propose a paradigm for the programmatic
creation of training sets called data programming in which users express weak
supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show
that by explicitly representing this training set labeling process as a generative
model, we can ?denoise? the generated training set, and establish theoretically
that we can recover the parameters of these generative models in a handful of
settings. We then show how to modify a discriminative loss function to make it
noise-aware, and demonstrate our method over a range of discriminative models
including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP
Slot Filling challenge, we show that data programming would have led to a new
winning score, and also show that applying data programming to an LSTM model
leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline
(and into second place in the competition). Additionally, in initial user studies we
observed that data programming may be an easier way for non-experts to create
machine learning models when training data is limited or unavailable.
1 Introduction
Many of the major machine learning breakthroughs of the last decade have been catalyzed by the
release of a new labeled training dataset.1 Supervised learning approaches that use such datasets have
increasingly become key building blocks of applications throughout science and industry. This trend
has also been fueled by the recent empirical success of automated feature generation approaches,
notably deep learning methods such as long short term memory (LSTM) networks [14], which ameliorate the burden of feature engineering given large enough labeled training sets. For many real-world
applications, however, large hand-labeled training sets do not exist, and are prohibitively expensive to create due to requirements that labelers be experts in the application domain. Furthermore,
applications? needs often change, necessitating new or modified training sets.
To help reduce the cost of training set creation, we propose data programming, a paradigm for the
programmatic creation and modeling of training datasets. Data programming provides a simple,
unifying framework for weak supervision, in which training labels are noisy and may be from
multiple, potentially overlapping sources. In data programming, users encode this weak supervision
in the form of labeling functions, which are user-defined programs that each provide a label for
some subset of the data, and collectively generate a large but potentially overlapping set of training
labels. Many different weak supervision approaches can be expressed as labeling functions, such
1 http://www.spacemachine.net/views/2016/3/datasets-over-algorithms
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
as strategies which utilize existing knowledge bases (as in distant supervision [22]), model many
individual annotator's labels (as in crowdsourcing), or leverage a combination of domain-specific
patterns and dictionaries. Because of this, labeling functions may have widely varying error rates and
may conflict on certain data points. To address this, we model the labeling functions as a generative
process, which lets us automatically denoise the resulting training set by learning the accuracies of
the labeling functions along with their correlation structure. In turn, we use this model of the training
set to optimize a stochastic version of the loss function of the discriminative model that we desire to
train. We show that, given certain conditions on the labeling functions, our method achieves the same
asymptotic scaling as supervised learning methods, but that our scaling depends on the amount of
unlabeled data, and uses only a fixed number of labeling functions.
Data programming is in part motivated by the challenges that users faced when applying prior
programmatic supervision approaches, and is intended to be a new software engineering paradigm for
the creation and management of training sets. For example, consider the scenario when two labeling
functions of differing quality and scope overlap and possibly conflict on certain training examples; in
prior approaches the user would have to decide which one to use, or how to somehow integrate the
signal from both. In data programming, we accomplish this automatically by learning a model of the
training set that includes both labeling functions. Additionally, users are often aware of, or able to
induce, dependencies between their labeling functions. In data programming, users can provide a
dependency graph to indicate, for example, that two labeling functions are similar, or that one "fixes"
or "reinforces" another. We describe cases in which we can learn the strength of these dependencies,
and for which our generalization is again asymptotically identical to the supervised case.
One further motivation for our method is driven by the observation that users often struggle with
selecting features for their models, which is a traditional development bottleneck given fixed-size
training sets. However, initial feedback from users suggests that writing labeling functions in the
framework of data programming may be easier [12]. While the impact of a feature on end performance
is dependent on the training set and on statistical characteristics of the model, a labeling function has
a simple and intuitive optimality criterion: that it labels data correctly. Motivated by this, we explore
whether we can flip the traditional machine learning development process on its head, having users
instead focus on generating training sets large enough to support automatically-generated features.
Summary of Contributions and Outline Our first contribution is the data programming framework, in which users can implicitly describe a rich generative model for a training set in a more
flexible and general way than in previous approaches. In Section 3, we first explore a simple model in
which labeling functions are conditionally independent. We show here that under certain conditions,
the sample complexity is nearly the same as in the labeled case. In Section 4, we extend our results to
more sophisticated data programming models, generalizing related results in crowdsourcing [17]. In
Section 5, we validate our approach experimentally on large real-world text relation extraction tasks
in genomics, pharmacogenomics and news domains, where we show an average 2.34 point F1 score
improvement over a baseline distant supervision approach, including what would have been a new
competition-winning score for the 2014 TAC-KBP Slot Filling competition. Using LSTM-generated
features, we additionally would have placed second in this competition, achieving a 5.98 point F1
score gain over a state-of-the-art LSTM baseline [32]. Additionally, we describe promising feedback
from a usability study with a group of bioinformatics users.
2 Related Work
Our work builds on many previous approaches in machine learning. Distant supervision is one
approach for programmatically creating training sets. The canonical example is relation extraction
from text, wherein a knowledge base of known relations is heuristically mapped to an input corpus [8,
22]. Basic extensions group examples by surrounding textual patterns, and cast the problem as a
multiple instance learning one [15, 25]. Other extensions model the accuracy of these surrounding
textual patterns using a discriminative feature-based model [26], or generative models such as
hierarchical topic models [1, 27, 31]. Like our approach, these latter methods model a generative
process of training set creation, however in a proscribed way that is not based on user input as in
our approach. There is also a wealth of examples where additional heuristic patterns used to label
training data are collected from unlabeled data [7] or directly from users [21, 29], in a similar manner
to our approach, but without any framework to deal with the fact that said labels are explicitly noisy.
Crowdsourcing is widely used for various machine learning tasks [13, 18]. Of particular relevance
to our problem setting is the theoretical question of how to model the accuracy of various experts
without ground truth available, classically raised in the context of crowdsourcing [10]. More recent
results provide formal guarantees even in the absence of labeled data using various approaches [4,
9, 16, 17, 24, 33]. Our model can capture the basic model of the crowdsourcing setting, and can be
considered equivalent in the independent case (Sec. 3). However, in addition to generalizing beyond
getting inputs solely from human annotators, we also model user-supplied dependencies between the
"labelers" in our model, which is not natural within the context of crowdsourcing. Additionally, while
crowdsourcing results focus on the regime of a large number of labelers each labeling a small subset
of the data, we consider a small set of labeling functions each labeling a large portion of the dataset.
Co-training is a classic procedure for effectively utilizing both a small amount of labeled data and a
large amount of unlabeled data by selecting two conditionally independent views of the data [5]. In
addition to not needing a set of labeled data, and allowing for more than two views (labeling functions
in our case), our approach allows explicit modeling of dependencies between views, for example
allowing observed issues with dependencies between views to be explicitly modeled [19].
Boosting is a well known procedure for combining the output of many ?weak? classifiers to create a
strong classifier in a supervised setting [28]. Recently, boosting-like methods have been proposed
which leverage unlabeled data in addition to labeled data, which is also used to set constraints on the
accuracies of the individual classifiers being ensembled [3]. This is similar in spirit to our approach,
except that labeled data is not explicitly necessary in ours, and richer dependency structures between
our "heuristic" classifiers (labeling functions) are supported.
The general case of learning with noisy labels is treated both in classical [20] and more recent
contexts [23]. It has also been studied specifically in the context of label-noise robust logistic
regression [6]. We consider the more general scenario where multiple noisy labeling functions can
conflict and have dependencies.
3 The Data Programming Paradigm
In many applications, we would like to use machine learning, but we face the following challenges:
(i) hand-labeled training data is not available, and is prohibitively expensive to obtain in sufficient
quantities as it requires expensive domain expert labelers; (ii) related external knowledge bases are
either unavailable or insufficiently specific, precluding a traditional distant supervision or co-training
approach; (iii) application specifications are in flux, changing the model we ultimately wish to learn.
In such a setting, we would like a simple, scalable and adaptable approach for supervising a model
applicable to our problem. More specifically, we would ideally like our approach to achieve ε
expected loss with high probability, given O(1) inputs of some sort from a domain-expert user, rather
than the traditional Õ(ε⁻²) hand-labeled training examples required by most supervised methods
(where the Õ notation hides logarithmic factors). To this end, we propose data programming, a paradigm
for the programmatic creation of training sets, which enables domain-experts to more rapidly train
machine learning systems and has the potential for this type of scaling of expected loss. In data
programming, rather than manually labeling each example, users instead describe the processes by
which these points could be labeled by providing a set of heuristic rules called labeling functions.
In the remainder of this paper, we focus on a binary classification task in which we have a distribution
π over object and class pairs (x, y) ∈ X × {−1, 1}, and we are concerned with minimizing the logistic
loss under a linear model given some features,

l(w) = E_{(x,y)∼π}[ log(1 + exp(−wᵀ f(x) y)) ],

where without loss of generality, we assume that ‖f(x)‖ ≤ 1. Then, a labeling function λ_i : X →
{−1, 0, 1} is a user-defined function that encodes some domain heuristic, which provides a (non-zero)
label for some subset of the objects. As part of a data programming specification, a user provides
some m labeling functions, which we denote in vectorized form as λ : X → {−1, 0, 1}^m.
Example 3.1. To gain intuition about labeling functions, we describe a simple text relation extraction
example. In Figure 1, we consider the task of classifying co-occurring gene and disease mentions as
either expressing a causal relation or not. For example, given the sentence ?Gene A causes disease B?,
the object x = (A, B) has true class y = 1. To construct a training set, the user writes three labeling
def lambda_1(x):
    return 1 if (x.gene, x.pheno) in KNOWN_RELATIONS_1 else 0

def lambda_2(x):
    return -1 if re.match(r'.*not cause.*', x.text_between) else 0

def lambda_3(x):
    return 1 if re.match(r'.*associated.*', x.text_between) \
                and (x.gene, x.pheno) in KNOWN_RELATIONS_2 else 0

(a) An example set of three labeling functions written by a user. (b) [Figure: the generative model of a training set defined by the user input, a factor graph linking the class Y to λ1, λ2, λ3; unary factors omitted.]
Figure 1: An example of extracting mentions of gene-disease relations from the scientific literature.
functions (Figure 1a). In λ1, an external structured knowledge base is used to label a few objects with
relatively high accuracy, and is equivalent to a traditional distant supervision rule (see Sec. 2). λ2
uses a purely heuristic approach to label a much larger number of examples with lower accuracy.
Finally, λ3 is a "hybrid" labeling function, which leverages a knowledge base and a heuristic.
A labeling function need not have perfect accuracy or recall; rather, it represents a pattern that the
user wishes to impart to their model and that is easier to encode as a labeling function than as a
set of hand-labeled examples. As illustrated in Ex. 3.1, labeling functions can be based on external
knowledge bases, libraries or ontologies, can express heuristic patterns, or some hybrid of these types;
we see evidence for the existence of such diversity in our experiments (Section 5). The use of labeling
functions is also strictly more general than manual annotations, as a manual annotation can always be
directly encoded by a labeling function. Importantly, labeling functions can overlap, conflict, and
even have dependencies which users can provide as part of the data programming specification (see
Section 4); our approach provides a simple framework for these inputs.
Independent Labeling Functions We first describe a model in which the labeling functions label
independently, given the true label class. Under this model, each labeling function λ_i has some
probability β_i of labeling an object and then some probability α_i of labeling the object correctly; for
simplicity we also assume here that each class has probability 0.5. This model has distribution

μ_{α,β}(Λ, Y) = (1/2) ∏_{i=1}^{m} [ β_i α_i 1{Λ_i = Y} + β_i (1 − α_i) 1{Λ_i = −Y} + (1 − β_i) 1{Λ_i = 0} ],    (1)

where Λ ∈ {−1, 0, 1}^m contains the labels output by the labeling functions, and Y ∈ {−1, 1} is the
predicted class. If we allow the parameters α ∈ R^m and β ∈ R^m to vary, (1) specifies a family of
generative models. In order to expose the scaling of the expected loss as the size of the unlabeled
dataset changes, we will assume here that 0.3 ≤ β_i ≤ 0.5 and 0.8 ≤ α_i ≤ 0.9. We note that while
these arbitrary constraints can be changed, they are roughly consistent with our applied experience,
where users tend to write high-accuracy and high-coverage labeling functions.
Our first goal will be to learn which parameters (α, β) are most consistent with our observations
(our unlabeled training set) using maximum likelihood estimation. To do this for a particular training set
S ⊆ X, we will solve the problem

(α̂, β̂) = argmax_{α,β} Σ_{x∈S} log P_{(Λ,Y)∼μ_{α,β}}(Λ = λ(x)) = argmax_{α,β} Σ_{x∈S} log Σ_{y'∈{−1,1}} μ_{α,β}(λ(x), y')    (2)
In other words, we are maximizing the probability that the observed labels produced on our training
examples occur under the generative model in (1). In our experiments, we use stochastic gradient
descent to solve this problem; since this is a standard technique, we defer its analysis to the appendix.
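For intuition, here is a minimal sketch of one such stochastic gradient step on the marginal log-likelihood of a single example under model (1), with projection onto the assumed parameter box. This is our own direct implementation for illustration, not the paper's exact optimizer.

import numpy as np

def marginal_ll_grad(alpha, beta, lam):
    # Gradient of log sum_{y in {-1,+1}} mu_{alpha,beta}(lam, y) w.r.t. alpha and beta.
    g_a, g_b, total = np.zeros_like(alpha), np.zeros_like(beta), 0.0
    for y in (-1, 1):
        f = np.where(lam == y, beta * alpha,
            np.where(lam == -y, beta * (1.0 - alpha), 1.0 - beta))
        df_da = np.where(lam == y, beta, np.where(lam == -y, -beta, 0.0))
        df_db = np.where(lam == y, alpha, np.where(lam == -y, 1.0 - alpha, -1.0))
        p = 0.5 * np.prod(f)
        total += p
        g_a += p * df_da / f  # d(prod f)/d alpha_i = (prod f) * (df_i/d alpha_i) / f_i
        g_b += p * df_db / f
    return g_a / total, g_b / total

def sgd_step(alpha, beta, lam, step=0.01):
    g_a, g_b = marginal_ll_grad(alpha, beta, lam)
    # Project back onto the box assumed above; within it every f_i is strictly positive.
    return (np.clip(alpha + step * g_a, 0.8, 0.9),
            np.clip(beta + step * g_b, 0.3, 0.5))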
Noise-Aware Empirical Loss Given that our parameter learning phase has successfully found
some α̂ and β̂ that accurately describe the training set, we can now proceed to estimate the parameter
w which minimizes the expected risk of a linear model over our feature mapping f, given α̂, β̂. To do
so, we define the noise-aware empirical risk L_{α̂,β̂} with regularization parameter ρ, and compute the
noise-aware empirical risk minimizer

ŵ = argmin_w L_{α̂,β̂}(w; S) = argmin_w (1/|S|) Σ_{x∈S} E_{(Λ,Y)∼μ_{α̂,β̂}}[ log(1 + e^{−wᵀ f(x) Y}) | Λ = λ(x) ] + ρ‖w‖²    (3)
This is a logistic regression problem, so it can be solved using stochastic gradient descent as well.
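Concretely, for the independent model the conditional expectation in (3) reduces to a logistic loss weighted by P(Y = ±1 | Λ = λ(x)). A minimal sketch, reusing the mu function above (again ours; rho and the array names are illustrative):

import numpy as np

def noise_aware_risk(w, feats, lams, alpha, beta, rho):
    # feats: list of feature vectors f(x); lams: the corresponding label vectors lambda(x).
    total = 0.0
    for f, lam in zip(feats, lams):
        p_pos = mu(alpha, beta, lam, +1)
        p_neg = mu(alpha, beta, lam, -1)
        q = p_pos / (p_pos + p_neg)  # P(Y = +1 | Lambda = lam) under the learned model
        z = w @ f
        total += q * np.log1p(np.exp(-z)) + (1.0 - q) * np.log1p(np.exp(z))
    return total / len(feats) + rho * (w @ w)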
We can in fact prove that stochastic gradient descent running on (2) and (3) is guaranteed to produce
accurate estimates, under conditions which we describe now. First, the problem distribution π needs
to be accurately modeled by some distribution μ in the family that we are trying to learn. That is, for
some α* and β*,

∀Λ ∈ {−1, 0, 1}^m, Y ∈ {−1, 1},   P_{(x,y)∼π*}(λ(x) = Λ, y = Y) = μ_{α*,β*}(Λ, Y).    (4)

Second, given an example (x, y) ∼ π*, the class label y must be independent of the features f(x) given
the labels λ(x). That is,

(x, y) ∼ π*  ⇒  y ⊥ f(x) | λ(x).    (5)

This assumption encodes the idea that the labeling functions, while they may be arbitrarily dependent
on the features, provide sufficient information to accurately identify the class. Third, we assume that
the algorithm used to solve (3) has bounded generalization risk such that for some parameter χ,

E_ŵ[ E_S[ L_{α̂,β̂}(ŵ; S) ] − min_w E_S[ L_{α̂,β̂}(w; S) ] ] ≤ χ.    (6)
Under these conditions, we make the following statement about the accuracy of our estimates, which
is a simplified version of a theorem that is detailed in the appendix.

Theorem 1. Suppose that we run data programming, solving the problems in (2) and (3) using
stochastic gradient descent to produce (α̂, β̂) and ŵ. Suppose further that our setup satisfies the
conditions (4), (5), and (6), and suppose that m ≥ 2000. Then for any ε > 0, if the number of labeling
functions m and the size of the input dataset S are large enough that

|S| ≥ (356 / ε²) log(m / 3),

then our expected parameter error and generalization risk can be bounded by

E[‖α̂ − α*‖²] ≤ mε²,   E[‖β̂ − β*‖²] ≤ mε²,   E[l(ŵ)] − min_w l(w) ≤ χ + ε/27.
We select m ≥ 2000 to simplify the statement of the theorem and give the reader a feel for how ε
scales with respect to |S|. The full theorem with scaling in each parameter (and for arbitrary m) is
presented in the appendix. This result establishes that to achieve both expected loss and parameter
estimate error ε, it suffices to have only m = O(1) labeling functions and |S| = Õ(ε⁻²) training
examples, which is the same asymptotic scaling exhibited by methods that use labeled data. This
means that data programming achieves the same learning rate as methods that use labeled data, while
requiring asymptotically less work from its users, who need to specify O(1) labeling functions rather
than manually label Õ(ε⁻²) examples. In contrast, in the crowdsourcing setting [17], the number of
workers m tends to infinity, whereas here it remains constant as the dataset grows. These results provide
some explanation of why our experimental results suggest that a small number of rules with a large
unlabeled training set can be effective at even complex natural language processing tasks.
4 Handling Dependencies
In our experience with data programming, we have found that users often write labeling functions
that have clear dependencies among them. As more labeling functions are added as the system is
developed, an implicit dependency structure arises naturally amongst the labeling functions: modeling
these dependencies can in some cases improve accuracy. We describe a method by which the user
can specify this dependency knowledge as a dependency graph, and show how the system can use it
to produce better parameter estimates.
Label Function Dependency Graph To support the injection of dependency information into the
model, we augment the data programming specification with a label function dependency graph,
G ⊆ D × {1, . . . , m} × {1, . . . , m}, which is a directed graph over the labeling functions, each of the
edges of which is associated with a dependency type from a class of dependencies D appropriate to
the domain. From our experience with practitioners, we identified four commonly-occurring types of
dependencies as illustrative examples: similar, fixing, reinforcing, and exclusive (see Figure 2).
For example, suppose that we have two functions λ1 and λ2, and λ2 typically labels only when (i)
λ1 also labels, (ii) λ1 and λ2 disagree in their labeling, and (iii) λ2 is actually correct. We call this a
fixing dependency, since λ2 fixes mistakes made by λ1. If λ1 and λ2 were to typically agree rather
than disagree, this would be a reinforcing dependency, since λ2 reinforces a subset of the labels of λ1.
lambda_1(x) = f(x.word)
lambda_2(x) = f(x.lemma)
Similar(lambda_1, lambda_2)

lambda_1(x) = f('.*cause.*')
lambda_2(x) = f('.*not cause.*')
lambda_3(x) = f('.*cause.*')
Fixes(lambda_1, lambda_2)
Reinforces(lambda_1, lambda_3)

lambda_1(x) = x in DISEASES_A
lambda_2(x) = x in DISEASES_B
Excludes(lambda_1, lambda_2)

Figure 2: Examples of labeling function dependency predicates. [Figure: each group is drawn as a factor graph linking Y to the labeling functions via a similar (s), fixing (f), reinforcing (r), or exclusive (e) edge.]
Modeling Dependencies The presence of dependency information means that we can no longer
model our labels using the simple Bayesian network in (1). Instead, we model our distribution as a
factor graph. This standard technique lets us describe the family of generative distributions in terms
of a known factor function h : {−1, 0, 1}^m × {−1, 1} → {−1, 0, 1}^M (in which each entry h_i represents
a factor), and an unknown parameter θ ∈ R^M as

μ_θ(Λ, Y) = Z_θ^{−1} exp(θᵀ h(Λ, Y)),

where Z_θ is the partition function which ensures that μ is a distribution. Next, we will describe how
we define h using information from the dependency graph.

To construct h, we will start with some base factors, which we inherit from (1), and then augment
them with additional factors representing dependencies. For all i ∈ {1, . . . , m}, we let

h_0(Λ, Y) = Y,  h_i(Λ, Y) = Λ_i Y,  h_{m+i}(Λ, Y) = Λ_i,  h_{2m+i}(Λ, Y) = Λ_i² Y,  h_{3m+i}(Λ, Y) = Λ_i².

These factors alone are sufficient to describe any distribution for which the labels are mutually
independent, given the class: this includes the independent family in (1).

We now proceed by adding additional factors to h, which model the dependencies encoded in
G. For each dependency edge (d, i, j), we add one or more factors to h as follows. For a near-duplicate
dependency on (i, j), we add a single factor h_ι(Λ, Y) = 1{Λ_i = Λ_j}, which increases
our prior probability that the labels will agree. For a fixing dependency, we add two factors,
h_ι(Λ, Y) = −1{Λ_i = 0 ∧ Λ_j ≠ 0} and h_{ι+1}(Λ, Y) = 1{Λ_i = −Y ∧ Λ_j = Y}, which encode the idea
that λ_j labels only when λ_i does, and that λ_j fixes errors made by λ_i. The factors for a reinforcing
dependency are the same, except that h_{ι+1}(Λ, Y) = 1{Λ_i = Y ∧ Λ_j = Y}. Finally, for an exclusive
dependency, we have a single factor h_ι(Λ, Y) = −1{Λ_i ≠ 0 ∧ Λ_j ≠ 0}.
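To make the construction concrete, here is a direct transcription of these factors into code (our own sketch; the edge encoding and names are illustrative):

def factor_vector(lam, y, deps):
    # lam: list over {-1, 0, 1}; y: -1 or +1.
    # deps: list of (dep_type, i, j), dep_type in {'similar', 'fixing', 'reinforcing', 'exclusive'}.
    h = [y]                              # h_0
    h += [li * y for li in lam]          # h_i
    h += list(lam)                       # h_{m+i}
    h += [li * li * y for li in lam]     # h_{2m+i}
    h += [li * li for li in lam]         # h_{3m+i}
    for dep, i, j in deps:
        if dep == 'similar':
            h.append(1 if lam[i] == lam[j] else 0)
        elif dep in ('fixing', 'reinforcing'):
            h.append(-1 if (lam[i] == 0 and lam[j] != 0) else 0)
            target = -y if dep == 'fixing' else y  # fixing: lambda_i is wrong; reinforcing: right
            h.append(1 if (lam[i] == target and lam[j] == y) else 0)
        else:  # 'exclusive'
            h.append(-1 if (lam[i] != 0 and lam[j] != 0) else 0)
    return h

The unnormalized probability of a configuration is then exp(θ · h(Λ, Y)).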
Learning with Dependencies We can again solve a maximum likelihood problem like (2) to
learn the parameter θ̂. Using the results, we can continue on to find the noise-aware empirical loss
minimizer by solving the problem in (3). In order to solve these problems in the dependent case, we
typically invoke stochastic gradient descent, using Gibbs sampling to sample from the distributions
used in the gradient update (a sketch of one such sweep follows the conditions below). Under
conditions similar to those in Section 3, we can again provide a bound on the accuracy of these
results. We define these conditions now. First, there must be some set Θ ⊆ R^M that we know our
parameter lies in. This is analogous to the assumptions on α_i and β_i we made in Section 3, and we
can state the following analogue of (4):

∃θ* ∈ Θ s.t. ∀(Λ, Y) ∈ {−1, 0, 1}^m × {−1, 1},  P_{(x,y)∼π*}(λ(x) = Λ, y = Y) = μ_{θ*}(Λ, Y).    (7)

Second, for any θ ∈ Θ, it must be possible to accurately learn θ from full (i.e. labeled) samples of
μ_θ. More specifically, there exists an unbiased estimator θ̂(T) that is a function of some dataset T of
independent samples from μ_θ such that, for some c > 0 and for all θ ∈ Θ,

Cov(θ̂(T)) ⪯ (2c|T|)^{−1} I.    (8)

Third, for any two feasible models θ_1 and θ_2 ∈ Θ,

E_{(Λ_1,Y_1)∼μ_{θ_1}}[ Var_{(Λ_2,Y_2)∼μ_{θ_2}}(Y_2 | Λ_2 = Λ_1) ] ≤ c M^{−1}.    (9)

That is, we will usually be reasonably sure in our guess for the value of Y, even if we guess using
distribution μ_{θ_2} while the labeling functions were actually sampled from (the possibly totally
different) μ_{θ_1}. We can now prove the following result about the accuracy of our estimates.
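The sketch referenced above: one Gibbs sweep over (Λ, Y) for μ_θ, reusing factor_vector. This is our own illustrative implementation, not the paper's; samples from such sweeps drive the stochastic gradient of the log-partition term.

import numpy as np

def gibbs_sweep(lam, y, theta, deps, rng):
    # One sweep: resample Y given Lambda, then each Lambda_i given the rest and Y.
    def logp(l, yy):
        return float(np.dot(theta, factor_vector(l, yy, deps)))
    w = np.array([logp(lam, -1), logp(lam, +1)])
    p = np.exp(w - w.max()); p /= p.sum()
    y = -1 if rng.random() < p[0] else +1
    for i in range(len(lam)):
        scores = []
        for v in (-1, 0, 1):
            lam[i] = v
            scores.append(logp(lam, y))
        scores = np.array(scores)
        p = np.exp(scores - scores.max()); p /= p.sum()
        lam[i] = (-1, 0, 1)[rng.choice(3, p=p)]
    return lam, y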
Features    Method    KBP (News)               Genomics                 Pharmacogenomics
                      Prec.   Rec.    F1       Prec.   Rec.    F1       Prec.   Rec.    F1
Hand-tuned  ITR       51.15   26.72   35.10    83.76   41.67   55.65    68.16   49.32   57.23
Hand-tuned  DP        50.52   29.21   37.02    83.90   43.43   57.24    68.36   54.80   60.83
LSTM        ITR       37.68   28.81   32.66    69.07   50.76   58.52    32.35   43.84   37.23
LSTM        DP        47.47   27.88   35.78    75.48   48.48   58.99    37.63   47.95   42.17

Table 1: Precision/Recall/F1 scores using data programming (DP), as compared to the distant supervision
ITR approach, with both hand-tuned and LSTM-generated features.
Theorem 2. Suppose that we run stochastic gradient descent to produce θ̂ and ŵ, and that our setup
satisfies the conditions (5)-(9). Then for any ε > 0, if the input dataset S is large enough that

|S| ≥ (2 / (c² ε²)) log( 2‖θ_0 − θ*‖² / (cε) ),

then our expected parameter error and generalization risk can be bounded by

E[‖θ̂ − θ*‖²] ≤ Mε²,   E[l(ŵ)] − min_w l(w) ≤ χ + cε/2.
As in the independent case, this shows that we need only |S| = Õ(ε⁻²) unlabeled training examples
to achieve error O(ε), which is the same asymptotic scaling as supervised learning methods. This
suggests that while we pay a computational penalty for richer dependency structures, we are no less
statistically efficient. In the appendix, we provide more details, including an explicit description of
the algorithm and the step size used to achieve this result.
5 Experiments
We seek to experimentally validate three claims about our approach. Our first claim is that data
programming can be an effective paradigm for building high quality machine learning systems,
which we test across three real-world relation extraction applications. Our second claim is that data
programming can be used successfully in conjunction with automatic feature generation methods,
such as LSTM models. Finally, our third claim is that data programming is an intuitive and productive
framework for domain-expert users, and we report on our initial user studies.
Relation Mention Extraction Tasks In the relation mention extraction task, our objects are relation mention candidates x = (e1 , e2 ), which are pairs of entity mentions e1 , e2 in unstructured text,
and our goal is to learn a model that classifies each candidate as either a true textual assertion of the
relation R(e1 , e2 ) or not. We examine a news application from the 2014 TAC-KBP Slot Filling challenge2 , where we extract relations between real-world entities from articles [2]; a clinical genomics
application, where we extract causal relations between genetic mutations and phenotypes from the
scientific literature3 ; and a pharmacogenomics application where we extract interactions between
genes, also from the scientific literature [21]; further details are included in the Appendix.
For each application, we or our collaborators originally built a system where a training set was
programmatically generated by ordering the labeling functions as a sequence of if-then-return
statements, and for each candidate, taking the first label emitted by this script as the training label.
We refer to this as the if-then-return (ITR) approach, and note that it often required significant domain
expert development time to tune (weeks or more). For this set of experiments, we then used the same
labeling function sets within the framework of data programming. For all experiments, we evaluated
on a blind hand-labeled evaluation set. In Table 1, we see that we achieve consistent improvements:
on average by 2.34 points in F1 score, including what would have been a winning score on the 2014
TAC-KBP challenge [30].
We observed these performance gains across applications with very different labeling function sets.
We describe the labeling function summary statistics (coverage is the percentage of objects that
had at least one label, overlap is the percentage of objects with more than one label, and conflict is
the percentage of objects with conflicting labels) and see in Table 2 that even in scenarios where
m is small, and conflict and overlap are relatively less common, we still realize performance gains.

2 http://www.nist.gov/tac/2014/KBP/
3 https://github.com/HazyResearch/dd-genomics
Additionally, on a disease mention extraction task (see Usability Study), which was written from
scratch within the data programming paradigm, allowing developers to supply dependencies of the
basic types outlined in Sec. 4 led to a 2.3 point F1 score boost.
Application       # of LFs   Coverage   |S_{λ,0}|   Overlap   Conflict   F1 Impr. (HT)   F1 Impr. (LSTM)
KBP (News)        40         29.39      2.03M       1.38      0.15       1.92            3.12
Genomics          146        53.61      256K        26.71     2.05       1.59            0.47
Pharmacogenomics  7          7.70       129K        0.35      0.32       3.60            4.94
Diseases          12         53.32      418K        31.81     0.98       N/A             N/A

Table 2: Labeling function (LF) summary statistics, sizes of generated training sets S_{λ,0} (only counting non-zero
labels), and relative F1 score improvement over baseline ITR methods for hand-tuned (HT) and LSTM-generated
(LSTM) feature sets.
Automatically-generated Features We additionally compare both hand-tuned and automatically-generated features, where the latter are learned via an LSTM recurrent neural network (RNN) [14].
Conventional wisdom states that deep learning methods such as RNNs are prone to overfitting to the
biases of the imperfect rules used for programmatic supervision. In our experiments, however, we
find that using data programming to denoise the labels can mitigate this issue, and we report a 9.79
point boost to precision and a 3.12 point F1 score improvement on the benchmark 2014 TAC-KBP
(News) task, over the baseline if-then-return approach. Additionally for comparison, our approach is
a 5.98 point F1 score improvement over a state-of-the-art LSTM approach [32].
Usability Study One of our hopes is that a user without expertise in ML will be more productive
iterating on labeling functions than on features. To test this, we arranged a hackathon involving
a handful of bioinformatics researchers, using our open-source information extraction framework
Snorkel4 (formerly DDLite). Their goal was to build a disease tagging system which is a common
and important challenge in the bioinformatics domain [11]. The hackathon participants did not have
access to a labeled training set nor did they perform any feature engineering. The entire effort was
restricted to iterative labeling function development and the setup of candidates to be classified. In
under eight hours, they had created a training set that led to a model which scored within 10 F1 points
of the supervised baseline; the gap was mainly due to a recall issue in the candidate extraction phase.
This suggests data programming may be a promising way to build high quality extractors, quickly.
6 Conclusion and Future Work
We introduced data programming, a new approach to generating large labeled training sets. We
demonstrated that our approach can be used with automatic feature generation techniques to achieve
high quality results. We also provided anecdotal evidence that our methods may be easier for domain
experts to use. We hope to explore the limits of our approach on other machine learning tasks that
have been held back by the lack of high-quality supervised datasets, including those in other domains
such as imaging and structured prediction.
Acknowledgements Thanks to Theodoros Rekatsinas, Manas Joglekar, Henry Ehrenberg, Jason
Fries, Percy Liang, the DeepDive and DDLite users and many others for their helpful conversations.
The authors acknowledge the support of: DARPA FA8750-12-2-0335; NSF IIS-1247701; NSF CCF-1111943; DOE 108845; NSF CCF-1337375; DARPA FA8750-13-2-0039; NSF IIS-1353606; ONR
N000141210041 and N000141310129; NIH U54EB020405; DARPA's SIMPLEX program; Oracle;
NVIDIA; Huawei; SAP Labs; Sloan Research Fellowship; Moore Foundation; American Family
Insurance; Google; and Toshiba. The views and conclusions expressed in this material are those of the
authors and should not be interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of DARPA, AFRL, NSF, ONR, NIH, or the U.S. Government.
4 snorkel.stanford.edu
References
[1] E. Alfonseca, K. Filippova, J.-Y. Delort, and G. Garrido. Pattern learning for relation extraction with a
hierarchical topic model. In Proceedings of the ACL.
[2] G. Angeli, S. Gupta, M. Jose, C. D. Manning, C. R?, J. Tibshirani, J. Y. Wu, S. Wu, and C. Zhang.
Stanford?s 2014 slot filling systems. TAC KBP, 695, 2014.
[3] A. Balsubramani and Y. Freund. Scalable semi-supervised aggregation of classifiers. In Advances in
Neural Information Processing Systems, pages 1351?1359, 2015.
[4] D. Berend and A. Kontorovich. Consistency of weighted majority votes. In NIPS 2014.
[5] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the
eleventh annual conference on Computational learning theory, pages 92?100. ACM, 1998.
[6] J. Bootkrajang and A. Kab?n. Label-noise robust logistic regression and its applications. In Machine
Learning and Knowledge Discovery in Databases, pages 143?158. Springer, 2012.
[7] R. Bunescu and R. Mooney. Learning to extract relations from the web using minimal supervision. In
Annual meeting-association for Computational Linguistics, volume 45, page 576, 2007.
[8] M. Craven, J. Kumlien, et al. Constructing biological knowledge bases by extracting information from text
sources. In ISMB, volume 1999, pages 77?86, 1999.
[9] N. Dalvi, A. Dasgupta, R. Kumar, and V. Rastogi. Aggregating crowdsourced binary ratings. In Proceedings
of the 22Nd International Conference on World Wide Web, WWW ?13, pages 285?294, 2013.
[10] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the em
algorithm. Applied statistics, pages 20?28, 1979.
[11] R. I. Do?gan and Z. Lu. An improved corpus of disease mentions in pubmed citations. In Proceedings of
the 2012 workshop on biomedical natural language processing.
[12] H. R. Ehrenberg, J. Shin, A. J. Ratner, J. A. Fries, and C. R?. Data programming with ddlite: putting
humans in a different part of the loop. In HILDA@ SIGMOD, page 13, 2016.
[13] H. Gao, G. Barbier, R. Goolsby, and D. Zeng. Harnessing the crowdsourcing power of social media for
disaster relief. Technical report, DTIC Document, 2011.
[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735?1780, 1997.
[15] R. Hoffmann, C. Zhang, X. Ling, L. Zettlemoyer, and D. S. Weld. Knowledge-based weak supervision for
information extraction of overlapping relations. In Proceedings of the ACL.
[16] M. Joglekar, H. Garcia-Molina, and A. Parameswaran. Comprehensive and reliable crowd assessment
algorithms. In Data Engineering (ICDE), 2015 IEEE 31st International Conference on.
[17] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In Advances in
neural information processing systems, pages 1953?1961, 2011.
[18] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma,
et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. arXiv
preprint arXiv:1602.07332, 2016.
[19] M.-A. Krogel and T. Scheffer. Multi-relational learning, text mining, and semi-supervised learning for
functional genomics. Machine Learning, 57(1-2):61?81, 2004.
[20] G. Lugosi. Learning with an unreliable teacher. Pattern Recognition, 25(1):79 ? 87, 1992.
[21] E. K. Mallory, C. Zhang, C. R?, and R. B. Altman. Large-scale extraction of gene interactions from
full-text literature using deepdive. Bioinformatics, 2015.
[22] M. Mintz, S. Bills, R. Snow, and D. Jurafsky. Distant supervision for relation extraction without labeled
data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL, 2009.
[23] N. Natarajan, I. S. Dhillon, P. K. Ravikumar, and A. Tewari. Learning with noisy labels. In Advances in
Neural Information Processing Systems 26.
[24] F. Parisi, F. Strino, B. Nadler, and Y. Kluger. Ranking and combining multiple predictors without labeled
data. Proceedings of the National Academy of Sciences, 111(4):1253?1258, 2014.
[25] S. Riedel, L. Yao, and A. McCallum. Modeling relations and their mentions without labeled text. In
Machine Learning and Knowledge Discovery in Databases, pages 148?163. Springer, 2010.
[26] B. Roth and D. Klakow. Feature-based models for improving the quality of noisy training data for relation
extraction. In Proceedings of the 22nd ACM Conference on Knowledge management.
[27] B. Roth and D. Klakow. Combining generative and discriminative model scores for distant supervision. In
EMNLP, pages 24?29, 2013.
[28] R. E. Schapire and Y. Freund. Boosting: Foundations and algorithms. MIT press, 2012.
[29] J. Shin, S. Wu, F. Wang, C. De Sa, C. Zhang, and C. R?. Incremental knowledge base construction using
deepdive. Proceedings of the VLDB Endowment, 8(11):1310?1321, 2015.
[30] M. Surdeanu and H. Ji. Overview of the english slot filling track at the tac2014 knowledge base population
evaluation. In Proc. Text Analysis Conference (TAC2014), 2014.
[31] S. Takamatsu, I. Sato, and H. Nakagawa. Reducing wrong labels in distant supervision for relation
extraction. In Proceedings of the ACL.
[32] P. Verga, D. Belanger, E. Strubell, B. Roth, and A. McCallum. Multilingual relation extraction using
compositional universal schema. arXiv preprint arXiv:1511.06396, 2015.
[33] Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet em: A provably optimal algorithm
for crowdsourcing. In Advances in Neural Information Processing Systems 27, pages 1260?1268. 2014.
6,107 | 6,524 | A
Appendix: Proof of Theorem 1

We first show that the estimate is unbiased. Indeed, for every i ≠ j we can rewrite L(z) as E_π ℓ_{π(i),π(j)}(z). Therefore,

    L(z) = (1/(k² - k)) Σ_{i≠j∈[k]} L(z) = (1/(k² - k)) Σ_{i≠j∈[k]} E_π ℓ_{π(i),π(j)}(z) = E_π L̂(z),

which proves that the multibatch estimate is unbiased.

Next, we turn to analyze the variance of the multibatch estimate. Let I ⊆ [k]⁴ be all the indices (i, j, s, t) s.t. i ≠ j, s ≠ t, and we partition I into I₁ ∪ I₂ ∪ I₃, where I₁ is the set where i = s and j = t, I₂ is when all indices are different, and I₃ is when i = s and j ≠ t or i ≠ s and j = t. Then:

    E_π ‖∇L̂(z) - ∇L(z)‖² = (1/(k² - k)²) Σ_{(i,j,s,t)∈I} E_π (∇ℓ_{π(i),π(j)}(z) - ∇L(z)) · (∇ℓ_{π(s),π(t)}(z) - ∇L(z))
                          = Σ_{r=1}^d (1/(k² - k)²) Σ_{q=1}^3 Σ_{(i,j,s,t)∈I_q} E_π (∇_r ℓ_{π(i),π(j)}(z) - ∇_r L(z)) (∇_r ℓ_{π(s),π(t)}(z) - ∇_r L(z))

For every r, denote by A^(r) the matrix with A^(r)_{i,j} = ∇_r ℓ_{i,j}(z) - ∇_r L(z). Observe that for every r, E_{i≠j} A^(r)_{i,j} = 0, and that

    Σ_r Σ_{i≠j} E (A^(r)_{i,j})² = Σ_{i≠j} E ‖∇ℓ_{i,j}(z) - ∇L(z)‖².

Therefore,

    E_π ‖∇L̂(z) - ∇L(z)‖² = Σ_{r=1}^d (1/(k² - k)²) Σ_{q=1}^3 Σ_{(i,j,s,t)∈I_q} E_π A^(r)_{π(i),π(j)} A^(r)_{π(s),π(t)}

Let us momentarily fix r and omit the superscript from A^(r). We consider the value of E_π A_{π(i),π(j)} A_{π(s),π(t)} according to the value of q.

- For q = 1: we obtain E_π A²_{π(i),π(j)}, which is the variance of the random variable ∇_r ℓ_{i,j}(z) - ∇_r L(z).
- For q = 2: When we fix i, j, s, t, which are all different, and take expectation over π, then all products of off-diagonal elements of A appear the same number of times in E_π A_{π(i),π(j)} A_{π(s),π(t)}. Therefore, this quantity is proportional to Σ_{p≠r} v_p v_r, where v is the vector of all non-diagonal entries of A. Since Σ_p v_p = 0, we obtain (using Lemma 1) that Σ_{p≠r} v_p v_r ≤ 0, which means that the entire sum for this case is non-positive.
- For q = 3: Let us consider the case when i = s and j ≠ t; the derivation for the case when i ≠ s and j = t is analogous. The expression we obtain is E_π A_{π(i),π(j)} A_{π(i),π(t)}. This is like first sampling a row and then sampling, without replacement, two indices from the row (while not allowing to take the diagonal element). So, we can rewrite the expression as:

    E_π A_{π(i),π(j)} A_{π(s),π(t)} = E_{i~[m]} E_{j,t∈[m]\{i}: j≠t} A_{i,j} A_{i,t} ≤ E_{i~[m]} (E_{j≠i} A_{i,j})² = E_{i~[m]} (Ā_i)²,    (5)

where we denote Ā_i = E_{j≠i} A_{i,j} and in the inequality we used again Lemma 1.

Finally, the bound on the variance follows by observing that the number of summands in I₁ is k² - k and the number of summands in I₃ is O(k³). This concludes our proof.

Lemma 1 Let v ∈ ℝⁿ be any vector. Then,

    E_{s≠t}[v_s v_t] ≤ (E_i[v_i])².

In particular, if E_i[v_i] = 0 then Σ_{s≠t} v_s v_t ≤ 0.

Proof  For simplicity, we use E[v] for E_i[v_i] and E[v²] for E_i[v_i²]. Then:

    E_{s≠t} v_s v_t = (1/(n² - n)) Σ_{s=1}^n Σ_{t=1}^n v_s v_t - (1/(n² - n)) Σ_{s=1}^n v_s²
                    = (1/(n² - n)) (Σ_{s=1}^n v_s)(Σ_{t=1}^n v_t) - (1/(n² - n)) Σ_{s=1}^n v_s²
                    = (n²/(n² - n)) E[v]² - (n/(n² - n)) E[v²]
                    = (n/(n² - n)) (E[v]² - E[v²]) + E[v]²
                    ≤ 0 + E[v]²
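For intuition, Lemma 1 is easy to sanity-check numerically; the short script below is an illustration added here, not part of the original proof, and verifies both the inequality and the zero-mean corollary on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def offdiag_mean(v):
    """E_{s != t}[v_s v_t]: average of v_s * v_t over ordered pairs s != t."""
    n = len(v)
    total = v.sum() ** 2 - (v ** 2).sum()  # sum over s != t of v_s * v_t
    return total / (n * (n - 1))

for _ in range(1000):
    v = rng.normal(size=rng.integers(2, 20))
    assert offdiag_mean(v) <= v.mean() ** 2 + 1e-12   # Lemma 1
    v0 = v - v.mean()                                 # force E_i[v_i] = 0
    assert offdiag_mean(v0) <= 1e-12                  # corollary: sum_{s != t} v_s v_t <= 0
```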
6,108 | 6,525 | Verification Based Solution for Structured MAB
Problems
Zohar Karnin
Yahoo Research
New York, NY 10036
[email protected]
Abstract
We consider the problem of finding the best arm in a stochastic Multi-armed
Bandit (MAB) game and propose a general framework based on verification that
applies to multiple well-motivated generalizations of the classic MAB problem. In
these generalizations, additional structure is known in advance, causing the task of
verifying the optimality of a candidate to be easier than discovering the best arm.
Our results are focused on the scenario where the failure probability must be very
low; we essentially show that in this high confidence regime, identifying the best
arm is as easy as the task of verification. We demonstrate the effectiveness of our
framework by applying it, and matching or improving the state-of-the art results in
the problems of: Linear bandits, Dueling bandits with the Condorcet assumption,
Copeland dueling bandits, Unimodal bandits and Graphical bandits.
1
Introduction
The Multi-Armed Bandit (MAB) game is one where in each round the player chooses an action,
also referred to as an arm, from a pre-determined set. The player then gains a reward associated
with the chosen arm and observes the reward while rewards associated with the other arms are not
revealed. In the stochastic setting, each arm x has a fixed associated value μ(x) throughout all rounds,
and the reward associated with the arm is a random variable, independent of the history, with an
expected value of μ(x). In this paper we focus on the pure exploration task [9] in the stochastic
setting, where our objective is to identify the arm maximizing μ(x) with sufficiently high probability,
while minimizing the required number of rounds, otherwise known as the query complexity. This
task, as opposed to the classic task of maximizing the sum of accumulated rewards is motivated
by numerous scenarios where exploration (i.e. trying multiple options) is only possible in an initial
testing phase, and not throughout the running time of the game.
As an example consider a company testing several variations of a (physical) product, and then once
realizing the best one, moving to a production phase where the product is massively produced and
shipped to numerous vendors. It is very natural to require that the identified option is the best one
with very high probability, as a mistake can be very costly. Generally speaking, the vast majority of
use-cases of pure exploration require the error probability δ to be very small, so much so that
even a logarithmic dependence on δ is non-negligible. Another example to demonstrate this is that
of explore-then-exploit type algorithms. There are many examples of papers providing a solution to a
regret-based MAB problem where the first phase consists of identifying the best arm with probability
at least 1 - 1/T, and then using it in the remainder of the rounds. Here, δ = 1/T is often assumed to
be the only non-constant.
We do not focus on the classic MAB problem but rather on several extensions of it for settings where
we are given as input some underlying structural properties of the reward function ?. We elaborate
on the formal definitions and different scenarios in Section 2. Another extension we consider is that
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
of Dueling Bandits where, informally, we do not query a single arm but rather a pair, and rather than
observing the reward of the arms we observe a hint as to the difference between their associated ?
values. Each extension we discuss is motivated by different scenarios which we elaborate on in the
upcoming sections. In all of the cases mentioned, we focus on the regime of high confidence, meaning
that the failure probability δ is very small.
Notice that due to the additional structure (that does not exist in the classic case), verifying a candidate
arm is indeed the best arm can be a much easier task, at least conceptually, compared to that of
discovering which arm is the best. This observation leads us to the following design: Explore the
arms and obtain a candidate arm that is the best arm w.p. 1 - κ for some constant κ, then verify it
is indeed the best with confidence 1 - δ. If the exploration procedure happened to be correct, the
query complexity of the problem will be composed of a sum of two quantities. One is that of the
exploration algorithm, which is completely independent of δ, and the other depends on δ but is
the query complexity of the easier verification task. The query complexity is either dominated by
that of the verification task, or by that of the original task with a constant failure probability. Either
way, for small values of δ the savings are potentially huge. As it turns out, as discussed in Section 3,
a careful combination of an exploration and a verification algorithm can achieve an expected query
complexity of Hexplore + Hverify, where Hexplore is the exploration query complexity, independent of δ,
and Hverify is the query complexity of the verification procedure with confidence 1 - δ. Below, we
design exploration and verification algorithms for the problems of: Dueling bandits (§4), Linear bandits
(§5), Unimodal graphical bandits (§6) and Graphical bandits¹. In the corresponding sections we provide
short reviews of each MAB problem, and analyze their exploration and verification algorithms. Our
results improve upon the state-of-the-art results in each of these mentioned problems (see Table 1 for
a detailed comparison).
Related Works: We are aware of one attempt to capture multiple (stochastic) bandit problems
in a single frameworks, given in [20]. The focus there is mostly on problems where the observed
random variables do not necessarily reflect the reward, such as the dueling bandit problem, rather
than methods to exploit structure between the arms. For example, in the case of the dueling bandit
problem with the Condorcet assumption their algorithm does not take advantage of the structural
properties and the corresponding query complexity is larger than that obtained here (see Section 4.1).
We review the previous literature of each specific problem in the corresponding sections.
2
Formulation of Bandit Problems
The pure exploration Multi-Armed Bandit (MAB) problem, in the stochastic setting, can be generally
formalized as follows. Our input consists of a set K of arms, where each arm x is associated with
some reward μ(x). In each round t we play an arm x_t and observe the outcome of a random variable
whose expected value is μ(x_t). Other non-stochastic settings exist, yet they are outside the scope of
our paper; see [4] for a survey on bandit problems, including the stochastic and non-stochastic settings.
The objective in the best arm identification problem is to identify the arm² x* = arg max_x μ(x) while
minimizing the expected number of queries to the reward values of the arms. Other than the classic
MAB problem, where K is a finite set and μ is an arbitrary function, there exist other frameworks
where some structure is assumed regarding the behavior of μ over the arms of K. An example of a
common framework matching this formulation, which we will analyze in detail in Section 5, is that
of the linear MAB. Here, K is a compact subset of ℝ^d, and the reward function μ is assumed to be
linear. Unlike the classic MAB case, an algorithm can take advantage of the structure of μ and obtain
a performance that is independent of the size of K. Yet another example, discussed in Section 6, is that
of unimodal bandits, where we are given a graph whose vertices are the arms, and it is guaranteed
that the best arm is the unique arm having a maximal value among its neighbors in the graph.
The above general framework captures many variants of the MAB problem, yet does not capture
the Dueling Multi Armed Bandit (DMAB) problem. Here, the input as before consists of a set of
arms denoted by K, yet we are not allowed to play a single arm in a round but rather a pair x, y ∈ K.
The general definition of the observation from playing the pair x, y is a random variable whose
¹ Due to space restrictions we defer the section on Graphical bandits [7] to the extended version.
² This objective is naturally extended in the PAC setting where we are interested in an arm that is approximately
the best arm. For simplicity we restrict our focus to the best arm identification problem. We note that our general
framework of exploration and verification can be easily extended to handle the PAC setting as well.
expected value is P(x, y), where P : K × K → ℝ. The original motivating example for the DMAB
problem [22] is that of information retrieval, where a query to a pair of arms is a presentation of the
interleaved results of two ranking algorithms. The output is 0 or 1, depending on the choice of the
user, i.e., whether she chose a result from one ranker or the other. The μ score here can be thought
of as a quality score for a ranker, defined according to the P scores. We elaborate on the motivation
for the DMAB problem and the exact definition of the best arm in Section 4. In an extended version
of this paper we discuss the problem of graphical bandits, which is in some sense a generalization of
the dueling bandit problem. There, we are not allowed to query any pair but rather pairs from some
predefined set E ⊆ K × K.
3
Boosting the Exploration Process with a Verification Policy
In what follows we present results for different variants of the MAB problem. We discuss two types
of problems. The first is the well known pure exploration problem. Our input is the MAB instance,
including the set of arms and possible structural information, and a confidence parameter κ. The
objective is to find the best arm w.p. at least 1 - κ while using a minimal number of queries. We
often discuss variants of the exploration problem where, in addition to finding the best arm, we wish
to obtain some additional information about the problem, such as an estimate of the gaps of the reward
values of suboptimal arms from the optimal one, the identity of important arm pairs, etc. We refer
to this additional information as an advice vector α, and our objective is to minimize queries while
obtaining a sufficiently accurate advice vector and the true optimal arm with probability at least
1 - κ. For each MAB problem we describe an algorithm referred to as FindBestArm with a query
complexity of³ Hexplore · log(1/κ) that obtains an advice vector α that is sufficiently accurate⁴ w.p. at
least 1 - κ.
Definition 1. Let FindBestArm be an algorithm that, given the MAB problem and a confidence parameter κ > 0, has the following guarantees: (1) with probability at least 1 - κ it outputs a correct best
arm and advice vector α; (2) its expected query complexity is Hexplore · log(1/κ), where Hexplore is
some instance specific complexity (that is not required to be known).
The second type of problem is that of verification. Here we are given as input not only the MAB
problem and a confidence parameter δ, but also an advice vector α, including the identity of a candidate
optimal arm.
Definition 2. Let VerifyBestArm be an algorithm that, given the MAB problem, a confidence parameter
δ > 0 and an advice vector α including a proposed identity of the best arm, has the following
guarantees: (1) if the candidate optimal arm is not the actual optimal arm, the output is "fail" w.p. at
least 1 - δ; (2) if the advice vector is sufficiently accurate, and in particular the candidate is indeed
the optimal arm, we should output "success" w.p. at least 1 - δ; (3) if the advice vector is sufficiently
accurate, the expected query complexity is Hverify log(1/δ); otherwise, it is Hexplore log(1/δ).
It is very common that Hverify ≪ Hexplore, as it is clearly an easier problem to simply verify the
identity of the optimal arm rather than discover it. Our main result is thus somewhat surprising, as
it essentially shows that in the regime of high confidence, the best arm identification problem is as
easy as verifying the identity of a candidate. Specifically, we provide a complexity that is additive in
Hexplore and log(1/δ) rather than multiplicative. The formal result is as follows.
Algorithm 1 Explore-Verify Framework
Input: Best arm identification problem, oracle access to FindBestArm and VerifyBestArm with
failure probability tuning, failure probability parameter δ, parameter κ.
for all r = 1 . . . do
    Call FindBestArm with failure probability κ; denote by α its output.
    Call VerifyBestArm with advice vector α, which includes a candidate best arm x̂, and failure
    probability δ/(2r²). If it succeeded, return x̂. Else, continue to the next iteration.
end for
³ The general form of such algorithms is in fact H₁ log(1/κ) + H₀. For simplicity we state our results for
the form H log(1/κ); the general statements are an easy modification.
⁴ The exact definition of sufficiently accurate is given per problem instance.
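To make the framework concrete, here is a minimal sketch of Algorithm 1 in Python. This is our illustration, not code from the paper: `find_best_arm` and `verify_best_arm` stand for problem-specific routines satisfying Definitions 1 and 2, and the `advice.candidate` field is a hypothetical interface.

```python
def explore_verify(find_best_arm, verify_best_arm, delta, kappa):
    """Algorithm 1: alternate exploration (confidence kappa) with
    verification (confidence delta / (2 r^2)) until verification succeeds."""
    r = 0
    while True:
        r += 1
        advice = find_best_arm(kappa)                    # contains a candidate best arm
        if verify_best_arm(advice, delta / (2 * r ** 2)):
            return advice.candidate                      # verified best arm
```

Theorem 3 below then bounds the expected total number of queries these two subroutines consume jointly.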
Theorem 3. Assume that Algorithm 1 is given oracle access to FindBestArm and VerifyBestArm
with the above mentioned guarantees, and a confidence parameter δ < 1/3. For any κ < 1/3, the
algorithm identifies the best arm with probability 1 - δ while using an expected number of at most

    O( Hexplore log(1/κ) + (Hverify + κ · Hexplore) log(1/δ) )
The following provides the guarantees for two suggested values of κ. The first may not be known to
us but can very often be estimated beforehand. The second depends only on δ, hence is always known
in advance.
Corollary 4. By setting κ = min{1/3, Hverify/Hexplore}, Algorithm 1 has an expected number of at
most

    O( Hexplore log(Hexplore/Hverify) + Hverify log(1/δ) )

queries. By setting κ = min{1/3, 1/log(1/δ)}, Algorithm 1 has an expected query complexity of at
most

    O( Hexplore log(log(1/δ)) + Hverify log(1/δ) )
Notice that by setting κ to min{1/3, 1/log(1/δ)}, for any practical use-case, the dependence on δ
in the left summand is nonexistent. In particular, this default value for κ provides a multiplicative
saving of either Hexplore/Hverify, i.e. the ratio between the exploration and verification problems, or
log(1/δ)/log(log(1/δ)). Since log(1/δ) is rarely a negligible term, and, as we will see in what follows, neither is
Hexplore/Hverify, the savings are significant; hence the effectiveness of our result.
Proof of Theorem 3. In the analysis we often discuss the output of the sub-procedures in round r > 1,
even if the algorithm terminated before round r. We note that these values are well-defined random
variables regardless of the fact that we may not reach the round. To prove the correctness of the
algorithm, notice that since Σ_{r=1}^∞ r^{-2} ≤ 2, we have with probability at least 1 - δ that all runs of
VerifyBestArm do not err. Since we halt only when VerifyBestArm outputs "success", our algorithm
indeed outputs the best arm w.p. at least 1 - δ.

We proceed to analyze the expected query complexity, and start with a simple observation. Let
QC_single(r) denote the expected query complexity in round r, and let Y_r be the indicator variable of
whether the algorithm reached round r. Since Y_r is independent of the procedures running in round r,
and in particular of the number of queries required by them, we have that the total expected query
complexity is

    E[ Σ_{r=1}^∞ Y_r QC_single(r) ] = Σ_{r=1}^∞ E[Y_r] · E[QC_single(r)]

Hence, we proceed to analyze E[QC_single(r)] and E[Y_r] separately. For E[QC_single(r)] we have

    E[QC_single(r)] ≤ Hexplore log(1/κ) + ((1 - κ) Hverify + κ Hexplore) log(2r²/δ)
                    ≤ Hexplore log(1/κ) + (κ Hexplore + Hverify) log(2r²/δ)

To explain the first inequality, the first summand is the complexity of FindBestArm. The second
summand is that of VerifyBestArm, decomposed into the scenario where FindBestArm succeeded
vs. the scenario where it failed. To compute E[Y_r], we notice that Y_r is an indicator function, hence
E[Y_r] = Pr[Y_r = 1]. In order for Y_r to take the value 1 we must have that for all rounds r' < r either
VerifyBestArm or FindBestArm have failed. Since the failures or successes of the algorithms at different
rounds are independent we have

    Pr[Y_r = 1] ≤ Π_{r'<r} ( κ + δ/(2(r')²) ) ≤ 2^{1-r}.

The last inequality is since δ, κ ≤ 1/3. We get that the expected number of queries required by the
algorithm is at most

    Σ_{r=1}^∞ 2^{1-r} ( Hexplore log(1/κ) + (κ Hexplore + Hverify) log(2r²/δ) )
      = Σ_{r=1}^∞ 2^{1-r} ( Hexplore log(1/κ) + (κ Hexplore + Hverify) log(1/δ) )
        + Σ_{r=1}^∞ 2^{1-r} log(2r²) (κ Hexplore + Hverify)
      = O( Hexplore log(1/κ) + (κ Hexplore + Hverify) log(1/δ) )
MAB task                      | cite | existing solution                                                  | our solution                                                                                                            | improvement ratio
Dueling Bandits (Condorcet)   | [16] | K^{1+ε} + Σ_{x≠x*} min_{y: p_xy<0} p_xy^{-2} log(1/δ)              | Σ_{y≠x*} p_{x*y}^{-2} + Σ_{x≠x*} Σ_{y≠x} min{p_xy^{-2}, min_{y': p_xy'<0} p_xy'^{-2}} + Σ_{x≠x*} min_{y: p_xy<0} p_xy^{-2} log(1/δ) | K^α for large δ
Linear Bandits                | [19] | d log(K/δ) / Δ²_min                                                | d log(Kd/Δ²_min) / Δ²_min + ρ*(Y*) log(1/δ)                                                                             | up to d for small δ
Unimodal Bandits (line graph) | [6]  | Σ_{x≠x*} Λ_x^{-2} + Σ_{x∈Γ(x*)} Δ_x^{-2} log(1/δ)                  | Σ_{x≠x*} Δ_x^{-2} log(K/Δ_min) + Σ_{x∈Γ(x*)} Δ_x^{-2} log(1/δ)                                                          | can be Θ(K) in typical settings (large δ)
Graphical Bandits             | [7]  | KD log(K/δ) log²(K) / Δ²_min                                       | KD log³(K) / Δ²_min + KD log(1/δ) / Δ²_min                                                                              | log²(K)

Table 1: Comparison between the results obtained by our techniques and the state-of-the-art results in several bandit problems. K represents
the total number of arms and δ the failure probability; in the case of linear bandits, d is the dimension of the space in which the arms lie. The
definitions of the rest of the problem-specific quantities are given in the corresponding sections. The ratio between the solutions, for a typical case,
is given in the last column.
In the following sections we provide algorithms for several bandit problems using the framework
of Theorem 3. In Table 1 we provide a comparison between the state-of-the-art results prior to this
paper and the results here.
4
Application to Dueling Bandits
The dueling bandit problem, introduced in [22], arises naturally in domains where feedback is
more reliable when given as a pairwise preference (e.g., when it is provided by a human) and
specifying real-valued feedback instead would be arbitrary or inefficient. Examples include ranker
evaluation [14, 23, 12] in information retrieval, ad placement and recommender systems. As with
other preference learning problems [10], feedback consists of a pairwise preference between a
selected pair of arms, instead of scalar reward for a single selected arm, as in the K-armed bandit
problem.
The formulation of the problem is the following. Given a set of arms K, a query is to a pair x, y 2 K
and its output is a r.v. in { 1, 1} with an expected reward of Pij . It is assumed that P is antisymmetric meaning5 P (x, y) = P (y, x) and the ? values are determined by those of P . One
common assumption regarding P is the existence of a Condorcet winner, meaning there exist some
x? 2 K for which P (x? , y) 0 for all y 2 K. In this case, x? is defined as the best arm and the
reward associated with arm y is typically P (x? , y). A more general framework can be considered
where a Condorcet winner is not assumed to exist. In the absence of a Condorcet winner there is no
clear answer as to which arm is the best; several approaches are discussed in [20], [5], and recently in
[8, 3], that use some of the notions proposed by social choice theorists, such as the Copeland score or
the Borda score to measure the quality of each arm, or game theoretic concepts to determine the best
worst-case strategy over arms; we do not elaborate on all of them as they are outside the scope of this
paper. In Appendix B.2 we discuss one solution based on the Copeland score, where ?(x) is defined
as the number of arm y 6= x where P (x, y) > 0.
A general framework capturing both the MAB and DMAB scenarios is that of partial monitoring
games introduced by [18]. In this framework, when playing an arm K one obtains a reward ?(x) yet
observes a different function h(x). Some connection between h and ? is known in advance and based
on it, one can design a strategy to discover the best arm or minimize regret. As we do not present
results regarding this framework we do not elaborate on it any further, but rather mention that our
results, in terms of query complexity, cannot be matched by the existing results there.
⁵ It is actually common to define the output of P as a number in [0, 1] and have P(x, y) = 1 - P(y, x), but
both definitions are equivalent up to a linear shift of P.
4.1
Dueling Bandits with the Condorcet Assumption
The Condorcet assumption in the Dueling bandit setting asserts the existence of an arm x* that beats
all other arms. In this section we discuss a solution for finding this arm under the assumption of its
existence. Recall that the observable input consists of a set of arms K of size K. There is assumed
to exist some matrix P mapping each pair of arms x, y ∈ K to a number p_xy ∈ [-1, 1]; the matrix
P has a zero diagonal, meaning p_xx = 0, and is anti-symmetric: p_xy = -p_yx. A query to the pair
(x, y) gives an observation of a random Bernoulli variable with expected value (1 + p_xy)/2 and is
considered as an outcome of a match between x and y. As we assume the existence of a Condorcet
winner, there exists some x* ∈ K with p_{x*y} > 0 for all y ≠ x*.
The Condorcet dueling bandit problem, as stated here and without any additional assumptions, was
tackled in several papers [20, 26, 16]. The best guarantees to date are given by [16], which provides an
asymptotically optimal regret bound for the problem in the regime of a very large time horizon. This
result can be transformed into a best-arm identification algorithm, and the corresponding guarantee is
listed in Table 1. Loosely speaking, the result shows that it suffices to query each pair sufficiently
many times to separate the corresponding P_{x,y} from 0.5 with constant probability, and additionally
only K pairs must be queried sufficiently many times in order to separate the corresponding P_{x,y} from
0.5 with probability 1 - δ. We note that other improvements exist that achieve a better constant term
(the additive term independent of δ) [25, 24], or an overall improved result via imposing additional
assumptions on P such as an induced total order, stochastic triangle inequality, etc. [22, 23, 1].
These types of results, however, fall outside the scope of our paper.
In Appendix B.1 we provide an exploration and a verification algorithm for the problem. The exploration algorithm queries all pairs until finding, for each suboptimal arm x, an arm y with p_xy < 0;
it provides as output not only the identity of the optimal arm, but, for each
sub-optimal arm x, the identity of an arm y(x) that (approximately) maximizes p_yx, meaning it beats
x by the largest gap. The verification procedure is now straightforward. Given the above advice, the
algorithm makes sure that for each allegedly sub-optimal x, the arm y(x) indeed beats it, meaning
p_{y(x)x} > 0.
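As an illustration of this verification step, the sketch below implements a sequential test per advised pair. The `duel` oracle (returning an observation in {-1, +1}) and the Hoeffding-style stopping rule are our own simplifications, not the exact procedure of Appendix B.1.

```python
import math

def beats(duel, y, x, delta):
    """Test whether p_{y,x} > 0: sample the duel (y, x) until the confidence
    interval for its mean excludes 0, or give up (cap added for the sketch)."""
    total, t = 0.0, 0
    while t < 10 ** 6:
        t += 1
        total += duel(y, x)             # observation in {-1, +1}
        radius = math.sqrt(2 * math.log(4 * t ** 2 / delta) / t)
        if total / t - radius > 0:
            return True                 # y provably beats x
        if total / t + radius < 0:
            return False
    return False

def verify_condorcet(duel, candidate, witness, arms, delta):
    """Accept iff every allegedly sub-optimal arm x is beaten by its witness y(x);
    under the Condorcet assumption this certifies the candidate."""
    others = [x for x in arms if x != candidate]
    return all(beats(duel, witness[x], x, delta / len(others)) for x in others)
```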
Theorem 5. Algorithm 1, along with the exploration and verification algorithms given in Appendix B.1, finds the Condorcet winner w.p. at least 1 - δ while using an expected number of at
most

    O( Σ_{y≠x*} p_{x*y}^{-2} + Σ_{x≠x*} Σ_{y≠x} min{ p_xy^{-2}, min_{y': p_xy'<0} p_xy'^{-2} } )
      + O( Σ_{x≠x*} min_{y: p_xy<0} p_xy^{-2} ln(K/(δ p_xy²)) )

queries, where x* is the Condorcet winner.
5
Application to Linear Bandits
The linear bandit problem was originally introduced in [2]. It captures multiple problems where there
is linear structure among the available options. Its pure exploration variant (as opposed to the regret
setting) was recently discussed in [19]. Recall that in the linear bandit problem the set of arms K is
a subset of ℝ^d. The reward function associated with an arm x is a random variable with expected
value μ(x) = w^⊤x, for some unknown w ∈ ℝ^d. For simplicity we assume that all vectors w, and
those of K, lie inside the Euclidean unit ball, and that the noise is sub-Gaussian with variance 1 (hence
concentration bounds such as Hoeffding's inequality can be applied).
The results of [19] offer two approaches. The first is a static strategy that guarantees, for failure
probability δ, a query complexity of d log(K/δ)/Δ²_min, with x* being the best arm, Δ_x = w^⊤(x* - x) for
x ≠ x*, and Δ_min = min_{x≠x*} Δ_x. The second is adaptive and provides better bounds in a specific
case where the majority of the hardship of the problem is in separating the best arm from the second
best arm.
The algorithms are based on tools from the area of optimal design of experiments, where the high level
idea is the following: Consider our set of vectors (arms) K and an additional set of vectors Y. We are
interested in querying a sequence of t arms from K that will minimize the maximum variance of the
estimation of w^⊤y, where the maximum is taken over all y ∈ Y. Recall that via the Azuma-Hoeffding
inequality, one can show that by querying a set of points x₁, . . . , x_t and solving the Ordinary Least
Squares (OLS) problem, one obtains an unbiased estimator of w, and the corresponding variance at a
point y is

    ρ_{x₁,...,x_t}(y) = y^⊤ ( Σ_{i=1}^t x_i x_i^⊤ )^{-1} y
Hence, our formal problem statement is to obtain a sequence x₁, . . . , x_t that minimizes ρ_{x₁,...,x_t}(Y),
defined as ρ_{x₁,...,x_t}(Y) = max_{y∈Y} ρ_{x₁,...,x_t}(y). Tools from the area of optimal design of experiments (see e.g. [21]) provide ways to obtain such sequences that achieve a multiplicative approximation of 1 + d(d + 1)/t of the optimal sequence. In particular it is shown that, as t tends to infinity, t
times the ρ value of the optimal sequence of length t tends to

    ρ*(Y) = min_p max_{y∈Y} y^⊤ ( Σ_{x∈K} p_x x x^⊤ )^{-1} y,

with p restricted to being a distribution over K. We elaborate on these in the extended version of the
paper.
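For concreteness, ρ_{x₁,...,x_t}(Y) can be evaluated directly from its definition; the following sketch is ours and uses a pseudo-inverse for numerical robustness when the design matrix is singular.

```python
import numpy as np

def rho(X, Y):
    """max_y y^T (sum_i x_i x_i^T)^{-1} y, for rows x_i of X and rows y of Y."""
    A = X.T @ X                      # sum of outer products x_i x_i^T
    A_inv = np.linalg.pinv(A)        # pseudo-inverse in case A is singular
    return max(float(y @ A_inv @ y) for y in Y)
```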
The authors of [19] propose and analyze two different choices of the set Y. The first is the set Y = K; querying
points of K in order to minimize ρ_{x₁,...,x_t}(K) leads to a best arm identification algorithm with a query
complexity of d log(K/κ)/Δ²_min for failure probability κ. We use essentially the same approach for
the exploration procedure (given in the extended version), and with the same (asymptotic) query
complexity we not only obtain a candidate best arm x̂ but also approximations of the different Δ_x
for all x ≠ x*. These are required for the verification procedure.
n ?
o
The second interesting set Y is the set Y = x xx |x 2 K, x 6= x? . Clearly this set is not known to
us in advance, but it helps in [19] to define a notion of the ?true? complexity of the problem. Indeed,
one cannot discover the best arm without verifying that it is superior to the others, and the set Y
provides the best strategy to do so. The authors show that6
max kyk2 ? ?? (Y ) ? 4d/
y2Y
2
min
and bring examples where each of the inequalities are tight. Notice that the multiplicative gap between
the bounding expressions can be huge (at least linear in the dimension d), hence an algorithm with a
query complexity depending on ?? (Y ) as opposed to d/ 2min can potentially be much better than the
above mentioned algorithm. The bound on ?? (Y ) proves in particular that indeed querying w.r.t. Y is a
better strategy than querying w.r.t. K. This immediately translates into a verification procedure. Given
the advice from our exploration procedure, we have access to a candidate best arm, and approximate
values. Hence, we construct this set Y and query according to it. We show that given a correct advice,
the query complexity for failure probability is at most O (?? (Y ? ) log(K?? (Y ? )/ )). Combining
the exploration and verification algorithms, we get the following result.
Theorem 6. Algorithm 1, along with the exploration and verification algorithms described above
(we give the formal version only in the extended version of the paper), finds the best arm w.p. at
least 1 - δ while using an expected query complexity of

    O( d log(Kd/Δ²_min)/Δ²_min + ρ*(Y*) log(1/δ) )
6
Application to Unimodal Bandits
The unimodal bandit problem consists of a MAB problem given unimodality information. We focus
on a graphical variant defined as follows: there exists some graph G whose vertex set is the set of
arms K and whose edge set E is arbitrary. For every sub-optimal arm x there exists some neighbor y in the
graph such that μ(x) < μ(y). In other words, the best arm x* is the unique arm having a superior
reward compared to its immediate neighbors. The graphical unimodal bandit problem was introduced
by⁷ [13].
⁶ Under the assumption that all vectors in K lie in the Euclidean unit sphere.
⁷ Other variants of the unimodal bandit problem exist, e.g. one where the arms are the scalars in the interval
[0, 1], yet we do not deal with them in this paper, as we focus on pure best arm identification problems; in
that scenario the regret setting is more common, and only a PAC algorithm is possible, translating to a T^{2/3}
rather than a √T regret algorithm.
Due to space constraints we limit the discussion here to a specific type of unimodal bandits in
which the underlying graph is a line. The motivation here comes from a scenario where the point
set K represents an ε-net over the [0, 1] interval and the μ values come from some unimodal one-dimensional function. We discuss the more general graph scenario only in the extended version of
the paper. To review the existing results we introduce some notation. For an arm x let Γ(x) denote
the set of its neighbors in the graph. For a suboptimal arm x we let Λ_x = max_{y∈Γ(x)} μ(y) - μ(x)
be the gap between the reward of x and its neighbors, and let Δ_x = μ(x*) - μ(x) be its gap from the
best arm x*. We denote by Λ_min the minimal value of Λ_x and by Δ_min the minimal value of Δ_x.
Notice that in reasonable scenarios, for a typical arm x we have Λ_x ≪ Δ_x, since many arms are far
from being optimal but have a value close to those of their two neighbors.
The state-of-the-art result to date, as far as we are aware, for the problem at hand is by [6], where
a method OSUB is proposed, achieving an expected query complexity of (up to logarithmic terms
independent of δ)⁸

    O( Σ_{x≠x*} Λ_x^{-2} + Σ_{x∈Γ(x*)} Δ_x^{-2} log(1/δ) )
They show that the summand with the logarithmic dependence on δ is optimal. In the context of a
line graph we provide an algorithm whose exploration is a simple, naive application of a best arm
identification algorithm that ignores the structure of the problem, e.g. Exponential Gap-Elimination
by [15]. The verification algorithm requires only the identity of the candidate best arm as advice. It
simply applies a best arm identification algorithm over the candidate arm and its neighborhood. The
following provides our formal results.
Theorem 7. Algorithm 1, along with the exploration of Exponential Gap-Elimination and the
verification algorithm of Exponential Gap-Elimination applied to the neighborhood of the candidate
best arm, finds the best arm w.p. at least 1 - δ while using an expected query complexity of

    O( Σ_{x≠x*} Δ_x^{-2} log(K/Δ_min) + Σ_{x∈Γ(x*)} Δ_x^{-2} log(1/δ) )
The improvement w.r.t. the results of [6] is in the constant term independent of δ. The replacement
of Λ_x with Δ_x leads to a significant improvement for many reasonable unimodal functions. For
example, if the arms form an ε-net over the [0, 1] interval and the function is O(1)-Lipschitz, then
Σ_{x≠x*} (Λ_x)^{-2} = Θ(ε^{-3}) while Σ_{x≠x*} (Δ_x)^{-2} can potentially be O(ε^{-2}). Perhaps for this reason,
experiments in [6] showed that often, performing UCB on an ε-net is superior to other algorithms.
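This gap is easy to confirm numerically; the snippet below is our illustration and evaluates both sums for a linear (hence O(1)-Lipschitz) reward μ(x_i) = 1 - iε over an ε-net.

```python
eps = 1e-3
n = int(1 / eps)                        # arms x_0, ..., x_{n-1} with mu(x_i) = 1 - i * eps
Lambda = [eps] * (n - 1)                # neighbor gap of each sub-optimal arm
Delta = [i * eps for i in range(1, n)]  # gap to the best arm

print(sum(l ** -2 for l in Lambda))   # ~ eps^-3 = 1e9
print(sum(d ** -2 for d in Delta))    # ~ (pi^2 / 6) * eps^-2 ~ 1.64e6
```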
7
Conclusions
We presented a general framework for improving the performance of best-arm identification problems,
for the regime of high confidence. Our framework is based on the fact that in MAB problems with
structure, it is often easier to design an algorithm for verifying a candidate arm is the best one, rather
than discovering the identity of the best arm. We demonstrated the effectiveness of our framework by
improving the state-of-the-art results in several MAB problems.
References
[1] Nir Ailon, Zohar Karnin, and Thorsten Joachims. Reducing dueling bandits to cardinal bandits. In
Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 856–864, 2014.
[2] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine
Learning Research, 3:397–422, 2003.
[3] Akshay Balsubramani, Zohar Karnin, Robert Schapire, and Masrour Zoghi. Instance-dependent regret
bounds for dueling bandits. In Proceedings of The 29th Conference on Learning Theory, COLT 2016,
2016.
⁸ The result of [6] is in fact tighter in the sense that it takes advantage of the variance of the estimators by
using confidence bounds based on KL-divergence. In the case of uniform variance however, the stated results
here are accurate. More importantly, the KL-divergence type techniques can be applied here to obtain the same
type of guarantees, at the expense of a slightly more technical analysis. For this reason we present the results for
the case of uniform variance.
[4] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed
bandit problems. Machine Learning, 5(1):1–122, 2012.
[5] Róbert Busa-Fekete, Balázs Szörényi, and Eyke Hüllermeier. PAC rank elicitation through adaptive sampling
of stochastic pairwise preferences. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial
Intelligence, AAAI, 2014.
[6] Richard Combes and Alexandre Proutiere. Unimodal bandits: Regret lower bounds and optimal algorithms.
In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 521–529,
2014.
[7] Dotan Di Castro, Claudio Gentile, and Shie Mannor. Bandits with an edge. CoRR, abs/1109.2296, 2011.
[8] Miroslav Dudík, Katja Hofmann, Robert E. Schapire, Aleksandrs Slivkins, and Masrour Zoghi. Contextual
dueling bandits. In Grünwald et al. [11], pages 563–587.
[9] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the
multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research,
7:1079–1105, 2006.
[10] J. Fürnkranz and E. Hüllermeier, editors. Preference Learning. Springer-Verlag, 2010.
[11] Peter Grünwald, Elad Hazan, and Satyen Kale, editors. Proceedings of The 28th Conference on Learning
Theory, COLT 2015, Paris, France, July 3-6, 2015, volume 40 of JMLR Proceedings. JMLR.org, 2015.
[12] K. Hofmann, S. Whiteson, and M. de Rijke. Balancing exploration and exploitation in listwise and pairwise
online learning to rank for information retrieval. Information Retrieval, 16(1):63–90, 2013.
[13] Jia Yuan Yu and Shie Mannor. Unimodal bandits. In Proceedings of the 28th International Conference on
Machine Learning (ICML-11), pages 41–48, 2011.
[14] T. Joachims. Optimizing search engines using clickthrough data. In KDD, 2002.
[15] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In
Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1238–1246,
2013.
[16] Junpei Komiyama, Junya Honda, Hisashi Kashima, and Hiroshi Nakagawa. Regret lower bound and
optimal algorithm in dueling bandit problem. In Grünwald et al. [11], pages 1141–1154.
[17] Junpei Komiyama, Junya Honda, and Hiroshi Nakagawa. Copeland dueling bandit problem: Regret lower
bound, optimal algorithm, and computationally efficient algorithm, 2016.
[18] A. Piccolboni and C. Schindelhauer. Discrete prediction games with arbitrary feedback and loss. In
Computational Learning Theory, pages 208–223, 2001.
[19] Marta Soare, Alessandro Lazaric, and Rémi Munos. Best-arm identification in linear bandits. In Advances
in Neural Information Processing Systems, pages 828–836, 2014.
[20] Tanguy Urvoy, Fabrice Clerot, Raphael Féraud, and Sami Naamane. Generic exploration and k-armed
voting bandits. In Proceedings of the 30th International Conference on Machine Learning (ICML-13),
pages 91–99, 2013.
[21] Kai Yu, Jinbo Bi, and Volker Tresp. Active learning via transductive experimental design. In Proceedings
of the 23rd International Conference on Machine Learning, pages 1081–1088. ACM, 2006.
[22] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The K-armed dueling bandits problem. Journal of
Computer and System Sciences, 78(5):1538–1556, September 2012.
[23] Y. Yue and T. Joachims. Beat the mean bandit. In ICML, 2011.
[24] Masrour Zoghi, Zohar Karnin, Shimon Whiteson, and Maarten de Rijke. Copeland dueling bandits. In
Advances in Neural Information Processing Systems, pages 307–315, 2015.
[25] Masrour Zoghi, Shimon Whiteson, and Maarten de Rijke. MergeRUCB: A method for large-scale online
ranker evaluation. In Proceedings of the Eighth ACM International Conference on Web Search and Data
Mining, pages 17–26. ACM, 2015.
[26] Masrour Zoghi, Shimon Whiteson, Rémi Munos, and Maarten de Rijke. Relative upper confidence bound
for the k-armed dueling bandit problem. In Proceedings of the 31st International Conference on Machine
Learning (ICML-14), pages 10–18, 2014.
6,109 | 6,526 | Adaptive Maximization of Pointwise Submodular
Functions With Budget Constraint
Nguyen Viet Cuong¹    Huan Xu²
¹ Department of Engineering, University of Cambridge, [email protected]
² Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, [email protected]
Abstract
We study the worst-case adaptive optimization problem with budget constraint that
is useful for modeling various practical applications in artificial intelligence and
machine learning. We investigate the near-optimality of greedy algorithms for this
problem with both modular and non-modular cost functions. In both cases, we
prove that two simple greedy algorithms are not near-optimal but the best between
them is near-optimal if the utility function satisfies pointwise submodularity and
pointwise cost-sensitive submodularity respectively. This implies a combined
algorithm that is near-optimal with respect to the optimal algorithm that uses half
of the budget. We discuss applications of our theoretical results and also report
experiments comparing the greedy algorithms on the active learning problem.
1
Introduction
Consider problems where we need to adaptively make a sequence of decisions while taking into
account the outcomes of previous decisions. For instance, in the sensor placement problem [1, 2], one
needs to sequentially place sensors at some pre-specified locations, taking into account the working
conditions of previously deployed sensors. The aim is to cover as large an area as possible while
keeping the cost of placement within a given budget. As another example, in the pool-based active
learning problem [3, 4], one needs to sequentially select unlabeled examples and query their labels,
taking into account the previously observed labels. The aim is to learn a good classifier while ensuring
that the cost of querying does not exceed some given budget.
These problems can usually be considered under the framework of adaptive optimization with budget
constraint. In this framework, the objective is to find a policy for making decisions that maximizes the
value of some utility function. With a budget constraint, such a policy must have a cost no higher than
the budget given by the problem. Adaptive optimization with budget constraint has been previously
studied in the average case [2, 5, 6] and worst case [7]. In this paper, we focus on this problem in the
worst case.
In contrast to previous works on adaptive optimization with budget constraint (both in the average
and worst cases) [2, 8], we consider not only modular cost functions but also general, possibly
non-modular, cost functions on sets of decisions. For example, in the sensor placement problem, the
cost of a set of deployed sensors may be the weight of the minimum spanning tree connecting those
sensors, where the weight of the edge between any two sensors is the distance between them.1 In
this case, the cost of deploying a sensor is not fixed, but depends on the set of previously deployed
sensors. This setting allows the cost function to be non-modular, and thus is more general than the
setting in previous works, which usually assume the cost to be modular.
1
This cost function is reasonable in practice if we think of it as the minimal necessary communication cost to
keep the sensors connected (rather than the placement cost).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
When cost functions are modular, we focus on the useful class of pointwise submodular utility
functions [2, 7, 8] that has been applied to interactive submodular set cover and active learning
problems [7, 8]. With this class of utilities, we investigate the near-optimality of greedy policies for
worst-case adaptive optimization with budget constraint. A policy is near-optimal if its worst-case
utility is within a constant factor of the optimal worst-case utility. We first consider two greedy
policies: one that maximizes the worst-case utility gain and one that maximizes the worst-case utility
gain per unit cost increment at each step. If the cost is uniform and modular, it is known that these
two policies are equivalent and near-optimal [8]; however, we show in this paper that they cannot
achieve near-optimality with non-uniform modular costs. Despite this negative result, we can prove
that the best between these two greedy policies always achieves near-optimality. This suggests we
can combine the two policies into one greedy policy that is near-optimal with respect to the optimal
worst-case policy that uses half of the budget. We discuss applications of our theoretical results to the
budgeted adaptive coverage problem and the budgeted pool-based active learning problem, both of
which can be modeled as worst-case adaptive optimization problems with budget constraint. We also
report experimental results comparing the greedy policies on the latter problem.
When cost functions are general and possibly non-modular, we propose a novel class of utility
functions satisfying a property called pointwise cost-sensitive submodularity. This property is a
generalization of cost-sensitive submodularity to the adaptive setting. In essence, cost-sensitive
submodularity means the utility is more submodular than the cost. Submodularity [9] and pointwise submodularity are special cases of cost-sensitive submodularity and pointwise cost-sensitive
submodularity respectively when the cost is modular. With this new class of utilities, we prove
similar near-optimality results for the greedy policies as in the case of modular costs. Our proofs
build upon the proof techniques for worst-case adaptive optimization with uniform modular costs [8]
and non-adaptive optimization with non-uniform modular costs [10] but go beyond them to handle
general, possibly non-uniform and non-modular, costs.
2
Worst-case Adaptive Optimization with Budget Constraint
We now formalize the framework for worst-case adaptive optimization with budget constraint. Let X
be a finite set of items (or decisions) and Y be a finite set of possible states (or outcomes). Each item
in X can be in any particular state in Y. Let h : X → Y be a deterministic function that maps each
item x ∈ X to its state h(x) ∈ Y. We call h a realization. Let H := Y^X = {h | h : X → Y} be the
realization set consisting of all possible realizations.
We consider the problem where we sequentially select a subset of items from X as follows: we select
an item, observe its state, then select the next item, observe its state, etc. After some iterations, our
observations so far can be represented as a partial realization, which is a partial function from X
to Y. An adaptive strategy to select items takes into account the states of all previous items when
deciding the next item to select. Each adaptive strategy can be encoded as a deterministic policy for
selecting items, where a policy is a function from a partial realization to the next item to select. A
policy can be represented by a policy tree in which each node is an item to be selected and edges
below a node correspond to its states.
We assume there is a cost function c : 2^X → ℝ≥0, where 2^X is the power set of X. For any set of items
S ⊆ X, c(S) is the cost incurred if we select the items in S and observe their states. For simplicity,
we also assume c(∅) = 0 and c(S) > 0 for S ≠ ∅. If c is modular, then c(S) = Σ_{x∈S} c({x}) for all
S. In general, c can be non-modular. We shall consider the modular cost setting in Section 3 and the
non-modular cost setting in Section 4.
For a policy π, we define the cost of π as the maximum cost incurred by a set of items selected along
any path of the policy tree of π. Note that if we fix a realization h, the set of items selected by the
policy π is fixed, and we denote this set by x_h^π. The set x_h^π corresponds to a path of the policy tree of
π, and thus the cost of π can be formally defined as c(π) := max_{h∈H} c(x_h^π).
In the worst-case adaptive optimization problem, we have a utility function f : 2^X × H → ℝ≥0 that
we wish to maximize in the worst case. The utility function f(S, h) depends on a set S of selected
items and a realization h that determines the states of all items. Essentially, f(S, h) denotes the value
of selecting S, given that the true realization is h. We assume f(∅, h) = 0 for all h.
For a policy π, we define its worst-case utility as f_worst(π) := min_{h∈H} f(x_h^π, h). Given a budget
K > 0, our goal is to find a policy π* whose cost does not exceed K and π* maximizes f_worst.
Formally, π* := arg max_π f_worst(π) subject to c(π) ≤ K. We call this the problem of worst-case
adaptive optimization with budget constraint.
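For concreteness, the worst-case utility of a policy can be evaluated by simulating the policy against every realization. The sketch below is ours: it assumes `policy` maps a partial realization (a dict from items to observed states) to the next item, or None to stop.

```python
def worst_case_utility(policy, f, realizations):
    """f_worst(pi) = min over realizations h of f(x_h^pi, h), where x_h^pi is
    the item set the policy selects when every observation follows h."""
    def items_under(h):
        D = {}
        while True:
            x = policy(D)          # next item given observations so far
            if x is None:          # the policy decides to stop
                return set(D)
            D[x] = h[x]            # observe the state of x under realization h
    return min(f(items_under(h), h) for h in realizations)
```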
3
Modular Cost Setting
In this section, we consider the setting where the cost function is modular. This setting is very
common in the literature (e.g., see [2, 10, 11, 12]). We will describe the assumptions on the utility
function, the greedy algorithms for worst-case adaptive optimization with budget constraint, and the
analyses of these algorithms. Proofs in this section are given in the supplementary material.
3.1
Assumptions on the Utility Function
Adaptive optimization with an arbitrary utility function is often infeasible, so we only focus on a
useful class of utility functions: the pointwise monotone submodular functions. Recall that a set
function g : 2^X → R is submodular if it satisfies the following diminishing-returns property: for
all A ⊆ B ⊆ X and x ∈ X \ B, g(A ∪ {x}) − g(A) ≥ g(B ∪ {x}) − g(B). Furthermore, g is
monotone if g(A) ≤ g(B) for all A ⊆ B. In our setting, the utility function f(S, h) depends on
both the selected items and the realization, and we assume it satisfies the pointwise submodularity,
pointwise monotonicity, and minimal dependency properties below.
Definition 1 (Pointwise Submodularity). A utility function f(S, h) is pointwise submodular if the set
function f_h(S) ≜ f(S, h) is submodular for all h ∈ H.
Definition 2 (Pointwise Monotonicity). A utility function f(S, h) is pointwise monotone if the set
function f_h(S) ≜ f(S, h) is monotone for all h ∈ H.
Definition 3 (Minimal Dependency). A utility function f (S, h) satisfies minimal dependency if the
value of f (S, h) only depends on the items in S and their states (with respect to the realization h).
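On small instances these properties can be verified directly. A brute-force checker, sketched below under the assumption that f is given as a Python function of a set of items and a realization, is exponential in |X| and intended purely as an illustration of Definitions 1 and 2:

    from itertools import combinations

    def powerset(items):
        items = list(items)
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                yield set(combo)

    def is_pointwise_monotone_submodular(f, X, H):
        """Exhaustive check of Definitions 1 and 2; feasible only for tiny X."""
        for h in H:
            for A in powerset(X):
                for B in powerset(X):
                    if not A <= B:
                        continue
                    if f(A, h) > f(B, h):  # monotonicity violated
                        return False
                    for x in set(X) - B:  # diminishing returns violated
                        if f(A | {x}, h) - f(A, h) < f(B | {x}, h) - f(B, h):
                            return False
        return True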
These properties are useful for worst-case adaptive optimization and were also considered in [8] for
uniform modular costs. Pointwise submodularity is an extension of submodularity and pointwise
monotonicity is an extension of monotonicity to the adaptive setting. Minimal dependency is needed
to make sure the value of f only depends on what has already been observed. Without this property,
the value of f may be unpredictable and hard to reason about. The three assumptions above
hold for practical utility functions that we will describe in Section 5.1.
3.2 Greedy Algorithms and Theoretical Results
Our paper focuses on greedy algorithms (or greedy policies) to maximize the worst-case utility with a
budget constraint. We are interested in a theoretical guarantee for these policies: the near-optimality
guarantee. Specifically, a policy is near-optimal if its worst-case utility is within a constant factor of
the optimal worst-case utility. In this section, we consider two intuitive greedy policies and prove
that each of these policies is individually not near-optimal but the best between them will always be
near-optimal. We shall also discuss a combined policy and its guarantee in this section.
3.2.1 Two Greedy Policies
We consider two greedy policies in Figure 1. These policies are described in the general form and
can be used for both modular and non-modular cost functions. In these policies, D is the partial
realization that we have observed so far, and X_D ≜ {x ∈ X | (x, y) ∈ D for some y ∈ Y} is the
domain of D (i.e., the set of selected items in D). For any item x, we write δ(x | D) to denote the
worst-case utility gain if x is selected after we observe D. That is,
δ(x | D) ≜ min_{y∈Y} {f(X_D ∪ {x}, D ∪ {(x, y)}) − f(X_D, D)}.   (1)
In this definition, note that we have extended the utility function f to take a partial realization as the
second parameter (instead of a full realization). This extension is possible because the utility function
is assumed to satisfy minimal dependency, and thus its value only depends on the partial realization
that we have observed so far. In the policy π1, for any item x ∈ X and any S ⊆ X, we define:
Δc(x | S) ≜ c(S ∪ {x}) − c(S),   (2)
which is the cost increment of selecting x after S has been selected. If the cost function c is modular,
then Δc(x | S) = c({x}).
Cost-average Greedy Policy π1:
    D ← ∅; U ← X;
    repeat
        Pick x* ∈ U that maximizes δ(x* | D)/Δc(x* | X_D);
        if c(X_D ∪ {x*}) ≤ K then
            Observe state y* of x*;
            D ← D ∪ {(x*, y*)};
        end
        U ← U \ {x*};
    until U = ∅;

Cost-insensitive Greedy Policy π2:
    D ← ∅; U ← X;
    repeat
        Pick x* ∈ U that maximizes δ(x* | D);
        if c(X_D ∪ {x*}) ≤ K then
            Observe state y* of x*;
            D ← D ∪ {(x*, y*)};
        end
        U ← U \ {x*};
    until U = ∅;

Figure 1: Two greedy policies for adaptive optimization with budget constraint.
The two greedy policies in Figure 1 are intuitive. The cost-average policy π1 greedily selects the
items that maximize the worst-case utility gain per unit cost increment if they are still affordable by
the remaining budget. On the other hand, the cost-insensitive policy π2 simply ignores the items'
costs and greedily selects the affordable items that maximize the worst-case utility gain.
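A direct Python rendering of both policies is sketched below. The partial-realization interface for f (well defined under minimal dependency), the explicit state set Y, and the observe callback are assumptions made for the sketch:

    def greedy_policy(f, cost, X, Y, budget, observe, cost_average):
        """pi_1 (cost_average=True) or pi_2 (cost_average=False) from Figure 1.
        f(D) is the utility of a partial realization D (dict item -> state);
        observe(x) returns the true state of x and is where cost is incurred."""
        D, U = {}, set(X)
        while U:
            def score(x):
                # delta(x | D): worst-case utility gain of x, Eq. (1)
                gain = min(f({**D, x: y}) for y in Y) - f(D)
                if cost_average:
                    # divide by Delta c(x | X_D), Eq. (2); positive by assumption
                    gain /= cost(set(D) | {x}) - cost(set(D))
                return gain
            x_star = max(U, key=score)
            if cost(set(D) | {x_star}) <= budget:
                D[x_star] = observe(x_star)
            U.remove(x_star)
        return D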
Analyses of π1 and π2: Given the two greedy policies, we are interested in their near-optimality:
whether they provide a constant factor approximation to the optimal worst-case utility. Unfortunately,
we can show that these policies are not near-optimal. This negative result is stated in Theorem 1 below.
The proof of this theorem constructs counter-examples where the policies are not near-optimal.
Theorem 1. For any πi ∈ {π1, π2} and ε > 0, there exists a worst-case adaptive optimization
problem with a utility f, a modular cost c, and a budget K such that f satisfies the assumptions in
Section 3.1 and f_worst(πi)/f_worst(π*) < ε, where π* is the optimal policy for the problem.
3.2.2 A Near-optimal Policy
Although the greedy policies π1 and π2 are not near-optimal, we now show that the best between
them is in fact near-optimal. More specifically, let us define a policy π such that:
π ≜ π1 if f_worst(π1) > f_worst(π2), and π ≜ π2 otherwise.   (3)
Theorem 2 below states that π is near-optimal for the worst-case adaptive optimization problem with
budget constraint.
Theorem 2. Let f be a utility that satisfies the assumptions in Section 3.1 and π* be the optimal
policy for the worst-case adaptive optimization problem with utility f, a modular cost c, and a budget
K. The policy π defined by Equation (3) satisfies f_worst(π) > (1/2)(1 − 1/e) f_worst(π*).
The constant factor (1/2)(1 − 1/e) in Theorem 2 is slightly worse than the constant factor (1 − 1/√e)
for the non-adaptive budgeted maximum coverage problem [10]. If we apply this theorem to a
problem with a uniform cost, i.e., c({x}) = c({x′}) for all x and x′, then π1 = π2 and f_worst(π) =
f_worst(π1) = f_worst(π2). Thus, from Theorem 2, f_worst(π1) = f_worst(π2) > (1/2)(1 − 1/e) f_worst(π*).
Although this implies the greedy policy is near-optimal, the constant factor (1/2)(1 − 1/e) in this case
is not as good as the constant factor (1 − 1/e) in [8] for the uniform modular cost setting. We also
note that Theorem 2 still holds if we replace the cost-insensitive policy π2 with only the first item
that it selects (see its proof for details). In other words, we can terminate π2 right after it selects the
first item and the near-optimality in Theorem 2 is still guaranteed.
3.2.3 A Combined Policy
With Theorem 2, a naive approach to the worst-case adaptive optimization problem with budget
constraint is to estimate f_worst(π1) and f_worst(π2) (without actually running these policies) and use
the best between them. However, exact estimation of these quantities is intractable because it would
require a consideration of all realizations (an exponential number of them) to find the worst-case
realization for these policies. This is very different from the non-adaptive setting [10, 12, 13] where
we can easily find the best policy because there is only one realization.
Furthermore, in the adaptive setting, we cannot roll back once we run a policy. For example, we
cannot run π1 and π2 at the same time to determine which one is better without doubling the budget.
This is because we have to pay the cost every time we want to observe the state of an item, and the
next item selected would depend on the previous states. Thus, the adaptive setting in our paper is
more difficult than the non-adaptive setting considered in previous works [10, 12, 13]. If we consider
a Bayesian setting with some prior on the set of realizations [2, 4, 14], we can sample a subset of
realizations from the prior to estimate f_worst. However, this method does not provide any guarantee
for the estimation.
Given these difficulties, a more practical approach is to run both π1 and π2 using half of the
budget for each policy and combine the selected sets. Details of this combined policy (π_{1/2}) are
in Figure 2. Using Theorem 2, we can show that π_{1/2} is near-optimal compared to the optimal
worst-case policy that uses half of the budget. Theorem 3 below states this result. We note that
the theorem still holds if the order of running π1 and π2 is exchanged in the policy π_{1/2}.

1. Run π1 with budget K/2 (half of the total budget), and let the set of selected items be S1.
2. Starting with the empty set, run π2 with budget K/2 and let the set of items selected in this
   step be S2. For simplicity, we allow S2 to overlap with S1.
3. Return S1 ∪ S2.

Figure 2: The combined policy π_{1/2}.
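Assuming the greedy_policy sketch given after Figure 1, the combined policy is only a few lines; each greedy run is restricted to K/2, so for modular costs the union costs at most K (and the triangle inequality assumed in Section 4 gives the same bound for general costs):

    def combined_policy(f, cost, X, Y, budget, observe):
        """The combined policy pi_{1/2} of Figure 2."""
        D1 = greedy_policy(f, cost, X, Y, budget / 2, observe, cost_average=True)
        D2 = greedy_policy(f, cost, X, Y, budget / 2, observe, cost_average=False)
        return set(D1) | set(D2)  # S1 and S2 may overlap, as in step 2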
Theorem 3. Assume the same setting as in Theorem 2. Let π*_{1/2} be the optimal policy for
the worst-case adaptive optimization problem with budget K/2. The policy π_{1/2} satisfies
f_worst(π_{1/2}) > (1/2)(1 − 1/e) f_worst(π*_{1/2}).
Since Theorem 3 only compares π_{1/2} with the optimal policy π*_{1/2} that uses half of the budget, a
natural question is whether or not the policies π1 and π2 running with the full budget have a similar
guarantee compared to π*_{1/2}. Using the same counter-example for π2 in the proof of Theorem 1, we
can easily show in Theorem 4 that this guarantee does not hold for the cost-insensitive policy π2.
Theorem 4. For any ε > 0, there exists a worst-case adaptive optimization problem with a utility
f, a modular cost c, and a budget K such that f satisfies the assumptions in Section 3.1 and
f_worst(π2)/f_worst(π*_{1/2}) < ε, where π*_{1/2} is the optimal policy for the problem with budget K/2.
As regards the cost-average policy π1, it remains open whether running it with full budget provides
any constant factor approximation to the worst-case utility of π*_{1/2}. However, in the supplementary
material, we show that it is not possible to construct a counter-example for this case using a modular
utility function, so a counter-example (if there is any) should use a more sophisticated utility.
4 Non-Modular Cost Setting
We first define cost-sensitive submodularity, a generalization of submodularity that takes into account
a general, possibly non-modular, cost on sets of items. We then state the assumptions on the utility
function and the near-optimality results of the greedy algorithms for this setting.
Cost-sensitive Submodularity: Let c be a general cost function that is strictly monotone, i.e.,
c(A) < c(B) for all A ⊊ B. Hence, Δc(x | S) > 0 for all S and x ∉ S. Assume c satisfies
the triangle inequality: c(A ∪ B) ≤ c(A) + c(B) for all A, B ⊆ X. We define cost-sensitive
submodularity as follows.
Definition 4 (Cost-sensitive Submodularity). A set function g : 2^X → R is cost-sensitively submodular w.r.t. a cost function c if it satisfies: for all A ⊆ B ⊆ X and x ∈ X \ B,
(g(A ∪ {x}) − g(A)) / Δc(x | A) ≥ (g(B ∪ {x}) − g(B)) / Δc(x | B).   (4)
In essence, cost-sensitive submodularity is a generalization of submodularity and means that g is
more submodular than the cost c. When c is modular, cost-sensitive submodularity is equivalent to
submodularity. If g is cost-sensitively submodular w.r.t. a submodular cost, it will also be submodular.
Since c satisfies the triangle inequality, it cannot be super-modular but it can be non-submodular (see
the supplementary for an example).
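As with the earlier definitions, Equation (4) can be checked by enumeration on a small ground set. The sketch below (illustrative only; exponential time, with a small tolerance for floating-point noise) compares marginal-gain-to-marginal-cost ratios over all nested pairs:

    from itertools import combinations

    def powerset(items):
        items = list(items)
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                yield set(combo)

    def is_cost_sensitively_submodular(g, c, X, tol=1e-12):
        """Brute-force check of Eq. (4): for all A subset of B and x outside B,
        the gain-per-cost ratio at A must be at least the ratio at B."""
        for A in powerset(X):
            for B in powerset(X):
                if not A <= B:
                    continue
                for x in set(X) - B:
                    lhs = (g(A | {x}) - g(A)) / (c(A | {x}) - c(A))
                    rhs = (g(B | {x}) - g(B)) / (c(B | {x}) - c(B))
                    if lhs < rhs - tol:
                        return False
        return True

For example, plugging in a monotone g together with c(S) = e^{g(S)} should pass this check, consistent with Theorem 5(d) below.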
We state some useful properties of cost-sensitive submodularity in Theorem 5. In this theorem,
αg1 + βg2 is the function g(S) = αg1(S) + βg2(S) for all S ⊆ X, and αc1 + βc2 is the function
c(S) = αc1(S) + βc2(S) for all S ⊆ X. The proof of this theorem is in the supplementary material.
Theorem 5. (a) If g1 and g2 are cost-sensitively submodular w.r.t. a cost function c, then αg1 + βg2
is also cost-sensitively submodular w.r.t. c for all α, β ≥ 0.
(b) If g is cost-sensitively submodular w.r.t. cost functions c1 and c2, then g is also cost-sensitively
submodular w.r.t. αc1 + βc2 for all α, β ≥ 0 such that α + β > 0.
(c) For any integer n ≥ 1, if g is monotone and c(S) = Σ_{i=1}^{n} a_i (g(S))^i with non-negative coefficients
a_i ≥ 0 such that Σ_{i=1}^{n} a_i > 0, then g is cost-sensitively submodular w.r.t. c.
(d) If g is monotone and c(S) = γ e^{g(S)} for γ > 0, then g is cost-sensitively submodular w.r.t. c.
This theorem specifies various cases where a function g is cost-sensitively submodular w.r.t. a cost
c. Note that neither g nor c needs to be submodular for this theorem to hold. Parts (a,b) state that
cost-sensitive submodularity is preserved for linear combinations of either g or c. Parts (c,d) state
that if c is a polynomial (respectively, exponential) of g with non-negative (respectively, positive)
coefficients, then g is cost-sensitively submodular w.r.t. c.
Assumptions on the Utility: In this setting, we also assume the utility f(S, h) satisfies pointwise
monotonicity and minimal dependency. Furthermore, we assume it satisfies the pointwise cost-sensitive submodularity property below. This property is an extension of cost-sensitive submodularity
to the adaptive setting and is also a generalization of pointwise submodularity for a general cost. If
the cost is modular, pointwise cost-sensitive submodularity is equivalent to pointwise submodularity.
Definition 5 (Pointwise Cost-sensitive Submodularity). A utility f(S, h) is pointwise cost-sensitively
submodular w.r.t. a cost c if, for all h, f_h(S) ≜ f(S, h) is cost-sensitively submodular w.r.t. c.
Theoretical Results: Under the above assumptions, near-optimality guarantees in Theorems 2
and 3 for the greedy algorithms in Section 3.2 still hold. This result is stated and proven in the
supplementary material. The proof requires a sophisticated combination of the techniques for worst-case adaptive optimization with uniform modular costs [8] and non-adaptive optimization with
non-uniform modular costs [10]. Unlike [10], our proof deals with policy trees instead of sets and we
generalize previous techniques, originally used for modular costs, to handle general cost functions.
5 Applications and Experiments
5.1 Applications
We discuss two applications of our theoretical results in this section: the budgeted adaptive coverage
problem and the budgeted pool-based active learning problem. These problems were considered in
[2] for the average case, while we study them here in the worst case where the difficulty, as shown
above, is that simple policies such as π1 and π2 are not near-optimal as compared to the former case.
Budgeted Adaptive Coverage: In this problem, we are given a set of locations where we need to
place some sensors to get the spatial information of the surrounding environment. If sensors are
deployed at a set of sensing locations, we have to pay a cost depending on where the locations are.
After a sensor is deployed at a location, it may be in one of a few possible states (e.g., this may be
caused by a partial failure of the sensor), leading to various degrees of information covered by the
sensor. The budgeted adaptive coverage problem can be stated as: given a cost budget K, where
should we place the sensors to cover as much spatial information as possible?
We can model this problem as a worst-case adaptive optimization problem with budget K. Let
X be the set of all possible locations where sensors may be deployed, and let Y be the set of all
possible states of the sensors. For each set of locations S ⊆ X, c(S) is the cost of deploying sensors
there. For a location x and a state y, let R_{x,y} be the geometric shape associated with the spatial
information covered if we put a sensor at x and its state is y. We can define the utility function
f(S, h) = |∪_{x∈S} R_{x,h(x)}|, which is the cardinality (or volume) of the covered region. If we fix
a realization h, this utility is monotone submodular [11]. Thus, f(S, h) is pointwise monotone
submodular. Since this function also satisfies minimal dependency, we can apply the policy π_{1/2} to
this problem and get the guarantee in Theorem 3 if the cost function c is modular.
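As a concrete toy instance, each region R_{x,y} can be discretized into a set of grid cells, so the utility is just the size of a set union; the locations, states, and cells below are hypothetical values chosen for illustration:

    def coverage_utility(S, h, region):
        """f(S, h) = |union over x in S of R_{x, h(x)}| on a discretized grid."""
        covered = set()
        for x in S:
            covered |= region[(x, h[x])]  # cells covered by sensor x in state h(x)
        return len(covered)

    region = {
        ("loc1", "working"):  {(0, 0), (0, 1), (1, 0), (1, 1)},
        ("loc1", "degraded"): {(0, 0)},
        ("loc2", "working"):  {(1, 1), (1, 2), (2, 2)},
        ("loc2", "degraded"): {(2, 2)},
    }
    h = {"loc1": "working", "loc2": "degraded"}
    print(coverage_utility({"loc1", "loc2"}, h, region))  # 5 covered cells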
Budgeted Pool-based Active Learning: For pool-based active learning, we are given a finite set of
unlabeled examples and need to adaptively query the labels of some selected examples from that
set to train a classifier. Every time we query an example, we have to pay a cost and then get to see
its label. In the next iteration, we can use the labels observed so far to select the next example to
query.

Table 1: AUCs (normalized to [0,100]) of four learning policies.

          Data set 1                 Data set 2                 Data set 3
Cost   PL     LC     ALC    BLC   PL     LC     ALC    BLC   PL     LC     ALC    BLC
R1     79.8   85.6   93.9   92.0  69.0   69.3   83.1   77.5  76.7   79.7   94.0   90.1
R2     80.7   85.0   63.0   63.6  70.9   70.4   50.5   51.8  78.6   82.6   51.9   54.7
M1     92.5   93.0   96.5   95.9  84.6   86.7   91.7   92.6  90.7   91.0   96.9   96.3
M2     86.9   87.4   91.2   90.1  72.5   73.1   62.1   67.4  79.4   86.3   74.1   78.2
The budgeted pool-based active learning problem can be stated as: given a budget K, which
examples should we query to train a good classifier?
We can model this problem as a worst-case adaptive optimization problem with budget K. Let X
be the set of unlabeled examples and Y be the set of all possible labels. For each set of examples
S ⊆ X, c(S) is the cost of querying their labels. A realization h is a labeling of all examples in X.
For pool-based active learning, previous works [2, 8, 14] have shown that the version space reduction
utility is pointwise monotone submodular and satisfies minimal dependency. This utility is defined as
f(S, h) = Σ_{h′: h′(S) ≠ h(S)} p0[h′], where p0 is a prior on H and h(S) is the labels of S according to
h. Thus, we can apply π_{1/2} to this problem with the guarantee in Theorem 3 if the cost c is modular.
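For an explicit hypothesis set, this utility is a one-line sum; the dict encoding of hypotheses and the uniform prior below are assumptions of the sketch:

    def version_space_reduction(S, h, H, p0):
        """f(S, h) = total prior mass of hypotheses that disagree with h on S."""
        return sum(p for h2, p in zip(H, p0)
                   if any(h2[x] != h[x] for x in S))

    # Toy example: two unlabeled points, all four binary labelings, uniform prior.
    H = [{"x1": a, "x2": b} for a in (0, 1) for b in (0, 1)]
    p0 = [0.25] * 4
    h_true = {"x1": 1, "x2": 0}
    print(version_space_reduction({"x1"}, h_true, H, p0))  # 0.5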
With the utility above, the greedy criterion that maximizes δ(x* | D) in the cost-insensitive policy
π2 is equivalent to the well-known least confidence criterion x* = argmin_x max_y p_D[y; x] =
argmax_x min_y {1 − p_D[y; x]}, where p_D is the posterior after observing D and p_D[y; x] is
the probability that x has label y. On the other hand, the greedy criterion that maximizes
δ(x* | D)/Δc(x* | X_D) in the cost-average policy π1 is equivalent to:
x* = argmax_x [ min_y {1 − p_D[y; x]} / Δc(x | X_D) ].   (5)
We prove this equation in the supplementary material. Theorem 3 can also be applied if we consider
the total generalized version space reduction utility [8] that incorporates an arbitrary loss. This utility
was also shown to be pointwise monotone submodular and satisfy minimal dependency [8], and thus
the theorem still holds in this case for modular costs.
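Both selection rules reduce to one-line scores over the unlabeled pool. The sketch below assumes the posterior is available as a dict of per-label probabilities for each candidate; cost_average=False gives least confidence, and cost_average=True gives Equation (5):

    def next_query(pool, posterior, delta_cost, cost_average):
        """posterior[x][y] = p_D[y; x]; delta_cost[x] = Delta c(x | X_D) > 0."""
        def score(x):
            s = min(1.0 - p for p in posterior[x].values())  # min_y {1 - p_D[y; x]}
            return s / delta_cost[x] if cost_average else s
        return max(pool, key=score)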
5.2 Experiments
We present experimental results for budgeted pool-based active learning with various modular cost
settings. We use 3 binary classification data sets extracted from the 20 Newsgroups data [15]:
alt.atheism/comp.graphics (data set 1), comp.sys.mac.hardware/comp.windows.x (data set 2), and
rec.motorcycles/rec.sport.baseball (data set 3). Since the costs are modular, they are put on individual
examples, and the total cost is the sum of the selected examples' costs. We will consider settings
where random costs and margin-dependent costs are put on training data.
We compare 4 data selection strategies: passive learning (PL), cost-insensitive greedy policy or least
confidence (LC), cost-average greedy policy (ALC), and budgeted least confidence (BLC). LC and
ALC have been discussed in Section 5.1, and BLC is the corresponding policy π_{1/2}. These three
strategies are active learning algorithms. For comparison, we train a logistic regression model with
budgets 50, 100, 150, and 200, and approximate its area under the learning curve (AUC) using the
accuracies on a separate test set. In Table 1, bold numbers indicate the best scores, and underlines
indicate that BLC is the second best among the active learning algorithms.
Experiments with Random Costs: In this setting, costs are put randomly to the training examples
in 2 scenarios. In scenario R1, some random examples have a cost drawn from Gamma(80, 0.1) and
the other examples have cost 1. From the results for this scenario in Table 1, ALC is better than
LC and BLC is the second best among the active learning algorithms. In scenario R2, all examples
with label 1 have a cost drawn from Gamma(45, 0.1) and the others (examples with label 0) have
cost 1. From Table 1, LC is better than ALC in this scenario, which is due to the bias of ALC
toward examples with label 0. In this scenario, BLC is also the second best among the active learning
algorithms, although it is still significantly worse than LC.
Experiments with Margin-Dependent Costs: In this setting, costs are put on training examples
based on their margins to a classifier trained on the whole data set. Specifically, we first train a logistic
regression model on all the data and compute its probabilistic prediction for each training example.
7
The margin of an example is then the scaled distance between 0.5 and its probabilistic prediction.
We also consider 2 scenarios. In scenario M1, we put higher costs on examples with lower margins.
From Table 1, ALC is better than LC in this scenario. BLC performs better than both ALC and LC
on data set 2, and performs the second best among the active learning algorithms on data sets 1 and 3.
In scenario M2, we put higher costs on examples with larger margins. From Table 1, ALC is better
than LC on data set 1, while LC is better than ALC on data sets 2 and 3. On all data sets, BLC is the
second best among the active learning algorithms.
Note that our experiments are not intended to show that BLC is better than LC and ALC. In fact, our
theoretical results essentially state that either LC or ALC will perform well, although we may not
know in advance which one is better. Our experiments instead demonstrate cases where one of these
methods performs badly, and BLC can be a more robust choice that often performs in between
these two methods.
6 Related Work
Our work is related to [7, 8, 10, 12] but is more general than these works. Cuong et al. [8] considered
a similar worst-case setting as ours, but they assumed the utility is pointwise submodular and the cost
is uniform modular. Our work is more general than theirs in two aspects: (1) pointwise cost-sensitive
submodularity is a generalization of pointwise submodularity, and (2) our cost function is general and
may be neither uniform nor modular. These generalizations make the problem more complicated as
simple greedy policies, which are near-optimal in [8], will not be near-optimal anymore (see Section
3.2). Thus, we need to combine two simple greedy policies to obtain a new near-optimal policy.
Guillory & Bilmes [7] were the first to consider worst-case adaptive submodular optimization,
particularly in the interactive submodular set cover problem [7, 16]. In [7], the utility is also
pointwise submodular, and they look for a policy that can achieve at least a certain value of utility
w.r.t. an unknown target realization while at the same time minimizing the cost of this policy. Their
final utility, which is derived from the individual utilities of various realizations, is submodular. Our
work, in contrast, tries to maximize the worst-case utility directly given a cost budget.
Khuller et al. [10] considered the budgeted maximum coverage problem, which is the non-adaptive
version of our problem with a modular cost. For this problem, they showed that the best between
two non-adaptive greedy policies can achieve near-optimality compared to the optimal non-adaptive
policy. Similar results were also shown in [13] with a better constant and in [12] for the outbreak
detection problem. Our work is a generalization of [10, 12] to the adaptive setting with general cost
functions, and we can achieve the same constant factor as [12]. Furthermore, the class of utility
functions in our work is even more general than the coverage utilities in these works.
Our concept of cost-sensitive submodularity is a generalization of submodularity [9] for general
costs. Submodularity has been successfully applied to many applications [1, 17, 18, 19, 20]. Besides
pointwise submodularity, there are other ways to extend submodularity to the adaptive setting, e.g.,
adaptive submodularity [2, 21, 22] and approximately adaptive submodularity [23]. For adaptive
submodular utilities, Golovin & Krause [2] proved that greedily maximizing the average utility gain in
each step is near-optimal in both average and worst cases. However, neither pointwise submodularity
implies adaptive submodularity nor vice versa. Thus, our assumptions in this paper can be applied to
a different class of utilities than those in [2].
7 Conclusion
We studied worst-case adaptive optimization with budget constraint, where the cost can be either
modular or non-modular and the utility satisfies pointwise submodularity or pointwise cost-sensitive
submodularity respectively. We proved a negative result about two greedy policies for this problem
but also showed a positive result for the best between them. We used this result to derive a combined
policy which is near-optimal compared to the optimal policy that uses half of the budget. We
discussed applications of our theoretical results and reported experiments for the greedy policies on
the pool-based active learning problem.
Acknowledgments
This work was done when both authors were at the National University of Singapore. The authors were
partially supported by the Agency for Science, Technology and Research (A*STAR) of Singapore
through SERC PSF Grant R266000101305.
References
[1] Andreas Krause and Carlos Guestrin. Nonmyopic active learning of Gaussian processes: An
exploration-exploitation approach. In ICML, 2007.
[2] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in
active learning and stochastic optimization. JAIR, 2011.
[3] Andrew McCallum and Kamal Nigam. Employing EM and pool-based active learning for text
classification. In ICML, 1998.
[4] Nguyen Viet Cuong, Nan Ye, and Wee Sun Lee. Robustness of Bayesian pool-based active
learning against prior misspecification. In AAAI, 2016.
[5] Brian C. Dean, Michel X. Goemans, and J. Vondrák. Approximating the stochastic knapsack
problem: The benefit of adaptivity. In FOCS, 2004.
[6] Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization.
In Internet and Network Economics. 2008.
[7] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In ICML, 2010.
[8] Nguyen Viet Cuong, Wee Sun Lee, and Nan Ye. Near-optimal adaptive pool-based active
learning with general loss. In UAI, 2014.
[9] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a
submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
[10] Samir Khuller, Anna Moss, and Joseph Seffi Naor. The budgeted maximum coverage problem.
Information Processing Letters, 70(1):39–45, 1999.
[11] Andreas Krause and Carlos Guestrin. Near-optimal observation selection using submodular
functions. In AAAI, 2007.
[12] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and
Natalie Glance. Cost-effective outbreak detection in networks. In KDD, 2007.
[13] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack
constraint. Operations Research Letters, 32(1):41–43, 2004.
[14] Nguyen Viet Cuong, Wee Sun Lee, Nan Ye, Kian Ming A. Chai, and Hai Leong Chieu. Active
learning for probabilistic hypotheses using the maximum Gibbs error criterion. In NIPS, 2013.
[15] Thorsten Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text
categorization. DTIC Document, 1996.
[16] Andrew Guillory and Jeff A. Bilmes. Simultaneous learning and covering with adversarial
noise. In ICML, 2011.
[17] Andreas Krause and Carlos Guestrin. Submodularity and its applications in optimized information gathering. ACM Transactions on Intelligent Systems and Technology, 2(4):32, 2011.
[18] Andrew Guillory and Jeff A. Bilmes. Online submodular set cover, ranking, and repeated active
learning. In NIPS, 2011.
[19] Andrew Guillory. Active Learning and Submodular Functions. PhD thesis, University of
Washington, 2012.
[20] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in data subset selection and active
learning. In ICML, 2015.
[21] Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, Drew Bagnell, and Siddhartha S.
Srinivasa. Near optimal Bayesian active learning for decision making. In AISTATS, 2014.
[22] Alkis Gotovos, Amin Karbasi, and Andreas Krause. Non-monotone adaptive submodular
maximization. In IJCAI, 2015.
[23] Matt J. Kusner. Approximately adaptive submodular maximization. In NIPS Workshop on
Discrete and Combinatorial Problems in Machine Learning, 2014.
6,110 | 6,527 | Conditional Image Generation with
PixelCNN Decoders
Aäron van den Oord
Google DeepMind
[email protected]
Nal Kalchbrenner
Google DeepMind
[email protected]
Lasse Espeholt
Google DeepMind
[email protected]
Alex Graves
Google DeepMind
[email protected]
Oriol Vinyals
Google DeepMind
[email protected]
Koray Kavukcuoglu
Google DeepMind
[email protected]
Abstract
This work explores conditional image generation with a new image density model
based on the PixelCNN architecture. The model can be conditioned on any vector,
including descriptive labels or tags, or latent embeddings created by other networks.
When conditioned on class labels from the ImageNet database, the model is able to
generate diverse, realistic scenes representing distinct animals, objects, landscapes
and structures. When conditioned on an embedding produced by a convolutional
network given a single image of an unseen face, it generates a variety of new
portraits of the same person with different facial expressions, poses and lighting
conditions. We also show that conditional PixelCNN can serve as a powerful
decoder in an image autoencoder. Additionally, the gated convolutional layers in
the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational
cost.
1 Introduction
Recent advances in image modelling with neural networks [30, 26, 20, 10, 9, 28, 6] have made
it feasible to generate diverse natural images that capture the high-level structure of the training
data. While such unconditional models are fascinating in their own right, many of the practical
applications of image modelling require the model to be conditioned on prior information: for
example, an image model used for reinforcement learning planning in a visual environment would
need to predict future scenes given specific states and actions [17]. Similarly image processing
tasks such as denoising, deblurring, inpainting, super-resolution and colorization rely on generating
improved images conditioned on noisy or incomplete data. Neural artwork [18, 5] and content
generation represent potential future uses for conditional generation.
This paper explores the potential for conditional image modelling by adapting and improving a
convolutional variant of the PixelRNN architecture [30]. As well as providing excellent samples,
this network has the advantage of returning explicit probability densities (unlike alternatives such as
generative adversarial networks [6, 3, 19]), making it straightforward to apply in domains such as
compression [32] and probabilistic planning and exploration [2]. The basic idea of the architecture
is to use autoregressive connections to model images pixel by pixel, decomposing the joint image
distribution as a product of conditionals. Two variants were proposed in the original paper: PixelRNN,
where the pixel distributions are modeled with two-dimensional LSTM [7, 26], and PixelCNN, where
they are modelled with convolutional networks. PixelRNNs generally give better performance, but
PixelCNNs are much faster to train because convolutions are inherently easier to parallelize; given
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 graphic: the 0–255 intensity scale, the 5×5 binary mask matrix of ones and zeros, and the blind spot / vertical stack / horizontal stack diagrams.]
Figure 1: Left: A visualization of the PixelCNN that maps a neighborhood of pixels to prediction for
the next pixel. To generate pixel x_i the model can only condition on the previously generated pixels
x_1, . . . , x_{i−1}. Middle: an example matrix that is used to mask the 5x5 filters to make sure the model
cannot read pixels below (or strictly to the right) of the current pixel to make its predictions. Right:
Top: PixelCNNs have a blind spot in the receptive field that cannot be used to make predictions.
Bottom: Two convolutional stacks (blue and purple) allow us to capture the whole receptive field.
the vast number of pixels present in large image datasets this is an important advantage. We aim to
combine the strengths of both models by introducing a gated variant of PixelCNN (Gated PixelCNN)
that matches the log-likelihood of PixelRNN on both CIFAR and ImageNet, while requiring less than
half the training time.
We also introduce a conditional variant of the Gated PixelCNN (Conditional PixelCNN) that allows
us to model the complex conditional distributions of natural images given a latent vector embedding.
We show that a single Conditional PixelCNN model can be used to generate images from diverse
classes such as dogs, lawn mowers and coral reefs, by simply conditioning on a one-hot encoding
of the class. Similarly one can use embeddings that capture high level information of an image to
generate a large variety of images with similar features. This gives us insight into the invariances
encoded in the embeddings ? e.g., we can generate different poses of the same person based on a
single image. The same framework can also be used to analyse and interpret different layers and
activations in deep neural networks.
2 Gated PixelCNN
PixelCNNs (and PixelRNNs) [30] model the joint distribution of pixels over an image x as the
following product of conditional distributions, where x_i is a single pixel:
p(x) = ∏_{i=1}^{n²} p(x_i | x_1, ..., x_{i−1}).   (1)
The ordering of the pixel dependencies is in raster scan order: row by row and pixel by pixel within
every row. Every pixel therefore depends on all the pixels above and to the left of it, and not on any
of other pixels. The dependency field of a pixel is visualized in Figure 1 (left).
A similar setup has been used by other autoregressive models such as NADE [14] and RIDE [26].
The difference lies in the way the conditional distributions p(x_i | x_1, ..., x_{i−1}) are constructed. In
PixelCNN every conditional distribution is modelled by a convolutional neural network. To make
sure the CNN can only use information about pixels above and to the left of the current pixel, the
filters of the convolution are masked as shown in Figure 1 (middle). For each pixel the three colour
channels (R, G, B) are modelled successively, with B conditioned on (R, G), and G conditioned on R.
This is achieved by splitting the feature maps at every layer of the network into three and adjusting the
centre values of the mask tensors. The 256 possible values for each colour channel are then modelled
using a softmax.
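Ignoring the splitting into colour channels, the single-channel mask of Figure 1 (middle) is simple to construct; the 'A'/'B' mask naming below follows [30], where the type-'A' mask of the first layer also hides the centre pixel itself:

    import numpy as np

    def pixelcnn_mask(k, mask_type="A"):
        """k x k binary mask: ones strictly above the centre row, and strictly
        left of the centre within the centre row."""
        mask = np.zeros((k, k), dtype=np.float32)
        mask[: k // 2, :] = 1.0          # all rows above the centre
        mask[k // 2, : k // 2] = 1.0     # centre row, left of the centre
        if mask_type == "B":
            mask[k // 2, k // 2] = 1.0   # later layers may read the centre feature
        return mask

    print(pixelcnn_mask(5))  # reproduces the 5x5 example matrix of Figure 1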
PixelCNN typically consists of a stack of masked convolutional layers that takes an N x N x 3 image
as input and produces N x N x 3 x 256 predictions as output. The use of convolutions allows the
predictions for all the pixels to be made in parallel during training (all conditional distributions from
Equation 1). During sampling the predictions are sequential: every time a pixel is predicted, it is
fed back into the network to predict the next pixel. This sequentiality is essential to generating high
quality images, as it allows every pixel to depend in a highly non-linear and multimodal way on the
previous pixels.
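The sampling procedure is therefore a plain double loop in raster order. In the sketch below, predict_probs stands in for a forward pass of the trained network and is the only assumed component:

    import numpy as np

    def sample_image(predict_probs, n, seed=0):
        """Sequential raster-scan sampling for a single-channel n x n image."""
        rng = np.random.default_rng(seed)
        img = np.zeros((n, n), dtype=np.uint8)
        for i in range(n):
            for j in range(n):
                p = predict_probs(img, i, j)      # 256-way softmax for pixel (i, j)
                img[i, j] = rng.choice(256, p=p)  # sampled value is fed back in
        return img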
2.1 Gated Convolutional Layers
PixelRNNs, which use spatial LSTM layers instead of convolutional stacks, have previously been
shown to outperform PixelCNNs as generative models [30]. One possible reason for the advantage
is that the recurrent connections in LSTM allow every layer in the network to access the entire
neighbourhood of previous pixels, while the region of the neighbourhood available to pixelCNN
grows linearly with the depth of the convolutional stack. However this shortcoming can largely be
alleviated by using sufficiently many layers. Another potential advantage is that PixelRNNs contain
multiplicative units (in the form of the LSTM gates), which may help it to model more complex
interactions. To amend this we replaced the rectified linear units between the masked convolutions in
the original pixelCNN with the following gated activation unit:
y = tanh(W_{k,f} ∗ x) ⊙ σ(W_{k,g} ∗ x),   (2)
where σ is the sigmoid non-linearity, k is the number of the layer, ⊙ is the element-wise product
and ∗ is the convolution operator. We call the resulting model the Gated PixelCNN. Feed-forward
neural networks with gates have been explored in previous works, such as highway networks [25],
grid LSTM [13] and neural GPUs [12], and have generally proved beneficial to performance.
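In code, the gated unit is a pointwise combination of two convolution outputs. The NumPy sketch below leaves the (masked) convolutions abstract as callables, which is an assumption of the sketch rather than a fixed API; in practice the two convolutions are fused into one with 2p output maps and then split, as in Figure 2:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gated_activation(x, conv_f, conv_g):
        """Eq. (2): y = tanh(W_{k,f} * x) elementwise-times sigmoid(W_{k,g} * x)."""
        return np.tanh(conv_f(x)) * sigmoid(conv_g(x))

    def gated_activation_fused(z):
        """Fused variant: z has 2p channels (first axis) from one convolution."""
        z_f, z_g = np.split(z, 2, axis=0)
        return np.tanh(z_f) * sigmoid(z_g)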
2.2 Blind spot in the receptive field
In Figure 1 (top right), we show the progressive growth of the effective receptive field of a 3 × 3
masked filter over the input image. Note that a significant portion of the input image is ignored by the
masked convolutional architecture. This "blind spot" can cover as much as a quarter of the potential
receptive field (e.g., when using 3x3 filters), meaning that none of the content to the right of the
current pixel would be taken into account.
In this work, we remove the blind spot by combining two convolutional network stacks: one that
conditions on the current row so far (horizontal stack) and one that conditions on all rows above
(vertical stack). The arrangement is illustrated in Figure 1 (bottom right). The vertical stack, which
does not have any masking, allows the receptive field to grow in a rectangular fashion without any
blind spot, and we combine the outputs of the two stacks after each layer. Every layer in the horizontal
stack takes as input the output of the previous layer as well as that of the vertical stack. If we had
connected the output of the horizontal stack into the vertical stack, it would be able to use information
about pixels that are below or to the right of the current pixel which would break the conditional
distribution.
Figure 2 shows a single layer block of a Gated PixelCNN. We combine Wf and Wg in a single
(masked) convolution to increase parallelization. As proposed in [30] we also use a residual connection [11] in the horizontal stack. We have experimented with adding a residual connection in the
vertical stack, but omitted it from the final model as it did not improve the results in our initial experiments. Note that the (n ? 1) and (n ? n) masked convolutions in Figure 2 can also be implemented
by (d n2 e ? 1) and (d n2 e ? n) convolutions followed by a shift in pixels by padding and cropping.
2.3
Conditional PixelCNN
Given a high-level image description represented as a latent vector h, we seek to model the conditional
distribution p(x|h) of images suiting this description. Formally the conditional PixelCNN models
the following distribution:
p(x|h) = ∏_{i=1}^{n²} p(x_i | x_1, ..., x_{i−1}, h).   (3)
We model the conditional distribution by adding terms that depend on h to the activations before the
nonlinearities in Equation 2, which now becomes:
y = tanh(W_{k,f} ∗ x + V_{k,f}^T h) ⊙ σ(W_{k,g} ∗ x + V_{k,g}^T h),   (4)
[Figure 2 graphic: the gated layer block with 1×1, n×n and 1×n convolutions, the split of 2p feature maps into two groups of p, tanh and σ gates, and the residual addition; p = #feature maps.]
Figure 2: A single layer in the Gated PixelCNN architecture. Convolution operations are shown in
green, element-wise multiplications and additions are shown in red. The convolutions with Wf and
Wg from Equation 2 are combined into a single operation shown in blue, which splits the 2p feature
maps into two groups of p.
where k is the layer number. If h is a one-hot encoding that specifies a class this is equivalent to
adding a class dependent bias at every layer. Notice that the conditioning does not depend on the
location of the pixel in the image; this is appropriate as long as h only contains information about
what should be in the image and not where. For example we could specify that a certain animal or
object should appear, but may do so in different positions and poses and with different backgrounds.
We also developed a variant where the conditioning function was location dependent. This could
be useful for applications where we do have information about the location of certain structures
in the image embedded in h. By mapping h to a spatial representation s = m (h) (which has the
same width and height as the image but may have an arbitrary number of feature maps) with a
deconvolutional neural network m(), we obtain a location dependent bias as follows:
y = tanh(W_{k,f} ∗ x + V_{k,f} ∗ s) ⊙ σ(W_{k,g} ∗ x + V_{k,g} ∗ s).   (5)
where V_{k,g} ∗ s is an unmasked 1 × 1 convolution.
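A sketch of the conditional gate of Equation (4) is given below; shapes and the abstract conv callables are assumptions of the sketch. For the location-dependent variant of Equation (5), the two bias terms would instead be produced by unmasked 1 × 1 convolutions of the spatial map s:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def conditional_gate(x, h, conv_f, conv_g, V_f, V_g):
        """Eq. (4): the conditioning vector h adds a location-independent bias
        per feature map. conv_*(x) -> (p, H, W) maps; V_* -> (p, len(h))."""
        bias_f = (V_f @ h)[:, None, None]  # broadcast V^T h over all positions
        bias_g = (V_g @ h)[:, None, None]
        return np.tanh(conv_f(x) + bias_f) * sigmoid(conv_g(x) + bias_g)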
2.4 PixelCNN Auto-Encoders
Because conditional PixelCNNs have the capacity to model diverse, multimodal image distributions
p(x|h), it is possible to apply them as image decoders in existing neural architectures such as autoencoders. An auto-encoder consists of two parts: an encoder that takes an input image x and maps it
to a (usually) low-dimensional representation h, and a decoder that tries to reconstruct the original
image.
Starting with a traditional convolutional auto-encoder architecture [16], we replace the deconvolutional decoder with a conditional PixelCNN and train the complete network end-to-end. Since
PixelCNN has proved to be a strong unconditional generative model, we would expect this change to
improve the reconstructions. Perhaps more interestingly, we also expect it to change the representations that the encoder will learn to extract from the data: since so much of the low level pixel statistics
can be handled by the PixelCNN, the encoder should be able to omit these from h and concentrate
instead on more high-level abstract information.
3 Experiments
3.1 Unconditional Modeling with Gated PixelCNN
Table 1 compares Gated PixelCNN with published results on the CIFAR-10 dataset. These architectures were all optimized for the best possible validation score, meaning that models that get a lower
score actually generalize better. Gated PixelCNN outperforms the PixelCNN by 0.11 bits/dim, which
has a very significant effect on the visual quality of the samples produced, and which is close to the
performance of PixelRNN.
Model                          NLL Test (Train)
Uniform Distribution: [30]     8.00
Multivariate Gaussian: [30]    4.70
NICE: [4]                      4.48
Deep Diffusion: [24]           4.20
DRAW: [9]                      4.13
Deep GMMs: [31, 29]            4.00
Conv DRAW: [8]                 3.58 (3.57)
RIDE: [26, 30]                 3.47
PixelCNN: [30]                 3.14 (3.08)
PixelRNN: [30]                 3.00 (2.93)
Gated PixelCNN:                3.03 (2.90)
Table 1: Test set performance of different models on CIFAR-10 in bits/dim (lower is better), training
performance in brackets.
In Table 2 we compare the performance of Gated PixelCNN with other models on the ImageNet
dataset. Here Gated PixelCNN outperforms PixelRNN; we believe this is because the models are
underfitting: larger models perform better, and the simpler PixelCNN model scales better. We were
able to achieve similar performance to the PixelRNN (Row LSTM [30]) in less than half the training
time (60 hours using 32 GPUs). For the results in Table 2 we trained a larger model with 20 layers
(Figure 2), each having 384 hidden units and filter size of 5 × 5. We used 200K synchronous updates
over 32 GPUs in TensorFlow [1] using a total batch size of 128.
32x32:
Model              NLL Test (Train)
Conv Draw: [8]     4.40 (4.35)
PixelRNN: [30]     3.86 (3.83)
Gated PixelCNN:    3.83 (3.77)

64x64:
Model              NLL Test (Train)
Conv Draw: [8]     4.10 (4.04)
PixelRNN: [30]     3.63 (3.57)
Gated PixelCNN:    3.57 (3.48)
Table 2: Performance of different models on ImageNet in bits/dim (lower is better), training performance in brackets.
3.2 Conditioning on ImageNet Classes
For our second experiment we explore class-conditional modelling of ImageNet images using Gated
PixelCNNs. Given a one-hot encoding h_i for the i-th class we model p(x|h_i). The amount of
information that the model receives is only log(1000) ≈ 0.003 bits/pixel (for a 32x32 image). Still,
one could expect that conditioning the image generation on class label could significantly improve
the log-likelihood results; however, we did not observe big differences. On the other hand, as noted
in [27], we observed great improvements in the visual quality of the generated samples.
In Figure 3 we show samples from a single class-conditional model for 8 different classes. We see that
the generated classes are very distinct from one another, and that the corresponding objects, animals
and backgrounds are clearly produced. Furthermore the images of a single class are very diverse: for
example the model was able to generate similar scenes from different angles and lighting conditions.
It is encouraging to see that given roughly 1000 images from every animal or object the model is able
to generalize and produce new renderings.
3.3 Conditioning on Portrait Embeddings
In our next experiment we took the latent representations from the top layer of a convolutional
network trained on a large database of portraits automatically cropped from Flickr images using a
face detector. The quality of images varied wildly, because a lot of the pictures were taken with
mobile phones in bad lighting conditions.
The network was trained with a triplet loss function [23] that ensured that the embedding h produced
for an image x of a specific person was closer to the embeddings for all other images of the same
person than it was to any embedding of another person.
After the supervised net was trained we took the (image=x, embedding=h) tuples and trained the
Conditional PixelCNN to model p(x|h). Given a new image of a person that was not in the training
set we can compute h = f (x) and generate new portraits of the same person.
Samples from the model are shown in Figure 4. We can see that the embeddings capture a lot of the
facial features of the source image and the generative model is able to produce a large variety of new
faces with these features in new poses, lighting conditions, etc.
Finally, we experimented with reconstructions conditioned on linear interpolations between embeddings of pairs of images. The results are shown in Figure 5. Every image in a single row used the
same random seed in the sampling which results in smooth transitions. The leftmost and rightmost
images are used to produce the end points of interpolation.
3.4 PixelCNN Auto-Encoder
This experiment explores the possibility of training both the encoder and decoder (PixelCNN) end-to-end as an auto-encoder. We trained a PixelCNN auto-encoder on 32x32 ImageNet patches and
compared the results with those from a convolutional auto-encoder trained to optimize MSE. Both
models used a 10 or 100 dimensional bottleneck.
Figure 6 shows the reconstructions from both models. For the PixelCNN we sample multiple
conditional reconstructions. These images support our prediction in Section 2.4 that the information
encoded in the bottleneck representation h will be qualitatively different with a PixelCNN decoder
than with a more conventional decoder. For example, in the lowest row we can see that the model
generates different but similar looking indoor scenes with people, instead of trying to exactly
reconstruct the input.
4 Conclusion
This work introduced the Gated PixelCNN, an improvement over the original PixelCNN that is able to
match or outperform PixelRNN [30], and is computationally more efficient. In our new architecture,
we use two stacks of CNNs to deal with "blind spots" in the receptive field, which limited the original
PixelCNN. Additionally, we use a gating mechanism which improves performance and convergence
speed. We have shown that the architecture gets similar performance to PixelRNN on CIFAR-10 and
is now state-of-the-art on the ImageNet 32x32 and 64x64 datasets.
Furthermore, using the Conditional PixelCNN we explored the conditional modelling of natural
images in three different settings. In class-conditional generation we showed that a single model is
able to generate diverse and realistic looking images corresponding to different classes. On human
portraits the model is capable of generating new images from the same person in different poses and
lighting conditions from a single image. Finally, we demonstrated that the PixelCNN can be used as
a powerful image decoder in an autoencoder. In addition to achieving state of the art log-likelihood
scores in all these datasets, the samples generated from our model are of very high visual quality
showing that the model captures natural variations of objects and lighting conditions.
In the future it might be interesting to try and generate new images with a certain animal or object
solely from a single example image [21, 22]. Another exciting direction would be to combine
Conditional PixelCNNs with variational inference to create a variational auto-encoder. In existing
work p(x|h) is typically modelled with a Gaussian with diagonal covariance and using a PixelCNN
instead could thus improve the decoder in VAEs. Another promising direction of this work would be
to model images based on an image caption instead of class label [15, 19].
6
[Figure 3 image grid; classes shown: African elephant, coral reef, sandbar, sorrel horse, Lhasa Apso (dog), lawn mower, brown bear, robin (bird).]
Figure 3: Class-Conditional samples from the Conditional PixelCNN.
Figure 4: Left: source image. Right: new portraits generated from high-level latent representation.
Figure 5: Linear interpolations in the embedding space decoded by the PixelCNN. Embeddings from
leftmost and rightmost images are used for endpoints of the interpolation.
Figure 6: Left to right: original image, reconstruction by an auto-encoder trained with MSE,
conditional samples from a PixelCNN auto-encoder. Both auto-encoders were trained end-to-end
with an m = 10-dimensional bottleneck and an m = 100-dimensional bottleneck.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[2] Marc G Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos.
Unifying count-based exploration and intrinsic motivation. arXiv preprint arXiv:1606.01868, 2016.
[3] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian
pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486–1494,
2015.
[4] Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation.
arXiv preprint arXiv:1410.8516, 2014.
[5] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint
arXiv:1508.06576, 2015.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing
Systems, pages 2672–2680, 2014.
[7] Alex Graves and Jürgen Schmidhuber. Offline handwriting recognition with multidimensional recurrent
neural networks. In Advances in Neural Information Processing Systems, 2009.
[8] Karol Gregor, Frederic Besse, Danilo J Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual
compression. arXiv preprint arXiv:1604.08772, 2016.
[9] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for
image generation. Proceedings of the 32nd International Conference on Machine Learning, 2015.
[10] Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive
networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.
[12] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
[13] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprint
arXiv:1507.01526, 2015.
[14] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. The Journal of Machine
Learning Research, 2011.
[15] Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from
captions with attention. arXiv preprint arXiv:1511.02793, 2015.
[16] Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. Stacked convolutional auto-encoders
for hierarchical feature extraction. In Artificial Neural Networks and Machine Learning – ICANN 2011,
pages 52–59. Springer, 2011.
[17] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video
prediction using deep networks in atari games. In Advances in Neural Information Processing Systems,
pages 2845–2853, 2015.
[18] Christopher Olah and Mike Tyka. Inceptionism: Going deeper into neural networks. 2015.
[19] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.
Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
[20] Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate
inference in deep generative models. In Proceedings of the 31st International Conference on Machine
Learning, 2014.
[21] Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot
generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
[22] Ruslan Salakhutdinov, Joshua B Tenenbaum, and Antonio Torralba. Learning with hierarchical-deep
models. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1958–1971, 2013.
[23] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face
recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 815–823, 2015.
[24] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised
learning using nonequilibrium thermodynamics. Proceedings of the 32nd International Conference on
Machine Learning, 2015.
[25] Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in
Neural Information Processing Systems, pages 2368–2376, 2015.
[26] Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In Advances in
Neural Information Processing Systems, 2015.
[27] Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models.
arXiv preprint arXiv:1511.01844, 2015.
[28] Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. arXiv preprint arXiv:1605.02226, 2016.
[29] Aaron van den Oord and Joni Dambre. Locally-connected transformations for deep gmms. In International
Conference on Machine Learning (ICML): Deep Learning Workshop, Abstracts, pages 1–8, 2015.
[30] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv
preprint arXiv:1601.06759, 2016.
[31] Aäron van den Oord and Benjamin Schrauwen. Factoring variations in natural images with deep Gaussian
mixture models. In Advances in Neural Information Processing Systems, 2014.
[32] Aaron van den Oord and Benjamin Schrauwen. The student-t mixture as a natural image patch prior with
application to image compression. The Journal of Machine Learning Research, 2014.
Variational Autoencoder for Deep Learning
of Images, Labels and Captions
Yunchen Pu†, Zhe Gan†, Ricardo Henao†, Xin Yuan‡, Chunyuan Li†, Andrew Stevens†
and Lawrence Carin†
† Department of Electrical and Computer Engineering, Duke University
{yp42, zg27, r.henao, cl319, ajs104, lcarin}@duke.edu
‡ Nokia Bell Labs, Murray Hill
[email protected]
Abstract
A novel variational autoencoder is developed to model images, as well as associated
labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used
as a decoder of the latent image features, and a deep Convolutional Neural Network
(CNN) is used as an image encoder; the CNN is used to approximate a distribution
for the latent DGDN features/code. The latent code is also linked to generative
models for labels (Bayesian support vector machine) or captions (recurrent neural
network). When predicting a label/caption for a new image at test, averaging is
performed across the distribution of latent codes; this is computationally efficient as
a consequence of the learned CNN-based encoder. Since the framework is capable
of modeling the image in the presence/absence of associated labels/captions, a
new semi-supervised setting is manifested for CNN learning with images; the
framework even allows unsupervised CNN learning, based on images alone.
1 Introduction
Convolutional neural networks (CNNs) [1] are effective tools for image analysis [2], with most CNNs
trained in a supervised manner [2, 3, 4]. In addition to being used in image classifiers, image features
learned by a CNN have been used to develop models for image captions [5, 6, 7]. Most recent work
on image captioning employs a CNN for image encoding, with a recurrent neural network (RNN)
employed as a decoder of the CNN features, generating a caption.
While large sets of labeled and captioned images have been assembled, in practice one typically
encounters far more images without labels or captions. To leverage the vast quantity of these latter
images (and to tune a model to the specific unlabeled/uncaptioned images of interest at test), semi-supervised learning of image features is of interest. To account for unlabeled/uncaptioned images,
it is useful to employ a generative image model, such as the recently developed Deep Generative
Deconvolutional Network (DGDN) [8, 9]. However, while the CNN is a feedforward model for image
features (and is therefore fast at test time), the original DGDN implementation required relatively
expensive inference of the latent image features. Specifically, in [8] parameter learning and inference
are performed with Gibbs sampling or Monte Carlo Expectation-Maximization (MCEM).
We develop a new variational autoencoder (VAE) [10] setup to analyze images. The DGDN [8] is
used as a decoder, and the encoder for the distribution of latent DGDN parameters is based on a
CNN (termed a ?recognition model? [10, 11]). Since a CNN is used within the recognition model,
test-time speed is much faster than that achieved in [8]. The VAE framework manifests a novel means
of semi-supervised CNN learning: a Bayesian SVM [12] leverages available image labels, the DGDN
models the images (with or without labels), and the CNN manifests a fast encoder for the distribution
of latent codes. For image-caption modeling, latent codes are shared between the CNN encoder,
DGDN decoder, and RNN caption model; the VAE learns all model parameters jointly. These models
are also applicable to images alone, yielding an unsupervised method for CNN learning.
Our DGDN-CNN model for images is related to but distinct from prior convolutional variational
auto-encoder networks [13, 14, 15]. In those models the pooling process in the encoder network is
deterministic (max-pooling), as is the unpooling process in the decoder [14] (related to upsampling
[13]). Our model uses stochastic unpooling, in which the unpooling map (upsampling) is inferred
from the data, by maximizing a variational lower bound.
Summarizing, the contributions of this paper include: (i) a new VAE-based method for deep deconvolutional learning, with a CNN employed within a recognition model (encoder) for the posterior
distribution of the parameters of the image generative model (decoder); (ii) demonstration that the fast
CNN-based encoder applied to the DGDN yields accuracy comparable to that provided by Gibbs sampling and MCEM based inference, while being much faster at test time; (iii) the first semi-supervised
CNN classification results, applied to large-scale image datasets; and (iv) extensive experiments
on image-caption modeling, in which we demonstrate the advantages of jointly learning the image
features and caption model (we also present semi-supervised experiments for image captioning).
2 Variational Autoencoder Image Model
2.1 Image Decoder: Deep Deconvolutional Generative Model
Consider N images {X^(n)}_{n=1}^N, with X^(n) ∈ R^{Nx×Ny×Nc}; Nx and Ny represent the number of pixels in each spatial dimension, and Nc denotes the number of color bands in the image (Nc = 1 for gray-scale images and Nc = 3 for RGB images).
To introduce the image decoder (generative model) in its simplest form, we first consider a decoder with L = 2 layers. The code {S^(n,k2,2)}_{k2=1}^{K2} feeds the decoder at the top (layer 2), and at the bottom (layer 1) the image X^(n) is generated:

    Layer 2:          S̃^(n,2) = Σ_{k2=1}^{K2} D^(k2,2) ∗ S^(n,k2,2)        (1)
    Unpool:           S^(n,1) ∼ unpool(S̃^(n,2))                            (2)
    Layer 1:          S̃^(n,1) = Σ_{k1=1}^{K1} D^(k1,1) ∗ S^(n,k1,1)        (3)
    Data Generation:  X^(n) ∼ N(S̃^(n,1), α_0^{−1} I)                       (4)

Equation (4) is meant to indicate that E(X^(n)) = S̃^(n,1), and each element of X^(n) − E(X^(n)) is iid zero-mean Gaussian with precision α_0.
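As a sanity check on equations (1)-(4), the following NumPy sketch draws a toy image from a two-layer decoder with random dictionaries; all sizes, the random dictionary values, and the precision α_0 = 100 are arbitrary choices of ours, not settings from the paper.

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    K2, K1, px, py = 4, 3, 2, 2                       # toy sizes (our choice)
    S2 = rng.standard_normal((K2, 8, 8))              # layer-2 codes S^(n,k2,2)
    D2 = rng.standard_normal((K2, K1, 3, 3))          # layer-2 dictionary: K1 slices each
    D1 = rng.standard_normal((K1, 5, 5))              # layer-1 dictionary (gray-scale image)

    # Eq. (1): slice k1 of the pooled tensor sums conv(S2[k2], D2[k2, k1]) over k2.
    S2_tilde = np.stack([sum(convolve2d(S2[k2], D2[k2, k1]) for k2 in range(K2))
                         for k1 in range(K1)])

    # Eq. (2): stochastic unpooling under the uniform multinomial prior -- each entry
    # lands at a random position inside its px-by-py block of the sparse map S^(n,1).
    H, W = S2_tilde.shape[1:]
    S1 = np.zeros((K1, H * px, W * py))
    for k1 in range(K1):
        for i in range(H):
            for j in range(W):
                loc = rng.integers(px * py)           # one-hot z ~ Mult(1; 1/(px py), ...)
                S1[k1, i * px + loc // py, j * py + loc % py] = S2_tilde[k1, i, j]

    # Eqs. (3)-(4): E[X] = sum_k1 conv(S1[k1], D1[k1]); add iid Gaussian noise.
    alpha0 = 100.0
    EX = sum(convolve2d(S1[k1], D1[k1]) for k1 in range(K1))
    X = EX + rng.normal(scale=alpha0 ** -0.5, size=EX.shape)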
Concerning notation, expressions with two superscripts, D^(kl,l), S^(n,l) and S̃^(n,l), for layer l ∈ {1, 2} and image n ∈ {1, . . . , N}, are 3D tensors. Expressions with three superscripts, S^(n,kl,l), are 2D activation maps, representing the kl-th "slice" of 3D tensor S^(n,l); S^(n,kl,l) is the spatially-dependent activation map for image n, dictionary element kl ∈ {1, . . . , Kl}, at layer l of the model. Tensor S^(n,l) is formed by spatially aligning and "stacking" the {S^(n,kl,l)}_{kl=1}^{Kl}. Convolution D^(kl,l) ∗ S^(n,kl,l) between 3D D^(kl,l) and 2D S^(n,kl,l) indicates that each of the K_{l−1} 2D "slices" of D^(kl,l) is convolved with the spatially-dependent S^(n,kl,l); upon aligning and "stacking" these convolutions, a tensor output is manifested for D^(kl,l) ∗ S^(n,kl,l) (that tensor has K_{l−1} 2D slices).
Assume the dictionary elements {D^(kl,l)} are known, along with the precision α_0. We now discuss the generative process of the decoder. The layer-2 activation maps {S^(n,k2,2)}_{k2=1}^{K2} are the code that enters the decoder. Activation map S^(n,k2,2) is spatially convolved with D^(k2,2), yielding a 3D tensor; summing over the K2 such tensors manifested at layer 2 yields the pooled 3D tensor S̃^(n,2). Stochastic unpooling (discussed below) is employed to go from S̃^(n,2) to S^(n,1). Slice k1 of S^(n,1), S^(n,k1,1), is convolved with D^(k1,1), and summing over k1 yields E(X^(n)).
For the stochastic unpooling, S^(n,k1,1) is partitioned into contiguous px × py pooling blocks (analogous to pooling blocks in CNN-based activation maps [1]). Let z^(n,k1,1)_{i,j} ∈ {0, 1}^{px py} be a vector of px py − 1 zeros, and a single one; z^(n,k1,1)_{i,j} corresponds to pooling block (i, j) in S^(n,k1,1). The location of the non-zero element of z^(n,k1,1)_{i,j} identifies the location of the single non-zero element in the corresponding pooling block of S^(n,k1,1). The non-zero element in pooling block (i, j) of S^(n,k1,1) is set to S̃^(n,k1,2)_{i,j}, i.e., element (i, j) in slice k1 of S̃^(n,2). Within the prior of the decoder, we impose z^(n,k1,1)_{i,j} ∼ Mult(1; 1/(px py), . . . , 1/(px py)). Both S̃^(n,2) and S^(n,1) are 3D tensors with K1 2D slices; as a result of the unpooling, the 2D slices in the sparse S^(n,1) have px py times more elements than the corresponding slices in the dense S̃^(n,2).
The above model may be replicated to constitute L > 2 layers. The decoder is represented concisely as p_α(X|s, z), where vector s denotes the "unwrapped" set of top-layer features {S^(·,kL,L)}, and vector z denotes the unpooling maps at all L layers. The model parameters α are the set of dictionary elements at the L layers, as well as the precision α_0. The prior over the code is p(s) = N(0, I).
2.2 Image Encoder: Deep CNN
To make explicit the connection between the proposed CNN-based encoder and the above decoder, we also initially illustrate the encoder with an L = 2 layer model. While the two-layer decoder in (1)-(4) is top-down, starting at layer 2, the encoder is bottom-up, starting at layer 1 with image X^(n):

    Layer 1:           C̃^(n,k1,1) = X^(n) ∗_s F^(k1,1),  k1 = 1, . . . , K1          (5)
    Pool:              C^(n,1) ∼ pool(C̃^(n,1))                                       (6)
    Layer 2:           C̃^(n,k2,2) = C^(n,1) ∗_s F^(k2,2),  k2 = 1, . . . , K2        (7)
    Code Generation:   s_n ∼ N(μ_φ(C̃^(n,2)), diag(σ²_φ(C̃^(n,2))))                   (8)

Image X^(n) and filter F^(k1,1) are each tensors, composed of Nc stacked 2D images ("slices"). To implement X^(n) ∗_s F^(k1,1), the respective spatial slices of X^(n) and F^(k1,1) are convolved; the results of the Nc convolutions are aligned spatially and summed, yielding a single 2D spatially-dependent filter output C̃^(n,k1,1) (hence notation ∗_s, to distinguish ∗ in (1)-(4)).
The 2D maps {C̃^(n,k1,1)}_{k1=1}^{K1} are aligned spatially and "stacked" to constitute the 3D tensor C̃^(n,1). Each contiguous px × py pooling region in C̃^(n,1) is stochastically pooled to constitute C^(n,1); the posterior pooling statistics in (6) are detailed below. Finally, the pooled tensor C^(n,1) is convolved with K2 layer-2 filters {F^(k2,2)}_{k2=1}^{K2}, each of which yields the 2D feature map C̃^(n,k2,2); the K2 feature maps {C̃^(n,k2,2)}_{k2=1}^{K2} are aligned and "stacked" to manifest C̃^(n,2).
Concerning the pooling in (6), let C̃^(n,k1,1)_{i,j} reflect the px py components in pooling block (i, j) of C̃^(n,k1,1). Using a multi-layered perceptron (MLP), this is mapped to the px py-dimensional real vector η^(n,k1,1)_{i,j} = MLP(C̃^(n,k1,1)_{i,j}), defined as η^(n,k1,1)_{i,j} = W_1 h, with h = tanh(W_2 vec(C̃^(n,k1,1)_{i,j})). The pooling vector is drawn z^(n,k1,1)_{i,j} ∼ Mult(1; Softmax(η^(n,k1,1)_{i,j})); as a recognition model, Mult(1; Softmax(η^(n,k1,1)_{i,j})) is also treated as the posterior distribution for the DGDN unpooling in (2). Similarly, to constitute functions μ_φ(C̃^(n,2)) and σ²_φ(C̃^(n,2)) in (8), each layer of C̃^(n,2) is fed through a distinct MLP. Details are provided in the Supplementary Material (SM).
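A minimal sketch of this recognition-model pooling for a single pooling block, assuming the one-hidden-layer MLP described above (the hidden width is our own choice):

    import numpy as np

    rng = np.random.default_rng(1)
    px, py, hidden = 2, 2, 8                      # pooling block size; hidden width (our choice)
    C_block = rng.standard_normal(px * py)        # vec of one pooling block of C~^(n,k1,1)
    W2 = rng.standard_normal((hidden, px * py))   # MLP parameters (learned in practice)
    W1 = rng.standard_normal((px * py, hidden))

    h = np.tanh(W2 @ C_block)                     # h = tanh(W2 vec(C_block))
    eta = W1 @ h                                  # eta = W1 h
    probs = np.exp(eta - eta.max())
    probs /= probs.sum()                          # Softmax(eta)
    z = rng.multinomial(1, probs)                 # z ~ Mult(1; Softmax(eta)): which position
                                                  # in the block is "on" under the posterior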
Parameters φ of q_φ(s, z|X) correspond to the filter banks {F^(kl,l)}, as well as the parameters of the MLPs. The encoder is a CNN (yielding fast testing), utilized in a novel manner to manifest a posterior distribution on the parameters of the decoder. As discussed in Section 4, the CNN is trained in a novel manner, allowing semi-supervised and even unsupervised CNN learning.
3 Leveraging Labels and Captions
3.1 Generative Model for Labels: Bayesian SVM
Assume a label ℓ_n ∈ {1, . . . , C} is associated with training image X^(n); in the discussion that follows, labels are assumed available for each image (for notational simplicity), but in practice only a subset of the N training images need have labels. We design C one-versus-all binary SVM classifiers [16], responsible for mapping top-layer image features s_n to label ℓ_n; s_n is the same image code as in (8), from the top DGDN layer. For the ℓ-th classifier, with ℓ ∈ {1, . . . , C}, the problem may be posed as training with {s_n, y_n^(ℓ)}_{n=1}^N, with y_n^(ℓ) ∈ {−1, 1}. If ℓ_n = ℓ then y_n^(ℓ) = 1, and y_n^(ℓ) = −1 otherwise. Henceforth we consider the Bayesian SVM for each one of the binary learning tasks, with labeled data {s_n, y_n}_{n=1}^N.
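The one-versus-all relabeling amounts to a couple of lines (a sketch with made-up labels, using 0-based class indices for convenience):

    import numpy as np

    C = 4
    labels = np.array([2, 0, 3, 2, 1])   # example labels ell_n in {0, ..., C-1}
    # y[ell, n] = +1 if ell_n == ell else -1: one row of targets per binary SVM.
    y = np.where(labels[None, :] == np.arange(C)[:, None], 1, -1)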
Given a feature vector s, the goal of the SVM is to find an f(s) that minimizes the objective function γ Σ_{n=1}^N max(1 − y_n f(s_n), 0) + R(f(s)), where max(1 − y_n f(s_n), 0) is the hinge loss, R(f(s)) is a regularization term that controls the complexity of f(s), and γ is a tuning parameter controlling the trade-off between error penalization and the complexity of the classification function. Recently, [12] showed that for the linear classifier f(s) = β^⊤ s, minimizing the SVM objective function is equivalent to estimating the mode of the pseudo-posterior of β: p(β|S, y, γ) ∝ Π_{n=1}^N L(y_n|s_n, β, γ) p(β), where y = [y_1 . . . y_N]^⊤, S = [s_1 . . . s_N], L(y_n|s_n, β, γ) is the pseudo-likelihood function, and p(β) is the prior distribution for the vector of coefficients β. In [12] it was shown that L(y_n|s_n, β, γ) admits a location-scale mixture of normals representation by introducing latent variables λ_n:

    L(y_n|s_n, β, γ) = e^{−2γ max(1 − y_n β^⊤ s_n, 0)}
                     = ∫_0^∞ (√γ / √(2π λ_n)) exp( −(1 + λ_n − y_n β^⊤ s_n)² / (2 γ^{−1} λ_n) ) dλ_n.    (9)

Note that (9) is a mixture of Gaussian distributions w.r.t. random variable y_n β^⊤ s_n, where the mixture is formed with respect to λ_n, which controls the mean and variance of the Gaussians. This encourages data augmentation for variable λ_n, permitting efficient Bayesian inference (see [12, 17] for details).
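The mixture representation in (9) can be checked numerically; the sketch below compares the hinge-loss pseudo-likelihood against a quadrature approximation of the integral (γ = 1, the example margin, and the grid are our choices):

    import numpy as np

    gamma, margin = 1.0, 0.3                      # margin plays the role of y_n * beta^T s_n
    lhs = np.exp(-2.0 * gamma * max(1.0 - margin, 0.0))

    lam = np.linspace(1e-4, 60.0, 400000)         # quadrature grid over lambda_n
    integrand = (np.sqrt(gamma) / np.sqrt(2.0 * np.pi * lam)
                 * np.exp(-(1.0 + lam - margin) ** 2 / (2.0 * lam / gamma)))
    rhs = np.trapz(integrand, lam)
    # lhs and rhs agree to several decimal places, confirming the mixture identity.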
Parameters {β_ℓ}_{ℓ=1}^C for the C binary SVM classifiers are analogous to the fully connected parameters of a softmax classifier connected to the top of a traditional CNN [2]. If desired, the pseudo-likelihood of the SVM-based classifier can be replaced by a softmax-based likelihood. In Section 5 we compare performance of the SVM and softmax based classifiers.
3.2 Generative Model for Captions
For image n, assume access to an associated caption Y^(n); for notational simplicity, we again assume a caption is available for each training image, although in practice captions may only be available on a subset of images. The caption is represented as Y^(n) = (y_1^(n), . . . , y_{Tn}^(n)), and y_t^(n) is a 1-of-V ("one-hot") encoding, with V the size of the vocabulary, and Tn the length of the caption for image n. Word t, y_t^(n), is embedded into an M-dimensional vector w_t^(n) = W_e y_t^(n), where W_e ∈ R^{M×V} is a word embedding matrix (to be learned), i.e., w_t^(n) is a column of W_e, chosen by the one-hot y_t^(n). The probability of caption Y^(n) given top-layer DGDN image features s_n is defined as p(Y^(n)|s_n) = p(y_1^(n)|s_n) Π_{t=2}^{Tn} p(y_t^(n)|y_{<t}^(n), s_n). Specifically, we generate the first word y_1^(n) from s_n, with p(y_1^(n)|s_n) = softmax(V h_1^(n)), where h_1^(n) = tanh(C s_n). Bias terms are omitted for simplicity. All other words in the caption are then sequentially generated using a recurrent neural network (RNN), until the end-sentence symbol is generated. Each conditional p(y_t^(n)|y_{<t}^(n), s_n) is specified as softmax(V h_t^(n)), where h_t^(n) is recursively updated through h_t^(n) = H(w_{t−1}^(n), h_{t−1}^(n)). C and V are weight matrices (to be learned), and V is used for computing a distribution over words.
The transition function H(·) can be implemented with a gated activation function, such as Long Short-Term Memory (LSTM) [18] or a Gated Recurrent Unit (GRU) [19]. Both LSTM and GRU have been proposed to address the issue of learning long-term dependencies. In experiments we have found that GRU provides slightly better performance than LSTM (we implemented and tested both), and therefore the GRU is used.
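A sketch of the resulting decoding loop, using PyTorch's GRUCell as the transition H(·); the sizes, greedy (argmax) decoding, and the end-token convention are our assumptions rather than the paper's exact setup.

    import torch
    import torch.nn as nn

    V, M, D = 1000, 128, 64                 # vocab, embedding, and feature sizes (our choice)
    We = nn.Embedding(V, M)                 # word embedding matrix W_e
    C = nn.Linear(D, M, bias=False)         # h_1 = tanh(C s_n)
    H = nn.GRUCell(M, M)                    # transition h_t = H(w_{t-1}, h_{t-1})
    Vout = nn.Linear(M, V, bias=False)      # p(y_t | .) = softmax(V h_t)

    s = torch.randn(1, D)                   # top-layer DGDN feature s_n
    h = torch.tanh(C(s))
    words, END = [], 0                      # assume token id 0 is the end-sentence symbol
    y = Vout(h).argmax(dim=-1)              # greedy first word from p(y_1 | s)
    for _ in range(20):                     # cap the caption length
        if y.item() == END:
            break
        words.append(y.item())
        h = H(We(y), h)                     # feed the embedding of the previous word
        y = Vout(h).argmax(dim=-1)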
4 Variational Learning of Model Parameters
To make the following discussion concrete, we describe learning and inference within the context of images and captions, combining the models in Sections 2 and 3.2. This learning setup is also applied to model images with associated labels, with the caption model replaced in that case with the Bayesian SVM of Section 3.1 (details provided in the SM). In the subsequent discussion we employ the image encoder q_φ(s, z|X), the image decoder p_α(X|s, z), and the generative model for the caption (denoted p_ψ(Y|s), where ψ represents the GRU parameters).
The desired parameters {φ, α, ψ} are optimized by maximizing the variational lower bound. For a single captioned image, the variational lower bound L_{φ,α,ψ}(X, Y) can be expressed as

    L_{φ,α,ψ}(X, Y) = ξ E_{q_φ(s|X)}[log p_ψ(Y|s)] + E_{q_φ(s,z|X)}[log p_α(X, s, z) − log q_φ(s, z|X)]

where ξ is a tuning parameter that balances the two components of L_{φ,α,ψ}(X, Y). When ξ is set to zero, it corresponds to the variational lower bound for a single uncaptioned image:

    U_{φ,α}(X) = E_{q_φ(s,z|X)}[log p_α(X, s, z) − log q_φ(s, z|X)]    (10)

The lower bound for the entire dataset is then:

    J_{φ,α,ψ} = Σ_{(X,Y)∈D_c} L_{φ,α,ψ}(X, Y) + Σ_{X∈D_u} U_{φ,α}(X)    (11)

where D_c denotes the set of training images with associated captions, and D_u is the set of training images that are uncaptioned (and unlabeled).
To optimize J_{φ,α,ψ} w.r.t. φ, α and ψ, we utilize Monte Carlo integration to approximate the expectation E_{q_φ(s,z|X)}, and stochastic gradient descent (SGD) for parameter optimization. We use the variance reduction techniques in [10] and [11] to compute the gradients. Details are provided in the SM.
When ξ is set to 1, L_{φ,α,ψ}(X, Y) recovers the exact variational lower bound. Motivated by assigning the same weight to every data point, we set ξ = N_X/(Tρ) or N_X/(Cρ) in the experiments, where N_X is the number of pixels in each image, T is the number of words in the corresponding caption, C is the number of categories for the corresponding label and ρ is the proportion of labeled/captioned data in the mini-batch.
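The bookkeeping behind (11) with this ξ weighting can be written compactly; in the sketch below the log-probability terms stand for their Monte Carlo estimates, and the interface is our own invention.

    def minibatch_objective(captioned, uncaptioned, n_pixels, rho):
        """Eq. (11) for one mini-batch.  `captioned` holds tuples
        (logp_caption, logp_joint, logq, n_words); `uncaptioned` holds
        (logp_joint, logq) tuples; all are Monte Carlo estimates."""
        J = 0.0
        for logp_cap, logp_joint, logq, T in captioned:
            xi = n_pixels / (T * rho)      # xi = N_X / (T rho), the same-weight heuristic
            J += xi * logp_cap + (logp_joint - logq)
        for logp_joint, logq in uncaptioned:
            J += logp_joint - logq         # Eq. (10): the xi = 0 case
        return J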
At test time, we consider two tasks: inference of a caption or label for a new image X*. Again, considering captioning of a new image (with similar inference for labeling), after the model parameters are learned,

    p(Y*|X*) = ∫ p_ψ(Y*|s*) p(s*|X*) ds* ≈ (1/N_s) Σ_{s=1}^{N_s} p_ψ(Y*|s̃_s),

where s̃_s ∼ q_φ(s|X = X*), and N_s is the number of samples. Monte Carlo sampling is used to approximate the integral, and the recognition model, q_φ(s|X), is employed to approximate p(s|X), for fast inference of the image representation.
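In code, this test-time averaging is just a few stochastic encoder passes; `encode` and `logp_caption` below are placeholders for the learned recognition model and caption model.

    import torch

    def predict_prob(y_star, x_star, encode, logp_caption, n_samples=50):
        """Monte Carlo estimate of p(Y*|X*) under s ~ q_phi(s|X*)."""
        total = 0.0
        for _ in range(n_samples):
            mu, sigma = encode(x_star)                 # recognition model q_phi(s|X = X*)
            s = mu + sigma * torch.randn_like(sigma)   # reparameterized draw s~_s
            total = total + torch.exp(logp_caption(y_star, s))
        return total / n_samples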
5 Experiments
The architecture of models and initialization of model parameters are provided in the SM. No dataset-specific tuning other than early stopping on validation sets was conducted. The Adam algorithm [20]
with learning rate 0.0002 is utilized for optimization of the variational learning expressions in
Section 4. We use mini-batches of size 64. Gradients are clipped if the norm of the parameter vector
exceeds 5, as suggested in [21]. All the experiments of our models are implemented in Theano [22]
using an NVIDIA GeForce GTX TITAN X GPU with 12GB memory.
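Rendered in a present-day framework, the stated optimization settings look roughly as follows (a PyTorch sketch of the quoted hyperparameters; the paper itself used Theano):

    import torch

    model = torch.nn.Linear(10, 10)        # placeholder for the full encoder/decoder
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)   # Adam, lr = 0.0002

    def training_step(loss):               # called once per mini-batch of size 64
        optimizer.zero_grad()
        loss.backward()
        # Clip when the gradient norm exceeds 5, as suggested in [21].
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
        optimizer.step()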
5.1 Benchmark Classification
We first present image classification results on MNIST, CIFAR-10 & -100 [23], Caltech 101 [24] &
256 [25], and ImageNet 2012 datasets. For Caltech 101 and Caltech 256, we use 30 and 60 images
per class for training, respectively. The predictions are based on averaging the decision values of
Ns = 50 collected samples from the approximate posterior distribution over the latent variables from
q? (s|X). As a reference for computational cost, our model takes about 5 days to train on ImageNet.
We compared our VAE setup to a VAE with deterministic unpooling, and we also compared with
a DGDN trained using Gibbs sampling and MCEM [8]; classification results and testing time are
summarized in Table 1. Other state-of-the-art results can be found in [8]. The results based on
Gibbs sampling and MCEM are obtained by our own implementation on the same GPU, which are
consistent with the classification accuracies reported in [8].
For Gibbs-sampling-based learning, only suitable for the first five small/modest size datasets we
consider, we collect 50 posterior samples of model parameters α, after 1000 burn-in iterations during
training. Given a sample of model parameters, the inference of top-layer features at test is also done
via Gibbs sampling. Specifically, we collect 100 samples after discarding 300 burn-in samples; fewer
samples leads to worse performance. The predictions are based on averaging the decision values
5
Table 1: Classification error (%) and testing time (ms per image) on benchmarks.

    Method        MNIST            CIFAR-10         CIFAR-100        Caltech 101      Caltech 256
                  error   time     error   time     error   time     error   time     error   time
    Gibbs [8]     0.37    3.1      8.21    10.4     34.33   10.4     12.87   50.4     29.50   52.3
    MCEM [8]      0.45    0.8      9.04    1.1      35.92   1.1      13.51   8.8      30.13   8.9
    VAE-d         0.42    0.007    10.74   0.02     37.96   0.02     14.79   0.3      32.18   0.3
    VAE (Ours)    0.38    0.007    8.19    0.02     35.01   0.02     11.99   0.3      29.33   0.3

                  ImageNet 2012               ImageNet pretrained for
    Method        top-1   top-5   test        Caltech 101        Caltech 256
                  error   error   time        error    time      error    time
    MCEM [8]      37.9    16.1    14.4        6.85     14.1      22.10    14.2
    VAE (Ours)    38.2    15.7    1.0         6.91     0.9       22.53    0.9
of the collected samples (50 samples of model parameters α, and for each 100 inference samples of latent parameters s and z, for a total of 5000 samples). With respect to the testing of MCEM, all data-dependent latent variables are integrated (summed) out in the expectation, except for the top-layer feature map, for which we find a MAP point estimate via gradient descent.
As summarized in Table 1, the proposed recognition model is much faster than Gibbs sampling and
MCEM at test time (up to 400x speedup), and yields accuracy commensurate with these other two
methods (often better). To illustrate the role of stochastic unpooling, we replaced it with deterministic
unpooling as in [14]. The results, indicated as VAE-d in Table 1, demonstrate the powerful capabilities
of the stochastic unpooling operation. We also tried VAE-d on the ImageNet 2012 dataset; however,
the performance is much worse than our proposed VAE, hence those results are not reported.
5.2 Semi-Supervised Classification
We now consider semi-supervised classification. With each mini-batch, we use 32 labeled samples
and 32 unlabeled samples, i.e., ρ = 0.5.
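A sketch of how such a mixed mini-batch can be drawn (index bookkeeping only; the dataset objects are placeholders):

    import numpy as np

    rng = np.random.default_rng(0)

    def mixed_minibatch(labeled_idx, unlabeled_idx, k=32):
        """Draw k labeled and k unlabeled example indices, so rho = k/(2k) = 0.5."""
        lab = rng.choice(labeled_idx, size=k, replace=False)
        unlab = rng.choice(unlabeled_idx, size=k, replace=False)
        return lab, unlab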
Table 2: Semi-supervised classification error (%) on MNIST. N is the number of labeled images per class.

                       Deep generative model [26]     Ladder network [27]           Our model
    N      TSVM        M1+TSVM        M1+M2           Γ-full         Γ-conv         ξ = 0          ξ = N_x/(Cρ)
    10     16.81       11.82 ± 0.25   3.33 ± 0.14     1.06 ± 0.37    0.89 ± 0.50    5.83 ± 0.97    1.49 ± 0.36
    60     6.16        5.72 ± 0.05    2.59 ± 0.05     —              0.82 ± 0.17*   2.19 ± 0.19    0.77 ± 0.09
    100    5.38        4.24 ± 0.07    2.40 ± 0.02     0.84 ± 0.08    0.74 ± 0.10*   1.75 ± 0.14    0.63 ± 0.06
    300    3.45        3.49 ± 0.04    2.18 ± 0.04     —              0.63 ± 0.02*   1.42 ± 0.08    0.51 ± 0.04
    *These results are achieved with our own implementation based on the publicly available code.
MNIST We first test our model on the MNIST classification benchmark. We randomly split the
60,000 training samples into a 50,000-sample training set and a 10,000-sample validation set (used to
evaluate early stopping). The training set is further randomly split into a labeled and unlabeled set,
and the number of labeled samples in each category varies from 10 to 300. We perform testing on the
standard 10,000 test samples with 20 different training-set splits.
Table 2 shows the classification results. For ξ = 0, the model is trained in an unsupervised manner.
When doing unsupervised learning, the features extracted by our model are sent to a separate
transductive SVM (TSVM). In this case, our results can be directly compared to the results of the
M1+TSVM model [26], demonstrating the effectiveness of our recognition model in providing good
representations of images. Using 10 labeled images per class, our semi-supervised learning approach
with ξ = N_x/(Cρ) achieves a test error of 1.49, which is competitive with state-of-the-art results [27].
When using a larger number of labeled images, our model consistently achieves the best results.
ImageNet 2012 ImageNet 2012 is used to assess the scalability of our model to large datasets
(also considered, for supervised learning, in Table 1). Since no comparative results exist for semisupervised learning with ImageNet, we implemented the 8-layer AlexNet [2] and the 22-layer
GoogLeNet [4] as the supervised model baselines, which were trained by utilizing only the labeled
data¹. We split the 1.3M training images into a labeled and unlabeled set, and vary the proportion
¹ We use the default settings in the Caffe package, which provide a top-1 accuracy of 57.1% and 68.7%, as well as a top-5 accuracy of 80.2% and 88.9% on the validation set for AlexNet and GoogLeNet, respectively.
of labeled images from 1% to 100%. The classes are balanced to ensure that no particular class is over-represented, i.e., the ratio of labeled and unlabeled images is the same for each class. We repeat the training process 10 times, and each time we utilize different sets of images as the unlabeled ones.
Figure 1 shows our results, together with the baselines. Tabulated results and a plot with error bars are provided in the SM. The variance of our model's results (caused by different randomly selected labeled examples) is around 1% when considering a small proportion of labeled images (less than 10% labels), and the variance drops to less than 0.2% when the proportion of labeled images is larger than 30%. As can be seen from Figure 1, our semi-supervised learning approach with 60% labeled data achieves comparable results (61.24% top-1 accuracy) with the results of full datasets (61.8% top-1 accuracy), demonstrating the effectiveness of our approach for semi-supervised classification. Our model provides consistently better results than AlexNet [2], which has a similar five-convolutional-layer architecture as ours. Our model is outperformed by GoogLeNet when more labeled images are provided. This is not surprising since GoogLeNet utilizes a considerably more complicated CNN architecture than ours.
Figure 1: Semi-supervised classification accuracy on the validation set of ImageNet 2012. (Curves: AlexNet top-1/top-5, GoogLeNet top-1/top-5, Ours top-1/top-5; x-axis: proportion (%) of labeled images, from 1 to 100; y-axis: accuracy (%).)
To further illustrate the role of each component of our model, we replaced the Bayesian SVM with a
softmax classifier (see discussion at the end of Section 3.1). The softmax results are slightly worse,
and are provided in the SM. The gap between the results of the Bayesian SVM and softmax is around 1% when the proportion of labeled images is less than 30%, and drops to around 0.5% when a larger proportion
of labeled images is considered (larger than 30%). This further illustrates that the performance
gain is primarily due to the semi-supervised learning framework used in our model, rather than the
discriminative power of the SVM.
5.3 Image Captioning
We present image captioning results on three benchmark datasets: Flickr8k [29], Flickr30k [30] and
Microsoft (MS) COCO [31]. These datasets contain 8000, 31000 and 123287 images, respectively.
Each image is annotated with 5 sentences. For fair comparison, we use the same pre-defined splits
for all the datasets as in [5]. We use 1000 images for validation, 1000 for test and the rest for training
on Flickr8k and Flickr30k. For MS COCO, 5000 images are used for both validation and testing.
The widely used BLEU metric [32] and sentence perplexity (PPL) are employed to quantitatively
evaluate the performance of our image captioning model. A low PPL indicates a better language
model. For the MS COCO dataset, we further evaluate our model with metrics METEOR [33] and
CIDEr [34]. Our joint model takes three days to train on MS COCO.
We show results for three models: (i) Two-step model: this model consists of our generative and
recognition model developed in Section 2 to analyze images alone, in an unsupervised manner. The
extracted image features are fed to a separately trained RNN. (ii) Joint model: this is the joint model
developed in Sections 2 and 3.2. (iii) Joint model with ImageNet: in this model training is performed
in a semi-supervised manner, with the training set of ImageNet 2012 treated as uncaptioned images,
to complement the captioned training set.
The image captioning results are summarized in Table 3. Our two-step model achieves better
performance than similar baseline two-step methods, in which VggNet [3] and GoogLeNet [4] were
used as feature extractors. The baseline VggNet and GoogLeNet models require labeled images for
training, and hence are trained on ImageNet. By contrast, in our two-step approach, the deep model
is trained in an unsupervised manner, using uncaptioned versions of images from the training set.
This fact may explain the improved quality of our results in Table 3.
Table 3: BLEU-1,2,3,4, METEOR, CIDEr and PPL metrics compared to other state-of-the-art results and baselines on Flickr8k, Flickr30k and MS COCO datasets.

    Flickr8k
    Method                           B-1     B-2     B-3     B-4     PPL
    Baseline results
    VggNet+RNN                       0.56    0.37    0.24    0.16    15.71
    GoogLeNet+RNN                    0.56    0.38    0.24    0.16    15.71
    Our two-step model               0.61    0.41    0.27    0.17    15.82
    Our results with other state-of-the-art results
    Hard-Attention [6]               0.67    0.46    0.31    0.21    —
    Our joint model                  0.70    0.49    0.33    0.22    15.24
    Our joint model with ImageNet    0.72    0.52    0.36    0.25    13.24
    State-of-the-art results using extra information
    Attributes-CNN+RNN [7]           0.74    0.54    0.38    0.27    12.60

    Flickr30k
    Method                           B-1     B-2     B-3     B-4     PPL
    Baseline results
    VggNet+RNN                       0.57    0.38    0.25    0.17    18.83
    GoogLeNet+RNN                    0.58    0.39    0.26    0.17    18.77
    Our two-step model               0.61    0.41    0.27    0.17    18.73
    Our results with other state-of-the-art results
    Hard-Attention [6]               0.67    0.44    0.30    0.20    —
    Our joint model                  0.69    0.50    0.35    0.22    16.17
    Our joint model with ImageNet    0.72    0.53    0.38    0.25    15.34
    State-of-the-art results using extra information
    Attributes-CNN+LSTM [7]          0.73    0.55    0.40    0.28    15.96

    MS COCO
    Method                           B-1     B-2     B-3     B-4     METEOR  CIDEr   PPL
    Baseline results
    VggNet+RNN                       0.61    0.42    0.28    0.19    0.19    0.56    13.16
    GoogLeNet+RNN                    0.60    0.40    0.26    0.17    0.19    0.55    14.01
    Our two-step model               0.61    0.42    0.27    0.18    0.20    0.58    13.46
    Our results with other state-of-the-art results
    DMSM [28]                        —       —       —       0.26    0.24    —       —
    Hard-Attention [6]               0.72    0.50    0.36    0.25    0.23    —       18.10
    Our joint model                  0.71    0.51    0.38    0.26    0.22    0.89    11.57
    Our joint model with ImageNet    0.72    0.52    0.37    0.28    0.24    0.90    11.14
    State-of-the-art results using extra information
    Attributes-CNN+LSTM [7]          0.74    0.56    0.42    0.31    0.26    0.94    10.49
It is worth noting that our joint model yields significant improvements over our two-step model,
nearly 10% in average for BLEU scores, demonstrating the importance of inferring a shared latent
structure. It can also be seen that our improvement with semi-supervised use of ImageNet is most
significant with the small/modest datasets (Flickr8k and Flickr30k), compared to the large dataset
(MS COCO). Our model performs better than most image captioning systems. The only method
with better performance than ours is [7], which employs an intermediate image-to-attributes layer
that requires determining an extra attribute vocabulary. Examples of generated captions from the
validation set of ImageNet 2012, which has no ground truth captions and is unseen during training
(the semi-supervised learning only uses the training set of ImageNet 2012), are shown in Figure 2.
Figure 2: Examples of generated captions from unseen images on the validation dataset of ImageNet: "a man with a snowboard next to a man with glasses"; "a big black dog standing on the grass"; "a desk with a keyboard"; "a player is holding a hockey stick"; "a man is standing next to a brown horse"; "a box full of apples and oranges".
6 Conclusions
A recognition model has been developed for the Deep Generative Deconvolutional Network (DGDN)
[8], based on a novel use of a deep CNN. The recognition model has been coupled with a Bayesian
SVM and an RNN, to also model associated labels and captions, respectively. The model is learned
using a variational autoencoder setup, and allows semi-supervised learning (leveraging images without
labels or captions). The algorithm has been scaled up with a GPU-based implementation, achieving
results competitive with state-of-the-art methods on several tasks (and novel semi-supervised results).
Acknowledgements
This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF. The Titan X used
in this work was donated by the NVIDIA Corporation.
References
[1] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015.
[4] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. In CVPR, 2015.
[5] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In
CVPR, 2015.
[6] K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio. Show,
attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
[7] Q. Wu, C. Shen, A. Hengel, L. Liu, and A. Dick. What value do explicit high level concepts have in vision
to language problems? In CVPR, 2016.
[8] Y. Pu, X. Yuan, A. Stevens, C. Li, and L. Carin. A deep generative deconvolutional image model. In
AISTATS, 2016.
[9] Y. Pu, X. Yuan, and L. Carin. Generative deep deconvolutional learning. In ICLR workshop, 2015.
[10] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[11] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
[12] N. G. Polson and S. L. Scott. Data augmentation for support vector machines. Bayes. Anal., 2011.
[13] T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network.
In NIPS, 2015.
[14] A. Dosovitskiy, J. T. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs, tables and
cars with convolutional networks. In CVPR, 2015.
[15] C. Li, J. Zhu, T. Shi, and B. Zhang. Max-margin deep generative models. In NIPS, 2015.
[16] V. Vapnik. The nature of statistical learning theory. Springer-Verlag New York, Inc., 1995.
[17] R. Henao, X. Yuan, and L. Carin. Bayesian nonlinear SVMs and factor modeling. NIPS, 2014.
[18] S Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[19] K. Cho, B. V. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning
phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP, 2014.
[20] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[21] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS,
2014.
[22] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley,
and Y. Bengio. Theano: new features and speed improvements. In NIPS Workshop, 2012.
[23] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Computer Science
Department, University of Toronto, Tech. Rep, 2009.
[24] F. Li, F. Rob, and P. Perona. Learning generative visual models from few training examples: An incremental
bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007.
[25] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. 2007.
[26] D.P. Kingma, S. Mohamed, D.J. Rezende, and M. Welling. Semi-supervised learning with deep generative
models. In NIPS, 2014.
[27] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder
networks. In NIPS, 2015.
[28] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. C. Platt,
C. L. Zitnick, and G. Zweig. From captions to visual concepts and back. In CVPR, 2015.
[29] M. Hodosh, P. Young, and J. Hockenmaier. Framing image description as a ranking task: Data, models and
evaluation metrics. Journal of Artificial Intelligence Research, 2013.
[30] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New
similarity metrics for semantic inference over event descriptions. Transactions of the Association for
Computational Linguistics, 2014.
[31] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft
COCO: Common objects in context. In ECCV, 2014.
[32] K. Papineni, S. Roukos, T. Ward, and W. Zhu. Bleu: a method for automatic evaluation of machine
translation. Transactions of the Association for Computational Linguistics, 2002.
[33] S. Banerjee and A. Lavie. Meteor: An automatic metric for MT evaluation with improved correlation with
human judgments. In ACL workshop, 2005.
[34] R. Vedantam, Z. C. Lawrence, and D. Parikh. Cider: Consensus-based image description evaluation. In
CVPR, 2015.
6,112 | 6,529 | Semiparametric Differential Graph Models
Pan Xu
University of Virginia
[email protected]
Quanquan Gu
University of Virginia
[email protected]
Abstract
In many cases of network analysis, it is more attractive to study how a network
varies under different conditions than an individual static network. We propose
a novel graphical model, namely Latent Differential Graph Model, where the
networks under two different conditions are represented by two semiparametric
elliptical distributions respectively, and the variation of these two networks (i.e.,
differential graph) is characterized by the difference between their latent precision
matrices. We propose an estimator for the differential graph based on quasi likelihood maximization with nonconvex regularization. We show that our estimator
attains a faster statistical rate in parameter estimation than the state-of-the-art methods, and enjoys the oracle property under mild conditions. Thorough experiments
on both synthetic and real world data support our theory.
1
Introduction
Network analysis has been widely used in various fields to characterize the interdependencies between
a group of variables, such as molecular entities including RNAs and proteins in genetic networks
[3]. Networks are often modeled as graphical models. For instance, in gene regulatory network,
the gene expressions are often assumed to be jointly Gaussian. A Gaussian graphical model [18] is
then employed by representing different genes as nodes and the regulation between genes as edges
in the graph. In particular, two genes are conditionally independent given the others if and only
if the corresponding entry of the precision matrix of the multivariate normal distribution is zero.
Nevertheless, the Gaussian distribution assumption is too restrictive in practice. For example, the
gene expression values from high-throughput methods, even after being normalized, do not follow a
normal distribution [19, 26]. This leads to the inaccuracy in describing the dependency relationships
among genes. In order to address this problem, various semiparametric Gaussian graphical models
[21, 20] are proposed to relax the Gaussian distribution assumption.
On the other hand, it is well-known that the interactions in many types of networks can change under
various environmental and experimental conditions [1]. Take the genetic networks for example, two
genes may be positively conditionally dependent under some conditions but negatively conditionally
dependent under others. Therefore, in many cases, more attention is attracted not by a particular
individual network but rather by whether and how the network varies with genetic and environmental
alterations [6, 15]. This gives rise to differential networking analysis, which has emerged as an
important method in differential expression analysis of gene regulatory networks [9, 28].
In this paper, in order to conduct differential network analysis, we propose a Latent Differential Graph
Model (LDGM), where the networks under two different conditions are represented by two transelliptical distributions [20], i.e., T Ed (??X , ?; f1 , . . . , fd ) and T Ed (??Y , ?; g1 , . . . , gd ) respectively. Here
T Ed (??X , ?; f1 , . . . , fd ) denotes a d-dimensional transelliptical distribution with latent correlation
matrix ??X 2 Rd?d , and will be defined in detail in Section 3. More specifically, the connectivity
of the individual network is encoded by the latent precision matrix (e.g., ??X = (??X ) 1 ) of the
corresponding transelliptical distribution, such that [??X ]jk 6= 0 if and only if there is an edge
between the j-th node and the k-th node in the network. And the differential graph is defined as
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the difference between the two latent precision matrices, $\Delta^* = \Theta_Y^* - \Theta_X^*$. Our goal is to estimate $\Delta^*$ based on observations sampled from $TE_d(\Sigma_X^*, \xi; f_1, \ldots, f_d)$ and $TE_d(\Sigma_Y^*, \xi; g_1, \ldots, g_d)$. A simple procedure is estimating $\Theta_X^*$ and $\Theta_Y^*$ separately, followed by calculating their difference. However, it requires estimating $2d^2$ parameters (i.e., $\Theta_X^*$ and $\Theta_Y^*$), while our ultimate goal is only estimating $d^2$ parameters (i.e., $\Delta^*$). In order to overcome this problem, we assume that the difference of the two latent precision matrices, i.e., $\Delta^*$, is sparse and propose to directly estimate it by quasi likelihood maximization with nonconvex penalty. The nonconvex penalty is introduced in order to correct the intrinsic estimation bias incurred by convex penalty [10, 36]. We prove that, when the true differential graph is s-sparse, our estimator attains an $O(\sqrt{s_1/n} + \sqrt{s_2 \log d/n})$ convergence rate in terms of Frobenius norm, which is faster than the estimation error bound $O(\sqrt{s \log d/n})$ of the $\ell_{1,1}$ penalty based estimator in [38]. Here n is the sample size, $s_1$ is the number of entries in $\Delta^*$ with large magnitude, $s_2$ is the number of entries with small magnitude, and $s = s_1 + s_2$. We show that our method enjoys the oracle property under a very mild condition. Thorough numerical experiments on both synthetic and real-world data back up our theory.
The remainder of this paper is organized as follows: we review the related work in Section 2. We
introduce the proposed model and the non-convex penalty in Section 3, as well as the proposed
estimator. In Section 4, we present our main theories for estimation in semiparametric differential
graph models. Experiments on both synthetic and real world data are provided in Section 5. Section
6 concludes with discussion.
Notation. For $x = (x_1, \ldots, x_d)^\top \in \mathbb{R}^d$ and $0 < q < \infty$, we define the $\ell_0$, $\ell_q$ and $\ell_\infty$ vector norms as $\|x\|_0 = \sum_{i=1}^d \mathbb{1}(x_i \neq 0)$, $\|x\|_q = \big( \sum_{i=1}^d |x_i|^q \big)^{1/q}$, and $\|x\|_\infty = \max_{1 \le i \le d} |x_i|$, where $\mathbb{1}(\cdot)$ is the indicator function. For $A = (A_{ij}) \in \mathbb{R}^{d \times d}$, we define the matrix $\ell_{0,0}$, $\ell_{1,1}$, $\ell_{\infty,\infty}$ and $\ell_F$ norms as: $\|A\|_{0,0} = \sum_{i,j=1}^d \mathbb{1}(A_{ij} \neq 0)$, $\|A\|_{1,1} = \sum_{i,j=1}^d |A_{ij}|$, $\|A\|_{\infty,\infty} = \max_{1 \le i,j \le d} |A_{ij}|$, and $\|A\|_F = \sqrt{\sum_{ij} A_{ij}^2}$. The induced norm for a matrix is defined as $\|A\|_q = \max_{\|x\|_q = 1} \|Ax\|_q$, for $0 < q < \infty$. For a set of tuples S, $A_S$ denotes the set of numbers $[A_{(jk)}]_{(jk) \in S}$, and vec(S) is the vectorized index set of S.
2
Related Work
There exist several lines of research for differential network analysis. One natural procedure is to
estimate the two networks (i.e., two precision matrices) respectively by existing estimators such as
graphical Lasso [12] and node-wise regression [25]. Another family of methods jointly estimates
the two networks by assuming that they share common structural patterns and therefore uses joint
likelihood maximization with group lasso penalty or group bridge penalty [7, 8, 14]. Based on the
estimated precision matrices, the differential graph can be obtained by calculating their difference.
However, both of these two types of methods suffer from the drawback that they need to estimate
twice the number of parameters, and hence require roughly doubled observations to ensure the
estimation accuracy. In order to address this drawback, some methods are proposed to estimate the
difference of matrices directly [38, 35, 22, 11]. For example, [38] proposed a Dantzig selector type
estimator for estimating the difference of the precision matrices directly. [35] proposed a D-Trace
loss [37] based estimator for the difference of the precision matrices. Compared with [38, 35], our
estimator is advantageous in the following aspects: (1) our model relaxes the Gaussian assumption by
representing each network as a transelliptical distribution, while [38, 35] are restricted to Gaussian
distribution. Thus, our model is more general and robust; and (2) by employing nonconvex penalty,
our estimator achieves a sharper statistical rate than theirs. Rather than the Gaussian graphical model
or its semiparametric extension, [22, 11] studied the estimation of change in the dependency structure
between two high dimensional Ising models.
3
Semiparametric Differential Graph Models
In this section, we will first review the transelliptical distribution and present our semiparametric
differential graph model. Then we will present the estimator for differential graph, followed by the
introduction to nonconvex penalty.
3.1 Transelliptical Distribution
To briefly review the transelliptical distribution, we begin with the definition of elliptical distribution.
Definition 3.1 (Elliptical distribution). Let $\mu \in \mathbb{R}^d$ and $\Sigma^* \in \mathbb{R}^{d \times d}$ with $\mathrm{rank}(\Sigma^*) = q \le d$. A random vector $X \in \mathbb{R}^d$ follows an elliptical distribution, denoted by $EC_d(\mu, \Sigma^*, \xi)$, if it can be represented as $X = \mu + \xi A U$, where $A$ is a deterministic matrix satisfying $A^\top A = \Sigma^*$, $U$ is a random vector uniformly distributed on the unit sphere in $\mathbb{R}^q$, and $\xi \perp U$ is a random variable.
Motivated by the extension from Gaussian distribution to nonparanormal distribution [21], [20] proposed a semiparametric extension of elliptical distribution, which is called transelliptical distribution.
Definition 3.2 (Transelliptical distribution). A random vector $X = (X_1, X_2, \ldots, X_d)^\top \in \mathbb{R}^d$ is transelliptical, denoted by $TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, if there exists a set of monotone univariate functions $f_1, \ldots, f_d$ and a nonnegative random variable $\xi$, such that $(f_1(X_1), \ldots, f_d(X_d))^\top$ follows an elliptical distribution $EC_d(0, \Sigma^*, \xi)$.
3.2 Kendall's tau Statistic
In the semiparametric setting, the Pearson's sample covariance matrix can be inconsistent in estimating $\Sigma^*$. Given n independent observations $X_1, \ldots, X_n$, where $X_i = (X_{i1}, \ldots, X_{id})^\top \sim TE_d(\Sigma^*, \xi; f_1, \ldots, f_d)$, [20] proposed a rank-based estimator, the Kendall's tau statistic, to estimate $\Sigma^*$, due to its invariance under monotonic marginal transformations. The Kendall's tau estimator is defined as
$$\hat{\tau}_{jk} = \frac{2}{n(n-1)} \sum_{1 \le i < i' \le n} \mathrm{sign}\big( (X_{ij} - X_{i'j})(X_{ik} - X_{i'k}) \big). \quad (3.1)$$
It has been shown that $\hat{\tau}_{jk}$ is an unbiased estimator of $\tau_{jk} = 2/\pi \arcsin(\Sigma^*_{jk})$ [20], and the correlation matrix $\Sigma^*$ can be estimated by $\hat{\Sigma} = [\hat{\Sigma}_{jk}] \in \mathbb{R}^{d \times d}$, where
$$\hat{\Sigma}_{jk} = \sin\Big( \frac{\pi}{2} \hat{\tau}_{jk} \Big). \quad (3.2)$$
We use $T^*$ to denote the matrix with entries $\tau_{jk}$ and $\hat{T}$ the matrix with entries $\hat{\tau}_{jk}$, for $j, k = 1, \ldots, d$.
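As a concrete reference, the following minimal NumPy sketch implements (3.1) and (3.2); it is our own illustration, not code from the paper, and the helper names are hypothetical. An $O(n \log n)$ Kendall's tau routine (e.g., scipy.stats.kendalltau) would be preferable in practice.

```python
import numpy as np

def kendall_tau_matrix(X):
    """Pairwise Kendall's tau (3.1) for an (n, d) data matrix X.

    A simple O(n^2) sketch per pair of coordinates, for illustration only.
    """
    n, d = X.shape
    tau = np.eye(d)
    iu = np.triu_indices(n, k=1)  # all pairs i < i'
    for j in range(d):
        for k in range(j + 1, d):
            diff_j = X[:, None, j] - X[None, :, j]
            diff_k = X[:, None, k] - X[None, :, k]
            s = np.sign(diff_j * diff_k)[iu].sum()
            tau[j, k] = tau[k, j] = 2.0 * s / (n * (n - 1))
    return tau

def latent_correlation(X):
    """Plug-in estimate Sigma_hat_jk = sin(pi/2 * tau_hat_jk) from (3.2)."""
    return np.sin(0.5 * np.pi * kendall_tau_matrix(X))
```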
3.3 Latent Differential Graph Models and the Estimator
Now we are ready to formulate our differential graph model. Assume that d-dimensional random vectors X and Y satisfy $X \sim TE_d(\Sigma_X^*, \xi; f_1, \ldots, f_d)$ and $Y \sim TE_d(\Sigma_Y^*, \xi; g_1, \ldots, g_d)$. The differential graph is defined to be the difference of the two latent precision matrices,
$$\Delta^* = \Theta_Y^* - \Theta_X^*, \quad (3.3)$$
where $\Theta_X^* = (\Sigma_X^*)^{-1}$ and $\Theta_Y^* = (\Sigma_Y^*)^{-1}$. It immediately implies
$$\Sigma_X^* \Delta^* \Sigma_Y^* - (\Sigma_X^* - \Sigma_Y^*) = 0, \quad \text{and} \quad \Sigma_Y^* \Delta^* \Sigma_X^* - (\Sigma_X^* - \Sigma_Y^*) = 0. \quad (3.4)$$
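As a sanity check on (3.4), the identity can be verified numerically for arbitrary positive definite matrices. The following NumPy snippet is our own illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Sigma_X = A @ A.T + d * np.eye(d)   # arbitrary positive definite matrices
Sigma_Y = B @ B.T + d * np.eye(d)

Delta = np.linalg.inv(Sigma_Y) - np.linalg.inv(Sigma_X)  # (3.3)
# Both residuals in (3.4) vanish up to floating point error.
r1 = Sigma_X @ Delta @ Sigma_Y - (Sigma_X - Sigma_Y)
r2 = Sigma_Y @ Delta @ Sigma_X - (Sigma_X - Sigma_Y)
assert np.allclose(r1, 0) and np.allclose(r2, 0)
```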
Given i.i.d. copies $X_1, \ldots, X_{n_X}$ of X, and i.i.d. copies $Y_1, \ldots, Y_{n_Y}$ of Y, without loss of generality, we assume $n_X = n_Y = n$, and we denote the Kendall's tau correlation matrices defined in (3.2) by $\hat{\Sigma}_X$ and $\hat{\Sigma}_Y$. Following (3.4), a reasonable procedure for estimating $\Delta^*$ is to solve the following equation for $\Delta$:
$$\frac{1}{2} \hat{\Sigma}_X \Delta \hat{\Sigma}_Y + \frac{1}{2} \hat{\Sigma}_Y \Delta \hat{\Sigma}_X - (\hat{\Sigma}_X - \hat{\Sigma}_Y) = 0, \quad (3.5)$$
where we add up the two equations in (3.4) and replace the latent population correlation matrices $\Sigma_X^*, \Sigma_Y^*$ with the Kendall's tau estimators $\hat{\Sigma}_X, \hat{\Sigma}_Y$. Note that (3.5) is a Z-estimator [30], which can be translated into an M-estimator, by noticing that $\frac{1}{2} \hat{\Sigma}_X \Delta \hat{\Sigma}_Y + \frac{1}{2} \hat{\Sigma}_Y \Delta \hat{\Sigma}_X - (\hat{\Sigma}_X - \hat{\Sigma}_Y)$ can be seen as a score function of the following quasi log likelihood function
$$\ell(\Delta) = \frac{1}{2} \mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}\big( \Delta (\hat{\Sigma}_X - \hat{\Sigma}_Y) \big). \quad (3.6)$$
Let $S = \mathrm{supp}(\Delta^*)$; in this paper, we assume that $\Delta^*$ is sparse, i.e., $|S| \le s$ with $s > 0$. Based on (3.6), we propose to estimate $\Delta^*$ by the following M-estimator with nonconvex penalty
$$\hat{\Delta} = \mathop{\mathrm{argmin}}_{\Delta \in \mathbb{R}^{d \times d}} \; \frac{1}{2} \mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}\big( \Delta (\hat{\Sigma}_X - \hat{\Sigma}_Y) \big) + \mathcal{G}_\lambda(\Delta), \quad (3.7)$$
where $\lambda > 0$ is a regularization parameter and $\mathcal{G}_\lambda$ is a decomposable nonconvex penalty function, i.e., $\mathcal{G}_\lambda(\Delta) = \sum_{j,k=1}^d g_\lambda(\Delta_{jk})$, such as the smoothly clipped absolute deviation (SCAD) penalty [10] or the minimax concave penalty (MCP) [36]. The key property of the nonconvex penalty is that it can avoid over-penalization when the magnitude is very large. It has been shown in [10, 36, 33] that the nonconvex penalty is able to alleviate the estimation bias and attain a refined statistical rate of convergence. The nonconvex penalty $g_\lambda(\beta)$ can be further decomposed as the sum of the $\ell_1$ penalty and a concave component $h_\lambda(\beta)$, i.e., $g_\lambda(\beta) = \lambda |\beta| + h_\lambda(\beta)$. Take the MCP penalty for example. The corresponding $g_\lambda(\beta)$ and $h_\lambda(\beta)$ are defined as follows:
$$g_\lambda(\beta) = \lambda \int_0^{|\beta|} \Big( 1 - \frac{z}{\lambda b} \Big)_+ \, dz, \quad \text{for any } \beta \in \mathbb{R},$$
where $\lambda > 0$ is the regularization parameter and $b > 0$ is a fixed parameter, and
$$h_\lambda(\beta) = -\frac{\beta^2}{2b} \, \mathbb{1}(|\beta| \le b\lambda) + \Big( \frac{b\lambda^2}{2} - \lambda |\beta| \Big) \mathbb{1}(|\beta| > b\lambda).$$
In Section 4, we will show that the above family of nonconvex penalties satisfies certain common regularity conditions on $g_\lambda(\beta)$ as well as its concave component $h_\lambda(\beta)$.
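For concreteness, a minimal implementation of the MCP penalty and its decomposition follows; this is our own sketch (the names are hypothetical), with `lam` and `b` playing the roles of $\lambda$ and $b$ above.

```python
import numpy as np

def mcp(beta, lam, b):
    """MCP penalty g_lambda(beta) = lam * int_0^|beta| (1 - z/(lam*b))_+ dz."""
    a = np.abs(beta)
    # integral in closed form: lam*a - a^2/(2b) on [0, b*lam], constant after
    return np.where(a <= b * lam, lam * a - a**2 / (2 * b), 0.5 * b * lam**2)

def mcp_concave_part(beta, lam, b):
    """Concave component h_lambda(beta) = g_lambda(beta) - lam * |beta|."""
    a = np.abs(beta)
    return np.where(a <= b * lam, -a**2 / (2 * b), 0.5 * b * lam**2 - lam * a)
```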
We will show in the next section that when the parameters of the nonconvex penalty are appropriately chosen, (3.7) is an unconstrained convex optimization problem. Thus it can be solved by proximal gradient descent [4] very efficiently. In addition, it is easy to check that the estimator $\hat{\Delta}$ from (3.7) is symmetric, so it does not need the symmetrizing process adopted in [38], which can undermine the estimation accuracy.
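A minimal proximal gradient sketch for (3.7) is given below; it is our own illustration under stated assumptions, not the authors' released code. The gradient of the smooth part is $\frac{1}{2}(\hat{\Sigma}_X \Delta \hat{\Sigma}_Y + \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - (\hat{\Sigma}_X - \hat{\Sigma}_Y)$ (the score function in (3.5)), and the sketch assumes the step size `eta` is below $1/(\lambda_{\max}(\hat{\Sigma}_X)\lambda_{\max}(\hat{\Sigma}_Y))$ and below the MCP parameter `b` so the elementwise proximal step has the closed form used here.

```python
import numpy as np

def mcp_prox(z, eta, lam, b):
    """Closed-form proximal operator of the MCP penalty (assumes b > eta)."""
    a = np.abs(z)
    return np.where(a <= eta * lam, 0.0,
           np.where(a <= b * lam,
                    np.sign(z) * (a - eta * lam) / (1.0 - eta / b),
                    z))

def ldgm_mcp(S_X, S_Y, lam, b=3.0, eta=None, iters=500):
    """Proximal gradient descent for the M-estimator (3.7); a sketch only."""
    d = S_X.shape[0]
    if eta is None:  # conservative step size from the largest eigenvalues
        eta = 1.0 / (np.linalg.eigvalsh(S_X)[-1] * np.linalg.eigvalsh(S_Y)[-1])
    Delta = np.zeros((d, d))
    for _ in range(iters):
        grad = 0.5 * (S_X @ Delta @ S_Y + S_Y @ Delta @ S_X) - (S_X - S_Y)
        Delta = mcp_prox(Delta - eta * grad, eta, lam, b)
    return Delta
```

Note that the iterates stay symmetric whenever the inputs are symmetric, matching the remark above.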
4 Main Theory
In this section, we present our main theories. Let $S = \mathrm{supp}(\Delta^*)$ be the support of the true differential graph. We introduce the following oracle estimator of $\Delta^*$:
$$\hat{\Delta}_O = \mathop{\mathrm{argmin}}_{\mathrm{supp}(\Delta) \subseteq S} \ell(\Delta), \quad (4.1)$$
where $\ell(\Delta) = \frac{1}{2} \mathrm{tr}(\Delta \hat{\Sigma}_Y \Delta \hat{\Sigma}_X) - \mathrm{tr}\big( \Delta (\hat{\Sigma}_X - \hat{\Sigma}_Y) \big)$. The oracle estimator $\hat{\Delta}_O$ is not a practical estimator, since we do not know the true support in practice. An estimator is said to have the oracle property if it is identical to the oracle estimator $\hat{\Delta}_O$ under certain conditions. We will show that our estimator enjoys the oracle property under a mild condition.
We first lay out some assumptions that are required throughout our analysis.
Assumption 4.1. There exist constants $\kappa_1, \kappa_2 > 0$ such that $\kappa_1 \le \lambda_{\min}(\Sigma_X^*) \le \lambda_{\max}(\Sigma_X^*) \le 1/\kappa_1$ and $\kappa_2 \le \lambda_{\min}(\Sigma_Y^*) \le \lambda_{\max}(\Sigma_Y^*) \le 1/\kappa_2$. The true covariance matrices have bounded $\ell_\infty$ norm, i.e., $\|\Sigma_X^*\|_\infty \le \gamma_X$, $\|\Sigma_Y^*\|_\infty \le \gamma_Y$, where $\gamma_X, \gamma_Y > 0$ are constants. And the true precision matrices have bounded matrix $\ell_1$-norm, i.e., $\|\Theta_X^*\|_1 \le \theta_X$ and $\|\Theta_Y^*\|_1 \le \theta_Y$, where $\theta_X, \theta_Y > 0$ are constants.
The first part of Assumption 4.1 requires that the smallest eigenvalues of the correlation matrices $\Sigma_X^*, \Sigma_Y^*$ are bounded away from zero, and their largest eigenvalues are finite. This assumption is commonly imposed in the literature for the analysis of graphical models [21, 27].
Assumption 4.2. The true difference matrix $\Delta^* = (\Sigma_Y^*)^{-1} - (\Sigma_X^*)^{-1}$ has s nonzero entries, i.e., $\|\Delta^*\|_{0,0} \le s$, and has bounded $\ell_{\infty,\infty}$ norm, i.e., $\|\Delta^*\|_{\infty,\infty} \le M$, where $M > 0$ does not depend on d.
Assumption 4.2 requires the differential graph to be sparse. This is reasonable in differential network analysis where the networks only vary slightly under different conditions.
The next assumption is about regularity conditions on the nonconvex penalty $g_\lambda(\beta)$. Recall that $g_\lambda(\beta)$ can be written as $g_\lambda(\beta) = \lambda |\beta| + h_\lambda(\beta)$.
Assumption 4.3. $g_\lambda(\beta)$ and its concave component $h_\lambda(\beta)$ satisfy:
(a) There exists a constant $\nu$ such that $g'_\lambda(\beta) = 0$ for $|\beta| \ge \nu \lambda$.
(b) There exists a constant $\zeta \ge 0$ such that $h_\lambda(\beta) + \frac{\zeta}{2} \beta^2$ is convex.
(c) $h_\lambda(\beta)$ and $h'_\lambda(\beta)$ pass through the origin, i.e., $h_\lambda(0) = h'_\lambda(0) = 0$.
(d) $h'_\lambda(\beta)$ is bounded, i.e., $|h'_\lambda(\beta)| \le \lambda$ for any $\beta$.
Similar assumptions have been made in [23, 33]. Note that condition (b) in Assumption 4.3 is weaker than the smoothness condition in [33], since here it does not require $h_\lambda(\beta)$ to be twice differentiable. Assumption 4.3 holds for a variety of nonconvex penalty functions including MCP and SCAD. In particular, the MCP penalty satisfies Assumption 4.3 with $\nu = b$ and $\zeta = 1/b$. Furthermore, according to condition (b), if $\zeta$ is smaller than the modulus of the restricted strong convexity of $\ell(\Delta)$, (3.7) will become a convex optimization problem, even though $\mathcal{G}_\lambda(\Delta)$ is nonconvex. Take MCP for example: this can be achieved by choosing a sufficiently large b in MCP such that $\zeta$ is small enough.
Now we are ready to present our main theories. We first show that under a large magnitude condition on the nonzero entries of the true differential graph $\Delta^*$, our estimator attains a faster convergence rate, which matches the minimax rate in the classical regime.
Theorem 4.4. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $\mathcal{G}_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. If the nonzero entries of $\Delta^*$ satisfy $\min_{(j,k) \in S} |\Delta^*_{jk}| \ge \nu\lambda + C \theta_X^2 \theta_Y^2 \gamma_X \gamma_Y M \sqrt{\log s / n}$, then for the estimator in (3.7) with the regularization parameter $\lambda = 2CM\sqrt{\log d / n}$ and $\zeta \le \kappa_1 \kappa_2 / 2$, we have that
$$\|\hat{\Delta} - \Delta^*\|_{\infty,\infty} \le 2\sqrt{10}\, \nu\, \theta_X^2 \theta_Y^2 \gamma_X \gamma_Y M \sqrt{\frac{\log s}{n}}$$
holds with probability at least $1 - 2/s$. Furthermore, we have that
$$\|\hat{\Delta} - \Delta^*\|_F \le \frac{C_1 M}{\kappa_1 \kappa_2} \sqrt{\frac{s}{n}}$$
holds with probability at least $1 - 3/s$, where $C_1$ is an absolute constant.
Remark 4.5. Theorem 4.4 suggests that under the large magnitude assumption, the statistical rate of our estimator is $O(\sqrt{s/n})$ in terms of Frobenius norm. This is faster than the rate $O(\sqrt{s \log d / n})$ in [38], which matches the minimax lower bound for sparse differential graph estimation. Note that our faster rate is not contradictory to the minimax lower bound, because we restrict ourselves to a smaller class of differential graphs, where the magnitude of the nonzero entries is sufficiently large.
We further show that our estimator achieves the oracle property under mild conditions.
Theorem 4.6. Under the same conditions as Theorem 4.4, for the estimator $\hat{\Delta}$ in (3.7) and the oracle estimator $\hat{\Delta}_O$ in (4.1), we have with probability at least $1 - 3/s$ that $\hat{\Delta} = \hat{\Delta}_O$, which further implies $\mathrm{supp}(\hat{\Delta}) = \mathrm{supp}(\hat{\Delta}_O) = \mathrm{supp}(\Delta^*)$.
Theorem 4.6 suggests that our estimator is identical to the oracle estimator in (4.1) with high probability, when the nonzero entries in $\Delta^*$ satisfy $\min_{(j,k) \in S} |\Delta^*_{jk}| \ge \nu\lambda + C \theta_X^2 \theta_Y^2 \gamma_X \gamma_Y M \sqrt{\log s / n}$. This condition is optimal up to the logarithmic factor $\sqrt{\log s}$.
Now we turn to the general case where the nonzero entries of $\Delta^*$ have both large and small magnitudes. Define $S^c = \{(j,k) : j, k = 1, \ldots, d\} \setminus S$, $S_1 = \{(j,k) \in S : |\Delta^*_{jk}| > \nu\lambda\}$, and $S_2 = \{(j,k) \in S : |\Delta^*_{jk}| \le \nu\lambda\}$. Denote $|S_1| = s_1$ and $|S_2| = s_2$. Clearly, we have $s = s_1 + s_2$.
Theorem 4.7. Suppose Assumptions 4.1 and 4.2 hold, and the nonconvex penalty $\mathcal{G}_\lambda(\Delta)$ satisfies the conditions in Assumption 4.3. For the estimator in (3.7) with the regularization parameter $\lambda = 2CM\sqrt{\log d / n}$ and $\zeta \le \kappa_1 \kappa_2 / 4$, we have that
$$\|\hat{\Delta} - \Delta^*\|_F \le \frac{16\sqrt{3}\, \nu M}{\kappa_1 \kappa_2} \sqrt{\frac{s_1}{n}} + \frac{10 \nu M C}{\kappa_1 \kappa_2} \sqrt{\frac{s_2 \log d}{n}}$$
holds with probability at least $1 - 3/s_1$, where $C$ is an absolute constant.
Remark 4.8. Theorem 4.7 indicates that even when the large magnitude condition does not hold, our estimator is still able to attain a faster rate. Specifically, for those nonzero entries of $\Delta^*$ with large magnitude, the estimation error bound in terms of Frobenius norm is $O(\sqrt{s_1/n})$, which is the same as the bound in Theorem 4.4. For those nonzero entries of $\Delta^*$ with small magnitude, the estimation error is $O(\sqrt{s_2 \log d / n})$, which matches the convergence rate in [38]. Overall, our estimator obtains a refined rate of convergence $O(\sqrt{s_1/n} + \sqrt{s_2 \log d / n})$, which is faster than [38]. In particular, if $s_2 = 0$, the refined convergence rate in Theorem 4.7 reduces to the faster rate in Theorem 4.4.
5 Experiments
In this section, we test our method on both synthetic and real world data. We conducted experiments for our estimator using both SCAD and MCP penalties. We did not find any significant difference in the results and thus we only report the results of our estimator with the MCP penalty. To choose the tuning parameters $\lambda$ and $b$, we adopt 5-fold cross-validation. Denoting our estimator with MCP penalty by LDGM-MCP, we compare it with the following methods: (1) SepGlasso: estimating the latent precision matrices separately using graphical Lasso and Kendall's tau correlation matrices [20], followed by calculating their difference; (2) DPM: directly estimating the differential precision matrix [38]. In addition, we also test the differential graph model with $\ell_{1,1}$ penalty, denoted by LDGM-L1. Note that LDGM-L1 is a special case of our method, since the $\ell_{1,1}$ norm penalty is a special case of the MCP penalty when $b = \infty$. The LDGM-MCP and LDGM-L1 estimators are obtained by the proximal gradient descent algorithm [4]. The implementation of the DPM estimator is obtained from the author's website, and the SepGlasso estimator is implemented by graphical Lasso.
5.1 Simulations
We first show the results on synthetic data. Since the transelliptical distribution includes the Gaussian distribution, it is natural to show that our approach also works well for the latter. We consider the dimension settings n = 100, d = 100 and n = 200, d = 400 respectively. Specifically, data are generated as follows: (1) For the Gaussian distribution, we generate data $\{X_i\}_{i=1}^n \sim N(0, \Sigma_X^*)$ and $\{Y_i\}_{i=1}^n \sim N(0, \Sigma_Y^*)$ with precision matrices $(\Sigma_X^*)^{-1}$ and $(\Sigma_Y^*)^{-1}$ generated by the huge package.¹ (2) For the transelliptical distribution, we consider the following generating scheme: $\{X_i\}_{i=1}^n \sim TE_d(\Sigma_X^*, \xi; f_1, \ldots, f_d)$, $\{Y_i\}_{i=1}^n \sim TE_d(\Sigma_Y^*, \xi; g_1, \ldots, g_d)$, where $\xi \sim \chi_d$, $f_1^{-1}(\cdot) = \ldots = f_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^3$ and $g_1^{-1}(\cdot) = \ldots = g_d^{-1}(\cdot) = \mathrm{sign}(\cdot)|\cdot|^{1/2}$. The latent precision matrices $(\Sigma_X^*)^{-1}$ and $(\Sigma_Y^*)^{-1}$ are generated in the same way as the Gaussian data. For both Gaussian and transelliptical differential graph models, we consider two settings for the individual graph structures: (1) both $(\Sigma_X^*)^{-1}$ and $(\Sigma_Y^*)^{-1}$ have "random" structures; (2) $(\Sigma_X^*)^{-1}$ has a "band" structure, $(\Sigma_Y^*)^{-1}$ has a "random" structure.
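The transelliptical sampling scheme above can be sketched as follows; this is our own illustration (the function name and the identity precision matrix are placeholders — in the experiments the precision matrices come from the huge package). Per Definition 3.2, the latent variables $(f_1(X_1), \ldots, f_d(X_d))$ are elliptical, so the observed data are obtained by applying $f^{-1}$ to elliptical draws.

```python
import numpy as np

def sample_transelliptical(n, Theta, finv, rng):
    """Draw n samples X such that (f_1(X_1),...,f_d(X_d)) ~ EC_d(0, Sigma, xi)
    with xi ~ chi_d, mirroring the generating scheme in Section 5.1."""
    d = Theta.shape[0]
    Sigma = np.linalg.inv(Theta)
    # normalize to a correlation matrix, since Sigma* is a latent correlation
    scale = np.sqrt(np.diag(Sigma))
    Sigma = Sigma / np.outer(scale, scale)
    L = np.linalg.cholesky(Sigma)                  # L @ L.T = Sigma
    U = rng.standard_normal((n, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # uniform on the unit sphere
    xi = np.sqrt(rng.chisquare(df=d, size=(n, 1))) # xi ~ chi_d
    Z = xi * (U @ L.T)                             # elliptical latent data
    return finv(Z)                                 # marginal transforms

rng = np.random.default_rng(1)
X = sample_transelliptical(100, np.eye(100),
                           lambda z: np.sign(z) * np.abs(z) ** 3, rng)
```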
Given an estimator $\hat{\Delta}$, we define the true positive and true negative rates of $\hat{\Delta}$ as
$$\mathrm{TP} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat{\Delta}_{jk} \neq 0 \text{ and } \Delta^*_{jk} \neq 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} \neq 0)}, \qquad \mathrm{TN} = \frac{\sum_{j,k=1}^d \mathbb{1}(\hat{\Delta}_{jk} = 0 \text{ and } \Delta^*_{jk} = 0)}{\sum_{j,k=1}^d \mathbb{1}(\Delta^*_{jk} = 0)}.$$
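These rates are straightforward to compute from an estimate; a minimal sketch (our own, with a small tolerance for numerical zeros) is:

```python
import numpy as np

def support_rates(Delta_hat, Delta_star, tol=1e-8):
    """True positive / true negative rates of the estimated support."""
    est = np.abs(Delta_hat) > tol
    true = np.abs(Delta_star) > tol
    tp = np.logical_and(est, true).sum() / max(true.sum(), 1)
    tn = np.logical_and(~est, ~true).sum() / max((~true).sum(), 1)
    return tp, tn
```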
The receiver operating characteristic (ROC) curves for transelliptical differential graph models are shown in Figure 1, which reports the performance of the different methods on support recovery. The ROC curves were plotted by averaging the results over 10 repetitions. From Figure 1 we can see our estimator (LDGM-MCP) outperforms the other methods in all settings. In addition, LDGM-L1, as a special case of our estimator, also performs better than DPM and SepGlasso, although it is inferior to LDGM-MCP because the MCP penalty can correct the bias in the estimation and achieve a faster rate of convergence. Note that SepGlasso's performance is poor since it highly depends on the sparsity of both individual graphs. When n > 100, the DPM method failed to output the solution in one day and thus no result was presented. This computational burden is also stated in their paper. We use the Frobenius norm $\|\hat{\Delta} - \Delta^*\|_F$ and infinity norm $\|\hat{\Delta} - \Delta^*\|_{\infty,\infty}$ of the estimation errors to evaluate the performance of the different methods in estimation. The results averaged over 10 replicates for the transelliptical differential graph are summarized in Tables 1 and 2 respectively. Our estimator also achieves smaller error than the other baselines in all settings. Due to space limits, we defer the experiment results for the Gaussian differential graph model to the appendix.
¹ Available on http://cran.r-project.org/web/packages/huge
[Figure 1 appears here: four panels of ROC curves, plotting TP against 1-TN, for SepGlasso, DPM, LDGM-L1 and LDGM-MCP.]
(a) Setting 1: n=100, d=100; (b) Setting 2: n=100, d=100; (c) Setting 1: n=200, d=400; (d) Setting 2: n=200, d=400.
Figure 1: ROC curves for transelliptical differential graph models of all the 4 methods. There are two settings of graph structure. Note that DPM is not scalable to d = 400.
Table 1: Comparisons of estimation errors in Frobenius norm $\|\hat{\Delta} - \Delta^*\|_F$ for transelliptical differential graph models. N/A means the algorithm did not output the solution in one day.

Methods   | n=100, d=100, Setting 1 | n=100, d=100, Setting 2 | n=200, d=400, Setting 1 | n=200, d=400, Setting 2
SepGlasso | 13.5730±0.6376          | 25.6664±0.6967          | 22.1760±0.3839          | 39.9847±0.1856
DPM       | 12.7219±0.3704          | 23.0548±0.2669          | N/A                     | N/A
LDGM-L1   | 12.0738±0.4955          | 22.3748±0.6643          | 20.6537±0.3778          | 31.7630±0.0715
LDGM-MCP  | 11.2831±0.3919          | 19.6154±0.5106          | 20.1071±0.4303          | 28.8676±0.1425
Table 2: Comparisons of estimation errors in infinity norm $\|\hat{\Delta} - \Delta^*\|_{\infty,\infty}$ for transelliptical differential graph models. N/A means the algorithm did not output the solution in one day.

Methods   | n=100, d=100, Setting 1 | n=100, d=100, Setting 2 | n=200, d=400, Setting 1 | n=200, d=400, Setting 2
SepGlasso | 2.7483±0.0575           | 8.0522±0.1423           | 2.1409±0.0906           | 6.0108±0.1925
DPM       | 2.3138±0.0681           | 6.3250±0.0560           | N/A                     | N/A
LDGM-L1   | 2.2193±0.0850           | 6.0716±0.1150           | 1.8876±0.0907           | 5.1858±0.0218
LDGM-MCP  | 1.7010±0.0149           | 4.6522±0.1337           | 1.7339±0.0061           | 4.0133±0.0521

5.2 Experiments on Real World Data
We applied our approach to the same gene expression data used in [38], which were collected from
patients with stage III or IV ovarian cancer. [29] identified six molecular subtypes of ovarian cancer
in this data, labeled C1 through C6. In particular, the C1 subtype was found to have much shorter
survival times, and was characterized by differential expression of genes associated with stromal and
immune cell types. In this experiment, we intended to investigate whether the C1 subtype was also
associated with the genetic differential networks. The subjects were divided into two groups: Group
1 with n1 = 78 patients containing C1 subtype, and Group 2 with n2 = 113 patients containing
C2 through C6 subtypes. We analyzed two pathways from the KEGG pathway database [16, 17]
respectively. In each pathway, we applied different methods to determine whether there is any
difference in the conditional dependency relationships of the gene expression levels between the
aforementioned Group 1 and Group 2. Two genes were connected in the differential network if their
conditional dependency relationship given the others changed in either magnitude or sign. In order to
obtain a clear view of the differential graph, we only plotted genes whose conditional dependency
with others changed between the two groups. To interpret the results, the genes associated with more
edges in the differential networks were considered to be more important.
Figure 2 shows the results of estimation for the differential graph of the TGF-β pathway, where the number of genes d = 80 is greater than $n_1$, the sample size of Group 1. LDGM-MCP identified two important genes, COMP and THBS2, both of which have been suggested to be related to resistance to platinum-based chemotherapy in epithelial ovarian cancer by [24]. LDGM-L1 suggested that COMP was important, and DPM also suggested COMP and THBS2. Separate estimation (SepGlasso) gave a relatively dense network, which made it hard to say which genes are more important.
[Figure 2 appears here: four panels of estimated differential networks over genes including COMP, THBS2, THBS1, ID1-ID4, INHBA, BMP4, BMP7, BMPR1B, PITX2, DCN, SMAD7 and CDKN2B; panels (a) SepGlasso, (b) DPM, (c) LDGM-L1, (d) LDGM-MCP.]
Figure 2: Estimates of the differential networks between Group 1 and Group 2. Dataset: KEGG 04350, TGF-β pathway.
[Figure 3 appears here: four panels of estimated differential networks over genes including TNFSF10, BIRC3, FAS, TP53, ENDOG, AIFM1, PIK3R1, IL1B, IL1R1, PRKAR2B and CSF2RB; panels (a) SepGlasso, (b) DPM, (c) LDGM-L1, (d) LDGM-MCP.]
Figure 3: Estimates of the differential networks between Group 1 and Group 2. Dataset: KEGG 04210, Apoptosis pathway.
Figure 3 shows the results for the Apoptosis pathway, where the number of genes d = 87 is also greater than $n_1$. LDGM-MCP indicated that TNFSF10 and BIRC3 were the most important. Indeed, both TNFSF10 and BIRC3 have been widely studied for use as a therapeutic target in cancer [5, 32]. LDGM-L1 and DPM also suggested TNFSF10 and BIRC3 were important. The results of LDGM-MCP, LDGM-L1 and DPM are comparable. In order to overcome the nonsparsity issue encountered in the TGF-β experiment, the SepGlasso estimator was thresholded more than the other methods. However, it still performed poorly and identified the wrong gene CSF2RB.
6
Conclusions
In this paper, we propose a semiparametric differential graph model and an estimator for the differential graph based on quasi likelihood maximization. We employ a nonconvex penalty in our estimator,
which results in a faster rate for parameter estimation than existing methods. We also prove that the
proposed estimator achieves oracle property under a mild condition. Experiments on both synthetic
and real world data further support our theory.
Acknowledgments We would like to thank the anonymous reviewers for their helpful comments.
Research was supported by NSF grant III-1618948.
References
[1] Bandyopadhyay, S., Mehta, M., Kuo, D., et al. (2010). Rewiring of genetic networks in response to DNA damage. Science 330 1385-1389.
[2] Barber, R. F. and Kolar, M. (2015). Rocket: Robust confidence intervals via Kendall's tau for transelliptical graphical models. arXiv preprint arXiv:1502.07641.
[3] Basso, K., Margolin, A. A., Stolovitzky, G., Klein, U., Dalla-Favera, R. and Califano, A. (2005). Reverse engineering of regulatory networks in human B cells. Nature Genetics 37 382-390.
[4] Beck, A. and Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2 183-202.
[5] Bellail, A. C., Qi, L., et al. (2009). TRAIL agonists on clinical trials for cancer therapy: the promises and the challenges. Reviews on Recent Clinical Trials 4 34-41.
[6] Carter, S. L., Brechbühler, C. M., et al. (2004). Gene co-expression network topology provides a framework for molecular characterization of cellular state. Bioinformatics 20 2242-2250.
[7] Chiquet, J., Grandvalet, Y. and Ambroise, C. (2011). Inferring multiple graphical structures. Statistics and Computing 21 537-553.
[8] Danaher, P., Wang, P. and Witten, D. M. (2014). The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society: Series B 76 373-397.
[9] De la Fuente, A. (2010). From 'differential expression' to 'differential networking' - identification of dysfunctional regulatory networks in diseases. Trends in Genetics 26 326-333.
[10] Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96 1348-1360.
[11] Fazayeli, F. and Banerjee, A. (2016). Generalized direct change estimation in Ising model structure. arXiv preprint arXiv:1606.05302.
[12] Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432-441.
[13] Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations (3rd ed.). Johns Hopkins University Press, Baltimore, MD, USA.
[14] Guo, J., Levina, E., Michailidis, G. and Zhu, J. (2011). Joint estimation of multiple graphical models. Biometrika asq060.
[15] Hudson, N. J., Reverter, A. and Dalrymple, B. P. (2009). A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation. PLoS Comput Biol 5 e1000382.
[16] Kanehisa, M. and Goto, S. (2000). KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Research 28 27-30.
[17] Kanehisa, M., Goto, S., Sato, Y., Furumichi, M. and Tanabe, M. (2011). KEGG for integration and interpretation of large-scale molecular data sets. Nucleic Acids Research gkr988.
[18] Lauritzen, S. L. (1996). Graphical Models. Clarendon Press.
[19] Li, P., Piao, Y., Shon, H. S. and Ryu, K. H. (2015). Comparing the normalization methods for the differential analysis of Illumina high-throughput RNA-seq data. BMC Bioinformatics 16 1.
[20] Liu, H., Han, F. and Zhang, C.-H. (2012). Transelliptical graphical models. In NIPS.
[21] Liu, H., Lafferty, J. and Wasserman, L. (2009). The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research 10 2295-2328.
[22] Liu, S., Suzuki, T. and Sugiyama, M. (2014). Support consistency of direct sparse-change learning in Markov networks. arXiv preprint arXiv:1407.0581.
[23] Loh, P.-L. and Wainwright, M. J. (2013). Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. In NIPS.
[24] Marchini, S., et al. (2013). Resistance to platinum-based chemotherapy is associated with epithelial to mesenchymal transition in epithelial ovarian cancer. European Journal of Cancer 49 520-530.
[25] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. The Annals of Statistics 1436-1462.
[26] Oshlack, A., Robinson, M. D., Young, M. D., et al. (2010). From RNA-seq reads to differential expression results. Genome Biology 11 220.
[27] Ravikumar, P., Wainwright, M. J., Raskutti, G., Yu, B., et al. (2011). High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics 5 935-980.
[28] Tian, D., Gu, Q. and Ma, J. (2016). Identifying gene regulatory network using latent differential graphical models. Nucleic Acids Research 44 e140.
[29] Tothill, R. W., Tinker, A. V., George, J., et al. (2008). Novel molecular subtypes of serous and endometrioid ovarian cancer linked to clinical outcome. Clinical Cancer Research 14 5198-5208.
[30] Van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press, Cambridge, UK.
[31] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
[32] Vucic, D. and Fairbrother, W. J. (2007). The inhibitor of apoptosis proteins as therapeutic targets in cancer. Clinical Cancer Research 13 5995-6000.
[33] Wang, Z., Liu, H. and Zhang, T. (2014). Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics 42 2164.
[34] Wegkamp, M. and Zhao, Y. (2013). Adaptive estimation of the copula correlation matrix for semiparametric elliptical copulas. arXiv preprint arXiv:1305.6526.
[35] Yuan, H., Xi, R. and Deng, M. (2015). Differential network analysis via the lasso penalized D-trace loss. arXiv preprint arXiv:1511.09188.
[36] Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 894-942.
[37] Zhang, T. and Zou, H. (2014). Sparse precision matrix estimation via lasso penalized D-trace loss. Biometrika ast059.
[38] Zhao, S. D., Cai, T. T. and Li, H. (2014). Direct estimation of differential networks. Biometrika 101 253-268.
6,113 | 653 | Automatic Capacity Tuning
of Very Large VC-dimension Classifiers
I. Guyon
AT&T Bell Labs,
50 Fremont st., 6th floor,
San Francisco, CA 94105
[email protected]
B. Boser?
EECS Department,
University of California,
Berkeley, CA 94720
[email protected]
V. Vapnik
AT&T Bell Labs,
Room 4G-314,
Holmdel, NJ 07733
[email protected]
Abstract
Large VC-dimension classifiers can learn difficult tasks, but are usually
impractical because they generalize well only if they are trained with huge
quantities of data. In this paper we show that even high-order polynomial
classifiers in high dimensional spaces can be trained with a small amount
of training data and yet generalize better than classifiers with a smaller
VC-dimension. This is achieved with a maximum margin algorithm (the
Generalized Portrait). The technique is applicable to a wide variety of
classifiers, including Perceptrons, polynomial classifiers (sigma-pi unit networks) and Radial Basis Functions. The effective number of parameters is
adjusted automatically by the training algorithm to match the complexity
of the problem. It is shown to equal the number of those training patterns
which are closest patterns to the decision boundary (supporting patterns).
Bounds on the generalization error and the speed of convergence of the algorithm are given. Experimental results on handwritten digit recognition
demonstrate good generalization compared to other algorithms.
1
INTRODUCTION
Both experimental evidence and theoretical studies [1] link the generalization of a
classifier to the error on the training examples and the capacity of the classifier.
?Part of this work was done while B. Boser was at AT&T Bell Laboratories. He is now
at the University of California, Berkeley.
Classifiers with a large number of adjustable parameters, and therefore large capacity, likely learn the training set without error, but exhibit poor generalization.
Conversely, a classifier with insufficient capacity might not be able to learn the task
at all. The goal of capacity tuning methods is to find the optimal capacity which
minimizes the expected generalization error for a given amount of training data.
Capacity tuning techniques include: starting with a low capacity system and allocating more parameters as needed, or starting with a large capacity system and eliminating unnecessary adjustable parameters with regularization. The first method
requires searching in the space of classifier structures which possibly contains many
local minima. The second method is computationally inefficient since it does not
avoid adjusting a large number of parameters although the effective number of parameters may be small.
With the method proposed in this paper, the capacity of some very large VCdimension classifiers is adjusted automatically in the process of training. The problem is formulated as a quadratic programming problem which has a single global
minimum. Only the effective parameters get adjusted during training which ensures
compu tational efficiency.
1.1
MAXIMUM MARGIN AND SUPPORTING PATTERNS
Here is a familiar problem: Given is a limited number of training examples from two
classes A and B; find the linear decision boundary which yields best generalization
performance. When the training data is scarce, there usually exist many errorless
separations (figure 1.1). This is especially true when the dimension of input space
(i.e. the number of tunable parameters) is large compared to the number of training
examples. The question arises which of these solutions to choose? The one solution
that achieves the largest possible margin between the decision boundary and the
training patterns (figure 1.2) is optimal in the "minimax" sense [2] (see section 2.2).
This choice is intuitively justifiable: a new example from class A is likely to fall
within or near the convex envelope of the examples of class A (and similarly for
class B). By providing the largest possible "safety" margin, we minimize the chances
that examples from class A and B cross the border to the wrong side.
An important property of the maximum margin solution is that it is only dependent upon a restricted number of training examples, called supporting patterns (or
informative patterns). These are those examples which lie on the margin and therefore are closest to the decision boundary (figure 1.2). The number m of linearly
independent supporting patterns satisfies the inequality:
$$m \le \min(N + 1, p). \quad (1)$$
In this inequality, (N + 1) is the number of adjustable parameters and equals the
Vapnik-Chervonenkis dimension (VC-dimension) [2], and p is the number of training
examples. In reference [3], we show that the generalization error is bounded by m/p
and therefore m is a measure of complexity of the learning problem. Because m is
bounded by p and is generally a lot smaller than p, the maximum margin solution
obtains good generalization even when the problem is grossly underdetermined,
i.e. the number of training patterns p is much smaller than the number of adjustable
parameters, N + 1. In section 2.3 we show that the existence of supporting patterns
is advantageous for computational reasons as well.
[Figure 1 appears here: two panels, (1) and (2), showing training patterns from classes A and B and candidate linear decision boundaries.]
Figure 1: Linear separations.
(1) When many linear decision rules separate the training set, which one to choose?
(2) The maximum margin solution. The distance to the decision boundary of the
closest training patterns is maximized. The grey shading indicates the margin area
in which no pattern falls. The supporting patterns (in white) lie on the margin.
1.2
NON-LINEAR CLASSIFIERS
Although algorithms that maximize the margin between classes have been known
for many years [4, 2], they have for computational reasons so far been limited to the
special case of finding linear separations and consequently to relatively simple classification problems. In this paper, we present an extension to one of these maximum
margin training algorithms called the "Generalized Portrait Method" (G P) [2] to
various non-linear classifiers, including including Perceptrons, polynomial classifiers
(sigma-pi unit networks) and kernel classifiers (Radial Basis Functions) (figure 2).
The new algorithm trains efficiently very high VC-dimension classifiers with a huge
number of tunable parameters. Despite the large number of free parameters, the
solution exhibits good generalization due to the inherent regularization of the maximum margin cost function.
As an example, let us consider the case of a second order polynomial classifier. Its
decision surface is described by the following equation:
2::::: WiXi + 2::::: WijXiXj + b =
i
O.
(2)
i,j
he Wi, Wij and b are adjustable parameters, and Xi are the coordinates of a pattern
x. If n is the dimension of input pattern x, the number of adjustable parameters
of the second order polynomial classifier is [n( n + 1)/2] + 1. In general, the number
of adjustable parameters of a qth order polynomial is of the order of N ~ n q ?
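The growth of N with q is easy to tabulate. As one illustration (our own sketch, not the paper's exact accounting), counting all monomials of degree at most q in n variables gives $\binom{n+q}{q}$, which grows on the order of $n^q$:

```python
from math import comb

def num_poly_parameters(n, q):
    """Number of monomials of degree 0..q in n inputs: C(n + q, q).

    One natural parameter count for a q-th order polynomial classifier;
    it grows roughly like n^q for fixed q.
    """
    return comb(n + q, q)

# For n = 256 inputs, the count explodes quickly with q:
for q in range(1, 6):
    print(q, num_poly_parameters(256, q))
```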
The G P algorithm has been tested on the problem of handwritten digit recognition.
The input patterns consist of 16 X 16 pixel images (n = 256). The results achieved
q     | 1     | 2      | 3      | 4      | 5
N     | 256   | 3·10^4 | 8·10^7 | 4·10^9 | 1·10^12
error | 10.5% | 5.8%   | 5.2%   | 4.9%   | 5.2%
Table 1: Handwritten digit recognition experiments. The first database
(DB1) consists of 1200 clean images recorded from ten subjects. Half of this data
is used for training, and the other half is used to evaluate the generalization performance. The other database (DB2) consists of 7300 images for training and 2000
for testing and has been recorded from actual mail pieces. We use ten polynomial
classification functions of order q, separating one class against all others. We list the
number N of adjustable parameters, the error rates on the test set and the average
number <m>of supporting patterns per separating hypersurface. The results compare favorably to neural network classifiers which minimize the mean squared error
with backpropagation. For the one layer network (linear classifier),the error on the
test set is 12.7 % on DB1 and larger than 25 % on DB2. The lowest error rate for
DB2, 4.9 %, obtained with a fourth order polynomial, is comparable to the 5.1 %
error obtained with a multi-layer neural network with sophisticated architecture
being trained and tested on the same data [6].
with polynomial classifiers of order q are summarized in table 1. Also listed is
the number of adjustable parameters, N. This quantity increases rapidly with q
and quickly reaches a level that is computationally intractable for algorithms that
explicitly compute each parameter [5]. Moreover, as N increases, the learning problem becomes grossly underdetermined: the number of training patterns (p = 600
for DB1 and p = 7300 for DB2) becomes very small compared to N. Nevertheless,
good generalization is achieved as shown by the experimental results listed in the
table. This is a consequence of the inherent regularization of the algorithm.
An important concern is the sensitivity of the maximum margin solution to the
presence of outliers in the training data. It is indeed important to remove undesired
outliers (such as meaningless or mislabeled patterns) to get best generalization
performance. Conversely, "good" outliers (such as examples of rare styles) must be
kept. Cleaning techniques have been developed based on the re-examination by a
human supervisor of those supporting patterns which result in the largest increase of
the margin when removed, and thus, are the most likely candidates for outliers [3].
In our experiments on DB2 with linear classifiers, the error rate on the test set
dropped from 15.2% to 10.5% after cleaning the training data (not the test data).
2
ALGORITHM DESIGN
The properties of the G P algorithm arise from merging two separate ideas: Training
in dual space, and minimizing the maximum loss. For large VC-dimension classifiers
($N \gg p$), the first idea reduces the number of effective parameters to be actually
computed from N to p. The second idea reduces it from p to m.
2.1
DUALITY
We seek a decision function for pattern vectors x of dimension n belonging to either
of two classes A and B. The input to the training algorithm is a set of p examples
Xi with labels Yi:
$$(x_1, y_1), (x_2, y_2), \ldots, (x_p, y_p), \quad (3)$$
where $y_k = 1$ if $x_k \in$ class A, and $y_k = -1$ if $x_k \in$ class B.
From these training examples the algorithm finds the parameters of the decision
function D(x) during a learning phase. After training, the classification of unknown
patterns is predicted according to the following rule:
$$x \in A \ \text{if} \ D(x) > 0; \qquad x \in B \ \text{otherwise}. \quad (4)$$
We limit ourselves to classifiers linear in their parameters, but not restricted to
linear dependences in their input components, such as Perceptrons and kernel-based
classifiers. Perceptrons [5] have a decision function defined as:
$$D(x) = w \cdot \varphi(x) + b = \sum_{i=1}^{N} w_i \varphi_i(x) + b, \quad (5)$$
where the $\varphi_i$ are predefined functions of $x$, and the $w_i$ and $b$ are the adjustable parameters of the decision function. This definition encompasses that of polynomial classifiers. In that particular case, the $\varphi_i$ are products of components of vector $x$ (see equation 2). Kernel-based classifiers have a decision function defined as:
$$D(x) = \sum_{k=1}^{p} \alpha_k K(x_k, x) + b. \quad (6)$$
The coefficients $\alpha_k$ and the bias $b$ are the parameters to be adjusted and the $x_k$ are the training patterns. The function $K$ is a predefined kernel, for example a potential function [7] or any Radial Basis Function (see for instance [8]).
Perceptrons and RBF's are often considered two very distinct approaches to classification. However, for a number of training algorithms, the resulting decision function
can be cast either in the form of equation (5) or (6). This has been pointed out
in the literature for the Perceptron and potential function algorithms [7], for the
polynomial classifiers trained with pseudo-inverse [9] and more recently for regularization algorithms and RBF's [8]. In those cases, Perceptrons and RBF's constitute
dual representations of the same decision function.
The duality principle can be understood simply in the case of Hebb's learning rule. The weight vector of a linear Perceptron ($\varphi_i(x) = x_i$), trained with Hebb's rule, is simply the average of all training patterns $x_k$, multiplied by their class membership polarity $y_k$:
$$w = \frac{1}{p} \sum_{k=1}^{p} y_k x_k.$$
Substituting this solution into equation (5), we obtain the dual representation
$$D(x) = w \cdot x + b = \frac{1}{p} \sum_{k=1}^{p} y_k \, x_k \cdot x + b.$$
The corresponding kernel classifier has kernel $K(x, x') = x \cdot x'$ and the dual parameters $\alpha_k$ are equal to $(1/p) y_k$.
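A tiny numerical illustration of this primal-dual equivalence (our own sketch, with arbitrary synthetic data):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 20, 5
X = rng.standard_normal((p, n))          # training patterns x_k
y = np.sign(rng.standard_normal(p))      # labels y_k in {-1, +1}
b = 0.1

w = (y[:, None] * X).mean(axis=0)        # Hebb: w = (1/p) sum_k y_k x_k

x_new = rng.standard_normal(n)
primal = w @ x_new + b                   # D(x) = w . x + b
dual = (y * (X @ x_new)).sum() / p + b   # sum_k (1/p) y_k K(x_k, x) + b
assert np.isclose(primal, dual)
```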
In general, a training algorithm for Perceptron classifiers admits a dual kernel representation if its solution is a linear combination of the training patterns in $\varphi$-space:
$$w = \sum_{k=1}^{p} \alpha_k \varphi(x_k). \quad (7)$$
Reciprocally, a kernel classifier admits a dual Perceptron representation if the kernel function possesses a finite (or infinite) expansion of the form:
$$K(x, x') = \sum_i \varphi_i(x) \, \varphi_i(x'). \quad (8)$$
Such is the case for instance for some symmetric kernels [10]. Examples of kernels
that we have been using include
$$\begin{aligned}
K(x, x') &= (x \cdot x' + 1)^q && \text{(polynomial of order } q\text{)},\\
K(x, x') &= \tanh(\gamma \, x \cdot x') && \text{(neural units)},\\
K(x, x') &= \exp(\gamma \, x \cdot x') - 1 && \text{(exponential)},\\
K(x, x') &= \exp(-\|x - x'\|^2 / \gamma) && \text{(Gaussian RBF)},\\
K(x, x') &= \exp(-\|x - x'\| / \gamma) && \text{(exponential RBF)},\\
K(x, x') &= (x \cdot x' + 1)^q \exp(-\|x - x'\| / \gamma) && \text{(mixed polynomial \& RBF)}. \quad (9)
\end{aligned}$$
These kernels have positive parameters (the integer $q$ or the real number $\gamma$) which can be determined with a Structural Risk Minimization or Cross-Validation procedure (see for instance [2]). More elaborate kernels incorporating known invariances of the data could be used also.
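For reference, the kernels in (9) are one-liners; a sketch of our own, where `gamma` and `q` play the roles of $\gamma$ and $q$ above:

```python
import numpy as np

def k_poly(x, xp, q):        return (x @ xp + 1.0) ** q
def k_tanh(x, xp, gamma):    return np.tanh(gamma * (x @ xp))
def k_exp(x, xp, gamma):     return np.exp(gamma * (x @ xp)) - 1.0
def k_rbf(x, xp, gamma):     return np.exp(-np.sum((x - xp) ** 2) / gamma)
def k_exp_rbf(x, xp, gamma): return np.exp(-np.linalg.norm(x - xp) / gamma)
def k_mixed(x, xp, q, gamma):
    return (x @ xp + 1.0) ** q * np.exp(-np.linalg.norm(x - xp) / gamma)
```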
The G P algorithm computes the maximum margin solution in the kernel representation. This is crucial for making the computation tractable when training very
large VC-dimension classifiers. Training a classifier in the kernel representation is
computationally advantageous when the dimension N of vectors w (or the VC-dimension N + 1) is large compared to the number of parameters $\alpha_k$, which equals
the number of training patterns p. This is always true if the kernel function possesses an infinite expansion (8). The experimental results listed in table 1 indicate
that this argument holds in practice even for low order polynomial expansions when
the dimension n of input space is sufficiently large.
2.2
MINIMIZING THE MAXIMUM LOSS
The margin, defined as the Euclidean distance between the decision boundary and the closest training patterns in $\varphi$-space, can be computed as
$$M = \min_k \frac{y_k D(x_k)}{\|w\|}. \quad (10)$$
The goal of the maximum margin training algorithm is to find the decision function $D(x)$ which maximizes $M$, that is the solution of the optimization problem
$$\max_w \min_k \frac{y_k D(x_k)}{\|w\|}. \quad (11)$$
The solution w of this problem depends only on those patterns which are on the
margin, i.e. the ones that are closest to the decision boundary, called supporting
patterns. It can be shown that w can indeed be represented as a linear combination
of the supporting patterns in φ-space [4, 2, 3] (see section 2.3).
In the classical framework of loss minimization, problem (11) is equivalent to minimizing (over w) the maximum loss. The loss function is defined as

ℓ(X_k) = −Y_k D(X_k) / ‖w‖ .

This "minimax" approach contrasts with training algorithms which minimize the average loss. For example, backpropagation minimizes the mean squared error (MSE), which is the average of

ℓ(X_k) = (D(X_k) − Y_k)² .
The benefit of minimax algorithms is that the solution is a function only of a
restricted number of training patterns, namely the supporting patterns. This results
in high computational efficiency in those cases when the number m of supporting
patterns is small compared to both the total number of training patterns p and the
dimension N of φ-space.
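A minimal sketch of (10) for a linear decision function D(x) = w · x + b; the supporting patterns are simply the training points attaining the minimum distance (the tolerance below is an illustrative choice).

```python
import numpy as np

def margin_and_support(w, b, X, Y, tol=1e-9):
    # Signed distances Y_k D(X_k) / ||w|| from (10); M is their minimum.
    dist = Y * (X @ w + b) / np.linalg.norm(w)
    M = dist.min()
    support = np.flatnonzero(dist <= M + tol)   # patterns lying on the margin
    return M, support
```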
2.3 THE GENERALIZED PORTRAIT
The GP algorithm consists in formulating the problem (11) in the dual α-space as the quadratic programming problem of maximizing the cost function

J(α, b) = Σ_{k=1}^{p} α_k (1 − b Y_k) − (1/2) α · H · α ,

under the constraints α_k ≥ 0 [4, 2]. The p × p square matrix H has elements:

H_kl = Y_k Y_l K(X_k, X_l) ,
where K(x, x') is a kernel, such as the ones proposed in (9), which can be expanded as in (8). Examples are shown in Figure 2. K(x, x') is not restricted to the dot product K(x, x') = x · x' as in the original formulation of the GP algorithm [2].
In order for a unique solution to exist, H must be positive definite. The bias b can be either fixed or optimized together with the parameters α_k. The latter case introduces another set of constraints: Σ_k Y_k α_k = 0 [4].
The quadratic programming problem thus defined can be solved efficiently by standard numerical methods [11]. Numerical computation can be further reduced by processing iteratively small chunks of data [2]. The computational time is linear in the dimension n of x-space (not the dimension N of φ-space) and in the number p of training examples and polynomial in the number m < min(N + 1, p) of supporting patterns.
Figure 2: Non-linear separations. Decision boundaries obtained by maximizing the margin in φ-space (see text). The grey shading indicates the margin area projected back to x-space. The supporting patterns (white) lie on the margin. (1) Polynomial classifier of order two (sigma-pi unit network), with kernel K(x, x') = (x · x' + 1)². (2) Kernel classifier (RBF) with kernel K(x, x') = exp(−‖x − x'‖ / 10).
It can be theoretically proven that it is a polynomial in m of order lower than 10, but experimentally an order 2 was observed.
Only the supporting patterns appear in the solution with non-zero weight α_k*:

D(x) = Σ_k Y_k α_k* K(X_k, x) + b ,    (12)

Substituting (8) in D(x), we obtain:

w = Σ_k Y_k α_k* φ(X_k) .    (13)
Using the kernel representation, with a factorized kernel (such as (9)), the classification time is linear in n (not N) and in m (not p).
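For completeness, the following is a compact sketch of the dual problem of this section solved with a generic constrained optimizer; scipy's SLSQP is only a convenient stand-in for the standard numerical methods cited above, and the chunking strategy of [2] is not reproduced. It recovers the supporting patterns and the decision function (12).

```python
import numpy as np
from scipy.optimize import minimize

def train_max_margin(X, Y, kernel):
    p = len(Y)
    K = np.array([[kernel(X[i], X[j]) for j in range(p)] for i in range(p)])
    H = np.outer(Y, Y) * K                       # H_kl = Y_k Y_l K(X_k, X_l)
    # Maximize sum_k alpha_k - 0.5 alpha^T H alpha with b optimized,
    # which adds the constraint sum_k Y_k alpha_k = 0 (see text).
    obj = lambda a: 0.5 * a @ H @ a - a.sum()
    jac = lambda a: H @ a - 1.0
    cons = [{'type': 'eq', 'fun': lambda a: Y @ a}]
    res = minimize(obj, np.zeros(p), jac=jac, bounds=[(0, None)] * p,
                   constraints=cons, method='SLSQP')
    alpha = res.x
    sv = np.flatnonzero(alpha > 1e-8)            # supporting patterns
    # Bias from any supporting pattern, assuming the canonical
    # normalization Y_k D(X_k) = 1 on the margin.
    k = sv[0]
    bias = Y[k] - np.sum(Y[sv] * alpha[sv] * K[sv, k])
    D = lambda x: np.sum(Y[sv] * alpha[sv] *
                         np.array([kernel(X[j], x) for j in sv])) + bias
    return alpha, bias, D
```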
3 CONCLUSIONS
We presented an algorithm to train, in high dimensional spaces, polynomial classifiers and Radial Basis Functions with remarkable computational and generalization performance. The algorithm seeks the solution with the largest possible margin on both sides of the decision boundary. The properties of the algorithm arise from the fact that the solution is a function only of a small number of supporting patterns, namely those training examples that are closest to the decision boundary. The generalization error of the maximum margin classifier is bounded by the ratio
of the number of linearly independent supporting patterns and the number of training examples. This bound is tighter than a bound based on the VC-dimension of the classifier family. For further improvement of the generalization error, outliers corresponding to supporting patterns with large α_k can be eliminated automatically or with the assistance of a supervisor. This feature suggests other interesting applications of the maximum margin algorithm for database cleaning.
Acknowledgements
We wish to thank our colleagues at UC Berkeley and AT&T Bell Laboratories for
many suggestions and stimulating discussions. Comments by L. Bottou, C. Cortes,
S. Sanders, S. Solla, and A. Zakhor are gratefully acknowledged. We are especially indebted to R. Baldick and D. Hochbaum for investigating the polynomial convergence
property, S. Hein for providing the code for constrained nonlinear optimization, and
D. Haussler and M. Warmuth for help and advice regarding performance bounds.
References
[1] I. Guyon, V. Vapnik, B. Boser, L. Bottou, and S. A. Solla. Structural risk minimization for character recognition. In J. Moody et al., editors, NIPS 4, San Mateo, CA, 1992. Morgan Kaufmann.
[2] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer, New York, 1982.
[3] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144–152, Pittsburgh, July 1992. ACM.
[4] P. F. Lambert. Designing pattern recognizers with extremal paradigm information. In S. Watanabe, editor, Methodologies of Pattern Recognition, pages 359–391. Academic Press, 1969.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley and Sons, 1973.
[6] Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[7] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
[8] T. Poggio and F. Girosi. Regularization algorithms for learning that are equivalent to multilayer networks. Science, 247:978–982, February 1990.
[9] T. Poggio. On optimal nonlinear associative recall. Biol. Cybern., 19:201, 1975.
[10] G. F. Roach. Green's Functions. Cambridge University Press, Cambridge, 1982 (second ed.).
[11] D. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, 1984.
Statistical Inference for Pairwise Graphical Models
Using Score Matching
Ming Yu
[email protected]
Varun Gupta
[email protected]
Mladen Kolar*
[email protected]
University of Chicago Booth School of Business
Chicago, IL 60637
Abstract
Probabilistic graphical models have been widely used to model complex systems
and aid scientific discoveries. As a result, there is a large body of literature
focused on consistent model selection. However, scientists are often interested in
understanding uncertainty associated with the estimated parameters, which current
literature has not addressed thoroughly. In this paper, we propose a novel estimator
for edge parameters for pairwise graphical models based on the Hyvärinen scoring rule. The Hyvärinen scoring rule is especially useful in cases where the normalizing constant cannot be obtained efficiently in a closed form. We prove that the estimator is √n-consistent and asymptotically Normal. This result allows us to construct
confidence intervals for edge parameters, as well as hypothesis tests. We establish
our results under conditions that are typically assumed in the literature for consistent
estimation. However, we do not require that the estimator consistently recovers
the graph structure. In particular, we prove that the asymptotic distribution of the
estimator is robust to model selection mistakes and uniformly valid for a large
number of data-generating processes. We illustrate validity of our estimator through
extensive simulation studies.
1 Introduction
Undirected probabilistic graphical models are widely used to explore and represent dependencies
between random variables. They have been used in areas ranging from computational biology to
neuroscience and finance. See [7] for a recent review. An undirected probabilistic graphical model consists of an undirected graph G = (V, E), where V = {1, . . . , p} is the vertex set and E ⊆ V × V is the edge set, and a random vector X = (X_1, . . . , X_p) ∈ 𝒳^p ⊆ R^p. Each coordinate of the random vector X is associated with a vertex in V and the graph structure encodes the conditional independence assumptions underlying the distribution of X. In particular, X_a and X_b are conditionally independent given all the other variables if and only if (a, b) ∉ E, that is, the nodes a and b are not adjacent in G.
One of the fundamental problems in statistics is that of learning the structure of G from i.i.d. samples
from X and quantifying uncertainty of the estimated structure.
* This work is supported by an IBM Corporation Faculty Research Fund at the University of Chicago Booth School of Business. This work was completed in part with resources provided by the University of Chicago Research Computing Center.
We consider a basic class of pairwise interaction graphical models with densities belonging to an exponential family P = {p_θ(x) | θ ∈ Θ} with natural parameter space Θ and

log p_θ(x) = Σ_{a∈V} Σ_{k∈[K]} θ_a^{(k)} t_a^{(k)}(x_a) + Σ_{(a,b)∈E} Σ_{l∈[L]} θ_ab^{(l)} t_ab^{(l)}(x_a, x_b) − Ψ(θ) + Σ_{a∈V} h_a(x_a),   x ∈ 𝒳 ⊆ R^p.    (1)

The functions t_a^{(k)}, t_ab^{(l)} are sufficient statistics and Ψ(θ) is the log-partition function. In this paper the support of the densities is either 𝒳 = R^p or 𝒳 = R^p_+ and P is dominated by the Lebesgue measure on R^p. To simplify the notation, we will write log p_θ(x) = θ^T t(x) − Ψ(θ) + h(x), where θ ∈ R^s and t(x) : R^p → R^s with s = (p choose 2)·L + p·K. The natural parameter space has the form Θ = {θ ∈ R^s | Ψ(θ) = log ∫_𝒳 exp(θ^T t(x)) dx < ∞}. Under the model in (1), there is no edge between a and b in the corresponding conditional independence graph if and only if θ_ab^{(1)} = · · · = θ_ab^{(L)} = 0. The model in (1) encompasses a large number of graphical models studied in the literature (see, for example, [7, 15] and references therein).
The main focus of the paper is on construction of an asymptotically normal estimator for parameters
in (1) and performing (asymptotic) inference for them. We illustrate a procedure for construction of
valid confidence intervals that have the nominal coverage and propose a statistical test for existence
of edges in the graphical model with nominal size. Our inference results are robust to model selection
mistakes, which commonly occur in the ultra-high dimensional setting. Results in the paper complement
existing literature, which is focused on consistent model selection and parameter recovery, as we
review in the next section.
We use the Hyvärinen scoring rule to estimate θ, as in [15]. However, rather than focusing on consistent
model selection we modify the regularized score matching procedure to construct a regular estimator
that is robust to model selection mistakes and show how to use its asymptotic distribution for
statistical inference. Compared to previous work on high-dimensional inference in graphical models
[23, 2, 29, 11], this is the first work on inference in models where computing the normalizing constant
is intractable.
Related work. Our work straddles two areas of statistical learning which have attracted significant
research of late: model selection and estimation in high-dimensional graphical models, and highdimensional inference. Our approach to inference for high-dimensional graphical models is based on
regularized score matching. We briefly review the literature most relevant to our work, and refer the
reader to a recent review article for a comprehensive overview [7].
Graphical model selection: Much of the research effort on graphical model selection has been done
under the assumption that the data obeys the law X ∼ N(0, Σ) (Gaussian graphical models), in which case the edge set E of the graph G is encoded by the non-zero elements of the precision matrix Ω = Σ^{-1}. More recently, [31] studied estimation of graphical models under the assumption that the
node conditional distributions belong to an exponential family distribution (including, for example,
Bernoulli, Gaussian, Poisson and exponential) via regularized
likelihood (see also [13, 6, 30] and references therein). In our paper, we construct a novel √n-consistent estimator of a parameter
corresponding to a particular edge in (1). As we mentioned earlier, this is the first procedure that can
obtain a parametric rate of convergence for an edge parameter in a graphical model where computing
the normalizing constant is intractable.
High-dimensional inference: Methods for construction of confidence intervals for low dimensional
parameters in high-dimensional linear and generalized linear models, as well as hypothesis tests,
have been developed in [32, 4, 28, 12]. These methods construct honest, uniformly valid confidence
intervals
and hypothesis tests based on a first stage ℓ₁-penalized estimator. [16, 23, 5] construct √n-consistent estimators for elements of the precision matrix Ω under a Gaussian assumption.
We contribute to the literature on high dimensional inference by demonstrating how to construct
estimators that are robust and uniformly valid under more general distributional assumptions than
Gaussian.
Score Matching estimators: Score matching estimators were first proposed in [9, 10]. Score
matching offers a computational advantage when the normalization constant is not available in
closed form, making likelihood-based approaches intractable. Despite its power, there have not been
any results on inference in high-dimensional models using score matching. In [8], the authors use
score matching for inference of Gaussian linear models (and hence for Gaussian graphical models) in
the low-dimensional setting. In [15], the authors use ℓ₁-regularized score matching to develop consistent
estimators for graphical models in the high-dimensional setting. We present the first high-dimensional
inference results using score matching.
2 Score Matching
Let X be a random variable with values in 𝒳, and let P be a family of distributions over 𝒳. A scoring rule S(x, Q) is a real valued function that quantifies the accuracy of Q ∈ P upon observing a realized value of X, x ∈ 𝒳. There are a large number of scoring rules that correspond to different decision problems [20]. Given n independent realizations of X, {x_i}_{i∈[n]}, one finds the optimal score estimator Q̂ ∈ P that minimizes the empirical score

Q̂ = arg min_{Q∈P} E_n[S(x_i, Q)] .    (2)
When 𝒳 = R^p and P consists of twice differentiable densities with respect to the Lebesgue measure, the Hyvärinen scoring rule [9] is given as

S(x, Q) = (1/2) ‖∇ log q(x)‖₂² + Δ log q(x)    (3)

where q is the density of Q with respect to the Lebesgue measure on 𝒳, ∇f(x) = {∂/(∂x_j) f(x)}_{j∈[p]} denotes the gradient, and Δf(x) = Σ_{j∈[p]} ∂²/(∂x_j²) f(x) the Laplacian operator on R^p. This
scoring rule is convenient for learning models that are specified in an unnormalized fashion or whose
normalizing constant is difficult to compute. The score matching rule is proper, that is, E_{X∼P} S(X, Q) is minimized over P at Q = P. Under suitable regularity conditions, the Fisher divergence between P, Q ∈ P, D(P, Q) = ∫ p(x) ‖∇ log q(x) − ∇ log p(x)‖₂² dx, where p is the density of P, is induced by the score matching rule [9]. For a parametric exponential family P = {p_θ | θ ∈ Θ} with densities given in (1), minimizing (2) can be done in closed form [9, 8]. An estimator θ̂ obtained in this way can be shown to be asymptotically consistent [9]; however, in general it will not be efficient [8].
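As an illustration of why (3) sidesteps the normalizing constant, here is the Hyvärinen score of a multivariate Gaussian evaluated in closed form; the Gaussian example is ours, chosen because ∇ log q and Δ log q are explicit.

```python
import numpy as np

# For N(mu, Omega^{-1}), log q(x) = -0.5 (x - mu)^T Omega (x - mu) + const,
# so grad log q(x) = -Omega (x - mu) and the Laplacian is -trace(Omega);
# the normalizing constant never appears.

def hyvarinen_score_gaussian(x, mu, Omega):
    grad = -Omega @ (x - mu)
    laplacian = -np.trace(Omega)
    return 0.5 * np.dot(grad, grad) + laplacian

rng = np.random.default_rng(0)
p = 4
A = rng.standard_normal((p, p))
Omega = A @ A.T + p * np.eye(p)   # a positive definite precision matrix
mu = rng.standard_normal(p)
x = rng.standard_normal(p)
print(hyvarinen_score_gaussian(x, mu, Omega))
```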
Hyvärinen [10] proposed a generalization of the score matching approach to the case of non-negative data. When 𝒳 = R^p_+ the scoring rule is given as

S₊(x, Q) = Σ_{a∈V} [ 2 x_a ∂ log q(x)/∂x_a + x_a² ∂² log q(x)/∂x_a² + (1/2) x_a² (∂ log q(x)/∂x_a)² ] .    (4)

For exponential families, the non-negative score matching loss again can be obtained in closed form and the estimator is consistent and asymptotically normal under suitable conditions [10].
In the context of probabilistic graphical models, [8] studied score matching to learn Gaussian graphical models with symmetry constraints. [15] proposed a regularized score matching procedure to learn the conditional independence graph in a high-dimensional setting by minimizing E_n[ℓ(x_i, θ)] + λ‖θ‖₁, where the loss ℓ(x_i, θ) is either S(x_i, Q_θ) or S₊(x_i, Q_θ). For Gaussian models, ℓ₁-norm regularized score matching is a simple but state-of-the-art method, which coincides with the method in [17]. Extending the work on estimation of infinite-dimensional exponential families [26], [27] study learning the structure of nonparametric probabilistic graphical models using a score matching estimator. In the next section, we present a new estimator for components of θ in (1) that is consistent and asymptotically normal, building on [15] and [4].
3 Methodology
In this section, we propose a procedure that constructs a √n-consistent estimator of an element θ_ab of θ. Our procedure is based on three steps that we describe after introducing some additional notation. We start by describing the procedure for the case where 𝒳 = R^p.
For fixed indices a, b ∈ [p], let q_θ^ab(x) := q_θ^ab(x_a, x_b | x_{−ab}) be the conditional density of (X_a, X_b) given X_{−ab} = x_{−ab}. In particular,

log q_θ^ab(x) = ⟨θ^ab, φ(x)⟩ − Ψ_ab(θ, x_{−ab}) + h^ab(x)

where θ^ab ∈ R^{s'} is the part of the vector θ corresponding to {θ_a^{(k)}, θ_b^{(k)}}_{k∈[K]}, {θ_ac^{(l)}, θ_bc^{(l)}}_{l∈[L], c∈−ab}, and φ(x) = φ^ab(x) ∈ R^{s'} is the corresponding vector of sufficient statistics with dimension s' = 2K + 2(p − 2)L. Here Ψ_ab(θ, x_{−ab}) is the log-partition function for the conditional distribution and h^ab(x) = h_a(x_a) + h_b(x_b). Let ∇_ab f(x) = ((∂/∂x_a) f(x), (∂/∂x_b) f(x))^T ∈ R² be the gradient with respect to x_a and x_b and Δ_ab f(x) = ((∂²/∂x_a²) + (∂²/∂x_b²)) f(x).
With this notation, we introduce the following scoring rule

S^ab(x, θ) = (1/2) ‖∇_ab log q_θ^ab(x)‖₂² + Δ_ab log q_θ^ab(x) = (1/2) θ^T Γ(x) θ + θ^T g(x),    (5)

where

Γ(x) = φ₁(x) φ₁(x)^T + φ₂(x) φ₂(x)^T

and

g(x) = φ₁(x) h₁^ab(x) + φ₂(x) h₂^ab(x) + Δ_ab φ(x)

with φ₁ = (∂/∂x_a) φ, φ₂ = (∂/∂x_b) φ, h₁^ab = (∂/∂x_a) h^ab, and h₂^ab = (∂/∂x_b) h^ab. This scoring rule is related to the one in (3); however, rather than using the density q_θ in evaluating the parameter vector, we only consider the conditional density q_θ^ab. We will use this conditional scoring rule to create an asymptotically normal estimator of an element θ_ab. Our motivation for using this estimator comes from the fact that the parameter θ_ab can be identified from the conditional distribution of (X_a, X_b) | X_{M_ab}, where M_ab := {c | (a, c) ∈ E or (b, c) ∈ E} is the Markov blanket of (X_a, X_b). Furthermore, the optimization problems arising in steps 1–3 below can be solved much more efficiently, as the problems are of much smaller dimension.
We are now ready to describe our procedure for estimating θ_ab, which proceeds in three steps.

Step 1: We find a pilot estimator of θ^ab by solving the following program

θ̂^ab = arg min_{θ∈R^{s'}} E_n[S^ab(x_i, θ)] + λ₁ ‖θ‖₁    (6)

where λ₁ is a tuning parameter. Let M̂₁ = M(θ̂^ab) := {(c, d) | θ̂^ab_cd ≠ 0}.
Since we are after an asymptotically normal estimator of θ_ab, one may think that it is sufficient to find θ̃^ab = arg min{E_n[S^ab(x_i, θ)] | M(θ) ⊆ M̂₁} and appeal to the results of [21]. Unfortunately, this is not the case. Since θ̃ is obtained via a model selection procedure, it is irregular and its asymptotic distribution cannot be estimated [14, 22]. Therefore, we proceed to create a regular estimator of θ_ab in steps 2 and 3. The idea is to create an estimator θ̃_ab that is insensitive to first order perturbations of the other components of θ̃^ab, which we consider as nuisance components. The idea of creating an estimator that is robust to perturbations of the nuisance has recently been used in [4]; however, the approach goes back to the work of [19].
Step 2: Let γ̂^ab be a minimizer of

(1/2) E_n[(φ_{1,ab}(x_i) − φ_{1,−ab}(x_i)^T γ)² + (φ_{2,ab}(x_i) − φ_{2,−ab}(x_i)^T γ)²] + λ₂ ‖γ‖₁ .    (7)

The vector (1, −γ̂^{ab,T})^T approximately computes a row of the inverse of the Hessian in (6).

Step 3: Let M̃ = {(a, b)} ∪ M̂₁ ∪ M(γ̂^ab). We obtain our estimator as a solution to the following program

θ̃^ab = arg min E_n[S^ab(x_i, θ)]   s.t. M(θ) ⊆ M̃ .    (8)
Motivation for this procedure will be clear from the proof of Theorem 1 given in the next section.
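To make steps 1–3 concrete, the following is a minimal numerical sketch. It assumes the empirical moments E_n[Γ(x_i)] and E_n[g(x_i)] for (6), and the empirical Gram matrix and cross moments of φ_{·,−ab} for (7), have been precomputed from data; the ISTA solver, thresholds, and index bookkeeping are illustrative choices, not the authors' implementation.

```python
import numpy as np

def l1_prox_quadratic(A, b, lam, n_iter=1000):
    # Minimize 0.5 * t^T A t + b^T t + lam * ||t||_1 by proximal gradient
    # descent (ISTA); A is a symmetric PSD empirical Hessian.
    step = 1.0 / (np.linalg.eigvalsh(A).max() + 1e-12)
    t = np.zeros(A.shape[0])
    for _ in range(n_iter):
        z = t - step * (A @ t + b)
        t = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return t

def three_step_estimator(Gamma_bar, g_bar, C_bar, c_bar, ab, lam1, lam2):
    # Step 1, program (6): E_n[S^ab(x_i, t)] = 0.5 t^T Gamma_bar t + t^T g_bar.
    theta_hat = l1_prox_quadratic(Gamma_bar, g_bar, lam1)
    M1 = set(np.flatnonzero(np.abs(theta_hat) > 1e-10))
    # Step 2, program (7): lasso of phi_{.,ab} on phi_{.,-ab}; C_bar and
    # c_bar are its empirical Gram matrix and cross moments.
    gamma_hat = l1_prox_quadratic(C_bar, -c_bar, lam2)
    # gamma's support lives in the reduced index set without coordinate ab
    M_gamma = {j if j < ab else j + 1
               for j in np.flatnonzero(np.abs(gamma_hat) > 1e-10)}
    # Step 3, program (8): unpenalized refit on the merged support.
    M = sorted({ab} | M1 | M_gamma)
    theta_tilde = np.zeros_like(theta_hat)
    theta_tilde[M] = np.linalg.solve(Gamma_bar[np.ix_(M, M)], -g_bar[M])
    return theta_tilde, M
```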
Extension to non-negative data. For non-negative data, the procedure is slightly different. Instead of (5), as shown in [15], we define a different scoring rule S₊^ab(x, θ) = (1/2) θ^T Γ₊(x) θ + θ^T g₊(x) with

Γ₊(x) = x_a² · φ₁(x) φ₁(x)^T + x_b² · φ₂(x) φ₂(x)^T

and

g₊(x) = φ₁(x) h₁^ab(x) + φ₂(x) h₂^ab(x) + x_a² φ₁₁(x) + x_b² φ₂₂(x) + 2 x_a φ₁(x) + 2 x_b φ₂(x).

Here φ₁₁ = (∂²/∂x_a²) φ and φ₂₂ = (∂²/∂x_b²) φ. Now we can define φ̃₁ = x_a φ₁ and φ̃₂ = x_b φ₂. Then Γ₊(x) = φ̃₁(x) φ̃₁(x)^T + φ̃₂(x) φ̃₂(x)^T, which is of the same form as (5) with φ̃₁ and φ̃₂ replacing φ₁ and φ₂, respectively. Thus our three step procedure for non-negative data follows as before.
4 Asymptotic Normality of the Estimator
In this section, we outline the main theoretical properties of our procedure. We start by providing
high-level conditions that allow us to establish properties of each step in our procedure.
Assumption M. We are given n i.i.d. samples {x_i}_{i∈[n]} from p_{θ*} of the form in (1). The parameter vector θ* is sparse, with |M(θ^{ab,*})| ≪ n. Let

γ^{ab,*} = arg min E[(φ_{1,ab}(x_i) − φ_{1,−ab}(x_i)^T γ)² + (φ_{2,ab}(x_i) − φ_{2,−ab}(x_i)^T γ)²]    (9)

and ε_{1i} = φ_{1,ab}(x_i) − φ_{1,−ab}(x_i)^T γ^{ab,*} and ε_{2i} = φ_{2,ab}(x_i) − φ_{2,−ab}(x_i)^T γ^{ab,*} for i ∈ [n]. The vector γ^{ab,*} is sparse with |M(γ^{ab,*})| ≪ n. Let m = |M(θ^{ab,*})| ∨ |M(γ^{ab,*})|.
The assumption M supposes that the parameter to be estimated is sparse, which makes estimation in the high-dimensional setting feasible. An extension to an approximately sparse parameter is possible, but technical. One of the benefits of using the conditional score to learn parameters of the model is that the sample size will only depend on the size of M(θ^{ab,*}) and not on the sparsity of the whole vector θ* as in [15]. The second part of the assumption states that the inverse of the population Hessian is approximately sparse, which is a reasonable assumption since the Markov blanket of (X_a, X_b) is small under the sparsity assumption on θ^{ab,*}.
Our next condition assumes that the Hessian in (6) and (7) is well conditioned. Let κ₋(s, A) = inf{δ^T A δ / ‖δ‖₂² | 1 ≤ ‖δ‖₀ ≤ s} and κ₊(s, A) = sup{δ^T A δ / ‖δ‖₂² | 1 ≤ ‖δ‖₀ ≤ s} denote the minimal and maximal s-sparse eigenvalues of a semi-definite matrix A, respectively.

Assumption SE. The event

E_SE = { κ_min ≤ κ₋(m · log n, E_n[Γ(x_i)]) ≤ κ₊(m · log n, E_n[Γ(x_i)]) ≤ κ_max }

holds with probability 1 − δ_SE, where 0 < κ_min ≤ κ_max < ∞.
We choose to impose the sparse eigenvalue condition directly on E_n[Γ(x_i)] rather than on the population quantity E[Γ(x_i)]. It is well known that the condition SE holds for a large number of models. See for example [24] and specifically [31] for exponential family graphical models.

Let r_j^θ = ‖θ̂^ab − θ^{ab,*}‖_j and r_j^γ = ‖γ̂^ab − γ^{ab,*}‖_j, for j ∈ {1, 2}, be the rates of estimation in steps 1 and 2. Under the assumption SE, on the event E_θ = {‖E_n[Γ(x_i) θ^{ab,*} + g(x_i)]‖_∞ ≤ λ₁/2} we have that r_1^θ ≤ c₁ m λ₁ / κ_min and r_2^θ ≤ c₂ √m λ₁ / κ_min. Similarly, on the event E_γ = {‖E_n[ε_{1i} φ_{1,−ab}(x_i) + ε_{2i} φ_{2,−ab}(x_i)]‖_∞ ≤ λ₂/2} we have that r_1^γ ≤ c₁ m λ₂ / κ_min and r_2^γ ≤ c₂ √m λ₂ / κ_min, using results of [18]. Again, one needs to verify that the two events hold with high probability for the model at hand. However, this is a routine calculation under suitable tail assumptions. See for example Lemma 9 in [31].

The following result establishes a Bahadur representation for θ̃^ab_ab.
Theorem 1. Suppose that assumptions M and SE hold. Define w* with w*_ab = 1 and w*_{−ab} = −γ^{ab,*}, where γ^{ab,*} is given in the assumption M. On the event E_γ ∩ E_θ, we have that

√n (θ̃^ab_ab − θ*_ab) = −σ̂_n^{−1} · √n E_n[w^{*,T}(Γ(x_i) θ^{ab,*} + g(x_i))] + O(κ_max² κ_min^{−4} · √n λ² m),    (10)

where λ = λ₁ ∨ λ₂ and σ̂_n = E_n[ε_{1i} φ_{1,ab}(x_i) + ε_{2i} φ_{2,ab}(x_i)].
Theorem 1 is deterministic in nature. It establishes a representation that holds on the event E_γ ∩ E_θ ∩ E_SE, which in many cases holds with overwhelming probability. We will show that under suitable conditions the first term converges to a normal distribution. The following is a regularity condition needed even in a low dimensional setting for asymptotic normality [8].

Assumption R. E_{q^ab}[‖Γ(X_a, X_b, x_{−ab}) θ^{ab,*}‖₂] and E_{q^ab}[‖g(X_a, X_b, x_{−ab})‖₂] are finite for all values of x_{−ab} in the domain.

Theorem 1 and Lemma 9 together give the following corollary:
Corollary 2. Suppose that the conditions of Theorem 1 hold. In addition, suppose that the assumption R holds, (m log p)²/n = o(1) and P(E_γ ∩ E_θ ∩ E_SE) → 1. Then √n (θ̃^ab_ab − θ*_ab) →_D N(0, V) + o_p(1), where V = (E[σ̂_n])^{−2} · Var(w^{*,T}(Γ(x_i) θ^{ab,*} + g(x_i))) and σ̂_n is as in Theorem 1.
We see that the variance V depends on the true θ^ab and γ^ab, which are unknown. In practice, we estimate V using the following consistent estimator V̂:

V̂ = e_ab^T (E_n[Γ(x_i)]_M̃)^{−1} E_n[(Γ(x_i) θ̃^ab + g(x_i)) (Γ(x_i) θ̃^ab + g(x_i))^T]_M̃ (E_n[Γ(x_i)]_M̃)^{−1} e_ab ,
where e_ab is a canonical vector with 1 in the position of element ab. Using this estimate, we can construct a confidence interval with asymptotically nominal coverage. In particular,

lim_{n→∞} sup_{θ*∈Θ} P_{θ*}( θ*_ab ∈ [ θ̃^ab_ab ± z_{α/2} √(V̂/n) ] ) = 1 − α + o(1).
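A sketch of the plug-in variance V̂ and the resulting interval, reusing the outputs of the three-step sketch above; `Gamma_i` is assumed to be an (n, s', s') array of per-sample Γ(x_i) matrices and `g_i` an (n, s') array of g(x_i) vectors.

```python
import numpy as np
from scipy.stats import norm

def confidence_interval(Gamma_i, g_i, theta_tilde, M, ab, alpha=0.05):
    n = Gamma_i.shape[0]
    Gamma_bar_M = Gamma_i[:, M, :][:, :, M].mean(axis=0)
    # residual-like vectors Gamma(x_i) theta_tilde + g(x_i), restricted to M
    r = np.einsum('ijk,k->ij', Gamma_i, theta_tilde) + g_i
    r_M = r[:, M]
    S = (r_M[:, :, None] * r_M[:, None, :]).mean(axis=0)  # E_n[r r^T] on M
    H_inv = np.linalg.inv(Gamma_bar_M)
    e = np.zeros(len(M))
    e[M.index(ab)] = 1.0                                  # canonical vector e_ab
    V_hat = e @ H_inv @ S @ H_inv @ e
    half = norm.ppf(1 - alpha / 2) * np.sqrt(V_hat / n)
    center = theta_tilde[ab]
    return center - half, center + half
```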
In the next section, we outline the proof of Theorem 1. Proofs of other technical results are relegated to the appendix.
4.1 Proof of Theorem 1
We first introduce some auxiliary estimates. Let γ̃^ab be a minimizer of the following constrained problem

min_γ E_n[(φ_{1,ab}(x_i) − φ_{1,−ab}(x_i)^T γ)² + (φ_{2,ab}(x_i) − φ_{2,−ab}(x_i)^T γ)²]   s.t. M(γ) ⊆ M̃    (11)

where M̃ is defined in step 3 of the procedure. Essentially, γ̃^ab is the refitted estimator from step 2 constrained to have support on M̃. Let w̃ ∈ R^{s'} with w̃_ab = 1, w̃_{−ab} equal to −γ̃^ab on M̃, and zero elsewhere. The solution θ̃^ab satisfies the first order optimality condition E_n[Γ(x_i) θ̃^ab + g(x_i)]_M̃ = 0. Multiplying by w̃, it follows that
w̃^T ( E_n[Γ(x_i)] θ̃^ab + E_n[g(x_i)] )
= (w̃ − w*)^T E_n[Γ(x_i)] (θ̃^ab − θ^{ab,*}) + (w̃ − w*)^T E_n[Γ(x_i) θ^{ab,*} + g(x_i)]
+ w^{*,T} E_n[Γ(x_i)] (θ̃^ab − θ^{ab,*}) + w^{*,T} E_n[Γ(x_i) θ^{ab,*} + g(x_i)] ≜ L₁ + L₂ + L₃ + L₄ = 0.    (12)
From Lemma 6 and Lemma 7, we have that |L₁ + L₂| ≤ C · κ_max² κ_min^{−4} · λ² m. Using Lemma 8, the term L₃ can be written as E_n[ε_{1i} φ_{1,ab}(x_i) + ε_{2i} φ_{2,ab}(x_i)] (θ̃^ab_ab − θ^{ab,*}_ab) + O(κ_max^{1/2} κ_min^{−2} λ² m). Putting all the pieces together completes the proof.
5 Synthetic Datasets
In this section we illustrate finite sample properties of our inference procedure on data simulated
from three different Exponential family distributions. The first two examples involve Gaussian
node-conditional distributions, for which we use regularized score matching. For the third setting
where the node-conditional distributions follow an Exponential distribution, we use regularized
non-negative score matching procedure. In each example, we report the mean coverage rate of 95%
confidence intervals for several coefficients averaged over 500 independent simulation runs.
Gaussian Graphical Model. We first consider the simplest case of a Gaussian graphical model. The data is generated according to X ∼ N(0, Σ). We denote the precision matrix (the inverse of the covariance matrix) by Ω = Σ^{−1} = (ω_ab).

For the experiment, we set the diagonal entries of Ω as ω_jj = 1, and we set the coefficients of the 4 nearest neighbor lattice graph according to ω_{j,j−1} = ω_{j−1,j} = 0.5 and ω_{j,j−2} = ω_{j−2,j} = 0.3. We set the sample size n = 300. Table 1 shows the empirical coverage rate for different choices of the number of nodes p for four chosen coefficients. As is evident, our inference procedure performs remarkably well for the Gaussian graphical model studied.
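A short sketch of this data-generating process; the banded lattice precision matrix with the values above happens to be positive definite, so sampling is direct.

```python
import numpy as np

def lattice_precision(p, w1=0.5, w2=0.3):
    # 4-nearest-neighbor lattice: unit diagonal, w1 on the first
    # off-diagonals, w2 on the second, as in the experiment above.
    Omega = np.eye(p)
    for j in range(p - 1):
        Omega[j, j + 1] = Omega[j + 1, j] = w1
    for j in range(p - 2):
        Omega[j, j + 2] = Omega[j + 2, j] = w2
    return Omega

p, n = 50, 300
Omega = lattice_precision(p)
assert np.linalg.eigvalsh(Omega).min() > 0   # valid precision matrix
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)
```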
Normal Conditionals. Our second synthetic dataset is sampled from the following exponential family distribution: q(x | B, β, β^{(2)}) ∝ exp{ Σ_{j≠k} β_{jk} x_j² x_k² + Σ_{j=1}^p β_j^{(2)} x_j² + Σ_{j=1}^p β_j x_j }, where β = (β₁, . . . , β_p) and β^{(2)} = (β₁^{(2)}, . . . , β_p^{(2)}) are p-dimensional vectors, and B = {β_{jk}} is a symmetric interaction matrix with diagonal entries set to 0. The above distribution is a special case of a class of exponential family distributions with normal conditionals, and densities that need not be unimodal [1]. This family is intriguing from the perspective of graphical modeling as, in contrast to the Gaussian case, conditional dependence may also express itself in the variances.
Table 1: Empirical Coverage for Gaussian Graphical Model

           ω_{1,2}   ω_{1,3}   ω_{1,4}   ω_{1,10}
p = 50     95.4%     92.4%     93.8%     93.2%
p = 200    94.6%     92.4%     92.6%     94.0%
p = 400    94.6%     94.8%     92.6%     93.8%
Table 2: Empirical Coverage for Normal Conditionals

           β_{1,2}   β_{1,3}   β_{1,4}   β_{1,10}
p = 100    93.2%     93.4%     94.6%     92.6%
p = 300    93.2%     93.0%     95.0%     93.0%
Table 3: Empirical Coverage for Exponential Graphical Model

           θ_{1,2}   θ_{1,3}   θ_{1,4}   θ_{1,10}
p = 100    92.0%     90.0%     90.0%     92.4%
p = 300    92.6%     92.0%     92.2%     92.4%
For our experiment we set β_j = 0.4, β_j^{(2)} = −2, and we use a 4 nearest neighbor lattice dependence graph with interaction matrix β_{j,j−1} = β_{j−1,j} = −0.2 and β_{j,j−2} = β_{j−2,j} = −0.2. Since the univariate conditional distributions are all Gaussian, we generate the data by Gibbs sampling. The first 500 samples were discarded as a burn-in step and, of the remaining samples, we keep one in three. We set the number of samples n = 500. Table 2 shows the empirical coverage rate for p = 100 and p = 300 nodes. Again, we see that our inference algorithm behaves well on the above Normal Conditionals model.
Exponential Graphical Model. Our final synthetic example illustrates non-negative score matching for the Exponential Graphical Model. Here the node-conditional distributions obey an exponential distribution, and therefore the variables take only non-negative values. Such exponential distributions are typically used for data describing inter-arrival times between events, among other applications. The density function is given by q(x | θ) ∝ exp{ −Σ_{j=1}^p θ_j X_j − Σ_{j≠k} θ_{jk} X_j X_k }. To ensure that the distribution is valid and normalizable, we require θ_j > 0 and θ_{jk} ≥ 0. Therefore, we can only model negative dependencies via the Exponential graphical model. For the experiment we choose θ_j = 2, and a 2 nearest neighbor dependence graph with θ_{j,j−1} = θ_{j−1,j} = 0.3. We set n = 1000 and again use Gibbs sampling to generate data. The empirical coverage rate and histograms of estimates of four selected coefficients are presented in Table 3 and Figure 1 for p = 100 and p = 300, respectively.
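A sketch of the Gibbs sampler used here: under the density above, the node-conditional of X_j given x_{−j} is Exponential with rate θ_j + 2 Σ_{k≠j} θ_{jk} x_k (the factor 2 comes from the ordered-pair sum). The burn-in and thinning values mirror the previous experiment and are illustrative.

```python
import numpy as np

def gibbs_exponential_ggm(theta, Theta, p, n_keep, burn_in=500, thin=3, seed=0):
    # Theta is symmetric with zero diagonal; theta has positive entries.
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0, size=p)
    kept = []
    for it in range(burn_in + thin * n_keep):
        for j in range(p):
            rate = theta[j] + 2.0 * (Theta[j] @ x - Theta[j, j] * x[j])
            x[j] = rng.exponential(1.0 / rate)   # scale = 1 / rate
        if it >= burn_in and (it - burn_in) % thin == 0:
            kept.append(x.copy())
    return np.array(kept)

# 2-nearest-neighbor chain from the experiment: theta_j = 2, Theta_{j,j+1} = 0.3
p = 20
theta = np.full(p, 2.0)
Theta = np.zeros((p, p))
for j in range(p - 1):
    Theta[j, j + 1] = Theta[j + 1, j] = 0.3
X = gibbs_exponential_ggm(theta, Theta, p, n_keep=1000)
```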
Figure 1: Histograms for θ̃ for the four selected coefficients θ_{1,2}, θ_{1,3}, θ_{1,4}, θ_{1,10}; the first row is for p = 100 and the second row is for p = 300.
We should point out that, in general, non-negative score matching is harder than regular score
matching. For example, as shown in [15], to recover the structure from a regular Gaussian distribution
4
3
2
1
-0.2
-0.1
0
?1,4
0.1
0.2
0
-0.4
-0.2
0
?1,10
Figure 1: Histograms for ?: the first row is for p = 100 and the second row is for p = 300
7
with high probability, a sample size of about O(m² log p) suffices, while to recover from a non-negative Gaussian distribution, we need O(m² (log p)⁸), which is significantly larger. Therefore, we expect
that confidence intervals for non-negative score matching would require more samples to give accurate
inference. We can see this from Table 3, where the empirical coverage rate tends to be about 92%,
rather than the designed 95%, which is still impressive for the not-so-large sample size. The histograms in Figure 1 show that the fitting is quite good, but to get a better estimation and hence better coverage,
we would need more samples.
6 Protein Signaling Dataset
In this section we apply our algorithm to a protein signaling flow cytometry dataset. The dataset
contains the presence of p = 11 proteins in n = 7466 cells. It was first analyzed using Bayesian
Networks in [25], who fit a directed acyclic graph to the data, while [31] fit their proposed M-estimators
for exponential and Gaussian graphical models to the data set.
Figure 2 shows the network structure after applying our method to the data using an Exponential
Graphical Model. Since the data is non-negative and skewed, it can also be analyzed after log
transformation as was done by [31] for fitting Gaussian graphical model. We instead learn the
structure directly from the data without such a transformation. To infer the network structure, we
calculate the p-value for each pair of nodes, and keep the edges with p-values smaller than 0.01.
Estimated negative conditional dependencies are shown via red edges in the figure. Recall that
the exponential graphical model restricts the edge weights to be non-negative, hence only negative
dependencies can be estimated. From the figure we see that PKA is a major protein inhibitor in cell
signaling networks. This result is consistent with the estimated graph structure in [31], as well as in
the Bayesian network of [25]. In addition, we find significant dependency between PKC and PIP3.
Figure 2: Estimated Structure of Protein Signaling Dataset
7 Conclusion
Driven by applications in Biology and Social Networks, there has been a surge in statistical learning
models and methods for networks with a large number of nodes. Graphical models provide a very
flexible modeling framework for such networks, leading to much work in estimation and inference
algorithms for Gaussian graphical models, and more generally for graphical models with node-conditional densities lying in an exponential family, in the high-dimensional setting. Most of this work is based on regularized likelihood loss minimization, which has the disadvantage of being computationally intractable when the normalization constant (partition function) of the conditional densities is
based on regularized likelihood loss minimization, which has the disadvantage of being computationally intractable when the normalization constant (partition function) of the conditional densities is
not available in closed form. Score matching estimators provide a way around this issue, but so far
there has been no work which provides inference guarantees for score matching based estimators for
high-dimensional graphical models. In this paper we fill this gap for the case where score matching
is used to estimate the parameter corresponding to a single edge at a time. An interesting future
extension would be to perform inference on the entire model instead of one edge at a time as in the
current paper. Another extension would be to extend our results to discrete valued data.
References
[1] B. C. Arnold, E. Castillo, and J. M. Sarabia. Conditional Specification of Statistical Models. Springer Series in Statistics. Springer-Verlag, New York, 1999. ISBN 0-387-98761-4.
[2] R. F. Barber and M. Kolar. ROCKET: Robust confidence intervals via Kendall's tau for transelliptical graphical models. ArXiv e-prints, arXiv:1502.07641, Feb. 2015.
[3] A. Belloni and V. Chernozhukov. Least squares after model selection in high-dimensional sparse models. Bernoulli, 19(2):521–547, May 2013.
[4] A. Belloni, V. Chernozhukov, and C. B. Hansen. Inference on treatment effects after selection amongst high-dimensional controls. Rev. Econ. Stud., 81(2):608–650, Nov 2013.
[5] M. Chen, Z. Ren, H. Zhao, and H. H. Zhou. Asymptotically normal and efficient estimation of covariate-adjusted Gaussian graphical model. Journal of the American Statistical Association, 2015.
[6] S. Chen, D. M. Witten, and A. Shojaie. Selection and estimation for mixed graphical models. ArXiv e-prints, arXiv:1311.0085, Nov. 2013.
[7] M. Drton and M. H. Maathuis. Structure learning in graphical modeling. To appear in Annual Review of Statistics and Its Application, 3, 2016.
[8] P. G. M. Forbes and S. L. Lauritzen. Linear estimating equations for exponential families with application to Gaussian linear concentration models. Linear Algebra Appl., 473:261–283, 2015.
[9] A. Hyvärinen. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6:695–709, 2005.
[10] A. Hyvärinen. Some extensions of score matching. Comput. Stat. Data Anal., 51(5):2499–2512, 2007.
[11] J. Janková and S. A. van de Geer. Confidence intervals for high-dimensional inverse covariance estimation. ArXiv e-prints, arXiv:1403.6752, Mar. 2014.
[12] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. J. Mach. Learn. Res., 15(Oct):2869–2909, 2014.
[13] J. D. Lee and T. J. Hastie. Learning the structure of mixed graphical models. J. Comput. Graph. Statist., 24(1):230–253, 2015.
[14] H. Leeb and B. M. Pötscher. Can one estimate the unconditional distribution of post-model-selection estimators? Econ. Theory, 24(02):338–376, Nov 2007.
[15] L. Lin, M. Drton, and A. Shojaie. Estimation of high-dimensional graphical models using regularized score matching. ArXiv e-prints, arXiv:1507.00433, July 2015.
[16] W. Liu. Gaussian graphical model estimation with false discovery rate control. Ann. Stat., 41(6):2948–2978, 2013.
[17] W. Liu and X. Luo. Fast and adaptive sparse precision matrix estimation in high dimensions. J. Multivar. Anal., 135:153–162, 2015.
[18] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Stat. Sci., 27(4):538–557, 2012.
[19] J. Neyman. Optimal asymptotic tests of composite statistical hypotheses. Probability and Statistics, 57:213, 1959.
[20] M. Parry, A. P. Dawid, and S. L. Lauritzen. Proper local scoring rules. Ann. Stat., 40(1):561–592, Feb 2012.
[21] S. L. Portnoy. Asymptotic behavior of likelihood methods for exponential families when the number of parameters tends to infinity. Ann. Stat., 16(1):356–366, 1988.
[22] B. M. Pötscher. Confidence sets based on sparse estimators are necessarily large. Sankhyā, 71(1, Ser. A):1–18, 2009.
[23] Z. Ren, T. Sun, C.-H. Zhang, and H. H. Zhou. Asymptotic normality and optimalities in estimation of large Gaussian graphical models. Ann. Stat., 43(3):991–1026, 2015.
[24] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. 2011.
[25] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger, and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523–529, 2005.
[26] B. Sriperumbudur, K. Fukumizu, A. Gretton, and A. Hyvärinen. Density estimation in infinite dimensional exponential families. ArXiv e-prints, arXiv:1312.3516, Dec. 2013.
[27] S. Sun, M. Kolar, and J. Xu. Learning structured densities via infinite dimensional exponential families. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2287–2295. Curran Associates, Inc., 2015.
[28] S. A. van de Geer, P. Bühlmann, Y. Ritov, and R. Dezeure. On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Stat., 42(3):1166–1202, Jun 2014.
[29] J. Wang and M. Kolar. Inference for high-dimensional exponential family graphical models. In A. Gretton and C. C. Robert, editors, Proc. of AISTATS, volume 51, pages 751–760, 2016.
[30] E. Yang, Y. Baker, P. Ravikumar, G. I. Allen, and Z. Liu. Mixed graphical models via exponential families. In Proc. 17th Int. Conf. Artif. Intel. Stat., pages 1042–1050, 2014.
[31] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. J. Mach. Learn. Res., 16:3813–3847, 2015.
[32] C.-H. Zhang and S. S. Zhang. Confidence intervals for low dimensional parameters in high dimensional linear models. J. R. Stat. Soc. B, 76(1):217–242, Jul 2013.
Testing for Differences in Gaussian Graphical Models:
Applications to Brain Connectivity
Eugene Belilovsky 1,2,3 , Gael Varoquaux2 , Matthew Blaschko3
1
University of Paris-Saclay, 2 INRIA, 3 KU Leuven
{eugene.belilovsky, gael.varoquaux}@inria.fr
matthew.blaschko@esat.kuleuven.be
Abstract
Functional brain networks are well described and estimated from data with Gaussian Graphical Models (GGMs), e.g. using sparse inverse covariance estimators.
Comparing functional connectivity of subjects in two populations calls for comparing these estimated GGMs. Our goal is to identify differences in GGMs known
to have similar structure. We characterize the uncertainty of differences with
confidence intervals obtained using a parametric distribution on parameters of a
sparse estimator. Sparse penalties enable statistical guarantees and interpretable
models even in high-dimensional and low-sample settings. Characterizing the
distributions of sparse models is inherently challenging as the penalties produce
a biased estimator. Recent work invokes the sparsity assumptions to effectively
remove the bias from a sparse estimator such as the lasso. These distributions can
be used to give confidence intervals on edges in GGMs, and by extension their
differences. However, in the case of comparing GGMs, these estimators do not
make use of any assumed joint structure among the GGMs. Inspired by priors from
brain functional connectivity we derive the distribution of parameter differences
under a joint penalty when parameters are known to be sparse in the difference.
This leads us to introduce the debiased multi-task fused lasso, whose distribution
can be characterized in an efficient manner. We then show how the debiased lasso
and multi-task fused lasso can be used to obtain confidence intervals on edge
differences in GGMs. We validate the techniques proposed on a set of synthetic
examples as well as a neuro-imaging dataset created for the study of autism.
1 Introduction
Gaussian graphical models describe well interactions in many real-world systems. For instance,
correlations in brain activity reveal brain interactions between distant regions, a process known as
functional connectivity. Functional connectivity is an interesting probe on brain mechanisms as
it persists in the absence of tasks (the so-called ?resting-state?) and is thus applicable to study
populations of impaired subjects, as in neurologic or psychiatric diseases [3]. From a formal
standpoint, Gaussian graphical models are well suited to estimate brain connections from functional
Magnetic Resonance Imaging (fMRI) signals [28, 33]. A set of brain regions and related functional
connections is then called a functional connectome [31, 3]. Its variation across subjects can capture
cognition [26, 27] or pathology [17, 3]. However, the effects of pathologies are often very small, as
resting-state fMRI is a weakly-constrained and noisy imaging modality, and the number of subjects
in a study is often small given the cost of imaging. Statistical power is then a major concern [2]. The
statistical challenge is to increase the power to detect differences between Gaussian graphical models
in the small-sample regime.
In these settings, estimation and comparison of Gaussian graphical models fall in the range of
high-dimensional statistics: the number of degrees of freedom in the data is small compared to
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the dimensionality of the model. In this regime, sparsity-promoting `1 -based penalties can make
estimation well-posed and recover good estimation performance despite the scarcity of data [29, 10,
22, 6, 1]. These encompass sparse regression methods such as the lasso or recovery methods such as
basis pursuit, and can be applied to estimation of Gaussian graphical models with approaches such as
the graphical lasso[10]. There is now a wide body of literature which demonstrates the statistical
properties of these methods [1]. Crucial to applications in medicine or neuroscience, recent work
characterizes the uncertainty, with confidence intervals and p-values, of the parameters selected by
these methods [15, 16, 19, 12]. These works focus primarily on the lasso and graphical lasso.
Approaches to estimate statistical significance on sparse models fall into several general categories:
(a) non-parameteric sampling based methods which are inherently expensive and have difficult
limiting distributions [1, 24, 5], (b) characterizations of the distribution of new parameters that enter a
model along a regularization path [19, 12], or (c) for a particular regularization parameter, debiasing
the solution to obtain a new consistent estimator with known distribution [16, 15, 30]. While some of
the latter work has been used to characterize confidence intervals on network edge selection, there is
no result, to our knowledge, on the important problem of identifying differences in networks. Here
the confidence on the result is even more critical, as the differences are the direct outcome used for
neuroscience research or medical practice, and it is important to provide the practitioner with a measure of
the uncertainty.
Here, we consider the setting of two datasets known to have very similar underlying signals, but
which individually may not be very sparse. A motivating example is determining the difference
in brain networks of subjects from different groups: population analysis of connectomes [31, 17].
Recent literature in neuroscience [20] has suggested functional networks are not sparse. On the
other hand, differences in connections across subjects should be sparse. Indeed the link between
functional and anatomical brain networks [13] suggests they should not differ drastically from one
subject to another. From a neuroscientific standpoint we are interested in determining which edges
differ between two populations (e.g. autistic and non-autistic). Furthermore we want to provide
confidence intervals on our results. We particularly focus on the setting where one dataset is larger
than the other. In many applications it is more difficult to collect one group (e.g. individuals with
specific pathologies) than another.
We introduce an estimator tailored to this goal: the debiased multi-task fused lasso. We show that,
when the underlying parameter differences are indeed sparse, we can obtain a tractable Gaussian
distribution for the parameter difference. This closed-form distribution underpins accurate hypothesis
testing and confidence intervals. We then use the relationship between nodewise regression and the
inverse covariance matrix to apply our estimator to learning differences of Gaussian graphical models.
The paper is organized as follows. In Section 2 we review previous work on learning of GGMs and
the debiased lasso. Section 3 discusses a joint debiasing procedure that specifically debiases the
difference estimator. In Section 3.1 we introduce the debiased multi-task fused lasso and show how
it can be used to learn parameter differences in linear models. In Section 3.2, we show how these
results can be used for GGMs. In Section 4 we validate our approach on synthetic and fMRI data.
2 Background and Related Work
Debiased Lasso A central starting point for our work is the debiased lasso [30, 16]. Here one
considers the linear regression model, $Y = X\beta + \epsilon$, with data matrix $X$ and output $Y$, corrupted by
$\epsilon \sim N(0, \sigma^2 I)$ noise. The lasso estimator is formulated as follows:
$$\hat{\beta}_\lambda = \arg\min_\beta \frac{1}{n}\|Y - X\beta\|^2 + \lambda\|\beta\|_1 \qquad (1)$$
The KKT conditions give $\lambda\hat{k} = \frac{1}{n}X^T(Y - X\hat{\beta})$, where $\hat{k}$ is the subgradient of $\|\hat{\beta}\|_1$. The debiased
lasso estimator [30, 16] is then formulated as $\hat{\beta}^u_\lambda = \hat{\beta}_\lambda + M\lambda\hat{k}$ for some $M$ that is constructed to give
guarantees on the asymptotic distribution of $\hat{\beta}^u_\lambda$. Note that this estimator is not strictly unbiased in the
finite sample case, but has a bias that rapidly approaches zero (w.r.t. $n$) if $M$ is chosen appropriately,
the true regressor $\beta$ is indeed sparse, and the design matrix satisfies a certain restricted eigenvalue
property [30, 16]. We decompose the difference of this debiased estimator and the truth as follows:
$$\hat{\beta}^u_\lambda - \beta = \frac{1}{n}MX^T\epsilon - (M\hat{\Sigma} - I)(\hat{\beta} - \beta) \qquad (2)$$
The first term is Gaussian and the second term is responsible for the bias. Using Hölder's inequality
the second term can be bounded by $\|M\hat{\Sigma} - I\|_\infty \|\hat{\beta} - \beta\|_1$. The first part of this we can bound using
an appropriate selection of $M$, while the second part is bounded by our implicit sparsity assumptions
coming from lasso theory [1]. Two approaches from the recent literature discuss how one can select
$M$ to appropriately debias this estimate. In [30] it suffices to use nodewise regression to learn an
inverse covariance matrix which guarantees constraints on $\|M\hat{\Sigma} - I\|_\infty$. A second approach by [16]
proposes to solve a quadratic program to directly minimize the variance of the debiased estimator
while constraining $\|M\hat{\Sigma} - I\|_\infty$ to induce sufficiently small bias.
Intuitively the construction of $\hat{\beta}^u_\lambda$ allows us to trade variance and bias via the $M$ matrix. This allows
us to overcome a naive bias-variance tradeoff by leveraging the sparsity assumptions that bound
$\|\hat{\beta} - \beta\|_1$. In the sequel we expand this idea to the case of debiased parameter difference estimates
and sparsity assumptions on the parameter differences.
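To ground the construction, here is a minimal Python sketch of the debiased lasso, assuming $M$ is approximated by a ridge-regularized inverse of the empirical covariance; the works above instead obtain $M$ via nodewise regression [30] or a quadratic program [16], and the ridge parameter and scikit-learn penalty scaling are illustrative assumptions of this sketch.

import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(X, Y, lam, ridge=0.1):
    """Sketch: beta_u = beta_hat + (1/n) M X^T (Y - X beta_hat)."""
    n, p = X.shape
    # scikit-learn's Lasso minimizes (1/(2n))||Y - Xb||^2 + alpha ||b||_1
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, Y).coef_
    Sigma_hat = X.T @ X / n                           # empirical covariance
    M = np.linalg.inv(Sigma_hat + ridge * np.eye(p))  # crude surrogate for M
    beta_u = beta_hat + M @ X.T @ (Y - X @ beta_hat) / n
    return beta_u, M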
In the context of GGMs, the debiased lasso can give us an estimator that asymptotically converges to
the partial correlations. As highlighted by [34], we can thus use the debiased lasso to obtain difference
estimators with known distributions. This allows us to obtain confidence intervals on edge differences
between Gaussian graphical models. We discuss this further in the sequel.
Gaussian Graphical Model Structure Learning A standard approach to estimating Gaussian
graphical models in high dimensions is to assume sparsity of the precision matrix and impose a
constraint which limits the number of non-zero entries of the precision matrix. This constraint can
be achieved with an ℓ1-norm regularizer as in the popular graphical lasso [10]. Many variants of this
approach that incorporate further structural assumptions have been proposed [14, 6, 23].
An alternative to inducing sparsity on the precision matrix indirectly is neighborhood ℓ1
regression from [22]. Here the authors make use of a long-known property that connects the entries
of the precision matrix to the problem of regressing one variable on all the others [21]. This
property is critical to our proposed estimation as it allows relating regression models to finding edges
connected to specific nodes in the GGM.
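As a rough illustration of the neighborhood-regression idea, the sketch below regresses each node on all others and keeps the nonzero coefficients as edges; the penalty value and the "AND" symmetrization rule are illustrative assumptions, not choices made in [22].

import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, lam=0.1):
    n, p = X.shape
    adjacency = np.zeros((p, p), dtype=bool)
    for v in range(p):
        others = [j for j in range(p) if j != v]
        coef = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, v]).coef_
        adjacency[v, others] = coef != 0   # nonzero coefficients mark candidate edges
    return adjacency & adjacency.T         # "AND" rule: keep mutually selected edges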
GGMs have been found to be good at recovering the main brain networks from fMRI data [28, 33].
Yet, recent work in neuroscience has showed that the structural wiring of the brain did not correspond
to a very sparse network [20], thus questioning the underlying assumption of sparsity often used
to estimate brain network connectivity. On the other hand, for the problem of finding differences
between networks in two populations, sparsity may be a valid assumption. It is well known that
anatomical brain connections tend to closely follow functional ones [13]. Since anatomical networks
do not differ drastically we can surmise that two brain networks should not differ much even in the
presence of pathologies. The statistical method we present here leverages sparsity in the difference of
two networks to yield well-behaved estimation and hypothesis testing in the low-sample regime. Most
closely related to our work, [35, 9] recently considered different approaches to estimating difference
networks, but do not assign significance to the detection of edges.
3 Debiased Difference Estimation
In many applications one may be interested in learning multiple linear models from data that share
many parameters. Situations such as this arise often in neuroimaging and bioinformatics applications.
We can often improve the learning procedure of such models by incorporating fused penalties that
penalize the $\|\cdot\|_1$ norm of the parameter differences or $\|\cdot\|_{1,2}$, which encourages groups of parameters
to shrink together. These methods have been shown to substantially improve the learning of the
joint models. However, the differences between model parameters, which can have a high sample
complexity when there are few of them, are often pointed out only in passing [4, 6, 14]. On the
other hand, in many situations we might be interested in actually understanding and identifying
the differences between elements of the support. For example when considering brain networks of
patients suffering from a pathology and healthy control subjects, the difference in brain connectivity
may be of great interest. Here we focus specifically on accurately identifying differences with
significance.
We consider the case of two tasks (e.g. two groups of subjects), but the analysis can be easily extended
to general multi-task settings. Consider the problem setting of data matrices $X_1$ and $X_2$, which
are $n_1 \times p$ and $n_2 \times p$, respectively. We model them as producing outputs $Y_1$ and $Y_2$, corrupted by
diagonal Gaussian noise $\epsilon_1$ and $\epsilon_2$ as follows:
$$Y_1 = X_1\beta_1 + \epsilon_1, \quad Y_2 = X_2\beta_2 + \epsilon_2 \qquad (3)$$
Let $S_1$ and $S_2$ index the elements of the support of $\beta_1$ and $\beta_2$, respectively. Furthermore the support
of $\beta_1 - \beta_2$ is indexed by $S_d$ and finally the union of $S_1$ and $S_2$ is denoted $S_a$. Using a squared loss
estimator producing independent estimates $\hat{\beta}_1, \hat{\beta}_2$ we can obtain a difference estimate $\hat{\beta}_d = \hat{\beta}_1 - \hat{\beta}_2$.
In general if $S_d$ is very small relative to $S_a$ then it will be difficult to identify the support
$S_d$. This can be seen if we consider each of the individual components of the prediction errors: the
larger the true support $S_a$, the more it will drown out the subset which corresponds to the difference
support. This can be true even if one uses ℓ1 regularizers over the parameter vectors. Consequently,
one cannot rely on the straightforward strategy of learning two independent estimates and taking their
difference. The problem is particularly pronounced in the common setting where one group has fewer
samples than the other. Thus here we consider the setting where $n_1 > n_2$ and possibly $n_1 \gg n_2$.
Let $\hat{\beta}_1$ and $\hat{\beta}_2$ be regularized least squares estimates. In our problem setting we wish to obtain
confidence intervals on debiased versions of the difference $\hat{\beta}_d = \hat{\beta}_1 - \hat{\beta}_2$ in a high-dimensional
setting (in the sense that $n_2 < p$). We aim to leverage assumptions about the form of the true $\beta_d$,
primarily that it is sparse, while the independent $\beta_1$ and $\beta_2$ are weakly sparse or not sparse. We
consider a general case of a joint regularized least squares estimation of $\hat{\beta}_1$ and $\hat{\beta}_2$:
$$\min_{\beta_1,\beta_2} \frac{1}{n_1}\|Y_1 - X_1\beta_1\|^2 + \frac{1}{n_2}\|Y_2 - X_2\beta_2\|^2 + R(\beta_1, \beta_2) \qquad (4)$$
We note that differentiating and using the KKT conditions gives
$$\lambda\hat{k} = \begin{bmatrix} \lambda\hat{k}_1 \\ \lambda\hat{k}_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{n_1}X_1^T(Y_1 - X_1\hat{\beta}_1) \\ \frac{1}{n_2}X_2^T(Y_2 - X_2\hat{\beta}_2) \end{bmatrix} \qquad (5)$$
where $\lambda\hat{k}$ is the (sub)gradient of $R(\hat{\beta}_1, \hat{\beta}_2)$. Substituting Equation (3) we can now write
$$\hat{\Sigma}_1(\hat{\beta}_1 - \beta_1) + \lambda\hat{k}_1 = \frac{1}{n_1}X_1^T\epsilon_1 \quad \text{and} \quad \hat{\Sigma}_2(\hat{\beta}_2 - \beta_2) + \lambda\hat{k}_2 = \frac{1}{n_2}X_2^T\epsilon_2 \qquad (6)$$
We would like to solve for the difference $\hat{\beta}_1 - \hat{\beta}_2$ but the covariance matrices may not be invertible.
We introduce matrices $M_1$ and $M_2$, which will allow us to isolate the relevant term. We will see that
in addition these matrices will allow us to decouple the bias and variance of the estimators:
$$M_1\hat{\Sigma}_1(\hat{\beta}_1 - \beta_1) + M_1\lambda\hat{k}_1 = \frac{1}{n_1}M_1X_1^T\epsilon_1 \quad \text{and} \quad M_2\hat{\Sigma}_2(\hat{\beta}_2 - \beta_2) + M_2\lambda\hat{k}_2 = \frac{1}{n_2}M_2X_2^T\epsilon_2 \qquad (7)$$
Subtracting these and rearranging, we can now isolate the difference estimator plus a term $\Delta$ we add
back, controlled by $M_1$ and $M_2$:
$$(\hat{\beta}_1 - \hat{\beta}_2) - (\beta_1 - \beta_2) + M_1\lambda\hat{k}_1 - M_2\lambda\hat{k}_2 = \frac{1}{n_1}M_1X_1^T\epsilon_1 - \frac{1}{n_2}M_2X_2^T\epsilon_2 - \Delta \qquad (8)$$
$$\Delta = (M_1\hat{\Sigma}_1 - I)(\hat{\beta}_1 - \beta_1) - (M_2\hat{\Sigma}_2 - I)(\hat{\beta}_2 - \beta_2) \qquad (9)$$
Denoting $\beta_d := \beta_1 - \beta_2$ and $\beta_a := \beta_1 + \beta_2$, we can reformulate $\Delta$:
$$\Delta = \frac{M_1\hat{\Sigma}_1 + M_2\hat{\Sigma}_2 - 2I}{2}(\hat{\beta}_d - \beta_d) + \frac{M_1\hat{\Sigma}_1 - M_2\hat{\Sigma}_2}{2}(\hat{\beta}_a - \beta_a) \qquad (10)$$
Here, $\Delta$ will control the bias of our estimator. Additionally, we want to minimize its variance,
$$\frac{1}{n_1}\sigma_1^2 M_1\hat{\Sigma}_1 M_1^T + \frac{1}{n_2}\sigma_2^2 M_2\hat{\Sigma}_2 M_2^T. \qquad (11)$$
We can now overcome the limitations of a simple bias-variance trade-off by using an appropriate
regularizer coupled with an assumption on the underlying signals $\beta_1$ and $\beta_2$. This will in turn make $\Delta$
asymptotically vanish while keeping the variance minimized.
Since we are interested in pointwise estimates, we can focus on bounding the infinity norm of $\Delta$:
$$\|\Delta\|_\infty \leq \underbrace{\tfrac{1}{2}\|M_1\hat{\Sigma}_1 + M_2\hat{\Sigma}_2 - 2I\|_\infty}_{\mu_1}\, \underbrace{\|\hat{\beta}_d - \beta_d\|_1}_{l_d} + \underbrace{\tfrac{1}{2}\|M_1\hat{\Sigma}_1 - M_2\hat{\Sigma}_2\|_\infty}_{\mu_2}\, \underbrace{\|\hat{\beta}_a - \beta_a\|_1}_{l_a} \qquad (12)$$
We can control the maximum bias by selecting $M_1$ and $M_2$ appropriately. If we use an appropriate
regularizer coupled with sparsity assumptions we can bound the terms $l_a$ and $l_d$ and use this knowledge
to appropriately select $M_1$ and $M_2$ such that the bias becomes negligible. If we had only the
independent parameter sparsity assumption, we could apply the results of the debiased lasso and
estimate $M_1$ and $M_2$ independently as in [16]. In the case of interest where $\beta_1$ and $\beta_2$ share many
weights we can do better by taking this as an assumption and applying a sparsity regularization on the
difference by adding the term $\lambda_2\|\beta_1 - \beta_2\|_1$. Comparing the decoupled penalty to the fused penalty
we see that $l_d$ would decrease at a given sample size. We now show how to jointly estimate
$M_1$ and $M_2$ so that $\|\Delta\|_\infty$ becomes negligible for a given $n$, $p$ and sparsity assumption.
3.1 Debiasing the Multi-Task Fused Lasso
Motivated by the inductive hypothesis from neuroscience described above we introduce a consistent
low-variance estimator, the debiased multi-task fused lasso. We propose to use the following
regularizer $R(\beta_1, \beta_2) = \lambda_1\|\beta_1\|_1 + \lambda_1\|\beta_2\|_1 + \lambda_2\|\beta_1 - \beta_2\|_1$. This penalty has been referred to in
some literature as the multi-task fused lasso [4]. We propose to then debias this estimate as shown in
(8). We estimate the $M_1$ and $M_2$ matrices by solving the following QP for each row $m_1$ and $m_2$ of
the matrices $M_1$ and $M_2$:
$$\min_{m_1,m_2} \frac{1}{n_1}m_1^T\hat{\Sigma}_1 m_1 + \frac{1}{n_2}m_2^T\hat{\Sigma}_2 m_2 \quad \text{s.t.} \quad \|M_1\hat{\Sigma}_1 + M_2\hat{\Sigma}_2 - 2I\|_\infty \leq \gamma_1, \;\; \|M_1\hat{\Sigma}_1 - M_2\hat{\Sigma}_2\|_\infty \leq \gamma_2 \qquad (13)$$
This directly minimizes the variance, while bounding the bias in the constraints. We now show how to
set the bounds:
Proposition 1. Take $\lambda_1 > 2\sqrt{\log p / n_2}$ and $\lambda_2 = O(\lambda_1)$. Denote $s_d$ the difference sparsity, $s_{1,2}$ the
parameter sparsity $|S_1| + |S_2|$, $c > 1$, $a > 1$, and $0 < m \ll 1$. When the compatibility condition
[1, 11] holds, the following bounds give $l_a\mu_2 = o(1)$ and $l_d\mu_1 = o(1)$ and thus $\|\Delta\|_\infty = o(1)$ with
high probability:
$$\gamma_1 \leq \frac{1}{c\lambda_2 s_d n_2^m} \quad \text{and} \quad \gamma_2 \leq \frac{1}{a(\lambda_1 s_{1,2} + \lambda_2 s_d)\, n_2^m} \qquad (14)$$
The proof is given in the supplementary material. Using the prescribed $M$s obtained with (13) and
(14) we obtain an unbiased estimator given by (8) with variance (11).
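For concreteness, the per-row program (13) can be written in a few lines with CVXPY; this is a sketch under the assumption that the empirical covariances are numerically positive semidefinite, whereas the experiments below use the Mosek solver directly.

import cvxpy as cp
import numpy as np

def debias_row(Sigma1, Sigma2, row, n1, n2, gamma1, gamma2):
    p = Sigma1.shape[0]
    e = np.zeros(p); e[row] = 1.0                     # corresponding row of I
    m1, m2 = cp.Variable(p), cp.Variable(p)
    objective = cp.Minimize(cp.quad_form(m1, Sigma1) / n1 +
                            cp.quad_form(m2, Sigma2) / n2)
    constraints = [cp.norm(Sigma1 @ m1 + Sigma2 @ m2 - 2 * e, "inf") <= gamma1,
                   cp.norm(Sigma1 @ m1 - Sigma2 @ m2, "inf") <= gamma2]
    cp.Problem(objective, constraints).solve()
    return m1.value, m2.value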
3.2 GGM Difference Structure Discovery with Significance
The debiased lasso and the debiased multi-task fused lasso, proposed in the previous section, can be
used to learn the structure of a difference of Gaussian graphical models and to provide significance
results on the presence of edges within the difference graph. We refer to these two procedures as
Difference of Neighborhoods Debiased Lasso Selection and Difference of Neighborhoods Debiased
Fused Lasso Selection.
We recall that the conditional independence properties of a GGM are given by the zeros of the
precision matrix, and these zeros correspond to the zeros of regression parameters when regressing
one variable on all the others. [34] notes that obtaining a debiased lasso estimate for each node in
the graph leads to a sparse unbiased precision matrix estimate with a known asymptotic distribution.
Subtracting these estimates for two different datasets gives us a difference estimate whose zeros
correspond to no difference of graph edges in two GGMs. We can similarly use the debiased
multi-task fused lasso described above and the joint debiasing procedure to obtain a test statistic for the
difference of networks. We now formalize this procedure.
Notation Given GGMs $j = 1, 2$, let $X_j$ denote the random variable in $\mathbb{R}^p$ associated with GGM
$j$. We denote $X_{j,v}$ the random variable associated with a node $v$ of the GGM and $X_{j,v^c}$ all other
nodes in the graph. We denote $\hat{\beta}_{j,v}$ the lasso or multi-task fused lasso estimate from regressing $X_{j,v}$
onto $X_{j,v^c}$, and $\hat{\beta}_{j,dL,v}$ the debiased version of $\hat{\beta}_{j,v}$. Finally let $\beta_{j,v}$ denote the unknown regression,
$X_{j,v} = X_{j,v^c}\beta_{j,v} + \epsilon_j$ where $\epsilon_j \sim N(0, \sigma_j I)$. Define $\hat{\beta}^i_{D,v} = \hat{\beta}^i_{1,dL,v} - \hat{\beta}^i_{2,dL,v}$ as the test statistic
associated with the edge $(v, i)$ in the difference of GGMs $j = 1, 2$.
Algorithm 1 Difference Network Selection with Neighborhood Debiased Lasso
Input: V = {1, ..., P}; N x P data matrices X1 and X2
Output: P x (P-1) matrix B of test statistics
for v in V do
    Estimate unbiased sigma_1, sigma_2 from X_{1,v}, X_{2,v}
    for j in {1, 2} do
        beta_j <- SolveLasso(X_{j,v^c}, X_{j,v})
        M_j <- MEstimator(X_{j,v^c})
        beta_{j,U} <- beta_j + M_j X_{j,v^c}^T (X_{j,v} - X_{j,v^c} beta_j)
    end for
    sigma_d^2 <- diag((sigma_1^2 / n1) M_1 Sigma_1 M_1^T + (sigma_2^2 / n2) M_2 Sigma_2 M_2^T)
    for j in v^c do
        B_{v,j} = (beta_{1,U,j} - beta_{2,U,j}) / sqrt(sigma_{d,j}^2)
    end for
end for

Algorithm 2 Difference Network Selection with Neighborhood Debiased Fused Lasso
Input: V = {1, ..., P}; N x P data matrices X1 and X2
Output: P x (P-1) matrix B of test statistics
for v in V do
    Estimate unbiased sigma_1, sigma_2 from X_{1,v}, X_{2,v}
    beta_1, beta_2 <- FusedLasso(X_{1,v^c}, X_{1,v}, X_{2,v^c}, X_{2,v})
    M_1, M_2 <- MEstimator(X_{1,v^c}, X_{2,v^c})
    for j in {1, 2} do
        beta_{j,U} <- beta_j + M_j X_{j,v^c}^T (X_{j,v} - X_{j,v^c} beta_j)
    end for
    sigma_d^2 <- diag((sigma_1^2 / n1) M_1 Sigma_1 M_1^T + (sigma_2^2 / n2) M_2 Sigma_2 M_2^T)
    for j in v^c do
        B_{v,j} = (beta_{1,U,j} - beta_{2,U,j}) / sqrt(sigma_{d,j}^2)
    end for
end for
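A compact Python sketch of Algorithm 1 follows, reusing the debiased_lasso() sketch given earlier; the residual-variance plug-in for the noise levels is a simplifying assumption in place of the unbiased estimator used in the algorithm.

import numpy as np

def difference_network(X1, X2, lam):
    n1, p = X1.shape
    n2 = X2.shape[0]
    B = np.zeros((p, p))                 # row v holds statistics for edges (v, j)
    for v in range(p):
        others = [j for j in range(p) if j != v]
        betas, Ms, sig2 = [], [], []
        for X in (X1, X2):
            beta_u, M = debiased_lasso(X[:, others], X[:, v], lam)
            resid = X[:, v] - X[:, others] @ beta_u
            betas.append(beta_u); Ms.append(M); sig2.append(resid.var())
        S1 = X1[:, others].T @ X1[:, others] / n1
        S2 = X2[:, others].T @ X2[:, others] / n2
        var_d = np.diag(sig2[0] / n1 * Ms[0] @ S1 @ Ms[0].T +
                        sig2[1] / n2 * Ms[1] @ S2 @ Ms[1].T)
        B[v, others] = (betas[0] - betas[1]) / np.sqrt(var_d)
    return B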
Proposition 2. Given the $\hat{\beta}^i_{D,v}$, with $M_1$ and $M_2$ computed as in [16] for the debiased lasso or as
in Section 3.1 for the debiased multi-task fused lasso, when the respective assumptions of these
estimators are satisfied the following holds w.h.p.:
$$\hat{\beta}^i_{D,v} - \beta^i_{D,v} = W + o(1) \quad \text{where} \quad W \sim N\!\left(0, \left[\tfrac{\sigma_1^2}{n_1}M_1\hat{\Sigma}_1M_1^T + \tfrac{\sigma_2^2}{n_2}M_2\hat{\Sigma}_2M_2^T\right]_{i,i}\right) \qquad (15)$$
This follows directly from the asymptotic consistency of each individual $\hat{\beta}^i_{j,dL,v}$ for the debiased
lasso and multi-task fused lasso.
We can now define the null hypothesis of interest as $H_0: \beta_{1,(i,j)} = \beta_{2,(i,j)}$. Obtaining a test
statistic for each element $\beta^i_{D,v}$ allows us to perform hypothesis testing on individual edges, all the
edges, or groups of edges (controlling for the FWER). We summarize the Neighbourhood Debiased
Lasso Selection process in Algorithm 1 and the Neighbourhood Debiased Multi-Task Fused Lasso
Selection in Algorithm 2, which can be used to obtain a matrix of all the relevant test statistics.
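Under the Gaussian null of Proposition 2, the entries of B are z statistics, so two-sided p-values follow directly; a one-line sketch:

import numpy as np
from scipy.stats import norm

def edge_pvalues(B):
    return 2 * norm.sf(np.abs(B))   # two-sided tail probability per edge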
4 Experiments

4.1 Simulations
We generate synthetic data based on two Gaussian graphical models with 75 vertices. Each of the
individual graphs has a sparsity of 19% and their difference sparsity is 3%. We construct the
models by taking two identical precision matrices and randomly removing some edges from both.
We generate synthetic data using both precision matrices. We use $n_1 = 800$ samples for the first
dataset and vary the second dataset over $n_2 = 20, 30, \ldots, 150$.
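A sketch of this generative setup is given below; the edge weights, the diagonal-dominance trick used to keep the precision matrices positive definite, and the removal of edges from only one of the two copies are illustrative simplifications of the protocol described above.

import numpy as np

def make_paired_ggms(p=75, edge_prob=0.19, n_diff=40, seed=0):
    rng = np.random.default_rng(seed)
    A = np.triu((rng.random((p, p)) < edge_prob) *
                rng.uniform(0.2, 0.4, size=(p, p)), k=1)
    Theta = A + A.T
    Theta += np.eye(p) * (np.abs(Theta).sum(axis=1).max() + 0.5)  # force PD
    Theta1, Theta2 = Theta.copy(), Theta.copy()
    rows, cols = np.nonzero(np.triu(Theta2, k=1))
    drop = rng.choice(len(rows), size=min(n_diff, len(rows)), replace=False)
    Theta2[rows[drop], cols[drop]] = Theta2[cols[drop], rows[drop]] = 0.0
    return Theta1, Theta2

def sample_ggm(Theta, n, rng):
    cov = np.linalg.inv(Theta)
    return rng.multivariate_normal(np.zeros(len(cov)), cov, size=n)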
We perform a regression using the debiased lasso and the debiased multi-task fused lasso on each
node of the graphs. As an extra baseline we consider the projected ridge method from the R package
'hdi' [7]. We use the debiased lasso of [16], where we set $\lambda = k\hat{\sigma}\sqrt{\log p/n}$. We select $k$ by 3-fold
cross-validation over $k \in \{0.1, \ldots, 100\}$ and $M$ as prescribed in [16], which we obtain by solving a
quadratic program. $\hat{\sigma}$ is an unbiased estimator of the noise variance. For the debiased fused lasso we let
both $\lambda_1 = k_1\hat{\sigma}\sqrt{2\log p/n_2}$ and $\lambda_2 = k_2\hat{\sigma}\sqrt{2\log p/n_2}$, and select based on 3-fold cross-validation
from the same range as $k$. $M_1$ and $M_2$ are obtained as in Equation (13) with the bounds (14) being
set with $c = a = 2$, $s_d = 2$, $s_{1,2} = 15$, $m = 0.01$, and the cross-validated $\lambda_1$ and $\lambda_2$. In both the
debiased lasso and fused multi-task lasso cases we utilize the Mosek QP solver package to obtain $M$.
For the projected ridge method we use the hdi package to obtain two estimates of $\beta_1$ and $\beta_2$ along
with their upper-bounded biases, which are then used to obtain p-values for the difference.
We report the false positive rate, the power, the coverage and interval length as per [30] for the
difference of graphs. In these experiments we aggregate statistics to demonstrate the power of the test
statistic; as such we consider each edge as a separate test and do not perform corrections. Table
1 gives the numerical results for $n_2 = 60$: the power and coverage are substantially better for the
debiased fused multi-task lasso, while at the same time the confidence intervals are smaller.
Table 1: Comparison of Debiased Lasso, Debiased Fused Lasso, and Projected Ridge Regression for
edge selection in the difference of GGMs. The significance level is 5%, n1 = 800 and n2 = 60. All
methods have false positive rates below the significance level and the debiased fused lasso dominates
in terms of power. The coverage of the difference support and non-difference support is also best for
the debiased fused lasso, which simultaneously has smaller confidence intervals on average.

Method             FP     TP (Power)   Cov S    Cov Sdc   len S   len Sdc
Deb. Lasso         3.7%   80.6%        96.2%    92%       2.199   2.195
Deb. Fused Lasso   0.0%   93.3%        100%     98.6%     2.191   2.041
Ridge Projection   0.0%   18.6%        100%     100%      5.544   5.544

Figure 1: Power of the test for different numbers of samples n2 (from 30 to 150) in the second
simulation, with n1 = 800, for ridge, lasso and fused lasso. The debiased fused lasso has the highest
statistical power.
Figure 1 shows the power of the test for different values of $n_2$. The fused lasso outperforms the other
methods substantially. Projected ridge regression is particularly weak in this scenario, as it uses a
worst-case p-value obtained using an estimate of an upper bound on the bias [7].
4.2 Autism Dataset
Correlations in brain activity measured via fMRI reveal functional interactions between remote brain
regions [18]. In population analysis, they are used to measure how connectivity varies between
different groups. Such analysis of brain function is particularly important in psychiatric diseases
that have no known anatomical support: the brain functions in a pathological aspect, but nothing
abnormal is clearly visible in the brain tissues. Autism spectrum disorder is a typical example of such
an ill-understood psychiatric disease. Resting-state fMRI is accumulated in an effort to shed light on
this disease's mechanisms, comparing the connectivity of autism patients versus control subjects. The
ABIDE (Autism Brain Imaging Data Exchange) dataset [8] gathers rest-fMRI from 1 112 subjects,
with 539 individuals suffering from autism spectrum disorder and 573 typical controls. We
use the preprocessed and curated data¹.
In a connectome analysis [31, 26], each subject is described by a GGM measuring functional
connectivity between a set of regions. We build a connectome from brain regions of interest based on
a multi-subject atlas² of 39 functional regions derived from resting-state fMRI [32] (see Fig. 4).
We are interested in determining edge differences between the autism group and the control group.
We use this data to show how our parametric hypothesis test can be used to determine differences in
brain networks. Since no ground truth exists for this problem, we use permutation testing to evaluate
the statistical procedures [25, 5]. Here we permute the two conditions (e.g. autism and control group)
to compute a p-value and compare it to our test statistics. This provides us with a finite sample strict
control on the error rate: a non-parametric validation of our parametric test.
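A sketch of this non-parametric validation, assuming stat_fn returns the matrix of test statistics for two data matrices:

import numpy as np

def permutation_pvalues(X1, X2, stat_fn, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.vstack([X1, X2])
    n1 = X1.shape[0]
    observed = np.abs(stat_fn(X1, X2))
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        idx = rng.permutation(pooled.shape[0])       # relabel the two groups
        perm = np.abs(stat_fn(pooled[idx[:n1]], pooled[idx[n1:]]))
        exceed += perm >= observed
    return (exceed + 1) / (n_perm + 1)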
For our experiments we take 2000 randomly chosen volumes from the control group subjects and
100 volumes from the autism group subjects. We perform permutation testing using the de-biased
lasso, de-biased multi-task fused lasso, and projected ridge regression. Parameters for the de-biased
fused lasso are chosen as in the previous section. For the de-biased lasso we use the exact settings for
$\lambda$ and constraints on $M$ provided in the experimental section of [16]. Projected ridge regression is
evaluated as in the previous section.
Figure 2 shows a comparison of the three parametric approaches versus their analogues obtained with
a permutation test. The chart plots the permutation p-values of each entry in the 38 × 39 B matrix
against the expected parametric p-value. For all the methods the points are above the line, indicating
the tests are not breaching the expected false positive rates. However, the de-biased lasso and ridge
projection are very conservative and lead to few detections. The de-biased multi-task fused lasso
yields far more detections on the same dataset, within the expected false positive rate or near it.
We now analyse the reproducibility of the results by repeatedly sampling 100 subsets of the data (with
the same proportions n1 = 2000 and n2 = 100), obtaining the matrix of test statistics, selecting edges
that fall below the 5% significance level. Figure 3 shows how often edges are selected multiple times
across subsamples. We report results with a threshold on uncorrected p-values as the lasso procedure
¹ http://preprocessed-connectomes-project.github.io/abide/
² https://team.inria.fr/parietal/research/spatial_patterns/spatial-patterns-in-resting-state/
Figure 2: Permutation testing comparing debiased fused lasso, debiased lasso, and projected ridge
regression on the ABIDE dataset. The chart plots the permutation p-values of each method on each
possible edge against the expected parametric p-value (one full-range panel and one zoomed view of
the p-value tail per method). The debiased lasso and ridge projection are very conservative and lead
to few detections. The fused lasso yields far more detections on the same dataset, almost all within
the expected false positive rate.
Figure 3: Reproducibility of results from sub-sampling using the uncorrected error rate, plotting the
fraction of connections occurring at least t times against the number of occurrences t (1 to 9) for the
lasso and fused lasso. The fused lasso is much more likely to detect edges and produce stable results.
Using corrected p-values no detections are made by the lasso (figure in supplementary material).

Figure 4: Outlines of the regions of the MSDL atlas.

Figure 5: Connectome of repeatedly picked up edges in 100 trials. We only show edges selected more
than once. Darker red indicates more frequent selection.
selects no edges with multiple comparison correction (supplementary materials give FDR-corrected
results for the de-biased fused multi-task lasso selection). Figure 5 shows a connectome of the edges
frequently selected by the de-biased fused multi-task lasso (with FDR correction).
5 Conclusions
We have shown how to characterize the distribution of differences of sparse estimators and how to use
this distribution for confidence intervals and p-values on GGM network differences. For this purpose,
we have introduced the de-biased multi-task fused lasso. We have demonstrated on synthetic and
real data that this approach can provide accurate p-values and a sizable increase of statistical power
compared to standard procedures. The settings match those of population analysis for functional
brain connectivity, where the gain in statistical power is sorely needed to tackle the low sample sizes [2].
Future work calls for expanding the analysis to cases with more than two groups, as well as considering
an ℓ1,2 penalty sometimes used at the group level [33]. Additionally, the squared loss objective
focuses on optimizing prediction and could be modified to further lower the sample complexity in
terms of parameter estimation.
Acknowledgements
This work is partially funded by Internal Funds KU Leuven, ERC Grant 259112, FP7-MC-CIG
334380, and DIGITEO 2013-0788D - SOPRANO, and ANR-11-BINF-0004 NiConnect.
References
[1] P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
[2] K. Button et al. Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14:365, 2013.
[3] F. X. Castellanos et al. Clinical applications of the functional connectome. Neuroimage, 80:527, 2013.
[4] X. Chen et al. Smoothing proximal gradient method for general structured sparse learning. In UAI, 2011.
[5] B. Da Mota et al. Randomized parcellation based inference. NeuroImage, 89:203–215, 2014.
[6] P. Danaher, P. Wang, and D. Witten. The joint graphical lasso for inverse covariance estimation across multiple classes. Journal of the Royal Statistical Society (B), 76(2):373–397, 2014.
[7] R. Dezeure, P. Bühlmann, L. Meier, and N. Meinshausen. High-dimensional inference: Confidence intervals, p-values and R-software hdi. Statist. Sci., 30(4):533–558, 2015.
[8] A. Di Martino et al. The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol. Psychiatry, 19:659, 2014.
[9] F. Fazayeli and A. Banerjee. Generalized direct change estimation in Ising model structure. In ICML, 2016.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[11] A. Ganguly and W. Polonik. Local neighborhood fusion in locally constant Gaussian graphical models. arXiv:1410.8766, 2014.
[12] M. G. G'Sell, J. Taylor, and R. Tibshirani. Adaptive testing for the graphical lasso. arXiv:1307.4765, 2013.
[13] C. Honey, O. Sporns, L. Cammoun, X. Gigandet, et al. Predicting human resting-state functional connectivity from structural connectivity. Proc. Nat. Acad. Sciences, 106:2035, 2009.
[14] J. Honorio and D. Samaras. Multi-task learning of Gaussian graphical models. In ICML, 2010.
[15] J. Janková and S. van de Geer. Confidence intervals for high-dimensional inverse covariance estimation. Electron. J. Statist., 9(1):1205–1229, 2015.
[16] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. The Journal of Machine Learning Research, 15(1):2869–2909, 2014.
[17] C. Kelly, B. B. Biswal, R. C. Craddock, F. X. Castellanos, and M. P. Milham. Characterizing variation in the functional connectome: Promise and pitfalls. Trends in Cog. Sci., 16:181, 2012.
[18] M. A. Lindquist et al. The statistical analysis of fMRI data. Stat. Sci., 23(4):439–464, 2008.
[19] R. Lockhart et al. A significance test for the lasso. Ann. Stat., 42:413, 2014.
[20] N. T. Markov, M. Ercsey-Ravasz, D. C. Van Essen, K. Knoblauch, Z. Toroczkai, and H. Kennedy. Cortical high-density counterstream architectures. Science, 342(6158):1238406, 2013.
[21] G. Marsaglia. Conditional means and covariances of normal variables with singular covariance matrix. Journal of the American Statistical Association, 59(308):1203–1204, 1964.
[22] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Ann. Stat., pages 1436–1462, 2006.
[23] K. Mohan et al. Structured learning of Gaussian graphical models. In NIPS, pages 620–628, 2012.
[24] M. Narayan and G. I. Allen. Mixed effects models to find differences in multi-subject functional connectivity. bioRxiv:027516, 2015.
[25] T. E. Nichols and A. P. Holmes. Nonparametric permutation tests for functional neuroimaging: A primer with examples. Human Brain Mapping, 15(1):1–25, 2002.
[26] J. Richiardi, H. Eryilmaz, S. Schwartz, P. Vuilleumier, and D. Van De Ville. Decoding brain states from fMRI connectivity graphs. NeuroImage, 56:616–626, 2011.
[27] W. R. Shirer, S. Ryali, E. Rykhlevskaia, V. Menon, and M. D. Greicius. Decoding subject-driven cognitive states with whole-brain connectivity patterns. Cerebral Cortex, 22(1):158–165, 2012.
[28] S. M. Smith et al. Network modelling methods for fMRI. NeuroImage, 54:875, 2011.
[29] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B, pages 267–288, 1996.
[30] S. Van de Geer, P. Bühlmann, Y. Ritov, and R. Dezeure. On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Stat., 42(3):1166–1202, 2014.
[31] G. Varoquaux and R. C. Craddock. Learning and comparing functional connectomes across subjects. NeuroImage, 80:405–415, 2013.
[32] G. Varoquaux, A. Gramfort, F. Pedregosa, V. Michel, and B. Thirion. Multi-subject dictionary learning to segment an atlas of brain spontaneous activity. In IPMI, 2011.
[33] G. Varoquaux, A. Gramfort, J.-B. Poline, and B. Thirion. Brain covariance selection: Better individual functional connectivity models using population prior. In NIPS, 2010.
[34] L. Waldorp. Testing for graph differences using the desparsified lasso in high-dimensional data. Statistics Survey, 2014.
[35] S. D. Zhao et al. Direct estimation of differential networks. Biometrika, 101(2):253–268, 2014.
Sequential Object Localization
Zequn Jie1 , Xiaodan Liang2 , Jiashi Feng1 , Xiaojie Jin1 , Wen Feng Lu1 , Shuicheng Yan1
1
National University of Singapore, Singapore
2
Carnegie Mellon University, USA
Abstract
Existing object proposal algorithms usually search for possible object regions over
multiple locations and scales separately, which ignore the interdependency among
different objects and deviate from the human perception procedure. To incorporate
global interdependency between objects into object localization, we propose an effective Tree-structured Reinforcement Learning (Tree-RL) approach to sequentially
search for objects by fully exploiting both the current observation and historical
search paths. The Tree-RL approach learns multiple searching policies through
maximizing the long-term reward that reflects localization accuracies over all the
objects. Starting with taking the entire image as a proposal, the Tree-RL approach
allows the agent to sequentially discover multiple objects via a tree-structured
traversing scheme. Allowing multiple near-optimal policies, Tree-RL offers more
diversity in search paths and is able to find multiple objects with a single feedforward pass. Therefore, Tree-RL can better cover different objects with various
scales, which is quite appealing in the context of object proposals. Experiments on
PASCAL VOC 2007 and 2012 validate the effectiveness of Tree-RL, which can
achieve recalls comparable to current object proposal algorithms with far fewer
candidate windows.
1 Introduction
Modern state-of-the-art object detection systems [1, 2] usually adopt a two-step pipeline: first extract
a set of class-independent object proposals and then classify these object proposals with a
pre-trained classifier. Existing object proposal algorithms usually search for possible object regions
over dense locations and scales separately [3, 4, 5]. However, the critical correlation cues among
different proposals (e.g., relative spatial layouts or semantic correlations) are often ignored. This
in fact deviates from the human perception process: as claimed in [6], humans do not search
for objects within each local image patch separately, but start with perceiving the whole scene and
successively explore a small number of regions of interest via sequential attention patterns. Inspired
by this observation, extracting one object proposal should incorporate the global dependencies of
proposals by jointly considering the cues from previously predicted proposals and future possible
proposals.
In this paper, in order to fully exploit global interdependency among objects, we propose a novel
Tree-structured Reinforcement Learning (Tree-RL) approach that learns to localize multiple objects
sequentially based on both the current observation and historical search paths. Starting from the
entire image, the Tree-RL approach sequentially acts on the current search window either to refine
the object location prediction or discover new objects by following a learned policy. In particular,
the localization agent is trained by deep RL to learn the policy that maximizes a long-term reward
for localizing all the objects, providing better global reasoning. For better training the agent, we
Figure 1: Illustration of Tree-RL. Starting from the whole image, the agent recursively selects the
best actions from both action groups to obtain two next windows for each window. Red and orange
solid windows are obtained by taking scaling and local translation actions, respectively. For each
state, green dashed windows are the initial windows before taking actions, which are the result
windows from the last level.
propose a novel reward stimulation, quantifying the localization accuracy improvements, that balances
the exploration of uncovered new objects against the refinement of the current one.
The Tree-RL adopts a tree-structured search scheme that enables the agent to more accurately find
objects with large variation in scales. The tree search scheme consists of two branches of pre-defined
actions for each state, one for locally translating the current window and the other one for scaling the
window to a smaller one. Starting from the whole image, the agent recursively selects the best action
from each of the two branches according to the current observation (see Fig. 1). The proposed tree
search scheme enables the agent to learn multiple near-optimal policies in searching multiple objects.
By providing a set of diverse near-optimal policies, Tree-RL can better cover objects in a wide range
of scales and locations.
Extensive experiments on PASCAL VOC 2007 and 2012 [7] demonstrate that the proposed model
can achieve a similar recall rate as the state-of-the-art object proposal algorithm RPN [5] yet using a
significantly smaller number of candidate windows. Moreover, the proposed approach also provides
more accurate localizations than RPN. Combined with the Fast R-CNN detector [2], the proposed
approach also achieves higher detection mAP than RPN.
2 Related Work
Our work is related to works which utilize object localization strategies other than exhaustive sliding
window search in object detection. Existing works trying to reduce the number of windows to be
evaluated in the subsequent classification step can be roughly categorized into two types, i.e., object
proposal algorithms and active object search with visual attention.
Early object proposal algorithms typically rely on low-level image cues, e.g., edge, gradient and
saliency [3, 4, 8]. For example, Selective Search [9] hierarchically merges the most similar segments
to form proposals based on several low-level cues including color and texture; Edge Boxes [4] scores
a set of densely distributed windows based on edge strengths fully inside the window and outputs the
high scored ones as proposals. Recently, RPN [5] utilizes a Fully Convolutional Network (FCN) [10]
to densely generate the proposals in each local patch based on several pre-defined "anchors" in
the patch, and achieves state-of-the-art performance in object recall rate. Nevertheless, object
proposal algorithms assume that the proposals are independent and usually perform window-based
classification on a set of reduced windows individually, which may still be wasteful for images
containing only a few objects.
Another type of works attempts [11, 12, 13, 14] to reduce the number of windows with an active
object detection strategy. Lampert et al. [15] proposed a branch-and-bound approach to find the
highest scored windows while only evaluating a few locations. Alexe et al. [11] proposed a context
driven active object searching method, which involves a nearest-neighbor search over all the training
Figure 2: Illustration of the five scaling actions and eight local translation actions. Each yellow
window with dashed lines represents the next window after taking the corresponding action.
images. Gonzalez-Garcia et al. [12] proposed an active search scheme to sequentially evaluate
selective search object proposals based on spatial context information.
Visual attention models are also related to our work. These models are often leveraged to facilitate
decisions by gathering information from previous steps in sequential decision-making vision
tasks. Xu et al. [16] proposed an attention model embedded in recurrent neural networks (RNN)
to generate captions for images by focusing on different regions in the sequential word prediction
process. Mnih et al. [17] and Ba et al. [18] also relied on RNNs to gradually refine the focus regions
to better recognize characters.
Perhaps [19] and [20] are the closest works to ours. [19] learned an optimal policy to localize a single
object through deep Q-learning. To handle multiple-object cases, it runs the whole process starting
from the whole image multiple times and uses an inhibition-of-return mechanism to manually mark
the objects already found. [20] proposed a top-down search strategy to recursively divide a window
into sub-windows. Then, similar to RPN, all the visited windows serve as "anchors" to regress the
locations of object bounding boxes. Compared to them, our model can localize multiple objects in a
single run starting from the whole image. The agent learns to balance the exploration of uncovered
new objects and the refinement of covered ones with deep Q-learning. Moreover, our top-down tree
search does not produce ?anchors? to regress the object locations, but provides multiple near-optimal
search paths and thus requires less computation.
3 Tree-Structured Reinforcement Learning for Object Localization

3.1 Multi-Object Localization as a Markov Decision Process
The Tree-RL is based on a Markov decision process (MDP), which is well suited to modeling a
discrete-time sequential decision making process. The localization agent sequentially transforms
image windows within the whole image by performing one of the pre-defined actions. The agent aims
to maximize the total discounted reward which reflects the localization accuracy of all the objects
during the whole running episode. The design of the reward function enables the agent to consider
the trade-off between further refinement of the covered objects and searching for uncovered new
objects. The actions, state and reward of our proposed MDP model are detailed as follows.
Actions: The available actions of the agent consist of two groups, one for scaling the current
window to a sub-window, and the other one for translating the current window locally. Specifically,
the scaling group contains five actions, each corresponding to a certain sub-window with a size
0.55 times that of the current window (see Fig. 2). The local translation group is composed of eight
actions, each changing the current window in one of the following ways: moving horizontally to the
left/right, moving vertically up/down, becoming shorter/longer horizontally, and becoming
shorter/longer vertically, as shown in Fig. 2, similar to [19]. Each local translation action
moves the window by 0.25 times the current window size. The next state is then deterministically
obtained after taking the last action. The scaling actions are designed to facilitate the search of objects
in various scales, which cooperate well with the later discussed tree search scheme in localizing
objects in a wide range of scales. The translation actions aim to perform successive changes of
visual focus, playing an important role in both refining the current attended object and searching for
uncovered new objects.
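The 13 actions are deterministic window transforms, so they can be written down directly; the sketch below assumes (x1, y1, x2, y2) window coordinates and that the five sub-windows sit at the four corners and the center, which is our reading of Fig. 2.

def apply_action(w, a):
    x1, y1, x2, y2 = w
    W, H = x2 - x1, y2 - y1
    s, dx, dy = 0.55, 0.25 * W, 0.25 * H
    if a < 5:   # scaling actions: corner or center sub-window of relative size s
        cx = {0: x1, 1: x2 - s * W, 2: x1, 3: x2 - s * W, 4: x1 + (1 - s) / 2 * W}[a]
        cy = {0: y1, 1: y1, 2: y2 - s * H, 3: y2 - s * H, 4: y1 + (1 - s) / 2 * H}[a]
        return (cx, cy, cx + s * W, cy + s * H)
    moves = {5: (-dx, 0, -dx, 0), 6: (dx, 0, dx, 0),    # move left / right
             7: (0, -dy, 0, -dy), 8: (0, dy, 0, dy),    # move up / down
             9: (dx, 0, -dx, 0), 10: (-dx, 0, dx, 0),   # horizontally shorter / longer
             11: (0, dy, 0, -dy), 12: (0, -dy, 0, dy)}  # vertically shorter / longer
    mx1, my1, mx2, my2 = moves[a]
    return (x1 + mx1, y1 + my1, x2 + mx2, y2 + my2)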
States: At each step, the state of the MDP is the concatenation of three components: the feature vector
of the current window, the feature vector of the whole image and the history of taken actions. The
features of both the current window and the whole image are extracted using a VGG-16 [21]
CNN model pre-trained on ImageNet. We use the feature vector of layer "fc6" in our problem. To
accelerate the feature extraction, all the feature vectors are computed on top of pre-computed feature
maps of the layer "conv5_3", using an RoI pooling operation to obtain a fixed-length feature
representation of the specific windows, which shares the spirit of Fast R-CNN. It is worth mentioning
that the global feature here not only provides context cues to facilitate the refinement of the currently
attended object, but also allows the agent to be aware of the existence of other uncovered new objects
and thus make a trade-off between further refining the attended object and exploring the uncovered
ones. The history of the taken actions is a binary vector that tells which actions have been taken in
the past. Therefore, it implies the search paths that have already been gone through and the objects
already attended by the agent. Each action is represented by a 13-d binary vector where all values are
zeros except for the one corresponding to the taken action. 50 past actions are encoded in the state to
save a full memory of the paths from the start.
Rewards: The reward function $r(s, a)$ reflects the localization accuracy improvements of all the
objects by taking the action $a$ under the state $s$. We adopt the simple yet indicative localization quality
measurement, Intersection-over-Union (IoU), between the current window and the ground-truth object
bounding boxes. Given the current window $w$ and a ground-truth object bounding box $g$, the IoU between
$w$ and $g$ is defined as $\text{IoU}(w, g) \triangleq \text{area}(w \cap g)/\text{area}(w \cup g)$. Assuming that the agent moves from
state $s$ to state $s'$ after taking the action $a$, each state $s$ has an associated window $w$, and there are $n$
ground-truth objects $g_1 \ldots g_n$, then the reward $r(s, a)$ is defined as follows:
$$r(s, a) = \max_{1 \leq i \leq n} \text{sign}(\text{IoU}(w', g_i) - \text{IoU}(w, g_i)). \qquad (1)$$
This reward function returns +1 or −1. Basically, if any ground-truth object bounding box has a
higher IoU with the next window than with the current one, the reward of the action moving from the
current window to the next one is +1, and −1 otherwise. Such binary rewards reflect more clearly
which actions can drive the window towards the ground-truths and thus facilitate the agent's learning.
This reward function encourages the agent to localize any objects freely, without any limitation
or guidance on which object should be localized at that step. Such a free localization strategy is
especially important in a multi-object localization system for covering multiple objects by running
only a single episode starting from the whole image.
Another key reward stimulation of +5 is given to those actions which cover any ground-truth objects
with an IoU greater than 0.5 for the first time. For ease of explanation, we define $f_{i,t}$ as the hit
flag of the ground-truth object $g_i$ at the $t$-th step, which indicates whether the maximal IoU between
$g_i$ and all the previously attended windows $\{w_j\}_{j=1}^t$ is greater than 0.5, and assign +1 to $f_{i,t}$ if
$\max_{1 \leq j \leq t} \text{IoU}(w_j, g_i)$ is greater than 0.5 and $-1$ otherwise. Then supposing the action $a$ is taken at
the $t$-th step under state $s$, the reward function integrating the first-time hit reward can be written as
follows:
$$r(s, a) = \begin{cases} +5, & \text{if } \max_{1 \leq i \leq n}(f_{i,t+1} - f_{i,t}) > 0 \\ \max_{1 \leq i \leq n} \text{sign}(\text{IoU}(w', g_i) - \text{IoU}(w, g_i)), & \text{otherwise.} \end{cases} \qquad (2)$$
The high reward given to the actions which hit the objects with an IoU > 0.5 for the first time avoids
the agent being trapped in the endless refinement of a single object and promotes the search for
uncovered new objects.
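Rewards (1)-(2) translate directly into code; the hit flags are carried along as state, as in the following sketch:

import numpy as np

def iou(w, g):
    ix = max(0.0, min(w[2], g[2]) - max(w[0], g[0]))
    iy = max(0.0, min(w[3], g[3]) - max(w[1], g[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(w) + area(g) - inter)

def reward(w, w_next, gts, hit_flags):
    # hit_flags[i] is True once gts[i] has ever been covered with IoU > 0.5
    new_flags = [h or iou(w_next, g) > 0.5 for h, g in zip(hit_flags, gts)]
    if any(nf and not h for nf, h in zip(new_flags, hit_flags)):
        return 5.0, new_flags                    # first-time hit bonus, Eq. (2)
    r = max(np.sign(iou(w_next, g) - iou(w, g)) for g in gts)  # Eq. (1)
    return float(r), new_flags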
3.2 Tree-Structured Search
The Tree-RL relies on a tree-structured search strategy to better handle objects in a wide range of
scales. For each window, the action with the highest predicted value is selected from the scaling
action group and from the local translation action group, respectively. The two best actions are both
taken to obtain two next windows: one is a sub-window of the current one and the other is a nearby
window to the current one after local translation. Such bifurcation is performed recursively by each
window starting from the whole image in a top-down fashion, as illustrated in Fig. 3. With tree
search, the agent is enforced to take both scaling action and local translation action simultaneously at
Figure 3: Illustration of the top-down tree search. Starting from the whole image, each window
recursively takes the best actions from both action groups (levels 1 to 4 shown). Solid arrows and
dashed arrows represent scaling actions and local translation actions, respectively.

Figure 4: Illustration of our Q-network. The regional feature is computed on top of the pre-computed
"conv5_3" feature maps extracted by the VGG-16 pre-trained model, using an RoI pooling layer. The
4096-d RoI feature is concatenated with the 4096-d whole-image feature and the 650-d history of past
actions, and fed into an MLP with three 1024-d layers. The MLP predicts the estimated values of the
13 actions.
each state, and thus travels along multiple near-optimal search paths instead of a single optimal path.
This is crucial for improving the localization accuracy for objects of different scales, because only
the scaling actions significantly change the scale of the attended window, while the local translation
actions keep the scale almost the same as before. Without the enforced bifurcation, there is no
guarantee that scaling actions would be taken often: the agent may tend to go after large objects,
which are easier to cover with an IoU above 0.5 than small objects, which require scaling the
window down to find.
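The following sketch captures the bifurcating search described above; `best_actions` (returning the best action of each group for a window, as predicted by the Q-network) and `apply_action` (transforming a window by an action) are assumed callbacks, not code from the paper.

```python
def tree_search(image_window, best_actions, apply_action, num_levels):
    """Top-down tree search: every window takes the best action from each of
    the two groups, so level l holds 2**(l-1) windows and num_levels levels
    yield 2**num_levels - 1 proposals (e.g. 5 levels -> 31 windows)."""
    proposals, frontier = [], [image_window]
    for _ in range(num_levels):
        proposals.extend(frontier)
        frontier = [apply_action(w, a)
                    for w in frontier
                    for a in best_actions(w)]  # (best scaling, best translation)
    return proposals
```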
3.3 Deep Q-learning
The optimal policy of maximizing the sum of the discounted rewards of running an episode starting
from the whole image is learned with reinforcement learning. However, due to the high-dimensional
continuous image input data and the model-free environment, we resort to the Q-learning algorithm
combined with the function approximator technique to learn the optimal value for each state-action
pair which generalizes well to unseen inputs. Specifically, we use the deep Q-network proposed
by [22, 23] to estimate the value for each state-action pair using a deep neural network. The detailed
architecture of our Q-network is illustrated in Fig. 4. Please note that similar to [23], we also use the
pre-trained CNN as the regional feature extractor instead of training the whole hierarchy of CNN,
considering the good generalization of the CNN trained on ImageNet [24].
During training, the agent runs sequential episodes which are paths from the root of the tree to its
leaves. More specifically, starting from the whole image, the agent takes one action from the whole
action set at each step to obtain the next state. The agent's behavior during training is ε-greedy.
Specifically, the agent selects a random action from the whole action set with probability ε, and
selects a random action from the two best actions in the two action groups (i.e., the scaling group and
the local translation group) with probability 1 − ε, which differs from the usual exploitation behavior
in which the single best action with the highest estimated value is taken. Such exploitation is more
consistent with the proposed tree search scheme, which requires the agent to take the best actions
from both action groups. We also incorporate a replay memory following [23] to store the experiences
of past episodes, which allows one transition to be used in multiple model updates and breaks the
short-time strong correlations between training samples. Each time a Q-learning update is applied, a
mini-batch randomly sampled from the replay memory is used as the training samples. The update for the
network weights θ_i at the i-th iteration, given a transition sample (s, a, r, s′), is as follows:

θ_{i+1} = θ_i + α ( r + γ max_{a′} Q(s′, a′; θ_i) − Q(s, a; θ_i) ) ∇_{θ_i} Q(s, a; θ_i),   (3)

where a′ represents the actions that can be taken at state s′, α is the learning rate and γ is the discount
factor.
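A compact sketch of the update in Eq. (3) applied to a replay minibatch; `q` and `grad_q` are assumed handles for evaluating the Q-network and its gradient with respect to the weights (supplied in practice by backpropagation through the network of Fig. 4).

```python
def q_learning_step(theta, batch, q, grad_q, alpha=1e-3, gamma=0.9):
    """One pass of the Eq. (3) update over a minibatch of transitions.
    Each transition is (s, a, r, s_next, next_actions)."""
    for s, a, r, s_next, next_actions in batch:
        target = r + gamma * max(q(s_next, a2, theta) for a2 in next_actions)
        td_error = target - q(s, a, theta)          # r + gamma*max Q' - Q
        theta = theta + alpha * td_error * grad_q(s, a, theta)
    return theta

# The minibatch itself would be drawn uniformly from the replay memory,
# e.g. batch = random.sample(replay_memory, 64).
```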
3.4 Implementation Details
We train a deep Q-network on the VOC 2007+2012 trainval set [7] for 25 epochs. The total number of
training images is around 16,000. Each epoch ends after performing an episode in each training image.
Table 1: Recall rates (in %) of single optimal search path RL with different numbers of search
steps and under different IoU thresholds on the VOC 07 testing set. We only report 50 steps instead
of 63 steps, as the maximal number of steps is 50.

# steps | large/small | IoU=0.5 | IoU=0.6 | IoU=0.7
31      | large       | 62.2    | 53.1    | 40.2
31      | small       | 18.9    | 15.6    | 11.2
31      | all         | 53.8    | 45.8    | 34.5
50      | large       | 62.3    | 53.2    | 40.4
50      | small       | 19.0    | 15.8    | 11.3
50      | all         | 53.9    | 45.9    | 34.8

Table 2: Recall rates (in %) of Tree-RL with different numbers of search steps and under different
IoU thresholds on the VOC 07 testing set. 31 and 63 steps are obtained by setting the number of
levels in Tree-RL to 5 and 6, respectively.

# steps | large/small | IoU=0.5 | IoU=0.6 | IoU=0.7
31      | large       | 78.9    | 69.8    | 53.3
31      | small       | 23.2    | 12.5    | 4.5
31      | all         | 68.1    | 58.7    | 43.8
63      | large       | 83.3    | 76.3    | 61.9
63      | small       | 39.5    | 28.9    | 15.1
63      | all         | 74.8    | 67.0    | 52.8
During ε-greedy training, ε is annealed linearly from 1 to 0.1 over the first 10 epochs; ε is then fixed
at 0.1 for the last 15 epochs. The discount factor γ is set to 0.9. We run each episode with at most
50 steps during training. During testing, using the tree search, one can set the number of levels of the
search tree to obtain the desired number of proposals. The replay memory size is set to 800,000,
which holds about 1 epoch of transitions. The mini-batch size in training is set to 64. The
implementations are based on the publicly available Torch7 [25] platform, on a single NVIDIA
GeForce Titan X GPU with 12 GB memory.
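For reference, the training hyperparameters stated in this subsection collected into one illustrative Python config; the names are ours, not identifiers from a released codebase.

```python
config = dict(
    train_set="VOC 2007+2012 trainval",   # ~16,000 images
    num_epochs=25,
    epsilon_start=1.0,                    # annealed linearly ...
    epsilon_end=0.1,                      # ... over the first 10 epochs,
    anneal_epochs=10,                     # then fixed for the last 15
    gamma=0.9,                            # discount factor
    max_steps_per_episode=50,
    replay_memory_size=800_000,           # about 1 epoch of transitions
    minibatch_size=64,
)
```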
4 Experimental Results
We conduct comprehensive experiments on PASCAL VOC 2007 and 2012 testing sets of detection
benchmarks to evaluate the proposed method. The recall rate comparisons are conducted on VOC
2007 testing set because VOC 2012 does not release the ground-truth annotations publicly and can
only return a detection mAP (mean average precision) of the whole VOC 2012 testing set from the
online evaluation server.
Tree-RL vs Single Optimal Search Path RL: We first compare the performance in recall rate
between the proposed Tree-RL and a single optimal search path RL on PASCAL VOC 2007 testing
set. For the single optimal search path RL, it only selects the best action with the highest estimated
value by the deep Q-network to obtain one next window during testing, instead of taking two best
actions from the two action groups. As for the exploitation in the ε-greedy behavior during training,
the agent in the single optimal path RL always takes the action with the highest estimated value
in the whole action set with probability 1 − ε. Apart from the different search strategy in testing
and exploitation behavior during training, all the actions, state and reward settings are the same as
Tree-RL. Please note that for Tree-RL, we rank the proposals in the order of the tree depth levels. For
example, when setting the number of levels to 5, we have 1+2+4+8+16=31 proposals. The recall
rates of the single optimal search path RL and Tree-RL are shown in Table 1 and Table 2, respectively.
It is found that the single optimal search path RL achieves an acceptable recall with a small number
of search steps. This verifies the effectiveness of the proposed MDP model (including reward, state
and actions setting) in discovering multiple objects. It does not rely on running multiple episodes
starting from the whole image like [19] to find multiple objects. It is also observed that Tree-RL
outperforms the single optimal search path RL in almost all the evaluation scenarios, especially for
large objects.¹ The only case where Tree-RL is worse than the single optimal search path RL is the
recall of small objects within 31 steps at IoU thresholds 0.6 and 0.7. This may be because the agent
performs a breadth-first search from the whole image and successively narrows down to a small
region; the search tree is therefore still too shallow (i.e., 5 levels) to accurately cover all the small
objects using 31 windows. Moreover, we also find that the recalls of the single optimal search path RL
become stable after a few steps and hardly increase as the number of steps grows. In contrast, the
recalls of Tree-RL keep increasing as the levels of the search tree increase. Thanks to the multiple
diverse near-optimal search paths, a better coverage of the whole image in both locations and scales
is achieved by Tree-RL.
¹ Throughout the paper, large objects are defined as those containing more than 2,000 pixels. The rest are
small objects.
Figure 5: Recall comparisons between Tree-RL and other state-of-the-art methods (BING, EdgeBoxes,
Geodesic, RPN, SelectiveSearch) on the PASCAL VOC 2007 testing set. Panels (a)–(c) plot recall
against the IoU overlap threshold for 31, 255 and 1023 proposals per image; panels (e)–(g) plot
recall at IoU 0.5, recall at IoU 0.8 and average recall (0.5 < IoU < 1) against the number of proposals.
Recall Comparison to Other Object Proposal Algorithms: We then compare the recall rates of
the proposed Tree-RL and the following object proposal algorithms: BING [3], Edge Boxes [4],
Geodesic Object Proposal [26], Selective Search [9] and Region Proposal Network (RPN) [5] (VGG-16 network trained on VOC 07+12 trainval) on the VOC 2007 testing set. All the proposals of the other
methods are provided by [27]. Fig. 5 (a)-(c) show the recall when varying the IoU threshold within
the range [0.5,1] for different numbers of proposals. We set the number of levels in Tree-RL to 5, 8
and 10 respectively to obtain the desired numbers of proposals. Fig. 5 (e)-(g) demonstrate the recall
when changing the number of proposals for different IoU thresholds. It can be seen that Tree-RL
outperforms the other methods, including RPN, significantly with a small number of proposals (e.g., 31).
When increasing the number of proposals, the advantage of Tree-RL over the other methods becomes
smaller, especially at a low IoU threshold (e.g., 0.5). For high IoU thresholds (e.g., 0.8), Tree-RL still
performs the best among all the methods. Tree-RL also behaves well on the average recall between
IoU 0.5 and 1, which has been shown to correlate extremely well with detector performance [27].
Detection mAP Comparison to Faster R-CNN: We conduct experiments to evaluate the effect on
object detection of the proposals generated by the proposed Tree-RL. The two baseline methods
are RPN (VGG-16) + Fast R-CNN (ResNet-101) and Faster R-CNN (ResNet-101). The former
one trains a Fast R-CNN detector (ResNet-101 network) on the proposals generated by a VGG-16
based RPN to make fair comparisons with the proposed Tree-RL which is also based on VGG-16
network. The latter one, i.e. Faster-RCNN (ResNet-101), is a state-of-the-art detection framework
integrating both proposal generation and object detector in an end-to-end trainable system which is
based on ResNet-101 network. Our method, Tree-RL (VGG-16) + Fast R-CNN (ResNet-101) trains
a Fast R-CNN detector (ResNet-101 network) on the proposals generated by the VGG-16 based
Tree-RL. All the Fast R-CNN detectors are fine-tuned from the publicly released ResNet-101 model
pre-trained on ImageNet. The final average pooling layer and the 1000-d fc layer of ResNet-101 are
replaced by a new fc layer directly connecting the last convolution layer to the output (classification
and bounding box regression) during fine-tuning. For Faster-RCNN (ResNet-101), we directly use
the reported results in [28]. For the other two methods, we train and test the Fast R-CNN using the
top 255 proposals. Table 3 and Table 4 show the average precision of 20 categories and mAP on
PASCAL VOC 2007 and 2012 testing set, respectively. It can be seen that the proposed Tree-RL
combined with Fast R-CNN outperforms two baselines, especially the recent reported Faster R-CNN
(ResNet-101) on the detection mAP. Considering the fact that the proposed Tree-RL relies on only
VGG-16 network which is much shallower than ResNet-101 utilized by Faster R-CNN in proposal
generation, the proposed Tree-RL is able to generate high-quality object proposals which are effective
when used in object detection.
Table 3: Detection results comparison on the PASCAL VOC 2007 testing set.

method                                     | aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   | mAP
RPN (VGG-16) + Fast R-CNN (ResNet-101)     | 77.7 82.7 77.4 68.5 54.7   85.5 80.0 87.6 60.7  83.2 71.8  84.8 85.1  75.6  76.9   52.0  76.8  79.1 81.1  73.9 | 75.8
Faster R-CNN (ResNet-101) [28]             | 79.8 80.7 76.2 68.3 55.9   85.1 85.3 89.8 56.7  87.8 69.4  88.3 88.9  80.9  78.4   41.7  78.6  79.8 85.3  72.0 | 76.4
Tree-RL (VGG-16) + Fast R-CNN (ResNet-101) | 78.2 82.4 78.0 69.3 55.4   86.0 79.3 88.4 60.8  85.3 74.0  85.7 86.3  78.2  77.2   51.4  76.4  80.5 82.2  74.5 | 76.6
Table 4: Detection results comparison on the PASCAL VOC 2012 testing set.

method                                     | aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   | mAP
RPN (VGG-16) + Fast R-CNN (ResNet-101)     | 86.9 83.3 75.6 55.4 50.8   79.2 76.9 92.8 48.8  79.0 57.2  90.2 85.4  82.1  79.4   46.0  77.0  66.4 83.3  66.0 | 73.1
Faster R-CNN (ResNet-101) [28]             | 86.5 81.6 77.2 58.0 51.0   78.6 76.6 93.2 48.6  80.4 59.0  92.1 85.3  84.8  80.7   48.1  77.3  66.5 84.7  65.6 | 73.8
Tree-RL (VGG-16) + Fast R-CNN (ResNet-101) | 85.9 79.3 77.1 62.1 53.4   77.8 77.4 90.1 52.3  79.2 56.2  88.9 84.5  80.8  81.1   51.7  77.3  66.9 82.6  68.5 | 73.7
Visualizations: We show visualization examples of the proposals generated by Tree-RL in
Fig. 6. As can be seen, within only 15 proposals (the sum of levels 1 to 4), Tree-RL is able to
localize the majority of objects of large or middle sizes. This again validates the effectiveness of
Tree-RL in finding multiple objects with a small number of windows.
Figure 6: Examples of the proposals generated by Tree-RL. We only show the proposals of level 2 to
level 4. Green, yellow and red windows are generated by the 2nd, 3rd and 4th level respectively. The
1st level is the whole image.
5 Conclusions
In this paper, we proposed a novel Tree-structured Reinforcement Learning (Tree-RL) approach to
sequentially search for objects while taking the global interdependency between objects into
consideration. It follows a top-down tree search scheme that allows the agent to travel along multiple
near-optimal paths to discover multiple objects. The experiments on PASCAL VOC 2007 and 2012
validate the effectiveness of the proposed Tree-RL. Briefly, Tree-RL is able to achieve a comparable
recall to RPN with fewer proposals and has higher localization accuracy. Combined with the Fast
R-CNN detector, Tree-RL achieves a detection mAP comparable to the state-of-the-art detection
system Faster R-CNN (ResNet-101).
Acknowledgment
The work of Jiashi Feng was partially supported by National University of Singapore startup grant R263-000-C08-133 and Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112.
References
[1] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate
object detection and semantic segmentation. In CVPR, 2014.
[2] Ross Girshick. Fast r-cnn. In ICCV, 2015.
[3] Ming-Ming Cheng, Ziming Zhang, Wen-Yan Lin, and Philip Torr. Bing: Binarized normed gradients for
objectness estimation at 300fps. In CVPR, 2014.
[4] C. Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
[5] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection
with region proposal networks. In NIPS, 2015.
[6] Jiri Najemnik and Wilson S Geisler. Optimal eye movement strategies in visual search. Nature,
434(7031):387–391, 2005.
[7] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The
pascal visual object classes (voc) challenge. IJCV, 88(2):303–338, 2010.
[8] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. What is an object? In CVPR, 2010.
[9] Jasper RR Uijlings, Koen EA van de Sande, Theo Gevers, and Arnold WM Smeulders. Selective search
for object recognition. IJCV, 104(2):154–171, 2013.
[10] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[11] Bogdan Alexe, Nicolas Heess, Yee W Teh, and Vittorio Ferrari. Searching for objects driven by context. In
NIPS, 2012.
[12] Abel Gonzalez-Garcia, Alexander Vezhnevets, and Vittorio Ferrari. An active search strategy for efficient
object class detection. In CVPR, 2015.
[13] Stefan Mathe and Cristian Sminchisescu. Multiple instance reinforcement learning for efficient weaklysupervised detection in images. arXiv preprint arXiv:1412.0100, 2014.
[14] Stefan Mathe, Aleksis Pirinen, and Cristian Sminchisescu. Reinforcement learning for visual object
detection. In CVPR, 2016.
[15] Christoph H Lampert, Matthew B Blaschko, and Thomas Hofmann. Efficient subwindow search: A branch
and bound framework for object localization. TPAMI, 31(12):2129–2142, 2009.
[16] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua
Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint
arXiv:1502.03044, 2015.
[17] Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In NIPS, 2014.
[18] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention.
arXiv preprint arXiv:1412.7755, 2014.
[19] Juan C Caicedo and Svetlana Lazebnik. Active object localization with deep reinforcement learning. In
ICCV, 2015.
[20] Yongxi Lu, Tara Javidi, and Svetlana Lazebnik. Adaptive object detection using adjacency and zoom
prediction. arXiv preprint arXiv:1512.07711, 2015.
[21] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra,
and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602,
2013.
[23] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare,
Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through
deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[24] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical
image database. In CVPR, 2009.
[25] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for
machine learning. In NIPS Workshop, 2011.
[26] Philipp Krähenbühl and Vladlen Koltun. Geodesic object proposals. In ECCV, 2014.
[27] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? TPAMI,
38(4):814–830, 2016.
[28] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.
| 6532 |@word cnn:27 middle:1 exploitation:4 briefly:1 nd:1 everingham:1 shuicheng:1 attended:6 solid:2 recursively:5 initial:1 uncovered:7 score:1 contains:2 trainval:2 tuned:1 ours:1 past:4 existing:3 outperforms:3 current:21 yet:2 written:1 gpu:1 najemnik:1 john:1 ronan:1 hofmann:1 enables:3 designed:1 update:3 rpn:14 v:1 cue:5 fewer:2 selected:1 leaf:1 greedy:3 indicative:1 discovering:1 ith:1 short:1 provides:3 location:8 successive:1 philipp:1 zhang:2 five:2 wierstra:1 along:2 become:1 jiri:1 fps:1 koltun:1 consists:1 ijcv:2 inside:1 behavior:4 roughly:1 kiros:1 multi:2 inspired:1 voc:18 discounted:2 ming:2 salakhutdinov:1 window:55 considering:3 increasing:3 becomes:1 spain:1 discover:2 moreover:3 provided:1 maximizes:1 bike:2 blaschko:1 what:2 atari:1 ended:1 guarantee:1 binarized:1 act:1 classifier:1 hit:3 control:1 grant:2 kelvin:1 before:1 attend:1 local:16 vertically:1 path:21 becoming:2 bird:2 christoph:1 mentioning:1 ease:1 range:4 gone:1 c21:1 acknowledgment:1 testing:14 union:1 differs:1 procedure:1 evan:1 area:2 riedmiller:2 rnn:2 yan:1 significantly:3 pre:11 word:1 integrating:2 context:5 yee:1 bellemare:1 koen:1 map:11 vittorio:3 maximizing:2 annealed:1 layout:1 attention:7 starting:12 go:1 normed:1 williams:1 jimmy:2 searching:6 handle:2 variation:1 ferrari:3 hierarchy:2 caption:2 us:1 xiaodan:1 recognition:4 utilized:1 predicts:1 database:1 observed:1 role:1 preprint:7 aero:2 region:8 wj:2 sun:2 episode:8 trade:2 highest:5 movement:1 caicedo:1 xiaojie:1 environment:2 abel:1 schiele:1 reward:20 geodesic:4 trained:8 segment:1 serve:1 localization:19 max1:1 accelerate:1 various:2 represented:1 cat:2 train:6 fast:15 effective:3 zemel:1 tell:2 horse:2 startup:1 quite:1 encoded:1 larger:1 kai:1 cvpr:7 otherwise:3 ability:1 simonyan:1 gi:7 g1:1 unseen:1 jointly:1 validates:1 final:1 online:1 cristian:2 advantage:1 rr:1 tpami:2 propose:3 ment:1 maximal:3 achieve:3 validate:2 exploiting:1 darrell:2 produce:1 silver:2 object:111 resnet:19 bogdan:2 recurrent:2 andrew:2 nearest:1 strong:1 coverage:1 predicted:2 involves:1 implies:1 iou:35 exploration:2 human:4 translating:2 education:1 adjacency:1 assign:1 generalization:1 ryan:1 exploring:1 around:1 ground:8 roi:4 lawrence:1 alexe:3 matthew:1 achieves:4 adopt:2 early:1 released:1 ruslan:1 estimation:1 travel:2 sofa:2 currently:1 visited:1 ross:3 individually:1 reflects:3 stefan:2 clearly:1 always:1 aim:2 rusu:1 varying:1 wilson:1 deselaers:1 release:1 focus:2 refining:2 improvement:2 rank:1 indicates:1 contrast:1 baseline:2 entire:2 typically:1 a0:2 selective:4 selects:5 pixel:1 among:4 classification:3 pascal:10 art:6 spatial:2 orange:1 bifurcation:1 platform:1 aware:1 extraction:1 piotr:1 koray:4 manually:1 veness:1 represents:2 fcn:1 future:1 report:1 yoshua:1 richard:2 few:3 wen:2 modern:1 randomly:1 composed:1 simultaneously:1 national:2 densely:2 recognize:1 comprehensive:1 zoom:1 replaced:1 zequn:1 attempt:1 detection:22 interest:1 mlp:2 ostrovski:1 mnih:4 evaluation:2 joel:1 sheep:2 tj:1 accurate:2 endless:1 edge:6 experience:1 shorter:2 traversing:1 tree:73 conduct:2 divide:1 desired:2 guidance:1 mbike:2 girshick:3 instance:1 classify:1 modeling:1 gn:1 cover:4 localizing:2 jin1:1 jiashi:2 conducted:1 too:1 reported:2 dependency:1 combined:4 thanks:1 person:2 st:1 geisler:1 off:2 dong:1 connecting:1 again:1 reflect:1 successively:2 containing:2 leveraged:1 juan:1 worse:1 resort:1 return:3 li:4 volodymyr:4 diversity:1 de:1 ioannis:1 titan:1 jitendra:1 hosang:1 collobert:1 later:1 performed:1 root:1 break:1 red:2 start:2 
relied:1 wm:1 annotation:1 gevers:1 jia:2 smeulders:1 publicly:3 accuracy:6 convolutional:3 saliency:1 yellow:2 kavukcuoglu:4 accurately:2 basically:1 ren:2 lu:1 worth:1 drive:1 history:4 detector:7 farabet:1 trevor:2 geforce:1 regress:2 associated:1 sampled:1 recall:28 color:1 car:2 segmentation:2 ea:1 focusing:1 higher:3 zisserman:2 wei:1 evaluated:1 box:8 correlation:3 horizontal:1 christopher:1 acrf:1 quality:2 perhaps:1 mdp:4 usa:1 facilitate:4 effect:1 former:1 edgeboxes:2 semantic:3 illustrated:2 during:10 encourages:1 please:2 covering:1 trying:1 demonstrate:2 performs:2 reasoning:1 cooperate:1 image:37 lazebnik:2 consideration:1 novel:3 recently:1 fi:4 behaves:1 stimulation:2 jasper:1 rl:61 vezhnevets:1 discussed:1 he:2 mellon:1 measurement:1 tuning:1 rd:1 moving:3 stable:1 longer:2 inhibition:1 closest:1 recent:1 driven:2 apart:1 scenario:1 claimed:1 certain:1 store:1 nvidia:1 server:1 binary:3 sande:1 seen:3 ministry:1 greater:3 deng:1 freely:1 xiangyu:1 maximize:1 dashed:3 vgg16:1 sliding:1 multiple:24 interdependency:4 branch:4 full:1 faster:10 offer:1 long:3 lin:1 post:1 promotes:1 prediction:3 regression:1 vision:1 arxiv:14 iteration:1 represent:1 achieved:1 proposal:56 separately:3 fine:2 winn:1 jian:2 crucial:1 benenson:1 rest:1 regional:2 pooling:3 supposing:1 tend:1 pirinen:1 spirit:1 effectiveness:4 extracting:1 near:7 feedforward:1 bengio:1 architecture:1 cow:2 reduce:2 andreas:1 vgg:12 whether:1 gb:1 torch7:2 locating:1 karen:1 henb:1 shaoqing:2 hardly:1 action:71 matlab:1 deep:13 ignored:1 heess:2 covered:3 detailed:2 transforms:1 discount:2 locally:2 category:1 tth:2 reduced:1 generate:3 singapore:4 sign:2 trapped:1 estimated:4 per:3 diverse:2 carnegie:1 discrete:1 georg:1 group:12 key:1 nevertheless:1 threshold:12 localize:5 wasteful:1 changing:2 breadth:1 utilize:1 sum:2 enforced:1 run:4 svetlana:2 almost:2 throughout:1 patch:3 utilizes:1 gonzalez:1 decision:5 acceptable:1 scaling:17 comparable:3 bound:2 layer:8 ki:1 courville:1 cheng:1 refine:2 strength:1 alex:3 fei:2 scene:1 weaklysupervised:1 nearby:1 extremely:1 chair:2 performing:2 martin:2 structured:9 tv:2 according:1 vladlen:1 smaller:3 character:1 appealing:1 shallow:1 making:2 hl:1 gradually:1 iccv:2 gathering:1 pipeline:1 taken:9 tier:1 visualization:2 previously:1 bing:4 bus:2 mechanism:1 fed:1 antonoglou:1 end:2 available:2 operation:1 generalizes:1 doll:2 eight:2 hierarchical:1 save:1 batch:2 existence:1 thomas:2 top:8 running:4 exploit:1 concatenated:1 especially:4 feng:2 move:2 malik:1 already:3 strategy:8 usual:1 javidi:1 gradient:2 fidjeland:1 concatenation:1 majority:1 philip:1 w0:2 assuming:1 length:1 illustration:4 providing:2 balance:2 mini:2 conv5_3:3 ba:3 design:1 implementation:2 policy:8 perform:2 allowing:1 shallower:1 vertical:1 observation:4 convolution:1 markov:2 teh:1 benchmark:1 minh:1 daan:1 mathe:2 c08:1 david:2 pair:2 bottle:2 dog:2 extensive:1 imagenet:4 learned:3 merges:1 narrow:1 barcelona:1 nip:5 able:4 usually:4 perception:2 pattern:1 selectivesearch:2 challenge:1 green:2 including:3 memory:5 max:4 explanation:1 gool:1 critical:1 suitable:1 overlap:3 rely:2 boat:2 residual:1 scheme:8 eye:1 extract:1 deviate:2 epoch:5 discovery:1 relative:1 graf:3 embedded:1 fully:5 plant:2 generation:3 limitation:1 ziming:1 approximator:1 localized:1 shelhamer:1 rcnn:2 agent:28 consistent:1 s0:4 feng1:1 playing:2 share:1 translation:15 eccv:2 supported:1 last:4 free:2 theo:1 allow:1 arnold:1 wide:3 neighbor:1 taking:8 distributed:1 van:2 depth:1 evaluating:1 avoids:1 transition:3 rich:1 
adopts:1 subwindow:1 reinforcement:11 refinement:5 adaptive:1 historical:2 correlate:1 ignore:1 keep:2 global:6 sequentially:7 active:6 anchor:3 search:56 continuous:1 table:10 learn:3 fc6:1 nature:2 nicolas:2 improving:1 sminchisescu:2 cl:1 uijlings:1 zitnick:1 marc:1 yan1:1 dense:1 hierarchically:1 linearly:1 arrow:2 whole:25 bounding:5 scored:2 lampert:2 verifies:1 fair:1 categorized:1 xu:2 fig:8 fashion:1 andrei:1 precision:2 sub:4 deterministically:1 candidate:2 replay:3 extractor:1 learns:3 donahue:1 down:7 specific:1 liang2:1 consist:1 socher:1 workshop:1 sequential:6 kr:1 texture:1 easier:1 intersection:1 garcia:2 fc:2 explore:1 visual:9 horizontally:1 kaiming:2 partially:1 truth:8 relies:2 extracted:2 quantifying:1 towards:2 jeff:1 luc:1 change:2 objectness:1 specifically:4 perceiving:1 except:1 torr:1 flag:1 total:2 pas:1 experimental:1 aaron:1 tara:1 mark:2 latter:1 lu1:1 jonathan:1 alexander:1 incorporate:3 evaluate:3 trainable:1 |
6,117 | 6,533 | A Non-generative Framework and Convex
Relaxations for Unsupervised Learning
Elad Hazan
Princeton University
35 Olden Street 08540
[email protected].
Tengyu Ma
Princeton University
35 Olden Street, NJ 08540
[email protected].
Abstract
We give a novel formal theoretical framework for unsupervised learning with two
distinctive characteristics. First, it does not assume any generative model and is
based on a worst-case performance metric. Second, it is comparative, namely
performance is measured with respect to a given hypothesis class. This allows
to avoid known computational hardness results and improper algorithms based
on convex relaxations. We show how several families of unsupervised learning
models, which were previously only analyzed under probabilistic assumptions and
are otherwise provably intractable, can be efficiently learned in our framework by
convex optimization.
1 Introduction
Unsupervised learning is the task of learning structure from unlabelled examples. Informally, the
main goal of unsupervised learning is to extract structure from the data in a way that will enable
efficient learning from future labelled examples for potentially numerous independent tasks.
It is useful to recall the Probably Approximately Correct (PAC) learning theory for supervised learning [28], based on Vapnik's statistical learning theory [29]. In PAC learning, the learner can access
labelled examples from an unknown distribution. On the basis of these examples, the learner constructs a hypothesis that generalizes to unseen data. A concept is said to be learnable with respect to
a hypothesis class if there exists an (efficient) algorithm that outputs a generalizing hypothesis with
high probability after observing polynomially many examples in terms of the input representation.
The great achievements of PAC learning that made it successful are its generality and algorithmic
applicability: PAC learning does not restrict the input domain in any way, and thus allows very
general learning, without generative or distributional assumptions on the world. Another important
feature is the restriction to specific hypothesis classes, without which there are simple impossibility
results such as the ?no free lunch? theorem. This allows comparative and improper learning of
computationally-hard concepts.
The latter is a very important point which is often understated. Consider the example of sparse
regression, which is a canonical problem in high-dimensional statistics. Fitting the best sparse vector
to linear prediction is an NP-hard problem [20]. However, this does not prohibit improper learning,
since we can use an ℓ1 convex relaxation for the sparse vectors (famously known as LASSO [26]).
Unsupervised learning, on the other hand, while extremely applicative and well-studied, has not seen
such an inclusive theory. The most common approaches, such as restricted Boltzmann machines,
topic models, dictionary learning, principal component analysis and metric clustering, are based
almost entirely on generative assumptions about the world. This is a strong restriction which makes
it very hard to analyze such approaches in scenarios for which the assumptions do not hold. A
more discriminative approach is based on compression, such as the Minimum Description Length
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
criterion. This approach gives rise to provably intractable problems and doesn't allow improper
learning.
Main results. We start by proposing a rigorous framework for unsupervised learning which allows data-dependent, comparative learning without generative assumptions about the world. It is
general enough to encompass previous methods such as PCA, dictionary learning and topic models.
Our main contribution are optimization-based relaxations and efficient algorithms that are shown to
improperly probably learn previous models, specifically:
1. We consider the class of hypotheses known as dictionary learning. We give a more general
hypothesis class which encompasses and generalizes it according to our definitions. We
proceed to give novel polynomial-time algorithms for learning the broader class. These
algorithms are based on new techniques in sum-of-squares convex relaxations.
As far as we know, this is the first result for efficient improper learning of dictionaries without generative assumptions. Moreover, our result handles polynomially over-complete dictionaries, while previous works [4, 8] apply to at most constant-factor over-completeness.
2. We give efficient algorithms for learning a new hypothesis class which we call spectral
autoencoders. We show that this class generalizes, according to our definitions, the class of
PCA (principal component analysis) and its kernel extensions.
Structure of this paper. In the following section we give a non-generative, distribution-dependent definition for unsupervised learning which mirrors that of PAC learning for supervised learning. We
then proceed to an illustrative example and show how Principal Component Analysis can be formally learned in this setting. The same section also gives a much more general class of hypotheses
for unsupervised learning, which we call polynomial spectral decoding, and shows how it can be
efficiently learned in our framework using convex optimization. Finally, we get to our main contribution: a convex-optimization-based methodology for improperly learning a wide class of hypotheses,
including dictionary learning.
1.1 Previous work
The vast majority of work on unsupervised learning, both theoretical as well as applicative, focuses
on generative models. These include topic models [11], dictionary learning [13], Deep Boltzmann
Machines and deep belief networks [24] and many more. Many times these models entail nonconvex optimization problems that are provably NP-hard to solve in the worst-case.
A recent line of work in theoretical machine learning attempts to give efficient algorithms for these
models with provable guarantees. Such algorithms were given for topic models [5], dictionary
learning [6, 4], mixtures of gaussians and hidden Markov models [15, 3] and more. However, these
works retain, and at times even enhance, the probabilistic generative assumptions of the underlying
model. Perhaps the most widely used unsupervised learning methods are clustering algorithms such
as k-means, k-medians and principal component analysis (PCA), though these lack generalization
guarantees. An axiomatic approach to clustering was initiated by Kleinberg [17] and pursued further
in [9]. A discriminative generalization-based approach for clustering was undertaken in [7] within
the model of similarity-based clustering.
Another approach from the information theory literature studies with online lossless compression.
The relationship between compression and machine learning goes back to the Minimum Description
Length criterion [23]. More recent work in information theory gives online algorithms that attain
optimal compression, mostly for finite alphabets [1, 21]. For infinite alphabets, which are the main
object of study for unsupervised learning of signals such as images, there are known impossibility
results [16]. This connection to compression was recently further advanced, mostly in the context
of textual data [22].
In terms of lossy compression, Rate Distortion Theory (RDT) [10, 12] is intimately related to our
definitions, as a framework for finding lossy compression with minimal distortion (which would
correspond to reconstruction error in our terminology). Our learnability definition can be seen as
an extension of RDT that allows improper learning and generalization error bounds. Another learning framework derived from lossy compression is the information bottleneck criterion [27], and its
2
learning theoretic extensions [25]. The latter framework assumes an additional feedback signal, and
thus is not purely unsupervised.
The downside of the information-theoretic approaches is that worst-case competitive compression
is provably computationally hard under cryptographic assumptions. In contrast, our compression-based approach is based on learning a restriction to a specific hypothesis class, much like PAC-learning. This circumvents the impossibility results and allows for improper learning.
2 A formal framework for unsupervised learning
The basic constructs in an unsupervised learning setting are:
1. Instance domain X, such as images, text documents, etc. Target space, or range, Y. We
   usually think of X = R^d, Y = R^k with d ≫ k. (Alternatively, Y can be all sparse vectors
   in a larger space.)
2. An unknown, arbitrary distribution D on domain X.
3. A hypothesis class of decoding and encoding pairs,
   H ⊆ { (h, g) ∈ {X → Y} × {Y → X} },
   where h is the encoding hypothesis and g is the decoding hypothesis.
4. A loss function ℓ : H × X → R_{≥0} that measures the reconstruction error ℓ((g, h), x).
   For example, a natural choice is the ℓ2-loss ℓ((g, h), x) = ‖g(h(x)) − x‖_2^2. The rationale here is to learn structure without significantly compromising supervised learning for
   arbitrary future tasks. Near-perfect reconstruction is sufficient, as formally proved in Appendix 6.1. Without generative assumptions, it can be seen that near-perfect reconstruction
   is also necessary.
For convenience of notation, we use f as a shorthand for (h, g) ∈ H, a member of the hypothesis
class H. Denote the generalization ability of an unsupervised learning algorithm with respect to a
distribution D as

loss_D(f) = E_{x∼D}[ ℓ(f, x) ].
We can now define the main object of study: unsupervised learning with respect to a given hypothesis class. The definition is parameterized by real numbers: the first is the encoding length (measured
in bits) of the hypothesis class. The second is the bias, or additional error compared to the best
hypothesis. Both parameters are necessary to allow improper learning.
Definition 2.1. We say that instance D, X is (k, γ)-C-learnable with respect to hypothesis class H if
there exists an algorithm that for every δ, ε > 0, after seeing m(ε, δ) = poly(1/ε, log(1/δ), d) examples,
returns an encoding and decoding pair (h, g) (not necessarily from H) such that:
1. with probability at least 1 − δ, loss_D((h, g)) ≤ min_{(h,g)∈H} loss_D((h, g)) + ε + γ.
2. h(x) has an explicit representation with length at most k bits.
For convenience we typically encode into real numbers instead of bits. Real encoding can often
(though not in the worst case) be trivially transformed to be binary with a loss of logarithmic factor.
Following PAC learning theory, we can use uniform convergence to bound the generalization error
of the empirical risk minimizer (ERM). Define the empirical loss for a given sample S ∼ D^m as

loss_S(f) = (1/m) · Σ_{x∈S} ℓ(f, x).

Define the ERM hypothesis for a given sample S ∼ D^m as f̂_ERM = argmin_{f̂∈H} loss_S(f̂).
For a hypothesis class H, a loss function ℓ and a set of m samples S ∼ D^m, define the empirical
Rademacher complexity of H with respect to ℓ and S as¹

R_{S,ℓ}(H) = E_{σ∼{±1}^m} [ sup_{f∈H} (1/m) Σ_{x_i∈S} σ_i ℓ(f, x_i) ].

Let the Rademacher complexity of H with respect to distribution D and loss ℓ be R_m(H) =
E_{S∼D^m}[R_{S,ℓ}(H)]. When it is clear from the context, we will omit the subscript ℓ.
We can now state and apply standard generalization error results. The proof of the following theorem is
almost identical to [19, Theorem 3.1]. For completeness we provide a proof in Appendix 6.

Theorem 2.1. For any δ > 0, with probability 1 − δ, the generalization error of the ERM hypothesis
is bounded by:

loss_D(f̂_ERM) ≤ min_{f∈H} loss_D(f) + 6R_m(H) + √( 4 log(1/δ) / (2m) ).
An immediate corollary of the theorem is that as long as the Rademacher complexity of a hypothesis
class approaches zero as the number of examples goes to infinity, the class can be C-learned by an inefficient
algorithm that optimizes over the hypothesis class by enumeration and outputs a best hypothesis,
with encoding length k and bias γ = 0. Not surprisingly, such optimization is often intractable and
hence the main challenge is to design efficient algorithms. As we will see in later sections, we often
need to trade the encoding length and bias slightly for computational efficiency.
Notations: For every vector z ∈ R^{d1} ⊗ R^{d2}, we can view it as a matrix of dimension d1 × d2, which
is denoted as M(z). Therefore in this notation, M(u ⊗ v) = uv^⊤. Let v_max(·) : (R^d)^{⊗2} → R^d be
the function that computes the top right-singular vector of a vector in (R^d)^{⊗2} viewed as a matrix;
that is, for z ∈ (R^d)^{⊗2}, v_max(z) denotes the top right-singular vector of M(z). We also
overload the notation v_max for generalized eigenvectors of higher-order tensors: for T ∈ (R^d)^{⊗ℓ},
let v_max(T) = argmax_{‖x‖≤1} T(x, x, …, x), where T(·) denotes the multi-linear form defined by
the tensor T.
3 Spectral autoencoders: unsupervised learning of algebraic manifolds

3.1 Algebraic manifolds
The goal of the spectral autoencoder hypothesis class we define henceforth is to learn the representation of data that lies on a low-dimensional algebraic variety/manifold. The linear variety, or linear
manifold, defined by the roots of linear equations, is simply a linear subspace. If the data resides in
a linear subspace, or close enough to it, then PCA is effective at learning its succinct representation.

One extension of the linear manifolds is the set of roots of low-degree polynomial equations. Formally, let k, s be integers, let c_1, …, c_{d^s−k} ∈ R^{d^s} be a set of vectors in d^s dimensions, and
consider the algebraic variety

M = { x ∈ R^d : ∀i ∈ [d^s − k], ⟨c_i, x^{⊗s}⟩ = 0 }.

Observe that here each constraint ⟨c_i, x^{⊗s}⟩ is a degree-s polynomial over the variables x, and when
s = 1 the variety M becomes a linear subspace. Let a_1, …, a_k ∈ R^{d^s} be a basis of the subspace
orthogonal to all of c_1, …, c_{d^s−k}, and let A ∈ R^{k×d^s} contain the a_i as rows. Then we have that, given
x ∈ M, the encoding

y = A x^{⊗s}

pins down all the unknown information regarding x. In fact, for any x ∈ M, we have A^⊤ A x^{⊗s} =
x^{⊗s}, and therefore x is decodable from y. The argument can also be extended to the situation when
the data point is close to M (according to a metric, as we discuss later). The goal of the rest of the
subsections is to learn the encoding matrix A given data points residing close to M.

¹ Technically, this is the Rademacher complexity of the class of functions ℓ ∘ H. However, since ℓ is usually
fixed for a given problem, we emphasize in the definition the dependency on H.
Warm up: PCA and kernel PCA. In this section we illustrate our framework for agnostic unsupervised learning by showing how PCA and kernel PCA can be efficiently learned within our model.
The results of this sub-section are not new, and are given only for illustrative purposes. The class of hypotheses corresponding to PCA operates on domain X = R^d and range Y = R^k for some k < d via
linear operators. In kernel PCA, the encoding linear operator applies to the s-th tensor power x^{⊗s}
of the data. That is, the encoding and decoding are parameterized by a linear operator A ∈ R^{k×d^s}:

H^{pca}_{k,s} = { (h_A, g_A) : h_A(x) = A x^{⊗s}, g_A(y) = A^† y },

where A^† denotes the pseudo-inverse of A. The natural loss function here is the Euclidean norm,

ℓ((g, h), x) = ‖x^{⊗s} − g(h(x))‖^2 = ‖(I − A^† A) x^{⊗s}‖^2.

Theorem 3.1. For a fixed constant s ≥ 1, the class H^{pca}_{k,s} is efficiently C-learnable with encoding
length k and bias γ = 0.
The proof of the theorem follows from two simple components: (a) finding the ERM among H^{pca}_{k,s}
can be solved efficiently by taking the SVD of the covariance matrix of the (lifted) data points; (b) the
Rademacher complexity of the hypothesis class is bounded by O(d^s/m) for m examples. Thus, by
Theorem 2.1, the ERM minimizer generalizes. The full proof is deferred to Appendix A.
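A minimal sketch of component (a) for s = 2: lift each sample to x^{⊗2}, take the top-k right singular vectors of the lifted data matrix (equivalently, the top-k eigenvectors of the lifted covariance) as the rows of the encoder A, and decode with the pseudo-inverse. The assumption k ≤ min(m, d^2) is ours, so that the SVD truncation makes sense.

```python
import numpy as np

def kernel_pca_erm(X, k):
    """X: (m, d) array of samples. Returns the encoder A of H^{pca}_{k,2}."""
    lifted = np.stack([np.outer(x, x).ravel() for x in X])  # (m, d**2)
    _, _, Vt = np.linalg.svd(lifted, full_matrices=False)
    return Vt[:k]                       # rows span the top-k directions

def encode(A, x):                       # h_A(x) = A (x ⊗ x)
    return A @ np.outer(x, x).ravel()

def decode(A, y):                       # g_A(y) = A^† y, reconstructs x^{⊗2}
    return np.linalg.pinv(A) @ y
```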
3.2 Spectral Autoencoders
In this section we give a much broader set of hypotheses, encompassing PCA and kernel PCA, and
show how to learn them efficiently. Throughout this section we assume that the data is normalized to
Euclidean norm 1, and consider the following class of hypotheses, which naturally generalizes PCA:

Definition 3.1 (Spectral autoencoder). We define the class H^{sa}_{k,s} as the following set of all hypotheses (g, h):

H^{sa}_{k,s} = { (h, g) : h(x) = A x^{⊗s}, A ∈ R^{k×d^s}; g(y) = v_max(By), B ∈ R^{d^s×k} }.   (3.1)
We note that this notion is more general than kernel PCA: suppose some (g, h) ∈ H^{pca}_{k,s} has
reconstruction error ε, namely, A^† A x^{⊗s} is ε-close to x^{⊗s} in Euclidean norm. Then by the eigenvector
perturbation theorem, we have that v_max(A^† A x^{⊗s}) also reconstructs x with O(ε) error, and therefore there exists a spectral autoencoder hypothesis with O(ε) error as well. Vice versa, it is quite possible that for
every A the reconstruction A^† A x^{⊗s} is far away from x^{⊗s}, so that kernel PCA does not apply, but
with spectral decoding we can still reconstruct x from v_max(A^† A x^{⊗s}), since the top eigenvector of
A^† A x^{⊗s} is close to x.
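A sketch of the encoding/decoding pair of (3.1) for s = 2: decoding reshapes By into a d × d matrix and extracts its top eigenvector, which recovers x up to sign whenever M(By) is close to xx^⊤. Symmetrizing before the eigendecomposition is our implementation choice.

```python
import numpy as np

def sa_encode(A, x):
    """h(x) = A x^{⊗2} for an encoding matrix A of shape (k, d**2)."""
    return A @ np.outer(x, x).ravel()

def sa_decode(B, y, d):
    """g(y) = v_max(By): top eigenvector of the d x d matrix M(By)."""
    M = (B @ y).reshape(d, d)
    M = (M + M.T) / 2                    # symmetrize for a stable eigenbasis
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(np.abs(w))]    # eigenvector of the largest |eigenvalue|
```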
Here the key matter that distinguishes us from kernel PCA is in what metric x needs to be close to
the manifold so that it can be reconstructed. Using PCA, the requirement is that x is close to M
(which is a subspace) in Euclidean distance, and using kernel PCA, x^{⊗2} needs to be close in Euclidean
distance to the null space of the c_i's. However, Euclidean distances in the original space and lifted
space are typically meaningless for high-dimensional data, since any two data points are far away from
each other in Euclidean distance. The advantage of using spectral autoencoders is that in the
lifted space the geometry is measured by the spectral-norm distance, which is much smaller than the
Euclidean distance (with a potential gap of d^{1/2}). The key here is that though the dimension of the lifted space is
d^2, the objects of our interest form the set of rank-1 tensors of the form x^{⊗2}. Therefore, the spectral-norm
distance is a much more effective measure of closeness, since it exploits the underlying structure of
the lifted data points.
We note that spectral autoencoders relate to vanishing component analysis [18]. When the data is
close to an algebraic manifold, spectral autoencoders aim to find the (small number of) essential
non-vanishing components in a noise robust manner.
3.3 Learnability of polynomial spectral decoding
For simplicity we focus on the case s = 2. Ideally we would like to learn the best encoding-decoding scheme for any data distribution D, though there are technical difficulties in achieving such
a general result. A natural attempt would be to optimize the loss function f(A, B) = ‖g(h(x)) −
x‖_2 = ‖x − v_max(BA x^{⊗2})‖_2. Not surprisingly, the function f is not convex with respect to
A, B, and in fact it can even be non-continuous (if not ill-defined)!
Here we make a further realizability assumption: the data distribution D admits a reasonable
encoding and decoding pair with reasonable reconstruction error.

Definition 3.2. We say a data distribution D is (k, ε)-regularly spectrally decodable if there exist
A ∈ R^{k×d^2} and B ∈ R^{d^2×k} with ‖BA‖_op ≤ τ such that for x ∼ D, with probability 1, the
encoding y = A x^{⊗2} satisfies

M(By) = M(BA x^{⊗2}) = xx^⊤ + E,   (3.2)

where ‖E‖_op ≤ ε. Here τ ≥ 1 is treated as a fixed constant globally.
To interpret the definition, observe that if the data distribution D is (k, ε)-regularly spectrally decodable, then by equation (3.2) and Wedin's theorem (see, e.g., [30]) on the robustness of eigenvectors to
perturbation, M(By) has a top eigenvector² that is O(ε)-close to x itself. Therefore, Definition 3.2 is a
sufficient condition for the spectral decoding algorithm v_max(By) to return x approximately, though
it might not be necessary. Moreover, this condition partially addresses the non-continuity issue of
using the objective f(A, B) = ‖x − v_max(BA x^{⊗2})‖_2, while f(A, B) remains (highly) non-convex. We
resolve this issue by using a convex surrogate.
Our main result concerning the learnability of the aforementioned hypothesis class is:

Theorem 3.2. The hypothesis class H^{sa}_{k,2} is C-learnable, with encoding length O(τ^4 k^4/γ^4) and bias γ,
with respect to (k, ε)-regular distributions, in polynomial time.
Our approach towards finding an encoding and decoding matrix pair A, B is to optimize the objective

minimize f(R) = E[ ‖R x^{⊗2} − x^{⊗2}‖_op ]   s.t. ‖R‖_{S1} ≤ τk,   (3.3)

where ‖·‖_{S1} denotes the Schatten 1-norm. Suppose D is (k, ε)-regularly decodable, and suppose
h_A and g_B are the corresponding encoding and decoding functions. Then we see that R = BA
satisfies: R has rank at most k and f(R) ≤ ε. On the other hand, suppose one obtains some R
of rank k′ such that f(R) ≤ γ; then we can produce h_A and g_B with O(γ) reconstruction error simply by
choosing A ∈ R^{k′×d^2} and B ∈ R^{d^2×k′} such that R = BA.

We use (non-smooth) Frank-Wolfe to solve objective (3.3), which in particular returns a low-rank
solution. We defer the proof of Theorem 3.2 to Appendix A.1. With slightly stronger assumptions on the data distribution D, we can reduce the length of the code to O(k^2/ε^2) from O(k^4/ε^4).
See details in Appendix B.
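To see why solving (3.3) with Frank-Wolfe yields a low-rank R for free, here is a sketch of the classical Frank-Wolfe iteration over the Schatten-1 ball (the analysis in the paper uses a non-smooth variant; `subgrad` is an assumed subgradient oracle for f, and the step size is the standard 2/(t+2) schedule). The linear minimization oracle over the ball is rank one, so after T steps the iterate has rank at most T.

```python
import numpy as np

def frank_wolfe_schatten1(subgrad, dim, radius, T=100):
    """Minimize a convex f over { R : ||R||_S1 <= radius } given a
    subgradient oracle; each step adds a single rank-1 atom."""
    R = np.zeros((dim, dim))
    for t in range(1, T + 1):
        G = subgrad(R)
        U, _, Vt = np.linalg.svd(G)               # top singular pair of G
        S = -radius * np.outer(U[:, 0], Vt[0])    # LMO solution, rank 1
        eta = 2.0 / (t + 2)
        R = (1 - eta) * R + eta * S
    return R
```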
4 A family of optimization encodings and efficient dictionary learning
In this section we give efficient algorithms for learning a family of unsupervised learning models
commonly known as "dictionary learning". In contrast to previous approaches, we do not construct
an actual "dictionary", but rather improperly learn a comparable encoding via convex relaxations.
We consider a different family of codes which is motivated by matrix-based unsupervised learning
models such as topic models, dictionary learning and PCA. This family is described by a matrix
A ∈ R^{d×r} which has low complexity according to a certain norm ‖·‖_α, that is, ‖A‖_α ≤ c_α. We can
parametrize a family of hypotheses H according to these matrices, and define an encoding-decoding
pair according to

h_A(x) = argmin_{‖y‖_β ≤ k} (1/d) ‖x − Ay‖_1,   g_A(y) = Ay.

We choose the ℓ1 norm to measure the error mostly for convenience, though the choice can be quite flexible.
The different norms ‖·‖_α, ‖·‖_β over A and y give rise to different learning models that have been
considered before. For example, if these are Euclidean norms, then we get PCA. If ‖·‖_α is the max
column ℓ2 or ℓ∞ norm and ‖·‖_β is the ℓ0 norm, then this corresponds to dictionary learning (more
details in the next section).
The optimal hypothesis in terms of reconstruction error is given by

A* = argmin_{‖A‖_α ≤ c_α} E_{x∼D}[ (1/d) ‖x − g_A(h_A(x))‖_1 ] = argmin_{‖A‖_α ≤ c_α} E_{x∼D}[ min_{y∈R^r: ‖y‖_β ≤ k} (1/d) ‖x − Ay‖_1 ].

² Or the right singular vector when M(By) is not symmetric.
The loss function can be generalized to other norms, e.g., squared ℓ2 loss, without any essential
change in the analysis. Notice that this optimization objective derived from reconstruction error
is identical to the one used in the literature of dictionary learning. This can be seen as another
justification for the definition of unsupervised learning as minimizing reconstruction error subject to
compression constraints.

The optimization problem above is notoriously hard computationally, and a significant algorithmic
and heuristic literature has attempted to give efficient algorithms under various distributional assumptions (see [6, 4, 2] and the references therein). Our approach below circumvents this computational
hardness by convex relaxations that result in learning a different creature, albeit with comparable
compression and reconstruction objectives.
4.1 Improper dictionary learning: overview
We assume the max column ℓ∞ norm of A is at most 1 and the ℓ1 norm of y is assumed to be at
most k. This is a more general setting than the random dictionaries (up to a re-scaling) that previous
works [6, 4] studied.³ In this case, the magnitude of each entry of x is on the order of √k if y has
k random ±1 entries. We think of our target error per entry as much smaller than 1.⁴ We consider
the class H^{dict}_k parametrized by the dictionary matrix A ∈ R^{d×r}:

H^{dict}_k = { (h_A, g_A) : A ∈ R^{d×r}, ‖A‖_{ℓ1→ℓ∞} ≤ 1 },
where h_A(x) = argmin_{‖y‖_1 ≤ k} ‖x − Ay‖_1, g_A(y) = Ay.
Here we allow r to be larger than d, the case that is often called an over-complete dictionary. The
choice of the loss can be replaced by the ℓ2 loss (or any other Lipschitz loss) without any additional effort,
though for simplicity we stick to the ℓ1 loss. Define A* to be the best dictionary under the model
and ε* to be the optimal error:

A* = argmin_{‖A‖_{ℓ1→ℓ∞} ≤ 1} E_{x∼D}[ min_{y∈R^r: ‖y‖_1 ≤ k} ‖x − Ay‖_1 ],   (4.1)
ε* = E_{x∼D}[ (1/d) · ‖x − g_{A*}(h_{A*}(x))‖_1 ].
Algorithm 1: group encoding/decoding for improper dictionary learning
Inputs: N data points X ∈ R^{d×N} ∼ D^N; a convex set Q; a sampling probability ρ.
1. Group encoding: Compute
   Z = argmin_{C∈Q} |X − C|_1,   (4.2)
   and let Y = h(X) = P_ρ(Z), where P_ρ(B) is a random sampling of B in which each entry
   is kept with probability ρ.
2. Group decoding: Compute g(Y) = argmin_{C∈Q} |P_ρ(C) − Y|_1.
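A sketch of Algorithm 1's two steps with the convex programs abstracted away: `denoise` stands for the program (4.2) (a semidefinite program when Q = Q^{sos}_p, per Theorem 4.3 below), and the code itself is just a random subsample of the denoised entries.

```python
import numpy as np

def group_encode(X, denoise, rho, seed=0):
    """Group encoding: denoise X into Z in Q, then keep each entry of Z
    independently with probability rho. Returns the mask and sampled values."""
    rng = np.random.default_rng(seed)
    Z = denoise(X)                       # Z = argmin_{C in Q} |X - C|_1
    mask = rng.random(Z.shape) < rho
    return mask, np.where(mask, Z, 0.0)

# Group decoding solves argmin_{C in Q} |P_rho(C) - Y|_1 over the observed
# entries only, i.e. an l1-type matrix completion over the convex set Q.
```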
Theorem 4.1. For any γ > 0 and p ≥ 1, the hypothesis class H^{dict}_k is C-learnable with encoding length
Õ(k^2 r^{1/p}/γ^2), bias γ + O(ε*) and sample complexity d^{O(p)}, in time n^{O(p^2)}.

We note that here r can potentially be much larger than d, since by choosing a large constant p the
overhead caused by r can be made negligible. Since the average size of the entries is √k, we can
therefore make the bias γ smaller than the average entry size with code length roughly k.
The proof of Theorem 4.1 is deferred to the supplementary material. To demonstrate the key intuition and technique behind it, in the rest of the section we consider a simpler algorithm that achieves a weaker
goal: Algorithm 1 encodes multiple examples into codes with matching average encoding
length Õ(k^2 r^{1/p}/γ^2), and these examples can be decoded from the codes together with reconstruction error ε* + γ. Next, we outline the analysis of Algorithm 1, and we will show later that one can
reduce the problem of encoding a single example to the problem of encoding multiple examples.

³ The assumption can be relaxed to A having ℓ∞ norm at most √k and ℓ2-norm at most √d straightforwardly.
⁴ We are conservative in the scaling of the error here. Error much smaller than √k is already meaningful.
Here we overload the notation g_{A*}(h_{A*}(·)) so that g_{A*}(h_{A*}(X)) denotes the collection of all the
g_{A*}(h_{A*}(x_j)), where x_j is the j-th column of X. Algorithm 1 assumes that there exists a convex set
Q ⊆ R^{d×N} such that

{ g_{A*}(h_{A*}(X)) : X ∈ R^{d×N} } ⊆ { AY : ‖A‖_{ℓ1→ℓ∞} ≤ 1, ‖Y‖_{ℓ1→ℓ1} ≤ k } ⊆ Q.   (4.3)

That is, Q is a convex relaxation of the group of reconstructions allowed in the class H^{dict}_k. Algorithm 1 first uses convex programming to denoise the data X into a clean version Z, which belongs
to the set Q. If the set Q has low complexity, then a simple random sampling of Z ∈ Q serves as a
good encoding.
The following lemma shows that if Q has low complexity in terms of the sampling Rademacher width,
then Algorithm 1 gives a good group encoding and decoding scheme.

Lemma 4.2. Suppose the convex set Q ⊆ R^{d×N} satisfies condition (4.3). Then Algorithm 1 gives a group
encoding and decoding pair such that, with probability 1 − δ, the average reconstruction error is
bounded by ε* + O(√(SRW_m(Q))) + O(√(log(1/δ)/m)), where m = ρNd and SRW_m(·) is the
sampling Rademacher width (defined in the appendix), and the average encoding length is Õ(ρd).
Towards analyzing the algorithm, we will show that the difference between Z and X is comparable
to ε*, which is a direct consequence of the optimization over a large set Q that contains the optimal
reconstruction. Then we prove that the sampling procedure doesn't lose too much information given
that a denoised version of the data is already observed, and thus one can reconstruct Z from Y.

The novelty here is to use these two steps together to denoise and achieve a short encoding. The
typical bottleneck of applying convex relaxation to matrix-factorization-based problems (or any other
problem) is the difficulty of rounding. Here, instead of pursuing a rounding algorithm that outputs the
factors A and Y, we look for a convex relaxation that preserves the intrinsic complexity of the set,
which enables the trivial sampling encoding. It turns out that controlling the width/complexity of
the convex relaxation boils down to proving concentration inequalities with sum-of-squares (SoS)
proofs, which is conceptually easier than rounding.
Therefore, the remaining challenge is to design a convex set Q that simultaneously has the following
properties: (a) it is a convex relaxation in the sense of satisfying condition (4.3); (b) it admits an efficient
optimization algorithm; (c) it has low complexity (that is, sampling Rademacher width Õ(N · poly(k))).
Concretely, we have the following theorem. We note that these three properties (together with Lemma 4.2)
imply that Algorithm 1 with Q = Q^{sos}_p and ρ = O(k^2 r^{2/p} d^{-1}/γ^2 · log d) gives a group encoding and
decoding pair with average encoding length O(k^2 r^{2/p}/γ^2 · log d) and bias γ.
Theorem 4.3. For every p ≥ 4, let N = d^{c_0 p} with a sufficiently large absolute constant c_0. Then
there exists a convex set Q^{sos}_p ⊆ R^{d×N} such that (a) it satisfies condition (4.3); (b) the optimization
problems in (4.2) and in the group decoding step are solvable by semidefinite programming with
run-time n^{O(p^2)}; (c) the sampling Rademacher width of Q^{sos}_p is bounded by
SRW_m(Q) ≤ Õ(k^2 r^{2/p} √(N/m)).
5 Conclusions
We have defined a new framework for unsupervised learning which replaces generative assumptions
by notions of reconstruction error and encoding length. This framework is comparative, and allows
learning of particular hypothesis classes with respect to an unknown distribution by other hypothesis
classes. We demonstrate its usefulness by giving new polynomial-time algorithms for two unsupervised hypothesis classes. First, we give new polynomial-time algorithms for dictionary models in a
significantly broader range of parameters and assumptions. Another domain is the class of spectral
encodings, for which we consider a new class of models that is shown to strictly encompass PCA
and kernel PCA. This new class is capable, in contrast to previous spectral models, of learning algebraic
manifolds. We give efficient learning algorithms for this class based on convex relaxations.
Acknowledgements
We thank Sanjeev Arora for many illuminating discussions and crucial observations in earlier phases
of this work, amongst them that a representation which preserves information for all classifiers
requires lossless compression.
Scaling Factorial Hidden Markov Models:
Stochastic Variational Inference without Messages
Yin Cheng Ng
Dept. of Statistical Science
University College London
[email protected]
Pawel Chilinski
Dept. of Computing Science
University College London
[email protected]
Ricardo Silva
Dept. of Statistical Science
University College London
[email protected]
Abstract
Factorial Hidden Markov Models (FHMMs) are powerful models for sequential
data but they do not scale well with long sequences. We propose a scalable inference and learning algorithm for FHMMs that draws on ideas from the stochastic
variational inference, neural network and copula literatures. Unlike existing approaches, the proposed algorithm requires no message passing procedure among
latent variables and can be distributed to a network of computers to speed up learning. Our experiments corroborate that the proposed algorithm does not introduce
further approximation bias compared to the proven structured mean-field algorithm,
and achieves better performance with long sequences and large FHMMs.
1 Introduction
Breakthroughs in modern technology have allowed more sequential data to be collected at higher resolutions. The resulting sequential data sets are often extremely long and high-dimensional, exhibiting
rich structures and long-range dependency that can only be captured by fitting large models to the
sequences, such as Hidden Markov Models (HMMs) with a large state space. The standard methods
of learning and performing inference in the HMM class of models are the Expectation-Maximization
(EM) and the Forward-Backward algorithms. The Forward-Backward and EM algorithms are prohibitively expensive for long sequences and large models because of their linear and quadratic
computational complexity with respect to sequence length and state space size respectively.
To rein in the computational cost of inference in HMMs, several variational inference algorithms that
trade-off inference accuracy in exchange for lower computational cost have been proposed in the
literatures. Variational inference is a deterministic approximate inference technique that approximates
posterior distribution p by minimizing the Kullback-Leibler divergence KL(q||p), where q lies in a
family of distributions selected to approximate p as closely as possible while keeping the inference
algorithm computationally tractable [24]. Despite its biased approximation of the actual posteriors,
the variational inference approach has been proven to work well in practice [21].
Variational inference has also been successfully scaled to tackle problems with large data sets
through the use of stochastic gradient descent (SGD) algorithms [12]. However, applications of such
techniques to models where the data is dependent (i.e., non-i.i.d.) require much care in the choice of
the approximating family and parameter update schedules to preserve dependency structure in the
data [9]. More recently, developments of stochastic variational inference algorithms to scale models
for non-i.i.d. data to large data sets have been increasingly explored [5, 9].
We propose a stochastic variational inference approach to approximate the posterior of hidden Markov
chains in Factorial Hidden Markov Models (FHMM) with independent chains of bivariate Gaussian
copulas. Unlike existing variational inference algorithms, the proposed approach eliminates the need
for explicit message passing between latent variables and allows computations to be distributed to
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
multiple computers. To scale the variational distribution to long sequences, we reparameterise the
bivariate Gaussian copula chain parameters with feed-forward recognition neural networks that are
shared by copula chain parameters across different time points. The use of recognition networks in
variational inference has been well-explored in models in which data is assumed to be i.i.d. [14, 11].
To the best of our knowledge, the use of recognition networks to decouple inference in non-factorised
stochastic process of unbounded length has not been well-explored. In addition, both the FHMM
parameters and the parameters of the recognition networks are learnt in conjunction by maximising
the stochastic lower bound of the log-marginal likelihood, computed based on randomly sampled
subchains from the full sequence of interest. The combination of recognition networks and stochastic
optimisations allow us to scale the Gaussian copula chain variational inference approach to very long
sequences.
2 Background

2.1 Factorial Hidden Markov Model
Factorial Hidden Markov Models (FHMMs) are a class of HMMs consisting of M latent variables
s_t = (s^1_t, . . . , s^M_t) at each time point, and observations y_t where the conditional emission probability
of the observations p(y_t | s_t, θ) is parameterised through factorial combinations of s_t and emission
parameters θ. Each of the latent variables s^m_t evolves independently in time through discrete-valued
Markov chains governed by transition matrix A_m [8]. For a sequence of observations
y = (y_1, . . . , y_T) and corresponding latent variables s = (s_1, . . . , s_T), the joint distribution can be
written as follows:

p(y, s) = [∏_{m=1}^{M} p(s^m_1)] p(y_1 | s_1, θ) ∏_{t=2}^{T} p(y_t | s_t, θ) ∏_{m=1}^{M} p(s^m_t | s^m_{t−1}, A_m)    (1)
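As a concrete reading of Equation (1), the following minimal sketch (our own toy code with hypothetical names, not the authors' implementation) evaluates the log-joint of a binary-chain FHMM given a state path and any emission log-density:

```python
import numpy as np

# Toy evaluation of the FHMM log-joint in Equation (1); all names are ours.
# s: (T, M) binary state paths; log_pi: (M, 2) initial log-probs;
# log_A: (M, 2, 2) transition log-probs; log_emission(y_t, s_t) = log p(y_t | s_t, theta).
def fhmm_log_joint(y, s, log_pi, log_A, log_emission):
    T, M = s.shape
    lp = sum(log_pi[m, s[0, m]] for m in range(M)) + log_emission(y[0], s[0])
    for t in range(1, T):
        lp += log_emission(y[t], s[t])
        lp += sum(log_A[m, s[t - 1, m], s[t, m]] for m in range(M))
    return lp
```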
Depending on the state of the latent variables at a particular time point, different subsets of the emission
parameters θ can be selected, resulting in a dynamic mixture of distributions for the data. The
factorial representation of the state space reduces the number of parameters required to encode the transition
dynamics compared to regular HMMs with the same number of states. As an example, a state space
with 2^M states can be encoded by M binary transition matrices with a total of 2M parameters, while
a regular HMM requires a transition matrix with 2^M × (2^M − 1) parameters to be estimated.
In this paper, we specify a FHMM with D-dimensional Gaussian emission distributions and M
binary hidden Markov chains. The emission distributions share a covariance matrix Σ across different
states while the mean is parameterised as a linear combination of the latent variables,

μ_t = W^T ρ_{s_t},    (2)

where ρ_{s_t} = [s^1_t, . . . , s^M_t, 1]^T is an (M + 1)-dimensional binary vector and W ∈ R^{(M+1)×D}. The
FHMM model parameters θ = (Σ, W, A_1, . . . , A_M) can be estimated with the EM algorithm. Note
that to facilitate optimisations, we reparameterised Σ as LL^T where L ∈ R^{D×D} is a lower-triangular
matrix.
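A minimal sketch of the emission parameterisation in Equation (2), with our own variable names:

```python
import numpy as np

# mu_t = W^T rho_{s_t} with rho_{s_t} = [s_t^1, ..., s_t^M, 1]^T (Equation (2)),
# and the shared covariance Sigma = L L^T with L lower-triangular.
def emission_mean(s_t, W):
    rho = np.append(s_t, 1.0)   # append the bias entry
    return W.T @ rho            # shape (D,)

def covariance(L):
    return L @ L.T
```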
2.1.1 Inference in FHMMs
Exact inference in FHMM is intractable due to the O(TMK^{M+1}) computational complexity for a
FHMM with M K-state hidden Markov chains [15]. A structured mean-field (SMF) variational
inference approach proposed in [8] approximates the posterior distribution with M independent
Markov chains and reduces the complexity to O(TMK²) in models with linear-Gaussian emission
distributions. While the reduction in complexity is significant, inference and learning with SMF
remain insurmountable in the presence of extremely long sequences. In addition, SMF requires the
storage of O(2TMK) variational parameters in-memory per training sequence. Such computational
requirements remain expensive to satisfy even in the age of cloud computing.
2.2 Gaussian Copulas
Gaussian copulas are a family of multivariate cumulative distribution functions (CDFs) that capture
linear dependency structure between random variables with potentially different marginal distributions.
Given two random variables X_1, X_2 with their respective marginal CDFs F_1, F_2, their Gaussian
copula joint CDF can be written as

Φ_ρ(Φ^{−1}(F_1(x_1)), Φ^{−1}(F_2(x_2)))    (3)

where Φ^{−1} is the quantile function of the standard Gaussian distribution, and Φ_ρ is the CDF of the
standard bivariate Gaussian distribution with correlation ρ. In a bivariate setting, the dependency
between X_1 and X_2 is captured by ρ. The bivariate Gaussian copula can be easily extended to
multivariate settings through a correlation matrix. For an in-depth introduction to copulas, please
refer to [18, 3].
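As a sketch, Equation (3) can be evaluated directly with standard routines (scipy's norm.ppf and multivariate_normal.cdf are real library calls; the wrapper is ours). With uniform marginals, F is the identity on [0, 1]:

```python
from scipy.stats import norm, multivariate_normal

# Evaluate the bivariate Gaussian copula CDF of Equation (3) at (F1(x1), F2(x2)).
def gaussian_copula_cdf(u1, u2, rho):
    z = [norm.ppf(u1), norm.ppf(u2)]    # Phi^{-1} applied to the marginal CDF values
    return multivariate_normal(cov=[[1.0, rho], [rho, 1.0]]).cdf(z)

print(gaussian_copula_cdf(0.3, 0.7, 0.5))
```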
2.3 Stochastic Variational Inference
Variational inference is a class of deterministic approximate inference algorithms that approximate
intractable posterior distributions p(s|y) of latent variables s given data y with a tractable family of
variational distributions q_λ(s) parameterised by variational parameters λ. The variational parameters
are fitted to approximate the posterior distributions by maximising the evidence lower bound of the
log-marginal likelihood (ELBO) [24]. By applying Jensen's inequality to ∫ p(y, s) ds, the ELBO
can be expressed as

ELBO = E_q[log p(y, s)] − E_q[log q(s)].    (4)
The ELBO can also be interpreted as the negative KL-divergence KL(q_λ(s) || p(s|y)) up to a constant.
Therefore, variational inference results in the variational distribution that is the closest to p within the
approximating family as measured by KL.
Maximising the ELBO in the presence of a large data set is computationally expensive as it requires the
ELBO to be computed over all data points. Stochastic variational inference (SVI) [12] successfully
scales the inference technique to large data sets using subsampling-based stochastic gradient descent
algorithms [2].
2.4 Amortised Inference and Recognition Neural Networks
The many successes of neural networks in tackling certain supervised learning tasks have generated
much research interest in applying neural networks to unsupervised learning and probabilistic
modelling problems [20, 7, 19, 14]. A recognition neural network was initially proposed in [11] to
extract underlying structures of data modelled by a generative neural network. Taking the observed
data as input, the feed-forward recognition network learns to predict a vector of unobserved code that
the generative neural network initially conjectured to generate the observed data.
More recently, a recognition network was applied to variational inference for latent variable models[14,
7]. Given data, the latent variable model and an assumed family of variational distributions, the
recognition network learns to predict optimal variational parameters for the specific data points. As
the recognition network parameters are shared by all data points, information learned by the network
on a subset of data points are shared with other data points. This inference process is aptly named
amortised inference. In short, recognition network can simply be thought of as a feed-forward neural
network that learns to predict optimal variational parameters given the observed data, with ELBO as
its utility function.
3 The Message Free Stochastic Variational Inference Algorithm
While structured mean-field variational inference and its associated EM algorithms are effective tools
for inference and learning in FHMMs with short sequences, they become prohibitively expensive as
the sequences grow longer. For example, one iteration of SMF forward-backward message passing
for a FHMM with 5 Markov chains and 10^6 sequential data points takes hours of computing time on a
modern 8-core workstation, rendering SMF unusable for large-scale problems. To scale FHMMs to
long sequences, we resort to stochastic variational inference.
The proposed variational inference algorithm approximates posterior distributions of the M hidden
Markov chains in FHMM with M independent chains of bivariate Gaussian-Bernoulli copulas. The
computational cost of optimising the variational parameters is managed by a subsampling-based
stochastic gradient ascent algorithm similar to SVI. In addition, parameters of the copula chains
are reparameterised using feed-forward recognition neural networks to improve efficiency of the
variational inference algorithm.
In contrast to the EM approach for learning FHMM model parameters, our approach allows for both
the model parameters and variational parameters to be learnt in conjunction by maximising the ELBO
3
with a stochastic gradient ascent algorithm. In the following sections, we describe the variational
distributions and recognition networks, and derive the stochastic ELBO for SGD.
3.1 Variational Chains of Bivariate Gaussian Copulas
Similar to the SMF variational inference algorithm proposed in [8], we aim to preserve the posterior
dependency of latent variables within the same hidden Markov chain by introducing chains of
bivariate Gaussian copulas. The chain of bivariate Gaussian copulas variational distribution can be
written as the product of bivariate Gaussian copulas divided by the marginals of latent variables at the
intersection of the pairs,

q(s^m) = [∏_{t=2}^{T} q(s^m_{t−1}, s^m_t)] / [∏_{t=2}^{T−1} q(s^m_t)],    (5)

where q(s^m_{t−1}, s^m_t) is the joint probability density or mass function of a bivariate Gaussian copula.
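A sketch of evaluating (5) for one chain, given the pairwise joints and interior marginals (our own toy code, with hypothetical names):

```python
import numpy as np

# q(s^m) as in Equation (5): product of pairwise joints over the interior marginals.
# s: (T,) binary path; pair_pmfs[t - 1]: 2x2 array q(s_{t-1}, s_t); marginals[t] = q(s_t = 1).
def chain_pmf(s, pair_pmfs, marginals):
    log_q = sum(np.log(pair_pmfs[t - 1][s[t - 1], s[t]]) for t in range(1, len(s)))
    log_q -= sum(np.log(marginals[t] if s[t] == 1 else 1 - marginals[t])
                 for t in range(1, len(s) - 1))
    return np.exp(log_q)
```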
The copula parameterization in Equation (5) offers several advantages. Firstly, the overlapping bivariate
copula structure enforces coherence of q(s^m_t) such that ∑_{s^m_{t−1}} q(s^m_{t−1}, s^m_t) = ∑_{s^m_{t+1}} q(s^m_t, s^m_{t+1}).
Secondly, the chain structure of the distribution restricts the growth in the number of variational
parameters to only two parameters per chain for every increment in the sequence length. Finally,
the Gaussian copula allows the marginals and the dependency structure of the random variables to be modelled
separately [3]. The decoupling of the marginal and correlation parameters thus allows these
parameters to be estimated by unconstrained optimizations, and also lets them be predicted
separately using feed-forward recognition neural networks.
For the rest of the paper, we assume that the FHMM latent variables are Bernoulli random variables
with the following bivariate Gaussian-Bernoulli copula probability mass function (PMF) as their
variational PMFs:

q(s^m_{t−1} = 0, s^m_t = 0) = q^m_{00,t}
q(s^m_{t−1} = 0, s^m_t = 1) = 1 − α_{t−1,m} − q^m_{00,t}
q(s^m_{t−1} = 1, s^m_t = 0) = 1 − α_{t,m} − q^m_{00,t}
q(s^m_{t−1} = 1, s^m_t = 1) = α_{t,m} + α_{t−1,m} + q^m_{00,t} − 1    (6)

where q^m_{00,t} = Φ_{ρ_{t,m}}(Φ^{−1}(1 − α_{t−1,m}), Φ^{−1}(1 − α_{t,m})) and q(s^m_t = 1) = α_{t,m}. The Gaussian-Bernoulli
copula can be easily extended to multinomial random variables.
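The four probabilities in (6) follow directly from the copula CDF in (3); a sketch (the names are ours, the scipy calls are real):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# 2x2 PMF of Equation (6); entry [i, j] = q(s_{t-1} = i, s_t = j).
def gaussian_bernoulli_pmf(alpha_prev, alpha, rho):
    q00 = multivariate_normal(cov=[[1.0, rho], [rho, 1.0]]).cdf(
        [norm.ppf(1 - alpha_prev), norm.ppf(1 - alpha)])
    return np.array([[q00, 1 - alpha_prev - q00],
                     [1 - alpha - q00, alpha + alpha_prev + q00 - 1]])

pmf = gaussian_bernoulli_pmf(0.4, 0.6, 0.3)
print(pmf, pmf.sum())   # rows/columns marginalise to the alphas; total is 1
```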
Assuming independence between random variables in different hidden chains, the posterior distribution
of s can be factorised by chains and approximated by

q(s) = ∏_{m=1}^{M} q(s^m)    (7)
3.2 Feed-forward Recognition Neural Networks
The number of variational parameters in the chains of bivariate Gaussian copulas scales linearly with
respect to the length of the sequence as well as the number of sequences in the data set. While it is
possible to directly optimise these variational parameters, the approach quickly becomes infeasible
as the size of the data set grows. We propose to circumvent the challenging scalability problem by
reparameterising the variational parameters with rolling feed-forward recognition neural networks
that are shared among variational parameters within the same chain. The marginal variational
parameters α_{t,m} and copula correlation variational parameters ρ_{t,m} are parameterised with different
recognition networks as they are parameters of a different nature.

Given an observed sequence y = (y_1, . . . , y_T), the marginal and correlation recognition networks for
hidden chain m compute the variational parameters α_{t,m} and ρ_{t,m} by performing a forward pass on a
window of observed data ỹ_t = (y_{t−Δt/2}, . . . , y_t, . . . , y_{t+Δt/2}):

α_{t,m} = f^m_α(ỹ_t),    ρ_{t,m} = f^m_ρ(ỹ_t),    (8)

where Δt + 1 is the user-selected size of the rolling window, and f^m_α and f^m_ρ are the marginal and correlation
recognition networks for hidden chain m with parameters φ_m = (φ_{α,m}, φ_{ρ,m}). The output layer nonlinearities of f^m_α and f^m_ρ are chosen to be the sigmoid and hyperbolic tangent functions respectively
to match the ranges of α_{t,m} and ρ_{t,m}.
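A minimal one-hidden-layer version of the two recognition networks in (8) (a toy parameterisation of ours, not the authors' architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass mapping a flattened window of D * (dt + 1) observations around
# time t to (alpha_{t,m}, rho_{t,m}); sigmoid and tanh match their ranges.
def recognition_forward(y_window, W1, b1, w_alpha, b_alpha, w_rho, b_rho):
    h = np.tanh(W1 @ y_window + b1)
    alpha_t = sigmoid(w_alpha @ h + b_alpha)   # marginal q(s_t^m = 1) in (0, 1)
    rho_t = np.tanh(w_rho @ h + b_rho)         # copula correlation in (-1, 1)
    return alpha_t, rho_t
```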
The recognition network hyperparameters, such as the number of hidden units, non-linearities, and
the window size Δt, can be chosen based on computing budget and empirical evidence. In our
experiments with shorter sequences, where the ELBO can be computed within a reasonable amount of
time, we did not observe a significant difference in the converged ELBOs among different choices of
non-linearity. However, we observed that the converged ELBO is sensitive to the number of hidden
units, and the number of hidden units needs to be adapted to the data set and computing budget.
Recognition networks with larger hidden layers have larger capacity to approximate the posterior
distributions as closely as possible but require more computing budget to learn. Similarly, the choice
of Δt determines the amount of information that can be captured by the variational distributions as
well as the computing budget required to learn the recognition network parameters. As a rule of
thumb, we recommend the number of hidden units and Δt to be chosen as large as the computing
budget allows in long sequences. We emphasize that the range of posterior dependency captured
by the correlation recognition networks is not limited by Δt, as the recognition network parameters
are shared across time, allowing dependency information to be encoded in the network parameters.
For FHMMs with a large number of hidden chains, various schemes to share the networks' hidden
layers can be devised to scale the method to FHMMs with a large state space. This presents another
trade-off between computational requirements and goodness of posterior approximations.
In addition to scalability, the use of recognition networks also allows our approach to perform fast
inference at run-time, as computing the posterior distributions only require forward passes of the
recognition networks with data windows of interest. The computational complexity of the recognition
network forward pass scales linearly with respect to ?t. As with other types of neural networks,
the computation is highly data-parallel and can be massively sped up with GPU. In comparison,
computation for a stochastic variational inference algorithm based on a message passing approach
also scales linearly with respect to ?t but is not data-parallel [5]. Subchains from long sequences,
together with their associated recognition network computations, can also be distributed across a
cluster of computers to improve learning and inference speed.
However, the use of recognition networks is not without its drawbacks. Compared to message passing
algorithms, the recognition networks approach cannot handle missing data gracefully by integrating
out the relevant random variables. The fidelity of the approximated posterior can also be limited by
the capacity of the neural networks and bad local minimas. The posterior distributions of the random
variables close to the beginning and the end of the sequence also require special handling, as the
rolling window cannot be moved any further to the left or right of the sequences. In such scenarios,
the posteriors can be computed by adapting the structured mean-field algorithm proposed in [8] to the
subchains at the boundaries (see Supplementary Material). The importance of the boundary scenarios
in learning the FHMM model parameters diminishes as the data sequence becomes longer.
3.3 Learning Recognition Network and FHMM Parameters
Given a sequence y of length T, the M-chain FHMM parameters θ and recognition network parameters
φ = (φ_1, . . . , φ_M) need to be adapted to the data by maximising the ELBO as expressed in Equation
(4) with respect to θ and φ. Note that the distribution q(s^m) is now parameterised by the recognition
network parameters φ_m. For notational simplicity, we do not explicitly express the parameterisation
of q(s^m) in our notations. Plugging in the FHMM joint distribution in Equation (1) and the variational
distribution in Equation (7), the FHMM ELBO L(θ, φ) for the variational chains of bivariate Gaussian
copulas is approximated as
L(θ, φ) ≈ ∑_{t = Δt/2 + 1}^{T − Δt/2 − 1} ( E_q[log p(y_t | s^1_t, . . . , s^M_t)] + ∑_{m=1}^{M} ( E_q[log p(s^m_t | s^m_{t−1})] + E_q[log q(s^m_t)] − E_q[log q(s^m_t, s^m_{t+1})] ) )    (9)
Equation (9) is only an approximation of the ELBO, as the variational distributions of the s^m_t close to
the beginning and end of y cannot be computed using the recognition networks. Because of the
approximation, the FHMM initial distribution ∏_{m=1}^{M} p(s^m_1) cannot be learned using our approach.
However, it can be approximated by the stationary distribution of the transition matrices as T
becomes large, assuming that the sequence is close to stationary [5]. Comparisons to SMF in our
experiment results suggest that the error caused by the approximations is negligible.
The log-transition probability expectations and variational entropy in Equation (9) can be easily
computed as they are simply sums over pairs of Bernoulli random variables. The expectations of
log-emission distributions can be efficiently computed for certain distributions, such as multinomial
and multivariate Gaussian distributions. Detailed derivations of the expectation terms in ELBO can
be found in the Supplementary Material.
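For instance, the transition expectation in (9) reduces to a weighted sum over the four joint configurations of the pair, with weights given by the copula PMF in (6); a one-line sketch with our own names:

```python
import numpy as np

# E_q[log p(s_t^m | s_{t-1}^m)] = sum_{i,j} q(s_{t-1}=i, s_t=j) * log A_m[i, j].
def expected_log_transition(pair_pmf, log_A_m):
    return np.sum(pair_pmf * log_A_m)
```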
3.3.1 Stochastic Gradient Descent & Subsampling Scheme
We propose to optimise Equation (9) with SGD by computing noisy unbiased gradients of the ELBO with
respect to θ and φ based on contributions from subchains of length Δt + 1 randomly sampled from
y [2, 12]. Multiple subchains can be sampled in each of the learning iterations to form a mini-batch
of subchains, reducing the variance of the noisy gradients. Noisy gradients with high variance can
cause the SGD algorithm to converge slowly or diverge [2]. The subchains should also be sampled
randomly without replacement until all subchains in y are depleted, to speed up convergence. To
ensure unbiasedness of the noisy gradients, the gradients computed in each iteration need to be
multiplied by a batch factor

c = (T − Δt) / n_minibatch,    (10)

where n_minibatch is the number of subchains in each mini-batch. The scaled noisy gradients can then
be used by the SGD algorithm of choice to optimise L. In our implementation of the algorithm, gradients
are computed using the Python automatic differentiation tool [17] and the optimisation is performed
using Rmsprop [22].
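A sketch of the subchain sampler and the batch factor in Equation (10) (the gradient step itself would come from an autodiff tool such as the one cited in [17]; the function names here are ours):

```python
import numpy as np

def sample_minibatch(T, dt, n_minibatch, rng):
    # start indices of subchains of length dt + 1, sampled without replacement
    starts = rng.choice(T - dt, size=n_minibatch, replace=False)
    return [(s, s + dt) for s in starts]

def batch_factor(T, dt, n_minibatch):
    return (T - dt) / n_minibatch   # rescales the noisy gradients, Eq. (10)

rng = np.random.default_rng(0)
print(sample_minibatch(1000, 20, 3, rng), batch_factor(1000, 20, 3))
```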
4 Related Work
Copulas have previously been adopted in the variational inference literature as a tool to model posterior
dependency in models with an i.i.d. data assumption [23, 10]. However, the previously proposed
approaches cannot be directly applied to the HMM class of models without addressing parameter
estimation issues, as the dimensionality of the posterior distributions grows with the length of the sequences.
The proposed formulation of the variational distribution circumvents the problem by exploiting the
chain structure of the model, coupling only random variables within the same chain that are adjacent
in time with a bivariate Gaussian-Bernoulli copula, leading to a coherent chain of bivariate Gaussian
copulas as the variational distribution.
On the other hand, a stochastic variational inference algorithm that also aims to scale HMM class
of models to long sequences has previously been proposed in [5]. Our proposed algorithm differs
from the existing approach in that it does not require explicit message passing to perform inference
and learning. Applying the algorithm proposed in [5] to FHMM requires multiple message passing
iterations to determine the buffer length of each subchain in the mini batch of data, and the procedure
needs to be repeated for each FHMM Markov chain. The message passing routines can be expensive
as the number of Markov chains grows. In contrast, the proposed recognition network approach
eliminates the need for iterative message passing and allows the variational distributions to be learned
directly from the data using gradient descent. The use of recognition networks also allows fast
inference at run-time with modern parallel computing hardware.
The use of recognition networks as inference devices for graphical models has received much research
interest recently because of its scalability and simplicity. Similar to our approach, the algorithms
proposed in [4, 13] also make use of the recognition networks for inference, but still rely on message
passing to perform certain computations. In addition, [1] proposed an inference algorithm for state
space models using a recognition network. However, the algorithm cannot be applied to models with
non-Gaussian posteriors.
Finally, the proposed algorithm is analogous to composite likelihood algorithms for learning in HMMs
in that the data dependency is broken up according to subchains to allow tractable computations
[6]. The EM-composite likelihood algorithm in [6] partitions the likelihood function according to
subchains, bounding each subchain separately with a different posterior distribution that uses only
the data in that subsequence. Our recognition models generalize that.
5 Experiments
We evaluate the validity of our algorithm and the scalability claim with experiments using real
and simulated data. To validate the algorithm, we learn FHMMs on simulated and real data using
the proposed algorithm and the existing SMF-EM algorithm. The models learned using the two
approaches are compared with log-likelihood (LL). In addition, we compare the learned FHMM
parameters to parameters used to simulate the data. The validation experiments ensure that the
proposed approach does not introduce further approximation bias compared to SMF.
To verify the scalability claim, we compare the LL of FHMMs with different numbers of hidden
chains learned on simulated sequences of increasing length using the proposed and SMF-based EM
algorithms. Two sets of experiments are conducted to showcase scalability with respect to sequence
length and the number of hidden Markov chains. To simulate real-world scenarios where computing
budget is constrained, both algorithms are given the same fixed computing budget. The learned
FHMMs are compared after the computing budget is depleted. Finally, we demonstrate the scalability
of the proposed algorithm by learning a FHMM with 10 binary hidden Markov chains on a long time series
recorded in a real-world scenario.
5.1 Algorithm Validation
Simulated Data We simulate a 1,000-timestep-long 2-dimensional sequence from a FHMM with 2
hidden binary chains and Gaussian emission, and attempt to recover the true model parameters with
the proposed approach. The simulation procedure is detailed in the Supplementary Material. The
proposed algorithm successfully recovers the true model parameters from the simulated data. The
LL of the learned model also compared favorably to FHMM learned using the SMF-EM algorithm,
showing no visible further bias compared to the proven SMF-EM algorithm. The LL of the proposed
algorithm and SMF-EM are shown in Table 1. The learned emission parameters, together with the
training data, are visualised in Figure 1.
Bach Chorales Data Set [16] Following the experiment in [8], we compare the proposed algorithm
to SMF-EM based on LL. The training and testing data consist of 30 and 36 sequences from the Bach
Chorales data set respectively. FHMMs with various numbers of binary hidden Markov chains are
learned from the training data with both algorithms. The log-likelihoods, tabulated in Table 1, show
that the proposed algorithm is competitive with SMF-EM on a real data set in which FHMM is proven
to be a good model, and show no further bias. Note that the training log-likelihood of the FHMM
with 8 chains trained using the proposed algorithm is smaller than the FHMM with 7 chains, showing
that the proposed algorithm can be trapped in bad local minima.
5.2 Scalability Verification
Simulated Data This experiment consists of two parts to verify scalability with respect to sequence
length and the state space size. In the first component, we simulate 2-dimensional sequences of
varying length from a FHMM with 4 binary chains using an approach similar to the validation
experiment. Given a fixed computing budget of 2 hours per sequence on a 24-core Intel i7 workstation,
both SMF-EM and the proposed algorithm attempt to fit 4-chain FHMMs to the sequences. Two
testing sequences of length 50,000 are also simulated from the same model. In the second component,
we keep the sequence length at 15,000 and attempt to learn FHMMs with various numbers of chains
with a computing budget of 1,000s. The computing budget in the second component is scaled according
to the sequence length. Log-likelihoods are computed with the last available learned parameters after
computing time runs out. The proposed algorithm is competitive with SMF-EM when sequences are
shorter and state space is smaller, and outperforms SMF-EM in longer sequences and larger state
space. The results in Figure 2 and Figure 3 both show the increasing gaps in the log-likelihoods as
sequence length and state space size increase. The recognition networks in the experiments have 1
hidden layer with 30 tanh hidden units, and rolling window size of 5. The marginal and correlation
recognition networks for latent variables in the same FHMM Markov chain share hidden units to
reduce memory and computing requirements as the number of Markov chains increases.
Household Power Consumption Data Set [16] We demonstrate the applicability of our algorithm
to long sequences in which learning with SMF-EM using the full data set is simply intractable. The
power consumption data set consists of a 9-dimensional sequence of 2,075,259 time steps. After
dropping the date/time series and the current intensity series that is highly correlated with the power
consumption series, we keep the first 10^6 data points of the remaining 6-dimensional sequence for
training and set aside the remaining series as test data. A FHMM with 10 hidden Markov chains is
learned on the training data using the proposed algorithm. In this particular problem, we force all
20 recognition networks in our algorithm to share a common tanh hidden layer of 200 units. The
rolling window size is set to 21 and we allow the algorithm to complete 150,000 SGD iterations with
10 subchains per iteration before terminating. To compare, we also learned the 10-chain FHMM with
SMF-EM on the last 5,000 data points of the training data. The models learned with the proposed
algorithm and SMF-EM are compared based on the Mean Squared Error (MSE) of the smoothed test
data (i.e., learned emission means weighted by latent variable posterior). As shown in Table 2, the
test MSEs of the proposed algorithm are lower than the SMF-EM algorithm in all data dimensions.
The result shows that learning with more data is indeed advantageous, and the proposed algorithm
allows FHMMs to take advantage of the large data set.
Figure 1: Simulated data in the validation experiments with the emission parameters from simulation (red), learned by the proposed algorithm (green) and SMF-EM (blue). The emission means are depicted as stars and standard deviations as elliptical contours at 1 standard deviation.
Figure 2: The red and blue lines show the train (solid) and test (dashed) LL results from the proposed and SMF-EM algorithms in the scalability experiments as the sequence length (x-axis) increases. Both algorithms are given a 2hr computing budget per data set. SMF-EM failed to complete a single iteration for length of 150,000.
Table 1: LL from the validation experiments. The results demonstrate that the proposed algorithm is competitive with SMF. Plot of the Bach chorales LL is available in the Supplementary Material.

                 Proposed Algo.       SMF
nchain           LLtrain  LLtest     LLtrain  LLtest
Simulated Data
2                -2.320   -2.332     -2.315   -2.338
Bach Chorales
2                -7.241   -7.908     -7.172   -7.869
3                -6.627   -7.306     -6.754   -7.489
4                -6.365   -7.322     -6.409   -7.282
5                -6.135   -6.947     -5.989   -7.174
6                -5.973   -6.716     -5.852   -7.008
7                -5.754   -6.527     -5.771   -6.664
8                -5.836   -6.722     -5.675   -6.697

Figure 3: The red and blue lines show the train (solid) and test (dashed) LL results from the proposed and SMF-EM algorithms in the scalability experiments as the number of hidden Markov chains (x-axis) increases. Both algorithms are given a 1,000s computing budget per data set.

Table 2: Test MSEs of the SMF-EM and the proposed algorithm for each dimension in the household power consumption data set. The results show that the proposed algorithm is able to take advantage of the full data set to learn a better model because of its scalability. Plots of the fitted and observed data are available in the Supplementary Material.

Dim.    MSE_SMF    MSE_Proposed
1       0.155      0.082
2       0.084      0.055
3       0.079      0.027
4       0.466      0.145
5       0.121      0.062
6       0.202      0.145
6 Conclusions
We propose a novel stochastic variational inference and learning algorithm that does not rely on
message passing to scale FHMM to long sequences and large state space. The proposed algorithm
achieves competitive results when compared to structured mean-field on short sequences, and outperforms structured mean-field on longer sequences with a fixed computing budget that resembles a
real-world model deployment scenario. The applicability of the algorithm to long sequences where
the structured mean-field algorithm is infeasible is also demonstrated. In conclusion, we believe that
the proposed scalable algorithm will open up new opportunities to apply FHMMs to long sequential
data with rich structures that could not be previously modelled using existing algorithms.
References
[1] Evan Archer, Il Memming Park, Lars Buesing, John Cunningham, and Liam Paninski. Black box variational inference for state space models. arXiv preprint arXiv:1511.07367, 2015.
[2] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[3] Gal Elidan. Copulas in machine learning. In Piotr Jaworski, Fabrizio Durante, and Wolfgang Karl Härdle, editors, Copulae in Mathematical and Quantitative Finance, Lecture Notes in Statistics, pages 39–60. Springer Berlin Heidelberg, 2013.
[4] Kai Fan, Chunyuan Li, and Katherine Heller. A unifying variational inference framework for hierarchical graph-coupled HMM with an application to influenza infection. 2016.
[5] Nicholas Foti, Jason Xu, Dillon Laird, and Emily Fox. Stochastic variational inference for hidden Markov models. In Advances in Neural Information Processing Systems, pages 3599–3607, 2014.
[6] Xin Gao and Peter X-K Song. Composite likelihood EM algorithm with applications to multivariate hidden Markov model. Statistica Sinica, pages 165–185, 2011.
[7] Samuel J Gershman and Noah D Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014.
[8] Zoubin Ghahramani and Michael I Jordan. Factorial hidden Markov models. Machine Learning, 29(2-3):245–273, 1997.
[9] Prem K Gopalan and David M Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534–14539, 2013.
[10] Shaobo Han, Xuejun Liao, David B Dunson, and Lawrence Carin. Variational Gaussian copula inference. arXiv preprint arXiv:1506.05860, 2015.
[11] Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161, 1995.
[12] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[13] Matthew J Johnson, David Duvenaud, Alexander B Wiltschko, Sandeep R Datta, and Ryan P Adams. Composing graphical models with neural networks for structured representations and fast inference. arXiv preprint arXiv:1603.06277, 2016.
[14] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[15] Steffen L Lauritzen and David J Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society. Series B (Methodological), pages 157–224, 1988.
[16] M. Lichman. UCI machine learning repository, 2013.
[17] Dougal Maclaurin, David Duvenaud, Matthew Johnson, and Ryan P. Adams. Autograd: Reverse-mode differentiation of native Python, 2015.
[18] Roger B Nelsen. An Introduction to Copulas, volume 139. Springer Science & Business Media, 2013.
[19] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[20] Andreas Stuhlmüller, Jacob Taylor, and Noah Goodman. Learning stochastic inverses. In Advances in Neural Information Processing Systems, pages 3048–3056, 2013.
[21] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, volume 19, 2007.
[22] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[23] Dustin Tran, David Blei, and Edo M Airoldi. Copula variational inference. In Advances in Neural Information Processing Systems, pages 3550–3558, 2015.
[24] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
6,119 | 6,535 | Nearly Isometric Embedding by Relaxation
James McQueen
Department of Statistics
University of Washington
Seattle, WA 98195
[email protected]
Marina Meil?a
Department of Statistics
University of Washington
Seattle, WA 98195
[email protected]
Dominique Perrault-Joncas
Google
Seattle, WA 98103
[email protected]
Abstract
Many manifold learning algorithms aim to create embeddings with low or no distortion (isometric). If the data has intrinsic dimension d, it is often impossible to
obtain an isometric embedding in d dimensions, but possible in s > d dimensions.
Yet, most geometry preserving algorithms cannot do the latter. This paper proposes an embedding algorithm to overcome this. The algorithm accepts as input,
besides the dimension d, an embedding dimension s ≥ d. For any data embedding
Y, we compute a Loss(Y), based on the push-forward Riemannian metric associated with Y, which measures the deviation of Y from isometry. Riemannian
Relaxation iteratively updates Y in order to decrease Loss(Y). The experiments
confirm the superiority of our algorithm in obtaining low distortion embeddings.
1
Introduction, background and problem formulation
Suppose we observe data points sampled from a smooth manifold M with intrinsic dimension d
which is itself a submanifold of D-dimensional Euclidean space, M ⊂ R^D. The task of manifold
learning is to provide a mapping φ : M → N (where N ⊂ R^s) of the manifold into lower
dimensional space, s ≪ D. According to the Whitney Embedding Theorem [11] we know that
M can be embedded smoothly into R^{2d} using one homeomorphism φ. Hence we seek one smooth
map φ : M → R^s with d ≤ s ≤ 2d ≪ D.
Smooth embeddings preserve the topology of the original M. Nevertheless, in general, they distort
the geometry. Theoretically speaking¹, preserving the geometry of an embedding is embodied in the
concepts of Riemannian metric and isometric embedding. A Riemannian metric g is a symmetric
positive definite tensor field on M which defines an inner product <·,·>_g on the tangent space T_p M
for every point p ∈ M. A Riemannian manifold is a smooth manifold with a Riemannian metric at
every point. A diffeomorphism φ : M → N is called an isometry iff for all p ∈ M, u, v ∈ T_p M
we have <u, v>_{g_p} = <dφ_p u, dφ_p v>_{h_{φ(p)}}. By Nash's Embedding Theorem [13], it is known that
any smooth manifold of class C^k, k ≥ 3 and intrinsic dimension d can be embedded isometrically
in the Euclidean space R^s with s polynomial in d.
In unsupervised learning, it is standard to assume that (M, g₀) is a submanifold of R^D and that it
inherits the Euclidean metric from it². An embedding φ : M → φ(M) = N defines a metric g on
N given by <u, v>_{g(φ(p))} = <dφ⁻¹u, dφ⁻¹v>_{g₀(p)}, called the pushforward Riemannian metric;
(M, g₀) and (N, g) are isometric.
Much previous work in non-linear dimension reduction[16, 20, 19] has been driven by the desire
to find smooth embeddings of low dimension that are isometric in the limit of large n. This work
has met with mixed success. There exists the constructive implementation [19] of Nash?s proof
¹ For a more complete presentation the reader is referred to [8] or [15] or [10].
² Sometimes the Riemannian metric on M is not inherited, but user-defined via a kernel or distance function.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
technique, which guarantees consistency and isometry. However, the algorithm presented falls short
of being practical, as the embedding dimension s it requires is significantly higher than the minimum
necessary, a major drawback in practice. Overall, the algorithm leads to mappings ? that, albeit
having the desired properties, are visually unintuitive, even for intrinsic dimensions as low as d = 1.
There are many algorithms, too many for an exhaustive list, which map the data using a cleverly
chosen reconstruction criterion. The criterion is chosen so that the mapping φ can be obtained as the
unique solution of a "classic" optimization problem, e.g. eigendecomposition for Laplacian Eigenmaps [2], Diffusion Maps [12] and LTSA [21], semidefinite programming for Maximum Variance
Unfolding [20] or Multidimensional Scaling for Isomap [3]. These embedding algorithms sometimes come with guarantees of consistency [2] and, only in restricted cases, isometry [3].
In this paper we propose an approach which departs from both these existing directions. The main
difference, from the algorithmic point of view, is that the loss function we propose does not have a
form amenable to a standard solver (and is not even guaranteed to be convex or unimodal). Thus, we
do not obtain a mapping φ in "one shot", as the previous algorithms do, but by gradual improvements of an initial guess, i.e. by gradient descent. Nevertheless, the loss we define directly measures
the deviation from isometry; therefore, when this loss is (near) 0, (near) isometry is achieved.
The algorithm is initialized with a smooth embedding Y = φ(M) ⊂ R^s, s ≥ d; we define the
objective function Loss(Y) as the averaged deviation of the pushforward metric from isometry. Then
Y is iteratively changed in a direction that decreases Loss. To construct this loss function, we exploit
the results of [15] who showed how a pushforward metric can be estimated, for finite samples and
in any given coordinates, using a discrete estimator of the Laplace-Beltrami operator Δ_M. The
optimization algorithm is outlined in Algorithm 1.
Input : data X ∈ R^{n×D}, kernel function K_h(), weights w_{1:n}, intrinsic dimension d, embedding dimension s,
        initial coordinates Y ∈ R^{n×s}, with Y_{k,:} representing the coordinates of point k.
Init  : Compute Laplacian matrix L ∈ R^{n×n} using X and K_h().
while not converged do
    Compute H = [H_k]_{k=1:n} ∈ R^{n×s×s}, the (dual) pushforward metric at the data points, from Y and L.
    Compute Loss(H_{1:n}) and ∇_Y Loss(H).
    Take a gradient step Y ← Y − η∇_Y Loss(H).
end
Output : Y
Algorithm 1: Outline of the Riemannian Relaxation Algorithm.
A remark on notation is necessary. Throughout the paper, we denote by M, p ∈ M, T_p M, Δ_M a
manifold, a point on it, the tangent subspace at p, and the Laplace-Beltrami operator in the abstract,
coordinate free form. When we describe algorithms acting on data, we will use coordinate and finite
sample representations. The data is X ∈ R^{n×D}, and an embedding thereof is denoted Y ∈ R^{n×s};
rows k of X, Y, denoted X_k, Y_k, are coordinates of data point k, while the columns, e.g. Y^j, represent
functions of the points, i.e. restrictions to the data of functions on M. The construction of L (see below) requires a kernel, which can be the (truncated) Gaussian kernel K_h(z) = exp(−z²/h), |z| < rh,
for some fixed r > 0 [9, 17]. Besides these, the algorithm is given a set of weights w_{1:n}, with Σ_k w_k = 1.
The construction of the loss is based on two main sets of results that we briefly review here. First,
an estimator L of the Laplace-Beltrami operator Δ_M of M, and second, an estimator of the pushforward metric g in the current coordinates Y.
To construct L we use the method of [4], which guarantees that, if the data are sampled from a
manifold M, L converges to Δ_M [9, 17]. Given a set of points in high-dimensional Euclidean space
R^D, represented by the n×D matrix X, construct a weighted neighborhood graph G = ({1 : n}, W)
over them, with W = [W_{kl}]_{k,l=1:n}. The weight W_{kl} between X_{k:} and X_{l:} is the heat kernel [2]
W_{kl} = K_h(‖X_{k:} − X_{l:}‖), with h a bandwidth parameter fixed by the user, and ‖·‖ the Euclidean
norm. Next, construct L = [L_{kl}]_{k,l} of G by

    D = diag(W𝟙),   W̃ = D⁻¹ W D⁻¹,   D̃ = diag(W̃𝟙),   and   L = D̃⁻¹ W̃                (1)

Equation (1) represents the discrete version of the renormalized Laplacian construction from [4].
Note that W, D, D̃, W̃, and L all depend on the bandwidth h via the heat kernel. The consistency of L
has been proved in e.g. [9, 17].
The second fact we use is the relationship between the Laplace-Beltrami operator and the Riemannian metric on a manifold [11]. Based on this, [15] gives a construction method for a discrete
estimator of the Riemannian metric g, in any given coordinate system, from an estimate L of Δ_M.
In a given coordinate representation Y, a Riemannian metric g at each point is an s × s positive
semidefinite matrix of rank d. The method of [15] obtains the matrix Moore-Penrose pseudoinverse
of this metric (which must therefore be inverted to obtain the pushforward metric). We denote this
inverse at point k by H_k; let H = [H_k, k = 1, . . . , n] be the three-dimensional array containing the
inverse for each data point. Note that H is itself the (discrete estimate of) a Riemannian metric,
called the dual (pushforward) metric. With these preliminaries, the method of [15] computes H by

    H^{ij} = ½ [ L(Y^i ⊙ Y^j) − Y^i ⊙ (L Y^j) − Y^j ⊙ (L Y^i) ]                  (2)

where H^{ij} is the vector whose k-th entry is the (i,j)-th element of the dual pushforward metric H
at the point k, and ⊙ denotes element-by-element multiplication.
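To make the estimator concrete, here is a minimal NumPy sketch of (2); the function name and the dense-matrix formulation are ours, and a practical implementation would exploit the sparsity of L.

```python
import numpy as np

def dual_metric(L, Y):
    """Dual pushforward metric estimator of eq. (2).

    L : (n, n) Laplacian estimate; Y : (n, s) embedding coordinates.
    Returns H : (n, s, s), one symmetric matrix H_k per data point.
    """
    n, s = Y.shape
    H = np.zeros((n, s, s))
    LY = L @ Y                      # L applied to each coordinate function Y^j
    for i in range(s):
        for j in range(i, s):
            hij = 0.5 * (L @ (Y[:, i] * Y[:, j])
                         - Y[:, i] * LY[:, j] - Y[:, j] * LY[:, i])
            H[:, i, j] = hij
            H[:, j, i] = hij        # the metric is symmetric in (i, j)
    return H
```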
2
The objective function Loss
The case s = d (embedding dimension equals intrinsic dimension). Under this condition, it
can be shown [10] that φ : M → R^d is an isometry iff g_p, p ∈ M, expressed in a normal coordinate
system equals the unit matrix I_d. Based on this observation, it is natural to measure the quality of the
data embedding Y as the departure of the Riemannian metric obtained via (2) from the unit matrix.
This is the starting idea for the distortion measure we propose to optimize. We develop it further as
follows. First, we choose to use the dual of g, evaluated by H, instead of the pushforward metric itself.
Naturally H_k = I_d iff H_k⁻¹ = I_d, so the dual metric identifies isometry as well. When no isometric
transformation exists, it is likely that optimizing w.r.t. g and optimizing w.r.t. h will arrive at different
embeddings. There is no mathematically compelling reason, however, to prefer optimizing one
over the other. We choose to optimize w.r.t. h for three reasons: (1) it is computationally faster, (2) it
is numerically more stable, and (3) in our experience users find H more interpretable.³
Second, we choose to measure the distortion of H_k by ‖H_k − I_d‖, where ‖·‖ denotes the matrix spectral
norm. This choice will be motivated shortly. Third, we choose the weights w_{1:n} to be proportional
to D̃ from (1). As [4] show, these values converge to the sampling density π on M. Putting these
together, we obtain the loss function
    Loss(Y; L, w) = Σ_{k=1}^n w_k ‖H_k − I_d‖²                                    (3)
To motivate the choice of a "squared loss" instead of simply using ‖H_k − I_d‖, notice (the proofs are
straightforward) that ‖·‖ is not differentiable at 0, but ‖·‖² is.
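As an illustration, the loss (3) for s = d can be evaluated directly from the output of the estimator above; this sketch assumes the weights w sum to one.

```python
import numpy as np

def rr_loss(H, w, d):
    """Distortion Loss(Y; L, w) of eq. (3) for s = d, using the spectral norm."""
    I = np.eye(d)
    return sum(w[k] * np.linalg.norm(H[k] - I, ord=2) ** 2 for k in range(len(w)))
```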
A natural question to ask about Loss is if it is convex. The following proposition proved in the
Supplement summarizes a set of relevant convexity facts.
Proposition 1 Denote by λ_{1:d}(H_k) ≥ 0 the eigenvalues of H_k, in decreasing order, and assume Y
is in a compact, convex set. Then
1. λ₁(H_k), λ₁(H_k) − λ_d(H_k), and λ₁(H_k) − Σ_{d′=1}^d λ_{d′}(H_k) are convex in Y.
2. ‖H_k − I_d‖ is convex in Y for (λ₁(H_k) + λ_d(H_k))/2 ≥ 1 and concave otherwise.
3. ‖H_k − I_d‖² is convex in Y whenever ‖H_k − I_d‖ is convex and differentiable in Y.
This proposition shows that Loss may not be convex near its minimum, and moreover that squaring
the loss only improves convexity.
Choosing the right measure of distortion. The norm of a Hermitian bilinear functional (i.e.
symmetric tensor of order 2) g : R^s × R^s → R is defined as sup_{u≠0} |g(u, u)|/‖u‖². In a
fixed orthonormal base of R^s, g(u, v) = u^⊤ G v and ‖g‖ = sup_{u≠0} |u^⊤ G u|/‖u‖². One can define norms
with respect to any metric g₀ on R^s (where g₀ is represented in coordinates by G₀, a symmetric,
positive definite matrix), by ‖u‖²_{G₀} = u^⊤ G₀ u, respectively

    ‖g‖_{G₀} = sup_{u≠0} |u^⊤ G u| / ‖u‖²_{G₀} = sup_{ũ≠0} |ũ^⊤ G₀^{−1/2} G G₀^{−1/2} ũ| / ‖ũ‖² = λ_max(G₀^{−1/2} G G₀^{−1/2}).

In particular, since any Riemannian
metric at a point k is a g as above, setting g and g₀ respectively to H_k and I_d, we measure the operator norm of the distortion by ‖H_k − I_d‖. In other words, the appropriate operator norm we seek can
be expressed as a matrix spectral norm.

³ H_k represents the direction & degree of distortion, as opposed to the scaling required to "correct" the space.
The expected loss over the data set, given a distribution represented by the weights w1:n is then
identical to the expression of Loss in (3). If the weights are computed as in (1), it is easy to see that
the loss function in (3) is the finite sample version of the squared L2 distance between h and g0 on
the space of Riemannian metrics on M, w.r.t. base measure π dV_{g₀}:

    ‖h − g₀‖²_{g₀} = ∫_M ‖h − g₀‖²_{g₀} π dV_{g₀},   with dV_{g₀} the volume element on M.        (4)
Defining Loss for embeddings with s > d dimensions. Consider G, G₀ ∈ R^{s×s}, two symmetric
matrices with G₀ positive semidefinite of rank d < s. We would like to extend the G₀-norm of G to
this case. We start with the family of norms ‖·‖_{G₀+λI_s} for λ > 0 and we define

    ‖G‖_{G₀} = lim_{λ→0} ‖G‖_{G₀+λI_s}.                                            (5)
Proposition 2 Let G, G₀ ∈ R^{s×s} be symmetric matrices, with G₀ positive semidefinite of rank
d < s, and let λ > 0 and φ(u, λ) = (u^⊤ G u)/(u^⊤ G₀ u + λ‖u‖²). Then,
1. ‖G‖_{G₀+λI_s} = ‖G̃‖₂ with G̃ = (G₀ + λI)^{−1/2} G (G₀ + λI)^{−1/2}.
2. If ‖G‖_{G₀+λI_s} < r, then φ_λ(G) < λr, with φ_λ(G) = sup_{v∈Null(G₀)} φ(v, λ).
3. ‖·‖_{G₀} is a matrix norm that takes infinite values when Null(G₀) ⊄ Null(G).
Hence, ‖·‖_{G₀+λI_s} can be computed as the spectral norm of a matrix. The computation of ‖·‖_{G₀} is
similar, with the additional step of checking first if Null(G₀) ⊄ Null(G), in which case we output
the value ∞. Let B_λ(0, r) (respectively B(0, r)) denote the r-radius ball centered at 0 in ‖·‖_{G₀+λI_s} (respectively ‖·‖_{G₀}).
From Proposition 2 it follows that if G ∈ B_λ(0, r) then φ_λ(G) < λr, and if G ∈ B(0, r) then
Null(G₀) ⊆ Null(G). In particular, if rank G = rank G₀ then Null(G) = Null(G₀).
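The soft norm of Proposition 2 reduces to an ordinary spectral-norm computation, as the following sketch (our own naming) shows for λ > 0.

```python
import numpy as np

def extended_norm(G, G0, lam):
    """||G||_{G0 + lam*I_s} via Proposition 2, item 1."""
    s = G.shape[0]
    evals, evecs = np.linalg.eigh(G0 + lam * np.eye(s))     # symmetric PSD + lam*I
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T     # (G0 + lam*I)^(-1/2)
    G_tilde = inv_sqrt @ G @ inv_sqrt
    return np.linalg.norm(G_tilde, ord=2)
```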
To define the loss for s > d we set G = H_k and G₀ = U_k U_k^⊤, with U_k an orthonormal basis for
T_k M, the tangent subspace at k. The norms ‖·‖_{G₀+λI_s}, ‖·‖_{G₀} act as soft and hard barrier functions
constraining the span of H_k to align with the tangent subspace of the data manifold.

    Loss(Y; L, w, d, ε_orth) = Σ_{k=1}^n w_k ‖G̃_k‖²,  where  G̃_k = (U_k U_k^⊤ + ε²_orth I_s)^{−1/2} (H_k − U_k U_k^⊤) (U_k U_k^⊤ + ε²_orth I_s)^{−1/2}.        (6)

3
Optimizing the objective
Let L_k denote the k-th row of L; then H_k can be rewritten in the convenient form

    H_k(Y) = ½ Y^⊤ [ diag(L_k) − (e_k e_k^⊤ L) − (e_k e_k^⊤ L)^⊤ ] Y ≝ ½ Y^⊤ L̃_k Y,          (7)

where e_k refers to the k-th standard basis vector of R^n and L̃_k is a symmetric positive semi-definite
matrix precomputed from entries in L; L̃_k has non-zero rows only for the neighbors of k.
Proposition 3 Let Loss_k denote term k of Loss. If s = d, the gradient of Loss_k as given by (3) is

    ∂Loss_k/∂Y = 2 w_k λ̃_k L̃_k Y u_k u_k^⊤,                                        (8)

with λ̃_k the largest eigenvalue of H_k − I_d and u_k the corresponding eigenvector.
If s > d, the gradient of Loss_k of (6) is

    ∂Loss_k/∂Y = 2 w_k λ̃_k L̃_k Y Π_k u_k u_k^⊤ Π_k,                                 (9)

where Π_k = (U_k U_k^⊤ + ε²_orth I_s)^{−1/2}, λ̃_k is the largest eigenvalue of G̃_k of (6), and u_k is the
corresponding eigenvector.
When embedding in s > d dimensions, the loss function depends at each point k on finding the
d-dimensional subspace Uk . Mathematically, this subspace coincides with the span of the Jacobian
DYk which can be identified with the d-principal subspace of Hk . When computing the gradient of
Loss we assume that U1:n are fixed. Since the derivatives w.r.t Y are taken only of H and not of the
tangent subspace Uk , the algorithm below is actually an alternate minimization algorithm, which
reduces the cost w.r.t Y in one step, and w.r.t U1:n in the alternate step.
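A dense sketch of the gradient computation (8) for s = d is given below; we take the eigenvalue of largest magnitude of H_k − I_d, which is the one attaining the spectral norm, and the matrices L̃_k would be sparse in practice.

```python
import numpy as np

def rr_gradient(H, L_tilde, Y, w, d):
    """Gradient of the s = d loss, following eq. (8).

    H : (n, d, d) dual metrics; L_tilde : list of n precomputed matrices
    L~_k from (7); Y : (n, d) coordinates; w : (n,) weights.
    """
    grad = np.zeros_like(Y)
    I = np.eye(d)
    for k in range(Y.shape[0]):
        evals, evecs = np.linalg.eigh(H[k] - I)
        j = int(np.argmax(np.abs(evals)))       # eigenvalue attaining ||H_k - I||
        lam, u = evals[j], evecs[:, j]
        grad += 2.0 * w[k] * lam * (L_tilde[k] @ Y) @ np.outer(u, u)
    return grad
```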
3.1
Algorithm
We optimize the loss (3) or (6) by projected gradient descent with line search (subject to the observation above). The projection consists of imposing Σ_k Y_k = 0, which we enforce by centering ∇Y
before taking a step. This eliminates the degeneracy of the Loss in (3) and (6) w.r.t. constant shifts
in Y. To further improve the trade-off between time per iteration and number of iterations,
we found that a heavy-ball method with parameter α is effective. At each iteration, computing the
gradient is O((S + s³)n), where S is the number of nonzero entries of L.
Input : data X, kernel function K_h(), initial coordinates Y⁰, weights w_{1:n}, intrinsic dimension d,
        orthonormal tolerance ε_orth, heavy-ball parameter α ∈ [0, 1)
Init  : Compute graph Laplacian L by (1) and matrices L̃_{1:n} as in (7). Set S = 0.
while not converged do
    Compute ∇Loss:
    for all k do
        1. Calculate H_k via (2);
        2. If s > d:
            (a) Compute U_k by SVD from H_k;
            (b) Compute the gradient ∇Loss_k(Y) using (9);
        3. Else (s = d): calculate the gradient ∇Loss_k(Y) using (8);
        4. Add ∇Loss_k(Y) to the total gradient;
    end
    Take a step in Y:
        1. Compute the projected direction S ← (I_n − e_n e_n^⊤)∇Loss + αS;
        2. Find the step size η by line search and update Y ← Y − ηS;
end
Output : Y
Algorithm 2: RIEMANNIAN RELAXATION (RR)

3.2
For large or noisy data
Here we describe an extension of the RR Algorithm which can naturally adapt to large or noisy data,
where the manifold assumption holds only approximately. The idea is to subsample the data, but in
a highly non-uniform way that improves the estimation of the geometry.
A simple preliminary observation is that, when an embedding is smooth, optimizing the loss on a
subset of the data will be sufficient. Let I ⊆ {1, . . . , n} be a set of size n′ < n. The subsampled
loss Loss_I will be computed only for the points k′ ∈ I. If every point k has O(d) neighbors in I,
this assures that the gradient of Loss_I will be a good approximation of ∇Loss at point k, even if
k ∉ I and Loss_I does not have a term containing H_k. To optimize Loss_I by RR, it is sufficient
to run the "for" loop over k′ ∈ I. Algorithm PCS-RR below describes how we choose a "good"
subsample I, with the help of the PRINCIPALCURVES algorithm of [14].
Input : data X, kernel function K_h(), initial coordinates Y⁰, intrinsic dimension d, subsample size n′, other
        parameters for RR
Compute X̃ = PRINCIPALCURVES(X, K_h, d)
Take a uniform sample I₀ of size n′ from {1, . . . , n} (without replacement).
for k′ in I₀ do
    Find X_l, the nearest neighbor in X of X̃_{k′}, and add l to I (removing duplicates)
end
Output : Y = RR(Y⁰, K_h, d, I, . . .)
Algorithm 3: PRINCIPALCURVES-RIEMANNIAN RELAXATION (PCS-RR)
[Figure 1: four panels — "sphere + noise", "hourglass + noise", "final embedding", and "σ vs. (log10) loss and MSE" (curves of log10(MSE) and log10(loss) against the noise standard deviation).]
Figure 1: Hourglass to sphere. From left to right: target Y (noisy sphere), initialization Y0 of RR (noisy
hourglass), output of RR, mean-squared error and Loss vs. noise level σ (on a log10 scale). Convergence of
RR was achieved after 400 iterations.
Informally speaking, PRINCIPALCURVES uses a form of Mean-Shift to obtain points in the d-dimensional manifold of highest density in the data. The result is generally biased; however, [7]
have shown that this algorithm offers a very advantageous bias-variance trade-off in the case of manifolds with noise. We use the output Ỹ of PRINCIPALCURVES to find a subset of points that (1) lie
in a high density region relative to most directions in R^D and (2) are "in the middle" of their neighbors, or more formally, have neighborhoods of dimension at least d. In other words, this is a good
heuristic to avoid "border effects", or other regions where the d-manifold assumption is violated.
4
Experimental evaluation
Hourglass to sphere illustrates how the algorithm works for s = 3, d = 2. The data X is sampled
uniformly from a sphere of radius 1 with intrinsic dimension d = 2. We sample n = 10000 points
from the sphere and add i.i.d. Gaussian noise with covariance Σ = (σ²/s) I_s (see footnote 4), estimating the Laplacian L on
the noisy data X. We initialize with a noisy "hourglass" shape in s = 3 dimensions, with the same
noise distribution as the sphere. If the algorithm works correctly, by using solely the Laplacian and
weights from X, it should morph the hourglass Y0 back into a sphere. The results after convergence
at 400 iterations are shown in Fig. 1 (and an animation of this convergence in the Supplement). We
see that RR not only recovers the sphere, but it also suppresses the noise.
The next two experiments compare RR to several embedding algorithms w.r.t. geometric recovery. The algorithms are Isomap, Laplacian Eigenmaps, HLLE [6], and MVU⁵. The embeddings
Y_{LE,MVU,HLLE} need to be rescaled before being evaluated, and we use a Procrustes transformation to the original data. The algorithms are compared w.r.t. the dual metric distortion Loss, and w.r.t.
mean squared error in pairwise distance (the loss optimized by Isomap⁶). This is

    dis(Y, Y^true) = 2/(n(n−1)) Σ_{k≠k′} ( ‖Y_k − Y_{k′}‖ − ‖Y^true_k − Y^true_{k′}‖ )²        (10)
where Y is the embedding resulting from the chosen method and Y^true are the true noiseless coordinates. Note that none of Isomap, MVU, or HLLE could have been tested on the hourglass-to-sphere
data of the previous example, because they work only for s = d. The sample size is n = 3000 in
both experiments, and noise is added as described above.
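The distance-based score (10) can be computed directly; the sketch below averages over unordered pairs.

```python
import numpy as np

def dis(Y, Y_true):
    """Mean squared error in pairwise distances, eq. (10)."""
    n = Y.shape[0]
    iu = np.triu_indices(n, k=1)                     # unordered pairs k < k'
    D = np.linalg.norm(Y[iu[0]] - Y[iu[1]], axis=1)
    D_true = np.linalg.norm(Y_true[iu[0]] - Y_true[iu[1]], axis=1)
    return 2.0 / (n * (n - 1)) * np.sum((D - D_true) ** 2)
```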
Flat "swiss roll" manifold, s = d = 2. The results are displayed in Fig. 2.
Curved "half sphere" manifold, s = d = 2. Isometric embedding into 2D is not possible. We
examine which of the algorithms achieves the smallest distortions in this scenario. The true distances
were computed as arc-lengths on the half-sphere. The results are displayed in Fig 2.
RR was initialized at each method. In almost every initialization and noise level, RR achieves a
decrease in dis, in some cases significant decreases. Isomap also performs well and even though
RR optimizes a different loss function, it never increases dis and often improves on it. This demonstrates the ability of the Riemannian metric to encode simultaneously all aspects of manifold geometry.

⁴ For this artificial noise, adding dimensions beyond s has no effect except to increase σ.
⁵ Embeddings were computed using drtoolbox: https://lvdmaaten.github.io/drtoolbox/
⁶ Isomap estimates the true distances using graph shortest paths.
[Figure 2 plots: methods Isomap, Laplacian Eigenmaps (Leigs), MVU, HLLE, and their RR-relaxed versions; middle and bottom rows show average distortion dis (log10) and average Loss (log10) against the noise level σ, for the swiss hole (left) and half sphere (right).]
Figure 2: Swiss hole (left) & half sphere (right). Top plots display example initial embeddings and their
Riemannian Relaxed versions. Middle row displays dis value vs. noise level σ. Bottom row displays Loss
value vs. noise level σ. As RR was initialized at each method, dashed lines indicate relaxed embeddings.
Convergence of RR varies with the initialization but was in all cases faster than Isomap. The
extension of RR to PCS-RR allows for scaling to much larger data sets.
4.1
Visualizing the main SDSS galaxy sample in spectra space
The data consists of spectra of galaxies from the Sloan Digital Sky Survey⁷ [1]. We extracted a
subset of spectra whose SNR was sufficiently high, known as the main sample. This set contains
675,000 galaxies observed in D = 3750 spectral bins, preprocessed by first moving them to a
common rest-frame wavelength and filling-in missing data following [18] but using the more sophisticated weighted PCA algorithm of [5], before computing a sparse neighborhood graph and pairwise
distances between neighbors in this graph. A log-log plot of the average number neighbors m(r)
vs. neighborhood radius r (shown in the Supplement), indicates that the intrinsic dimension of these
data varies with the scale r. In particular, in order to support m = O(d) neighbors, the radius must
be above 60, in which case d ≈ 3. We embedded the whole data set by Laplacian Eigenmaps, obtaining the graph in Fig. 3 a. This figure strongly suggests that d is not constant for this data cloud,
and that the embedding is not isometric (Fig. 3 b). We "rescaled" the data along the three evident
⁷ www.sdss.org
[Figure 3: panels a–d; the color scales show log10(‖H_k‖) in panels b–c and log10(Hα emission) in panel d.]
Figure 3: a: Initial LE embedding from D = 3750 to s = 3 dimensions, with the principal curves Ỹ
superimposed. For clarity, we only show a small subsample of Y⁰; a larger one is in the Supplement; b:
same embedding, only points "on" principal curves, colored by log10 ‖H_k‖ (hence, 0 represents isometry); c:
same points as in (b), after RR (color on the same scale as in (b)); d: 40,000 galaxies in the coordinates from (c),
colored by the strength of Hydrogen α emission, a very nonlinear feature which requires dozens of dimensions
to be captured in a linear embedding. Convergence of PCS-RR was achieved after 1000 iterations and took 2.5
hours optimizing a Loss with n′ = 2000 terms over the n × s = 10⁵ × 3 coordinates, corresponding to the
highest density points. (Please zoom for better viewing.)
principal curves shown in Figure 3 a by running PCS-RR(Y, n = 10⁵, n′ = 2000, s = 3, d = 1).
In the new coordinates (Fig. 3 c), Y is now close to isometric along the selected curves, while in
Fig. 3 b, ‖H_k‖ was in the thousands on the uppermost "arm". This means that, at the largest scale,
the units of distance in the space of galaxy spectra are being preserved (almost) uniformly along
the sequences, and that they correspond to the distances in the original D = 3750 data. Moreover,
we expect the distances along the final embedding to be closer on average to the true distance, because of the denoising effect of the embedding. Interpreting the coordinates along these ?arms? is in
progress. As a next step of the analysis, RR with s = d = 3 will be used to rescale the high-density
region at the confluence of the three principal curves.
5
Discussion
Contributions: we propose a new, natural way to measure the distortion from isometry of any
embedding Y ∈ R^{n×s} of a data set X ∈ R^{n×D}, and study its properties. The distortion loss is based
on an estimate of the push-forward Riemannian metric into Euclidean space R^s.
The RR we propose departs from existing non-linear embedding algorithms in several ways. First,
instead of a heuristically chosen loss, like pairwise distances, or local linear reconstruction error, it
directly optimizes the (dual) Riemannian metric of the embedding Y. When this is successful, and
the loss is 0 all geometric properties (lengths, angles, volumes) are preserved simultaneously. From
the computational point of view, the non-convex loss is optimized iteratively by projected gradient.
Third, our algorithm explicitly requires both an embedding dimension s and an intrinsic dimension
d as inputs. Estimating the intrinsic dimension of a data set is not a solved problem, and beyond
the scope of this work. However, as a rule of thumb, we propose choosing the smallest d for which
Loss is not too large, for s fixed, or, if d is known (something that all existing algorithms assume),
increasing s until the loss becomes almost 0. Most existing embedding algorithms, as Isomap, LLE,
HLLE, MVU, LTSA only work in the case s = d, while Laplacian Eigenmaps/Diffusion Maps
requires only s but does not attempt to preserve geometric relations. Finally, RR is computationally
competitive with existing algorithms, and can be seamlessly adapted to a variety of situations arising
in the analysis of real data sets.
References
[1] K. N. Abazajian et al. The Seventh Data Release of the Sloan Digital Sky Survey. Astrophysical Journal Supplement Series, 182:543–558, June 2009.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2002.
[3] M. Bernstein, V. de Silva, J. C. Langford, and J. Tenenbaum. Graph approximations to geodesics on embedded manifolds. Science, 290, 2000.
[4] R. R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):6–30, 2006.
[5] L. Delchambre. Weighted principal component analysis: a weighted covariance eigendecomposition approach. Monthly Notices of the Royal Astronomical Society, 446(4):3545–3555, 2015.
[6] David L. Donoho and Carrie Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc Natl Acad Sci, 100(10):5591–5596, May 2003.
[7] Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli, and Larry Wasserman. Minimax manifold estimation. Journal of Machine Learning Research, 13:1263–1291, May 2012.
[8] M. Hein and J.-Y. Audibert. Intrinsic dimensionality estimation of submanifolds in R^d. In Proceedings of the 22nd International Conference on Machine Learning, ICML, pages 289–296, 2005.
[9] M. Hein, J.-Y. Audibert, and U. von Luxburg. Graph Laplacians and their convergence on random neighborhood graphs. Journal of Machine Learning Research, 8:1325–1368, 2007.
[10] J. M. Lee. Riemannian Manifolds: An Introduction to Curvature. Springer, New York, 1997.
[11] J. M. Lee. Introduction to Smooth Manifolds. Springer, New York, 2003.
[12] B. Nadler, S. Lafon, R. R. Coifman, and Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Applied and Computational Harmonic Analysis, 21:113–127, 2006.
[13] J. Nash. The imbedding problem for Riemannian manifolds. Annals of Mathematics, 63, pages 20–63, 1956.
[14] Umut Ozertem and Deniz Erdogmus. Locally defined principal curves and surfaces. Journal of Machine Learning Research, 12:1249–1286, 2011.
[15] Dominique Perrault-Joncas and Marina Meila. Non-linear dimension reduction: Riemannian metric estimation and the problem of geometric recovery. arXiv:1305.7255v1, 2013.
[16] J. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[17] D. Ting, L. Huang, and M. I. Jordan. An analysis of the convergence of graph Laplacians. In ICML, pages 1079–1086, 2010.
[18] Jake Vanderplas and Andrew Connolly. Reducing the dimensionality of data: Locally linear embedding of Sloan galaxy spectra. The Astronomical Journal, 138(5):1365, 2009.
[19] Nakul Verma. Distance preserving embeddings for general n-dimensional manifolds. Journal of Machine Learning Research, 14:2415–2448, 2013.
[20] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70:77–90, 2006. doi:10.1007/s11263-005-4939-z.
[21] Z. Zhang and H. Zha. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM J. Scientific Computing, 26(1):313–338, 2004.
6,120 | 6,536 | Tracking the Best Expert in Non-stationary Stochastic
Environments
Chen-Yu Wei
Yi-Te Hong
Chi-Jen Lu
Institute of Information Science
Academia Sinica, Taiwan
{bahh723, ted0504, cjlu}@iis.sinica.edu.tw
Abstract
We study the dynamic regret of the multi-armed bandit and experts problems in nonstationary stochastic environments. We introduce a new parameter Λ, which
measures the total statistical variance of the loss distributions over T rounds of the
process, and study how this amount affects the regret. We investigate the interaction
between Λ and Γ, which counts the number of times the distributions change, as
well as Λ and V, which measures how far the distributions deviate over time. One
striking result we find is that even when Γ, V, and Λ are all restricted to constant,
the regret lower bound in the bandit setting still grows with T. The other highlight
is that in the full-information setting, a constant regret becomes achievable with
constant Γ and Λ, as it can be made independent of T, while with constant V and
Λ, the regret still has a T^{1/3} dependency. We not only propose algorithms with
upper bound guarantees, but prove their matching lower bounds as well.
1
Introduction
Many situations in our daily life require us to make repeated decisions which result in some losses
corresponding to our chosen actions. This can be abstracted as the well-known online decision
problem in machine learning [5]. Depending on how the loss vectors are generated, two different
worlds are usually considered. In the adversarial world, loss vectors are assumed to be deterministic
and controlled by an adversary, while in the stochastic world, loss vectors are assumed to be sampled
independently from some distributions. In both worlds, good online algorithms are known which
can achieve a regret of about √T over T time steps, where the regret is the difference between the
total loss of the online algorithm and that of the best offline one. Another distinction is about the
information the online algorithm can receive after each action. In the full-information setting, it gets
to know the whole loss vector of that step, while in the bandit setting, only the loss value of the
chosen action is received. Again, in both settings, a regret of about √T turns out to be achievable.
While the regret bounds remain in the same order in those general scenarios discussed above, things
become different when some natural conditions are considered. One well-known example is that in
the stochastic multi-armed bandit (MAB) problem, when the best arm (or action) is substantially
better than the second best, with a constant gap between their means, then a much lower regret, of the
order of log T , becomes possible. This motivates us to consider other possible conditions which can
have finer characterization of the problem in terms of the achievable regret.
In the stochastic world, most previous works focused on the stationary setting, in which the loss (or
reward) vectors are assumed to be sampled from the same distribution for all time steps. With this
assumption, although one needs to balance between exploration and exploitation in the beginning,
after some trials, one can be confident about which action is the best and rest assured that there are
no more surprises. On the other hand, the world around us may not be stationary, in which existing
learning algorithms for the stationary case may no longer work. In fact, in a non-stationary world, the
dilemma between exploration and exploitation persists as the underlying distribution may drift as
time evolves. How does the non-stationarity affect the achievable regret? How does one measure the
degree of non-stationarity?
In this paper, we answer the above questions through the notion of dynamic regret, which measures
the algorithm's performance against an offline algorithm allowed to select the best arm at every step.
Related Works. One way to measure the non-stationarity of a sequence of distributions is to count
the number of times the distribution at a time step differs from its previous one. Let Γ ≥ 1 be
this number, so that the whole time horizon can be partitioned into Γ intervals,
each having a stationary distribution. In the bandit setting, a regret of about √(ΓT) is achieved by the
EXP3.S algorithm in [2], as well as the discounted UCB and sliding-window UCB algorithms in
[8]. The dependency on T can be refined in the full-information setting: AdaNormalHedge [10]
and Adapt-ML-Prod [7] can both achieve regret in the form of √(ΓC), where C is the total first-order
and second-order excess loss respectively, which is upper-bounded by T. From a slightly different
Online Mirror Descent approach, [9] can also achieve a regret of about √(ΓD), where D is the sum of
differences between consecutive loss vectors.
Another measure of non-stationarity, denoted by V , is to compute the difference between the means
of consecutive distributions and sum them up. Note that this allows the possibility for the best arm to
change frequently, with a very large Γ, while still having similar distributions with a small V. For
such a measure V, [3] provided a bandit algorithm which achieves a regret of about V^{1/3}T^{2/3}. This
regret upper bound is unimprovable in general even in the full-information setting, as a matching
lower bound was shown in [4]. Again, [9] refined the upper bound in the full-information setting
through the introduction of D, achieving a regret of about ∛(V′DT), for a parameter V′ different but
related to V: V′ calculates the sum of differences between consecutive realized loss vectors, while V
measures that between mean loss vectors. This makes the results of [3] and [9] incomparable. The
problem stems from the fact that [9] considers the traditional adversarial setting, while [3] studies the
non-stationary stochastic setting. In this paper, we will provide a framework that bridges these two
seemingly disparate worlds.
Our Results. We base ourselves in the stochastic world with non-stationary distributions, characterized by the parameters Γ and V. In addition, we introduce a new parameter Λ, which measures
the total statistical variance of the distributions. Note that the traditional adversarial setting corresponds
to the case with Λ = 0 and Γ ≈ V ≈ T, while the traditional stochastic setting has Λ ≤ T and
Γ = V = 1. Clearly, with a smaller Λ, the learning problem becomes easier, and we would like to
understand the tradeoff between Λ and other parameters, including Γ, V, and T. In particular, we
would like to know how the bounds described in the related works would change. Would all the
dependency on T be replaced by Λ, or would only some partial dependency on T be shifted to Λ?
First, we consider the effect of the variance Λ with respect to the parameter Γ. We show that in
the full-information setting, a regret of about √(ΓΛ) + Γ can be achieved, which is independent of
T. On the other hand, we show a sharp contrast: in the bandit setting, the dependency on T is
unavoidable, and a lower bound of the order of √(ΓT) exists. That is, even when there is no variance in
the distributions, with Λ = 0, and the distributions only change once, with Γ = 2, any bandit algorithm
cannot avoid a regret of about √T, while a full-information algorithm can achieve a constant regret
independent of T.
Next, we study the tradeoff between Λ and V. We show that in the bandit setting, a regret of about
∛(ΛVT) + √(VT) is achievable. Note that this recovers the V^{1/3}T^{2/3} regret bound of [3], as Λ is at
most of the order of T, but our bound becomes better when Λ is much smaller than T. Again, one
may notice the dependency on T and wonder if this can also be removed in the full-information
setting. We show that in the full-information setting, the regret upper bound and lower bound are
both about ∛(ΛVT) + V. Our upper bound is incomparable to the ∛(V′DT) bound of [9], since their
adversarial setting corresponds to Λ = 0 and their D can be as large as T in our setting. Moreover,
we see that while the full-information regret bound is slightly better than that in the bandit setting,
there is still an unavoidable T^{1/3} dependency.
Our results provide a big picture of the regret landscape in terms of the parameters Γ, Λ, V, and T,
in both full-information and bandit settings. A table summarizing our bounds as well as previous
ones is given in Appendix A in the supplementary material. Finally, let us remark that our effort
mostly focuses on characterizing the achievable (minimax) regrets, and most of our upper bounds are
achieved by algorithms which need the knowledge of the related parameters and may not be practical.
To complement this, we also propose a parameter-free algorithm, which still achieve a good regret
bound and may have independent interest of its own.
2
Preliminaries
Let us first introduce some notation. For an integer K > 0, let [K] denote the set {1, . . . , K}. For
a vector ℓ ∈ R^K, let ℓ_i denote its i-th component. When we need to refer to a time-indexed vector
ℓ_t ∈ R^K, we will write ℓ_{t,i} to denote its i-th component. We will use the indicator function 1_C for a
condition C, which gives the value 1 if C holds and 0 otherwise. For a vector ℓ, we let ‖ℓ‖_b denote its
L_b-norm. While the standard notation O(·) is used to hide constant factors, we will use the notation Õ(·)
to hide logarithmic factors.
Next, let us describe the problem we study in this paper. Imagine that a learner is given the choice
of a total of K actions, and has to play iteratively for a total of T steps. At step t, the learner
needs to choose an action a_t ∈ [K], and then suffers a corresponding loss ℓ_{t,a_t} ∈ [0, 1]; each loss ℓ_{t,i} is
independently drawn from a non-stationary distribution with expected loss E[ℓ_{t,i}] = μ_{t,i}, which
may drift over time. After that, the learner receives some feedback from the environment. In the
full-information setting, the feedback gives the whole loss vector ℓ_t = (ℓ_{t,1}, ..., ℓ_{t,K}), while in
the bandit setting, only the loss ℓ_{t,a_t} of the chosen action is revealed. A standard way to evaluate
the learner's performance is to measure her (or his) regret, which is the difference between the
total loss she suffers and that of an offline algorithm. While most prior works consider offline
algorithms which can only play a fixed action for all the steps, we consider stronger offline algorithms
which can take different actions in different steps. Our consideration is natural for non-stationary
distributions, although this would make the regret large when compared to such stronger offline
algorithms. Formally, we measure the learner's performance by its expected dynamic pseudo-regret, defined as

    Σ_{t=1}^T E[ℓ_{t,a_t} − ℓ_{t,u*_t}] = Σ_{t=1}^T (μ_{t,a_t} − μ_{t,u*_t}),

where u*_t = argmin_i μ_{t,i} is the best action at step t. For convenience, we will simply refer to it as the regret of the learner later in our paper.
We will consider the following parameters characterizing different aspects of the environments:
    Γ = 1 + Σ_{t=2}^T 1_{μ_t ≠ μ_{t−1}},   V = Σ_{t=1}^T ‖μ_t − μ_{t−1}‖_∞,   and   Λ = Σ_{t=1}^T E[‖ℓ_t − μ_t‖₂²],        (1)

where we let μ₀ be the all-zero vector. Here, Γ ≥ 1 is the number of times the distributions switch,
V measures the distance the distributions deviate, and Λ is the total statistical variance of these T
distributions. We will call distributions with a small Γ switching distributions, while we will call
distributions with a small V drifting distributions and call V the total drift of the distributions.
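For concreteness, the three parameters of (1) can be computed as follows; the function name is ours, and Λ is estimated here by a single-sample plug-in for the expectation.

```python
import numpy as np

def nonstationarity_params(mu, losses):
    """Gamma, V, and an empirical Lambda from eq. (1).

    mu : (T, K) mean loss vectors; losses : (T, K) realized loss vectors.
    """
    gamma = 1 + int(np.sum(np.any(mu[1:] != mu[:-1], axis=1)))
    mu0 = np.vstack([np.zeros(mu.shape[1]), mu])        # prepend mu_0 = 0
    v = float(np.sum(np.max(np.abs(mu0[1:] - mu0[:-1]), axis=1)))
    lam = float(np.sum(np.linalg.norm(losses - mu, axis=1) ** 2))
    return gamma, v, lam
```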
Finally, we will need the following large deviation bound, known as empirical Bernstein inequality.
Theorem 2.1. [11] Let X = (X₁, ..., X_n) be a vector of independent random variables taking
values in [0, 1], and let V̂_X = Σ_{1≤i<j≤n} (X_i − X_j)² / (n(n − 1)). Then for any δ > 0, we have

    Pr[ | Σ_{i=1}^n (E[X_i] − X_i)/n | > ε(n, V̂_X, δ) ] ≤ δ,   for   ε(n, V̂, δ) = √(2V̂ log(2/δ)/n) + 7 log(2/δ)/(3(n − 1)).
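The confidence radius ε of Theorem 2.1 is used repeatedly below; a direct transcription, with our convention of a vacuous radius for fewer than two samples:

```python
import numpy as np

def bernstein_radius(n, v_hat, delta):
    """eps(n, V_hat, delta) from Theorem 2.1 (empirical Bernstein inequality)."""
    if n <= 1:
        return 1.0
    return (np.sqrt(2.0 * v_hat * np.log(2.0 / delta) / n)
            + 7.0 * np.log(2.0 / delta) / (3.0 * (n - 1)))
```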
3
Algorithms
We would like to characterize the achievable regret bounds for both switching and drifting distributions, in both full-information and bandit settings. In particular, we would like to understand the
interplay among the parameters ?, V, ?, and T , defined in (1). The only known upper bound which is
good enough for our purpose is that by [8] for switching distributions in the bandit setting, which is
close to the lower bound in our Theorem 4.1. In subsection 3.1, we provide a bandit algorithm for
drifting distributions which achieves an almost optimal regret upper bound, when given the parameters
Algorithm 1 Rerun-UCB-V
Initialization: Set B according to (2) and δ = 1/(KT).
for m = 1, . . . , T/B do
    for t = (m − 1)B + 1, . . . , mB do
        Choose arm a_t := argmin_i (μ̂_{t,i} − ε_{t,i}), with μ̂_{t,i} and ε_{t,i} computed according to (3).
    end for
end for
V, Λ, T. In subsection 3.2, we provide a full-information algorithm which works for both switching
and drifting distributions. The regret bounds it achieves are also close to optimal, but it again needs
the knowledge of the related parameters. To complement this, we provide a full-information algorithm
in subsection 3.3, which does not need to know the parameters but achieves slightly larger regret
bounds.
3.1
Parameter-Dependent Bandit Algorithm
In this subsection, we consider drifting distributions parameterized by V and Λ. Our main result is a
bandit algorithm which achieves a regret of about ∛(ΛVT) + √(VT). As we aim to achieve smaller
regrets for distributions with smaller statistical variances, we adopt a variant of the UCB algorithm
developed by [1], called UCB-V, which takes variances into account when building its confidence
interval.
Our algorithm divides the time steps into T/B intervals I₁, . . . , I_{T/B}, each having B steps¹, with

    B = ∛(K²ΛT/V²) if KΛ² ≥ TV,   and   B = √(KT/V) otherwise.        (2)
For each interval, our algorithm clears all the information from previous intervals, and starts a fresh
run of UCB-V. More precisely, before step t in an interval I, it maintains for each arm i its empirical
mean μ̂_{t,i}, empirical variance V̂_{t,i}, and size of confidence interval ε_{t,i}, defined as

    μ̂_{t,i} = Σ_{s∈S_{t,i}} ℓ_{s,i} / |S_{t,i}|,   V̂_{t,i} = Σ_{r,s∈S_{t,i}} (ℓ_{r,i} − ℓ_{s,i})² / (|S_{t,i}|(|S_{t,i}| − 1)),   and   ε_{t,i} = ε(|S_{t,i}|, V̂_{t,i}, δ),        (3)
where S_{t,i} denotes the set of steps before t in I at which arm i was played, and ε is the function given in
Theorem 2.1. Here we use the convention that μ̂_{t,i} = 0 if |S_{t,i}| = 0, while V̂_{t,i} = 0 and ε_{t,i} = 1 if
|S_{t,i}| ≤ 1.
at := argmin(?
?t,i ? ?t,i ),
i
receives the corresponding loss, and updates the statistics.
Our algorithm is summarized in Algorithm 1, and its regret is guaranteed by the following, which we
prove in Appendix B in the supplementary material.
?
?
? 3 K 2 ?V T + KV T ).
Theorem 3.1. The expected regret of Algorithm 1 is at most O(
3.2
Parameter-Dependent Full-Information Algorithms
In this subsection, we provide full-information algorithms for switching and drifting distributions. In
fact, they are based on an existing algorithm from [6], which is known to work in a different setting:
the loss vectors are deterministic and adversarial, and the offline comparator cannot switch arms.
? In
that setting, one
P of their algorithms, based on gradient-descent (GD), can achieve a regret of O( D)
where D = t k`t ? `t?1 k22 , which is small when the loss vectors have small deviation. Our first
observation is that their algorithm in fact can work against a dynamic offline
? comparator which
switches arms less than N times, given any N , with its regret becoming O( N D). Our second
observation is that when ? is small, each observed loss vector `t is likely to be close to its true mean
1
For simplicity of presentation, let us assume here and later in the paper that taking divisions and roots to
produce blocks of time steps all yield integers. It is easy to modify our analysis to the general case without
affecting the order of our regret bound.
4
Algorithm 2 Full-information GD-based algorithm
Initialization: Let x₁ = x̂₁ = (1/K, . . . , 1/K)^⊤.
for t = 1, 2, . . . , T do
    Play x̂_t = argmin_{x̂∈X} (⟨ℓ_{t−1}, x̂⟩ + (1/η_t)‖x̂ − x_t‖₂²), and then receive loss vector ℓ_t.
    Update x_{t+1} = argmin_{x∈X} (⟨ℓ_t, x⟩ + (1/η_t)‖x − x_t‖₂²).
end for
μ_t, and when V is small, ℓ_t is likely to be close to ℓ_{t−1}. These two observations make it possible for us
to adapt their algorithm to our setting.
We show the first algorithm in Algorithm 2, with the feasible set X being the probability simplex.
The idea is to use ℓ_{t−1} as an estimate for ℓ_t to move x̂_t further in a possibly beneficial direction. Its
regret is guaranteed by the following, which we prove in Appendix C in the supplementary material.
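Both steps of Algorithm 2 have the closed form of a Euclidean projection onto the simplex, which the following sketch uses; the projection routine is the standard sort-based one, and the function names are ours.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def optimistic_gd_step(x_t, ell_prev, ell_t, eta):
    """One round of Algorithm 2: argmin <l, x> + (1/eta)*||x - x_t||^2 over the
    simplex is the projection of x_t - (eta/2)*l onto the simplex."""
    x_hat = simplex_projection(x_t - 0.5 * eta * ell_prev)   # optimistic play
    x_next = simplex_projection(x_t - 0.5 * eta * ell_t)     # update with l_t
    return x_hat, x_next
```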
Theorem 3.2. For switching distributions parameterized by Γ and Λ, the regret of Algorithm 2 with
η_t = η = √(Γ/(Λ + KΓ)) is at most O(√(ΓΛ) + KΓ).
Note that for switching distributions, the regret of Algorithm 2 does not depend on T , which means
that it can achieve a constant regret for constant Γ and Λ. Let us remark that although using a variant
based on multiplicative updates could result in a better dependency on K, an additional factor of
log T would then emerge when using existing techniques for dealing with dynamic comparators.
For drifting distributions, one can show that Algorithm 2 still works and has a good regret bound.
However, a slightly better bound can be achieved, as we describe next. The idea is to divide the time
steps into T/B intervals of size B, with B = ∛(ΛT/V²) if ΛT > V² and B = 1 otherwise, and
re-run Algorithm 2 in each interval with an adaptive learning rate. One way to have an adaptive
learning rate can be found in [9], which works well when there is only one interval. A natural way to
adopt it here is to reset the learning rate at the start of each interval, but this does not lead to a good
enough regret bound, as it results in some constant regret at the start of every interval. To avoid this,
some careful changes are needed. Specifically, in an interval [t_1, t_2], we run Algorithm 2 with the
learning rate reset as

η_t = 1 / ( 4 √( Σ_{τ=t_1}^{t−1} ‖ℓ_τ − ℓ_{τ−1}‖₂² ) )

for t > t_1, with η_{t_1} = ∞ initially for every interval. This has the benefit of having small or even no
regret at the start of an interval when the loss vectors across the boundary have small or no deviation.
The regret of this new algorithm is guaranteed by the following, which we prove in Appendix D in
the supplementary material.
Theorem 3.3. For drifting distributions parameterized by V and Λ, the regret of this new algorithm
is at most Õ( ∛(ΛVT) + KV ).
3.3 Parameter-Free Full-Information Algorithm
The reason that our algorithm for Theorem 3.3 needs the related parameters is to set its learning rate
properly. To have a parameter-free algorithm, we would like to adjust the learning rate dynamically
in a data-driven way. One way of doing this can be found in [7], which is based on the multiplicative-updates variant of the mirror-descent algorithm. It achieves a static regret of about √( Σ_t r_{t,k}² ) against
any expert k, where r_{t,k} = ⟨p_t, ℓ_t⟩ − ℓ_{t,k} is its instantaneous regret for playing p_t at step t. However,
in order to work in our setting, we would like the regret bound to depend on ℓ_t − ℓ_{t−1}, as seen
previously. This suggests modifying the Adapt-ML-Prod algorithm of [7] using the idea of [6],
which takes ℓ_{t−1} as an estimate of ℓ_t to move p_t further in an optimistic direction.
Recall that the algorithm of [7] maintains a separate learning rate η_{t,k} for each arm k at time t, and it
updates the weight w_{t,k} as well as η_{t,k} using the instantaneous regret r_{t,k}. To modify the algorithm
using the idea of [6], we would like to have an estimate m_{t,k} for r_{t,k} in order to move p_{t,k} further
using m_{t,k}, and to update the learning rate accordingly. More precisely, at step t, we now play p_t, with

p_{t,k} = η_{t−1,k} ŵ_{t−1,k} / ⟨η_{t−1}, ŵ_{t−1}⟩,  where  ŵ_{t−1,k} = w_{t−1,k} exp(η_{t−1,k} m_{t,k}),   (4)
Algorithm 3 Optimistic-Adapt-ML-Prod
Initialization: Let w_{0,k} = 1/K and ℓ_{0,k} = 0 for every k ∈ [K].
for t = 1, 2, . . . , T do
   Play p_t according to (4), and then receive loss vector ℓ_t.
   Update each weight w_{t,k} according to (5) and each learning rate η_{t,k} according to (6).
end for
which uses the estimate m_{t,k} to move further from w_{t−1,k}. Then after receiving the loss vector ℓ_t,
we update each weight

w_{t,k} = ( w_{t−1,k} exp( η_{t−1,k} r_{t,k} − η_{t−1,k}² (r_{t,k} − m_{t,k})² ) )^{η_{t,k}/η_{t−1,k}}   (5)

as well as each learning rate

η_{t,k} = min{ 1/4, √( (ln K) / ( 1 + Σ_{s∈[t]} (r_{s,k} − m_{s,k})² ) ) }.   (6)
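In code, one round of the updates (5) and (6) takes only a few lines; the sketch below is our own illustration, with per-arm NumPy arrays.

```python
import numpy as np

def adapt_ml_prod_step(w, eta_prev, r, m, sq_sum, lnK):
    """One application of updates (5) and (6); w, eta_prev, r, m, sq_sum
    are per-arm NumPy arrays and lnK = ln(K)."""
    sq_sum = sq_sum + (r - m) ** 2
    eta = np.minimum(0.25, np.sqrt(lnK / (1.0 + sq_sum)))              # rule (6)
    w = (w * np.exp(eta_prev * r - eta_prev ** 2 * (r - m) ** 2)) \
        ** (eta / eta_prev)                                            # rule (5)
    return w, eta, sq_sum
```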
Our algorithm is summarized in Algorithm 3, and we will show that it achieves a regret of about
√( Σ_t (r_{t,k} − m_{t,k})² ) against arm k. It remains to choose an appropriate estimate m_{t,k}. One attempt
is to have m_{t,k} = r_{t−1,k}, but r_{t,k} − r_{t−1,k} = (⟨p_t, ℓ_t⟩ − ℓ_{t,k}) − (⟨p_{t−1}, ℓ_{t−1}⟩ − ℓ_{t−1,k}), which does
not lead to a desirable bound. The other possibility is to set m_{t,k} = ⟨p_t, ℓ_{t−1}⟩ − ℓ_{t−1,k}, which can
be shown to satisfy (r_{t,k} − m_{t,k})² ≤ (2‖ℓ_t − ℓ_{t−1}‖_∞)². However, it is not clear how to compute
such an m_{t,k}, because it depends on p_{t,k}, which in turn depends on m_{t,k} itself. Fortunately, we can
approximate it efficiently in the following way.
Note that the key quantity is ⟨p_t, ℓ_{t−1}⟩. Given its value φ, ŵ_{t−1,k} and p_{t,k} can be seen as functions of φ, defined according to (4) as ŵ_{t−1,k}(φ) = w_{t−1,k} exp( η_{t−1,k}(φ − ℓ_{t−1,k}) ) and p_{t,k}(φ) =
η_{t−1,k} ŵ_{t−1,k}(φ) / Σ_i η_{t−1,i} ŵ_{t−1,i}(φ). Then we would like to show the existence of a φ such that
⟨p_t(φ), ℓ_{t−1}⟩ = φ, and to find it efficiently. For this, consider the function f(φ) = ⟨p_t(φ), ℓ_{t−1}⟩,
with p_t(φ) defined above. It is easy to check that f is a continuous function bounded in [0, 1], which
implies the existence of some fixed point φ ∈ [0, 1] with f(φ) = φ. Using a binary search, such a φ
can be approximated within error 1/T in log T iterations. As such a small error does not affect the
order of the regret, we will ignore it for simplicity of presentation, and assume that we indeed have
⟨p_t, ℓ_{t−1}⟩, and hence m_{t,k} = ⟨p_t, ℓ_{t−1}⟩ − ℓ_{t−1,k}, without error.
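The binary search itself can be sketched as follows (our illustration). Because f maps [0, 1] into [0, 1], the invariant f(lo) ≥ lo and f(hi) ≤ hi holds throughout, so the interval always contains a fixed point.

```python
import numpy as np

def find_phi(w_prev, eta_prev, ell_prev, iters=40):
    """Binary search for a fixed point phi = <p_t(phi), ell_{t-1}>.
    The error after `iters` halvings is 2**(-iters)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        phi = 0.5 * (lo + hi)
        w_hat = w_prev * np.exp(eta_prev * (phi - ell_prev))
        weights = eta_prev * w_hat
        p = weights / weights.sum()
        if float(p @ ell_prev) > phi:
            lo = phi          # f(phi) > phi: fixed point lies above
        else:
            hi = phi          # f(phi) <= phi: fixed point lies below
    return 0.5 * (lo + hi)
```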
Then we have the following regret bound (cf. [7, Corollary 4]), which we prove in Appendix E in
the supplementary material.
Theorem 3.4. The static regret of Algorithm 3 with respect to any arm (or expert) k ∈ [K] is at most

Õ( √( (Σ_{t∈[T]} (r_{t,k} − m_{t,k})²) ln K ) + ln K )  ≤  Õ( √( (Σ_{t∈[T]} ‖ℓ_t − ℓ_{t−1}‖_∞²) ln K ) + ln K ),

where the notation Õ(·) hides a ln ln T factor.
The regret in the theorem above is measured against a fixed arm. To achieve a dynamic regret against
an offline algorithm which can switch arms, one can use a generic reduction to the so-called sleeping
experts problem. In particular, we can use the idea in [7] by creating K̃ = KT sleeping experts, and
run our Algorithm 3 on these K̃ experts (instead of on the K arms). More precisely, each sleeping
expert is indexed by some pair (s, k); it is asleep for steps before s and becomes awake for steps
t ≥ s. At step t, the new algorithm calls Algorithm 3 for the distribution p̃_t over the K̃ experts, and computes its own
distribution p_t over the K arms, with p_{t,k} proportional to Σ_{s=1}^{t} p̃_{t,(s,k)}. Then it plays p_t, receives loss
vector ℓ_t, and feeds some modified loss vector ℓ̃_t and estimate vector m̃_t to Algorithm 3 for update.
Here, we set ℓ̃_{t,(s,k)} to the expected loss ⟨p_t, ℓ_t⟩ if expert (s, k) is asleep and to ℓ_{t,k} otherwise, while
we set m̃_{t,(s,k)} to 0 if expert (s, k) is asleep and to m_{t,k} = ⟨p_t, ℓ_{t−1}⟩ − ℓ_{t−1,k} otherwise. This choice
allows us to relate the regret of Algorithm 3 to that of the new algorithm, which can be seen in the
proof of the following theorem, given in Appendix F in the supplementary material.
Theorem 3.5. The dynamic expected regret of the new algorithm is Õ( √(ΛΓ ln K) + Γ ln K ) for
switching distributions and Õ( ∛(ΛVT ln K) + √(VT ln K) ) for drifting distributions.
4 Lower Bounds
We study regret lower bounds in this section. In Subsection 4.1, we show that for switching distributions with Γ − 1 ≥ 1 switches, there is an Ω(√(ΓT)) lower bound for bandit algorithms, even when
there is no variance (Λ = 0) and there are constant loss gaps between the optimal and suboptimal arms.
We also show a full-information lower bound, which almost matches our upper bound in Theorem 3.2.
In Subsection 4.2, we show that for drifting distributions, our upper bounds in Theorem 3.1 and
Theorem 3.3 are almost tight. In particular, we show that now even for full-information algorithms,
a large ∛T dependency in the regret turns out to be unavoidable, even for small V and Λ. This
provides a sharp contrast to the upper bound of our Theorem 3.2, which shows that a constant regret
is in fact achievable by a full-information algorithm for switching distributions with constant Γ and
Λ. For simplicity of presentation, we will only discuss the case with K = 2 actions, as it is not hard
to extend our proofs to the general case.
4.1 Switching Distributions
In contrast to the full-information setting, the existence of switches presents a dilemma with a lose-lose
situation for a bandit algorithm: in order to detect any possible switch early enough, it must explore
aggressively, but this has the consequence of playing suboptimal arms too often. To fool any bandit
algorithm, we will switch between two deterministic distributions, with no variance, which have
mean vectors ℓ^(1) = (1/2, 1)ᵀ and ℓ^(2) = (1/2, 0)ᵀ, respectively. Our result is the following.

Theorem 4.1. The worst-case expected regret of any bandit algorithm is Ω(√(ΓT)), for Γ ≥ 2.
Proof. Consider any bandit algorithm A, and let us partition the T steps into Γ/2 intervals, each
consisting of B = 2T/Γ steps. Our goal is to make A suffer in each interval an expected regret
of Ω(√B) by switching the loss vectors at most once. As mentioned before, we will only switch
between two different deterministic distributions with mean vectors ℓ^(1) and ℓ^(2). Note that we can
see these two distributions simply as two loss vectors, with ℓ^(i) having arm i as the optimal arm.

In what follows, we focus on one of the intervals, and assume that we have chosen the distributions in
all previous intervals. We would like to start the interval with the loss vector ℓ^(1). Let N_2 denote the
expected number of steps A plays the suboptimal arm 2 in this interval if ℓ^(1) is used for the whole
interval. If N_2 ≥ √B/2, we can actually use ℓ^(1) for the whole interval with no switch, which makes
A suffer an expected regret of at least (1/2) · √B/2 = √B/4 in this interval. Thus, it remains to
consider the case with N_2 < √B/2. In this case, A does not explore arm 2 often enough, and we
let it pay by choosing an appropriate step to switch to the other loss vector ℓ^(2) = (1/2, 0)ᵀ, which
has arm 2 as the optimal one. For this, let us divide the B steps of the interval into √B blocks, each
consisting of √B steps. As N_2 < √B/2, there must be a block in which the expected number of
steps that A plays arm 2 is at most N_2/√B < 1/2. By a Markov inequality, the probability that A
ever plays arm 2 in this block is less than 1/2. This implies that when given the loss vector ℓ^(1) for
all the steps till the end of this block, A never plays arm 2 in the block with probability more than
1/2. Therefore, if we make the switch to the loss vector ℓ^(2) = (1/2, 0)ᵀ at the beginning of the
block, then A with probability more than 1/2 still never plays arm 2 and never notices the switch in
this block. As arm 2 is the optimal one with respect to ℓ^(2), the expected regret of A in this block is
more than (1/2) · (1/2) · √B = √B/4.

Now if we choose distributions in each interval as described above, then there are at most (Γ/2) · 2 = Γ
periods of stationary distribution in the whole horizon, and the total expected regret of A can be made
at least (Γ/2) · (√B/4) = (Γ/2) · √(2T/Γ)/4 = Ω(√(ΓT)), which proves the theorem.
For full-information algorithms, we have the following lower bound, which almost matches our upper
bound in Theorem 3.2. We provide the proof in Appendix G in the supplementary material.
Theorem 4.2. The worst-case expected regret of any full-information algorithm is Ω( √(ΛΓ) + Γ ).
4.2 Drifting Distributions
In this subsection, we show that the regret upper bounds achieved by our bandit algorithm and
full-information algorithm are close to optimal by showing almost matching lower bounds. More
precisely, we have the following.
Theorem 4.3. The worst-case expected regret of any full-information algorithm is Ω( ∛(ΛVT) + V ),
while that of any bandit algorithm is Ω( ∛(ΛVT) + √(VT) ).
Proof. Let us first consider the full-information case. When ΛT ≤ 32KV², we immediately have
from Theorem 4.2 the regret lower bound of Ω(Γ) = Ω(V) ≥ Ω( ∛(ΛVT) + V ).

Thus, let us focus on the case with ΛT ≥ 32KV². In this case, V ≤ O(∛(ΛVT)), so it suffices to
prove a lower bound of Ω(∛(ΛVT)). Fix any full-information algorithm A, and we will show the
existence of a sequence of loss distributions for A to suffer such an expected regret. Following [3],
we divide the time steps into T/B intervals of length B, and we set B = ∛( ΛT/(32KV²) ) ≥ 1.
For each interval, we will pick some arm i as the optimal one, and give it some loss distribution
P, while other arms are sub-optimal and all have some loss distribution Q. We need P and Q to
satisfy the following three conditions: (a) P's mean is smaller than Q's by ε, (b) their variances are at
most σ², and (c) their KL divergence satisfies (ln 2) · KL(Q, P) ≤ ε²/σ², for some ε, σ ∈ (0, 1) to be
specified later. Their existence is guaranteed by the following, which we prove in Appendix H in the
supplementary material.

Lemma 4.4. For any 0 ≤ σ ≤ 1/2 and 0 ≤ ε ≤ σ/√2, there exist distributions P and Q satisfying
the three conditions above.
Let D_i denote the joint distribution of such K distributions, with arm i being the optimal one, and we
will use the same D_i for all the steps in an interval. We will show that for any interval, there is some
i such that using D_i this way can make algorithm A suffer a large expected regret in the interval,
conditioned on the distributions chosen for previous intervals. Before showing that, note that when
we choose distributions in this way, their total variance is at most TKσ² while their total drift is at
most ε(T/B). To have them bounded by Λ and V respectively, we choose σ = √( Λ/(4KT) ) and
ε = VB/T, which satisfy the condition of Lemma 4.4, with our choice of B.
To find the distributions, we deal with the intervals one by one. Consider any interval, and assume
that the distributions for previous intervals have been chosen. Let Ni denote the number of steps A
plays arm i in this interval, and let Ei [Ni ] denote its expectation when Di is used for every step of
the interval, conditioned on the distributions of previous intervals. One can bound this conditional
expectation in terms of a related one, denoted as Eunif [Ni ], when every arm has the distribution Q for
every step of the interval, again conditioned on the distributions of previous intervals. Specifically,
using an almost identical argument to that in [2, proof of Theorem A.2], one can show that

E_i[N_i] ≤ E_unif[N_i] + (B/2) · √( B(2 ln 2) · KL(Q, P) ).²   (7)

According to Lemma 4.4 and our choice of parameters, we have B(2 ln 2) · KL(Q, P) ≤ 2B · (ε²/σ²)
≤ 1/4. Summing both sides of (7) over arm i, and using the fact that Σ_i E_unif[N_i] = B,
we get Σ_i E_i[N_i] ≤ B + BK/4, which implies the existence of some i such that E_i[N_i] ≤
B/K + B/4 ≤ (3/4)B. Therefore, if we choose this distribution D_i, the conditional expected regret
of algorithm A in this interval is at least ε · (B − E_i[N_i]) ≥ εB/4.

By choosing distributions inductively in this way, we can make A suffer a total expected regret of at
least (T/B) · (εB/4) ≥ Ω( ∛(ΛVT) ). This completes the proof for the full-information case.
Next, let us consider the bandit case. From Theorem 4.1, we immediately have a lower bound of
Ω(√(ΓT)) = Ω(√(VT)), which implies the required bound when √(VT) ≥ ∛(ΛVT). When √(VT) ≤
∛(ΛVT), we have V ≤ Λ²/T, which implies that V ≤ ∛(ΛVT), and we can then use the full-information bound of Ω(∛(ΛVT)) just proved before. This completes the proof of the theorem.
² Note that inside the square root, we use B instead of E_unif[N_i] as in [2]. This is because in their bandit
setting, N_i is the number of steps when arm i is sampled and has its information revealed to the learner, while in
our full-information case, information about arm i is revealed in every step and there are at most B steps.
References
[1] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation tradeoff
using variance estimates in multi-armed bandits. Theor. Comput. Sci., 410(19):1876–1902,
2009.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic
multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[3] Omar Besbes, Yonatan Gur, and Assaf J. Zeevi. Stochastic multi-armed-bandit problem with
non-stationary rewards. In Advances in Neural Information Processing Systems 27: Annual
Conference on Neural Information Processing Systems (NIPS), December 2014.
[4] Omar Besbes, Yonatan Gur, and Assaf J. Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.
[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University
Press, 2006.
[6] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin,
and Shenghuo Zhu. Online optimization with gradual variations. In The 25th Conference on
Learning Theory (COLT), June 2012.
[7] Pierre Gaillard, Gilles Stoltz, and Tim van Erven. A second-order bound with excess losses. In
The 27th Conference on Learning Theory (COLT), June 2014.
[8] Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit
problems. In The 22nd International Conference on Algorithmic Learning Theory (ALT), October
2011.
[9] Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online
optimization: Competing with dynamic comparators. In Proceedings of the 18th International
Conference on Artificial Intelligence and Statistics (AISTATS), May 2015.
[10] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: AdaNormalHedge. In
The 28th Conference on Learning Theory (COLT), July 2015.
[11] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample-variance
penalization. In The 22nd Conference on Learning Theory (COLT), June 2009.
6,121 | 6,537 | A Probabilistic Model of Social Decision Making
based on Reward Maximization
Koosha Khalvati
Department of Computer Science
University of Washington
Seattle, WA 98105
[email protected]
Seongmin A. Park
CNRS UMR 5229
Institut des Sciences Cognitives Marc Jeannerod
Lyon, France
[email protected]
Jean-Claude Dreher
CNRS UMR 5229
Institut des Sciences Cognitives Marc Jeannerod
Lyon, France
[email protected]
Rajesh P. N. Rao
Department of Computer Science
University of Washington
Seattle, WA 98195
[email protected]
Abstract
A fundamental problem in cognitive neuroscience is how humans make decisions,
act, and behave in relation to other humans. Here we adopt the hypothesis that when
we are in an interactive social setting, our brains perform Bayesian inference of the
intentions and cooperativeness of others using probabilistic representations. We employ the framework of partially observable Markov decision processes (POMDPs)
to model human decision making in a social context, focusing specifically on the
volunteer's dilemma in a version of the classic Public Goods Game. We show that
the POMDP model explains both the behavior of subjects as well as neural activity
recorded using fMRI during the game. The decisions of subjects can be modeled
across all trials using two interpretable parameters. Furthermore, the expected
reward predicted by the model for each subject was correlated with the activation of
brain areas related to reward expectation in social interactions. Our results suggest
a probabilistic basis for human social decision making within the framework of
expected reward maximization.
1 Introduction
A long tradition of research in social psychology recognizes volunteering as the hallmark of human
altruistic action, aimed at improving the survival of a group of individuals living together [15].
Volunteering entails a dilemma wherein the optimal decision maximizing an individual's utility
differs from the strategy which maximizes benefits to the group to which the individual belongs. The
"volunteer's dilemma" characterizes everyday group decision-making whereby one or a few volunteers
are enough to bring common goods to the group [1, 6]. Examples of such volunteering include
vigilance duty, serving on school boards or town councils, and donating blood. The fact that makes
the volunteer's dilemma challenging is not only that a lack of enough volunteers would lead to no
common goods being produced, but also that resources would be wasted if more than the required
number of group members volunteer. As a result, to achieve maximum utility in the volunteer's
dilemma, each member must have a very good sense of the others' intentions, in the absence of any
This work was supported by LABEX ANR-11-LABEX-0042, ANR-11-IDEX-0007, NSF-ANR "Social_POMDP" and ANR BrainCHOICE n° 14-CE13-0006 to J.-C.D., NSF grants EEC-1028725 and 1318733,
ONR grant N000141310817, and CRCNS/NIMH grant 1R01MH112166-01.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: The computer screen that players see during one round of PGG.
communication between them, before choosing their actions. To model such social decision making,
one therefore needs a framework that can model the uncertainty associated with the "theory of
mind" of each player. In this paper, we tackle this problem by combining a probabilistic model with
behavioral measures and fMRI recordings to provide an account of the decisions made under the
volunteer's dilemma and the underlying neural computations.

The Public Goods Game (PGG) is a classic paradigm in behavioral economics. It has previously been
employed as a useful tool to study the neural mechanisms underlying group cooperation [2, 3, 4, 16].
Here we recast the PGG to investigate the volunteer's dilemma and conducted an experiment where
30 subjects played a version of the PGG while their brain activity was simultaneously recorded
using fMRI. We show how a probabilistic model based on Partially Observable Markov Decision
Processes (POMDPs) with a simple and intuitive state model can explain our subjects' behavior.
The normative model explains the behavior of the subjects in all trials of the game, including the
first trial of each round, using only two parameters. The validity of the model is demonstrated by
the correlation between the reward predicted by the model and activation of brain areas implicated
in reward expectation in social interactions. Also, the values of the parameters of our model are
interpretable, and the differences among players and games can be explained by these parameters.
2 Public Goods Game and Experimental Paradigm
In a Public Goods Game (PGG), N strangers make collective decisions together as a group. In the
current study, we keep the number of members in a group constant at N = 5. No communication
among members is allowed. The game is composed of 15 rounds of interactions with the same
partners. At the beginning of each round, 1 monetary unit (MU) is endowed (E) to each of N = 5
individuals. Each individual can choose between two decisions, contribution (c) or free-riding
(f). When the participant makes a decision, the selected option is highlighted on the screen. The
participant must make a decision within three seconds, otherwise a warning message appears and the
trial is repeated. After all members of the group have made their decisions, the feedback screen is
shown to all. According to the decisions of group members, public good is produced as the group
reward (R = 2M U ) only if at least k individuals have contributed their resources (k = 2 or k = 4).
Value of k was conveyed to group members before decision-making and is kept fixed for any single
PGG. From the feedback screen, participants only know the number of other contributors and not
individual single member decisions, which are represented by white icons. A yellow icon stands
for the individual playing in the scanner and served to track their decisions. Each PGG consists of
a finite round of interactions (T = 15). This is informed to all participants. The computer screen
in front of each player during one round of a PGG is shown in Figure 1. Each contribution has a
cost (C = 1 MU). Therefore, the resultant MUs after one round are E − C + R = 2 MU for the
contributor and E + R = 3 MU for the free-rider when the public good is produced (SUCCESS). On the
other hand, the contributor has E − C = 0 MU and the free-rider has E = 1 MU when no public
good is produced (FAILURE). Each participant plays 14 games. During the first 2 PGGs, they receive
no feedback, but the following 12 PGGs provide social and monetary feedback as shown in Figure 1.
Our analyses are from the 12 PGGs with feedback. Importantly, we inform participants before the
experiment that they get a final monetary reward as much as the result of one PGG randomly selected
by the computer at the end of the study [23].
We recruited 30 right-handed subjects to participate in the Public Goods Game and make decisions
in an fMRI scanner. Data from 29 participants (fourteen women, mean age 22.97 ± 1.99 years)
were analyzed (one participant aborted the experiment due to anxiety). Based on self-reported
questionnaires, none of our subjects had a history of neurological or psychiatric disorders. Each
participant was told that they would play with 19 other participants located in another room; in
actuality, a computer selected the actions instead of 19 others. Each action selected by our computer
algorithm in any round is a probabilistic function of the participant's action in the previous round
(a^{t−1}_i) and of its own previous actions (Σ_{j≠i} a^{t−1}_j). Given the average contribution rate of the others,
ā^t_{−i} = ( Σ_{j≠i} a^{t−1}_j ) / (N − 1), we have

logit(ā^t_{−i}) = e_0 a^{t−1}_i + e_1( ((1 − K^{T−t+1})/(1 − K)) e_2 ā^{t−1}_{−i} − K ),  where K = k/N.

This model has 3 free parameters: e_0, e_1, e_2. These are obtained by fitting the above function to the
actual behavior of individuals in another PGG study [16]. Therefore, this function is a simulation of
real individuals' behavior in a PGG. For the first round, we use the mean contribution rate of each
subject as their fellow members' decision.
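For illustration, such a rule can be evaluated as below. This is our own sketch: the grouping of terms in the exponent follows our reading of the formula above and should be treated as an assumption; the parameter names mirror e_0, e_1, e_2, with K = k/N.

```python
import numpy as np

def contribution_prob(a_self_prev, abar_others_prev, t, T, K, e0, e1, e2):
    """Contribution probability of a simulated co-player at round t,
    per our reading of the fitted logistic rule; K = k/N."""
    x = e0 * a_self_prev + e1 * ((1.0 - K ** (T - t + 1)) / (1.0 - K)
                                 * e2 * abar_others_prev - K)
    return 1.0 / (1.0 + np.exp(-x))   # inverse of the logit
```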
3 Markov Decision Processes and POMDPs
The family of Markov Decision Processes (MDPs and POMDPs) provides a mathematical framework
for decision making in stochastic environments [22]. A Markov Decision Process (MDP) is formally
a tuple (S, A, T, R, γ) with the following description: S is the set of states of the environment, A is
the set of actions, T is the transition function P(s | s′, a), i.e., the probability of going from a state s′
to state s after performing action a. R : S × A → ℝ is a bounded function determining the reward
obtained after performing action a in state s. γ is the discount factor, which we assume here is 1.
Starting from an initial state s_0, the goal is to find the sequence of actions that maximizes the expected
discounted reward E[ Σ_{t=0}^{∞} γ^t R(s_t, a_t) ]. This sequence of actions is given by an optimal policy,
which is a mapping from states to actions, π* : S → A, representing the best action at a given state.
The optimal policy can be computed by an efficient algorithm called value iteration [22]. MDPs
assume that the current state is always fully observable to the agent. When this is not the case, a more
general framework, known as Partially Observable Markov Decision Processes (POMDPs), can be used. In a
POMDP, the agent reasons about the current state based on an observation. Therefore, a POMDP can
be regarded as an MDP with observations Z and an observation function O : Z × A × S → [0, 1],
which determines P(z | a, s), the probability of observing z after performing action a in state s. In
a POMDP, instead of knowing the current state s_t, the agent computes the belief state b_t, which
is the posterior probability over states given all past observations and actions. The belief state can
be updated as b_{t+1}(s) ∝ O(s, a_t, z_{t+1}) Σ_{s′} T(s′, s, a_t) b_t(s′). Consequently, the optimal policy
of a POMDP is a mapping from belief states to actions, π* : B → A, where B = [0, 1]^{|S|}. One
can easily see that a POMDP is an MDP whose states are belief states. As the belief state space is
exponentially larger than the original state space (B = [0, 1]^{|S|}), solving POMDPs is computationally
very expensive (NP-hard [19]). Therefore, heuristic methods are used to approximate the optimal
policy for a POMDP [11, 20]. In the case that the belief state can be expressed in closed form, e.g.,
Gaussian, one can solve the POMDP by considering it as an MDP whose state space is the POMDP's
belief state space and performing the value iteration algorithm. We use this technique in our model.
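For concreteness, the generic discrete belief update above can be written in a few lines of Python (our sketch, with T and O stored as dense arrays):

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Discrete POMDP belief update:
    b'(s) is proportional to O[z, a, s] * sum_{s0} T[s0, s, a] * b[s0],
    where T[s0, s, a] = P(s | s0, a) and O[z, a, s] = P(z | a, s)."""
    b_new = O[z, a, :] * (b @ T[:, :, a])
    return b_new / b_new.sum()
```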
4 Model of the Game
In a PGG with N players and known minimum number of required volunteers (k), the reward of a
player, say player i, in each round is determined only by their action (free ride (f ) or contribution
(c)), and the total number of contributors among other players. We use the notation −i to represent
all players except player i. We denote the action of each player as a and the reward of each player as
r. The occurrence of an event is given by an indicator function I (for event x, I(x) is equal to 1 if
event x happens and 0 otherwise). Then, the reward expected by player i at round t is:
r^i_t = E[ E − I(a^i_t = c) · C + I( Σ_{j=1}^{N} I(a^j_t = c) ≥ k ) · R ]
     = E[ E − I(a^i_t = c) · C + I( I(a^i_t = c) + Σ_{j≠i} I(a^j_t = c) ≥ k ) · R ]   (1)

(Here the outer E denotes expectation, while the E inside the brackets is the endowment.)
This means that in order to choose the best action at step t (a^i_t), player i should estimate the probability
of Σ_{j≠i} I(a^j_t = c). Now if each player is a contributor with probability θ_c, this sum follows a
binomial distribution:

P( Σ_{j≠i} I(a^j_t = c) = k′ ) = C(N−1, k′) θ_c^{k′} (1 − θ_c)^{N−1−k′},   (2)

where C(N−1, k′) denotes the binomial coefficient.
We could model the whole group with one parameter, because players only get the total number
of contributions made by others and not individual contributions. Individuals cannot be tracked
by others, and all group members can be seen together as one group. In other words, θ_c can be
interpreted as the cooperativeness of the group on average. With θ_c, the reward that player i expects at
time step t is:

r^i_t = E − I(a^i_t = c) · C + I(a^i_t = c) · ( Σ_{k′=k−1}^{N−1} C(N−1, k′) θ_c^{k′} (1 − θ_c)^{N−1−k′} ) · R
        + I(a^i_t = f) · ( Σ_{k′=k}^{N−1} C(N−1, k′) θ_c^{k′} (1 − θ_c)^{N−1−k′} ) · R   (3)
This is only for one round. The game, however, contains multiple rounds (15 here) and the goal is
to maximize the total expected reward, not the reward of a specific round. In addition, θ_c changes
after each round because players see others' actions and update the probability of cooperativeness in
the group. For example, if a player sees that others are not contributing, they may reduce θ_c when
picking an action in the next round. Also, since our subjects think they are playing with other humans,
they may assume others make these updates too. As a result, each player thinks their own action will
change θ_c as well. In fact, although they are playing with computers, our algorithm does depend on
their actions, and their assumption is thus correct. In addition, because subjects think they have a
correct model of the group, they assume all group members have the same θ_c as them. If we define
each possible value of θ_c as a discrete state (this set is infinite, but we can discretize the space,
e.g., 100 values from 0 to 1) and model the change in θ_c with a transition function, our problem of
maximizing total expected reward becomes equivalent to an MDP.
Unfortunately, the subject does not know θ_c and therefore must maintain a probability distribution
(belief state) over θ_c denoting belief about the average cooperativeness of the group. The model
therefore becomes a POMDP. The beta distribution is a conjugate prior for the binomial distribution,
meaning that when the prior distribution is a beta distribution and the likelihood function is a binomial
distribution, the posterior will also be a beta distribution. Therefore, in our model, the subject starts
with a beta distribution as their initial belief, and updates their belief over the course of the game
using the transition and observation functions, which are both binomial, implying that their belief
always remains a beta distribution. The beta distribution has two parameters, α and β. The posterior
distribution after seeing k′ true events out of a total of N events with prior Beta(α, β) is
Beta(α + k′, β + N − k′):

Prior:      Beta(α, β)  ⟹  P(θ) = θ^{α−1}(1 − θ)^{β−1} / B(α, β)   (4)

Posterior:  Beta(α + k′, β + N − k′)  ⟹  P(θ) = θ^{α+k′−1}(1 − θ)^{β+N−k′−1} / B(α + k′, β + N − k′)   (5)

where B(α, β) is the normalizing constant B(α, β) = ∫₀¹ θ^{α−1}(1 − θ)^{β−1} dθ.
As mentioned before, each POMDP is an MDP whose state space is the belief state space of the original
POMDP. As our belief state has a closed form, we can estimate the solution of our POMDP by
discretizing this belief space, e.g., considering a bounded set of integers for α and β, and solving it
as an MDP. Also, the transition function of this MDP is based on the posterior update
shown above. This transition function is as follows:

P( (α + k′ + 1, β + N − 1 − k′) | (α, β), c ) = C(N−1, k′) · B(α + k′, β + N − 1 − k′) / B(α, β)

P( (α + k′, β + N − k′) | (α, β), f ) = C(N−1, k′) · B(α + k′, β + N − 1 − k′) / B(α, β)   (6)
The pair (α, β) is the state and represents the belief of the player about θ_c, given by Beta(α, β). The
reward function of this belief-based MDP is:

R((α, β), c) = E − C + Σ_{k′=k−1}^{N−1} C(N−1, k′) · ( B(α + k′, β + N − 1 − k′) / B(α, β) ) · R

R((α, β), f) = E + Σ_{k′=k}^{N−1} C(N−1, k′) · ( B(α + k′, β + N − 1 − k′) / B(α, β) ) · R   (7)
This MDP shows how the subject plays and learns their group dynamics simultaneously by updating
their belief about the group during the course of the game. Note that although we are reducing
the problem to an MDP for computational efficiency, conceptually the player is being modeled by
a POMDP because the player maintains a belief about the environment and updates it based on
observations (here, other players' actions).
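A compact Python sketch of this construction is given below (our illustration): it runs finite-horizon value iteration on the integer grid of (α, β) states, with the beta-binomial transition (6) and reward (7). Capping the counts at a_max is our simplification to keep the state space bounded; the grid and the horizon H = 50 mirror the discretization described in the text.

```python
import numpy as np
from scipy.special import betaln, comb

def solve_belief_mdp(N=5, k=2, E=1.0, C=1.0, R=2.0, a_max=100, H=50):
    """Value iteration on the (alpha, beta) belief MDP; returns the greedy
    policy mapping each state to 'c' (contribute) or 'f' (free-ride)."""
    def p_obs(a, b, j):
        # beta-binomial: P(j of the N-1 others contribute | belief Beta(a, b))
        return comb(N - 1, j) * np.exp(betaln(a + j, b + N - 1 - j)
                                       - betaln(a, b))
    states = [(a, b) for a in range(1, a_max + 1) for b in range(1, a_max + 1)]
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(H):
        V_new = {}
        for (a, b) in states:
            q = {}
            for act in ('c', 'f'):
                need, cost, own = (k - 1, C, 1) if act == 'c' else (k, 0.0, 0)
                val = E - cost
                for j in range(N):               # j = contributions by others
                    p = p_obs(a, b, j)
                    if j >= need:
                        val += p * R             # public good produced
                    nxt = (min(a + j + own, a_max),
                           min(b + (N - 1 - j) + (1 - own), a_max))
                    val += p * V[nxt]
                q[act] = val
            best = 'c' if q['c'] >= q['f'] else 'f'
            policy[(a, b)], V_new[(a, b)] = best, q[best]
        V = V_new
    return policy, V
```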
5 Results
The parameters of our model are all known, so the question is how the model differs for different
individuals. The difference is in the initial belief of the player about the group they are playing
within, in other words, the state that our belief-based MDP starts from (b_0 in POMDP parlance). This
means that each individual has a pair α_0 and β_0 for each k that shapes their behavior through the
game. For example, an α_0 significantly larger than β_0 means that the player starts the game with the
belief that the group is cooperative. Also, α_0 and β_0 for the same individual differ for different k's,
since the number of required volunteers changes the game and consequently the belief of the player
about the optimal strategy. We investigate these differences in our analysis below.
5.1 Modeling behavioral data
To find the α_0 and β_0 of each player (and for each k), we run our model with different values of α_0 and
β_0 and, using the actions that the player saw as the other players' actions during the experiment, we
check whether the actions predicted by our model match the actual actions of the player. In other
words, we find the α_0 and β_0 that minimize Σ_{t=1}^{15} |a^i_t − â^i_t|, where a^i_t is the action of our subject at
step t and â^i_t is the predicted action from our model. Note that we only give the model other players'
data and do not correct the predicted action for the previous state if the model has made a mistake.
Also, we calculate the error on all games of that player with the same k, i.e., we assume each game is
an independent data point. This is justified because subjects are explicitly told that in each game they
could play with different players, and also they get a reward for only one game, chosen randomly. As a
result, one cannot use one game as training for the next ones. For each player, we call the average of
Σ_{t=1}^{15} |a^i_t − â^i_t| among all of their games with the same k the round-by-round error.
The average round-by-round error among all players for the POMDP was 3.38 for k = 2 and 2.15
for k = 4 (Table 1). For example, only around 2 out of 15 rounds were predicted incorrectly by our
model for k = 4. The possible α_0 and β_0 values for each player ranged over all integers between 1
and 100, yielding 100² pairs to evaluate as s_0 for the belief-based MDP; this evaluation process was
computationally efficient. We found that MDPs with horizons longer than the true number of rounds
fit our data better. As a result, we set our horizon to a number much larger than 15, in this case 50.
Such an error in estimating the dynamics of a game in humans is consistent with previous reports [3].
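The fitting procedure can be sketched as a plain grid search (our illustration; `games` and `policy_for` are hypothetical placeholders for the behavioral data and a solved policy, such as one obtained from the value iteration above):

```python
import numpy as np

def fit_prior(games, policy_for, N=5, grid_max=100):
    """Grid search for (alpha0, beta0) minimizing the round-by-round error.
    games: list of (subject_actions, others_contribs) sequences;
    policy_for(a, b) returns the model's action ('c' or 'f')."""
    best, best_err = None, np.inf
    for a0 in range(1, grid_max + 1):
        for b0 in range(1, grid_max + 1):
            err = 0
            for acts, others in games:
                a, b = a0, b0
                for act, j in zip(acts, others):
                    err += int(policy_for(a, b) != act)
                    # advance the belief with the actually observed round
                    a += j + (1 if act == 'c' else 0)
                    b += (N - 1 - j) + (1 if act == 'f' else 0)
            if err < best_err:
                best, best_err = (a0, b0), err
    return best, best_err / max(len(games), 1)
```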
To compare our results with other state-of-the-art methods, we fit a previously proposed descriptive
model [24] to our data. This model assumes that the action of the player in each round is a function
of their action in the previous round (the "Previous Action" model). Therefore, to fit the data we
need to estimate p(a^i_t | a^i_{t−1}). This means that the model has two parameters, i.e., p(a^i_t = c | a^i_{t−1} = c)
and p(a^i_t = c | a^i_{t−1} = f). Note that this descriptive model is unable to predict the first action. We
found that its average round-by-round error for the last 14 rounds (3.90 for k = 2 and 3.25 for k = 4)
is larger than the POMDP model's error (Table 1), even though it considers one round less than the
POMDP model.
We also used Leave One Out Cross Validation (LOOCV) to compare the models (see Table 1).
Although the LOOCV error for k = 2 is larger for the POMDP model, the POMDP model's error
Table 1: Average round-by-round error of the POMDP model, the descriptive model based on the previous
action, p(a^i_t | a^i_{t−1}), and the most general descriptive model, p(a^i_t | a^i_{t−1}, Σ_{j≠i} a^j_{t−1}). Next to each error,
the normalized error (divided by the number of rounds) is given in parentheses to facilitate comparison.

model            Fitting error  Fitting error  LOOCV error  LOOCV error  Total number
                 (k = 2)        (k = 4)        (k = 2)      (k = 4)      of rounds
POMDP            3.38 (0.22)    2.15 (0.14)    4.23 (0.28)  2.67 (0.18)  15
Previous Action  3.90 (0.28)    3.25 (0.23)    4.00 (0.29)  3.48 (0.25)  14
All Actions      3.75 (0.27)    2.74 (0.19)    5.52 (0.39)  7.33 (0.52)  14
is for 15 rounds while the error for the descriptive model is for 14 (note that the error divided by
the number of rounds is larger for the descriptive model). In addition, to examine whether another descriptive
model based on previous rounds can outperform the POMDP model, we tested a model based on all
previous actions, i.e., p(a^i_t | a^i_{t−1}, Σ_{j≠i} a^j_{t−1}). The POMDP model outperforms this model as well.
5.2 Comparing model predictions to neural data
Besides modeling human behavior better than the descriptive models, the POMDP model can
also predict the amount of reward the subject is expecting since it is formulated based on reward
maximization. We use the parameters obtained by the behavioral fit and generate the expected reward
for each subject before playing the next round.
To validate these predictions about reward expectation, we checked if there is any correlation between
neural activity recorded by fMRI and the model's predictions. Image preprocessing was performed
using the SPM8 software package. The time series of images were registered in three-dimensional
space to minimize any effects from the participant's head motion.
the last volume, corrected for slice timing, co-registered with structural maps, spatially normalized
into the standard Montreal Neurological Institute (MNI) atlas space, and then spatially smoothed with
an 8mm isotropic full-width-at-half-maximum (FWHM) Gaussian kernel using standard procedures
in SPM8. Specifically, we construct a general linear model (GLM) and run a first-level analysis
modeling brain responses related to outcome while informing the judgments of others. They are
modeled as a box-car function time-locked to the onset of outcome with the duration of 4 sec. Brain
responses related to decision-making with knowledge of the outcome of the previous trial are modeled
separately. These are modeled as a box-car function time-locked to the onset of decision-making
with duration of reaction times in each trial. They are further modulated by parametric regressors
accounting for the expected reward. In addition, the six types of motion parameters produced for head
movement, and the two motor parameters produced for button pressing with the right and the left
hands, are also entered as additional regressors of no interest to account for motion-related artifacts.
All these regressors are convolved with the canonical hemodynamic response function. Contrast
images are calculated and entered into a second-level group analysis. In the GLM, brain regions
whose blood-oxygen-level-dependent (BOLD) responses are correlated with POMDP-model-based
estimates of expected reward are first identified. To correct for multiple comparisons, small volume
correction (SVC) is applied to a priori anatomically defined regions of interest (ROIs). The search
volume is defined by a 10mm diameter spherical ROI centered on the dorsolateral prefrontal cortex
(dlPFC) and the ventral striatum (vS) that have been identified in previous studies. The role of the
dlPFC has been demonstrated in the control of strategic decision-making [14], and its function and
gray matter volume have been implicated in individual differences in social value computation [7, 21].
Moreover, vS has been found to mediate rewards signal engaged in mutual contribution, altruism,
and social approval [18]. In particular, the left vS has been found to be associated with both social
and monetary reward prediction error [13].
We find a strong correlation between our model's prediction of expected reward and activity in
bilateral dlPFC (the peak voxel in the right dlPFC: (x, y, z) = (42, 47, 19), T = 3.45, and the peak
voxel in the left dlPFC: (x, y, z) = (−30, 50, 25), T = 3.17) [7], and left vS (the peak voxel in
the vS: (x, y, z) = (−24, 17, −2), T = 2.98) [13] (Figure 2). No other brain area was found to
have a higher activation than them at the relatively liberal threshold, uncorrected p < 0.005. Large
activations were found in these regions when participants received the outcome of a trial (p < 0.05,
FWE corrected within small-volume clusters). This is because after seeing the outcome of one round,
they update their belief and consequently their expected reward for the next round.
Figure 2: Strong correlation between brain activity in the dlPFC and the left vS after seeing the
outcome of a round and the predicted expected reward for the next round by our model. The
activations were reported with a significance of p < 0.05, FWE across participants corrected in a
priori region of interest. The activation maps are acquired at the threshold, p < 0.005 (uncorrected).
The color in each cluster indicates the level of z-score activation in each voxel.
5.3 Modeling subjects' perception of group cooperativeness
The ratio and the sum of the best-fitting α_0 and β_0 that we obtain from the model are interpretable
within the context of cognitive science. In the beta-binomial update equations (4) and (5), α
is related to the occurrence of the action "contribution" and β to "free-riding." Therefore, the ratio
of α_0 to β_0 captures the player's prior belief about the cooperativeness of the group. On the other
hand, after every binomial observation (here, a round), N (here N = 5) is added to the prior. Therefore,
the absolute values of α and β determine the weight that the player gives to the prior compared to
their observations during the game. For example, adding 5 does not change Beta(100, 100) much,
but changes Beta(2, 2) to Beta(7, 2); the former does not alter the chance of contribution versus
free-riding much, while the latter indicates that the group is cooperative.
We estimated the best initial parameters for each player, but is there a unique pair of α_0 and β_0
values that minimizes the round-by-round error, or are there multiple values for the best fit? We
investigated this question by examining the error for all possible parameter values for all players in
our experiments. The error, as a function of α_0 and β_0, for one of the players is shown in Figure
3a as a heat map (darker means smaller error, i.e., better fit). We found that the error function is
continuous and, although there exist multiple best-fitting parameter values, these values define a set of
lines β = aα + c with bounds min ≤ α ≤ max. The lines and bounds are linked to the ratio and
prior weight alluded to above, suggesting that players do consider the prior probability and its weight,
and that best-fitting α_0 and β_0 values have similar characteristics.
We also calculated the average error function over all players for both values of k. As shown in
Figures 3b and 3c, α_0 is larger than β_0 for k = 2, while for k = 4 they are close to each other. Also,
the absolute values of these parameters are larger for k = 2. This implies that when k = 4, players
start out with more caution, to ascertain whether the group is cooperative or not. For k = 2, however,
because only 2 volunteers are enough, they start by giving cooperativeness a higher probability.
The higher absolute value for k = 2 is indicative of the fact that the game tends towards mostly free-riders
for k = 2 and the prior is weighted much more than the observations. Players know only 2 volunteers
are enough, so they can free-ride more frequently but still get the public good.¹
6 Related Work
PGG has previously been analyzed using descriptive models, assuming that only the actions of
players in the previous trial affect decisions in the current trial [8, 24, 25]. As a result, the first trial of
each round cannot be predicted by these models. Moreover, these models only predict the probability
with which each player changes their action. The POMDP model, in contrast, takes all trials of each
¹ We should emphasize that this is the behavior on average and a few subjects do deviate from this behavior.
(a) One player   (b) k = 2   (c) k = 4
Figure 3: Round-by-round error for different initial parameters α_0 and β_0. Darker means lower
error. (a) Error function for one of the players. The function for other players and other k's has the
same linear pattern in terms of continuity, but the location of the low-error line differs among
individuals and k's. (b) Average error function over all players for k = 2. (c) Average error function
for k = 4.
round into account and predicts actions based on prior belief of the player about the cooperativeness
of the group, within the context of maximizing expected reward. Most importantly, the POMDP
model predicts not only actions, but also the expected reward for the next round for each player as
demonstrated in our results above.
POMDPs have previously been used in perceptual decision making [12, 17] and value-based decision
making [5]. The modeled tasks, however, are all single player tasks. A model based on iPOMCPs, an
interactive framework based on POMDPs ([9]) with Monte Carlo sampling, has been used to model a
trust game [10] involving two players. The PGG task we consider involves a larger group of players
(5 in our experiments). Also, the iPOMCP algorithm is complicated and its neural implementation
remains unclear. By comparison, our POMDP model is relatively simple and only uses two parameters
to represent the belief state.
7
Discussion
This paper presents a probabilistic model of social decision making that not only explains human
behavior in volunteer?s dilemma but also predicts the expected reward in each round of the game.
This prediction was validated using neural data recorded from an fMRI scanner. Unlike other existing
models for this task, our model is based on the principle of reward maximization and Bayesian
inference, and does not rely on a subject?s actions directly. In other words, our model is normative.
In addition, as we discussed above, the model parameters that we fit to an individual or k are
interpretable.
One may argue that our model ignores empathy among group members since it assumes that the
players attempt to maximize their own reward. First, an extensive study with auxiliary tasks has shown
that pro-social preferences such as empathy do not explain human behaviour in the public goods
games [3]. Second, one's own reward is not easily separable from others' rewards, as maximizing
expected reward requires cooperation among group members. Third, a major advantage of a normative
model is the fact that different hypotheses can be tested by varying the components of the model.
Here we presented the most general model to avoid over-fitting. Testing different reward functions
could be a fruitful direction of future research.
Although we have not demonstrated that our model can be neurally implemented in the brain, the
model does capture the fundamental components of social decision making required to solve tasks
such as the volunteer's dilemma, namely: belief about others (the belief state in our model), updating
of belief with new observations, knowing that other group members will update their beliefs as
well (modeled via the transition function), prior belief about the people playing the game (the ratio of α_0 to
β_0), the weight of the prior in comparison to observations (the absolute value of the initial parameters), and
maximizing total expected reward (modeled via reward function in MDP/POMDP). Some of these
components may be simplified or combined in a neural implementation but we believe acknowledging
them explicitly in our models will help pave the way for a deeper understanding of the neural
mechanisms underlying human social interactions.
References
[1] M. Archetti. A strategy to increase cooperation in the volunteer's dilemma: Reducing vigilance improves
alarm calls. Evolution, 65(3):885–892, 2011.
[2] N. Bault, B. Pelloux, J. J. Fahrenfort, K. R. Ridderinkhof, and F. van Winden. Neural dynamics of social
tie formation in economic decision-making. Social Cognitive and Affective Neuroscience, 10(6):877–884,
2015.
[3] M. N. Burton-Chellew and S. A. West. Prosocial preferences do not explain human cooperation in
public-goods games. Proceedings of the National Academy of Sciences, 110(1):216–221, 2013.
[4] D. Chung, K. Yun, and J. Jeong. Decoding covert motivations of free riding and cooperation from
multi-feature pattern analysis of signals. Social Cognitive and Affective Neuroscience, 10(9):1210–1218,
2015.
[5] P. Dayan and N. D. Daw. Decision theory, reinforcement learning, and the brain. Cognitive, Affective, &
Behavioral Neuroscience, 8(4):429–453, 2008.
[6] A. Diekmann. Cooperation in an asymmetric volunteer's dilemma game: theory and experimental evidence.
International Journal of Game Theory, 22(1):75–85, 1993.
[7] A. S. R. Fermin, M. Sakagami, T. Kiyonari, Y. Li, Y. Matsumoto, and T. Yamagishi. Representation of
economic preferences in the structure and function of the amygdala and prefrontal cortex. Scientific Reports,
6, 2016.
[8] U. Fischbacher, S. Gächter, and E. Fehr. Are people conditionally cooperative? Evidence from a public
goods experiment. Economics Letters, 71(3):397–404, 2001.
[9] P. J. Gmytrasiewicz and P. Doshi. A framework for sequential planning in multi-agent settings. Journal of
Artificial Intelligence Research, 24:49–79, 2005.
[10] A. Hula, P. R. Montague, and P. Dayan. Monte Carlo planning method estimates planning horizons during
interactive social exchange. PLoS Computational Biology, 11(6):e1004254, 2015.
[11] K. Khalvati and A. K. Mackworth. A fast pairwise heuristic for planning under uncertainty. In Proceedings
of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 187–193, 2013.
[12] K. Khalvati and R. P. N. Rao. A Bayesian framework for modeling confidence in perceptual decision
making. In Advances in Neural Information Processing Systems (NIPS) 28, pages 2413–2421, 2015.
[13] A. Lin, R. Adolphs, and A. Rangel. Social and monetary reward learning engage overlapping neural
substrates. Social Cognitive and Affective Neuroscience, 7(3):274–281, 2012.
[14] E. K. Miller and J. D. Cohen. An integrative theory of prefrontal cortex function. Annual Review of
Neuroscience, 24:167–202, 2001.
[15] M. Olson. The Logic of Collective Action: Public Goods and the Theory of Groups. Harvard University
Press, 1971.
[16] S. A. Park, S. Jeong, and J. Jeong. TV programs that denounce unfair advantage impact women's sensitivity
to defection in the public goods game. Social Neuroscience, 8(6):568–582, 2013.
[17] R. P. N. Rao. Decision making under uncertainty: a neural model based on partially observable Markov
decision processes. Frontiers in Computational Neuroscience, 4, 2010.
[18] C. C. Ruff and E. Fehr. The neurobiology of rewards and values in social decision making. Nature Reviews
Neuroscience, 15:549–562, 2014.
[19] R. D. Smallwood and E. J. Sondik. The optimal control of partially observable Markov processes over a
finite horizon. Operations Research, 21(5):1071–1088, 1973.
[20] T. Smith and R. G. Simmons. Heuristic search value iteration for POMDPs. In Proceedings of the International
Conference on Uncertainty in Artificial Intelligence (UAI), 2004.
[21] N. Steinbeis and E. A. Crone. The link between cognitive control and decision-making across child and
adolescent development. Current Opinion in Behavioral Sciences, 10:28–32, 2016.
[22] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, Cambridge, MA, 2005.
[23] S. M. Tom, C. R. Fox, C. Trepel, and R. A. Poldrack. The neural basis of loss aversion in decision-making
under risk. Science, 315(5811):515–518, 2007.
[24] J. Wang, S. Suri, and D. J. Watts. Cooperation and assortativity with dynamic partner updating.
Proceedings of the National Academy of Sciences, 109(36):14363–14368, 2012.
[25] M. Wunder, S. Suri, and D. J. Watts. Empirical agent based models of cooperation in public goods games.
In Proceedings of the Fourteenth ACM Conference on Electronic Commerce (EC), pages 891–908, 2013.
Safe and efficient off-policy reinforcement learning
Thomas Stepleton
[email protected]
Google DeepMind
Rémi Munos
[email protected]
Google DeepMind
Anna Harutyunyan
[email protected]
Vrije Universiteit Brussel
Marc G. Bellemare
[email protected]
Google DeepMind
Abstract
In this work, we take a fresh look at some old and new algorithms for off-policy, return-based reinforcement learning. Expressing these in a common form, we derive a novel algorithm, Retrace(λ), with three desired properties: (1) it has low variance; (2) it safely uses samples collected from any behaviour policy, whatever its degree of "off-policyness"; and (3) it is efficient as it makes the best use of samples collected from near on-policy behaviour policies. We analyze the contractive nature of the related operator under both off-policy policy evaluation and control settings and derive online sample-based algorithms. We believe this is the first return-based off-policy control algorithm converging a.s. to Q* without the GLIE assumption (Greedy in the Limit with Infinite Exploration). As a corollary, we prove the convergence of Watkins' Q(λ), which was an open problem since 1989. We illustrate the benefits of Retrace(λ) on a standard suite of Atari 2600 games.
One fundamental trade-off in reinforcement learning lies in the definition of the update target: should one estimate Monte Carlo returns or bootstrap from an existing Q-function? Return-based methods (where return refers to the sum of discounted rewards Σ_t γ^t r_t) offer some advantages over value bootstrap methods: they are better behaved when combined with function approximation, and quickly propagate the fruits of exploration (Sutton, 1996). On the other hand, value bootstrap methods are more readily applied to off-policy data, a common use case. In this paper we show that learning from returns need not be at cross-purposes with off-policy learning.
We start from the recent work of Harutyunyan et al. (2016), who show that naive off-policy policy evaluation, without correcting for the "off-policyness" of a trajectory, still converges to the desired Q^π value function provided the behavior μ and target π policies are not too far apart (the maximum allowed distance depends on the λ parameter). Their Q^π(λ) algorithm learns from trajectories generated by μ simply by summing discounted off-policy corrected rewards at each time step. Unfortunately, the assumption that μ and π are close is restrictive, as well as difficult to uphold in the control case, where the target policy is greedy with respect to the current Q-function. In that sense this algorithm is not safe: it does not handle the case of arbitrary "off-policyness".
Alternatively, the Tree-backup (TB(λ)) algorithm (Precup et al., 2000) tolerates arbitrary target/behavior discrepancies by scaling information (here called traces) from future temporal differences by the product of target policy probabilities. TB(λ) is not efficient in the "near on-policy" case (similar π and μ), though, as traces may be cut prematurely, blocking learning from full returns.
In this work, we express several off-policy, return-based algorithms in a common form. From this we derive an improved algorithm, Retrace(λ), which is both safe and efficient, enjoying convergence guarantees for off-policy policy evaluation and, more importantly, for the control setting.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Retrace(?) can learn from full returns retrieved from past policy data, as in the context of experience
replay (Lin, 1993), which has returned to favour with advances in deep reinforcement learning (Mnih
et al., 2015; Schaul et al., 2016). Off-policy learning is also desirable for exploration, since it allows
the agent to deviate from the target policy currently under evaluation.
To the best of our knowledge, this is the ?rst online return-based off-policy control algorithm which
does not require the GLIE (Greedy in the Limit with In?nite Exploration) assumption (Singh et al.,
2000). In addition, we provide as a corollary the ?rst proof of convergence of Watkins? Q(?) (see,
e.g., Watkins, 1989; Sutton and Barto, 1998).
Finally, we illustrate the signi?cance of Retrace(?) in a deep learning setting by applying it to the
suite of Atari 2600 games provided by the Arcade Learning Environment (Bellemare et al., 2013).
1 Notation
We consider an agent interacting with a Markov Decision Process (X, A, γ, P, r). X is a finite state space, A the action space, γ ∈ [0, 1) the discount factor, P the transition function mapping state-action pairs (x, a) ∈ X × A to distributions over X, and r : X × A → [−R_MAX, R_MAX] is the reward function. For notational simplicity we will consider a finite action space, but the case of an infinite, possibly continuous, action space can be handled by the Retrace(λ) algorithm as well. A policy π is a mapping from X to a distribution over A. A Q-function Q maps each state-action pair (x, a) to a value in R; in particular, the reward r is a Q-function. For a policy π we define the operator P^π:

(P^π Q)(x, a) := Σ_{x′∈X} Σ_{a′∈A} P(x′|x, a) π(a′|x′) Q(x′, a′).

The value function for a policy π, Q^π, describes the expected discounted sum of rewards associated with following π from a given state-action pair. Using operator notation, we write this as

Q^π := Σ_{t≥0} γ^t (P^π)^t r.   (1)

The Bellman operator T^π for a policy π is defined as T^π Q := r + γP^π Q and its fixed point is Q^π, i.e. T^π Q^π = Q^π = (I − γP^π)^{−1} r. The Bellman optimality operator introduces a maximization over the set of policies:

T Q := r + γ max_π P^π Q.   (2)

Its fixed point is Q*, the unique optimal value function (Puterman, 1994). It is this quantity that we will seek to obtain when we talk about the "control setting".

Return-based Operators: The λ-return extension (Sutton, 1988) of the Bellman operators considers exponentially weighted sums of n-step returns:

T_λ^π Q := (1 − λ) Σ_{n≥0} λ^n (T^π)^{n+1} Q = Q + (I − λγP^π)^{−1} (T^π Q − Q),

where T^π Q − Q is the Bellman residual of Q for policy π. Examination of the above shows that Q^π is also the fixed point of T_λ^π. At one extreme (λ = 0) we have the Bellman operator T_{λ=0}^π Q = T^π Q, while at the other (λ = 1) we have the policy evaluation operator T_{λ=1}^π Q = Q^π, which can be estimated using Monte Carlo methods (Sutton and Barto, 1998). Intermediate values of λ trade off estimation bias with sample variance (Kearns and Singh, 2000).

We seek to evaluate a target policy π using trajectories drawn from a behaviour policy μ. If π = μ, we are on-policy; otherwise, we are off-policy. We will consider trajectories of the form

x_0 = x, a_0 = a, r_0, x_1, a_1, r_1, x_2, a_2, r_2, ...

with a_t ∼ μ(·|x_t), r_t = r(x_t, a_t) and x_{t+1} ∼ P(·|x_t, a_t). We denote by F_t this sequence up to time t, and write E_μ for the expectation with respect to both μ and the MDP transition probabilities. Throughout, we write ∥·∥ for the supremum norm.
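As a concrete illustration of this operator notation (our own sketch, not code from the paper), the value function Q^π = (I − γP^π)^{−1} r can be computed directly for a small finite MDP by materializing P^π as a matrix:

```python
# Illustrative sketch: computing Q^pi = (I - gamma * P^pi)^{-1} r
# for a small finite MDP, following the operator notation above.
import numpy as np

rng = np.random.default_rng(0)
nX, nA, gamma = 5, 3, 0.9

P = rng.dirichlet(np.ones(nX), size=(nX, nA))  # P[x, a] is a distribution over x'
r = rng.uniform(-1, 1, size=(nX, nA))          # reward r(x, a)
pi = rng.dirichlet(np.ones(nA), size=nX)       # policy pi(a | x)

# (P^pi Q)(x, a) = sum_{x', a'} P(x'|x, a) pi(a'|x') Q(x', a'),
# expressed as a linear operator on Q flattened to a vector of size nX * nA.
P_pi = np.zeros((nX * nA, nX * nA))
for x in range(nX):
    for a in range(nA):
        for x2 in range(nX):
            for a2 in range(nA):
                P_pi[x * nA + a, x2 * nA + a2] = P[x, a, x2] * pi[x2, a2]

Q_pi = np.linalg.solve(np.eye(nX * nA) - gamma * P_pi, r.flatten())

# Fixed-point check: T^pi Q^pi = r + gamma * P^pi Q^pi should equal Q^pi.
assert np.allclose(r.flatten() + gamma * P_pi @ Q_pi, Q_pi)
```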
2 Off-Policy Algorithms
We are interested in two related off-policy learning problems. In the policy evaluation setting, we are given a fixed policy π whose value Q^π we wish to estimate from sample trajectories drawn from a behaviour policy μ. In the control setting, we consider a sequence of policies that depend on our own sequence of Q-functions (such as ε-greedy policies), and seek to approximate Q*.
The general operator that we consider for comparing several return-based off-policy algorithms is:

RQ(x, a) := Q(x, a) + E_μ[ Σ_{t≥0} γ^t (∏_{s=1}^{t} c_s) ( r_t + γE_π Q(x_{t+1}, ·) − Q(x_t, a_t) ) ],   (3)

for some non-negative coefficients (c_s), where we write E_π Q(x, ·) := Σ_a π(a|x) Q(x, a) and define ∏_{s=1}^{t} c_s = 1 when t = 0. By extension of the idea of eligibility traces (Sutton and Barto, 1998), we informally call the coefficients (c_s) the traces of the operator.
Importance sampling (IS): c_s = π(a_s|x_s)/μ(a_s|x_s). Importance sampling is the simplest way to correct for the discrepancy between μ and π when learning from off-policy returns (Precup et al., 2000, 2001; Geist and Scherrer, 2014). The off-policy correction uses the product of the likelihood ratios between π and μ. Notice that RQ defined in (3) with this choice of (c_s) yields Q^π for any Q. For Q = 0 we recover the basic IS estimate Σ_{t≥0} γ^t (∏_{s=1}^{t} c_s) r_t, thus (3) can be seen as a variance reduction technique (with a baseline Q). It is well known that IS estimates can suffer from large (even possibly infinite) variance, mainly due to the variance of the product π(a_1|x_1)/μ(a_1|x_1) ··· π(a_t|x_t)/μ(a_t|x_t), which has motivated further variance reduction techniques such as in (Mahmood and Sutton, 2015; Mahmood et al., 2015; Hallak et al., 2015).
Off-policy Q^π(λ) and Q*(λ): c_s = λ. A recent alternative proposed by Harutyunyan et al. (2016) introduces an off-policy correction based on a Q-baseline (instead of correcting the probability of the sample path like in IS). This approach, called Q^π(λ) and Q*(λ) for policy evaluation and control, respectively, corresponds to the choice c_s = λ. It offers the advantage of avoiding the blow-up of the variance of the product of ratios encountered with IS. Interestingly, this operator contracts around Q^π provided that π and μ are sufficiently close to each other. Defining ε := max_x ∥π(·|x) − μ(·|x)∥_1, the level of "off-policyness", the authors prove that the operator defined by (3) with c_s = λ is a contraction mapping around Q^π for λ < (1 − γ)/(γε), and around Q* for the worst case of λ < (1 − γ)/(2γ). Unfortunately, Q^π(λ) requires knowledge of ε, and the condition for Q*(λ) is very conservative. Neither Q^π(λ) nor Q*(λ) are safe, as they do not guarantee convergence for arbitrary π and μ.
Tree-backup, TB(λ): c_s = λπ(a_s|x_s). The TB(λ) algorithm of Precup et al. (2000) corrects for the target/behaviour discrepancy by multiplying each term of the sum by the product of target policy probabilities. The corresponding operator defines a contraction mapping for any policies π and μ, which makes it a safe algorithm. However, this algorithm is not efficient in the near on-policy case (where μ and π are similar), as it unnecessarily cuts the traces, preventing it from making use of full returns: indeed, we need not discount stochastic on-policy transitions (as shown by Harutyunyan et al.'s results about Q^π).
Retrace(λ): c_s = λ min(1, π(a_s|x_s)/μ(a_s|x_s)). Our contribution is an algorithm, Retrace(λ), that takes the best of the three previous algorithms. Retrace(λ) uses an importance sampling ratio truncated at 1. Compared to IS, it does not suffer from the variance explosion of the product of IS ratios. Now, similarly to Q^π(λ) and unlike TB(λ), it does not cut the traces in the on-policy case, making it possible to benefit from the full returns. In the off-policy case, the traces are safely cut, similarly to TB(λ). In particular, min(1, π(a_s|x_s)/μ(a_s|x_s)) ≥ π(a_s|x_s): Retrace(λ) does not cut the traces as much as TB(λ). In the subsequent sections, we will show the following:
• For any traces 0 ≤ c_s ≤ π(a_s|x_s)/μ(a_s|x_s) (thus including the Retrace(λ) operator), the return-based operator (3) is a γ-contraction around Q^π, for arbitrary policies π and μ.
• In the control case (where π is replaced by a sequence of increasingly greedy policies), the online Retrace(λ) algorithm converges a.s. to Q*, without requiring the GLIE assumption.
• As a corollary, Watkins's Q(λ) converges a.s. to Q*.
Algorithm             Definition of c_s                    Estimation variance   Guaranteed convergence*   Uses full returns (near on-policy)
Importance sampling   π(a_s|x_s) / μ(a_s|x_s)              High                  for any π, μ              yes
Q^π(λ)                λ                                    Low                   for μ close to π          yes
TB(λ)                 λ π(a_s|x_s)                         Low                   for any π, μ              no
Retrace(λ)            λ min(1, π(a_s|x_s) / μ(a_s|x_s))    Low                   for any π, μ              yes

Table 1: Properties of several algorithms defined in terms of the general operator given in (3). *Guaranteed convergence of the expected operator R.
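The four algorithms of Table 1 differ only in how the trace coefficient is computed from the probabilities the two policies assign to the sampled action. The sketch below is our illustration of that one-line difference; the function and argument names are ours, not the paper's.

```python
# Illustrative sketch: the four trace coefficients of Table 1 for a
# sampled action with target probability pi_a = pi(a_s | x_s) and
# behaviour probability mu_a = mu(a_s | x_s).

def trace(algorithm: str, pi_a: float, mu_a: float, lam: float) -> float:
    if algorithm == "importance_sampling":
        return pi_a / mu_a                  # unbounded: high variance
    if algorithm == "q_pi_lambda":
        return lam                          # ignores mu: unsafe if mu far from pi
    if algorithm == "tree_backup":
        return lam * pi_a                   # safe, but cuts traces even on-policy
    if algorithm == "retrace":
        return lam * min(1.0, pi_a / mu_a)  # safe and does not cut on-policy
    raise ValueError(algorithm)

# On-policy (pi == mu): Retrace keeps the full trace (1.0), TB cuts it.
print(trace("retrace", 0.5, 0.5, lam=1.0))      # 1.0
print(trace("tree_backup", 0.5, 0.5, lam=1.0))  # 0.5
```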
3 Analysis of Retrace(λ)
We will in turn analyze both off-policy policy evaluation and control settings. We will show that R
is a contraction mapping in both settings (under a mild additional assumption for the control case).
3.1 Policy Evaluation
Consider a fixed target policy π. For ease of exposition we consider a fixed behaviour policy μ, noting that our result extends to the setting of sequences of behaviour policies (μ_k : k ∈ N). Our first result states the γ-contraction of the operator (3) defined by any set of non-negative coefficients c_s = c_s(a_s, F_s) (in order to emphasize that c_s can be a function of the whole history F_s), under the assumption that 0 ≤ c_s ≤ π(a_s|x_s)/μ(a_s|x_s).
Theorem 1. The operator R defined by (3) has a unique fixed point Q^π. Furthermore, if for each a_s ∈ A and each history F_s we have c_s = c_s(a_s, F_s) ∈ [0, π(a_s|x_s)/μ(a_s|x_s)], then for any Q-function Q

∥RQ − Q^π∥ ≤ γ∥Q − Q^π∥.

The following lemma will be useful in proving Theorem 1 (proof in the appendix).

Lemma 1. The difference between RQ and its fixed point Q^π is

RQ(x, a) − Q^π(x, a) = E_μ[ Σ_{t≥1} γ^t (∏_{i=1}^{t−1} c_i) ( E_π[(Q − Q^π)(x_t, ·)] − c_t (Q − Q^π)(x_t, a_t) ) ].
Proof (Theorem 1). The fact that Q^π is the fixed point of the operator R is obvious from (3), since

E_{x_{t+1}∼P(·|x_t,a_t)}[ r_t + γE_π Q^π(x_{t+1}, ·) − Q^π(x_t, a_t) ] = (T^π Q^π − Q^π)(x_t, a_t) = 0,

since Q^π is the fixed point of T^π. Now, from Lemma 1, and defining ΔQ := Q − Q^π, we have

RQ(x, a) − Q^π(x, a) = Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t}} [ (∏_{i=1}^{t−1} c_i) ( E_π ΔQ(x_t, ·) − c_t ΔQ(x_t, a_t) ) ]
= Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t−1}} [ (∏_{i=1}^{t−1} c_i) ( E_π ΔQ(x_t, ·) − E_{a_t}[ c_t(a_t, F_t) ΔQ(x_t, a_t) | F_t ] ) ]
= Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t−1}} [ (∏_{i=1}^{t−1} c_i) Σ_b ( π(b|x_t) − μ(b|x_t) c_t(b, F_t) ) ΔQ(x_t, b) ].

Now since π(b|x_t) − μ(b|x_t) c_t(b, F_t) ≥ 0, we have that RQ(x, a) − Q^π(x, a) = Σ_{y,b} w_{y,b} ΔQ(y, b), i.e. a linear combination of ΔQ(y, b) weighted by non-negative coefficients:

w_{y,b} := Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t−1}} [ (∏_{i=1}^{t−1} c_i) ( π(b|x_t) − μ(b|x_t) c_t(b, F_t) ) I{x_t = y} ].

The sum of those coefficients is:

Σ_{y,b} w_{y,b} = Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t−1}} [ (∏_{i=1}^{t−1} c_i) Σ_b ( π(b|x_t) − μ(b|x_t) c_t(b, F_t) ) ]
= Σ_{t≥1} γ^t E_{x_{1:t}, a_{1:t−1}} [ (∏_{i=1}^{t−1} c_i) E_{a_t}[ 1 − c_t(a_t, F_t) | F_t ] ]
= Σ_{t≥1} γ^t E_μ [ (∏_{i=1}^{t−1} c_i) (1 − c_t) ]
= Σ_{t≥1} γ^t E_μ [ ∏_{i=1}^{t−1} c_i ] − Σ_{t≥1} γ^t E_μ [ ∏_{i=1}^{t} c_i ] = γC − (C − 1),

where C := Σ_{t≥0} γ^t E_μ [ ∏_{i=1}^{t} c_i ]. Since C ≥ 1, we have that Σ_{y,b} w_{y,b} ≤ γ. Thus RQ(x, a) − Q^π(x, a) is a sub-convex combination of ΔQ(y, b) weighted by non-negative coefficients w_{y,b} which sum to (at most) γ, thus R is a γ-contraction mapping around Q^π.
Remark 1. Notice that the coefficient C in the proof of Theorem 1 depends on (x, a). If we write η(x, a) := 1 − (1 − γ) E_μ[ Σ_{t≥0} γ^t (∏_{s=1}^{t} c_s) ], then we have shown that

|RQ(x, a) − Q^π(x, a)| ≤ η(x, a) ∥Q − Q^π∥.

Thus η(x, a) ∈ [0, γ] is a (x, a)-specific contraction coefficient, which is γ when c_1 = 0 (the trace is cut immediately) and can be close to zero when learning from full returns (E_μ[c_t] close to 1 for all t).
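Theorem 1 can be checked numerically on small random MDPs. The following sketch is our own construction (it uses the Markovian Retrace traces and the closed form R Q = Q + (I − γP^{cμ})^{−1}(T^π Q − Q) for such traces, derived in the control section below) and verifies the γ-contraction:

```python
# Illustrative numerical check of Theorem 1 on a small random MDP:
# with c(a', x') = lam * min(1, pi(a'|x') / mu(a'|x')), the operator
# R Q = Q + (I - gamma * P^{c mu})^{-1} (T^pi Q - Q) should satisfy
# ||R Q - Q^pi|| <= gamma * ||Q - Q^pi||.
import numpy as np

rng = np.random.default_rng(1)
nX, nA, gamma, lam = 4, 3, 0.9, 1.0
P = rng.dirichlet(np.ones(nX), size=(nX, nA))
r = rng.uniform(-1, 1, size=(nX, nA))
pi = rng.dirichlet(np.ones(nA), size=nX)
mu = rng.dirichlet(np.ones(nA), size=nX)
c = lam * np.minimum(1.0, pi / mu)           # Retrace traces, per (x, a)

def transition_op(weights):
    # (nX*nA) x (nX*nA) matrix of sum_{x',a'} P(x'|x,a) w(x',a') Q(x',a').
    M = np.zeros((nX * nA, nX * nA))
    for x in range(nX):
        for a in range(nA):
            M[x * nA + a] = (P[x, a][:, None] * weights).flatten()
    return M

P_pi = transition_op(pi)
P_cmu = transition_op(mu * c)
Q_pi = np.linalg.solve(np.eye(nX * nA) - gamma * P_pi, r.flatten())

Q = rng.uniform(-5, 5, size=nX * nA)         # arbitrary starting Q
bellman_residual = r.flatten() + gamma * P_pi @ Q - Q
RQ = Q + np.linalg.solve(np.eye(nX * nA) - gamma * P_cmu, bellman_residual)
assert np.max(np.abs(RQ - Q_pi)) <= gamma * np.max(np.abs(Q - Q_pi)) + 1e-8
```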
3.2 Control
In the control setting, the single target policy π is replaced by a sequence of policies (π_k) which depend on (Q_k). While most prior work has focused on strictly greedy policies, here we consider the larger class of increasingly greedy sequences. We now make this notion precise.

Definition 1. We say that a sequence of policies (π_k : k ∈ N) is increasingly greedy w.r.t. a sequence (Q_k : k ∈ N) of Q-functions if the following property holds for all k: P^{π_{k+1}} Q_{k+1} ≥ P^{π_k} Q_{k+1}.

Intuitively, this means that each π_{k+1} is at least as greedy as the previous policy π_k for Q_{k+1}. Many natural sequences of policies are increasingly greedy, including ε_k-greedy policies (with non-increasing ε_k) and softmax policies (with non-increasing temperature). See proofs in the appendix.

We will assume that c_s = c_s(a_s, F_s) = c(a_s, x_s) is Markovian, in the sense that it depends on x_s, a_s (as well as the policies π and μ) only, but not on the full past history. This allows us to define the (sub)-probability transition operator

(P^{cμ} Q)(x, a) := Σ_{x′} Σ_{a′} p(x′|x, a) μ(a′|x′) c(a′, x′) Q(x′, a′).

Finally, as an additional requirement for convergence in the control case, we assume that Q_0 satisfies T^{π_0} Q_0 ≥ Q_0 (this can be achieved by a pessimistic initialization Q_0 = −R_MAX/(1 − γ)).
Theorem 2. Consider an arbitrary sequence of behaviour policies (μ_k) (which may depend on (Q_k)) and a sequence of target policies (π_k) that are increasingly greedy w.r.t. the sequence (Q_k):

Q_{k+1} = R_k Q_k,

where the return operator R_k is defined by (3) for π_k and μ_k and a Markovian c_s = c(a_s, x_s) ∈ [0, π_k(a_s|x_s)/μ_k(a_s|x_s)]. Assume the target policies π_k are ε_k-away from the greedy policies w.r.t. Q_k, in the sense that T^{π_k} Q_k ≥ T Q_k − ε_k ∥Q_k∥ e, where e is the vector with 1-components. Further suppose that T^{π_0} Q_0 ≥ Q_0. Then for any k ≥ 0,

∥Q_{k+1} − Q*∥ ≤ γ∥Q_k − Q*∥ + ε_k ∥Q_k∥.

In consequence, if ε_k → 0 then Q_k → Q*.
Sketch of Proof (The full proof is in the appendix). Using P^{cμ_k}, the Retrace(λ) operator rewrites as

R_k Q = Q + Σ_{t≥0} γ^t (P^{cμ_k})^t (T^{π_k} Q − Q) = Q + (I − γP^{cμ_k})^{−1} (T^{π_k} Q − Q).

We now lower- and upper-bound the term Q_{k+1} − Q*.

Upper bound on Q_{k+1} − Q*. We prove that Q_{k+1} − Q* ≤ A_k (Q_k − Q*) with A_k := γ(I − γP^{cμ_k})^{−1} [P^{π_k} − P^{cμ_k}]. Since c_t ∈ [0, π(a_t|x_t)/μ(a_t|x_t)], we deduce that A_k has non-negative elements, whose sum over each row is at most γ. Thus

Q_{k+1} − Q* ≤ γ ∥Q_k − Q*∥ e.   (4)

Lower bound on Q_{k+1} − Q*. Using the fact that T^{π_k} Q_k ≥ T^{π*} Q_k − ε_k ∥Q_k∥ e, we have

Q_{k+1} − Q* ≥ Q_{k+1} − T^{π_k} Q_k + γP^{π*} (Q_k − Q*) − ε_k ∥Q_k∥ e
= γP^{cμ_k} (I − γP^{cμ_k})^{−1} (T^{π_k} Q_k − Q_k) + γP^{π*} (Q_k − Q*) − ε_k ∥Q_k∥ e.   (5)

Lower bound on T^{π_k} Q_k − Q_k. Since the sequence (π_k) is increasingly greedy w.r.t. (Q_k), we have

T^{π_{k+1}} Q_{k+1} − Q_{k+1} ≥ T^{π_k} Q_{k+1} − Q_{k+1} = r + (γP^{π_k} − I) R_k Q_k = B_k (T^{π_k} Q_k − Q_k),   (6)

where B_k := γ[P^{π_k} − P^{cμ_k}](I − γP^{cμ_k})^{−1}. Since P^{π_k} − P^{cμ_k} and (I − γP^{cμ_k})^{−1} are non-negative matrices, so is B_k. Thus T^{π_k} Q_k − Q_k ≥ B_{k−1} B_{k−2} ··· B_0 (T^{π_0} Q_0 − Q_0) ≥ 0, since we assumed T^{π_0} Q_0 − Q_0 ≥ 0. Thus, (5) implies that

Q_{k+1} − Q* ≥ γP^{π*} (Q_k − Q*) − ε_k ∥Q_k∥ e.

Combining the above with (4), we deduce ∥Q_{k+1} − Q*∥ ≤ γ∥Q_k − Q*∥ + ε_k ∥Q_k∥. When ε_k → 0, we further deduce that the Q_k are bounded, thus Q_k → Q*.
3.3 Online algorithms

So far we have analyzed the contraction properties of the expected R operators. We now describe online algorithms which can learn from sample trajectories. We analyze the algorithms in the every-visit form (Sutton and Barto, 1998), which is the more practical generalization of the first-visit form. In this section, we will only consider the Retrace(λ) algorithm defined with the coefficient c = λ min(1, π/μ). For that c, let us rewrite the operator P^{cμ} as λP^{π∧μ}, where

(P^{π∧μ} Q)(x, a) := Σ_y Σ_b p(y|x, a) min(π(b|y), μ(b|y)) Q(y, b),

and write the Retrace operator as RQ = Q + (I − λγP^{π∧μ})^{−1} (T^π Q − Q). We focus on the control case, noting that a similar (and simpler) result can be derived for policy evaluation.
Theorem 3. Consider a sequence of sample trajectories, with the k-th trajectory x_0, a_0, r_0, x_1, a_1, r_1, ... generated by following μ_k: a_t ∼ μ_k(·|x_t). For each (x, a) along this trajectory, with s being the time of first occurrence of (x, a), update

Q_{k+1}(x, a) ← Q_k(x, a) + α_k Σ_{t≥s} δ_t^{π_k} Σ_{j=s}^{t} γ^{t−j} (∏_{i=j+1}^{t} c_i) I{x_j, a_j = x, a},   (7)

where δ_t^{π_k} := r_t + γE_{π_k} Q_k(x_{t+1}, ·) − Q_k(x_t, a_t) and α_k = α_k(x_s, a_s). We consider the Retrace(λ) algorithm, where c_i = λ min(1, π(a_i|x_i)/μ(a_i|x_i)). Assume that (π_k) are increasingly greedy w.r.t. (Q_k) and are each ε_k-away from the greedy policies (π_{Q_k}), i.e. max_x ∥π_k(·|x) − π_{Q_k}(·|x)∥_1 ≤ ε_k, with ε_k → 0. Assume that P^{π_k} and P^{π_k∧μ_k} asymptotically commute: lim_k ∥P^{π_k} P^{π_k∧μ_k} − P^{π_k∧μ_k} P^{π_k}∥ = 0. Assume further that (1) all states and actions are visited infinitely often: Σ_{t≥0} P{x_t, a_t = x, a} ≥ D > 0, (2) the sample trajectories are finite in terms of the second moment of their lengths T_k: E_{μ_k} T_k^2 < ∞, and (3) the stepsizes obey the usual Robbins-Monro conditions. Then Q_k → Q* a.s.
The proof extends similar convergence proofs of TD(λ) by Bertsekas and Tsitsiklis (1996) and of optimistic policy iteration by Tsitsiklis (2003), and is provided in the appendix. Notice that, compared to Theorem 2, we do not assume that T^{π_0} Q_0 − Q_0 ≥ 0 here. However, we make the additional (rather technical) assumption that P^{π_k} and P^{π_k∧μ_k} commute at the limit. This is satisfied for example when the probability assigned by the behavior policy μ_k(·|x) to the greedy action π_{Q_k}(x) is independent of x. Examples include ε-greedy policies, or more generally mixtures between the greedy policy π_{Q_k} and an arbitrary distribution ν (see Lemma 5 in the appendix for the proof):

μ_k(a|x) = ε ν(a|x) / (1 − ν(π_{Q_k}(x)|x)) · I{a ≠ π_{Q_k}(x)} + (1 − ε) I{a = π_{Q_k}(x)}.   (8)

Notice that the mixture coefficient ε needs not go to 0.
4
4.1
Discussion of the results
Choice of the trace coef?cients cs
s |xs )
Theorems 1 and 2 ensure convergence to Q? and Q? for any trace coef?cient cs ? [0, ?(a
?(as |xs ) ].
However, to make the best choice of cs , we need to consider the speed of convergence, which
depends on both (1) the variance of the online estimate, which indicates how many online updates
are required in a single iteration of R, and (2) the contraction coef?cient of R.
Variance: The variance of the estimate strongly depends on the variance of the product trace
(c1 . . . ct ), which is not an easy quantity to control in general, as the (cs ) are usually
? ? tnot inde-?
pendent. However, assuming independence and stationarity of (cs ), we have that V
t ? c1 . . . c t
?
is at least t ? 2t V(c)t , which is ?nite only if V(c) < 1/? 2 . Thus, an important requirement for a
numerically stable algorithm is for V(c) to be as small as possible, and certainly no more than 1/? 2 .
? ?(a|x)
?2
?
This rules out importance sampling (for which c = ?(a|x)
a ?(a|x) ?(a|x) ? 1 ,
?(a|x) , and V(c|x) =
which may be larger than 1/? 2 for some ? and ?), and is the reason we choose c ? 1.
Contraction speed: The contraction coefficient η ∈ [0, γ] of R (see Remark 1) depends on how much the traces have been cut, and should be as small as possible (since it takes log(1/ε)/log(1/η) iterations of R to obtain an ε-approximation). It is smallest when the traces are not cut at all (i.e. if c_s = 1 for all s, R is the policy evaluation operator, which produces Q^π in a single iteration). Indeed, when the traces are cut, we do not benefit from learning from full returns (in the extreme, c_1 = 0 and R reduces to the (one-step) Bellman operator, with η = γ).

A reasonable trade-off between low variance (when the c_s are small) and high contraction speed (when the c_s are large) is given by Retrace(λ), for which we provide the convergence of the online algorithm. If we relax the assumption that the trace is Markovian (in which case only the result for policy evaluation has been proven so far), we could trade off a low trace at some time for a possibly larger-than-1 trace at another time, as long as their product is less than 1. A possible choice could be

c_s = λ min( 1/(c_1 ··· c_{s−1}), π(a_s|x_s)/μ(a_s|x_s) ).   (9)

4.2 Other topics of discussion
No GLIE assumption. The crucial point of Theorem 2 is that convergence to Q* occurs for arbitrary behaviour policies. Thus the online result in Theorem 3 does not require the behaviour policies to become greedy in the limit with infinite exploration (i.e. the GLIE assumption, Singh et al., 2000). We believe Theorem 3 provides the first convergence result to Q* for a λ-return (with λ > 0) algorithm that does not require this (hard to satisfy) assumption.

Proof of Watkins' Q(λ). As a corollary of Theorem 3, when selecting our target policies π_k to be greedy w.r.t. Q_k (i.e. ε_k = 0), we deduce that Watkins' Q(λ) (e.g., Watkins, 1989; Sutton and Barto, 1998) converges a.s. to Q* (under the assumption that μ_k commutes asymptotically with the greedy policies, which is satisfied for e.g. μ_k defined by (8)). We believe this is the first such proof.

Increasingly greedy policies. The assumption that the sequence of target policies (π_k) is increasingly greedy w.r.t. the sequence of (Q_k) is more general than just considering greedy policies w.r.t. (Q_k) (which is Watkins's Q(λ)), and leads to more efficient algorithms. Indeed, using non-greedy target policies π_k may speed up convergence as the traces are not cut as frequently. Of course, in order to converge to Q*, we eventually need the target policies (and not the behaviour policies, as mentioned above) to become greedy in the limit (i.e. ε_k → 0 as defined in Theorem 2).

Comparison to Q^π(λ). Unlike Retrace(λ), Q^π(λ) does not need to know the behaviour policy μ. However, it fails to converge when μ is far from π. Retrace(λ) uses its knowledge of μ (for the chosen actions) to cut the traces and safely handle arbitrary policies π and μ.

Comparison to TB(λ). Similarly to Q^π(λ), TB(λ) does not need knowledge of the behaviour policy μ. But as a consequence, TB(λ) is not able to benefit from possible near on-policy situations, cutting traces unnecessarily when π and μ are close.
[Figure 1: Inter-algorithm score distribution for λ-return (λ = 1) variants and Q-Learning (λ = 0).]
Estimating the behavior policy. In the case μ is unknown, it is reasonable to build an estimate μ̂ from observed samples and use μ̂ instead of μ in the definition of the trace coefficients c_s. This may actually even lead to a better estimate, as analyzed by Li et al. (2015).

Continuous action space. Let us mention that Theorems 1 and 2 extend to the case of (measurable) continuous or infinite action spaces. The trace coefficients will make use of the densities min(1, dπ/dμ) instead of the probabilities min(1, π/μ). This is not possible with TB(λ).

Open questions include: (1) removing the technical assumption that P^{π_k} and P^{π_k∧μ_k} asymptotically commute, and (2) relaxing the Markov assumption in the control case in order to allow trace coefficients c_s of the form (9).
5 Experimental Results
To validate our theoretical results, we employ Retrace(λ) in an experience replay (Lin, 1993) setting, where sample transitions are stored within a large but bounded replay memory and subsequently replayed as if they were new experience. Naturally, older data in the memory is usually drawn from a policy which differs from the current policy, offering an excellent point of comparison for the algorithms presented in Section 2.
Our agent adapts the DQN architecture of Mnih et al. (2015) to replay short sequences from the memory (details in the appendix) instead of single transitions. The Q-function target update for a sample sequence x_t, a_t, r_t, ..., x_{t+k} is

ΔQ(x_t, a_t) = Σ_{s=t}^{t+k−1} γ^{s−t} (∏_{i=t+1}^{s} c_i) [ r(x_s, a_s) + γE_π Q(x_{s+1}, ·) − Q(x_s, a_s) ].
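A direct implementation of this target for one replayed sub-sequence accumulates discounted, trace-weighted TD errors. The sketch below is illustrative (the names are ours, not the agent's code) and shows the Retrace(λ) traces; the other algorithms of Section 2 only change the trace line. In practice, μ's probabilities would be recorded at acting time and stored with the transitions.

```python
# Illustrative sketch of the sequence target above for one replayed
# sub-sequence (x_t, a_t, r_t, ..., x_{t+k}), returning Delta Q(x_t, a_t).
import numpy as np

def sequence_target_delta(Q, seq, pi, mu, gamma=0.99, lam=1.0):
    """seq: list of k transitions (x, a, r, x_next); pi, mu: (nX, nA) arrays."""
    delta, discount, trace_prod = 0.0, 1.0, 1.0
    for i, (x, a, r, x_next) in enumerate(seq):
        if i > 0:  # c_i for i = t+1, ..., s; the first step carries no trace
            trace_prod *= lam * min(1.0, pi[x, a] / mu[x, a])
        td = r + gamma * np.dot(pi[x_next], Q[x_next]) - Q[x, a]
        delta += discount * trace_prod * td
        discount *= gamma
    return delta
```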
We compare our algorithms' performance on 60 different Atari 2600 games in the Arcade Learning Environment (Bellemare et al., 2013) using Bellemare et al.'s inter-algorithm score distribution. Inter-algorithm scores are normalized so that 0 and 1 respectively correspond to the worst and best score for a particular game, within the set of algorithms under comparison. If g ∈ {1, ..., 60} is a game and z_{g,a} the inter-algorithm score on g for algorithm a, then the score distribution function is f(x) := |{g : z_{g,a} ≥ x}| / 60. Roughly, a strictly higher curve corresponds to a better algorithm.
Across values of λ, λ = 1 performs best, save for Q*(λ), where λ = 0.5 obtains slightly superior performance. However, Q*(λ) is highly sensitive to the choice of λ (see Figure 1, left, and Table 2 in the appendix). Both Retrace(λ) and TB(λ) achieve dramatically higher performance than Q-Learning early on and maintain their advantage throughout. Compared to TB(λ), Retrace(λ) offers a narrower but still marked advantage, being the best performer on 30 games; TB(λ) claims 15 of the remainder. Per-game details are given in the appendix.
Conclusion. Retrace(λ) can be seen as an algorithm that automatically adjusts, efficiently and safely, the length of the return to the degree of "off-policyness" of any available data.
Acknowledgments. The authors thank Daan Wierstra, Nicolas Heess, Hado van Hasselt, Ziyu Wang, David Silver, Audrunas Gruslys, Georg Ostrovski, Hubert Soyer, and others at Google DeepMind for their very useful feedback on this work.
References
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The Arcade Learning Environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279.
Bertsekas, D. P. and Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientific.
Geist, M. and Scherrer, B. (2014). Off-policy learning with eligibility traces: A survey. The Journal of Machine Learning Research, 15(1):289–333.
Hallak, A., Tamar, A., Munos, R., and Mannor, S. (2015). Generalized emphatic temporal difference learning: Bias-variance analysis. arXiv:1509.05172.
Harutyunyan, A., Bellemare, M. G., Stepleton, T., and Munos, R. (2016). Q(λ) with off-policy corrections.
Kearns, M. J. and Singh, S. P. (2000). Bias-variance error bounds for temporal difference updates. In Conference on Computational Learning Theory, pages 142–147.
Li, L., Munos, R., and Szepesvari, C. (2015). Toward minimax off-policy value estimation. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS).
Lin, L. (1993). Scaling up reinforcement learning for robot control. In Machine Learning: Proceedings of the Tenth International Conference, pages 182–189.
Mahmood, A. R. and Sutton, R. S. (2015). Off-policy learning based on weighted importance sampling with linear computational complexity. In Conference on Uncertainty in Artificial Intelligence.
Mahmood, A. R., Yu, H., White, M., and Sutton, R. S. (2015). Emphatic temporal-difference learning. arXiv:1507.01569.
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K. (2016). Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540):529–533.
Precup, D., Sutton, R. S., and Dasgupta, S. (2001). Off-policy temporal-difference learning with function approximation. In International Conference on Machine Learning, pages 417–424.
Precup, D., Sutton, R. S., and Singh, S. (2000). Eligibility traces for off-policy policy evaluation. In Proceedings of the Seventeenth International Conference on Machine Learning.
Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA.
Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2016). Prioritized experience replay. In International Conference on Learning Representations.
Singh, S., Jaakkola, T., Littman, M. L., and Szepesvári, C. (2000). Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308.
Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. Cambridge Univ Press.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44.
Sutton, R. S. (1996). Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems 8.
Tsitsiklis, J. N. (2003). On the convergence of optimistic policy iteration. Journal of Machine Learning Research, 3:59–72.
Watkins, C. J. C. H. (1989). Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, UK.
Hierarchical Object Representation for Open-Ended
Object Category Learning and Recognition
S. Hamidreza Kasaei, Ana Maria Tomé, Luís Seabra Lopes
IEETA - Instituto de Engenharia Electrónica e Telemática de Aveiro
University of Aveiro, Aveiro, 3810-193, Portugal
{seyed.hamidreza, ana, lsl}@ua.pt
Abstract
Most robots lack the ability to learn new objects from past experiences. To migrate
a robot to a new environment, one must often completely re-generate the knowledge base that it is running with. Since in open-ended domains the set of categories to
be learned is not predefined, it is not feasible to assume that one can pre-program
all object categories required by robots. Therefore, autonomous robots must have
the ability to continuously execute learning and recognition in a concurrent and
interleaved fashion. This paper proposes an open-ended 3D object recognition
system which concurrently learns both the object categories and the statistical
features for encoding objects. In particular, we propose an extension of Latent
Dirichlet Allocation to learn structural semantic features (i.e. topics) from low-level
feature co-occurrences for each category independently. Moreover, topics in each
category are discovered in an unsupervised fashion and are updated incrementally
using new object views. The approach contains similarities with the organization of
the visual cortex and builds a hierarchy of increasingly sophisticated representations.
Results show the strong performance of this approach on different types of
objects. Moreover, this system demonstrates the capability of learning from few
training examples and competes with state-of-the-art systems.
1 Introduction
Open-ended learning theory in cognitive psychology has been a topic of considerable interest for many
researchers. The general principle is that humans learn to recognize object categories ceaselessly
over time. This ability allows them to adapt to new environments, by enhancing their knowledge
from the accumulation of experiences and the conceptualization of new object categories [1]. In
humans there is evidence of hierarchical models for object recognition in cortex [2]. Moreover,
in humans object recognition skills and the underlying capabilities are developed concurrently [2].
In hierarchical recognition theories, the human sequentially processes information about the target
object leading to the recognition result. This begins with lower level cortical processors such as the
elementary visual cortex and goes "up" to the inferotemporal cortex (IT), where recognition occurs.
Taking this as inspiration, an autonomous robot will process visual information continuously, and
perform learning and recognition concurrently. In other words, apart from learning from a batch of
labelled training data, the robot should continuously update and learn new object categories while
working in the environment in an open-ended manner. In this paper, "open-ended" implies that the set
of object categories to be learned is not known in advance. The training instances are extracted from
on-line experiences of a robot, and thus become gradually available over time, rather than completely
available at the beginning of the learning process.
Classical object recognition systems are often designed for static environments, i.e. training (offline) and testing (online) are two separate phases. If limited training data is used, this might lead to non-discriminative object representations and, as a consequence, to poor object recognition performance. Therefore, building a discriminative object representation is a challenging step to improve object recognition performance. Moreover, time and memory efficiency is also important. Comparing 3D objects directly based on their local features is computationally expensive.
Topic modelling is suitable for open-ended learning because it not only provides short object descriptions (thus optimizing memory), but also enables efficient processing of large collections.
[Figure 1: The proposed multi-layer object representation being tested on a service robot. It consists of five layers of hierarchy including feature layer, BoW layer, topic layer, object view layer and category layer.]
This paper proposes a 3D object recognition system capable of learning both object categories as well as the topics used to encode them, concurrently and in an open-ended manner. We propose an extension of Latent Dirichlet Allocation to learn incrementally topics for each category independently. Moreover, topics in each category are discovered in an unsupervised fashion and updated incrementally using new object views. As depicted in Fig.1, the approach is designed to be used by a service robot working in a domestic environment. Fig.1(left) shows a PR2 robot looking at some objects on the table. Fig.1(right) shows the point cloud of the scene obtained through the robot's Kinect and the used representations. Tabletop objects are tracked (signed by different
colors) and processed through a hierarchy of five layers. For instance, to describe an object view,
in the feature layer, a spin-image shape descriptor [3] is used to represent the local shapes of the
object in different key points; afterwards, in the Bag-of-Words (BoW) layer, the given object view is
described by histograms of local shape features, as defined in Bag-of-Words models; in the topic layer,
each topic is defined as a discrete distribution over visual words and each object view is described as
a random mixture over latent topics of the category and stores them into the memory (view layer).
Finally, the category model is updated by adding the obtained representation (category layer).
The remainder of this paper is organized as follows. In Section 2, we discuss related work. Section 3 provides a system overview. The methodology for constructing the visual words dictionary is presented in Section 4. Section 5 describes the proposed object representation. Object category learning and recognition are then explained in Section 6. Evaluation of the proposed system is presented in Section 7.
Finally, conclusions are presented and future research is discussed.
2 Related work
One of the important tasks in the field of assistive and service robots is to achieve human-like object
category learning and recognition. Riesenhuber and Poggio [2] proposed a hierarchical approach for
object recognition consistent with physiological data, in which objects are modelled in a hierarchy of
increasingly sophisticated representations.
Sivic et al. [4] proposed an approach to discover objects in images using Probabilistic Latent Semantic
Indexing (pLSI) modelling [5]. Blei et al. [6] argued that the pLSI is incomplete in that it provides
no probabilistic model at the level of documents. They extended the pLSI model calling the approach
Latent Dirichlet Allocation (LDA). Similar to pLSI and LDA, we discover topics in an unsupervised
fashion. Unlike our approach in this paper, pLSI and LDA do not incorporate class information.
Several works have been presented to incorporate a class label in the generative model [7][8][9].
Blei et al. [7] extend LDA and proposed Supervised LDA (sLDA). The sLDA was first used for
supervised text prediction. Later, Wang et al. [8] extended sLDA to classification problems. Another
popular extension of LDA is the classLDA (cLDA) [9]. Similar to our approach, the only supervision
used by sLDA and cLDA is the category label of each training object. However, there are two main
differences. First, the learned topics in sLDA and cLDA are shared among all categories, while we
propose to learn specific topics per category. Second, the sLDA and cLDA approaches follow a
standard train-and-test procedure (i.e. set of classes, train and test data are known or available in
2
advance), our approach can incrementally update topics using new observations and the set of classes
is continuously growing. There are some topic-supervised approaches, e.g. Labeled LDA [10] and
semiLDA [11], that consider class labels for topics. On the one hand, these approaches need tens of hours
of manual annotation. On the other hand, a human cannot provide a specific category label for a 3D
local shape description (e.g. a spin-image [3]).
There are some LDA approaches that support incremental learning of object categories. The difference
between incremental and open-ended learning is that the set of classes is predefined in incremental
learning, while in open-ended learning the set of classes is continuously growing. Banerjee et al. [12]
proposed online LDA (o-LDA), a simple modification of the batch collapsed Gibbs sampler.
The o-LDA first applies the batch Gibbs sampler to the full dataset and then samples new topics for
each newly observed word using information observed so far. Canini et al. [13] extended o-LDA and
proposed an incremental Gibbs sampler for LDA (here referred to as I-LDA). The I-LDA does not
need a batch initialization phase like o-LDA. In o-LDA and I-LDA, the number of categories is fixed,
while in our approach the number of categories is growing. Moreover, o-LDA and I-LDA are used to
discover topics shared among all categories, while our approach is used to discover specific topics
per category.
Currently, a popular approach in object recognition is deep learning. However, there are several limitations to using Deep Neural Networks (DNNs) in open-ended domains. Deep networks are incremental
by nature but not open-ended, since the inclusion of novel categories enforces a restructuring of the
topology of the network. Moreover, DNNs usually need a lot of training data and long training times
to obtain an acceptable accuracy. Schwarz et al. [14] used a DNN for 3D object category learning. They
clearly showed that the performance of the DNN degrades when the size of the dataset is reduced.
3 System overview
The main motivation of this work is to achieve a multi-layered object representation that builds an
increasingly complex object representation (see Fig. 1). In particular, a statistical model is used to
extract structural semantic features from low-level feature co-occurrences. The basic idea is that each
object view is described as a random mixture over a set of latent topics, and each topic is defined as
a discrete distribution over visual words (i.e. local shape features). It must be pointed out that we
are using shape features rather than semantic properties to encode the statistical structure of object
categories [15]. It is easier to explain the details using an example. We start by selecting a category
label, for example Mug. To represent a new instance of Mug, a distribution over Mug topics is drawn
that will specify which intermediate topics should be selected for generating each visual word of
the object. According to this distribution, a particular topic is selected out of the mixture of possible
topics of the Mug category for generating each visual word in the object. For instance, a Mug usually
has a handle, and a ?handle? topic refers to some visual words that occur frequently together in
handles. The process of drawing both the topic and visual word is repeated several times to choose a
set of visual words that would construct a Mug. We use statistical inference techniques for inverting
this process to automatically find out a set of topics for each category from a collection of instances.
In other words, we try to learn a model for each category (a set of latent variables) that explains how
each object obtains its visual words. In our approach, the characteristics of surfaces belonging to
objects are described by local shape features called spin-images [3].
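To make this generative story concrete, the following is a minimal sketch of sampling one object view from a category's topic model (standard LDA; the helper name and all sizes are illustrative, not taken from the authors' implementation):

```python
import numpy as np

def generate_object_view(phi, alpha, n_words, rng):
    """Sample the visual words of one object view from a category's topics.

    phi   : (K, V) word-probability matrix, one row per topic of this category.
    alpha : Dirichlet prior on the per-view topic mixture theta.
    """
    K, V = phi.shape
    theta = rng.dirichlet(np.full(K, alpha))          # topic mixture for this view
    topics = rng.choice(K, size=n_words, p=theta)     # one topic per visual word
    words = np.array([rng.choice(V, p=phi[k]) for k in topics])
    return theta, topics, words

# e.g. a "Mug" category with K = 3 topics (say body, handle, rim) over V = 90 words
rng = np.random.default_rng(0)
phi = rng.dirichlet(np.full(90, 0.1), size=3)
theta, z, w = generate_object_view(phi, alpha=1.0, n_words=50, rng=rng)
```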
4 Dictionary construction
Comparing 3D objects based on their local features is computationally expensive. The topic modelling
approach directly addresses this concern. It requires a dictionary with V visual words. Usually, the
dictionary is created via off-line clustering of training data, while in open-ended learning, there is
no training data available at the beginning of the learning process. To cope with this limitation, we
propose that the robot freely explores several scenes and collects several object experiences.
In general, object exploration is a challenging task because of the ill-definition of objects [16]. Since
a system of boolean equations can represent any expression or any algorithm, it is particularly well
suited for encoding the world and object candidates. Similar to Collet's work [16], we have used
boolean algebra based on three logical operators, namely AND (∧), OR (∨) and NOT (¬). A set of
constraints, C, is then defined. Each constraint has been implemented as a function that returns either
true or false (see Table 1).
Table 1: List of used constraints with a short description for each one.

Constraint                                                          Description
Ctable: "is this candidate on a table?"                             The interest object candidate is placed on top of a table.
Ctrack: "is this candidate being tracked?"                          Used to infer whether the segmented object is already being tracked.
Csize: "is this candidate manipulatable?"                           Reject large object candidates.
Cinstructor: "is this candidate part of the instructor's body?"     Reject candidates that belong to the user's body.
Crobot: "is this candidate part of the robot's body?"               Reject candidates that belong to the robot's body.
Cedge: "is this candidate near to the edge of the table?"           Reject candidates that are near the edge of the table.
Ckey_view: "is this candidate a key view?"                          Only key views are stored into Perceptual Memory.
Note that storing all object views while the object is static would lead to unnecessary accumulation
of highly redundant data. Therefore, Ckey_view is used to optimize memory usage and computation
while keeping potentially relevant and distinctive information. An object view is selected as a key
view whenever the tracking of an object is initialized (Ctrack), or when it becomes static again after
being moved. In case hands are detected near the object, storing key views is postponed until the
hands are withdrawn [17]. Using these constraints, boolean expressions, Φ, are built to encode object
candidates for the object exploration and object recognition purposes (see Equations 1 and 2):
Φ_exploration = Ctable ∧ Ctrack ∧ Ckey_view ∧ ¬(Cinstructor ∨ Crobot),    (1)

Φ_recognition = Ctable ∧ Ctrack ∧ ¬(Cinstructor ∨ Crobot ∨ Cedge).    (2)
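As an illustration, the two expressions can be evaluated over segmented candidates along the following lines (a sketch only; the candidate fields are stand-ins for the perception routines listed in Table 1):

```python
# Each candidate is assumed to carry precomputed boolean perception results;
# the field names below are illustrative, not from the authors' system.

def phi_exploration(c):
    # Eq. (1): on table AND tracked AND key view AND NOT (instructor OR robot)
    return (c["on_table"] and c["tracked"] and c["key_view"]
            and not (c["instructor_part"] or c["robot_part"]))

def phi_recognition(c):
    # Eq. (2): on table AND tracked AND NOT (instructor OR robot OR near edge)
    return (c["on_table"] and c["tracked"]
            and not (c["instructor_part"] or c["robot_part"] or c["near_edge"]))

candidate = {"on_table": True, "tracked": True, "key_view": True,
             "instructor_part": False, "robot_part": False, "near_edge": False}
assert phi_exploration(candidate) and phi_recognition(candidate)
```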
The basic perception infrastructure, which is strongly based on the Point Cloud Library (PCL), has
been described in detail in previous publications [18][19]. A table is detected by finding the dominant
plane in the point cloud. This is done using the RANSAC algorithm. The extraction of polygonal
prisms mechanism is used for collecting the points which lie directly above the table. Afterwards, a
Euclidean Cluster Extraction algorithm is used to segment each scene into individual clusters. Every
cluster that satisfies the exploration expression is selected. The output of this object exploration is a
pool of object candidates. Subsequently, to construct a pool of features, spin-images [3] are computed
for the selected points extracted from the pool of object candidates. We computed around 32000
spin-images from the point clouds of the 194 object views. Finally, the dictionary is constructed by
clustering the features using the k-means algorithm. The centers of the V extracted clusters are used
as visual words, w_t (1 ≤ t ≤ V). A video of the robot exploring an environment¹ is available at:
https://youtu.be/MwX3J6aoAX0.
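A sketch of this dictionary construction, together with the bag-of-words assignment used later in the pipeline, is given below (scikit-learn's k-means; the demo data and descriptor dimensionality are placeholders, since the real pool contained about 32000 spin-images):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spin_images = rng.normal(size=(2000, 50))     # stand-in for the descriptor pool

V = 90                                        # dictionary size used in the paper
kmeans = KMeans(n_clusters=V, n_init=10, random_state=0).fit(spin_images)
dictionary = kmeans.cluster_centers_          # the V visual words w_t

def bow_histogram(features, dictionary):
    """Assign each feature to its nearest visual word; return a BoW histogram."""
    d2 = ((features[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    return np.bincount(d2.argmin(axis=1), minlength=len(dictionary))

view_bow = bow_histogram(spin_images[:120], dictionary)   # one object view
```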
5 Object representation
A hierarchical system is presented which follows the organization of the visual cortex and builds
an increasingly complex object representation. Plasticity and learning can occur at all layers, and
certainly at the top-most layers of the hierarchy. In this paper, object view representation in the
feature layer involves two main phases: keypoint extraction and computation of spin images for the
keypoints. For keypoint extraction, a voxelized grid approach is used to obtain a smaller set of points
by taking only the nearest neighbor point for each voxel center. Afterwards, the spin-image descriptor
is used to encode the surrounding shape in each keypoint using the original point cloud (i.e. feature
layer). Subsequently, the spin images go ?up? to the BoW layer where each spin image is assigned
to a visual word by searching for the nearest neighbor in the dictionary. Afterwards, each object is
represented as a set of visual words. The obtained representation is then presented as input to the
topic layer. The LDA model consists of parameters at three levels: category-level parameters
(i.e. φ), which are sampled once in the process of generating a category of objects; object-level
variables (i.e. θ_d), which are sampled once per object; and word-level variables (i.e. z_{d,n} and w_{d,n}),
which are sampled every time a feature is extracted. The variables θ, φ and z are latent variables that
must be inferred. Assume everything is observed and a category label is selected for each object,
i.e. each object belongs to one category. The joint distribution of all hidden and observed variables
for a category is defined as follows:
p^{(c)}(\mathbf{w}, \mathbf{z}, \theta, \phi \mid \alpha, \beta) = \prod_{z=1}^{K} p^{(c)}(\phi_z \mid \beta) \prod_{d=1}^{|c|} p^{(c)}(\theta_d \mid \alpha) \prod_{n=1}^{N} p^{(c)}(z_{d,n} \mid \theta_d) \, p^{(c)}(w_{d,n} \mid z_{d,n}, \phi),    (3)
¹ The ROS bag file used in this video was created by the Knowledge-Based Systems Group, Institute of
Computer Science, University of Osnabrück.
where α and β are Dirichlet prior hyper-parameters that affect the sparsity of the distributions, K
is the number of topics, |c| is the number of known objects in the category c, and N is the number
of words in the object d. Each θ_d represents an instance of category c in topic space as a Cartesian
histogram (i.e. topic layer); w represents an object as a vector of visual words, w = {w_1, w_2, ..., w_N},
where each entry represents one of the V words of the dictionary (i.e. BoW layer); and z is a vector of
topic assignments, where z_i = k means that w_i was generated from the k-th topic. Note that there is a
topic assignment for each word, and φ is a K × V matrix representing the word probabilities for each topic,
where V is the size of the dictionary and φ_{i,j} = p^{(c)}(w_i | z_j). The posterior distribution of the latent
variables given the observed data is computed as follows:
p^{(c)}(\mathbf{z}, \theta, \phi \mid \mathbf{w}, \alpha, \beta) = \frac{p^{(c)}(\mathbf{w}, \mathbf{z}, \theta, \phi \mid \alpha, \beta)}{p^{(c)}(\mathbf{w} \mid \alpha, \beta)}.    (4)
Unfortunately, the denominator of Equation 4 is intractable and cannot be computed exactly. A
collapsed Gibbs sampler is used to solve the inference problem. Since θ and φ can be derived from z,
they are integrated out of the sampling procedure. In this work, an incremental
LDA model is created for each category. Whenever a new training instance is presented, collapsed Gibbs sampling
is employed to update the parameters of the model. The collapsed Gibbs sampler is used to estimate
the probability of topic z_i being assigned to a word w_i, given all other topic assignments to all other
words:
p^{(c)}(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto p^{(c)}(z_i = k \mid \mathbf{z}_{-i}) \cdot p^{(c)}(w_i \mid \mathbf{z}_{-i}, \mathbf{w}_{-i})
\propto \frac{n_{d,k,-i} + \alpha}{\left[\sum_{k=1}^{K} n_{d,k} + \alpha\right] - 1} \cdot \frac{n^{(c)}_{w,k,-i} + \beta}{\sum_{w=1}^{V} n^{(c)}_{w,k} + \beta},    (5)
where z_{-i} denotes all hidden variables except z_i and z = {z_i, z_{-i}}; n_{d,k} is the number of times topic
k is assigned to some visual word in object d, and n^{(c)}_{w,k} is the number of times visual word w is
assigned to topic k. The denominator of p^{(c)}(z_i = k | z_{-i}) is omitted because it does not depend
on z_i. The multinomial parameter sets θ^{(c)} and φ^{(c)} can be estimated using the following equations:

\theta^{(c)}_{k,d} = \frac{n_{d,k} + \alpha}{n_d + K\alpha}, \qquad \phi^{(c)}_{w,k} = \frac{n^{(c)}_{w,k} + \beta}{n^{(c)}_{k} + V\beta},    (6)
where n^{(c)}_k is the number of times a word is assigned to topic k in category c, and n_d is the number of
words in the object d. Since, in this approach, what happens next depends only on the current
state of the system and not on the sequence of previous states, whenever a new object view, θ^{(c)}_d, is
added to the category c, n^{(c)}_k and n^{(c)}_{w,k} are updated incrementally.
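A compact reference implementation of this sampler and the estimators of Equations 5 and 6 is sketched below, assuming the standard collapsed Gibbs formulation for a single category's model (variable names and the demo priors are ours). Incremental updating then amounts to keeping the count arrays and resampling only the words of a newly added view:

```python
import numpy as np

def gibbs_lda(words, doc_ids, K, V, n_docs, alpha=1.0, beta=0.1,
              n_iter=30, rng=np.random.default_rng(0)):
    """Collapsed Gibbs sampling for one category's LDA model (Eq. 5).

    words, doc_ids : parallel arrays; words[i] is a visual-word index and
                     doc_ids[i] the object view (document) it belongs to.
    Returns theta (n_docs, K) and phi (K, V), estimated as in Eq. 6.
    """
    z = rng.integers(K, size=len(words))              # random initial topics
    n_dk = np.zeros((n_docs, K)); n_wk = np.zeros((V, K)); n_k = np.zeros(K)
    for w, d, k in zip(words, doc_ids, z):
        n_dk[d, k] += 1; n_wk[w, k] += 1; n_k[k] += 1

    for _ in range(n_iter):
        for i, (w, d) in enumerate(zip(words, doc_ids)):
            k = z[i]                                  # remove current assignment
            n_dk[d, k] -= 1; n_wk[w, k] -= 1; n_k[k] -= 1
            p = (n_dk[d] + alpha) * (n_wk[w] + beta) / (n_k + V * beta)
            k = rng.choice(K, p=p / p.sum())          # resample topic (Eq. 5)
            z[i] = k
            n_dk[d, k] += 1; n_wk[w, k] += 1; n_k[k] += 1

    theta = (n_dk + alpha) / (n_dk.sum(1, keepdims=True) + K * alpha)   # Eq. 6
    phi = ((n_wk + beta) / (n_k + V * beta)).T
    return theta, phi
```

Note that, as in the text, the document-side denominator is dropped inside the loop because it does not depend on the sampled topic k.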
6 Object category learning and recognition
Whenever a new object view is added to a category [17], the object conceptualizer retrieves the
current model of the category as well as representation of the new object view, and creates a new, or
updates the existing category. To exemplify the strength of object representation, an instance-based
learning approach is used in the current system, i.e. object categories are represented by sets of
known instances. The instance-based approach is used because it is a baseline method for category
representation. However, more advanced approaches, such as Bayesian approaches, can easily be adopted.
An advantage of the instance-based approach is that it facilitates incremental learning in an open-ended
fashion. Similarly, a baseline recognition mechanism, in the form of a nearest neighbour classifier
with a simple thresholding approach, is used to recognize a given object view.
The query object view, O_q, is first represented using the topic distribution of each category, θ^{(c)}_q.
Afterwards, to assess the dissimilarity between the query object and a stored instance of category c,
θ_p, the symmetric Kullback-Leibler divergence, D_{KL}(θ^{(c)}_q, θ_p), is used to measure the difference
between the two distributions. Subsequently, the minimum distance between the query object and all
instances of the category c is taken as the Object-Category Distance, OCD(·):
OCD(\theta^{(c)}_q, c) = \min_{\theta_p \in c} D_{KL}(\theta^{(c)}_q, \theta_p), \qquad c \in \{1, \ldots, C\}.    (7)
Consequently, the query object is classified based on the minimum OCD(·). If, for all categories, the
OCD(·) is larger than a given Classification Threshold (e.g. CT = 0.75), then the object is classified
as unknown; otherwise, it is classified as the category with the highest similarity.
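A minimal sketch of this recognition rule (Equation 7 plus the threshold test) is shown below; the data structures are assumptions, not the authors' code:

```python
import numpy as np

def sym_kl(p, q, eps=1e-10):
    """Symmetric Kullback-Leibler divergence between two topic distributions."""
    p = p + eps; q = q + eps
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

def classify(theta_q_per_cat, category_instances, ct=0.75):
    """theta_q_per_cat[c] : query view in category c's topic space.
    category_instances[c] : stored instance distributions theta_p of c."""
    ocd = {c: min(sym_kl(theta_q_per_cat[c], p) for p in insts)
           for c, insts in category_instances.items()}         # Eq. 7
    best = min(ocd, key=ocd.get)
    return "unknown" if ocd[best] > ct else best
```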
7 Experimental results
The proposed approach was evaluated using a standard cross-validation protocol as well as an
open-ended protocol. We also report on a demonstration of the system.
7.1 Off-line evaluation
An object dataset has been used [18], which contains 339 views of 10 categories of objects. The
system has five different parameters that must be well selected to provide a good balance between
recognition performance and memory usage. To examine the performance of different configurations
of the proposed approach, 10-fold cross-validation has been used. A total of 180 experiments were
performed for different values of five parameters of the system, namely the voxel size (VS), which
determines the number of keypoints extracted from each object view, the image width (IW) and
support length (SL) of spin images, the dictionary size (DS) and the number of topics (NT). Results
are presented in Table 2. The parameter values that obtained the best average accuracy were selected as
the default configuration: VS = 0.03, IW = 4, SL = 0.05, DS = 90 and NT = 30. In all experiments,
the number of iterations for Gibbs sampling was 30, and the α and β parameters were set to 1 and 0.1,
respectively. The accuracy of the proposed system with the default configuration was 0.87. Therefore,
this configuration displays a good balance between recognition performance and memory usage. The
remaining results were obtained using this configuration.
Table 3: Object recognition performance

Representation                 Accuracy
Feature Layer                  0.12
BoW Layer                      0.79
Topic Layer (shared topics)    0.79
Topic Layer (our approach)     0.87

The accuracy of the system in each layer has been calculated individually. For comparison, the
accuracy of a topic layer with topics shared among all categories is also computed. Results are presented
in Table 3. One important observation is that the overall performance of the recognition system based
on topic modelling is promising and the proposed representation is capable of providing a distinctive
representation for the given object. Moreover, it
was observed that the discriminative power of the proposed representation was better than that of the other
layers. In addition, independent topics for each category provide a better representation than shared
topics for all categories. Furthermore, it has been observed that the discriminative power of shared
topics depends on the order of introduction of categories.
The accuracy of object recognition based on pure shape features (i.e. feature layer) is very low. The
BoW representation obtains an acceptable performance. The topic layer provides a good balance
between memory usage and descriptiveness with 30 floats (i.e. NT=30). The length of the BoW
layer is around three times larger than that of the topic layer. The feature layer is
the least compact representation. These results show that the hierarchical object representation builds an
increasingly complex representation.
7.2 Open-ended evaluation
Off-line evaluation methodologies (e.g. k-fold cross-validation) are not well suited to evaluating
open-ended learning systems, because they do not account for the simultaneous nature of learning
and recognition. Those methodologies imply that the set of categories must be predefined. An
evaluation protocol for open-ended learning systems was proposed in [20]. The idea is to emulate
the interactions of a recognition system with the surrounding environment over long periods of time.
A simulated teacher was developed to follow the evaluation protocol and autonomously interact
with the recognition system using three basic actions including: teach, for teaching a new object
category; ask, to ask the system what is the category of an object view; and correct, for providing
Table 2: Object recognition performance for different parameters

Parameter            Values : Avg. Accuracy (%)
VS (m)               0.03: 85    0.04: 81
IW (bins)            4: 83       8: 83
SL (m)               0.04: 82    0.05: 83    0.06: 83
DS (visual words)    50: 82   60: 82   70: 83   80: 84   90: 84
NT                   30: 84   40: 83   50: 82
Figure 2: Evolution of accuracy vs. number of question/correction iterations in the first 200 iterations of the
third experiment. Vertical red lines and labels indicate when and which categories are introduced to the system.
corrective feedback, i.e. the ground truth label of a misclassified object view. The idea is that, for each
newly taught category, the simulated teacher repeatedly picks unseen object views of the currently
known categories from a dataset and presents them to the system. It progressively estimates the
recognition accuracy of the system and, in case this accuracy exceeds a given threshold (marked by
the horizontal line in Fig.2), introduces an additional object category (marked by the vertical lines
and labels in Fig.2). This way, the system is trained, and at the same time the accuracy of the system
is continuously estimated. The simulated teacher must be connected to an object dataset. In this work,
the simulated teacher was connected to the largest available dataset, namely the RGB-D Object Dataset,
consisting of 250,000 views of 300 common household objects organized into 51 categories [21].
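A sketch of this protocol is given below; the system and dataset interfaces are trivial stubs standing in for the real recognizer and the RGB-D dataset, and the accuracy window and threshold values are illustrative:

```python
import random

class _Dataset:                                  # trivial stubs, for illustration only
    def sample(self, cat): return cat            # a "view" is just its label here

class _System:
    def __init__(self): self.mem = set()
    def teach(self, cat, view): self.mem.add(cat)
    def classify(self, view): return view if view in self.mem else "unknown"

def run_protocol(system, dataset, categories, threshold=0.67, window=10):
    """Emulate the teach/ask/correct interactions of the protocol in [20]."""
    known, results = [], []
    for cat in categories:
        system.teach(cat, dataset.sample(cat))   # teach: introduce a new category
        known.append(cat)
        while True:                              # question/correction loop
            c = random.choice(known)
            view = dataset.sample(c)
            ok = system.classify(view) == c      # ask
            results.append(ok)
            if not ok:
                system.teach(c, view)            # correct: provide the true label
            recent = results[-window:]
            if len(recent) == window and sum(recent) / window > threshold:
                break                            # accuracy reached: next category
    return len(known), sum(results) / len(results)   # TLC and GCA

tlc, gca = run_protocol(_System(), _Dataset(), ["mug", "vase", "dish"])
```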
Table 4: Summary of experiments.

EXP#    #QCI    #TLC    #AIC     GCA (%)    APA (%)
1       1740    39      18.38    65         71
2       803     30      11.07    69         79
3       1099    35      13.20    67         77
4       1518    38      16.29    66         73
5       1579    42      15.12    67         72

Since the performance of an open-ended learning system is not limited to the object recognition
accuracy, when an experiment is carried out, learning performance is evaluated using three distinct
measures, including: (i) the number of learned categories at the end of an experiment
(TLC), an indicator of How much does it learn?; (ii) The number of question / correction iterations
(QCI) required to learn those categories and the average number of stored instances per category
(AIC), indicators of How fast does it learn? (see Fig.3 (right)); (iii) Global classification accuracy
(GCA), an accuracy computed using all predictions in a complete experiment, and the average
protocol accuracy (APA), indicators of How well does it learn? (see Fig. 3 (left)). Since the order in which
the categories are introduced may have an effect on the performance of the system, five experiments were
carried out in which categories were introduced in random sequences. Results are reported in Table 4.
Figure 2 shows the performance of the system in the initial 200 iterations of the third experiment.
By comparing all experiments, it is visible that in the fifth experiment the system learned more
categories than in the other experiments. Figure 3 (left) shows the global classification accuracy obtained
by the proposed approach as a function of the number of learned categories. In experiments 1, 4, 5,
the accuracy first decreases, and then starts going up slightly again as more categories are introduced.
This is expected, since the growing number of categories known by the system makes the classification task
more difficult. However, as the number of learned categories increases, the number of instances
per category also increases, which augments the category models (topics) and therefore improves the performance of the system. Fig. 3 (right) gives a measure of how fast the learning occurred in each of
the experiments and shows the number of question/correction iterations required to learn a certain
number of categories. Our approach learned faster than that of Schwarz et al. [14], i.e. our
approach requires far fewer examples than Schwarz's work. Furthermore, we achieved an accuracy of
around 75% while storing fewer than 20 instances per category (see Table 4), whereas Schwarz et al. [14]
stored more than 1000 training instances per category (see Fig. 8 in [14]). In addition, they clearly
showed that the performance of the DNN degrades when the size of the dataset is reduced.
Figure 3: System performance during simulated user experiments.
Figure 4: Three snapshots showing object recognition results in two scenarios: the first two snapshots show that the
proposed system supports (a) classical learning from a batch of labelled training data and (b) open-ended learning
from on-line experiences. Snapshot (c) shows object recognition results on a scene of the Washington scene dataset.
7.3 System demonstration
To show the strength of the object representation, a real demonstration was performed, in which the
proposed approach was integrated into the object perception system presented in [18]. In this
demonstration a table is in front of a robot and two users interact with the system. Initially, the system
only had prior knowledge about the Vase and Dish categories, learned from batch data (i.e. set of
observations with ground truth labels), and there is no information about other categories (i.e. Mug,
Bottle, Spoon). Throughout this session, the system must be able to recognize instances of learned
categories and incrementally learn new object categories. Figure 4 illustrates the behaviour of the
system:
(a) The instructor puts object TID6 (a Mug) on the table. It is classified as Unknown because mugs
are not yet known to the system; the instructor then labels TID6 as a Mug. The system conceptualizes Mug
and TID6 is correctly recognized. The instructor places a Vase on the table. The system has
learned Vase category from batch data, therefore, the Vase is properly recognized (Fig.4 (a)).
(b) Later, another Mug is placed on the table. This particular Mug had not been previously seen, but
the system can recognize it, because the Mug category was previously taught (Fig.4 (b)).
This demonstration shows that the system is capable of using prior knowledge to recognize new
objects in the scene and learn about new object categories in an open-ended fashion. A video of this
demonstration is available at: https://youtu.be/J0QOc_Ifde4.
Another demonstration has been performed using Washington RGB-D Scenes Dataset v2. This dataset
consists of 14 scenes containing a subset of the objects in the RGB-D Object Dataset, including
bowls, caps, mugs, soda cans and cereal boxes. Initially, the system had no prior knowledge. The
first four object categories are introduced to the system using the first scene and the system conceptualizes
those categories. The system is then tested using the second scene of the dataset; it can recognize
all objects except cereal boxes, because this category was not previously taught. The instructor
provided corrective feedback and the system conceptualized the cereal boxes category. Afterwards,
all objects are classified correctly in all 12 remaining scenes (Fig.4 (c)). This evaluation illustrates
the process of acquiring categories in an open-ended fashion. A video of this demonstration is online
at: https://youtu.be/pe29DYNolBE.
8 Conclusion
This paper presented a multi-layered object representation to enhance concurrent 3D object category
learning and recognition. In this work, for optimizing the recognition process and memory usage,
each object view was hierarchically described as a random mixture over a set of latent topics, and
each topic was defined as a discrete distribution over visual words. This paper focused in detail on
unsupervised object exploration to construct a dictionary and concentrated on supervised open-ended
object category learning using an extension of topic modelling. We transform objects from bag-of-words
space into a local semantic space and use a distribution-over-distributions representation, which provides
a powerful representation and deals with the semantic gap between low-level features and
high-level concepts. Results showed that the proposed system supports classical learning from a
batch of labelled training data and open-ended learning from the actual experiences of a robot.
Acknowledgements
This work was funded by National Funds through FCT project PEst-OE/EEI/UI0127/2016 and FCT
scholarship SFRH/BD/94183/2013.
References

[1] Sungmoon Jeong and Minho Lee. Adaptive object recognition model using incremental feature representation and hierarchical classification. Neural Networks, 25:130–140, 2012.

[2] Maximilian Riesenhuber and Tomaso Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.

[3] A. E. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433–449, May 1999.

[4] Josef Sivic, Bryan C. Russell, Alexei Efros, Andrew Zisserman, William T. Freeman, et al. Discovering objects and their location in images. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 1, pages 370–377. IEEE, 2005.

[5] Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–57. ACM, 1999.

[6] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.

[7] Jon D. Mcauliffe and David M. Blei. Supervised topic models. In Advances in Neural Information Processing Systems, pages 121–128, 2008.

[8] Chong Wang, David Blei, and Fei-Fei Li. Simultaneous image classification and annotation. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1903–1910. IEEE, 2009.

[9] Li Fei-Fei and Pietro Perona. A Bayesian hierarchical model for learning natural scene categories. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 524–531. IEEE, 2005.

[10] Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, pages 248–256, 2009.

[11] Yang Wang, Payam Sabzmeydani, and Greg Mori. Semi-latent Dirichlet allocation: A hierarchical model for human action recognition. In Human Motion - Understanding, Modeling, Capture and Animation, pages 240–254. Springer, 2007.

[12] Arindam Banerjee and Sugato Basu. Topic models over text streams: A study of batch and online unsupervised learning. In SDM, volume 7, pages 437–442. SIAM, 2007.

[13] Kevin R. Canini, Lei Shi, and Thomas L. Griffiths. Online inference of topics with latent Dirichlet allocation. In International Conference on Artificial Intelligence and Statistics, pages 65–72, 2009.

[14] Max Schwarz, Hannes Schulz, and Sven Behnke. RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 1329–1335. IEEE, 2015.

[15] Jiye G. Kim, Irving Biederman, Mark D. Lescroart, and Kenneth J. Hayworth. Adaptation to objects in the lateral occipital complex (LOC): shape or semantics? Vision Research, 49(18):2297–2305, 2009.

[16] Alvaro Collet, Bo Xiong, Corina Gurau, Martial Hebert, and Siddhartha S. Srinivasa. HerbDisc: Towards lifelong robotic object discovery. The International Journal of Robotics Research, 34(1):3–25, 2015.

[17] Gi Hyun Lim, M. Oliveira, V. Mokhtari, S. Hamidreza Kasaei, A. Chauhan, L. Seabra Lopes, and A. M. Tomé. Interactive teaching and experience extraction for learning about objects and robot activities. In Robot and Human Interactive Communication, The 23rd IEEE International Symposium on, 2014.

[18] S. Hamidreza Kasaei, Miguel Oliveira, Gi Hyun Lim, Luís Seabra Lopes, and Ana Maria Tomé. Interactive open-ended learning for 3D object recognition: An approach and experiments. Journal of Intelligent & Robotic Systems, 80(3):537–553, 2015.

[19] Miguel Oliveira, Luís Seabra Lopes, Gi Hyun Lim, S. Hamidreza Kasaei, Ana Maria Tomé, and Aneesh Chauhan. 3D object perception and perceptual learning in the RACE project. Robotics and Autonomous Systems, 75, Part B:614–626, 2016.

[20] Aneesh Chauhan and Luís Seabra Lopes. Using spoken words to guide open-ended category formation. Cognitive Processing, 12(4):341–354, 2011.

[21] K. Lai, Liefeng Bo, Xiaofeng Ren, and D. Fox. A large-scale hierarchical multi-view RGB-D object dataset. In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages 1817–1824, 2011.
Generic Analog Neural Computation - The EPSILON Chip

Stephen Churcher
Dept. of Elec. Engineering
University of Edinburgh
King's Buildings
Edinburgh, EH9 3JL

Donald J. Baxter
Dept. of Elec. Engineering
University of Edinburgh
King's Buildings
Edinburgh, EH9 3JL

Alister Hamilton
Dept. of Elec. Engineering
University of Edinburgh
King's Buildings
Edinburgh, EH9 3JL

H. Martin Reekie
as above

Alan F. Murray
as above
Abstract

An analog CMOS VLSI neural processing chip has been designed and fabricated. The device employs "pulse-stream" neural state signalling, and is capable of computing some 360 million synaptic connections per second. In addition to basic characterisation results, the performance of the chip in solving "real-world" problems is also demonstrated.
1 INTRODUCTION

Inspired by biology, and borne out of a desire to perform analogue computation with fundamentally digital fabrication processes, the so-called "pulse-stream" arithmetic system has been steadily evolved and improved since its inception in 1986 (Murray 1990a, Murray 1989a). In addition to this continuous development at Edinburgh, many other research groups around the world (most notably Meador et al. (Meador 1990a)) have experimented with their own pulse-firing neural circuits.

In pulsed implementations, each neural state is represented by some variable attribute (e.g. the width of fixed-frequency pulses, or the rate of fixed-width pulses) of a train (or "stream") of pulses. The neuron design therefore reduces to a form of oscillator. Each neuron is fed by a column of synapses, which multiply incoming neural states by the synaptic weights. In contrast with the original circuits of Murray and Smith (Murray 1987a), the synapse design which will be discussed herein utilises analog circuit techniques to perform the multiplication of neural state by synaptic weight.

This paper describes the Edinburgh Pulse-Stream Implementation of a Learning Oriented Network (EPSILON) chip. EPSILON was developed as a flexible neural processor, capable of addressing a variety of applications. The main design criteria were as follows:

• That it be large enough to be of use in practical problems.
• It should be capable of implementing networks of arbitrary size and architecture.
• It must be able to act as both a "slave" accelerator to a conventional computer, and as an "autonomous" processor.

As will be seen, these constraints resulted in a chip which could realise only a single layer of synaptic weights, but which could be cascaded to form large, useful networks for solving real-world problems.
The remaining sections of this paper describe the attributes of pulse-coded neural systems in general, before detailing the circuits which were employed on EPSILON. Finally, results from a vowel recognition application are presented, in order to illustrate the performance of EPSILON when applied to real tasks.
2 PULSE CODED NEURAL SYSTEMS

As already mentioned, EPSILON is a pulse-coded analog neural processing chip. In such implementations, neural states are encoded as digital pulses. The states themselves may then be represented either by varying the width of the pulses (pulse width modulation - PWM), or by varying the rate of the pulses (pulse frequency modulation - PFM). The arguments for using pulses in this way are strong. Firstly, they provide a very effective and robust method for communicating states both on- and between-chip, since pulses are extremely resistant to noise. Secondly, the use of pulses to represent states renders interfacing to digital circuits and computer peripherals straightforward. Finally, pulsed signalling leads to simplification of arithmetic circuits (i.e. synapses), resulting in much higher inter-connection densities.

Unfortunately, pulse-based systems do have drawbacks. In common with all analog circuits, the synaptic computing elements have limited precision (usually equivalent to about 7 bits), and their performance is subject to the vagaries of fabrication process variations. This results in a situation whereby supposedly "matched" circuits vary markedly in their characteristics. Furthermore, the switching which is inherent in any pulsed circuit results in increased levels of system noise, most usually in the form of power supply transients. An additional problem with pulse frequency modulation (PFM) systems is that computation rates are dependent on the data: this is an important consideration in speed-critical applications.
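As a toy illustration of the two codes (all values and time units are illustrative, not EPSILON's actual signalling parameters):

```python
def pwm_pulse(state, period_us=20.0):
    """PWM: the state in [0, 1] sets the width of one fixed-frequency pulse."""
    high = state * period_us
    return high, period_us - high                    # (high time, low time) in us

def pfm_train(state, max_rate_hz=1e6, duration_us=100.0):
    """PFM: the state sets the rate of fixed-width pulses; note that low states
    mean few pulses, so evaluation time depends on the data."""
    if state <= 0:
        return []
    spacing = 1e6 / (state * max_rate_hz)            # inter-pulse spacing in us
    return [i * spacing for i in range(int(duration_us / spacing) + 1)
            if i * spacing < duration_us]
```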
3 THE EPSILON DESIGN

This section describes the circuits which were used in the EPSILON design. The operating principles of each circuit are discussed, and characterisation results presented. In accordance with the drawbacks mentioned in the previous section, all circuits were designed to be tolerant to noise and process variations, to cause as little noise as possible themselves, and to be easy to "set up" in practice. Finally, the specification of the EPSILON chip is presented.
3.1 SYNAPSE
The synapse design was based on the standard transconductance multiplier circuit, which had previously been the basis for monolithic analogue transversal filters for use in signal processing applications (Denyer 1981a). Such multipliers use MOS transistors in their linear region of operation to generate output currents proportional to a product of two input voltages. This concept was adapted for use in pulsed neural networks by fixing one of the input voltages, and using a neural state to gate the output current. In this manner, the synaptic weight controls the magnitude of the output current, which is multiplied by the incoming neural pulses. The resultant charge packets are subsequently integrated to yield the total post-synaptic activity voltage.

Figure 1 shows the basic pulsed multiplier cell, where M1 and M2 form the transconductance multiplier, and M3 is the output pulse transistor. By ensuring that the drain-source voltages for M1 and M2 are the same and constant (the differential amplifier and transistors M4 and M5 are used to satisfy this constraint), non-linearities in the transistor responses can be cancelled out, such that I_OUT is linearly dependent on the difference of V_GS1 and V_GS2 (Murray 1992a). Multiplication is achieved by pulsing this current by the neural state, V_j. An "instantaneous" representation of the aggregated post-synaptic activity is given by the output voltage, V_OUT: this must subsequently be integrated in order to provide an activity input to a neuron.
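A behavioural (not circuit-level) sketch of a column of such synapses follows; the transconductance, zero-point and integration-capacitance values are illustrative:

```python
def column_activity(weights_v, pulse_widths_us, gm=1e-6, v_zero=3.75, c_int=1e-12):
    """Integrate the charge packets from one column of pulsed synapses.

    weights_v[i]       : synaptic weight voltage of synapse i.
    pulse_widths_us[i] : incoming neural state as a pulse width (microseconds).
    """
    charge = sum(gm * (w - v_zero) * (t * 1e-6)   # I = gm * (Vw - V0), Q = I * t
                 for w, t in zip(weights_v, pulse_widths_us))
    return charge / c_int                         # voltage change on the integrator

dv = column_activity([4.4, 3.1, 3.75], [10.0, 20.0, 5.0])
```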
Figure 1: Transconductance Multiplier Synapse
Figure 2: Synapse Characterisation Results
Results from characterisation tests of the synapse are presented in Figure 2, which shows output state against input state, for different synaptic weight voltages. As seen from the Figure, the linearity of the synapses, with respect to input state, is very high. The variation of synapse response with synaptic weight voltage is also fairly uniform. The graphs depict mean performance over all the synaptic columns in all the chips tested. The associated standard deviations were more or less constant, representing a variation of approximately ± 300 ns in the values of the output pulse widths. The effects of intra- and inter-chip process mismatches would therefore seem to be well contained by the circuit design.

The "zero point" in the synaptic weight range was set at 3.75 V and, as can be seen from the Figure, each graph shows an offset problem when the input neural state is zero. This was attributable to an imbalance in the operating conditions of the transistors in the synapse, induced by the non-ideal nature of the power supplies (i.e. the non-zero sheet resistance of the power supply tracks), resulting in an offset in the input voltage to the post-synaptic integrator. This problem is easily obviated in practice, by employing three synapses per column to cancel the offset.
3.2 NEURONS
In order to reflect the diversity of neural network forms and possible applications, two different neuron designs were included on the EPSILON chip. The first, a synchronous pulse width modulation neuron, was designed with vision applications in mind. This circuit can guarantee network computation times, thereby eliminating the data dependency inherent in pulse frequency systems. The second neuron design used asynchronous pulse frequency modulation; the asynchronous nature of these circuits is advantageous in feedback and recurrent neural architectures, where temporal characteristics are important. As with the synapse, both circuits were designed to minimise transient noise injection, and to be tolerant of process variations.
3.2.1 Pulse Width Modulation
As already stated, this system retains all the advantages of using pulses for communication and calculation, whilst being able to guarantee a maximum network evaluation time. In the first instance, the main disadvantage with this technique appeared to be its synchronous nature - neurons would all be switching together, causing larger power supply transients than in an asynchronous system. This problem has, however, been circumvented via a "double-sided" pulse modulation scheme, which will be explained more fully later.
Figure 3: Pulse-Width Modulation Neuron
The operation of the pulse-width modulation neuron is illustrated in Figure 3. The neuron itself is nothing more elaborate than a 2-stage comparator, with an inverter output driver stage. The inputs to the circuit are the integrated post-synaptic activity voltage, V_ACT, and a reference voltage, V_RAMP, which is generated off-chip and is globally distributed to all neurons in parallel. As seen from the waveforms in Figure 3, the output of the neuron changes state whenever the reference signal crosses the activity voltage level. An output pulse, which is some function of the input activation, is thus generated. The transfer function is entirely dependent on the shape of the reference signal - when this is generated by a RAM look-up table, the function can become completely arbitrary and hence user programmable. Figure 3 shows the signal which should be applied if a sigmoidal transfer characteristic is desired. Note that the sigmoid signals are "on their sides" - this is because the input (or independent variable) is on the vertical axis rather than the horizontal axis, as would normally be expected. The use of a "double-sided" ramp for the reference signal was alluded to earlier - this mechanism generates a pulse which is symmetrical about the mid-point of the ramp, thereby greatly reducing the likelihood of coincident edges. This edge-asynchronicity obviates the problem of larger switching transients on the power supplies. Furthermore, because the analogue element (i.e. the ramp voltage) is effectively removed from the chip, and the circuit itself merely functions as a digital block, the system is immune to process variations.
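A behavioural sketch of this scheme: because the reference ramp is the desired transfer function "on its side" (i.e. its inverse as a function of time), the comparator's output pulse width becomes a sigmoidal function of the activity voltage. All parameter values here are assumptions:

```python
import numpy as np

def sigmoid_reference(n=1024, v_mid=2.5, temp=0.3, t_half_us=10.0):
    """Reference signal whose comparator crossings yield a tanh transfer
    function: an inverse sigmoid in time, mirrored to make it double-sided."""
    x = np.linspace(-0.999, 0.999, n)                 # avoid arctanh(+/-1)
    half = v_mid + temp * np.arctanh(x)               # rising half of the ramp
    ramp = np.concatenate([half, half[::-1]])         # double-sided, symmetric
    return ramp, 2 * t_half_us / len(ramp)            # samples and dt (us)

def pwm_width(v_act, ramp, dt_us):
    """Output pulse width: the time for which the ramp lies below the activity."""
    return np.count_nonzero(ramp < v_act) * dt_us

ramp, dt = sigmoid_reference()
width = pwm_width(2.8, ramp, dt)                      # ~sigmoidal in v_act
```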
Figure 4: PWM Neuron Performance
Figure 4 shows plots of output state (measured as a percentage of a maximum possible 20 µs pulse) versus input activity voltage, for five different sigmoid "temperatures", averaged over all the neurons on one chip. As can be seen, the fidelity of the sigmoids is extremely high, and it should be noted that all the curves are symmetrical about their mid-points - something which is difficult to achieve using standard analog circuits.
3.2.2 Pulse Frequency Modulation
The second neuron design which was included in EPSILON used pulse frequency encoding of the neural state. Although hampered by data-dependent calculation times, its wholly asynchronous nature makes it ideal for neural network architectures which embody temporal characteristics, i.e. feedback networks and recurrent networks.
Figure 5: Pulse Frequency Modulation Neuron
The neuron design is illustrated in Figure 5, and is basically a Voltage Controlled Oscillator (VCO) with a variable-gain sigmoidal transfer characteristic. Oscillation is achieved via the hysteretic charge and discharge of capacitor C, by the currents I_H and I_L respectively. The output pulse width is constant, and is set by I_H, whilst the inter-pulse spacing (and hence output frequency) is controlled by I_L. I_L itself is determined by the activity voltage, V_ACT, via the differential stage constituted by transistors M3 to M6. It is this latter which gives the VCO its sigmoidal characteristic, and gain variations may be achieved by injecting and removing additional current at appropriate points in this stage (note that the circuitry for this has been omitted from Figure 5, for the sake of clarity).
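A behavioural sketch of the resulting transfer characteristic, assuming a fixed charge current and a tanh-shaped discharge current (all values illustrative):

```python
import numpy as np

def pfm_duty_cycle(v_act, i_h=10e-6, i_max=10e-6, temp=0.5):
    """Duty cycle of the hysteretic VCO for a given activity voltage."""
    i_l = 0.5 * i_max * (1.0 + np.tanh(v_act / temp))  # discharge current (sigmoid)
    t_high = 1.0 / i_h                                 # fixed pulse width  ~ C*dV/I_H
    t_low = 1.0 / max(i_l, 1e-18)                      # inter-pulse spacing ~ C*dV/I_L
    return t_high / (t_high + t_low)                   # saturates below 100%

duty = [pfm_duty_cycle(v) for v in (-1.5, 0.0, 1.5)]   # low, mid, high activity
```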
Figure 6: PFM Neuron Performance
The characterisation results for the VCO are presented in Figure 6. The Figure shows
plots of output percentage duty cycle versus input differential voltage, for different values
of sigmoid gain. Note that the curves are fair approximations to sigmoids, although, in
contrast with the pulse width modulation neuron, they are not symmetrical about their
mid-points. It can also be seen that the range of possible sigmoid gains is smaller than the
range available with the PWM system, although this is not a crucial factor in many applications.
3.3 EPSILON SPECIFICATIONS
The circuits described in the previous section were combined to form the EPSILON chip.
This was subsequently fabricated by European Silicon Structures (ES2) using their
ECPD15 process (i.e. 1.5 µm, double metal, single poly CMOS). As already stated, each chip is
capable of implementing a single layer of synaptic connections, and can accept inputs
as either analog voltages (for direct interfacing to sensors) or as pulses (for communication
with other chips, and with digital systems). The full specification is given in Table 1.
4 APPLICATION - VOWEL RECOGNITION

After the device characterisation experiments had been completed, EPSILON was used to
implement a multi-layer perceptron (MLP) for speech data classification. The MLP had
54 inputs, 27 hidden units, and 11 outputs, and the task was to classify 11 different vowel
sounds spoken by each of 33 speakers. The input vectors were formed by the analog outputs of 54 band-pass filters.
The MLP was initially trained on a SPARC station, using a subset of 22 patterns. Learning
(using the Virtual Targets algorithm, with 0% noise (Murray 1991a)) proceeded until
the maximum bit error in the output vector was ≤ 0.3, at which point the weight set was
Table 1: EPSILON Specifications

No. of State Input Pins        30
No. of Actual State Inputs     120, Muxed in Banks of 30
Input Modes                    Analog, PW, or PF
No. of State Outputs           30, Directly Pinned Out
Output Modes                   PW or PF
No. of Synapses                3600
No. of Weight Load Channels    2
Weight Load Time               3.6 ms
Weight Storage                 Dynamic
Maximum Speed (cps)            360 Mcps
Technology                     1.5 µm, Double Metal CMOS
Die Size                       9.5 mm x 10.1 mm
Maximum Power Dissipation      350 mW
Figure 7: EPSILON Under Training
downloaded to EPSILON. Training was then restarted under the same regime as before
(using the same "stop" criterion), although this time EPSILON was used to evaluate the
"forward pass" phases of the network. Figure 7 shows the evolution of mean square error
with number of epochs during this period; at the end of training, EPSILON could correctly identify all 22 training patterns.

Subsequent to this, 176 "unseen" test patterns were presented to the EPSILON network,
with the result that 65.34% of these vectors were correctly classified. This compared
very favourably with similar generalisation experiments which were carried out on a
SPARC: in this case, the best result obtained was 67.61%.
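A sketch of this chip-in-the-loop arrangement follows; chip_forward stands in for the EPSILON driver, and the simple single-layer delta-rule update is a stand-in for the Virtual Targets algorithm and the full MLP:

```python
import numpy as np

rng = np.random.default_rng(0)

def chip_forward(w, x):
    """Placeholder for one hardware forward pass (one synaptic layer followed
    by sigmoidal neurons); on the real system this runs on the chip."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def train_in_loop(w, X, T, lr=0.1, max_bit_error=0.3, max_epochs=2000):
    for epoch in range(max_epochs):
        Y = np.array([chip_forward(w, x) for x in X])  # hardware evaluates states
        E = T - Y
        if np.abs(E).max() <= max_bit_error:
            return w, epoch                            # stop criterion met
        w = w + lr * (E * Y * (1 - Y)).T @ X           # host computes the update
    return w, max_epochs

X = rng.normal(size=(22, 54))                          # 22 patterns, 54 inputs
T = (rng.random((22, 11)) > 0.5).astype(float)         # 11 binary outputs
w, epochs = train_in_loop(rng.normal(scale=0.1, size=(11, 54)), X, T)
```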
5 CONCLUSIONS

In conclusion, a large analog VLSI neural chip, composed of process-tolerant circuits,
with useful characteristics has been fabricated. Although not a self-learning device, it has
been proved that EPSILON will support learning, and can be applied successfully to real-world
problems. Indeed, when correctly trained, the performance of the EPSILON chip
has been shown to be comparable with that of software simulations on a SPARC station.

Work is currently under way to apply EPSILON to computer vision tasks: more specifically, it will be used to implement an MLP which is capable of recognising regions in
segmented images of natural scenes. Furthermore, the success of the learning experiments has given us sufficient confidence to undertake the development of a self-learning
analog neural chip. It is envisaged that this will employ EPSILON-type circuits, and will
implement the Virtual Targets (Murray 1991a) training algorithm: the design of a small
prototype is currently nearing completion.
Acknowledgements
The authors would like to thank the Science and Engineering Research Council for their
continued funding of this work. In addition, Stephen Churcher and Donald Baxter are
grateful to British Aerospace PLC and Thorn-EMI CRL respectively for sponsorship and
technical support during the course of their PhDs.
Lionel Tarassenko (Dept. of Engineering Science, University of Oxford) must also be
thanked for his invaluable comments, and for supplying the vowel database.
References
Murray 1990a.
A. F. Murray, D. Baxter, Z. Butler, S. Churcher, A. Hamilton, H. M. Reekie, and L. Tarassenko, "Innovations in Pulse Stream Neural VLSI: Arithmetic and Communications", IEEE Workshop on Microelectronics for Neural Networks, Dortmund, pp. 8-15, 1990.
Murray 1989a.
A. F. Murray, "Pulse Arithmetic in VLSI Neural Networks", IEEE MICRO, vol. 9, no. 6, pp. 64-74, 1989.
Meador 1990a.
J. Meador, A. Wu, C. Cole, N. Nintunze, and P. Chintrakulchai, "Programmable Impulse Neural Circuits", IEEE Transactions on Neural Networks, vol. 2, no. 1, pp. 101-109, 1990.
Murray 1987a.
A. F. Murray and A. V. W. Smith, "Asynchronous Arithmetic for VLSI Neural Systems", Electronics Letters, vol. 23, no. 12, pp. 642-643, June 1987.
Denyer 1981a.
P. B. Denyer and J. Mavor, "MOST Transconductance Multipliers for Array Applications", IEE Proc. Pt. I, vol. 128, no. 3, pp. 81-86, June 1981.
Murray 1992a.
A. F. Murray, A. Hamilton, D. J. Baxter, S. Churcher, H. M. Reekie, and L. Tarassenko, "Integrated Pulse-Stream Neural Networks - Results, Issues and Pointers", IEEE Trans. Neural Networks, pp. 385-393, 1992.
Murray 1991a.
A. F. Murray, "Analog VLSI and Multi-Layer Perceptrons - Accuracy, Noise and On-Chip Learning", Proc. Second International Conference on Microelectronics for Neural Networks, Munich (Germany), pp. 27-34, 1991.
Fast Distributed Submodular Cover:
Public-Private Data Summarization
Baharan Mirzasoleiman
ETH Zurich
Morteza Zadimoghaddam
Google Research
Amin Karbasi
Yale University
Abstract
In this paper, we introduce the public-private framework of data summarization
motivated by privacy concerns in personalized recommender systems and online
social services. Such systems have usually access to massive data generated by a
large pool of users. A major fraction of the data is public and is visible to (and
can be used for) all users. However, each user can also contribute some private
data that should not be shared with other users to ensure her privacy. The goal is to
provide a succinct summary of a massive dataset, ideally as small as possible, from
which customized summaries can be built for each user, i.e. it can contain elements
from the public data (for diversity) and users? private data (for personalization).
To formalize the above challenge, we assume that the scoring function according
to which a user evaluates the utility of her summary satisfies submodularity, a
widely used notion in data summarization applications. Thus, we model the data
summarization targeted to each user as an instance of a submodular cover problem.
However, when the data is massive it is infeasible to use the centralized greedy
algorithm to find a customized summary even for a single user. Moreover, for a
large pool of users, it is too time consuming to find such summaries separately. Instead, we develop a fast distributed algorithm for submodular cover, FAST C OVER,
that provides a succinct summary in one shot and for all users. We show that
the solution provided by FAST C OVER is competitive with that of the centralized
algorithm with the number of rounds that is exponentially smaller than state of the
art results. Moreover, we have implemented FAST C OVER with Spark to demonstrate its practical performance on a number of concrete applications, including
personalized location recommendation, personalized movie recommendation, and
dominating set on tens of millions of data points and varying number of users.
1
Introduction
Data summarization, a central challenge in machine learning, is the task of finding a representative
subset of manageable size out of a large dataset. It has found numerous applications, including image
summarization [1], recommender systems [2], scene summarization [3], clustering [4, 5], active set
selection in non-parametric learning [6], and document and corpus summarization [7, 8], to name a
few. A general recipe to obtain a faithful summary is to define a utility/scoring function that measures
coverage and diversity of the selected subset [1]. In many applications, the choice of utility functions
used for summarization exhibit submodularity, a natural diminishing returns property. In words,
submodularity implies that the added value of any given element from the dataset decreases as we
include more data points to the summary. Thus, the data summarization problem can be naturally
reduced to that of a submodular cover problem where the objective is to find the smallest subset
whose utility achieves a desired fraction of the utility provided by the entire dataset.
It is known that the classical greedy algorithm yields a logarithmic factor approximation to the
optimum summary [9]. It starts with an empty set, and at each iteration adds an element with the
maximum added value to the summary selected so far. It is also known that improving upon the
logarithmic approximation ratio is NP-hard [10]. Even though the greedy algorithm produces a
near-optimal solution, it is highly impractical for massive datasets, as sequentially selecting elements
on a single machine is heavily constrained in terms of speed and memory. Hence, in order to solve the
submodular cover problem at scale, we need to make use of MapReduce-style parallel computation
models [11, 12]. The greedy algorithm, due to its sequential nature, is poorly suited for parallelization.
In this paper, we propose a fast distributed algorithm, FASTCOVER, that enables us to solve the more
general problem of covering multiple submodular functions in one run of the algorithm. It relies
on three important ingredients: 1) a reduction from multiple submodular cover problems into a
single instance of a submodular cover problem [13, 14], 2) a randomized filtration mechanism to select
elements with high utility, and 3) a set of carefully chosen threshold functions used for the filtration
mechanism. FASTCOVER also provides a natural trade-off between the number of MapReduce rounds
and the size of the returned summary. It effectively lets us choose between compact summaries (i.e.,
smaller solution size) while running more MapReduce rounds or larger summaries while running
fewer MapReduce rounds.
This setting is motivated by privacy concerns in many modern applications, including personalized
recommender systems, online social services, and the data collected by apps on mobile platforms
[15, 16]. In such applications, users have some control over their own data and can mark some part
of it private (in a slightly more general case, we can assume that users can make part of their data
private to specific groups and public to others). As a result, the dataset consists of public data, shared
among all users, and disjoint sets of private data accessible to the owners only.
We call this more general framework for data summarization, public-private data summarization,
where the private data of one user should not be included in another user's summary (see also [15]).
This model naturally reduces to solving one instance of the submodular cover problem for each
user, as their view of the dataset and the specific utility function specifying users' preferences differ
across users. When the number of users is small, one can solve the public-private data summarization
separately for each user, using the greedy algorithm (for datasets of small size) or the recently
proposed distributed algorithm DISCOVER [12] (for datasets of moderate size). However, when there
are many users or the dataset is massive, none of the prior work truly scales.
We report the performance of FASTCOVER using Spark on concrete applications of the public-private data
summarization problem, including personalized movie recommendation on a dataset containing 2 million
ratings by more than 100K users for 1000 movies, personalized location recommendation based
on 20 users and their collected GPS locations, and finding the dominating set on a social network
containing more than 65 million nodes and 1.8 billion edges. For small to moderately sized datasets, we
compare our results with previous work, namely the classical greedy algorithm and DISCOVER [12]. For
truly large-scale experiments, where the data is big and/or there are many users involved (e.g., movie
recommendation), we cannot run DISCOVER as the number of MapReduce rounds in addition to their
communication costs is prohibitive. In our experiments, we constantly observe that FASTCOVER
provides solutions of size similar to the greedy algorithm (and very often even smaller) with a
number of rounds that is orders of magnitude smaller than DISCOVER. This makes FASTCOVER
the first distributed algorithm that solves the public-private data summarization problem fast and at scale.
2
Problem Statement: Public-Private Data Summarization
In this section, we formally define the public-private model of data summarization¹. Here, we
consider a potentially large dataset (sometimes called the universe of items) V of size n and a set of
users U. The dataset consists of public data V_P and disjoint subsets of private data V_u for each user
u ∈ U. The public-private aspect of data summarization is realized in two dimensions. First, each
user u ∈ U has her own utility function f_u(S) according to which she scores the value of a subset
S ⊆ V. Throughout this paper we assume that f_u(·) is integer-valued², non-negative, and monotone

¹ All the results are applicable to submodular cover as a special case where there is only public data.
² For the submodular cover problem it is a standard assumption that the function is integer-valued for the theoretical results to hold. In applications where this assumption is not satisfied, either we can appropriately discretize and rescale the function, or instead of achieving the desired utility Q, try to reach (1 − ε)Q, for some 0 < ε < 1. In the latter case, we can simply replace Q with Q/ε in the theorems to get the correct bounds.
submodular. More formally, submodularity means that
f_u(A ∪ {e}) − f_u(A) ≥ f_u(B ∪ {e}) − f_u(B)   ∀A ⊆ B ⊆ V and ∀e ∈ V \ B.
Monotonicity implies that for any A ⊆ V and e ∈ V we have Δf_u(e|A) := f_u(A ∪ {e}) − f_u(A) ≥ 0.
The term Δf_u(e|A) is called the marginal gain (or added value) of e to the set A. Whenever it is
clear from the context we drop f_u from Δf_u(e|A). Without loss of generality, we normalize all
users' functions so that they achieve the same maximum value, i.e., f_u(V) = f_v(V) for all u, v ∈ U.
Second, and in contrast to public data that is shared among all users, the private data of a user cannot
be shared with others. Thus, a user u ∈ U can only evaluate the public and her own private part of a
summary S, i.e., S ∩ (V_P ∪ V_u). In other words, if the summary S contains private data of a user
v ≠ u, the user u cannot have access to or evaluate v's private part of S, i.e., S ∩ V_v. In public-private
data summarization, we would like to find the smallest subset S ⊆ V such that all users reach a
desired utility Q ≤ f_u(V) = f_u(V_P ∪ V_u) simultaneously, i.e.,

    OPT = argmin_{S ⊆ V} |S|,  such that  f_u(S ∩ (V_P ∪ V_u)) ≥ Q   ∀u ∈ U.    (1)
A naive way to solve the above problem is to find a separate summary for each user and then return
the union of all summaries as S. A more clever way is to realize that problem (1) is in fact equivalent
to the following problem [13, 14]:

    OPT = argmin_{S ⊆ V} |S|,  such that  f(S) := Σ_{u ∈ U} min{f_u(S ∩ (V_P ∪ V_u)), Q} ≥ Q · |U|.    (2)

Note that the surrogate function f(·) is also monotone submodular, as a thresholded submodular
function remains submodular. Thus, finding a set S that provides each user with utility Q is equivalent
to finding a set S with f(S) ≥ L := Q · |U|. This reduction lets us focus on developing a fast
distributed solution for solving a single submodular cover problem. Our method FASTCOVER is
explained in detail in Section 4.
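To illustrate the reduction, here is a minimal Python sketch of the surrogate f(·) in problem (2); the User container and the toy coverage utilities are our own illustrative choices, not part of the paper.

from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class User:
    private_items: Set[int]           # V_u
    f: Callable[[Set[int]], float]    # monotone submodular utility f_u

def surrogate(S: Set[int], users: List[User], Q: float, V_public: Set[int]) -> float:
    # f(S) from Eq. (2): sum_u min{f_u(S ∩ (V_P ∪ V_u)), Q}.
    # Reaching f(S) >= Q * len(users) covers every user to level Q at once.
    total = 0.0
    for u in users:
        visible = S & (V_public | u.private_items)   # the part of S user u may see
        total += min(u.f(visible), Q)
    return total

# Toy check with coverage-style utilities f_u(S) = |S|:
users = [User({10}, len), User({11}, len)]
print(surrogate({1, 2, 10}, users, Q=2, V_public={1, 2, 3}))   # 2 + 2 = 4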
Related Work: When the data is small, we can use the centralized greedy algorithm to solve
problem (2) (and equivalently problem (1)). The greedy algorithm sequentially picks elements and
returns a solution of size (1 + ln M)|OPT| ≤ ln(L)|OPT|, where M = max_{e∈V} f(e). As elaborated
earlier, when the data is large, one cannot run this greedy algorithm as it requires centralized access to
the full dataset. This is why scalable solutions for the submodular cover problem have recently gained
a lot of interest. In particular, for the set cover problem (a special case of the submodular cover problem)
there have been efficient MapReduce-based implementations proposed in the literature [17, 18, 19].
There have also been recent studies on the streaming set cover problem [20]. Perhaps the closest work
to our efforts is [12], where the authors proposed a distributed algorithm for the submodular cover
problem called DISCOVER. Their method relies on the reduction of the submodular cover problem to
multiple instances of the distributed constrained submodular maximization problem [6, 21].
For any fixed 0 < α ≤ 1, DISCOVER returns a solution of size ⌈2αk + 72√(min(m, α|OPT|)) · log(L)|OPT|⌉
in ⌈log(α|OPT|) + 36√(min(m, α|OPT|)) · log(L)/α + 1⌉ rounds, where m denotes the number
of machines. Even though DISCOVER scales better than the greedy algorithm, the solution it
returns is usually much larger. Moreover, the dependency of the number of MapReduce rounds on
√(min(m, α|OPT|)) is far from desirable. Note that as we increase the number of machines, the
number of rounds may increase (rather than decrease). Instead, in this paper we propose a fast
distributed algorithm, FASTCOVER, that truly scales to massive data and produces a solution that is
competitive with that of the greedy algorithm. More specifically, for any ε > 0, FASTCOVER returns a
solution of size at most ⌈ln(L)|OPT|/(1 − ε)⌉ with at most ⌈log_{3/2}(n/(m|OPT|)) log(M)/ε + log(L)⌉
rounds, where M = max_{e∈V} f(e). Thus, in terms of speed, FASTCOVER improves exponentially
upon DISCOVER while providing a smaller solution. Moreover, in our work, the number of rounds
decreases as the number of machines increases, in sharp contrast to [12].
3
Applications of Public-Private Data Summarization
In this section, we discuss 3 concrete applications where parts of data are private and the remaining
parts are public. All objective functions are non-negative, monotone, and submodular.
3.1
Personalized Movie Recommendation
Consider a movie recommender system that allows users to anonymously and privately rate movies.
The system can use this information to recognize users' preferences using existing matrix completion
techniques [22]. A good set of recommended movies should meet two criteria: 1) be correlated with the
user's preferences, and 2) be diverse and contain globally popular movies. To this end, we define the
following sum-coverage function to score the quality of the selected movies S for a user u:

    f_u(S) = α_u Σ_{i ∈ S, j ∈ V_u} s_{i,j} + (1 − α_u) Σ_{i ∈ S, j ∈ V_P \ S} s_{i,j},    (3)
where V_u is the list of highly ranked movies by user u (i.e., private information), V_P is the set of
all movies in the database³, and s_{i,j} measures the similarity between movies i and j. The similarity
can be easily calculated using the inner product between the corresponding feature vectors of any
two movies i and j. The term Σ_{i∈S, j∈V_u} s_{i,j} measures the similarity between the recommended
set S and the user's preferences. The second term Σ_{i∈S, j∈V_P\S} s_{i,j} encourages diversity. Finally,
the parameter 0 ≤ α_u ≤ 1 provides the user the freedom to specify how much she cares about
personalization versus diversity, i.e., α_u = 1 indicates that all the recommended movies should be
very similar to the movies she highly ranked and α_u = 0 means that she prefers to receive a set of
globally popular movies among all users, irrespective of her own private ratings. Note that in this
application, the universe of items (i.e., movies) is public. What is private is the users' ratings, through
which we identify the set V_u of movies highly ranked by each user. The effect of private data is
expressed in users' utility functions. The objective is to find the smallest set S of movies from V, from
which we can build recommendations for all users in a way that all reach a certain utility.
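For concreteness, a small NumPy sketch of the per-user score in Eq. (3) follows. The random non-negative feature vectors and the inner-product similarity are illustrative assumptions; the paper only specifies that s_{i,j} comes from inner products of rating-based feature vectors.

import numpy as np

def movie_utility(S, V_u, V_P, sim, alpha_u):
    # Eq. (3): alpha_u * sum over (S, V_u) + (1 - alpha_u) * sum over (S, V_P \ S).
    S = list(S)
    personal = sim[np.ix_(S, list(V_u))].sum()           # match the user's favourites
    diverse = sim[np.ix_(S, list(V_P - set(S)))].sum()   # cover the rest of the catalogue
    return alpha_u * personal + (1.0 - alpha_u) * diverse

rng = np.random.default_rng(1)
F = rng.uniform(0.0, 1.0, (6, 3))    # 6 movies, 3-dim rating-based features
sim = F @ F.T                        # non-negative similarities s_{i,j}
print(movie_utility({0, 1}, V_u={2}, V_P=set(range(6)), sim=sim, alpha_u=0.7))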
3.2
Personalized Location Recommendation
Nowadays, many mobile apps collect geolocation data of their users. To comply with privacy concerns,
some let their customers have control over their data, i.e., users can mark some part of their data
private and disallow the app to share it with other users. In personalized location recommendation,
a user is interested in identifying a set of locations that are correlated with the places she visited and
popular places everyone else visited. Note that as close-by locations are likely to be similar, it is very
typical to define a kernel matrix K capturing the similarity between data points. A commonly used
kernel in practice is the squared exponential kernel K(e_i, e_j) = exp(−‖e_i − e_j‖₂² / h²). To define the
information gain of a set of locations indexed by S, it is natural to use f(S) = log det(I + σK_{S,S}).
The information gain objective captures diversity and is used in many ML applications, e.g., active
set selection for nonparametric learning [6], sensor placement [13], and determinantal point processes,
among many others. Then, personalized location recommendation can be modeled by

    f_u(S) = α_u f(S ∩ V_u) + (1 − α_u) f(S ∩ V_P),    (4)

where V_u is the set of locations that user u does not want to share with others and V_P is the collection
of all publicly disclosed locations. Again, the parameter α_u lets the user indicate to what extent she
is willing to receive recommendations based on her private information. The objective is to find
the smallest set of locations to recommend to all users such that each reaches a desired threshold.
Note that private data is usually small and private functions are fast to compute. Thus, the function
evaluation is mainly affected by the amount of public data. Moreover, for many objectives, e.g.,
information gain, each machine can evaluate f_u(S) by using its own portion of the private data.
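A minimal sketch of the information gain objective used above is given below; the noise-scaling constant σ and the kernel length scale h are assumptions, as the text leaves their values unspecified here.

import numpy as np

def squared_exp_kernel(X, h=1500.0):
    # X: (n, 2) metric coordinates; K(e_i, e_j) = exp(-||e_i - e_j||^2 / h^2).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h**2)

def information_gain(S, K, sigma=1.0):
    # f(S) = log det(I + sigma * K_{S,S}); monotone submodular for sigma > 0.
    idx = list(S)
    if not idx:
        return 0.0
    K_SS = K[np.ix_(idx, idx)]
    _, logdet = np.linalg.slogdet(np.eye(len(idx)) + sigma * K_SS)
    return logdet

X = np.random.default_rng(2).uniform(0.0, 3000.0, (5, 2))   # toy locations
print(information_gain({0, 2, 4}, squared_exp_kernel(X)))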
3.3
Dominating Set in Social Networks
Probably the easiest way to define the influence of a subset of users on other members of a social
network is by the dominating set problem. Here, we assume that there is a graph G = (V, E) where
V and E indicate the set of nodes and edges, respectively. Let N(S) denote the neighbors of S. Then,
we define the coverage size of S by f(S) = |N(S) ∪ S|. The goal is to find the smallest subset S such
that the coverage size is at least some fraction of |V|. This is a trivial instance of public-private data
summarization as all the data is public and there is a single utility function. We use the dominating
set problem to run a large-scale application for which DISCOVER terminates in a reasonable amount
of time and its performance can be compared to our algorithm FASTCOVER.
Two private lists may point to similar movies, but for now we treat the items on each list as unique entities.
4
FAST C OVER for Fast Distributed Submodular Cover
In this section, we explain in detail our fast distributed Algorithm FAST C OVER shown in Alg. 1. It
receives a universe of items V and an integer-valued, non-negative, monotone submodular function
f : 2V ? R+ . The objective is to find the smallest set S that achieves a value L ? f (V).
FAST C OVER starts with S = ?, and keeps adding those items x ? V to S whose marginal values
?(e|S) are at least some threshold ? . In the beginning, ? is set to a conservative initial value
.
M = maxx?V f (x). When there are no more items with a marginal value ? , FAST C OVER lowers ?
by a factor of (1 ? ), and iterates anew through the elements. Thus, ? ranges over ?0 = M, ?1 =
(1 ? )M, ? ? ? , ?` = (1 ? )` M, ? ? ? . FAST C OVER terminates when f (S) ? L. The parameter
determines the size of the final solution. When is small, we expect to find better solutions (i.e.,
smaller in size) while having to spend more number of rounds.
One of the key ideas behind FASTCOVER is that finding elements with marginal values of at least τ = τ_ℓ can
be done in a distributed manner. Effectively, FASTCOVER partitions V into m sets T₁, ..., T_m, one
for each cluster node/machine. A naive distributed implementation is the following. For a given set
S (whose elements are communicated to all machines) each machine i finds all of its items x ∈ T_i
whose marginal values Δ(x|S) are larger than τ and sends them all to a central machine (note that
S is fixed on each machine). Then, this central machine sequentially augments S with elements
whose marginal values are more than τ (here S changes with each insertion). The new elements of S
are communicated back to all machines and they run the same procedure, this time with a smaller
threshold τ(1 − ε). The main problem with this approach is that there might be many items on each
machine that satisfy the chosen threshold τ at each round (i.e., many more than |OPT|). A flood of
such items from m machines overwhelms the central machine. Instead, what FASTCOVER does is to
force each machine to randomly pick only k items from its potentially big set of candidates (i.e.,
the THRESHOLDSAMPLE algorithm shown in Alg. 2). The value k is carefully chosen (line 7). This way
the number of items the central machine processes is never more than O(m|OPT|).
Algorithm 1: FASTCOVER
1:  Input: V, ε, L, and m
2:  Output: S ⊆ V where f(S) ≥ L
3:  Find a balanced partition {T_i}_{i=1}^m of V;
4:  S ← ∅;
5:  τ ← max_{x∈V} f(x);
6:  while τ ≥ 1 do
7:      k ← ⌈(L − f(S))/τ⌉;
8:      forall 1 ≤ i ≤ m do
9:          ⟨S_i, Full_i⟩ ← ThresholdSample(i, τ, k, S);
10:     forall x ∈ ∪_{i=1}^m S_i do
11:         if f({x} ∪ S) − f(S) ≥ τ then
12:             S ← S ∪ {x};
13:             if f(S) ≥ L then break;
14:     if ∀i : Full_i = False then
15:         if τ > 1 then τ ← max{1, (1 − ε)τ};
16:         else break;
17: Return S;

Algorithm 2: THRESHOLDSAMPLE
1:  Input: index i, τ, k, and S
2:  Output: S_i ⊆ T_i with |S_i| ≤ k
3:  S_i ← ∅;
4:  forall x ∈ T_i do
5:      if f(S ∪ {x}) − f(S) ≥ τ then
6:          S_i ← S_i ∪ {x};
7:  if |S_i| ≤ k then
8:      Return ⟨S_i, False⟩;
9:  else
10:     S_i ← k random items of S_i;
11:     Return ⟨S_i, True⟩;
Theorem 4.1. FASTCOVER terminates with at most log_{3/2}(n/(|OPT|m)) · (1 + log(M)/ε) + log_2(L)
rounds (with high probability) and a solution of size at most |OPT| ln(L)/(1 − ε).
Although FASTCOVER is distributed and, unlike centralized algorithms, does not enjoy the benefit of
accessing all items together, its solution size is truly competitive with the greedy algorithm and is
only away by a factor of 1/(1 − ε). Moreover, its number of rounds is logarithmic in n and L. This
is in sharp contrast with the previously best known algorithm, DISCOVER [12], where the number of
rounds scales with √(min(m, |OPT|))⁴. Thus, FASTCOVER not only improves exponentially over

⁴ Note that √(min(m, |OPT|)) can be as large as n^{1/6} when |OPT| = n^{1/3} and the memory limit of each
machine is n^{2/3}, which results in m ≈ n^{1/3}.
DISCOVER in terms of speed, but also its number of rounds decreases as the number of available
machines m increases. Even though FASTCOVER is a simple distributed algorithm, its performance
analysis is technical and is deferred to the supplementary materials. Below, we provide the main
ideas behind the proof of Theorem 4.1.
Proof sketch. We say that an item has a high value if its marginal value to S is at least τ. We define
an epoch to be the rounds during which τ does not change. In the last round of each epoch, all
high-value items are sent to the central machine (i.e., the set ∪_{i=1}^m S_i) because Full_i is false for all
machines. We also add every high-value item to S in lines 11-12. So, at the end of each epoch, the
marginal values of all items to S are less than τ. Since we reduce τ by a factor of (1 − ε), we can
always say that τ ≥ (1 − ε) max_{x∈V} Δ(x|S), which means we are only adding items that have almost
the highest marginal values. By the classic analysis of the greedy algorithm for submodular maximization,
we can conclude that every item we add has an added value that is at least (1 − ε)(L − f(S))/|OPT|.
Therefore, after adding |OPT| ln(L)/(1 − ε) items, f(S) becomes at least L.
To upper bound the number of rounds, we divide the rounds into two groups. In a good round, the algorithm adds
at least k/2 items to S. The rest are bad rounds. In a good round, we add k/2 ≥ (L − f(S))/(2τ)
items, and each of them increases the value of S by at least τ. Therefore, in a good round, we see at least
(L − f(S))/2 increase in the value of S. In other words, the gap L − f(S) is reduced by a factor of at
least 2 in each good round. Since f only takes integer values, once L − f(S) becomes less than 1,
we know that f(S) ≥ L. Therefore, there cannot be more than log_2(L) good rounds. Every time we
update τ (at the start of an epoch), we decrease it by a factor of 1 − ε (except maybe in the last round, for
which τ = 1). Therefore, there are at most 1 + log_{1/(1−ε)}(M) ≤ 1 + log(M)/ε epochs.
In a bad round, a machine with more than k high-value items sends k of those to the central machine,
and at most k/2 of them are selected. In other words, the addition of these items to S in this bad
round caused more than half of the high-value items of each machine to become of low value (marginal
values less than τ). Since there are n/m items in each machine, and Full_i becomes False once there
are at most k high-value items in the machine, we conclude that in expectation there should not be
more than log_2(n/(km)) bad rounds in each epoch. Summing these upper bounds yields the bound
on the total number of rounds. Finer analysis leads to the high-probability claim.
5
Experiments
In this section, we evaluate the performance of FASTCOVER on the three applications that we
described in Section 3: personalized movie recommendation, personalized location recommendation,
and dominating set on social networks. To validate our theoretical results and demonstrate the
effectiveness of FASTCOVER, we compare the performance of our algorithm against DISCOVER and
the centralized greedy algorithm (when possible).
Our experimental infrastructure was a cluster of 16 quad-core machines with 20 GB of memory
each, running Spark. The cluster was configured with one master node responsible for resource
management, and the remaining 15 machines working as executors. We set the number of reducers
to m = 60. To run FASTCOVER on Spark, we first distributed the data uniformly at random to
the machines, and performed a map/reduce task to find the highest marginal gain τ = M. Each
machine then carries out a set of map/reduce tasks in sequence, where each map/reduce stage
filters out elements with a specific threshold τ on the whole dataset. We then tune the parameter τ,
communicate back the results to the machines and perform another round of map/reduce calculation.
We continue performing map/reduce tasks until we reach the desired value L.
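The per-round filtering step maps naturally onto Spark's partition-level primitives. The sketch below is our reconstruction of one such stage using the public PySpark API; the variable names, the parameter values, and the toy utility (plain set coverage, where an item's marginal gain is 1 exactly when the item is new) are assumptions, not the authors' code.

from pyspark import SparkContext
import random

sc = SparkContext(appName="fastcover-round")
tau, k = 1, 100                                      # illustrative round parameters

def threshold_sample(index, partition):
    S = S_broadcast.value                            # current summary on every worker
    high = [x for x in partition if x not in S]      # marginal gain >= tau for coverage
    full = len(high) > k
    yield (index, random.sample(high, k) if full else high, full)

S_broadcast = sc.broadcast(set())
items = sc.parallelize(range(10**6), numSlices=60)   # m = 60 partitions
candidates = items.mapPartitionsWithIndex(threshold_sample).collect()
# The driver then augments S sequentially from `candidates`, re-checking each
# item's marginal gain (lines 10-13 of Alg. 1), re-broadcasts S, and repeats
# with a lower tau once no partition reports being full.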
5.1 Personalized Location Recommendation with Spark
Our location recommendation experiment involves applying FASTCOVER to the information gain
utility function described in Eq. (4). Our dataset consists of 3,056 GPS measurements from 20 users
in the form of (latitude, longitude, altitude), collected during bike tours around Zurich [23]. The size
of each path is between 50 and 500 GPS coordinates. For each pair of points i and j we used the
corresponding GPS coordinates to calculate their distance in meters d(i, j) and then formed a squared
exponential kernel K_{i,j} = exp(−d(i, j)²/h²) with h = 1500. For each user, we marked 20% of her
data as private (data points are chosen consecutively), selected from each path taken by the biker. The
parameter α_u is set randomly for each user u.
Figures 1a, 1b, 1c compare the performance of FASTCOVER to the benchmarks for building a
recommendation set that covers 60%, 80%, and 90% of the maximum utility of each user. We
considered running DISCOVER with different values of the parameter α, which makes a trade-off between
the size of the solution and the number of rounds of the algorithm. It can be seen that by avoiding the
doubling steps of DISCOVER, our algorithm FASTCOVER is able to return a significantly smaller
solution than that of DISCOVER in considerably fewer rounds. Interestingly, for small values
of ε, FASTCOVER returns a solution that is even smaller than that of the centralized greedy algorithm.
5.2
Personalized Movie Recommendation with Spark
Our personalized public-private recommendation experiment involves FASTCOVER applied to a set
of 1,313 movies, and 20,000,263 users' ratings from 138,493 users of the MovieLens database [24].
All selected users rated at least 20 movies. Each movie is associated with a 25-dimensional feature
vector calculated from users' ratings. We use the inner product of the non-normalized feature vectors
to compute the similarity s_{i,j} between movies i and j [25]. Our final objective function consists of
138,493 coverage functions (one per user) and a global sum-coverage function defined on the whole
pool of movies (see Eq. (3)). Each function is normalized by its maximum value to make sure that all
functions have the same scale.
Figures 1d, 1e, 1f show the ratio of the size of the solutions obtained by FASTCOVER to that of the greedy
algorithm. The figures demonstrate the results for 10%, 20%, and 30% covers for all the 138,493
users' utility functions. The parameter α_u is set to 0.7 for all users. We scaled down the number of
iterations by a factor of 0.01, so that the corresponding bars can be shown in the same figures. Again,
FASTCOVER was able to find a considerably smaller solution than the centralized greedy. Here, we
could not run DISCOVER because of its prohibitive running time on Spark.
Figure 1g shows the size of the solution set obtained by FASTCOVER for building recommendations
from a set of 1000 movies for 1000 users vs. the size of the merged solutions found by finding
recommendations separately for each user. It can be seen that FASTCOVER was able to find a much
smaller solution by covering all the functions at the same time.
5.3
Large Scale Dominating Set with Spark
In order to compare the performance of our algorithm with DISCOVER more precisely,
we applied FASTCOVER to the Friendster network, consisting of 65,608,366 nodes and 1,806,067,135
edges [26]. This dataset was used in [12] to evaluate the performance of DISCOVER.
Figures 1j, 1k, 1l show the performance of FASTCOVER for obtaining covers of 50%, 40%, and 30%
of the whole graph, compared to the centralized greedy solution. Again, the size of the solution
obtained by FASTCOVER is smaller than that of the greedy algorithm for small values of ε. Note that
running the centralized greedy is impractical if the dataset cannot fit into the memory of a single
machine. Figure 1h compares the solution set size and the number of rounds for FASTCOVER and
DISCOVER with different values of ε and α. The points in the bottom left correspond to the solutions
obtained by FASTCOVER, which confirms its superior performance. We further measured the actual
running time of both algorithms on a smaller instance of the same graph with 14,043,721 nodes. We
tuned ε and α to get solutions of approximately equal size for both algorithms. Figure 1i shows the
speedup of FASTCOVER over DISCOVER. It can be observed that by increasing the coverage value
L, FASTCOVER shows an exponential speedup over DISCOVER.
6
Conclusion
In this paper, we introduced the public-private model of data summarization, motivated by privacy
concerns of recommender systems. We also developed a fast distributed algorithm, FASTCOVER,
that provides a succinct summary for all users without violating their privacy. We showed that
FASTCOVER returns a solution that is competitive with that of the best centralized, polynomial-time
algorithm (i.e., the greedy solution). We also showed that FASTCOVER runs exponentially faster than
the previously proposed distributed algorithms. The superior practical performance of FASTCOVER
against all the benchmarks was demonstrated through a large set of experiments, including movie
recommendation, location recommendation, and dominating set (all implemented with Spark).
Our theoretical results combined with the practical performance of FASTCOVER make it the only
existing distributed algorithm for the submodular cover problem that truly scales to massive data.
Acknowledgment: This research was supported by a Google Faculty Research Award and a DARPA
Young Faculty Award (D16AP00046).
[Figure 1: twelve panels (a)-(l) comparing FASTCOVER, DISCOVER (various α), and Greedy; the panels plot solution set size against number of rounds or iterations, with panel-by-panel details given in the caption below.]
Figure 1: Performance of FASTCOVER vs. other baselines. a), b), c) solution set size vs. number of rounds for
personalized location recommendation on a set of 3,056 GPS measurements, for covering 60%, 80%, 90% of the
maximum utility of each user. d), e), f) the same measures for personalized movie recommendation on a set of 1000
movies, 138,493 users and 20,000,263 ratings, for covering 10%, 20%, 30% of the maximum utility of each user.
g) solution set size vs. coverage for simultaneously covering all users vs. covering users one by one and taking
the union. The recommendation is on a set of 1000 movies for 1000 users. h) solution set size vs. the number of
rounds for FASTCOVER and DISCOVER for covering 50% of the Friendster network with 65,608,366 vertices. i)
Exponential speedup of FASTCOVER over DISCOVER on a subgraph of 14M nodes. j), k), l) solution set size vs.
the number of rounds for covering 30%, 40%, 50% of the Friendster network.
References
[1] Sebastian Tschiatschek, Rishabh Iyer, Haochen Wei, and Jeff Bilmes. Learning Mixtures of Submodular
Functions for Image Collection Summarization. In NIPS, 2014.
[2] Khalid El-Arini and Carlos Guestrin. Beyond keyword search: discovering relevant scientific literature. In
KDD, 2011.
[3] Ian Simon, Noah Snavely, and Steven M Seitz. Scene summarization for online image collections. In
ICCV, 2007.
[4] Delbert Dueck and Brendan J Frey. Non-metric affinity propagation for unsupervised image categorization.
In ICCV, 2007.
[5] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
[6] Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, 2013.
[7] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In North American
chapter of the Assoc. for Comp. Linguistics/Human Lang. Tech., 2011.
[8] Ruben Sipos, Adith Swaminathan, Pannaga Shivaswamy, and Thorsten Joachims. Temporal corpus
summarization using submodular word coverage. In CIKM, 2012.
[9] Laurence A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem.
Combinatorica, 1982.
[10] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 1998.
[11] J. Dean and S. Ghemawat. Mapreduce: Simplified data processing on large clusters. In OSDI, 2004.
[12] Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, and Andreas Krause. Distributed
submodular cover: Succinctly summarizing massive data. In NIPS, 2015.
[13] Andreas Krause, Brendan McMahan, Carlos Guestrin, and Anupam Gupta. Robust submodular observation
selection. JMLR, 2008.
[14] Rishabh K Iyer and Jeff A Bilmes. Submodular optimization with submodular cover and submodular
knapsack constraints. In NIPS, 2013.
[15] Flavio Chierichetti, Alessandro Epasto, Ravi Kumar, Silvio Lattanzi, and Vahab Mirrokni. Efficient
algorithms for public-private social networks. In KDD, 2015.
[16] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, and Amin Karbasi. Fast constrained submodular
maximization: Personalized data summarization. In ICML, 2016.
[17] Bonnie Berger, John Rompel, and Peter W Shor. Efficient nc algorithms for set cover with applications to
learning and geometry. Journal of Computer and System Sciences, 1994.
[18] Guy E. Blelloch, Richard Peng, and Kanat Tangwongsan. Linear-work greedy parallel approximate set
cover and variants. In SPAA, 2011.
[19] Stergios Stergiou and Kostas Tsioutsiouliklis. Set cover at web scale. In SIGKDD, 2015.
[20] Erik D Demaine, Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. On streaming and communication
complexity of the set cover problem. In Distributed Computing. 2014.
[21] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in
mapreduce and streaming. TOPC, 2015.
[22] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations
of Computational mathematics, 2009.
[23] https://refind.com/fphilipe/topics/open-data.
[24] Grouplens. movielens 20m dataset. http://grouplens.org/datasets/movielens/20m/.
[25] Erik M Lindgren, Shanshan Wu, and Alexandros G Dimakis. Sparse and greedy: Sparsifying submodular
facility location problems. NIPS, 2015.
[26] Jaewon Yang and Jure Leskovec. Defining and evaluating network communities based on ground-truth.
Knowledge and Information Systems, 2015.
Spatio-Temporal Hilbert Maps for Continuous
Occupancy Representation in Dynamic Environments
Ransalu Senanayake
University of Sydney
[email protected]
Simon O'Callaghan
Data61/CSIRO, Australia
[email protected]
Lionel Ott
University of Sydney
[email protected]
Fabio Ramos
University of Sydney
[email protected]
Abstract
We consider the problem of building continuous occupancy representations in
dynamic environments for robotics applications. The problem has hardly been
discussed previously due to the complexity of patterns in urban environments,
which have both spatial and temporal dependencies. We address the problem
as learning a kernel classifier on an efficient feature space. The key novelty of
our approach is the incorporation of variations in the time domain into the spatial
domain. We propose a method to propagate motion uncertainty into the kernel using
a hierarchical model. The main benefit of this approach is that it can directly predict
the occupancy state of the map in the future from past observations, being a valuable
tool for robot trajectory planning under uncertainty. Our approach preserves the
main computational benefits of static Hilbert maps ? using stochastic gradient
descent for fast optimization of model parameters and incremental updates as
new data are captured. Experiments conducted in road intersections of an urban
environment demonstrated that spatio-temporal Hilbert maps can accurately model
changes in the map while outperforming other techniques on various aspects.
1
Introduction
We are in the climax of driverless vehicles research where the perception and learning are no longer
trivial problems due to the transition from controlled test environments to real world complex
interactions with other road users. Online mapping environments is vital for action planing. In such
applications, the state of the observed world with respect to the vehicle changes over time, making
modeling and predicting into the future challenging. Despite this, there is a plethora of mapping
techniques for static environments but only very few instances of truly dynamic mapping methods.
Most existing techniques merely consider a static representation, and as parallel processes, initialize
target trackers for the dynamic objects in the scene, updating the map with new information. This
approach can be effective from a computational point of view, but it disregards crucial relationships
between time and space. By treating the dynamics as a separate problem from the space representation,
such methods cannot perform higher level inference tasks such as what are the most likely regions of
the environment to be occupied in the future, or when and where a dynamic object is most likely to
appear.
In occupancy grid maps (GM) [1], the space is divided into a fixed number of non-overlapping
cells and the likelihood of occupancy for each individual cell is estimated independently based on
sensor measurements. Considering the main drawbacks of the GM, discretization of the world and
disregarding spatial relationship among cells, Gaussian process occupancy map (GPOM) [2] enabled
continuous probabilistic representation. In spite of its profound formulation, it is less pragmatic for online learning due to its O(N^3) computational cost in both learning and inference, where N is
the number of data points. Recently, as an alternative, static Hilbert maps (SHMs) [3, 4] was
proposed, borrowing the two main advantages of GPOMs but at a much lower computational cost.
As a parametric technique, SHMs have a constant cost for updating the model with new observations.
Additionally, the parameters can be learned using stochastic gradient descent (SGD) which made it
computationally attractive and capable of handling large datasets. Nonetheless, all these techniques
assume a static environment.
Although attempts to adapt occupancy grid maps to dynamic environments and identify periodic
patterns exist [5], to the best of our knowledge, only dynamic Gaussian processes occupancy maps
(DGPOM) [6] can model occupancy in dynamic environments in a continuous fashion. There,
velocity estimates are linearly added to the inputs of the GP kernel. This approach, similar to the
proposed method, can make occupancy predictions into the future. However, being a non-parametric
model, the cost of inverting the covariance matrix in DGPOM grows over time and hence the model
cannot be run in real-time.
In this paper, we propose a method for building continuous spatio-temporal Hilbert maps (STHM)
using "hinged" features. This method builds on the main ideas behind SHM and generalizes it to
dynamic environments. To this end, we formulate a novel methodology to permeate the variability
in the temporal domain into the spatial domain, rather than considering time merely as another
dimension. This approach can be used to predict the occupancy state of the world, interpolating
not only in space but also in time. The representation is demonstrated in highly dynamic urban
environments of busy intersections with cars moving and turning in both directions obeying traffic
lights. In Section 2, we lay the foundation by introducing SHMs and then, we discuss the proposed
method in Section 3, followed by experiments and discussions in Section 4.
2 Static Hilbert maps (SHMs)
A static Hilbert map (SHM) [3] is a continuous probabilistic occupancy representation of the space,
given a collection of range sensor measurements. As in almost all autonomous vehicles, we assume a
training dataset consisting of locations with associated occupancy information obtained from a range
sensor (in the case of a laser scanner, i.e. LIDAR, points along the beam are unoccupied while the end point is occupied); the model predicts the occupancy state of different locations given by query
points.
The SHM model: Formally, let the training dataset be defined as D = {x_i, y_i}_{i=1}^N with x_i ∈ R^D being a point in 2D or 3D space, and y_i ∈ {−1, +1} the associated occupancy status. SHM predicts the probability of occupancy for a new point x*, calculated as p(y*|x*, w, D), given a set of parameters w and the dataset D. This discriminative model takes the form of a logistic regression classifier with an elastic net regularizer operating on basis functions mapping the point coordinates to a Hilbert space defined by a kernel k(x, x') : X × X → R where x, x' ∈ X = {location}. This is equivalent to kernel logistic regression [7], which is known to be computationally expensive due to the need of computing the kernel matrix between all points in the dataset. The crucial insight to make the method computationally efficient is to first approximate the kernel by a dot product of basis functions such that k(x, x') ≈ Φ(x)^T Φ(x'). This can be done using the random kitchen sinks procedure [8, 9] or by directly defining efficient basis functions. Note that [3] assumes a linear machine w^T Φ(x). Learning w is done by minimizing the regularized negative log-likelihood using stochastic gradient descent (SGD) [10]. The probability that a query point x* is not occupied is given by p(y* = −1|x*, w, D) = (1 + exp(w^T Φ(x*)))^{−1}, while the probability of being occupied is given by p(y* = +1|x*, w, D) = 1 − p(y* = −1|x*, w, D).
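As a concrete (and hedged) illustration of this pipeline, the sketch below approximates the kernel with RBF features hinged on a regular grid and trains the logistic model with scikit-learn's SGDClassifier. This is not the authors' implementation; the workspace extent, grid resolution, gamma, regularization, and the toy labels are all illustrative assumptions (loss="log_loss" assumes scikit-learn >= 1.1).

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def hinged_rbf_features(X, supports, gamma=2.0):
    """Map 2D points X (n, 2) to RBF features centred on fixed supports."""
    d2 = ((X[:, None, :] - supports[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Regular grid of support points over an assumed 30 m x 30 m workspace.
gx, gy = np.meshgrid(np.linspace(-15, 15, 20), np.linspace(-15, 15, 20))
supports = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Toy training data: (x, y) coordinates with occupancy labels in {-1, +1}.
X_train = np.random.uniform(-15, 15, size=(1000, 2))
y_train = np.where(X_train[:, 0] > 0, 1, -1)   # placeholder labels

clf = SGDClassifier(loss="log_loss", penalty="elasticnet", alpha=1e-4)
clf.fit(hinged_rbf_features(X_train, supports), y_train)

# Query the continuous map at any points; column 1 is p(y = +1 | x).
p_occ = clf.predict_proba(hinged_rbf_features(X_train[:5], supports))[:, 1]
```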
3 Spatio-temporal hinged features (HF-STHM)
In this section, SHMs are generalized into the spatio-temporal domain. Though augmenting the inputs of the SHM kernel X = {location} as X = {(location, time)} or X = {(location, time, velocity)} is the naive way to build instantaneous maps, such models cannot be used for predicting into the future, mainly because they are not capable of capturing complex and miscellaneous spatio-temporal dependencies. As discussed in Section 3.3, in our approach, the uncertainty of dynamic objects is incorporated into the map. This uncertainty is estimated using an underlying Gaussian process (GP) regression model described in Section 3.2. The inputs for the GP are obtained using a further underlying model based on motion cluster data association, which is discussed in Section 3.1. This way, locations are no longer deterministic; each location has a probability distribution and hence the kernel inputs become X = {mean and variance of location}. Sections 3.1-3.3 explain this three-step hierarchical framework in a bottom-to-top fashion; the steps are executed sequentially as new data are received. The method is summarized in Figure 1.

Figure 1: Motion centroids are collected over time from raw data (Section 3.1) and individual GPs are trained (input: centroids, output: motion information) to learn GP hyperparameters (Section 3.2 and Figure 4). Then, the motion of data points at time t* (t* = 0 for the present, t* > 0 for the future, t* < 0 for the past) is queried using the trained GPs and this motion distribution is fed into the kernel (Section 3.3). This implicitly embeds motion information into the spatial observations. Then a kernelized logistic regression model logistic(w^T Φ) is trained to learn w. For a new query point in space, Φ(longitude, latitude) is calculated using Equation 6, followed by sigmoid(w^T Φ) to obtain the occupancy probability. These steps are repeated for each new laser scan.
Assumptions: WLOG, we assume that the sensor is not moving; the general case where the
sensor moves is trivial if the motion of the platform is known. From a robotics perspective, we treat
localization as a separate process and assume it is given for the purpose of introducing the method.
Notation: In this section, unless otherwise stated, input x = (x, y, t) are the longitude, latitude and
time components and s = (x, y) are merely the spatial coordinates. A motion vector (displacement) is denoted by v = (v_x, v_y), where v_x and v_y are the motion in the x and y directions, respectively. A motion field is a mapping from space and time to a motion vector, (x, y, t) ↦ (v_x, v_y).
3.1 Motion observations
As the first step, motion observations are extracted from laser scans. Due to occlusions and sensor
noise, extracting dynamic parts of a scene is not straightforward. Similarly, as the shapes of observed
objects change over time (because the only measurement in laser is depth), morphology based object
tracking algorithms and optical flow [11, 12] which are commonly used in computer vision are
unsuitable. Therefore, we devise a method that is robust to occlusions and noise without relying on the shape of the objects present in the scene. To obtain motion observations, taking raw laser scans as inputs and outputting motion vectors, the following two steps are performed.
3.1.1 Computing centroids of dynamic objects
As shown in Figure 2, firstly, a SHM is built from the raw scanner data at time t and then it is binarized
to produce a grid map containing occupied and free cells. Based on this grid map, observable areas
where dynamic objects can appear are extracted. Next, dynamic objects are obtained by performing
logical conjunction between an adaptive binary mask and the raw laser data. The final step is the
computation of the centroid for each of these components.
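A minimal sketch of this masking-and-centroid step, assuming the occupancy field has already been rasterized to a grid; scipy.ndimage provides the connected-component labelling and centroid computation, and the threshold is an illustrative choice:

```python
import numpy as np
from scipy import ndimage

def dynamic_centroids(occupancy_grid, dynamic_mask, threshold=0.5):
    """Return centroids (row, col) of connected dynamic components.

    occupancy_grid: 2-D array of occupancy probabilities from the SHM.
    dynamic_mask:   boolean array, True where dynamic objects may appear.
    """
    occupied = occupancy_grid > threshold              # binarize the map
    dynamic = np.logical_and(occupied, dynamic_mask)   # logical conjunction
    labels, n = ndimage.label(dynamic)                 # connected components
    return ndimage.center_of_mass(dynamic, labels, range(1, n + 1))
```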
3.1.2 Associating centroids of consecutive frames
Having obtained N centroids for frame t and M centroids for frame t − 1 from the previous step, we formulate the centroid association as the integer program in Equation 1.
Figure 2: The various steps involved in computing motion observations discussed in Section 3.1 are shown in (a). The mask (lower left of (b)) is generated by applying morphological operations to the
raw scans (top row). Taking the intersection between the mask and a raw scan yields the potential
dynamic objects in a scene at a given time (middle row). The final centroid association of such
connected components across two consecutive frames is shown in the bottom right frame.
    minimize    Σ_{i=1}^{M} Σ_{j=1}^{N} d_ij a_ij            (1a)
    subject to  Σ_{i=1}^{M} a_ij = 1,  j = 1, . . . , N       (1b)
                Σ_{j=1}^{N} a_ij = 1,  i = 1, . . . , M       (1c)
                a_ij ∈ {0, 1},                                (1d)

where d_ij is the Euclidean distance between two centroids and a_ij are the elements of the assignment matrix. In order to obtain valid assignment solutions a_ij, we impose that only one centroid from frame t can be assigned to one centroid in frame t − 1 (Equation 1b), and vice versa (Equation 1c). Finally, we only allow integer solutions (Equation 1d). The solution to the above problem is obtained using the Hungarian method [13]. The asymptotically cubic computational complexity does not thwart online learning as the number of vehicles in the field of vision is typically very low (say, < 10). This forms the basis for obtaining the motion field which is described in the next section.
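For reference, Equation 1 is a linear sum assignment problem, and SciPy's linear_sum_assignment implements a Hungarian-style solver; a small sketch (not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate_centroids(prev_centroids, curr_centroids):
    """Match frame t-1 centroids to frame t by minimizing total distance.

    Both arguments are arrays of shape (k, 2). Returns index pairs (i, j)
    meaning prev_centroids[i] is matched to curr_centroids[j]; with a
    rectangular cost matrix, unmatched centroids (objects entering or
    leaving the scene) are simply left out.
    """
    cost = cdist(prev_centroids, curr_centroids)   # the d_ij matrix
    rows, cols = linear_sum_assignment(cost)       # Hungarian-style solver
    return list(zip(rows, cols))

# Displacement (motion observation) for a matched pair (i, j):
# v = curr_centroids[j] - prev_centroids[i]
```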
3.2 Motion prediction using Gaussian process regression
In this section we describe the construction of a model to predict the motion field as a mapping
(x, y, t) ↦ (v_x, v_y). We adopt a Bayesian approach that can provide uncertainty estimates for a query point with a small amount of data. A Gaussian process (GP) regression model is instantiated for each new
moving object and motion observations are collected over time until the object disappears from the
robot?s view. Each GP model has a different number of data points which grows over time during its
lifespan. Nevertheless, this stage does not suffer from the O(N^3) asymptotic cost of GPs because objects
appear and disappear from the mapped area (say, the number of GPs < 20 and N < 50 for each GP).
Let us denote displacements collected over times {t − T, . . . , t − 2, t − 1, t} for any such moving object as V = {v_{t−T}, . . . , v_{t−2}, v_{t−1}, v_t}. A Gaussian process (GP) prior is placed on f, such that f ~ GP(0, k_GP(t, t')), and V = f(t) + ε, where ε ~ N(0, σ^2) is additive noise. This way we can model non-linear relationships between motion and time. As v are observations in 2D, the model is a two-dimensional-output GP. However, it is also possible to disregard the correlation between response variables v_x and v_y for simplicity. So as to capture the variations in motions, we adopt a polynomial covariance function of degree 3. Further, as commonly used in kriging methods in geostatistics [14], we explicitly augment the input with a quadratic term t̃ = [t, t^2]^T and build k_GP(t, t') = (t̃^T t̃' + 1)^3,
to improve (verified in pilot experiments) the prediction. Unlike squared-exponential kernels which
definitely decay beyond the range of data points, polynomial kernels are suitable for extrapolation
into the near future. However, note that polynomials of unnecessarily higher orders would result in
over-fitting.
The predictive distribution for the motion of a point in the locality of an individual GP at a given time, v* ~ N(E, V), can then be predicted using standard GP prediction equations [15] (Figure 4). Note that the hyperparameters of each GP have to be optimized before making any predictions. The associated distribution for the position of a point transformed by p(v(x)) is then,

    s ~ N(μ, Σ) = N((x, y)^T + E, V) = N((x + μ_x, y + μ_y)^T, [[Σ_xx, Σ_xy], [Σ_yx, Σ_yy]]),    (2)
where we used s(x) to denote the spatial coordinates of x such that s(x) = (x, y).
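A hedged illustration of this stage: an exact GP with the cubic polynomial kernel on the augmented input [t, t^2], returning the predictive mean and variance at a query time. The noise level and the data are toy assumptions, and v_x and v_y would each get such a model:

```python
import numpy as np

def poly_kernel(t1, t2):
    """k(t, t') = (t_aug^T t'_aug + 1)^3 with t_aug = [t, t^2]."""
    a1 = np.stack([t1, t1 ** 2], axis=1)
    a2 = np.stack([t2, t2 ** 2], axis=1)
    return (a1 @ a2.T + 1.0) ** 3

def gp_predict(t_train, v_train, t_query, noise=0.1):
    """Exact GP regression: predictive mean and variance at query times."""
    K = poly_kernel(t_train, t_train) + noise ** 2 * np.eye(len(t_train))
    k_star = poly_kernel(t_query, t_train)
    mean = k_star @ np.linalg.solve(K, v_train)
    var = np.diag(poly_kernel(t_query, t_query)
                  - k_star @ np.linalg.solve(K, k_star.T))
    return mean, var

# Displacements of one tracked object; extrapolate one step into the future.
t_obs = np.array([0.0, 1.0, 2.0, 3.0])
vx_obs = np.array([1.0, 0.8, 0.5, 0.2])        # e.g. a braking vehicle
mu, sigma2 = gp_predict(t_obs, vx_obs, np.array([4.0]))
```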
3.3 Feature embedding
With the predicted spatial coordinates for each point x at time t*, represented as N(μ, Σ), obtained in the previous step, the HF-STHM (hinged feature STHM) can now be constructed. As there is uncertainty in the motion of a point, this uncertainty needs to be propagated into the map.
Denoting H for a reproducing kernel Hilbert space (RKHS) of functions f : S → R with a reproducing kernel k : S × S → R, the mean map μ from probability space P into H is obtained [16] as μ : P → H, P ↦ ∫_S k(s, ·) dP(s). Then, the kernel between two distributions can be written as,

    k(P_i, P_j) = ∫∫ ⟨k(s_i, ·), k(s_j, ·)⟩_H dP_i(s_i) dP_j(s_j)
                = ∫∫ k(s_i, s_j) dP_i(s_i) dP_j(s_j)                                      (3)
                = ∫∫ k(s_i, s_j) p(s_i; μ_i, Σ_i) p(s_j; μ_j, Σ_j) ds_i ds_j,

where ⟨·, ·⟩ denotes the dot product and P_i := P(s_i) = N(μ_i, Σ_i) in a probability space P.
Theorem 1 [17] If a squared exponential kernel, k(s_i, s_j) = exp{−(1/2)(s_i − s_j)^T L^{−1} (s_i − s_j)}, is endowed with P = N(s; μ, Σ), then there exists an analytical solution in the form,

    k(P_i, P_j) = |I + L^{−1}(Σ_i + Σ_j)|^{−1/2} exp{−(1/2)(μ_i − μ_j)^T (L + Σ_i + Σ_j)^{−1} (μ_i − μ_j)},    (4)

where I is the identity matrix and L is the matrix of length-scale parameters which determines how fast the magnitude of the exponential decays with distance.

Corollary 1 For point estimates s̃ of P_j,

    k(P_i, s̃) = |I + L^{−1}Σ|^{−1/2} exp{−(1/2)(μ − s̃)^T (L + Σ)^{−1} (μ − s̃)}.    (5)
Corollary 1 is now used to compute k(p(s), s̃), which defines the feature embedding for HF-STHM. Note that Corollary 1 is equivalent to centering (hinging) the kernels at M fixed points s̃ in space, which allows capturing different spatial dependencies over the map dimensions. The pooled length scales L + Σ of these "hinged" kernels change over time. Typically, these s̃ can be obtained by a pre-defined regular grid. Finally, the feature mapping for each spatial location is obtained by concatenating multiple kernels hinged at supports:

    Φ_hinged(x) = [k(p(s), s̃_1), . . . , k(p(s), s̃_M)]^T.    (6)
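Corollary 1 makes this feature map cheap to evaluate in closed form; the sketch below assumes an isotropic length-scale matrix L = l^2 I in 2D, an illustrative simplification:

```python
import numpy as np

def hinged_features(mu, Sigma, supports, length_scale=1.0):
    """Closed-form features k(P, s_m) for P = N(mu, Sigma), per Corollary 1.

    mu: (2,) predicted location mean; Sigma: (2, 2) its covariance;
    supports: (M, 2) fixed hinge points. Returns an (M,) feature vector.
    """
    L = length_scale ** 2 * np.eye(2)
    A = L + Sigma
    norm = np.linalg.det(np.eye(2) + np.linalg.solve(L, Sigma)) ** -0.5
    diff = supports - mu                                   # (M, 2)
    quad = np.einsum("md,dk,mk->m", diff, np.linalg.inv(A), diff)
    return norm * np.exp(-0.5 * quad)

# Point observations are the special case Sigma = 0 (a plain RBF feature).
phi = hinged_features(np.array([1.0, 2.0]), 0.25 * np.eye(2),
                      np.random.uniform(-15, 15, size=(50, 2)))
```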
The method to predict occupancy maps at each iteration is summarized in Algorithm 1. As in SHM,
the length-scale of the hinged-feature kernels and the regularization parameter have to be picked
heuristically or using grid-search.
Data: Set of consecutive laser scans
Result: Continuous occupancy map at time t* at any arbitrary resolution
while true do
Extract motion observations V (Section 3.1);
Build the motion vector field from V using Gaussian process regression (Section 3.2);
Generate motion predictions p(v) for t* (Section 3.2);
Compute the feature mapping (Equation 6);
Update w of the logistic regression model similar to Section 2;
Generate a new spatial map by querying at a desirable resolution similar to Section 2;
end
Algorithm 1: Querying maps for t* using the HF-STHM algorithm.
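Pulling the pieces together, querying the continuous map at a chosen time reduces to feature computation plus the classifier's predicted probabilities. The sketch below deliberately reuses the illustrative hinged_features function and the SGDClassifier clf from the earlier snippets in this section, so it inherits all of their assumptions:

```python
import numpy as np

def query_map(motions, supports, clf):
    """Occupancy probabilities for query points whose positions at time t*
    are Gaussian N(mu, Sigma), e.g. as predicted by the GP motion model.

    motions: iterable of (mu, Sigma) pairs from Section 3.2.
    """
    Phi = np.stack([hinged_features(mu, Sigma, supports)   # Equation 6
                    for mu, Sigma in motions])
    return clf.predict_proba(Phi)[:, 1]                    # p(y = +1 | x)
```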
Being a parametric model, this method can be used to predict past (t* < 0), present (t* = 0) and future (t* > 0) occupancy maps using a fixed number of parameters (M + 1). In practice, it may not be required to generate future or past maps at every time step; however, it is required to incorporate new laser data and update w using SGD at each iteration. Therefore, GP predictions and probabilistic feature embedding can be skipped by setting Σ = 0 whenever it is not required to predict future or past maps, as the uncertainty of knowing the current location for any laser reflection is zero.
4 Experiments and Discussion
In this section we demonstrate how HF-STHM can be effectively used for mapping in dynamic
environments. Our main dataset1, named dataset 1, consists of laser scans, each with 180 beams covering a 180° angle and a 30 m radius, collected from a busy intersection [6]. Figure 3 [6] shows an aerial view of the area and the location of the sensor. In Section 4.4, we used an additional dataset1
(dataset 2) of a larger intersection, as this section verifies an important part of our algorithm.
4.1 Motion model
Figure 4 shows a real instance where a vehicle brakes and how the GP model is capable of predicting its future locations with associated uncertainty. Although the GP has two outputs, v_x and v_y, only predictions along the direction of motion, v_x, are shown for clarity. There can be several such GP models
at a given time as a new GP model is initialized for each new moving object (centroid association)
entering the environment and is removed as it disappears. The GP model not only extrapolates the
motion into the future, but also provides an estimate of the predictive uncertainty which is crucial for
the probabilistic feature embedding techniques discussed in Section 3.3. This location uncertainty
around past observations is negligible, while it grows the more time steps ahead into the future we attempt to predict. However, the variance may also slightly change with the
number of data points in the GP and the variability of the motion. As opposed to the two-frame
based velocity calculation technique employed in DGPOM, our method uses motion data of dynamic
objects collected over several frames which makes the predictions more accurate as it does not make
assumptions about the motion of objects such as constant velocity.
4.2 Supports for hinged features
Although in Section 3.3 we suggested hinging the kernels using a regular grid, in this experiment we compare it with kernels hinged at random locations. As shown in Table 1, the area under the ROC curve (AUC) averaged over randomly selected maps at t* = 0 is higher for the regular grid, because random supports cannot cover the entire domain, especially if the number of supports is small. Similarly, a random-support-based map may not be qualitatively appealing. In general, a regular grid requires fewer features to ensure a qualitatively and quantitatively better map.
1
https://goo.gl/f9cTDr
Table 1: Average AUC - supports for hinged features

No. of supports   Regular grid   Random grid
250               0.95           0.83
500               0.98           0.88
1000              0.99           0.94
5000              0.99           0.98

Figure 3: Aerial view of the dataset 1 environment (showing the motion map quiver plot, laser returns from street walls, and the sensor's location).

Figure 4: GP model.

4.3 Point estimate vs. distribution embedding
It is important to understand if distribution embedding discussed in Section 3.3 indeed improves
accuracy over point embedding. In order to see this, the accuracy between dynamic clusters of
future maps and corresponding ground truth laser values should be compared. Since automatically
identifying dynamic clusters is not possible, we semi-automatically extracted them. To this end,
dynamic clusters of each predict-ahead map were manually delimited using Python graphical user interface tools, and the negative log-loss (NLL) between those dynamic clusters and corresponding ground-truth laser values was evaluated. Because the maps are probabilistic, NLL is more representative
than AUC.
Keeping all other variables unaltered, the average decrements of NLL from point estimates to
distribution embedding of randomly selected instances for query time steps t* = 1 to 5 were 0.11, 0.22, 0.34, 0.83, 0.50, 1.36 (note the log scale), where t* > 0 represents the future. Therefore, embedding both mean and variance, rather than merely the mean, is crucial for a higher accuracy. Intuitively,
though we can never predict the exact future location of a moving vehicle, it is possible to predict the
probability of its presence at different locations in the space.
4.4 Spatial maps vs. spatio-temporal maps
In order to showcase the importance of spatio-temporal models (HF-STHM) over spatial models
(SHM), NLL values for a subset of the dataset were calculated similarly to Section 4.3, comparing the dynamic occupancy grid map (DGM), SHM and HF-STHM. SHM and HF-STHM used 1000 bases. DGM is an extension to [1] which calculates occupancy probability based on a few past time steps. In this experiment we considered 10 past time steps and 1 m grid-cell resolution for DGM.

The experiments were performed for datasets 1 and 2 and results are given in Table 2. The smaller the NLL, the better the accuracy. HF-STHM outperforms SHM, and this effect becomes more prominent for higher t*. DGM struggles in dynamic environments because of the fixed grid size, the assumptions about cell independence, and because it was not explicitly designed for predicting into the future. The NLL of DGM increases with t* as it keeps memory in a decaying fashion over 10 consecutive past steps. Since SHM does not update positions of objects (as it is a spatial model), its NLL also increases with t*. In HF-STHM, NLL increases with t* because predictive variance increases with t* in addition to mean error. Figure 5 presents a qualitative comparison.
Table 2: NLL - predictions using dynamic occupancy grid map (DGM), static Hilbert map (SHM) and the proposed method (HF-STHM) for future time steps.

                 Dataset 1                 Dataset 2
Time       DGM     SHM     STHM      DGM      SHM     STHM
t* = 0     11.20   0.11    0.12      6.00     0.18    0.09
t* = 1     17.69   0.15    0.15      10.16    0.29    0.12
t* = 2     19.88   0.28    0.18      12.71    0.82    0.34
t* = 3     25.24   0.61    0.19      16.54    1.85    0.57
t* = 4     26.84   1.18    0.48      20.76    2.96    0.16
t* = 5     27.44   1.46    0.89      25.25    4.00    1.10
t* = 6     34.54   2.00    1.68      26.78    4.90    1.30

Table 3: AUC of prediction
Figure 5: SHM and HF-STHM for t*-ahead predictions. The robot is at (0,0) facing up. The white points are ground-truth laser reflections. Observe that, in HF-STHM, moving objects are predicted ahead and the uncertainty of dynamic areas grows as t* increases. Differences are encircled for t* = 7.
4.5 Predicting into the future and retrieving old maps
In order to assess the ability of our method to predict the future locations of dynamic objects, we compare the map obtained when predicting a certain number of time steps ahead (t*) with the measurements made at that time. Then the average AUC is computed as a function of how far ahead the model makes predictions. The experiment was carried out similarly to [6]. We compare our model with DGPOM (AUC values obtained from [6]) as this is the only other method capable of
this type of prediction. According to Table 3, both methods perform comparably when t* < 2. However, if we predict further ahead our method maintains high quality while DGPOM starts to suffer somewhat. One explanation for this is the way motion predictions are integrated in our
method. As discussed in Section 4.3, we embed distributions rather than point observations into the
model and hence it allows us to better deal with the uncertainty of the motion of the dynamic objects.
On the other hand, our motion model can capture non-linear patterns.
In addition to predicting into the future, our method is also capable of extrapolating a few steps into the past merely by changing the query time t* to negative instead of positive. This allows us to retrieve past
maps without having to store the complete dataset. In contrast to DGPOM, the parametric nature and
amenability to optimization using SGD makes our method much more efficient in both performing
inference and updating with new observations.
4.6 Runtime
Adding a new observation, i.e. a new laser scan, into the HF-STHM map takes around 0.5 s, with the
extraction of the dynamic objects taking up the majority of the time. To query a single map with 0.1 m
resolution takes around 0.5 s as well. These numbers are for a simple Python based implementation.
5 Conclusions and future work
This paper presented hinged features to model the occupancy state of dynamic environments by generalizing static Hilbert maps. The method requires only a small number of data points (180) per frame to model the occupancy of a dynamic environment (30 meter radius)
at any resolution. To this end, uncertainty of motion predictions were embedded into the map in a
probabilistic manner by considering spatio-temporal relationships. Because of the hierarchical nature,
the proposed feature embedding technique is amenable to more sophisticated motion prediction
models and sensor fusion techniques. The power of this method can be used for planning and safe
navigation where knowing the future state of the world is always advantageous. Furthermore, it can
be used as a general tool for learning behaviors of moving objects and how they interact with the
space around them.
References
[1] A. Elfes, "Sonar-based real-world mapping and navigation," IEEE Journal of Robotics and Automation, vol. RA-3(3), pp. 249-265, 1987.
[2] S. T. O'Callaghan and F. T. Ramos, "Gaussian process occupancy maps," The International Journal of Robotics Research (IJRR), vol. 31, no. 1, pp. 42-62, 2012.
[3] F. Ramos and L. Ott, "Hilbert maps: scalable continuous occupancy mapping with stochastic gradient descent," in Proceedings of Robotics: Science and Systems (RSS), 2015.
[4] K. Doherty, J. Wang, and B. Englot, "Probabilistic map fusion for fast, incremental occupancy mapping with 3D Hilbert maps," in IEEE International Conference on Robotics and Automation (ICRA), 2016.
[5] T. Krajník, P. Fentanes, G. Cielniak, C. Dondrup, and T. Duckett, "Spectral analysis for long-term robotic mapping," in IEEE International Conference on Robotics and Automation (ICRA), 2014.
[6] S. O'Callaghan and F. Ramos, "Gaussian process occupancy maps for dynamic environments," in Proceedings of the International Symposium of Experimental Robotics (ISER), 2014.
[7] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. Springer Series in Statistics, New York, NY, USA: Springer New York Inc., 2001.
[8] A. Rahimi and B. Recht, "Random features for large-scale kernel machines," in Neural Information Processing Systems (NIPS), 2008.
[9] A. Rahimi and B. Recht, "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning," in Neural Information Processing Systems (NIPS), 2009.
[10] L. Bottou and O. Bousquet, "The tradeoffs of large scale learning," in Neural Information Processing Systems (NIPS), 2008.
[11] D. Fleet and Y. Weiss, "Optical flow estimation," in Handbook of Mathematical Models in Computer Vision (MMCV), pp. 237-257, Springer, 2006.
[12] B. D. Lucas, T. Kanade, et al., "An iterative image registration technique with an application to stereo vision," in International Joint Conference on Artificial Intelligence (IJCAI), vol. 81, pp. 674-679, 1981.
[13] H. Kuhn, "The Hungarian method for the assignment problem," Naval Research Logistics Quarterly, 1955.
[14] H. Wackernagel, Multivariate Geostatistics: An Introduction with Applications. Springer-Verlag Berlin Heidelberg, 2003.
[15] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. The MIT Press, 2006.
[16] A. Smola, A. Gretton, L. Song, and B. Schölkopf, "A Hilbert space embedding for distributions," in International Conference on Algorithmic Learning Theory (ALT), pp. 13-31, Springer-Verlag, 2007.
[17] A. Girard, C. E. Rasmussen, J. Quinonero-Candela, and R. Murray-Smith, "Gaussian process priors with uncertain inputs: Application to multiple-step ahead time series forecasting," in Neural Information Processing Systems (NIPS), 2002.
6,127 | 6,542 | Towards Conceptual Compression
Karol Gregor
Google DeepMind
[email protected]
Frederic Besse
Google DeepMind
[email protected]
Ivo Danihelka
Google DeepMind
[email protected]
Danilo Jimenez Rezende
Google DeepMind
[email protected]
Daan Wierstra
Google DeepMind
[email protected]
Abstract
We introduce convolutional DRAW, a homogeneous deep generative model achieving state-of-the-art performance in latent variable image modeling. The algorithm
naturally stratifies information into higher and lower level details, creating abstract
features and as such addressing one of the fundamentally desired properties of
representation learning. Furthermore, the hierarchical ordering of its latents creates
the opportunity to selectively store global information about an image, yielding a
high quality "conceptual compression" framework.
1 Introduction
Deep generative models with latent variables can capture image information in a probabilistic manner
to answer questions about structure and uncertainty. Such models can also be used for representation
learning, and the associated procedures for inferring latent variables are vital to important application
areas such as (semi-supervised) classification and compression.
In this paper we introduce convolutional DRAW, a new model in this class that is able to transform
an image into a progression of increasingly detailed representations, ranging from global conceptual
aspects to low level details (see Figure 1). It significantly improves upon earlier variational latent
variable models (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2014). Furthermore, it
is simple and fully convolutional, and does not require complex design choices, just like the recently
introduced DRAW architecture (Gregor et al., 2015). It provides an important insight into building
good variational auto-encoder models of images: positioning multiple layers of stochastic variables
"close" to the pixels (in terms of nonlinear steps in the computational graph) can significantly improve generative performance. Lastly, the system's ability to stratify information has the side benefit of
allowing it to perform high quality lossy compression, by selectively storing a higher level subset of
inferred latent variables, while (re)generating the remainder during decompression (see Figure 3).
In the following we will first discuss variational auto-encoders and compression. The subsequent
sections then describe the algorithm and present results both on generation quality and compression.
1.1 Variational Auto-Encoders
Numerous deep generative models have been developed recently, ranging from restricted and deep
Boltzmann machines (Hinton & Salakhutdinov, 2006; Salakhutdinov & Hinton, 2009), generative
adversarial networks (Goodfellow et al., 2014), autoregressive models (Larochelle & Murray, 2011;
Gregor & LeCun, 2011; van den Oord et al., 2016) to variational auto-encoders (Kingma & Welling,
2014; Rezende et al., 2014; Gregor et al., 2014).

Figure 1: Conceptual Compression. The top rows show full reconstructions from the model for Omniglot and ImageNet, respectively. The subsequent rows were obtained by storing the first t iteratively obtained groups of latent variables and then generating the remaining latents and visibles using the model (only a subset of all possible t values are shown, in increasing order). Left: Omniglot reconstructions. Each group of four columns shows different samples at a given compression level. We see that the variations in the latter samples concentrate on small details, such as the precise placement of strokes. Reducing the number of stored bits tends to preserve the overall shape, but increases the symbol variation. Eventually a varied set of symbols is generated. Nevertheless even in the first row there is a clear difference between variations produced from a given symbol and those between different symbols. Right: ImageNet reconstructions. Here the latent variables were generated with zero variance (i.e. the mean of the latent prior is used). Again the global structure is captured first and the details are filled in later on.

In this paper we focus on the class of models in the variational auto-encoding framework. Since we are also interested in compression, we present them
from an information-theoretic perspective.
Variational auto-encoders consist of two neural networks: one that generates samples from latent
variables (?imagination?), and one that infers latent variables from observations (?recognition?). The
two networks share the latent variables. Intuitively speaking one might think of these variables as
specifying, for a given image, at different levels of abstraction, whether a particular object such as
a cat or a dog is present in the input, or perhaps what the exact position and intensity of an edge
at a given location might be. During the recognition phase the network acquires information about
the input and stores it in the latent variables, reducing their uncertainty. For example, at first not
knowing whether a cat or a dog is present in the image, the network observes the input and becomes
nearly certain that it is a cat. The reduction in uncertainty is quantitatively equal to the amount of
information that the network acquired about the input. During generation the network starts with
uncertain latent variables and samples their values from a prior distribution. Different choices will
produce different visibles.
Variational auto-encoders provide a natural framework for unsupervised learning ? we can build
hierarchical networks with multiple layers of stochastic variables and expect that, after learning, the
representations become more and more abstract for higher levels of the hierarchy. The pertinent
questions then are: can such a framework indeed discover such representations both in principle and
in practice, and what techniques are required for its satisfactory performance.
1.2 Conceptual Compression
Variational auto-encoders can not only be used for representation learning but also for compression.
The training objective of variational auto-encoders is to compress the total amount of information
needed to encode the input. They achieve this by using information-carrying latent variables that
express what, before compression, was encoded using a larger amount of information in the input.
The information in the layers and the remaining information in the input can be encoded in practice
as explained later in this paper.
The achievable amount of lossless compression is bounded by the underlying entropy of the image
distribution. Most image information as measured in bits is contained in the fine details of the image.
(Figure 2 graphic: left panel, schematic of one time slice with input X, reconstruction R, encoder states E1/E2, latents Z1/Z2 linking the approximate posterior and the prior, and decoder states D1/D2 for inference and generation; right panel, plot of information (bits) against iteration number, with curves for 0.01 * layer 1 and layer 2.)
Figure 2: Two-layer convolutional DRAW. A schematic depiction of one time slice is shown
on the left. X and R denote input and reconstruction, respectively. On the right, the amount of
information at different layers and time steps is shown. A two-layer convolutional DRAW was trained
on ImageNet, with a convolutional first layer and a fully connected second layer. The amount of
information at a given layer and iteration is measured by the KL-divergence between the prior and
the posterior (5). When presented with an image, first the top layer acquires information and then the
second slowly increases, suggesting that the network first acquires "conceptual" information about
the image and only then encodes the remaining details. Note that this is an illustration of a two-layer
system, whereas most experiments in this paper, unless otherwise stated, were performed with a
one-layer version.
Thus we might reasonably expect that future improvements in lossless compression technology will
be bounded in scope.
Lossy compression, on the other hand, holds much more potential for improvement. In this case the
objective is to best compress an image in terms of quality of similarity to the original image, whilst
allowing for some information loss. As an example, at a low level of compression (close to lossless
compression), we could start by reducing pixel precision, e.g. from 8 bits to 7 bits. Then, as in JPEG,
we could express a local 8x8 neighborhood in a discrete cosine transform basis and store only the
most significant components. This way, instead of introducing quantization artefacts in the image
that would appear if we kept decreasing pixel precision, we preserve higher level structures but to a
lower level of precision. Nevertheless, if we want to improve upon this and push the limits of what is
possible in compression, we need to be able to identify what the most salient ?aspects? of an image
are.
If we wanted to compress images of cats and dogs down to one bit, what would that bit ideally
represent? It is natural to argue that it should represent whether the image contains either a cat or
a dog. How would we then produce an image from this single bit? If we have a good generative
model, we can simply generate the entire image from this single latent variable by ancestral sampling,
yielding an image of a cat if the bit corresponds to ?cat?, and an image of a dog otherwise. Now let us
imagine that instead of compressing down to one bit we wanted to compress down to ten bits. We can
then store some other important properties of the animal as well ? e.g. its type, color, and basic pose.
Conditioned on this information, everything else can be probabilistically ?filled in? by the generative
model during decompression. Increasing the number of stored bits further we can preserve more
and more about the image, still filling in the fine pixel-level details such as precise hair structure, or
the exact pattern of the floor, etc. Most bits indeed concern such low level details. We refer to this type of compression, i.e. compressing by preferentially storing the higher levels of representation while generating/filling in the remainder, as "conceptual compression".
Importantly, if we solve deep representation learning with latent variable generative models that generate high quality samples, we simultaneously achieve the objective of lossy compression mentioned
above. We can see this as follows. Assume that the network has learned a hierarchy of progressively
more abstract representations. Then, to get different levels of compression, we can store only the
corresponding number of topmost layers and generate the rest. By solving unsupervised deep learning,
the network would order information according to its importance and store it with that priority.
2 Convolutional DRAW
Below we present the equations for a one layer system (for a two layer system the reader is referred
to the supplementary material):
For t = 1, . . . , T:

    ε_t = x − μ(r_{t−1})                                   (1)
    h^e_t = RNN^e(x, ε_t, h^e_{t−1}, h^d_{t−1})            (2)
    q_t = q(z_t | h^e_t)                                   (3)
    z_t ~ q_t
    p_t = p(z_t | h^d_{t−1})                               (4)
    L^z_t = KL(q_t || p_t)                                 (5)
    h^d_t = RNN^d(z_t, h^d_{t−1}, r_{t−1})                 (6)
    r_t = r_{t−1} + W h^d_t                                (7)

At the end, at time T:

    μ, α = split(r_T)                                      (8)
    p^x = N(μ, exp(α))                                     (9)
    q^x = U(x − s/2, x + s/2)                              (10)
    L^x = log(q^x / p^x)                                   (11)
    L = βL^x + Σ_{t=1}^{T} L^z_t                           (12)
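To ground equations (1)-(12) before the variable-by-variable walk-through below, here is a compact runnable sketch of one training pass in PyTorch. It is a deliberate simplification, not the authors' architecture: the conv-LSTM encoder/decoder RNNs become single tanh convolutions with an explicit recurrent state, the decoder update drops its r_{t-1} input, the discretization term q^x is folded into constants, and every size (channels, T = 8, 32 x 32 inputs) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DrawStep(nn.Module):
    """One convolutional DRAW iteration, equations (1)-(7), simplified."""
    def __init__(self, ch=32, zch=12):
        super().__init__()
        self.enc = nn.Conv2d(3 + 3 + ch, ch, 5, padding=2)        # sees x, eps, h_dec
        self.q = nn.Conv2d(ch, 2 * zch, 5, stride=2, padding=2)   # posterior mu, logvar
        self.p = nn.Conv2d(ch, 2 * zch, 5, stride=2, padding=2)   # prior mu, logvar
        self.dec = nn.ConvTranspose2d(zch, ch, 5, stride=2,
                                      padding=2, output_padding=1)
        self.write = nn.Conv2d(ch, 6, 5, padding=2)               # canvas update W h_dec

    def forward(self, x, r, h_dec):
        eps = x - r[:, :3]                                        # (1): mu(r) = r[:, :3]
        h_enc = torch.tanh(self.enc(torch.cat([x, eps, h_dec], 1)))  # (2)
        q_mu, q_lv = self.q(h_enc).chunk(2, 1)                    # (3)
        z = q_mu + (0.5 * q_lv).exp() * torch.randn_like(q_mu)    # z_t ~ q_t
        p_mu, p_lv = self.p(h_dec).chunk(2, 1)                    # (4)
        kl = 0.5 * ((q_lv - p_lv).exp()                           # (5): KL(q_t || p_t)
                    + (q_mu - p_mu) ** 2 / p_lv.exp() - 1 + p_lv - q_lv).sum()
        h_dec = torch.tanh(self.dec(z))                           # (6), state simplified
        r = r + self.write(h_dec)                                 # (7)
        return r, h_dec, kl

step, beta, T = DrawStep(), 1.0, 8
x = torch.rand(1, 3, 32, 32)
r = torch.zeros(1, 6, 32, 32)            # canvas: mu and log-sigma per channel
h_dec = torch.zeros(1, 32, 32, 32)
loss_z = x.new_zeros(())
for _ in range(T):
    r, h_dec, kl = step(x, r, h_dec)
    loss_z = loss_z + kl                                          # sum of L^z_t
mu, log_sigma = r.chunk(2, 1)                                     # (8)
loss_x = (0.5 * ((x - mu) / log_sigma.exp()) ** 2
          + log_sigma).sum()              # Gaussian NLL, (9)-(11) up to constants
loss = beta * loss_x + loss_z                                     # (12)
```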
Long Short-Term Memory networks (LSTM; Hochreiter & Schmidhuber, 1997) are used as the
recurrent modules (RNN) and convolutions are used for all linear operations. We follow the computations and explain them and the variables as we go along. The input image is x. The canvas variable r_{t−1}, initialized to a bias, carries information about the current reconstruction of the image: a mean μ(r_{t−1}) and a log standard deviation σ(r_{t−1}). We compute the reconstruction error ε_t. This, together with x, is fed to the encoder RNN (E in the diagram), which updates its internal state and produces an output vector h^e_t. This goes into the approximate posterior distribution q_t from which z_t is sampled. The prior distribution p_t and the latent loss L^z_t are calculated. z_t is passed to the decoder, and L^z_t measures the amount of information about x that is transmitted using z_t to the decoder at this time. The decoder (D in the diagram) updates its state and outputs the vector h^d_t which is then used to update the canvas r_t. At the end of the recurrence, the canvas consists of the values of μ and α = log σ of the Gaussian distribution p(x|z_1, . . . , z_T) (or analogous parameters for other distributions). This probability is computed for the input x as p^x. Because we use a real-valued distribution, but the original data has 256 values per color channel for a typical image, we encode this discretization as a uniform distribution U(x − s/2, x + s/2) of width equal to the discretization s (typically 1/255) around x. The input cost is then L^x = log(q^x/p^x); it is always non-negative, and
measures the number of bits (nats) needed to describe x knowing (z_1, . . . , z_T). The final cost is the
sum of the two costs, L = L^x + Σ_{t=1}^{T} L^z_t, and equals the amount of information that the model uses
loss for variational auto-encoders. However, we also include a constant ? and train models with
? 6= 1 to observe the visual effect on generated data and to perform lossy compression as explained
in section 3. Values ? < 1 put less pressure on the network to reconstruct exact pixel details and
increase its capacity to learn a better latent representation.
The general multi-layer architecture is summarized in Figure 2 (left). The algorithm is loosely
inspired by the architecture of the visual cortex (Carlson et al., 2013). We will describe known
cortical properties and in brackets the correspondences in our diagram. The visual cortex consists of
hierarchically organized areas such as V1 , V2 , V4 , IT (in our case: layers 1, 2, . . .). Each area such as
V1 is a composite structure consisting of six sublayers each most likely performing different functions
(in our case: E for encoding, Z for sampling and information measuring, D and R for decoding).
Eyes saccade around three times per second with blank periods in between. Thus the cortex has about
250ms to consider each input. When an input is received, there is a feed-forward computation that
progresses to high levels of hierarchy such as IT in about 100ms (in our case: the input is passed
through the E layers). The architecture is recurrent (our architecture as well) with a large amount of
feedback from higher to lower layers (in our case: each D feeds into the E, Z, D, R layers of the
next step), and can still perform significant computations before the next input is processed (in our
case: the iterations of DRAW).
3 Compression Methodology
In this section we show how instances of the variational auto-encoder paradigm (including convolutional DRAW) can be turned into compression algorithms. Note however that storing subsets of latents as described above results in good compression only if the network separates high level from low level information. It is not obvious whether this should occur to a satisfactory extent, or at all. In the following sections we will show that convolutional DRAW does in fact have this desirable property. It stratifies information into a progression of increasingly abstract features, allowing the resulting compression algorithm to select a degree of compression. What is appealing here is that this occurs naturally in such a simple homogeneous architecture.

Figure 3: Lossy Compression. Example images for various methods and levels of compression. Top row: original images. Each subsequent block has four rows corresponding to four methods of compression: (a) JPEG, (b) JPEG2000, (c) convolutional DRAW with full prior variance for generation and (d) convolutional DRAW with zero prior variance. Each block corresponds to a different compression level; in order, the average numbers of bits per input dimension are: 0.05, 0.1, 0.15, 0.2, 0.4, 0.8 (bits per image: 153, 307, 460, 614, 1228, 2457). In the first block, JPEG was left gray because it does not compress to this level. Images are of size 32 × 32. See appendix for 64 × 64 images.
The underlying compression mechanism is arithmetic coding (Witten et al., 1987). Arithmetic coding
takes as input a sequence of discrete variables x_1, . . . , x_t and a set of probabilities p(x_t | x_1, . . . , x_{t−1}) that predict the variable at time t from the previous ones. It then compresses this sequence to L = −Σ_t log_2 p(x_t | x_1, . . . , x_{t−1}) bits plus a constant of order one.
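As a quick sanity check of that code-length claim (a toy sketch; the predict callable is a stand-in for any sequential model):

```python
import math

def ideal_code_length(symbols, predict):
    """Bits needed by arithmetic coding under model `predict`, up to O(1).

    predict(prefix) returns a dict mapping each possible next symbol to
    its probability given the prefix (a hypothetical toy model interface).
    """
    bits = 0.0
    for i, s in enumerate(symbols):
        bits -= math.log2(predict(symbols[:i])[s])
    return bits

# A uniform model over 256 pixel values costs 8 bits per symbol, as expected.
uniform = lambda prefix: {v: 1 / 256 for v in range(256)}
assert abs(ideal_code_length([0, 17, 255], uniform) - 24.0) < 1e-9
```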
We can use variational auto-encoders for compression as follows. First, train the model with an
approximate posterior q that has a variance independent from the input. After training, discretize the
latent variables z to the size of the variance of q. When compressing an input, assign z to the nearest
discretized point to the mean of q instead of sampling from q. Calculate the discrete probabilities p
over the values of z. Retrain decoder and p to perform well with the discretized values. Now, we
can use arithmetic coding directly, having the probabilities over discrete values of z. This procedure
might require tuning to achieve the best performance. However such process is likely to work since
there is another, less practical way to compress that is guaranteed to achieve the theoretical value.
This second approach uses bits-back coding (Hinton & Van Camp, 1993). We explain only the basic
idea here. First, discretize the latents down to a very high level of precision and use p to transmit
the information. Because the discretization precision is high, the probabilities for discrete values are
easily assigned. That will preserve the information but it will cost many bits, namely −log_2 p^d(z), where p^d is the prior under that discretization. Now, instead of choosing a random sample z from the approximate posterior q^d under the discretization when encoding, use another stream of bits that needs to be transmitted to choose z, in effect encoding these bits into the choice of z. The encoded amount is −log_2 q^d(z) bits. When z is recovered at the receiving end, both the information about the current input and the other information is recovered, and thus the information needed to encode the current input is −log_2 p^d(z) + log_2 q^d(z) = −log_2(p^d(z)/q^d(z)). The expectation of this quantity is the KL-divergence in (5), which therefore measures the amount of information stored in a given latent layer. The disadvantage of this approach is that we need this extra data to encode a given input. However, this coding scheme works even if the variance of the approximate posterior is dependent on the input.

Figure 4: Generated samples on Omniglot.

Figure 5: Generated samples on ImageNet for different input cost scales. On the left, 32 × 32 samples are shown with input cost β in (12) equal to {0.2, 0.4, 0.6, 0.8, 1} for each respective block of two rows. On the right, 64 × 64 samples are shown with input cost scale β in {0.4, 0.5, 0.6, 0.8, 1} for each row respectively. For smaller values of β the network is less compelled to explain finer details of images, and produces "cleaner" larger structures.
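Since the KL term in (5) is exactly the number of nats a latent layer stores, the standard closed form for diagonal Gaussians is worth writing out; a small self-contained sketch:

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) in nats for diagonal Gaussians; divide by ln 2 for bits.

    This is the quantity summed in equation (5) for one group of latents.
    """
    return 0.5 * np.sum(
        np.exp(logvar_q - logvar_p)
        + (mu_q - mu_p) ** 2 / np.exp(logvar_p)
        - 1.0
        + logvar_p
        - logvar_q
    )

# Identical distributions carry zero information.
z = np.zeros(12)
assert kl_diag_gaussians(z, z, z, z) == 0.0
```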
4 Results
All models (except otherwise specified) were single-layer, with the number of DRAW time steps
n_t = 32, a kernel size of 5 × 5, and stride-2 convolutions between input layers and hidden layers with
12 latent feature maps. We trained the models on Cifar-10, Omniglot and ImageNet with 320, 160
and 160 LSTM feature maps, respectively. We use the version of ImageNet presented in (van den
Oord et al., 2016). We train the network with Adam optimization (Kingma & Ba, 2014) with learning
rate 5 × 10^-4. We found that the cost occasionally increased dramatically during training. This is
probably due to the Gaussian nature of the distribution, when a given variable is produced too far
from the mean relative to sigma. We observed this happening approximately once per run. To be able
to keep training we store older parameters, detect such jumps and revert to the old parameters when
they occur. In these instances training always continued unperturbed.
4.1 Modeling Quality
Omniglot The recently introduced Omniglot dataset (Lake et al., 2015) comprises 1628 character classes drawn from multiple alphabets with just 20 samples per class. Referred to by some as the "transpose of MNIST", it was designed to study conceptual representations and generative models in a
low-data regime. Table 1 shows likelihoods of different models compared to ours. For our model, we
only calculate the upper bound (variational bound) and therefore underestimate its quality. Samples
generated by the model are shown in Figure 4.
Cifar-10 Table 1 also shows reported likelihoods of different models on Cifar-10. Convolutional
DRAW outperforms most previous models. The recently introduced Pixel RNN model (van den
Oord et al., 2016) yields better likelihoods, but as it is not a latent variable model, it does not
build representations, cannot be used for lossy compression, and is slow to sample from due to
its autoregressive nature. At the same time, we must emphasize that the two approaches might be
complementary, and could be combined by feeding the output of convolutional DRAW into the
recurrent network of Pixel RNN.
We also show the likelihood for a (non-recurrent) variational auto-encoder that we obtained internally.
We tested architectures with multiple layers, both deterministic and stochastic but with standard
functional forms, and reported the best result that we were able to obtain. Convolutional DRAW
performs significantly better.
ImageNet Additionally, we trained on the version of ImageNet as prepared in (van den Oord et al.,
2016) which was created with the aim of making a standardized dataset to test generative models.
The results are in Table 1. Note that since this is a new dataset, few other methods have yet been
applied to it.
In Figure 5 we show generations from the model. We trained networks with varying input cost scales
as explained in the next section. The generations are sharp and contain many details, unlike previous
versions of variational auto-encoder that tend to generate blurry images.
Table 1: Test set performance of different models. Results on 28 × 28 Omniglot are shown in nats, results on CIFAR-10 and ImageNet are shown in bits/dim. Training losses are shown in brackets.

Omniglot                        NLL
VAE (2 layers, 5 samples)       106.31
IWAE (2 layers, 50 samples)     103.38
RBM (500 hidden)                100.46
DRAW                            < 96.5
Conv DRAW                       < 92.0

ImageNet                        NLL
Pixel RNN (32 × 32)             3.86 (3.83)
Pixel RNN (64 × 64)             3.63 (3.57)
Conv DRAW (32 × 32)             4.40 (4.35)
Conv DRAW (64 × 64)             4.10 (4.04)

CIFAR-10                        NLL
Uniform Distribution            8.00
Multivariate Gaussian           4.70
NICE [1]                        4.48
Deep Diffusion [2]              4.20
Deep GMMs [3]                   4.00
Pixel RNN [4]                   3.00 (2.93)
Deep VAE                        < 4.54
DRAW                            < 4.13
Conv DRAW                       < 3.58 (3.57)

4.2
Reconstruction vs Latent Cost Scaling
Each pixel (and color channel) of the data consists of 256 values, and as such, likelihood and lossless
compression are well defined. When compressing the image there is much to be gained in capturing
precise correlations between nearby pixels. There are a lot more bits in these low level details than in
the higher level structure that we are actually interested in when learning higher level representations.
The network might focus on these details, ignoring higher level structure.
One way to make it focus less on the details is to scale down the cost of the input relative to the
latents, that is, setting the input cost scale in (12) below 1. Generations for different cost scalings are shown in Figure 5, with the original objective corresponding to scale 1. Visually we can verify that lower scales indeed yield a "cleaner" high-level structure. Scale 1 contains a lot of information in the precise pixel values and
the network tries to capture that, while not being good enough to properly align details and produce
real-looking patterns. Improving this might simply be a matter of network capacity and scaling:
increasing layer size and depth, using more iterations, or using better functional forms.
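In code, this rescaling amounts to a one-line change to the objective. Since extraction lost the symbol used in (12), the sketch below simply names the knob input_scale; setting it to 1 recovers the original objective:

def scaled_free_energy(input_nll, kl_per_step, input_scale=0.5):
    # Total cost with the input (reconstruction) term down-weighted relative
    # to the latent cost, so the model focuses less on low-level pixel detail.
    return input_scale * input_nll + sum(kl_per_step)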
4.3
Information Distribution
We look at how much information is contained at different levels and time steps. This information is
simply the KL-divergence in (5) during inference. For a two layer system with one convolutional and
one fully connected layer, this is shown in Figure 2 (right).
We see that the higher level contains information mainly at the beginning of computation, whereas
the lower layer starts with low information which then gradually increases. This is desirable from a
conceptual point of view. It suggests that the network first captures the overall structure of the image,
and only then proceeds to "explain" the details contained within that structure. Understanding the
overall structure rapidly is also convenient if the algorithm needs to respond to observations in a
timely manner. For the single layer system used in all other experiments, the information distribution
is similar to the blue curve of Figure 2 (right). Thus, while the variables in the last set of iterations
contain the most bits, they don't seem to visually affect the quality of reconstructed images to a large
extent, as shown in Figure 1. This demonstrates the separation of information into global aspects that
humans consider important from low level details.
4.4
Lossy Compression Results
We can compress an image lossily by storing only the subset of the latent variables associated with the
earlier iterations of convolutional DRAW, namely those that encode the more high-level information
about the image. The units not stored should be generated from the prior distribution (4). This
amounts to decompression.
We can also generate a more likely image by lowering the variance of the prior Gaussian. We show
generations with full variance in row 3 of each block of Figure 3 and with zero variance in row 4.
We see that using the original variance, the network generates sharp details. Because the generative
model is not perfect, the resulting images are less realistic looking as we lower the number of stored
time steps. For zero variance we see that the network starts with rough details making a smooth
image and then refines it with more time steps. All these generations are produced with a single-layer
convolutional DRAW, and thus, despite being single-layer, it achieves some level of "conceptual compression" by first capturing the global structure of the image and then focusing on details.
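A sketch of this decompression procedure, covering both the full-variance and zero-variance cases, is given below. The prior_step and decoder_step interfaces are hypothetical stand-ins for the learned prior and the recurrent decoder update; only the first len(stored_z) latent slices are assumed to have been stored:

import numpy as np

def conceptual_decode(stored_z, n_steps, prior_step, decoder_step, state,
                      zero_variance=False):
    canvas = None
    for t in range(n_steps):
        if t < len(stored_z):
            z_t = stored_z[t]              # transmitted high-level latents
        else:
            mu, sigma = prior_step(state)  # hypothetical prior interface
            noise = 0.0 if zero_variance else np.random.randn(*mu.shape)
            z_t = mu + sigma * noise       # sample from (or take the mean of) the prior
        state, canvas = decoder_step(state, z_t, canvas)  # hypothetical decoder update
    return canvas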
There is another dimension we can vary for lossy compression: the input scale introduced in
subsection 4.2. Even if we store all the latent variables (but not the input bits), the reconstructed
images will get less detailed as we scale down the input cost.
To build a high-performing compressor, at each compression rate we need to find which of the
networks, input scales and number of time steps would produce visually good images. We have
done the following. For several compression levels, we have looked at images produced by different
methods and selected qualitatively which network gave the best looking images. We have not done
this per image, just per compression level. We then display compressed images that we have not seen
with this selection.
We compare our results to JPEG and JPEG2000 compression which we obtained using ImageMagick.
We found however that these compressors were unable to produce reasonable results for small images
(3 × 32 × 32) at high compression rates. Instead, we concatenated 100 images into one 3 × 320 × 320 image, compressed that and extracted back the compressed small images. The number of bits per
image reported is then the number of bits of this image divided by 100. This is actually unfair to our
algorithm since any correlations between nearby images can be exploited. Nevertheless we show the
comparison in Figure 3. Our algorithm shows better quality than JPEG and JPEG2000 at all levels
where a corruption is easily detectable. Note that even if our algorithm was trained on one specific
image size, it can be used on arbitrarily sized images as it contains only convolutional operators.
5
Conclusion
In this paper we introduced convolutional DRAW, a state-of-the-art latent variable generative model
which demonstrates the potential of sequential computation and recurrent neural networks in scaling
up the performance of deep generative models. During inference, the algorithm arrives at a natural
stratification of information, ranging from global aspects to low-level details. An interesting feature
of the method is that, when we restrict ourselves to storing just the high level latent variables, we
arrive at a "conceptual compression" algorithm that rivals the quality of JPEG2000.
References
Carlson, Thomas, Tovar, David A, Alink, Arjen, and Kriegeskorte, Nikolaus. Representational dynamics of object vision: the first 1000 ms. Journal of Vision, 13(10):1–1, 2013.
Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
Gregor, Karol and LeCun, Yann. Learning representations by maximizing compression. arXiv preprint arXiv:1108.1169, 2011.
Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
Gregor, Karol, Danihelka, Ivo, Graves, Alex, Rezende, Danilo Jimenez, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993.
Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
Lake, Brenden M, Salakhutdinov, Ruslan, and Tenenbaum, Joshua B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15:29–37, 2011.
Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pp. 1278–1286, 2014.
Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009.
van den Oord, Aaron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
Witten, Ian H, Neal, Radford M, and Cleary, John G. Arithmetic coding for data compression. Communications of the ACM, 30(6):520–540, 1987.
6,128 | 6,543 | Interpretable Nonlinear Dynamic Modeling
of Neural Trajectories
Yuan Zhao and Il Memming Park
Department of Neurobiology and Behavior
Department of Applied Mathematics and Statistics
Institute for Advanced Computational Science
Stony Brook University, NY 11794
{yuan.zhao, memming.park}@stonybrook.edu
Abstract
A central challenge in neuroscience is understanding how a neural system implements computation through its dynamics. We propose a nonlinear time series
model aimed at characterizing interpretable dynamics from neural trajectories.
Our model assumes low-dimensional continuous dynamics in a finite volume. It
incorporates a prior assumption about globally contractional dynamics to avoid
overly enthusiastic extrapolation outside of the support of observed trajectories.
We show that our model can recover qualitative features of the phase portrait such
as attractors, slow points, and bifurcations, while also producing reliable long-term future predictions in a variety of dynamical models and in real neural data.
1
Introduction
Continuous dynamical systems theory lends itself as a framework for both qualitative and quantitative understanding of neural models [1, 2, 3, 4]. For example, models of neural computation are
often implemented as attractor dynamics where the convergence to one of the attractors represents
the result of computation. Despite the wide adoption of dynamical systems theory in theoretical
neuroscience, solving the inverse problem, that is, reconstructing meaningful dynamics from neural
time series, has been challenging. Popular neural trajectory inference algorithms often assume linear dynamical systems [5, 6] which lack nonlinear features ubiquitous in neural computation, and
typical approaches of using nonlinear autoregressive models [7, 8] sometimes produce wild extrapolations which are not suitable for scientific study aimed at confidently recovering features of the
dynamics that reflects the nature of the underlying computation.
In this paper, we aim to build an interpretable dynamics model to reverse-engineer the neural implementation of computation. We assume slow continuous dynamics such that the sampled nonlinear
trajectory is locally linear, thus allowing us to propose a flexible nonlinear time series model that directly learns the velocity field. Our particular parameterization yields better interpretations: identifying fixed points and ghost points is easy, and so is the linearization of the dynamics around
those points for stability and manifold analyses. We further parameterize the velocity field using a
finite number of basis functions, in addition to a global contractional component. These features encourage the model to focus on interpolating dynamics within the support of the training trajectories.
2
Model
Consider a general d-dimensional continuous nonlinear dynamical system driven by external input,
ẋ = F(x, u)    (1)
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
where x ∈ R^d represents the dynamic trajectory, and F : R^d × R^{d_i} → R^d fully defines the dynamics in the presence of the input drive u ∈ R^{d_i}. We aim to learn the essential part of the dynamics F from a collection of trajectories sampled at frequency 1/Δ.
Our work builds on extensive literature in nonlinear time series modeling. Assuming a separable, linear input interaction, F(x, u) = F_x(x) + F_u(x)u, a natural nonlinear extension of an autoregressive model is to use a locally linear expansion of (1) [7, 9]:

x_{t+1} = x_t + A(x_t)x_t + b(x_t) + B(x_t)u_t + ε_t    (2)

where b(x) = F_x(x)Δ, A(x) : R^d → R^{d×d} is the Jacobian matrix of F_x at x scaled by the time step Δ, B(x) : R^d → R^{d×d_i} is the linearization of F_u around x, and ε_t denotes model mismatch noise of order O(Δ²). For example, {A, B} are parametrized with a radial basis function (RBF) network in the multivariate RBF-ARX model of [10, 7], and {A, b, B} are parametrized with sigmoid neural networks in [9]. Note that A(·) is not guaranteed to be the Jacobian of the dynamical system (1) since A and b also change with x. In fact, the functional form for A(·) is not unique, and a powerful function approximator for b(·) makes A(·) redundant and over-parameterizes the dynamics.
Note that (2) is a subclass of a general nonlinear model:
x_{t+1} = f(x_t) + B(x_t)u_t + ε_t,    (3)

where f, B are the discrete-time solutions of F_x, F_u. This form is widely used, and called the nonlinear
autoregressive with eXogenous inputs (NARX) model where f assumes various function forms (e.g.
neural network, RBF network [11], or Volterra series [8]).
We propose to use a specific parameterization,

x_{t+1} = x_t + g(x_t) + B(x_t)u_t + ε_t    (4)
g(x_t) = W_g φ(x_t) − e^{‖x_t‖²/τ²} x_t
vec(B(x_t)) = W_B φ(x_t)

where φ(·) is a vector of r continuous basis functions,

φ(·) = (φ_1(·), . . . , φ_r(·))^⊤.    (5)
Note the inclusion of a global leak towards the origin whose rate is controlled by τ. The further away from the origin (and as τ → 0), the larger the effect of the global contraction. This encodes our prior knowledge that the neural dynamics are limited to a finite volume of phase space, and prevents solutions with nonsensical runaway trajectories.
The function g(x) directly represents the velocity field of the underlying smooth dynamics (1), unlike f(x) in (3), which can have convoluted jumps. We can even run the dynamics backwards in time, since the time evolution for small Δ is reversible (by taking g(x_t) ≈ g(x_{t+1})), which is not possible for (3), since f(x) is not necessarily an invertible function.
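A minimal NumPy sketch of (4)-(5) follows. The exact form of the contraction exponent was reconstructed from the surrounding prose, so treat it as our assumption rather than a verbatim definition:

import numpy as np

def rbf_features(x, centers, widths, eps=1e-7):
    # Normalized squared-exponential basis phi(x) of Eq. (7);
    # centers: (r, d), widths: (r,), x: (d,).
    d2 = np.sum((centers - x) ** 2, axis=1)
    k = np.exp(-d2 / (2.0 * widths ** 2))
    return k / (eps + k.sum())

def velocity(x, W_g, centers, widths, tau=1.0):
    # g(x) = W_g phi(x) - exp(||x||^2 / tau^2) x: learned flow plus global leak.
    return W_g @ rbf_features(x, centers, widths) - np.exp(x @ x / tau ** 2) * x

def step(x, u, W_g, W_B, centers, widths, tau=1.0):
    # One-step prediction x_{t+1} = x_t + g(x_t) + B(x_t) u_t of Eq. (4).
    phi = rbf_features(x, centers, widths)
    B = (W_B @ phi).reshape(x.size, u.size)  # vec(B(x)) = W_B phi(x)
    return x + velocity(x, W_g, centers, widths, tau) + B @ u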
Fixed points x* satisfy g(x*) + B(x*)u = 0 for a constant input u. Far away from the fixed points, dynamics are locally just a flow (rectification theorem) and largely uninteresting. The Jacobian in the absence of input, J = ∂g(x)/∂x, provides a linearization of the dynamics around the fixed points (via the Hartman–Grobman theorem), and the corresponding fixed point is stable if all eigenvalues of J have negative real parts.
We can further identify fixed points, and ghost points (resulting from disappearance of fixed points
due to bifurcation) from local minima of ‖g‖ with small magnitude. The flow around the ghost
points can be extremely slow [4], and can exhibit signatures of computation through meta-stable
dynamics [12]. Continuous attractors (such as limit cycles) are also important features of neural dynamics which exhibit spontaneous oscillatory modes. We can easily identify attractors by simulating
the model.
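In practice these features can be read off a trained velocity field numerically; a sketch using SciPy (function names are ours):

import numpy as np
from scipy.optimize import minimize

def find_slow_point(g, x0):
    # Local minimum of ||g(x)||^2; a near-zero minimum is a fixed or ghost point.
    res = minimize(lambda x: np.sum(g(x) ** 2), x0, method="Nelder-Mead")
    return res.x, np.sqrt(res.fun)

def jacobian(g, x, h=1e-5):
    # Central finite-difference Jacobian of the velocity field at x.
    d = x.size
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (g(x + e) - g(x - e)) / (2.0 * h)
    return J

# A candidate x_star is a stable fixed point when all eigenvalues of the
# Jacobian there have negative real parts:
# stable = np.all(np.linalg.eigvals(jacobian(g, x_star)).real < 0)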
3
Estimation
We define the mean squared error as the loss function
L(W_g, W_B, c_{1...r}, σ_{1...r}) = (1/T) Σ_{t=0}^{T−1} ‖g(x_t) + B(x_t)u_t + x_t − x_{t+1}‖_2²,    (6)

where we use normalized squared exponential radial basis functions

φ_i(z) = exp(−‖z − c_i‖_2² / (2σ_i²)) / (10^−7 + Σ_{j=1}^{r} exp(−‖z − c_j‖_2² / (2σ_j²))),    (7)

with centers c_i and corresponding kernel widths σ_i. The small constant 10^−7 in the denominator avoids numerical division by zero.
We estimate the parameters {W_g, W_B, τ, c, σ} by minimizing the loss function through gradient descent (Adam [13]) implemented within TensorFlow [14]. We initialize the matrices W_g and W_B from a truncated standard normal distribution, the centers {c_i} by the centroids of K-means clustering on the training set, and the kernel widths σ by the average Euclidean distance between the centers.
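The loss (6) and this initialization can be sketched as follows, reusing rbf_features and step from the earlier snippet; the K-means call uses scikit-learn and the width heuristic follows the text:

import numpy as np
from sklearn.cluster import KMeans

def one_step_loss(X, U, W_g, W_B, centers, widths, tau):
    # Mean squared one-step prediction error of Eq. (6);
    # X: (T+1, d) trajectory, U: (T, d_i) inputs.
    pred = np.stack([step(X[t], U[t], W_g, W_B, centers, widths, tau)
                     for t in range(len(U))])
    return np.mean(np.sum((pred - X[1:]) ** 2, axis=1))

def init_centers(X, r):
    # Centers from K-means on the training states; one shared initial width
    # equal to the average pairwise distance between the centers.
    centers = KMeans(n_clusters=r, n_init=10).fit(X).cluster_centers_
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    widths = np.full(r, d[d > 0].mean())
    return centers, widths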
4
Inferring Theoretical Models of Neural Computation
We apply the proposed method to a variety of low-dimensional neural models in theoretical neuroscience. Each theoretical model is chosen to represent a different mode of computation.
4.1
Fixed point attractor and bifurcation for binary decision-making
Perceptual decision-making and working memory tasks are widely used behavioral tasks where the
tasks typically involve a low-dimensional decision variable, and subjects are close to optimal in their
performance. To understand how the brain implements such neural computation, many competing
theories have been proposed [15, 16, 17, 18, 19, 20, 21]. We implemented the two dimensional
dynamical system from [20] where the final decision is represented by two stable fixed points corresponding to each choice. The stimulus strength (coherence) nonlinearly interacts with the dynamics
(see appendix for details), and biases the choice by increasing the basin of attraction (Fig. 1). We
encode the stimulus strength as a single variable held constant throughout each trajectory as in [20].
The model with 10 basis functions learned the dynamics from 90 training trajectories (30 per coherence c = 0, 0.5, −0.5). We visualize the log-speed as colored contours, and the direction component
of the velocity field as arrows in Fig. 1. The fixed/ghost points are shown as red dots, which ideally
should be at the crossing of the model nullclines given by solid lines. For each coherence, two novel
starting points were simulated from the true model and the estimated model in Fig. 1. Although the
model was trained with only low or moderate coherence levels where there are 2 stable and 1 unstable fixed points, it predicts bifurcation at higher coherence and it identifies the ghost point (lower
right panel).
We compare the model (4) to the following "locally linear" (LL) model,

x_{t+1} = A(x_t)x_t + B(x_t)u_t + x_t
vec(A(x_t)) = W_A φ(x_t)
vec(B(x_t)) = W_B φ(x_t)    (8)
in terms of training and prediction errors in Table 1. Note that there is no contractional term. We
train both models on the same trajectories described above. Then we simulate 30 trajectories from
the true system and trained models for coherence c = 1 with the same random initial states within the
unit square and calculate the mean squared error between the true trajectories and model-simulated
ones as prediction error. The other parameters are set to the same value as training. The LL model
Table 1: Model errors

Model    Training error    Prediction error: mean (std)
(4)      4.06E-08          0.002 (0.008)
(8)      2.04E-08          0.244 (0.816)
has poor prediction on the test set. This is due to unbounded flow out of the phase space where the
training data lies (see Fig. 6 in the supplement).
Figure 1: Wong and Wang's 2D dynamics model for perceptual decision-making [20]. We train the model with 90 trajectories (uniformly random initial points within the unit square, 0.5 s duration, 1 ms time step) with different input coherence levels c = {0, 0.5, −0.5} (30 trajectories per coherence). The yellow and green lines are the true nullclines. The black arrows represent the true velocity fields (direction only) and the red arrows are model-predicted ones. The black and gray circles are the true stable and unstable fixed points, while the red ones are local minima of the model prediction (including fixed points and slow points). The background contours are the model-predicted log-speed log‖dx/dt‖_2. We simulated two 1 s trajectories each for the true and learned model dynamics. The trajectories start from the cyan circles. The blue lines are from the true model and the cyan ones are simulated from the trained model. Note that we do not train our model on trajectories from the bottom-right condition (c = 1).
Figure 2: FitzHugh-Nagumo model. (a) Direction (black arrow) and log-speed (contour) of true velocity field. Two blue trajectories starting at the blue circles are simulated from the true system. The
yellow and green lines are nullclines of v and w. The diamond is a spiral point. (b) 2-dimensional
embedding of v model-predicted velocity field (red arrow and background contour). The black arrows are true velocity field. There are a few model-predicted slow points in light red. The blue
lines are the same trajectories as the ones in (a). The cyan ones are simulated from trained model
withe the same initial states of the blue ones. (c) 100-step prediction every 100 steps using a test
trajectory generated with the same setting as training. (d) 200-step prediction every 200 steps using
a test trajectory driven by sinusoid input with 0.5 standard deviation white Gaussian noise.
4.2
Nonlinear oscillator model
One of the most successful application of dynamical systems in neuroscience is in the biophysical
model of a single neuron. We study the FitzHugh-Nagumo (FHN) model which is a 2-dimensional
reduction of the Hodgkin-Huxley model [3]:
v̇ = v − v³/3 − w + I,    (9)
ẇ = 0.08(v + 0.7 − 0.8w),    (10)
where v is the membrane potential, w is a recovery variable and I is the magnitude of stimulus
current. The FHN has been used to model the up-down states observed in the neural time series of
anesthetized auditory cortex [22].
We train the model with 50 basis functions on 100 simulated trajectories with uniformly random
initial states within the unit square [0, 1] ? [0, 1] and driven by injected current generated from a 0.3
mean and 0.2 standard deviation white Gaussian noise. The duration is 200 and the time step is 0.1.
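For reference, such training trajectories can be generated with a forward Euler scheme; a sketch with the step size and noise statistics stated above (the helper name is ours):

import numpy as np

def simulate_fhn(duration=200.0, dt=0.1, v0=0.5, w0=0.5, seed=None):
    # Euler-integrate the FitzHugh-Nagumo equations (9)-(10) with a noisy
    # injected current I ~ Normal(0.3, 0.2^2) redrawn at every step.
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    v, w = np.empty(n), np.empty(n)
    v[0], w[0] = v0, w0
    for t in range(n - 1):
        I = 0.3 + 0.2 * rng.standard_normal()
        v[t + 1] = v[t] + dt * (v[t] - v[t] ** 3 / 3.0 - w[t] + I)
        w[t + 1] = w[t] + dt * 0.08 * (v[t] + 0.7 - 0.8 * w[t])
    return v, w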
Figure 3: (a) Velocity field (true: black arrows, model-predicted: red arrows) for both direction and
log-speed; model-predicted fixed points (red circles, solid: stable, transparent: unstable). (b) One
trajectory from the true model (x, y), and one trajectory from the fitted model (x̂, ŷ). The trajectory remains on the circle for both. Both are driven by the same input, and start at the same initial state.
In electrophysiological experiments, we only have access to v(t), and do not observe the slow recovery variable w. Delay embedding allows reconstruction of the phase space under mild conditions [23]. We build a 2D model by embedding v(t) as (v(t), v(t − 10)), and fit the dynamical
model (Fig. 2b). The phase space is distorted, but the overall prediction of the model is good given
a fixed current (Fig. 2b). Furthermore, the temporal simulation of v(t) for white noise injection
shows reliable long-term prediction (Fig. 2c). We also test the model in a regime far from the training trajectories, and the dynamics does not diverge away from a reasonable region of the phase space
(Fig. 2d).
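The delay embedding used above is a one-liner:

import numpy as np

def delay_embed(v, lag=10):
    # Pair v(t) with v(t - lag) to reconstruct a 2D phase space from the
    # observed membrane potential alone.
    return np.column_stack([v[lag:], v[:-lag]])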
4.3
Ring attractor dynamics for head direction network
Continuous attractors such as line and ring attractors are often used as models for neural representation of continuous variables [17, 4]. For example, the head direction neurons are tuned for the
angle of the animal's head direction, and a bump attractor network with ring topology is proposed
as the dynamics underlying the persistently active set of neurons [24]. Here we use the following 2
variable reduction of the ring attractor system:
τ_r ṙ = r_0 − r,    (11)
τ_θ θ̇ = I(t),    (12)
where θ represents the head direction driven by the input I(t), and r is the radial component representing the overall activity in the bump. The computational role of this ring attractor is to be insensitive to the noise in the r direction, while integrating the differential input in the θ direction. In the absence of input, the head direction θ does a random walk around the ring attractor. The ring attractor consists of a continuum of stable fixed points with a center manifold.
We train the model with 50 basis functions on 150 trajectories. The duration is 5 and the time step is 0.01. The parameters are set as r_0 = 2, τ_r = 1 and τ_θ = 1. The initial states are uniformly random within (x, y) ∈ [−3, 3] × [−3, 3]. The inputs are constant angles evenly spaced in [−π, π] with Gaussian noise (μ = 0, σ = 5) added (see Fig. 7 in the online supplement).
From the trained model, we can identify a number of fixed points arranged around the ring attractor
(Fig. 3a). The true ring dynamics model has one negative eigenvalue, and one zero-eigenvalue in the
Jacobian. Most of the model-predicted fixed points are stable (two eigenvalues with negative real parts) and the rest are unstable (two eigenvalues with positive real parts).
Figure 4: (a) Vector plot of 1-step-ahead prediction on one Lorenz trajectory (test). (b) 50-step
prediction every 50 steps on one Lorenz trajectory (test). (c) A 200-step window of (b) (100-300).
The dashed lines are the true trajectory, the solid lines are the prediction and the circles are the start
points of prediction.
4.4
Chaotic dynamics
Chaotic dynamics (or near chaos) has been postulated to support asynchronous states in the cortex [1], and neural computation over time by generating rich temporal patterns [2, 25]. We consider
the 3D Lorenz attractor as an example chaotic system. We simulate 20 trajectories from,
ẋ = 10(y − x),
ẏ = x(28 − z) − y,    (13)
ż = xy − (8/3)z.
The initial state of each trajectory is standard normal. The duration is 200 and the time step is 0.04.
The first 300 transient states of each trajectory are discarded. We use 19 trajectories for training and
the last one for testing. We train a model with 10 basis functions. Figure 4a shows the direction
of prediction. The vectors represented by the arrows start from current states and point at the next
future state. The predicted vectors (red) overlap the true vectors (blue) implying the one-step-ahead
predictions are close to the true values in both speed and direction. Panel (b) gives an overview that
the prediction resembles the true trajectory. Panel (c) shows that the prediction is close to the true
value up to 200 steps.
5
Learning V1 neural dynamics
To test the model on data obtained from cortex, we use a set of trajectories obtained from the
variational Gaussian latent process (vLGP) model [26]. The latent trajectory model infers a 5-dimensional trajectory that describes a large-scale V1 population recording (see [26] for details). The
recording was from an anesthetized monkey where 72 different equally spaced directional drifting
gratings were presented for 50 trials each. We used 63 well tuned neurons out of 148 simultaneously
recorded single units. Each trial lasts for 2.56 s and the stimulus was presented only during the first
half.
We train our model with 50 basis functions on the trial-averaged trajectories for 71 directions, and
use 1 direction for testing. The input was 3-dimensional: two boxcars indicating the stimulus direction (sin θ, cos θ), and one corresponding to a low-pass filtered stimulus onset indicator. Figure 5
shows the prediction of the best linear dynamical system (LDS) for the 71 directions, and the nonlinear prediction from our model. The LDS is given as x_{t+1} = Ax_t + Bu_t + x_t with parameters A and B found by least squares. Although the LDS is widely used for smoothing the latent trajectories, it
clearly is not a good predictor for the nonlinear trajectory of V1 (Fig. 5a). In comparison, our model
does a better job at capturing the oscillations much better, however, it fails to capture the fine details
of the oscillation and the stimulus-off period dynamics.
(a) LDS prediction
(b) Proposed model prediction
Figure 5: V1 latent dynamics prediction. Models trained on 71 average trajectories for each directional motion are tested on the 1 unseen direction. We divide the average trajectory at 0° into 200 ms segments and predict each whole segment from the starting point of the segment. Note the poor predictive performance of the linear dynamical system (LDS) model.
6
Discussion
To connect dynamical theories of neural computation with neural time series data, we need to be
able to fit an expressive model to the data that robustly predicts well. The model then needs to
be interpretable such that signatures of neural computation from the theories can be identified by
its qualitative features. We show that our method successfully learns low-dimensional dynamics in
contrast to fitting a high-dimensional recurrent neural network models in previous approaches [17,
4, 25]. We demonstrated that our proposed model works well for well known dynamical models
of neural computation with various features: chaotic attractor, fixed point dynamics, bifurcation,
line/ring attractor, and a nonlinear oscillator. In addition, we also showed that it can model nonlinear
latent trajectories extracted from high-dimensional neural time series.
Critically, we assumed that the dynamics consists of a continuous and slow flow. This allowed us
to parameterize the velocity field directly, reducing the complexity of the nonlinear function approximation, and making it easy to identify the fixed/slow points. An additional structural assumption
was the existence of a global contractional dynamics. This regularizes and encourages the dynamics
to occupy a finite phase volume around the origin.
Previous strategies of visualizing arbitrary trajectories from a nonlinear system such as recurrence
plots were often difficult to understand. We visualized the dynamics using the velocity field decomposed into speed and direction, and overlaid fixed/slow points found numerically as local minima
of the speed. This is obviously more difficult for higher-dimensional dynamics, and dimensionality
reduction and visualization that preserves essential dynamic features are left for future directions.
The current method is a two-step procedure for analyzing neural dynamics: first infer the latent
trajectories, and then infer the dynamic laws. This is clearly not an efficient inference, and the next
step would be to combine vLGP observation model and inference algorithm with the interpretable
dynamic model and develop a unified inference system.
In summary, we present a novel complementary approach to studying the neural dynamics of neural
computation. Applications of the proposed method are not limited to neuroscience, but should
be useful for studying other slow low-dimensional nonlinear dynamical systems from observations [27].
Acknowledgment
We thank the reviewers for their constructive feedback. This work was partially supported by the
Thomas Hartman Foundation for Parkinson's Research.
References
[1] D. Hansel and H. Sompolinsky. Synchronization and computation in a chaotic neural network. Physical Review Letters, 68(5):718–721, Feb 1992.
[2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14:2531–2560, 2002.
[3] E. M. Izhikevich. Dynamical systems in neuroscience: the geometry of excitability and bursting. Computational neuroscience. MIT Press, 2007.
[4] D. Sussillo and O. Barak. Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3):626–649, December 2012.
[5] L. Paninski, Y. Ahmadian, D. G. G. Ferreira, et al. A new look at state-space models for neural data. Journal of Computational Neuroscience, 29(1-2):107–126, August 2010.
[6] J. P. Cunningham and B. M. Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500–1509, November 2014.
[7] T. Ozaki. Time Series Modeling of Neuroscience Data. CRC Press, January 2012.
[8] S. Eikenberry and V. Marmarelis. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations. Journal of Computational Neuroscience, 34(1):163–183, August 2013.
[9] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2746–2754. Curran Associates, Inc., 2015.
[10] M. Gan, H. Peng, X. Peng, X. Chen, and G. Inoussa. A locally linear RBF network-based state-dependent AR model for nonlinear time series modeling. Information Sciences, 180(22):4370–4383, November 2010.
[11] S. Chen, S. A. Billings, C. F. N. Cowan, and P. M. Grant. Practical identification of NARMAX models using radial basis functions. International Journal of Control, 52(6):1327–1350, December 1990.
[12] M. I. Rabinovich, R. Huerta, P. Varona, and V. S. Afraimovich. Transient cognitive dynamics, metastability, and decision making. PLoS Computational Biology, 4(5):e1000072+, May 2008.
[13] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[14] M. Abadi, A. Agarwal, P. Barham, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[15] O. Barak, D. Sussillo, R. Romo, M. Tsodyks, and L. F. Abbott. From fixed points to chaos: three models of delayed discrimination. Progress in Neurobiology, 103:214–222, April 2013.
[16] C. K. Machens, R. Romo, and C. D. Brody. Flexible control of mutual inhibition: A neural model of two-interval discrimination. Science, 307(5712):1121–1124, February 2005.
[17] V. Mante, D. Sussillo, K. V. Shenoy, and W. T. Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78–84, November 2013.
[18] S. Ganguli, J. W. Bisley, J. D. Roitman, et al. One-dimensional dynamics of attention and decision making in LIP. Neuron, 58(1):15–25, April 2008.
[19] M. E. Mazurek, J. D. Roitman, J. Ditterich, and M. N. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13(11):1257–1269, November 2003.
[20] K.-F. Wong and X.-J. Wang. A recurrent network mechanism of time integration in perceptual decisions. The Journal of Neuroscience, 26(4):1314–1328, January 2006.
[21] M. S. Goldman. Memory without feedback in a neural network. Neuron, 61(4):621–634, February 2009.
[22] C. Curto, S. Sakata, S. Marguet, V. Itskov, and K. D. Harris. A simple model of cortical dynamics explains variability and state dependence of sensory responses in urethane-anesthetized auditory cortex. The Journal of Neuroscience, 29(34):10600–10612, August 2009.
[23] H. Kantz and T. Schreiber. Nonlinear Time Series Analysis. Cambridge University Press, 2003.
[24] A. Peyrache, M. M. Lacroix, P. C. Petersen, and G. Buzsaki. Internally organized mechanisms of the head direction sense. Nature Neuroscience, 18(4):569–575, March 2015.
[25] R. Laje and D. V. Buonomano. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience, 16(7):925–933, July 2013.
[26] Y. Zhao and I. M. Park. Variational latent Gaussian process for recovering single-trial dynamics from population spike trains. ArXiv e-prints, April 2016.
[27] B. C. Daniels and I. Nemenman. Automated adaptive inference of phenomenological dynamical models. Nature Communications, 6:8133+, August 2015.
6,129 | 6,544 | Coupled Generative Adversarial Networks
Ming-Yu Liu
Mitsubishi Electric Research Labs (MERL),
[email protected]
Oncel Tuzel
Mitsubishi Electric Research Labs (MERL),
[email protected]
Abstract
We propose coupled generative adversarial network (CoGAN) for learning a joint
distribution of multi-domain images. In contrast to the existing approaches, which
require tuples of corresponding images in different domains in the training set,
CoGAN can learn a joint distribution without any tuple of corresponding images.
It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the
network capacity and favors a joint distribution solution over a product-of-marginal-distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint
distribution of face images with different attributes. For each task it successfully
learns the joint distribution without any tuple of corresponding images. We also
demonstrate its applications to domain adaptation and image transformation.
1
Introduction
The paper concerns the problem of learning a joint distribution of multi-domain images from data. A
joint distribution of multi-domain images is a probability density function that gives a density value
to each joint occurrence of images in different domains such as images of the same scene in different
modalities (color and depth images) or images of the same face with different attributes (smiling and
non-smiling). Once a joint distribution of multi-domain images is learned, it can be used to generate
novel tuples of images. In addition to movie and game production, joint image distribution learning
finds applications in image transformation and domain adaptation. When training data are given as
tuples of corresponding images in different domains, several existing approaches [1, 2, 3, 4] can be
applied. However, building a dataset with tuples of corresponding images is often a challenging task.
This correspondence dependency greatly limits the applicability of the existing approaches.
To overcome the limitation, we propose the coupled generative adversarial networks (CoGAN)
framework. It can learn a joint distribution of multi-domain images without existence of corresponding
images in different domains in the training set. Only a set of images drawn separately from the
marginal distributions of the individual domains is required. CoGAN is based on the generative
adversarial networks (GAN) framework [5], which has been established as a viable solution for image
distribution learning tasks. CoGAN extends GAN for joint image distribution learning tasks.
CoGAN consists of a tuple of GANs, each for one image domain. When trained naively, the CoGAN
learns a product of marginal distributions rather than a joint distribution. We show that by enforcing a
weight-sharing constraint the CoGAN can learn a joint distribution without existence of corresponding
images in different domains. The CoGAN framework is inspired by the idea that deep neural networks
learn a hierarchical feature representation. By enforcing the layers that decode high-level semantics
in the GANs to share the weights, it forces the GANs to decode the high-level semantics in the
same way. The layers that decode low-level details then map the shared representation to images in
individual domains for confusing the respective discriminative models. CoGAN is for multi-image
domains but, for ease of presentation, we focused on the case of two image domains in the paper.
However, the discussions and analyses can be easily generalized to multiple image domains.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We apply CoGAN to several joint image distribution learning tasks. Through convincing visualization
results and quantitative evaluations, we verify its effectiveness. We also show its applications to
unsupervised domain adaptation and image transformation.
2
Generative Adversarial Networks
A GAN consists of a generative model and a discriminative model. The objective of the generative
model is to synthesize images resembling real images, while the objective of the discriminative model
is to distinguish real images from synthesized ones. Both the generative and discriminative models
are realized as multilayer perceptrons.
Let x be a natural image drawn from a distribution p_X, and z be a random vector in R^d. Note that we only consider the case where z is drawn from a uniform distribution with support [−1, 1]^d, but other distributions such as a multivariate normal distribution can be applied as well. Let g and f be the generative and discriminative models, respectively. The generative model takes z as input and outputs an image, g(z), that has the same support as x. Denote the distribution of g(z) as p_G. The discriminative model estimates the probability that an input image is drawn from p_X. Ideally, f(x) = 1 if x ∼ p_X and f(x) = 0 if x ∼ p_G. The GAN framework corresponds to a minimax two-player game, and the generative and discriminative models can be trained jointly by solving

max_g min_f V(f, g) ≡ E_{x∼p_X}[−log f(x)] + E_{z∼p_Z}[−log(1 − f(g(z)))].    (1)
In practice (1) is solved by alternating the following two gradient update steps:
Step 1: θ_f^{t+1} = θ_f^t − λ_t ∇_{θ_f} V(f^t, g^t),    Step 2: θ_g^{t+1} = θ_g^t + λ_t ∇_{θ_g} V(f^{t+1}, g^t)

where θ_f and θ_g are the parameters of f and g, λ is the learning rate, and t is the iteration number.
Goodfellow et al. [5] show that, given enough capacity to f and g and sufficient training iterations,
the distribution, pG , converges to pX . In other words, from a random vector, z, the network g can
synthesize an image, g(z), that resembles one that is drawn from the true distribution, pX .
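To make the alternating scheme concrete, the following is a minimal sketch of Steps 1 and 2, written in PyTorch (our choice; the text does not prescribe a framework). The network sizes, batch size, learning rate, and the stand-in "real image" batch are illustrative placeholders only.

```python
import torch
import torch.nn as nn

d_z, d_x = 16, 64                      # latent and image dimensions (hypothetical)
g = nn.Sequential(nn.Linear(d_z, 128), nn.ReLU(), nn.Linear(128, d_x))
f = nn.Sequential(nn.Linear(d_x, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_f = torch.optim.SGD(f.parameters(), lr=1e-3)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-3)

def value(x_real, z):
    # V(f, g) = E_x[-log f(x)] + E_z[-log(1 - f(g(z)))], estimated on a batch
    eps = 1e-8
    return (-torch.log(f(x_real) + eps).mean()
            - torch.log(1 - f(g(z)) + eps).mean())

for t in range(100):
    x_real = torch.randn(32, d_x)      # stand-in for a batch of training images
    z = torch.rand(32, d_z) * 2 - 1    # z uniform on [-1, 1]^d as in the text
    # Step 1: gradient descent on V in the discriminator parameters.
    opt_f.zero_grad()
    value(x_real, z).backward()
    opt_f.step()
    # Step 2: gradient ascent on V in the generator parameters (negate the loss).
    opt_g.zero_grad()
    (-value(x_real, z)).backward()
    opt_g.step()
```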
3
Coupled Generative Adversarial Networks
CoGAN as illustrated in Figure 1 is designed for learning a joint distribution of images in two different
domains. It consists of a pair of GANs, GAN1 and GAN2; each is responsible for synthesizing
images in one domain. During training, we force them to share a subset of parameters. This results in the GANs learning to synthesize pairs of corresponding images without correspondence supervision.
Generative Models: Let x1 and x2 be images drawn from the marginal distribution of the 1st
domain, x1 ? pX1 and the marginal distribution of the 2nd domain, x2 ? pX2 , respectively. Let g1
and g2 be the generative models of GAN1 and GAN2 , which map a random vector input z to images
that have the same support as x1 and x2, respectively. Denote the distributions of g1(z) and g2(z) by
pG1 and pG2 . Both g1 and g2 are realized as multilayer perceptrons:
g1(z) = g1^(m1)(g1^(m1−1)(··· g1^(2)(g1^(1)(z)))),    g2(z) = g2^(m2)(g2^(m2−1)(··· g2^(2)(g2^(1)(z))))
where g1^(i) and g2^(i) are the ith layers of g1 and g2 and m1 and m2 are the numbers of layers in g1 and g2. Note that m1 need not equal m2. Also note that the support of x1 need not equal that of x2.
Through layers of perceptron operations, the generative models gradually decode information from
more abstract concepts to more material details. The first layers decode high-level semantics and the
last layers decode low-level details. Note that this information flow direction is opposite to that in a
discriminative deep neural network [6] where the first layers extract low-level features while the last
layers extract high-level features.
Based on the idea that a pair of corresponding images in two domains share the same high-level
concepts, we force the first layers of g1 and g2 to have identical structure and share the weights.
That is, θ_{g1^(i)} = θ_{g2^(i)} for i = 1, 2, ..., k, where k is the number of shared layers, and θ_{g1^(i)} and θ_{g2^(i)} are the parameters of g1^(i) and g2^(i), respectively. This constraint forces the high-level semantics to be decoded in the same way in g1 and g2. No constraints are enforced on the last layers. They can
materialize the shared high-level representation differently for fooling the respective discriminators.
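In code, the weight-sharing constraint can be realized by making the first k layers of the two generators literally the same module objects, so their parameters coincide by construction. This is a minimal sketch assuming PyTorch; the layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

k = 2  # number of shared high-level layers: g1^(i) = g2^(i) for i <= k
shared = nn.ModuleList([nn.Linear(16, 64), nn.Linear(64, 128)])
head1 = nn.Linear(128, 784)  # domain-specific last layer of g1
head2 = nn.Linear(128, 784)  # domain-specific last layer of g2

def g1(z):
    h = z
    for layer in shared:       # shared layers decode high-level semantics
        h = layer(h).relu()
    return head1(h).tanh()     # unshared layer materializes domain-1 details

def g2(z):
    h = z
    for layer in shared:
        h = layer(h).relu()
    return head2(h).tanh()     # unshared layer materializes domain-2 details

z = torch.rand(4, 16) * 2 - 1  # the same z renders a corresponding image pair
x1, x2 = g1(z), g2(z)
```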
Figure 1: CoGAN consists of a pair of GANs: GAN1 and GAN2 . Each has a generative model for synthesizing
realistic images in one domain and a discriminative model for classifying whether an image is real or synthesized.
We tie the weights of the first few layers (responsible for decoding high-level semantics) of the generative models,
g1 and g2 . We also tie the weights of the last few layers (responsible for encoding high-level semantics) of the
discriminative models, f1 and f2 . This weight-sharing constraint allows CoGAN to learn a joint distribution of
images without correspondence supervision. A trained CoGAN can be used to synthesize pairs of corresponding
images, i.e., pairs of images sharing the same high-level abstraction but having different low-level realizations.
Discriminative Models: Let f1 and f2 be the discriminative models of GAN1 and GAN2 given by
f1(x1) = f1^(n1)(f1^(n1−1)(··· f1^(2)(f1^(1)(x1)))),    f2(x2) = f2^(n2)(f2^(n2−1)(··· f2^(2)(f2^(1)(x2))))
where f1^(i) and f2^(i) are the ith layers of f1 and f2 and n1 and n2 are the numbers of layers. The
discriminative models map an input image to a probability score, estimating the likelihood that the
input is drawn from a true data distribution. The first layers of the discriminative models extract
low-level features, while the last layers extract high-level features. Because the input images are
realizations of the same high-level semantics in two different domains, we force f1 and f2 to have
the same last layers, which is achieved by sharing the weights of the last layers via θ_{f1^(n1−i)} = θ_{f2^(n2−i)} for i = 0, 1, ..., l − 1, where l is the number of weight-sharing layers in the discriminative models, and θ_{f1^(i)} and θ_{f2^(i)} are the network parameters of f1^(i) and f2^(i), respectively. The weight-sharing constraint in the discriminators helps reduce the total number of parameters in the network,
but it is not essential for learning a joint distribution.
Learning: The CoGAN framework corresponds to a constrained minimax game given by
max_{g1,g2} min_{f1,f2} V(f1, f2, g1, g2),  subject to  θ_{g1^(i)} = θ_{g2^(i)} for i = 1, 2, ..., k, and θ_{f1^(n1−j)} = θ_{f2^(n2−j)} for j = 0, 1, ..., l − 1,    (2)
where the value function V is given by
V(f1, f2, g1, g2) = E_{x1∼pX1}[−log f1(x1)] + E_{z∼pZ}[−log(1 − f1(g1(z)))] + E_{x2∼pX2}[−log f2(x2)] + E_{z∼pZ}[−log(1 − f2(g2(z)))].    (3)
In the game, there are two teams and each team has two players. The generative models form a
team and work together for synthesizing a pair of images in two different domains for confusing the
discriminative models. The discriminative models try to differentiate images drawn from the training
data distribution in the respective domains from those drawn from the respective generative models.
The collaboration between the players in the same team is established from the weight-sharing
constraint. Similar to GAN, CoGAN can be trained by back propagation with the alternating gradient
update steps. The details of the learning algorithm are given in the supplementary materials.
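A hedged sketch of the value function (3), assuming generators g1, g2 and discriminators f1, f2 like the sketches above and batches x1, x2 drawn separately from the two marginal distributions; sharing one z batch across both expectations over pZ is our own Monte Carlo choice.

```python
import torch

def cogan_value(f1, f2, g1, g2, x1, x2, z, eps=1e-8):
    # V = E[-log f1(x1)] + E[-log(1 - f1(g1(z)))]
    #   + E[-log f2(x2)] + E[-log(1 - f2(g2(z)))]
    return (-torch.log(f1(x1) + eps).mean()
            - torch.log(1 - f1(g1(z)) + eps).mean()
            - torch.log(f2(x2) + eps).mean()
            - torch.log(1 - f2(g2(z)) + eps).mean())
```

The discriminators then take gradient steps to decrease this value while the generators take gradient steps to increase it, exactly as in the single-GAN updates of Section 2.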
Remarks: CoGAN learning requires training samples drawn from the marginal distributions, pX1
and pX2 . It does not rely on samples drawn from the joint distribution, pX1 ,X2 , where corresponding
supervision would be available. Our main contribution is in showing that with just samples drawn
separately from the marginal distributions, CoGAN can learn a joint distribution of images in the
two domains. Both weight-sharing constraint and adversarial training are essential for enabling
this capability. Unlike autoencoder learning [3], which encourages a generated pair of images
to be identical to the target pair of corresponding images in the two domains for minimizing the
reconstruction loss (this is why [3] requires samples from the joint distribution for learning the joint distribution), the adversarial training only encourages the generated pair of images to be
Figure 2: Left (Task A): generation of digit and corresponding edge images. Right (Task B): generation of digit
and corresponding negative images. Each of the top and bottom pairs was generated using the same input noise.
We visualized the results by traversing in the input space.
[Figure 3 panels: "Task A: pair generation of digit and edge images" and "Task B: pair generation of digit and negative images"; each plots the avg. pixel agreement ratio (0.88–0.96) against the number of weight-sharing layers in the discriminative models (0–3), with one curve for each of 1–4 weight-sharing layers in the generative models.]
Figure 3: The figures plot the average pixel agreement ratios of the CoGANs with different weight-sharing
configurations for Task A and B. The larger the pixel agreement ratio the better the pair generation performance.
We found that the performance was positively correlated with the number of weight-sharing layers in the
generative models but was uncorrelated to the number of weight-sharing layers in the discriminative models.
CoGAN learned the joint distribution without weight-sharing layers in the discriminative models.
individually resembling the images in the respective domains. With this more relaxed adversarial
training setting, the weight-sharing constraint can then kick in for capturing correspondences between
domains. With the weight-sharing constraint, the generative models must utilize the capacity more
efficiently for fooling the discriminative models, and the most efficient way of utilizing the capacity
for generating a pair of realistic images in two domains is to generate a pair of corresponding images
since the neurons responsible for decoding high-level semantics can be shared.
CoGAN learning is based on existence of shared high-level representations in the domains. If such a
representation does not exist for the set of domains of interest, it would fail.
4
Experiments
In the experiments, we emphasize that there were no corresponding images in the different domains in the
training sets. CoGAN learned the joint distributions without correspondence supervision. We were
unaware of existing approaches with the same capability and hence did not compare CoGAN with
prior works. Instead, we compared it to a conditional GAN to demonstrate its advantage. Recognizing
that popular performance metrics for evaluating generative models are all subject to issues [7], we
adopted a pair image generation performance metric for comparison. Many details including the
network architectures and additional experiment results are given in the supplementary materials. An
implementation of CoGAN is available in https://github.com/mingyuliutw/cogan.
Digits: We used the MNIST training set to train CoGANs for the following two tasks. Task A is
about learning a joint distribution of a digit and its edge image. Task B is about learning a joint
distribution of a digit and its negative image. In Task A, the 1st domain consisted of the original
handwritten digit images, while the 2nd domain consisted of their edge images. We used an edge
detector to compute training edge images for the 2nd domain. In the supplementary materials, we also
showed an experiment for learning a joint distribution of a digit and its 90-degree in-plane rotation.
We used deep convolutional networks to realize the CoGAN. The two generative models had an identical structure; both had 5 layers and were fully convolutional. The stride lengths of the convolutional
layers were fractional. The models also employed the batch normalization processing [8] and the
parameterized rectified linear unit processing [9]. We shared the parameters for all the layers except
for the last convolutional layers. For the discriminative models, we used a variant of LeNet [10].
The inputs to the discriminative models were batches containing output images from the generative
models and images from the two training subsets (each pixel value is linearly scaled to [0, 1]).
We divided the training set into two equal-size non-overlapping subsets. One was used to train GAN1
and the other was used to train GAN2 . We used the ADAM algorithm [11] for training and set the
learning rate to 0.0002, the 1st momentum parameter to 0.5, and the 2nd momentum parameter to
0.999 as suggested in [12]. The mini-batch size was 128. We trained the CoGAN for 25000 iterations.
These hyperparameters were fixed for all the visualization experiments.
The CoGAN learning results are shown in Figure 2. We found that although the CoGAN was
trained without corresponding images, it learned to render corresponding ones for both Task A and
B. This was due to the weight-sharing constraint imposed on the layers that were responsible for
decoding high-level semantics. Exploiting the correspondence between the two domains allowed
GAN1 and GAN2 to utilize more capacity in the networks to better fit the training data. Without the
weight-sharing constraint, the two GANs just generated two unrelated images in the two domains.
Weight Sharing: We varied the numbers of weight-sharing layers in the generative and discriminative
models to create different CoGANs for analyzing the weight-sharing effect for both tasks. Due to
lack of proper validation methods, we did a grid search on the training iteration hyperparameter
and reported the best performance achieved by each network. For quantifying the performance, we
transformed the image generated by GAN1 to the 2nd domain using the same method employed
for generating the training images in the 2nd domain. We then compared the transformed image
with the image generated by GAN2 . A perfect joint distribution learning should render two identical
images. Hence, we used the ratios of agreed pixels between 10K pairs of images generated by
each network (10K randomly sampled z) as the performance metric. We trained each network 5
times with different initialization weights and reported the average pixel agreement ratios over the 5
trials for each network. The results are shown in Figure 3. We observed that the performance was
positively correlated with the number of weight-sharing layers in the generative models. With more
sharing layers in the generative models, the rendered pairs of images resembled true pairs drawn
from the joint distribution more. We also noted that the performance was uncorrelated to the number
of weight-sharing layers in the discriminative models. However, we still preferred discriminator
weight-sharing because this reduces the total number of network parameters.
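The pixel agreement metric itself is simple to reproduce. The sketch below uses image negation (Task B) as the transform and an agreement tolerance of one gray level; both the tolerance and the toy data are our own assumptions, since the exact thresholding is not spelled out here.

```python
import numpy as np

def pixel_agreement(x1_batch, x2_batch, transform, tol=1.0 / 255):
    # x1_batch, x2_batch: images generated by GAN1 / GAN2 from the same noise
    # vectors, with pixel values in [0, 1].
    t = transform(x1_batch)                 # e.g. lambda x: 1.0 - x for negatives
    return float(np.mean(np.abs(t - x2_batch) <= tol))

# Example: 10K pairs for Task B (negative images), as in the evaluation.
x1 = np.random.rand(10000, 28, 28)          # stand-ins for GAN1 outputs
x2 = 1.0 - x1                               # perfect-agreement case
print(pixel_agreement(x1, x2, lambda x: 1.0 - x))  # -> 1.0
```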
Comparison with Conditional GANs: We compared the CoGAN with the conditional GANs [13].
We designed a conditional GAN with the generative and discriminative models identical to those in
the CoGAN. The only difference was the conditional GAN took an additional binary variable as input,
which controlled the domain of the output image. When the binary variable was 0, it generated an
image resembling images in the 1st domain; otherwise, it generated an image resembling images in
the 2nd domain. Similarly, no pairs of corresponding images were given during the conditional GAN
training. We applied the conditional GAN to both Task A and B and hoped to empirically answer
whether a conditional model can be used to learn to render corresponding images without correspondence
supervision. The pixel agreement ratio was used as the performance metric. The experiment results
showed that for Task A, CoGAN achieved an average ratio of 0.952, outperforming 0.909 achieved
by the conditional GAN. For Task B, CoGAN achieved a score of 0.967, which was much better
than 0.778 achieved by the conditional GAN. The conditional GAN just generated two different
digits with the same random noise input but different binary variable values. These results showed
that the conditional model failed to learn a joint distribution from samples drawn from the marginal
distributions. We note that when the supports of the two domains are different, such as the
color and depth image domains, the conditional model cannot even be applied.
Faces: We applied CoGAN to learn a joint distribution of face images with different attributes. We trained
several CoGANs, each for generating a face with an attribute and a corresponding face without the
attribute. We used the CelebFaces Attributes dataset [14] for the experiments. The dataset covered
large pose variations and background clutters. Each face image had several attributes, including
blond hair, smiling, and eyeglasses. The face images with an attribute constituted the 1st domain; and
those without the attribute constituted the 2nd domain. No corresponding face images between the
two domains were given. We resized the images to a resolution of 132 × 132 and randomly sampled 128 × 128 regions for training. The generative and discriminative models were both 7-layer deep
convolutional neural networks.
The experiment results are shown in Figure 4. We randomly sampled two points in the 100-dimensional input noise space and visualized the rendered face images as traveling from one point to
Figure 4: Generation of face images with different attributes using CoGAN. From top to bottom, the figure
shows pair face generation results for the blond-hair, smiling, and eyeglasses attributes. For each pair, the 1st
row contains faces with the attribute, while the 2nd row contains corresponding faces without the attribute.
the other. We found CoGAN generated pairs of corresponding faces, resembling those from the same
person with and without an attribute. As traveling in the space, the faces gradually change from one
person to another. Such deformations were consistent for both domains. Note that it is difficult to
create a dataset with corresponding images for some attributes, such as blond hair, since the subjects would have to dye their hair. It is preferable to have an approach that does not require corresponding
images like CoGAN. We also noted that the number of faces with an attribute was often several times
smaller than that without the attribute in the dataset. However, CoGAN learning was not hindered by
the mismatches.
Color and Depth Images: We used the RGBD dataset [15] and the NYU dataset [16] for learning a joint distribution of color and depth images. The RGBD dataset contains registered color and depth
images of 300 objects captured by the Kinect sensor from different view points. We partitioned the
dataset into two equal-size non-overlapping subsets. The color images in the 1st subset were used for
training GAN1 , while the depth images in the 2nd subset were used for training GAN2 . There were
no corresponding depth and color images in the two subsets. The images in the RGBD dataset have
different resolutions. We resized them to a fixed resolution of 64 × 64. The NYU dataset contains
color and depth images captured from indoor scenes using the Kinect sensor. We used the 1449
processed depth images for the depth domain. The training images for the color domain were from
Figure 5: Generation of color and depth images using CoGAN. The top figure shows the results for the RGBD
dataset: the 1st row contains the color images, the 2nd row contains the depth images, and the 3rd and 4th rows
visualized the depth profile under different view points. The bottom figure shows the results for the NYU dataset.
all the color images in the raw dataset except for those registered with the processed depth images.
We resized both the depth and color images to a resolution of 176 × 132 and randomly cropped 128 × 128 patches for training.
Figure 5 shows the generation results. We found the rendered color and depth images resembled corresponding RGB and depth image pairs even though no registered images existed in the two domains in the training set. The CoGAN recovered the appearance-depth correspondence without supervision.
5
Applications
In addition to rendering novel pairs of corresponding images for movie and game production, the
CoGAN finds applications in the unsupervised domain adaptation and image transformation tasks.
Unsupervised Domain Adaptation (UDA): UDA concerns adapting a classifier trained in one
domain to classify samples in a new domain where no labeled examples are available for re-training the classifier. Early works have explored ideas from subspace learning [17, 18] to
deep discriminative network learning [19, 20, 21]. We show that CoGAN can be applied to the UDA
problem. We studied the problem of adapting a digit classifier from the MNIST dataset to the USPS
dataset. Due to domain shift, a classifier trained using one dataset achieves poor performance in the
other. We followed the experiment protocol in [17, 20], which randomly samples 2000 images from
the MNIST dataset, denoted as D1 , and 1800 images from the USPS dataset, denoted as D2 , to define
an UDA problem. The USPS digits have a different resolution. We resized them to have the same
resolution as the MNIST digits. We employed the CoGAN used for the digit generation task. For
classifying digits, we attached a softmax layer to the last hidden layer of the discriminative models.
We trained the CoGAN by jointly solving the digit classification problem in the MNIST domain
which used the images and labels in D1 and the CoGAN learning problem which used the images
in both D1 and D2. This produced two classifiers: c1(x1) ≡ c(f1^(3)(f1^(2)(f1^(1)(x1)))) for MNIST and c2(x2) ≡ c(f2^(3)(f2^(2)(f2^(1)(x2)))) for USPS. No label information in D2 was used. Note that f1^(2) ≡ f2^(2) and f1^(3) ≡ f2^(3) due to weight sharing, and c denotes the softmax layer. We then applied
c2 to classify digits in the USPS dataset. The classifier adaptation from USPS to MNIST can be
achieved in the same way. The learning hyperparameters were determined via a validation set. We
reported the average accuracy over 5 trials with different randomly selected D1 and D2.
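A minimal sketch of the classifier construction: the softmax head c sits on top of discriminator layers whose upper levels are shared between domains, so training c with MNIST labels in D1 immediately yields a USPS classifier c2. The layer sizes are hypothetical, and the sketch assumes PyTorch.

```python
import torch
import torch.nn as nn

f1_1 = nn.Linear(784, 256)   # domain-specific first layer (MNIST)
f2_1 = nn.Linear(784, 256)   # domain-specific first layer (USPS)
f_2 = nn.Linear(256, 128)    # shared: f1^(2) = f2^(2)
f_3 = nn.Linear(128, 64)     # shared: f1^(3) = f2^(3)
c = nn.Linear(64, 10)        # softmax head over the 10 digit classes

def c1(x1):                  # trained jointly with the MNIST labels in D1
    return c(f_3(f_2(f1_1(x1).relu()).relu()).relu()).softmax(dim=1)

def c2(x2):                  # applied to USPS digits, no labels from D2 used
    return c(f_3(f_2(f2_1(x2).relu()).relu()).relu()).softmax(dim=1)

x_usps = torch.rand(4, 784)
print(c2(x_usps).shape)      # torch.Size([4, 10])
```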
Table 1 reports the performance of the proposed CoGAN approach in comparison with the state-of-the-art methods for the UDA task. The results for the other methods were taken from [20].
We observed that CoGAN significantly outperformed the state-of-the-art methods. It improved the
accuracy from 0.64 to 0.90, which translates to a 72% error reduction rate.
Table 1: Unsupervised domain adaptation performance comparison. The table reports classification accuracies achieved by competing algorithms.

Method    From MNIST to USPS    From USPS to MNIST    Average
[17]      0.408                 0.274                 0.341
[18]      0.467                 0.355                 0.411
[19]      0.478                 0.631                 0.554
[20]      0.607                 0.673                 0.640
CoGAN     0.912 ± 0.008         0.891 ± 0.008         0.902

Figure 6: Cross-domain image transformation. For each pair, left is the input; right is the transformed image.

Cross-Domain Image Transformation: Let x1 be an image in the 1st domain. Cross-domain image transformation is about finding the corresponding image in the 2nd domain, x2, such that the joint
probability density, p(x1, x2), is maximized. Let L be a loss function measuring the difference between two images. Given g1 and g2, the transformation can be achieved by first finding the random vector that generates the query image in the 1st domain, z* = argmin_z L(g1(z), x1). After finding z*, one can apply g2 to obtain the transformed image, x2 = g2(z*). In Figure 6, we show several CoGAN
cross-domain transformation results, computed by using the Euclidean loss function and the L-BFGS
optimization algorithm. We found the transformation was successful when the input image was covered by g1 (i.e., the input image can be generated by g1), but it produced blurry images when that was not the case. To improve the coverage, we hypothesize that more training images and a better objective
function are required, which are left as future work.
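A hedged sketch of the transformation procedure: invert g1 on the query image by gradient descent on the Euclidean loss, then decode the recovered latent code with g2. Adam is used here instead of L-BFGS purely for brevity, and the step count and learning rate are our assumptions.

```python
import torch

def transform(g1, g2, x1, d_z=16, steps=500, lr=0.05):
    z = torch.zeros(1, d_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((g1(z) - x1) ** 2).sum()    # Euclidean loss L(g1(z), x1)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return g2(z)                        # x2 = g2(z*)
```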
6
Related Work
Neural generative models have recently received an increasing amount of attention. Several approaches, including generative adversarial networks [5], variational autoencoders (VAE) [22], attention models [23], moment matching [24], stochastic back-propagation [25], and diffusion processes [26], have shown that a deep network can learn an image distribution from samples. The learned networks
can be used to generate novel images. Our work was built on [5]. However, we studied a different
problem, the problem of learning a joint distribution of multi-domain images. We were interested
in whether a joint distribution of images in different domains can be learned from samples drawn
separately from its marginal distributions of the individual domains. We showed this is achievable via the proposed CoGAN framework. Note that our work is different from the Attribute2Image work [27],
which is based on a conditional VAE model [28]. The conditional model can be used to generate
images of different styles, but they are unsuitable for generating images in two different domains
such as color and depth image domains.
Following [5], several works improved the image generation quality of GAN, including a Laplacian pyramid implementation [29], a deeper architecture [12], and conditional models [13]. Our work extended GAN to deal with joint distributions of images.
Our work is related to the prior works in multi-modal learning, including joint embedding space
learning [30] and multi-modal Boltzmann machines [1, 3]. These approaches can be used for
generating corresponding samples in different domains only when correspondence annotations are
given during training. The same limitation is also applied to dictionary learning-based approaches [2,
4]. Our work is also related to the prior works in cross-domain image generation [31, 32, 33], which
studied transforming an image in one style to the corresponding images in another style. However,
we focus on learning the joint distribution in an unsupervised fashion, while [31, 32, 33] focus on
learning a transformation function directly in a supervised fashion.
7
Conclusion
We presented the CoGAN framework for learning a joint distribution of multi-domain images. We
showed that by enforcing a simple weight-sharing constraint on the layers that are responsible for
decoding abstract semantics, the CoGAN learned the joint distribution of images by just using
samples drawn separately from the marginal distributions. In addition to convincing image generation
results on faces and RGBD images, we also showed promising results of the CoGAN framework for
the image transformation and unsupervised domain adaptation tasks.
References
[1] Nitish Srivastava and Ruslan R Salakhutdinov. Multimodal learning with deep boltzmann machines. In
NIPS, 2012.
[2] Shenlong Wang, Lei Zhang, Yan Liang, and Quan Pan. Semi-coupled dictionary learning with applications
to image super-resolution and photo-sketch synthesis. In CVPR, 2012.
[3] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal
deep learning. In ICML, 2011.
[4] Jianchao Yang, John Wright, Thomas S Huang, and Yi Ma. Image super-resolution via sparse representation.
IEEE TIP, 2010.
[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[7] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In
ICLR, 2016.
[8] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv:1502.03167, 2015.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification. In ICCV, 2015.
[10] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 1998.
[11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[12] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. In ICLR, 2016.
[13] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In
ICCV, 2015.
[15] Kevin Lai, Liefeng Bo, Xiaofeng Ren, and Dieter Fox. A large-scale hierarchical multi-view rgb-d object
dataset. In ICRA, 2011.
[16] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support
inference from rgbd images. In ECCV, 2012.
[17] Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip Yu. Transfer feature learning
with joint distribution adaptation. In ICCV, 2013.
[18] Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. Joint cross-domain classification and subspace
learning for unsupervised adaptation. Pattern Recognition Letters, 65:60–66, 2015.
[19] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion:
Maximizing for domain invariance. arXiv:1412.3474, 2014.
[20] Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. Beyond sharing weights for deep domain adaptation.
arXiv:1603.06432, 2016.
[21] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette,
Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016.
[22] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.
[23] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural network for
image generation. In ICML, 2015.
[24] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. ICML, 2016.
[25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
[26] Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised
learning using nonequilibrium thermodynamics. In ICML, 2015.
[27] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation
from visual attributes. arXiv:1512.00570, 2015.
[28] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised
learning with deep generative models. In NIPS, 2014.
[29] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian
pyramid of adversarial networks. In NIPS, 2015.
[30] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with
multimodal neural language models. arXiv:1411.2539, 2014.
[31] Junho Yim, Heechul Jung, ByungIn Yoo, Changkyu Choi, Dusik Park, and Junmo Kim. Rotating your face
using multi-task deep neural network. In CVPR, 2015.
[32] Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In NIPS, 2015.
[33] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with
convolutional neural networks. In CVPR, 2015.
Consistent Kernel Mean Estimation
for Functions of Random Variables
Carl-Johann Simon-Gabriel*, Adam Scibior*,†, Ilya Tolstikhin, Bernhard Schölkopf
Department of Empirical Inference, Max Planck Institute for Intelligent Systems
Spemannstraße 38, 72076 Tübingen, Germany
* joint first authors; † also with: Engineering Department, Cambridge University
cjsimon@, adam.scibior@, ilya@, [email protected]
Abstract
We provide a theoretical foundation for non-parametric estimation of functions of
random variables using kernel mean embeddings. We show that for any continuous
function f , consistent estimators of the mean embedding of a random variable X
lead to consistent estimators of the mean embedding of f (X). For Mat?rn kernels
and sufficiently smooth functions we also provide rates of convergence.
Our results extend to functions of multiple random variables. If the variables
are dependent, we require an estimator of the mean embedding of their joint
distribution as a starting point; if they are independent, it is sufficient to have
separate estimators of the mean embeddings of their marginal distributions. In
either case, our results cover both mean embeddings based on i.i.d. samples as well
as ?reduced set? expansions in terms of dependent expansion points. The latter
serves as a justification for using such expansions to limit memory resources when
applying the approach as a basis for probabilistic programming.
1
Introduction
A common task in probabilistic modelling is to compute the distribution of f (X), given a measurable
function f and a random variable X. In fact, the earliest instances of this problem date back at least
to Poisson (1837). Sometimes this can be done analytically. For example, if f is linear and X is Gaussian, that is f(x) = ax + b and X ∼ N(μ, σ²), we have f(X) ∼ N(aμ + b, a²σ²). There exist
various methods for obtaining such analytical expressions (Mathai, 1973), but outside a small subset
of distributions and functions the formulae are either not available or too complicated to be practical.
An alternative to the analytical approach is numerical approximation, ideally implemented as a
flexible software library. The need for such tools is recognised in the general programming languages
community (McKinley, 2016), but no standards were established so far. The main challenge is in
finding a good approximate representation for random variables.
Distributions on integers, for example, are usually represented as lists of (xi , p(xi )) pairs. For real
valued distributions, integral transforms (Springer, 1979), mixtures of Gaussians (Milios, 2009), Laguerre polynomials (Williamson, 1989), and Chebyshev polynomials (Korzeń and Jaroszewicz, 2014)
were proposed as convenient representations for numerical computation. For strings, probabilistic
finite automata are often used. All those approaches have their merits, but they only work with a
specific input type.
There is an alternative, based on Monte Carlo sampling (Kalos and Whitlock, 2008), which is to
represent X by a (possibly weighted) sample {(x_i, w_i)}_{i=1}^n (with w_i ≥ 0). This representation has
several advantages: (i) it works for any input type, (ii) the sample size controls the time-accuracy
trade-off, and (iii) applying functions to random variables reduces to applying the functions pointwise
to the sample, i.e., {(f(x_i), w_i)} represents f(X). Furthermore, expectations of functions of random variables can be estimated as E[f(X)] ≈ Σ_i w_i f(x_i) / Σ_i w_i, sometimes with guarantees for the convergence rate.
The flexibility of this Monte Carlo approach comes at a cost: without further assumptions on the underlying input space 𝒳, it is hard to quantify the accuracy of this representation. For instance, given two samples of the same size, {(x_i, w_i)}_{i=1}^n and {(x′_i, w′_i)}_{i=1}^n, how can we tell which one is a better representation of X? More generally, how could we optimize a representation with predefined sample size?
There exists an alternative to the Monte Carlo approach, called Kernel Mean Embeddings (KME)
(Berlinet and Thomas-Agnan, 2004; Smola et al., 2007). It also represents random variables as
samples, but additionally defines a notion of similarity between sample points. As a result, (i) it
keeps all the advantages of the Monte Carlo scheme, (ii) it includes the Monte Carlo method as
a special case, (iii) it overcomes its pitfalls described above, and (iv) it can be tailored to focus
on different properties of X, depending on the user?s needs and prior assumptions. The KME
approach identifies both sample points and distributions with functions in an abstract Hilbert space.
Internally the latter are still represented as weighted samples, but the weights can be negative and
the straightforward Monte Carlo interpretation is no longer valid. Schölkopf et al. (2015) propose
using KMEs as approximate representation of random variables for the purpose of computing their
functions. However, they only provide theoretical justification for it in rather idealised settings, which
do not meet practical implementation requirements.
In this paper, we build on this work and provide general theoretical guarantees for the proposed estimators. Specifically, we prove statements of the form "if {(x_i, w_i)}_{i=1}^n provides a good estimate for the KME of X, then {(f(x_i), w_i)}_{i=1}^n provides a good estimate for the KME of f(X)". Importantly, our results do not assume joint independence of the observations x_i (and weights w_i). This makes
them a powerful tool. For instance, imagine we are given data {(x_i, w_i)}_{i=1}^n from a random variable X that we need to compress. Then our theorems guarantee that, whatever compression algorithm we use, as long as the compressed representation {(x′_j, w′_j)}_{j=1}^n still provides a good estimate for the KME of X, the pointwise images {(f(x′_j), w′_j)}_{j=1}^n provide good estimates of the KME of f(X).
In the remainder of this section we first introduce KMEs and discuss their merits. Then we explain
why and how we extend the results of Schölkopf et al. (2015). Section 2 contains our main results. In Section 2.1 we show consistency of the relevant estimator in a general setting, and in Section 2.2 we provide finite sample guarantees when Matérn kernels are used. Section 3 shows how our results
apply to functions of multiple variables, both interdependent and independent. Section 4 concludes
with a discussion.
1.1
Background on kernel mean embeddings
Let 𝒳 be a measurable input space. We use a positive definite bounded and measurable kernel k : 𝒳 × 𝒳 → ℝ to represent random variables X ∼ P and weighted samples X̂ := {(x_i, w_i)}_{i=1}^n as two functions μ_X^k and μ̂_X^k in the corresponding Reproducing Kernel Hilbert Space (RKHS) H_k by defining

μ_X^k := ∫ k(x, ·) dP(x)    and    μ̂_X^k := Σ_i w_i k(x_i, ·).
These are guaranteed to exist, since we assume the kernel is bounded (Smola et al., 2007). When clear from the context, we omit the kernel k in the superscript. μ_X is called the KME of P, but we also refer to it as the KME of X. In this paper we focus on computing functions of random variables.
For f : 𝒳 → 𝒵, where 𝒵 is a measurable space, and for a positive definite bounded k_z : 𝒵 × 𝒵 → ℝ we also write

μ_{f(X)}^{k_z} := ∫ k_z(f(x), ·) dP(x)    and    μ̂_{f(X)}^{k_z} := Σ_i w_i k_z(f(x_i), ·).    (1)
The advantage of mapping random variables X and samples X̂ to functions in the RKHS is that we may now say that X̂ is a good approximation for X if the RKHS distance ‖μ̂_X − μ_X‖ is small. This distance depends on the choice of the kernel, and different kernels emphasise different information about X. For example, if on 𝒳 := [a, b] ⊂ ℝ we choose k(x, x′) := x·x′ + 1, then μ_X(x) = E_{X∼P}[X]·x + 1. Thus any two distributions and/or samples with equal means are mapped to the same function in H_k, so the distance between them is zero. Therefore, using this particular k, we keep track only of the mean of the distributions. If instead we prefer to keep track of all first p moments, we may use the kernel k(x, x′) := (x·x′ + 1)^p. And if we do not want to lose any information at all, we should choose k such that μ^k is injective over all probability measures on 𝒳.
Such kernels are called characteristic. For standard spaces, such as 𝒳 = ℝ^d, many widely used kernels were proven characteristic, such as Gaussian, Laplacian, and Matérn kernels (Sriperumbudur et al., 2010, 2011).
The Gaussian kernel k(x, x′) := e^{−‖x − x′‖²/(2σ²)} may serve as another good illustration of the flexibility
of this representation. Whatever positive bandwidth σ² > 0, we do not lose any information about distributions, because k is characteristic. Nevertheless, if σ² grows, all distributions start looking the same, because their embeddings converge to a constant function 1. If, on the other hand, σ² becomes small, distributions look increasingly different and μ̂_X becomes a function with bumps of height w_i at every x_i. In the limit when σ² goes to zero, each point is only similar to itself, so μ̂_X reduces to the Monte Carlo method. Choosing σ² can be interpreted as controlling the degree of smoothing in the approximation.
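As a concrete illustration, the RKHS distance between two weighted expansions can be computed purely from Gram matrices: ‖μ̂_X − μ̂_Y‖² = wᵀK_XX w − 2wᵀK_XY u + uᵀK_YY u. The following is a minimal sketch; the Gaussian kernel, the bandwidth, and the toy data are our own choices.

```python
import numpy as np

def gauss_gram(A, B, sigma2=1.0):
    # Gram matrix k(a_i, b_j) for the Gaussian kernel with bandwidth sigma2.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def rkhs_dist2(X, w, Y, u, sigma2=1.0):
    # Squared RKHS distance between the expansions (X, w) and (Y, u).
    return (w @ gauss_gram(X, X, sigma2) @ w
            - 2 * w @ gauss_gram(X, Y, sigma2) @ u
            + u @ gauss_gram(Y, Y, sigma2) @ u)

X = np.random.randn(100, 1); w = np.full(100, 1 / 100)
Y = np.random.randn(50, 1);  u = np.full(50, 1 / 50)
print(rkhs_dist2(X, w, Y, u))   # small when both samples represent N(0, 1)
```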
1.2
Reduced set methods
An attractive feature when using KME estimators is the ability to reduce the number of expansion points (i.e., the size of the weighted sample) in a principled way. Specifically, if X̂′ := {(x′_j, 1/N)}_{j=1}^N, then the objective is to construct X̂ := {(x_i, w_i)}_{i=1}^n that minimises ‖μ̂_{X′} − μ̂_X‖ with n < N. Often the resulting x_i are mutually dependent and the w_i certainly depend on them. The algorithms for constructing such expansions are known as reduced set methods and have been studied by the machine learning community (Schölkopf and Smola, 2002, Chapter 18).
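A minimal sketch of one such construction: keep a random subsample of the original points and solve for the weights that minimise the RKHS objective above, which reduces to a linear system in the Gram matrices; the small ridge term lam is our own addition for numerical stability.

```python
import numpy as np

def reduced_set_weights(X_full, X_sub, kernel, lam=1e-6):
    # Minimize ||sum_i w_i k(x_i,.) - (1/N) sum_j k(x'_j,.)||^2 over w, which
    # gives the normal equations K_nn w = K_nN 1/N.
    K_nn = kernel(X_sub, X_sub)                 # (n, n)
    K_nN = kernel(X_sub, X_full)                # (n, N)
    target = K_nN.mean(axis=1)                  # K_nN @ (1/N, ..., 1/N)
    n = K_nn.shape[0]
    return np.linalg.solve(K_nn + lam * np.eye(n), target)

X_full = np.random.randn(1000, 1)
X_sub = X_full[:10]                             # the fixed random subsample
gauss = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / 2)
w = reduced_set_weights(X_full, X_sub, gauss)
```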
Although reduced set methods provide significant efficiency gains, their application raises certain
concerns when it comes to computing functions of random variables. Let P, Q be the distributions of X and f(X) respectively. If x′_j ∼_{i.i.d.} P, then f(x′_j) ∼_{i.i.d.} Q, and so μ̂_{f(X′)} = (1/N) Σ_j k(f(x′_j), ·) reduces to the commonly used √N-consistent empirical estimator of μ_{f(X)} (Smola et al., 2007). Unfortunately, this is not the case after applying reduced set methods, and it is not known under which conditions μ̂_{f(X)} is a consistent estimator for μ_{f(X)}.
Schölkopf et al. (2015) advocate the use of reduced expansion set methods to save computational resources. They also provide some reasoning why this should be the right thing to do for characteristic kernels, but as they state themselves, their rigorous analysis does not cover practical reduced set methods. Motivated by this and other concerns listed in Section 1.4, we provide a generalised analysis of the estimator μ̂_{f(X)}, where we do not make assumptions on how x_i and w_i were generated.
Before doing that, however, we first illustrate how the need for reduced set methods naturally emerges
on a concrete problem.
1.3
Illustration with functions of two random variables
Suppose that we want to estimate μ_{f(X,Y)} given i.i.d. samples X̂′ = {(x′_i, 1/N)}_{i=1}^N and Ŷ′ = {(y′_j, 1/N)}_{j=1}^N from two independent random variables X ∈ 𝒳 and Y ∈ 𝒴 respectively. Let Q be the distribution of Z = f(X, Y).
The first option is to consider what we will call the diagonal estimator μ̂^1 := (1/N) Σ_{i=1}^N k_z(f(x′_i, y′_i), ·). Since f(x′_i, y′_i) ∼_{i.i.d.} Q, μ̂^1 is √N-consistent (Smola et al., 2007). Another option is to consider the U-statistic estimator μ̂^2 := (1/N²) Σ_{i,j=1}^N k_z(f(x′_i, y′_j), ·), which is also known to be √N-consistent. Experiments show that μ̂^2 is more accurate and has lower variance than μ̂^1 (see Figure 1). However, the U-statistic estimator μ̂^2 needs O(n²) memory rather than O(n). For this reason, Schölkopf et al. (2015) propose to use a reduced set method both on X̂′ and Ŷ′ to get new samples X̂ = {(x_i, w_i)}_{i=1}^n and Ŷ = {(y_j, u_j)}_{j=1}^n of size n ≪ N, and then estimate μ_{f(X,Y)} using μ̂^3 := Σ_{i,j=1}^n w_i u_j k_z(f(x_i, y_j), ·).
We ran experiments on synthetic data to show how accurately μ̂^1, μ̂^2 and μ̂^3 approximate μ_{f(X,Y)} with growing sample size N. We considered three basic arithmetic operations: multiplication X·Y, division X/Y, and exponentiation X^Y, with X ∼ N(3; 0.5) and Y ∼ N(4; 0.5). As the true embedding μ_{f(X,Y)} is unknown, we approximated it by a U-statistic estimator based on a large sample (125 points). For μ̂^3, we used the simplest possible reduced set method: we randomly sampled subsets of size n = 0.01·N of the x_i, and optimized the weights w_i and u_i to best approximate μ̂_X and μ̂_Y. The results are summarised in Figure 1 and corroborate our expectations: (i) all estimators converge, (ii) μ̂^2 converges fastest and has the lowest variance, and (iii) μ̂^3 is worse than μ̂^2, but much better than the diagonal estimator μ̂^1. Note, moreover, that unlike the U-statistic estimator μ̂^2, the reduced set based estimator μ̂^3 can be used with a fixed storage budget even if we perform a sequence of function applications, a situation naturally appearing in the context of probabilistic programming.
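To make the constructions concrete, the following sketch builds the expansions behind μ̂^1 and μ̂^2 for f(x, y) = x·y. The 0.5 in N(3; 0.5) is read here as the variance (an assumption; the notation leaves it ambiguous), and the resulting (points, weights) pairs can be compared in RKHS norm with a routine like the rkhs_dist2 sketch earlier.

```python
import numpy as np

N = 200
x = np.random.normal(3, np.sqrt(0.5), N)
y = np.random.normal(4, np.sqrt(0.5), N)

# Diagonal estimator: expansion points f(x_i, y_i) with weights 1/N.
z_diag, w_diag = x * y, np.full(N, 1.0 / N)

# U-statistic estimator: points f(x_i, y_j) over all N^2 pairs, weights 1/N^2.
z_u = (x[:, None] * y[None, :]).ravel()
w_u = np.full(N * N, 1.0 / N ** 2)

# Reduced-set estimator: points f(x_i, y_j) over the n x n reduced expansions
# with product weights w_i * u_j, once w and u come from a reduced set method.
```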
Schölkopf et al. (2015) prove the consistency of μ̂^3 only for a rather limited case, when the points of the reduced expansions {x_i}_{i=1}^n and {y_i}_{i=1}^n are i.i.d. copies of X and Y, respectively, and the weights {(w_i, u_i)}_{i=1}^n are constants. Using our new results we will prove in Section 3.1 the consistency of μ̂^3 under fairly general conditions, even in the case when both expansion points and weights are interdependent random variables.
Figure 1: Error of kernel mean estimators for basic arithmetic functions of two variables, X·Y, X/Y and X^Y, as a function of sample size N. The U-statistic estimator μ̂^2 works best, closely followed by the proposed estimator μ̂^3, which outperforms the diagonal estimator μ̂^1.
1.4
Other sources of non-i.i.d. samples
Although our discussion above focuses on reduced expansion set methods, there are other popular
algorithms that produce KME expansions where the samples are not i.i.d. Here we briefly discuss
several examples, emphasising that our selection is not comprehensive. They provide additional
motivation for stating convergence guarantees in the most general setting possible.
An important notion in probability theory is that of a conditional distribution, which can also be
represented using KME (Song et al., 2009). With this representation the standard laws of probability,
such as sum, product, and Bayes' rules, can be stated using KME (Fukumizu et al., 2013). Applying
those rules results in KME estimators with strong dependencies between samples and their weights.
Another possibility is that even though i.i.d. samples are available, they may not produce the best
estimator. Various approaches, such as kernel herding (Chen et al., 2010; Lacoste-Julien et al.,
2015), attempt to produce a better KME estimator by actively generating pseudo-samples that are not
i.i.d. from the underlying distribution.
2
Main results
This section contains our main results regarding consistency and finite sample guarantees for the estimator μ̂_{f(X)} defined in (1). They are based on the convergence of μ̂_X and avoid simplifying assumptions about its structure.
2.1
Consistency
If k_x is c₀-universal (see Sriperumbudur et al. (2011)), consistency of μ̂_{f(X)} can be shown in a rather general setting.

Theorem 1. Let 𝒳 and 𝒵 be compact Hausdorff spaces equipped with their Borel σ-algebras, f : 𝒳 → 𝒵 a continuous function, k_x, k_z continuous kernels on 𝒳, 𝒵 respectively. Assume k_x is c₀-universal and that there exists C such that Σ_i |w_i| ≤ C independently of n. The following holds: if μ̂_X^{k_x} → μ_X^{k_x}, then μ̂_{f(X)}^{k_z} → μ_{f(X)}^{k_z} as n → ∞.
Proof. Let P be the distribution of X and P̂_n = Σ_{i=1}^n w_i δ_{x_i}. Define a new kernel on 𝒳 by k̃_x(x1, x2) := k_z(f(x1), f(x2)). 𝒳 is compact and {P̂_n | n ∈ ℕ} ∪ {P} is a bounded set (in total variation norm) of finite measures, because ‖P̂_n‖_TV = Σ_{i=1}^n |w_i| ≤ C. Furthermore, k_x is continuous and c₀-universal. Using Corollary 52 of Simon-Gabriel and Schölkopf (2016) we conclude that μ̂_X^{k_x} → μ_X^{k_x} implies that P̂_n converges weakly to P. Now, k_z and f being continuous, so is k̃_x. Thus, if P̂_n converges weakly to P, then μ̂_X^{k̃_x} → μ_X^{k̃_x} (Simon-Gabriel and Schölkopf, 2016, Theorem 44, Points (1) and (iii)). Overall, μ̂_X^{k_x} → μ_X^{k_x} implies μ̂_X^{k̃_x} → μ_X^{k̃_x}. We conclude the proof by showing that convergence in H_{k̃_x} leads to convergence in H_{k_z}:

‖μ̂_{f(X)}^{k_z} − μ_{f(X)}^{k_z}‖²_{k_z} = ‖μ̂_X^{k̃_x} − μ_X^{k̃_x}‖²_{k̃_x} → 0.

For a detailed version of the above, see Appendix A.
The continuity assumption is rather unrestrictive. All kernels and functions defined on a discrete space are continuous with respect to the discrete topology, so the theorem applies in this case. For 𝒳 = ℝ^d, many kernels used in practice are continuous, including Gaussian, Laplacian, Matérn and other radial kernels. The slightly limiting factor of this theorem is that k_x must be c₀-universal, which often can be tricky to verify. However, most standard kernels, including all radial, non-constant kernels, are c₀-universal (see Sriperumbudur et al., 2011). The assumption that the input domain is compact is satisfied in most applications, since any measurements coming from physical sensors are contained in a bounded range. Finally, the assumption that Σ_i |w_i| ≤ C can be enforced, for instance, by applying a suitable regularization in reduced set methods.
2.2
Finite sample guarantees
Theorem 1 guarantees that the estimator μ̂_{f(X)} converges to μ_{f(X)} when μ̂_X converges to μ_X. However, it says nothing about the speed of convergence. In this section we provide a convergence rate when working with Matérn kernels, which are of the form

k_x^s(x, x′) = (2^{1−s} / Γ(s)) ‖x − x′‖_2^{s−d/2} B_{d/2−s}(‖x − x′‖_2),    (2)
where B_α is a modified Bessel function of the third kind (also known as Macdonald function) of order α, Γ is the Gamma function and s > d/2 is a smoothness parameter. The RKHS induced by k_x^s is the Sobolev space W_2^s(ℝ^d) (Wendland, 2004, Theorem 6.13 & Chap. 10) containing s-times differentiable functions.
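For concreteness, (2) can be evaluated numerically; a minimal sketch using scipy's Macdonald function kv follows. The r → 0 limit is added by hand using the small-argument asymptotics of B_ν, and the test values are our own.

```python
import numpy as np
from scipy.special import gamma, kv   # kv is the Macdonald function B_alpha

def matern(x, x2, s, d):
    r = np.linalg.norm(np.asarray(x) - np.asarray(x2))
    if r == 0:
        # limit of Eq. (2) as r -> 0, via K_nu(r) ~ Gamma(|nu|)/2 * (2/r)^|nu|
        return 2 ** (-d / 2) * gamma(s - d / 2) / gamma(s)
    return 2 ** (1 - s) / gamma(s) * r ** (s - d / 2) * kv(d / 2 - s, r)

print(matern([0.0], [0.5], s=2, d=1))   # one kernel evaluation on the line
```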
The finite-sample bound of Theorem 2 is based on the analysis of Kanagawa et al. (2016), which requires the following assumptions:
Assumptions 1. Let X be a random variable over 𝒳 = ℝ^d with distribution P and let X̂ = {(x_i, w_i)}_{i=1}^n be random variables over 𝒳^n × ℝ^n with joint distribution S. There exists a probability distribution Q with full support on ℝ^d and a bounded density, satisfying the following properties:

(i) P has a bounded density function w.r.t. Q;
(ii) there is a constant D > 0 independent of n, such that

E_S[(1/n) Σ_{i=1}^n g²(x_i)] ≤ D ‖g‖²_{L2(Q)}    for all g ∈ L2(Q).
These assumptions were shown to be fairly general and we refer to Kanagawa et al. (2016, Section 4.1) for various examples where they are met. Next we state the main result of this section.

Theorem 2. Let 𝒳 = ℝ^d, 𝒵 = ℝ^{d′}, and f : 𝒳 → 𝒵 be an α-times differentiable function (α ∈ ℕ₊). Take s1 > d/2 and s2 > d′ such that s1, s2/2 ∈ ℕ₊. Let k_x^{s1} and k_z^{s2} be Matérn kernels over 𝒳 and 𝒵 respectively as defined in (2). Assume X ∼ P and X̂ = {(x_i, w_i)}_{i=1}^n ∼ S satisfy Assumptions 1. Moreover, assume that P and the marginals of x_1, . . . , x_n have a common compact support. Suppose that, for some constants b > 0 and 0 < c ≤ 1/2:

(i) E_S[‖μ̂_X − μ_X‖²_{k_x^{s1}}] = O(n^{−2b});
(ii) Σ_{i=1}^n w_i² = O(n^{−2c}) (with probability 1).

Let θ = min(s2/(2s1), α/s1, 1) and assume θb − (1/2 − c)(1 − θ) > 0. Then

E_S[‖μ̂_{f(X)} − μ_{f(X)}‖²_{k_z^{s2}}] = O((log n)^{d′} n^{−2(θb − (1/2 − c)(1 − θ))}).    (3)
Before we provide a short sketch of the proof, let us briefly comment on this result. As a benchmark,
remember that when $x_1, \ldots, x_n$ are i.i.d. observations from $X$ and $\hat{X} = \{(x_i, 1/n)\}_{i=1}^n$, we get
$\|\hat{\mu}_{f(X)} - \mu_{f(X)}\|^2 = O_P(n^{-1})$, which was recently shown to be a minimax optimal rate (Tolstikhin
et al., 2016). How do we compare to this benchmark? In this case we have $b = c = 1/2$ and our rate
is defined by $\theta$. If $f$ is smooth enough, say $\alpha > d/2 + 1$, and by setting $s_2 > 2 s_1 = 2\alpha$, we recover
the $O(n^{-1})$ rate up to an extra $(\log n)^{d'}$ factor.
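The interplay between b, c and θ is easy to explore numerically; the small helper below (our own, not from the paper) returns the exponent γ such that the bound (3) reads O((log n)^{d'} n^{−γ}).

    def rate_exponent(s1, s2, alpha, b, c):
        # gamma such that the bound (3) is O((log n)^{d'} * n^{-gamma}).
        theta = min(s2 / (2.0 * s1), alpha / s1, 1.0)
        return 2.0 * (theta * b - (0.5 - c) * (1.0 - theta))

    # i.i.d. benchmark: b = c = 1/2; with alpha = s1 and s2 > 2*s1 we get theta = 1,
    # hence gamma = 2b = 1, i.e. the O(1/n) rate up to the (log n)^{d'} factor.
    print(rate_exponent(s1=2, s2=5, alpha=2, b=0.5, c=0.5))   # -> 1.0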
However, Theorem 2 applies to much more general settings. Importantly, it makes no i.i.d. assumptions on the data points and weights, allowing for complex interdependences. Instead, it requires the
convergence of the estimator $\hat{\mu}_X$ to the embedding $\mu_X$ to be sufficiently fast. On the downside, the
upper bound is affected by the smoothness of $f$, even in the i.i.d. setting: if $\alpha \le d/2$ the rate will
become slower, as $\theta = \alpha/s_1$. Also, the rate depends both on $d$ and $d'$. Whether these are artefacts of
our proof remains an open question.
Proof. Here we sketch the main ideas of the proof and develop the details in Appendix C. Throughout
the proof, $C$ will designate a constant that depends neither on the sample size $n$ nor on the variable $R$
(to be introduced). $C$ may however change from line to line. We start by showing that:
$$\mathbb{E}_S \big\| \hat{\mu}^{k_z}_{f(X)} - \mu^{k_z}_{f(X)} \big\|^2_{k_z} = (2\pi)^{d'/2} \int_{\mathcal{Z}} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 \, dz, \qquad (4)$$
where $h$ is a Matérn kernel over $\mathcal{Z}$ with smoothness parameter $s_2/2$. Second, we upper bound the
integrand by roughly imitating the proof idea of Theorem 1 from Kanagawa et al. (2016). This
eventually yields:
$$\mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 \le C n^{-2\varepsilon}, \qquad (5)$$
where $\varepsilon := \theta b - (1/2 - c)(1 - \theta)$. Unfortunately, this upper bound does not depend on $z$ and cannot
be integrated over the whole $\mathcal{Z}$ in (4). Denoting $B_R$ the ball of radius $R$, centred on the origin of
$\mathcal{Z}$, we thus decompose the integral in (4) as:
$$\int_{\mathcal{Z}} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 dz
= \int_{B_R} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 dz
+ \int_{\mathcal{Z} \setminus B_R} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 dz.$$
On $B_R$ we upper bound the integral by (5) times the ball's volume (which grows like $R^{d'}$):
$$\int_{B_R} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 dz \le C R^{d'} n^{-2\varepsilon}. \qquad (6)$$
On $\mathcal{Z} \setminus B_R$, we upper bound the integral by a value that decreases with $R$, which is of the form:
$$\int_{\mathcal{Z} \setminus B_R} \mathbb{E}_S \big( [\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z) \big)^2 dz \le C n^{1-2c} (R - C')^{s_2 - 2} e^{-2(R - C')} \qquad (7)$$
with $C' > 0$ being a constant smaller than $R$. In essence, this upper bound decreases with $R$ because
$[\hat{\mu}^h_{f(X)} - \mu^h_{f(X)}](z)$ decays with the same speed as $h$ when $\|z\|$ grows indefinitely. We are now left
with two rates, (6) and (7), which respectively increase and decrease with growing $R$. We complete
the proof by balancing these two terms, which results in setting $R \sim (\log n)^{1/2}$.
3 Functions of Multiple Arguments
The previous section applies to functions f of one single variable X. However, we can apply its
results to functions of multiple variables if we take the argument X to be a tuple containing multiple
values. In this section we discuss how to do it using two input variables from spaces X and Y, but
the results also apply to more inputs. To be precise, our input space changes from X to X ? Y, input
random variable from X to (X, Y ), and the kernel on the input space from kx to kxy .
To apply our results from Section 2, all we need is a consistent estimator $\hat{\mu}_{(X,Y)}$ of the joint embedding
$\mu_{(X,Y)}$. There are different ways to get such an estimator. One way is to sample $(x'_i, y'_i)$ i.i.d. from
the joint distribution of $(X, Y)$ and construct the usual empirical estimator, or approximate it using
reduced set methods. Alternatively, we may want to construct $\hat{\mu}_{(X,Y)}$ based only on consistent
estimators of $\mu_X$ and $\mu_Y$. For example, this is how $\hat{\mu}_3$ was defined in Section 1.3. Below we show
that this can indeed be done if X and Y are independent.
3.1 Application to Section 1.3
Following Schölkopf et al. (2015), we consider two independent random variables $X \sim P_x$ and
$Y \sim P_y$. Their joint distribution is $P_x \otimes P_y$. Consistent estimators of their embeddings are
given by $\hat{\mu}_X = \sum_{i=1}^{n} w_i\, k_x(x_i, \cdot)$ and $\hat{\mu}_Y = \sum_{j=1}^{n} u_j\, k_y(y_j, \cdot)$. In this section we show that
$\hat{\mu}_{f(X,Y)} = \sum_{i,j=1}^{n} w_i u_j\, k_z\big(f(x_i, y_j), \cdot\big)$ is a consistent estimator of $\mu_{f(X,Y)}$.
We choose a product kernel $k_{xy}\big((x_1, y_1), (x_2, y_2)\big) = k_x(x_1, x_2)\, k_y(y_1, y_2)$, so the corresponding
RKHS is a tensor product $\mathcal{H}_{k_{xy}} = \mathcal{H}_{k_x} \otimes \mathcal{H}_{k_y}$ (Steinwart and Christmann, 2008, Lemma 4.6) and
the mean embedding of the product random variable $(X, Y)$ is a tensor product of their marginal
mean embeddings $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$. With consistent estimators for the marginal embeddings we
can estimate the joint embedding using their tensor product
$$\hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y = \sum_{i,j=1}^{n} w_i u_j\, k_x(x_i, \cdot) \otimes k_y(y_j, \cdot) = \sum_{i,j=1}^{n} w_i u_j\, k_{xy}\big((x_i, y_j), (\cdot, \cdot)\big).$$
If points are i.i.d. and $w_i = u_i = 1/n$, this reduces to the U-statistic estimator $\hat{\mu}_2$ from Section 1.3.
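In code, forming this estimator amounts to taking all point pairs together with the outer product of the weight vectors; a minimal sketch with hypothetical names:

    import numpy as np

    def joint_expansion(x, w, y, u):
        # Expansion of mu_hat_{(X,Y)} = mu_hat_X (tensor) mu_hat_Y:
        # points are all pairs (x_i, y_j), weights are the outer products w_i * u_j.
        pairs = [(xi, yj) for xi in x for yj in y]      # n^2 expansion points
        weights = np.outer(w, u).ravel()                # omega_l = w_i * u_j
        return pairs, weights

    x, w = np.array([0.0, 1.0]), np.array([0.5, 0.5])
    y, u = np.array([2.0, 3.0]), np.array([0.3, 0.7])
    pairs, weights = joint_expansion(x, w, y, u)
    print(pairs, weights)                               # 4 pairs; weights sum to 1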
Lemma 3. Let $(s_n)_n$ be any positive real sequence converging to zero. Suppose $k_{xy} = k_x k_y$ is a
product kernel, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$, and $\hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y$. Then:
$$\begin{cases} \|\hat{\mu}_X - \mu_X\|_{k_x} = O(s_n) \\ \|\hat{\mu}_Y - \mu_Y\|_{k_y} = O(s_n) \end{cases} \quad \text{implies} \quad \big\|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} = O(s_n).$$
Proof. For a detailed expansion of the first inequality see Appendix B.
$$\big\|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} \le \|\mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y} + \|\mu_Y\|_{k_y} \|\hat{\mu}_X - \mu_X\|_{k_x} + \|\hat{\mu}_X - \mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y}$$
$$= O(s_n) + O(s_n) + O(s_n^2) = O(s_n).$$

Corollary 4. If $\hat{\mu}_X \to \mu_X$ and $\hat{\mu}_Y \to \mu_Y$ as $n \to \infty$, then $\hat{\mu}_{(X,Y)} \to \mu_{(X,Y)}$ as $n \to \infty$.
Together with the results from Section 2 this lets us reason about estimators resulting from applying
functions to multiple independent random variables. Write
$$\hat{\mu}^{k_{xy}}_{XY} = \sum_{i,j=1}^{n} w_i u_j\, k_{xy}\big((x_i, y_j), \cdot\big) = \sum_{\ell=1}^{n^2} \omega_\ell\, k_{xy}(\xi_\ell, \cdot),$$
where $\ell$ enumerates the $(i, j)$ pairs and $\xi_\ell = (x_i, y_j)$, $\omega_\ell = w_i u_j$. Now if $\hat{\mu}^{k_x}_X \to \mu^{k_x}_X$
and $\hat{\mu}^{k_y}_Y \to \mu^{k_y}_Y$, then $\hat{\mu}^{k_{xy}}_{XY} \to \mu^{k_{xy}}_{(X,Y)}$ (according to Corollary 4) and Theorem 1 shows that
$\sum_{i,j=1}^{n} w_i u_j\, k_z\big(f(x_i, y_j), \cdot\big)$ is consistent as well. Unfortunately, we cannot apply Theorem 2 to get
the speed of convergence, because a product of Matérn kernels is not a Matérn kernel any more.
One downside of this overall approach is that the number of expansion points used for the estimation
of the joint increases exponentially with the number of arguments of f . This can lead to prohibitively
large computational costs, especially if the result of such an operation is used as an input to another
function of multiple arguments. To alleviate this problem, we may use reduced expansion set methods
before or after applying f , as we did for example in Section 1.2.
To conclude this section, let us summarize the implications of our results for several practical scenarios
that should be distinguished.
- If we have separate samples from two random variables X and Y, then our results justify
how to provide an estimate of the mean embedding of f(X, Y) provided that X and Y are
independent. The samples themselves need not be i.i.d.; we can also work with weighted
samples computed, for instance, by a reduced set method.
- How about dependent random variables? For instance, imagine that $Y = -X$, and
$f(X, Y) = X + Y$. Clearly, in this case the distribution of f(X, Y) is a delta measure on 0, and there is no way to predict this from separate samples of X and Y. However,
it should be stressed that our results (consistency and finite sample bound) apply even to
the case where X and Y are dependent. In that case, however, they require a consistent
estimator of the joint embedding $\mu_{(X,Y)}$.
- It is also sufficient to have a reduced set expansion of the embedding of the joint distribution.
This setting may sound strange, but it potentially has significant applications. Imagine that
one has a large database of user data, sampled from a joint distribution. If we expand the
joint's embedding in terms of synthetic expansion points using a reduced set construction
method, then we can pass on these (weighted) synthetic expansion points to a third party
without revealing the original data. Using our results, the third party can nevertheless
perform arbitrary continuous functional operations on the joint distribution in a consistent
manner.
4 Conclusion and future work
This paper provides a theoretical foundation for using kernel mean embeddings as approximate
representations of random variables in scenarios where we need to apply functions to those random
variables. We show that for continuous functions f (including all functions on discrete domains),
consistency of the mean embedding estimator of a random variable X implies consistency of the
mean embedding estimator of f (X). Furthermore, if the kernels are Matérn and the function f
is sufficiently smooth, we provide bounds on the convergence rate. Importantly, our results apply
beyond i.i.d. samples and cover estimators based on expansions with interdependent points and
weights. One interesting future direction is to improve the finite-sample bounds and extend them to
general radial and/or translation-invariant kernels.
Our work is motivated by the field of probabilistic programming. Using our theoretical results,
kernel mean embeddings can be used to generalize functional operations (which lie at the core of
all programming languages) to distributions over data types in a principled manner, by applying the
operations to the points or approximate kernel expansions. This is in principle feasible for any data
type provided a suitable kernel function can be defined on it. We believe that the approach holds
significant potential for future probabilistic programming systems.
Acknowledgements
We thank Krikamol Muandet for providing the code used to generate Figure 1, Paul Rubenstein,
Motonobu Kanagawa and Bharath Sriperumbudur for very useful discussions, and our anonymous
reviewers for their valuable feedback. Carl-Johann Simon-Gabriel is supported by a Google European
Fellowship in Causal Inference.
References

R. A. Adams and J. J. F. Fournier. Sobolev Spaces. Academic Press, 2003.
C. Bennett and R. Sharpley. Interpolation of Operators. Pure and Applied Mathematics. Elsevier Science, 1988.
A. Berlinet and C. Thomas-Agnan. RKHS in probability and statistics. Springer, 2004.
Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding. In UAI, 2010.
K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels. Journal of Machine Learning Research, 14:3753-3783, 2013.
I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series, and products. Elsevier/Academic Press, Amsterdam, 2007. Edited by Alan Jeffrey and Daniel Zwillinger.
M. Kalos and P. Whitlock. Monte Carlo Methods. Wiley, 2008.
M. Kanagawa, B. K. Sriperumbudur, and K. Fukumizu. Convergence guarantees for kernel-based quadrature rules in misspecified settings. arXiv:1605.07254 [stat], 2016.
Y. Katznelson. An Introduction to Harmonic Analysis. Cambridge University Press, 2004.
M. Korzeń and S. Jaroszewicz. PaCAL: A Python package for arithmetic computations with random variables. Journal of Statistical Software, 57(10), 2014.
S. Lacoste-Julien, F. Lindsten, and F. Bach. Sequential kernel herding: Frank-Wolfe optimization for particle filtering. In Artificial Intelligence and Statistics, volume 38, pages 544-552, 2015.
A. Mathai. A review of the different techniques used for deriving the exact distributions of multivariate test criteria. Sankhyā: The Indian Journal of Statistics, Series A, pages 39-60, 1973.
K. McKinley. Programming the world of uncertain things (keynote). In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 1-2, 2016.
D. Milios. Probability Distributions as Program Variables. PhD thesis, University of Edinburgh, 2009.
S. Poisson. Recherches sur la probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités. 1837.
B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
B. Schölkopf, K. Muandet, K. Fukumizu, S. Harmeling, and J. Peters. Computing functions of random variables via reproducing kernel Hilbert space representations. Statistics and Computing, 25(4):755-766, 2015.
C. Scovel, D. Hush, I. Steinwart, and J. Theiler. Radial kernels and their reproducing kernel Hilbert spaces. Journal of Complexity, 26, 2014.
C.-J. Simon-Gabriel and B. Schölkopf. Kernel distribution embeddings: Universal kernels, characteristic kernels and kernel metrics on distributions. Technical report, Max Planck Institute for Intelligent Systems, 2016.
A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In ALT, 2007.
L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In International Conference on Machine Learning, pages 1-8, 2009.
M. D. Springer. The Algebra of Random Variables. Wiley, 1979.
B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517-1561, 2010.
B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389-2410, 2011.
I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, 2008.
I. Steinwart and C. Scovel. Mercer's Theorem on General Domains: On the Interaction between Measures, Kernels, and RKHSs. Constructive Approximation, 35(3):363-417, 2012.
I. Tolstikhin, B. Sriperumbudur, and K. Muandet. Minimax Estimation of Kernel Mean Embeddings. arXiv:1602.04361 [math, stat], 2016.
H. Wendland. Scattered Data Approximation. Cambridge University Press, 2004.
R. Williamson. Probabilistic Arithmetic. PhD thesis, University of Queensland, 1989.
6,131 | 6,546 | Multiple-Play Bandits in the Position-Based Model
Paul Lagrée*
LRI, Université Paris Sud
Université Paris Saclay
paul.lagree@u-psud.fr
Claire Vernade*
LTCI, CNRS, Télécom ParisTech
Université Paris Saclay
claire.vernade@telecom-paristech.fr
Olivier Cappé
LTCI, CNRS
Télécom ParisTech
Abstract
Sequentially learning to place items in multi-position displays or lists is a task that
can be cast into the multiple-play semi-bandit setting. However, a major concern in
this context is when the system cannot decide whether the user feedback for each
item is actually exploitable. Indeed, much of the content may have been simply
ignored by the user. The present work proposes to exploit available information
regarding the display position bias under the so-called Position-based click model
(PBM). We first discuss how this model differs from the Cascade model and its
variants considered in several recent works on multiple-play bandits. We then
provide a novel regret lower bound for this model as well as computationally
efficient algorithms that display good empirical and theoretical performance.
1 Introduction
During their browsing experience, users are constantly provided, without having asked for it, with
clickable content spread over web pages. While users interact on a website, they send clicks to the
system for a very limited selection of the clickable content. Hence, they leave every unclicked item with
an equivocal answer: the system does not know whether the content was really deemed irrelevant
or simply ignored. In contrast, in traditional multi-armed bandit (MAB) models, the learner makes
actions and observes at each round the reward corresponding to the chosen action. In the so-called
multiple play semi-bandit setting, when users are presented with L items, they are assumed to provide
feedback for each of those items.
Several variants of this basic setting have been considered in the bandit literature. The necessity
for the user to provide feedback for each item has been called into question in the context of the
so-called Cascade Model [8, 14, 6] and its extensions such as the Dependent Click Model (DCM)
[20]. Both models are particularly suited for search contexts, where the user is assumed to be looking
for something relative to a query. Consequently, the learner expects explicit feedback: in the Cascade
Model each valid observation sequence must be either all zeros or terminated by a one, such that no
ambiguity is left on the evaluation of the presented items, while multiple clicks are allowed in the
DCM thus leaving some ambiguity on the last zeros of a sequence.
In the Cascade Model, the positions of the items are not taken into account in the reward process
because the learner is assumed to obtain a click as long as the interesting item belongs to the list.
Indeed, there are even clear indications that the optimal strategy in a learning context consists in
showing the most relevant items at the end of the list in order to maximize the amount of observed
feedback [14] ? which is counter-intuitive in recommendation tasks.
To overcome these limitations, [6] introduces weights ? to be defined by the learner ? that are
attributed to positions in the list, with a click on position l ? {1, . . . , L} providing a reward wl ,
where the sequence (wl )l is decreasing to enforce the ranking behavior. However, no rule is given for
* The two authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
setting the weights $(w_l)_l$ that control the order of importance of the positions. The authors propose an
algorithm based on KL-UCB [10] and prove a lower bound on the regret as well as an asymptotically
optimal upper bound.
Another way to address the limitations of the Cascade Model is to consider the DCM as in [20]. Here,
examination probabilities $v_l$ are introduced for each position $l$: conditionally on the event that the
user effectively scanned the list up to position $l$, he/she can choose to leave with probability $v_l$ and in
that case, the learner is aware of his/her departure. This framework naturally induces the necessity to
rank the items in the optimal order.
All previous models assume that a portion of the recommendation list is explicitly examined by the
user and hence that the learning algorithm eventually has access to rewards corresponding to the
unbiased user?s evaluation of each item. In contrast, we propose to analyze multiple-play bandits in
the Position-based model (PBM) [5]. In the PBM, each position in the list is also endowed with a
binary Examination variable [8, 19] which is equal to one only when the user paid attention to the
corresponding item. But this variable, that is independent of the user?s evaluation of the item, is not
observable. It allows to model situations where the user is not explicitly looking for specific content,
as in typical recommendation scenarios.
Compared to variants of the Cascade model, the PBM is challenging due to the censoring induced by
the examination variables: the learning algorithm observes actual clicks but non-clicks are always
ambiguous. Thus, combining observations made at different positions becomes a non-trivial statistical
task. Some preliminary ideas on how to address this issue appear in the supplementary material of
[13]. In this work, we provide a complete statistical study of stochastic multiple-play bandits with
semi-bandit feedback in the PBM.
We introduce the model and notations in Section 2 and provide the lower bound on the regret in
Section 3. In Section 4, we present two optimistic algorithms as well as a theoretical analysis of
their regret. In the last section dedicated to experiments, those policies are compared to several
benchmarks on both synthetic and realistic data.
2 Setting and Parameter Estimation
We consider the binary stochastic bandit model with $K$ Bernoulli-distributed arms. The model
parameters are the arm expectations $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$, which lie in $\Theta = (0, 1)^K$. We will
denote by $B(\theta)$ the Bernoulli distribution with parameter $\theta$ and by $d(p, q) := p \log(p/q) + (1 - p) \log((1 - p)/(1 - q))$ the Kullback-Leibler divergence from $B(p)$ to $B(q)$. At each round $t$, the
learner selects a list of $L$ arms, referred to as an action, chosen among the $K$ arms which are
indexed by $k \in \{1, \ldots, K\}$. The set of actions is denoted by $\mathcal{A}$ and thus contains $K!/(K - L)!$
ordered lists; the action selected at time $t$ will be denoted $A(t) = (A_1(t), \ldots, A_L(t))$.
The PBM is characterized by examination parameters $(\kappa_l)_{1 \le l \le L}$, where $\kappa_l$ is the probability that the
user effectively observes the item in position $l$ [5]. At round $t$, the selection $A(t)$ is shown to the user
and the learner observes the complete feedback, as in semi-bandit models, but the observation at
position $l$, $Z_l(t)$, is censored, being the product of two independent Bernoulli variables $Y_l(t)$ and $X_l(t)$,
where $Y_l(t) \sim B(\kappa_l)$ is non null when the user considered the item in position $l$ (which is unknown to
the learner) and $X_l(t) \sim B(\theta_{A_l(t)})$ represents the actual user feedback to the item shown in position
$l$. The learner receives a reward $r_{A(t)} = \sum_{l=1}^{L} Z_l(t)$, where $Z(t) = (X_1(t)Y_1(t), \ldots, X_L(t)Y_L(t))$
denotes the vector of censored observations at step $t$.
In the following, we will assume, without loss of generality, that $\theta_1 > \cdots > \theta_K$ and $\kappa_1 > \cdots >
\kappa_L > 0$, in order to simplify the notations. The fact that the sequences $(\theta_k)_k$ and $(\kappa_l)_l$ are decreasing
implies that the optimal list is $a^* = (1, \ldots, L)$. Denoting by $R(T) = \sum_{t=1}^{T} r_{a^*} - r_{A(t)}$ the regret
incurred by the learner up to time $T$, one has
$$\mathbb{E}[R(T)] = \sum_{t=1}^{T} \sum_{l=1}^{L} \kappa_l \big(\theta_{a^*_l} - \mathbb{E}[\theta_{A_l(t)}]\big) = \sum_{a \in \mathcal{A}} (\mu^* - \mu_a)\, \mathbb{E}[N_a(T)] = \sum_{a \in \mathcal{A}} \Delta_a\, \mathbb{E}[N_a(T)], \qquad (1)$$
where $\mu_a = \sum_{l=1}^{L} \kappa_l \theta_{a_l}$ is the expected reward of action $a$, $\mu^* = \mu_{a^*}$ is the best possible reward in
average, $\Delta_a = \mu^* - \mu_a$ the expected gap to optimality, and $N_a(T) = \sum_{t=1}^{T} \mathbb{1}\{A(t) = a\}$ is the
number of times action $a$ has been chosen up to time $T$.
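For concreteness, the censored feedback mechanism and the reward of (1) can be simulated in a few lines; the sketch below uses the parameter values of the simulations in Section 5 and variable names of our own choosing.

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.array([0.45, 0.35, 0.25, 0.15, 0.05])   # arm expectations, decreasing
    kappa = np.array([0.9, 0.6, 0.3])                  # examination probabilities, decreasing

    def pbm_round(action):
        # One round of the PBM: action is a list of L distinct arm indices (0-based).
        # Returns the censored observations Z_l = X_l * Y_l and the reward sum_l Z_l.
        Y = rng.random(len(action)) < kappa            # was position l examined?
        X = rng.random(len(action)) < theta[action]    # did the user like the item shown at l?
        Z = (X & Y).astype(int)
        return Z, int(Z.sum())

    print(pbm_round([0, 1, 2]))                        # the optimal action a* in 0-based indexing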
In the following, we assume that the examination parameters $(\kappa_l)_{1 \le l \le L}$ are known to the learner.
These can be estimated from historical data [5], using, for instance, the EM algorithm [9] (see also
Section 5). In most scenarios, it is realistic to assume that the content (e.g., ads in on-line advertising)
is changing much more frequently than the layout (web page design, for instance), making it possible
to have a good knowledge of the click-through biases associated with the display positions.

The main statistical challenge associated with the PBM is that one needs to obtain estimates and
confidence bounds for the components $\theta_k$ of $\theta$ from the available $B(\kappa_l \theta_k)$-distributed draws corresponding to occurrences of arm $k$ at various positions $l = 1, \ldots, L$ in the list. To this aim,
we define the following statistics: $S_{k,l}(t) = \sum_{s=1}^{t-1} Z_l(s)\, \mathbb{1}\{A_l(s) = k\}$, $S_k(t) = \sum_{l=1}^{L} S_{k,l}(t)$,
$N_{k,l}(t) = \sum_{s=1}^{t-1} \mathbb{1}\{A_l(s) = k\}$, $N_k(t) = \sum_{l=1}^{L} N_{k,l}(t)$. We further require bias-corrected versions
of the counts: $\tilde{N}_{k,l}(t) = \sum_{s=1}^{t-1} \kappa_l\, \mathbb{1}\{A_l(s) = k\}$ and $\tilde{N}_k(t) = \sum_{l=1}^{L} \tilde{N}_{k,l}(t)$.
At time $t$, and conditionally on the past actions $A(1)$ up to $A(t - 1)$, the Fisher information for $\theta_k$ is
given by $I(\theta_k) = \sum_{l=1}^{L} N_{k,l}(t) \kappa_l / \big(\theta_k (1 - \kappa_l \theta_k)\big)$ (see Appendix A). We cannot however estimate
$\theta_k$ using the maximum likelihood estimator since it has no closed form expression. Interestingly
though, the simple pooled linear estimator
$$\hat{\theta}_k(t) = S_k(t) / \tilde{N}_k(t), \qquad (2)$$
considered in the supplementary material to [13], is unbiased and has a (conditional) variance of
$\sigma^2(\theta_k) = \big(\sum_{l=1}^{L} N_{k,l}(t) \kappa_l \theta_k (1 - \kappa_l \theta_k)\big) / \big(\sum_{l=1}^{L} N_{k,l}(t) \kappa_l\big)^2$, which is close to optimal given the
Cramér-Rao lower bound. Indeed, $\sigma^2(\theta_k) I(\theta_k)$ is recognized as a ratio of a weighted arithmetic mean
to the corresponding weighted harmonic mean, which is known to be larger than one, but is upper
bounded by $1/(1 - \theta_k)$, irrespectively of the values of the $\kappa_l$'s. Hence, if, for instance, we can assume
that all $\theta_k$'s are smaller than one half, the loss with respect to the best unbiased estimator is no more
than a factor of two for the variance. Note that despite its simplicity, $\hat{\theta}_k(t)$ cannot be written as a
simple sum of conditionally independent increments divided by the number of terms and will thus
require specific concentration results.

It can be checked that when $\theta_k$ gets very close to one, $\hat{\theta}_k(t)$ is no longer close to optimal. This
observation also has a Bayesian counterpart that will be discussed in Section 5. Nevertheless, it is
always preferable to the "position-debiased" estimator $\big(\sum_{l=1}^{L} S_{k,l}(t)/\kappa_l\big)/N_k(t)$, which gets very
unreliable as soon as one of the $\kappa_l$'s gets very small.
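The statistics above and the estimator (2) translate directly into code; the following sketch (class and method names are ours) accumulates S_{k,l} and N_{k,l} and returns the pooled estimate.

    import numpy as np

    class PooledEstimator:
        # Tracks S_{k,l} and N_{k,l}; theta_hat_k = S_k / N_tilde_k as in Eq. (2).
        def __init__(self, K, kappa):
            self.kappa = np.asarray(kappa)
            self.S = np.zeros((K, len(kappa)))   # clicks of arm k observed at position l
            self.N = np.zeros((K, len(kappa)))   # number of times arm k shown at position l

        def update(self, action, Z):
            for l, k in enumerate(action):
                self.N[k, l] += 1
                self.S[k, l] += Z[l]

        def theta_hat(self):
            N_tilde = self.N @ self.kappa        # bias-corrected counts N_tilde_k
            with np.errstate(invalid="ignore"):  # arms never displayed yield NaN
                return self.S.sum(axis=1) / N_tilde

Feeding this estimator with the output of the simulator sketched above reproduces the pooling of observations across positions discussed in this section.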
3 Lower Bound on the Regret
In this section, we consider the fundamental asymptotic limits of learning performance for online
algorithms under the PBM. These cannot be deduced from earlier general results, such as those of
[11, 7], due to the censoring in the feedback associated to each action. We detail a simple and general
proof scheme, using the results of [12], that applies to the PBM, as well as to more general models.

Lower bounds on the regret rely on changes of measure: the question is how much can we mistake
the true parameters of the problem for others, when observing successive arms? With this in mind,
we will subscript all expectations and probabilities by the parameter value and indicate explicitly
that the quantities $\mu_a$, $a^*$, $\mu^*$, $\Delta_a$, introduced in Section 2, also depend on the parameter. For ease of
notation, we will still assume that $\theta$ is such that $a^*(\theta) = (1, \ldots, L)$.
3.1 Existing results for multiple-play bandit problems
Lower bounds on the regret will be proved for uniformly efficient algorithms, in the sense of [16]:

Definition 1. An algorithm is said to be uniformly efficient if for any bandit model parameterized by
$\theta$ and for all $\alpha \in (0, 1]$, its expected regret after $T$ rounds is such that $\mathbb{E}_\theta R(T) = o(T^\alpha)$.

For the multiple-play MAB, [2] obtained the following bound
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log(T)} \ge \sum_{k=L+1}^{K} \frac{\theta_L - \theta_k}{d(\theta_k, \theta_L)}. \qquad (3)$$
For the "learning to rank" problem where rewards follow the weighted Cascade Model with decreasing
weights $(w_l)_{l=1,\ldots,L}$, [6] derived the following bound
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge w_L \sum_{k=L+1}^{K} \frac{\theta_L - \theta_k}{d(\theta_k, \theta_L)}.$$
Perhaps surprisingly, this lower bound does not show any additional term corresponding to the
complexity of ranking the $L$ optimal arms. Indeed, the errors are still asymptotically dominated by
the need to discriminate irrelevant arms $(\theta_k)_{k>L}$ from the worst of the relevant arms, that is, $\theta_L$.
3.2 Lower bound step by step
Step 1: Computing the expected log-likelihood ratio. Denoting by $\mathcal{F}_{s-1}$ the $\sigma$-algebra generated
by the past actions and observations, we define the log-likelihood ratio for the two values $\theta$ and $\lambda$ of
the parameters by
$$\ell(t) := \sum_{s=1}^{t} \log \frac{p(Z(s); \theta \mid \mathcal{F}_{s-1})}{p(Z(s); \lambda \mid \mathcal{F}_{s-1})}. \qquad (4)$$

Lemma 2. For each position $l$ and each item $k$, define the local amount of information by
$$I_l(\theta_k, \lambda_k) := \mathbb{E}_\theta\Big[ \log \frac{p(Z_l(t); \theta)}{p(Z_l(t); \lambda)} \,\Big|\, A_l(t) = k \Big],$$
and its cumulated sum over the $L$ positions by $I_a(\theta, \lambda) := \sum_{l=1}^{L} \sum_{k=1}^{K} \mathbb{1}\{a_l = k\}\, I_l(\theta_k, \lambda_k)$. The
expected log-likelihood ratio is given by
$$\mathbb{E}_\theta[\ell(t)] = \sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, \mathbb{E}_\theta[N_a(t)]. \qquad (5)$$
The next proposition is adapted from Theorem 17 in Appendix B of [12] and provides a lower bound
on the expected log-likelihood ratio.

Proposition 3. Let $B(\theta) := \{\lambda \in \Theta \mid \forall l \le L,\ \lambda_l = \theta_l \text{ and } \mu^*(\theta) < \mu^*(\lambda)\}$ be the set of changes of
measure that improve over $\theta$ without modifying the optimal arms. Assuming that the expectation of
the log-likelihood ratio may be written as in (5), for any uniformly efficient algorithm one has
$$\forall \lambda \in B(\theta), \quad \liminf_{T \to \infty} \frac{\sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, \mathbb{E}_\theta[N_a(T)]}{\log(T)} \ge 1.$$
Step 2: Variational form of the lower bound. We are now ready to obtain the lower bound in a
form similar to that originally given by [11].

Theorem 4. The expected regret of any uniformly efficient algorithm satisfies
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge f(\theta), \quad \text{where } f(\theta) = \inf_{c \succeq 0} \sum_{a \in \mathcal{A}} \Delta_a(\theta)\, c_a, \ \text{s.t.} \ \inf_{\lambda \in B(\theta)} \sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, c_a \ge 1.$$

Theorem 4 is a straightforward consequence of Proposition 3, combined with the expression of the
expected regret given in (1). The vector $c \in \mathbb{R}_+^{|\mathcal{A}|}$, that satisfies the inequality $\sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, c_a \ge 1$,
represents the feasible values of $\mathbb{E}_\theta[N_a(T)]/\log(T)$.
Step 3: Relaxing the constraints. The bounds mentioned in Section 3.1 may be recovered from
Theorem 4 by considering only the changes of measure that affect a single suboptimal arm.

Corollary 5.
$$f(\theta) \ge \inf_{c \succeq 0} \sum_{a \in \mathcal{A}} \Delta_a(\theta)\, c_a, \quad \text{s.t.} \quad \sum_{a \in \mathcal{A}} \sum_{l=1}^{L} \mathbb{1}\{a_l = k\}\, I_l(\theta_k, \theta_L)\, c_a \ge 1, \quad \forall k \in \{L+1, \ldots, K\}.$$

Corollary 5 is obtained by restricting the constraint set $B(\theta)$ of Theorem 4 to $\cup_{k=L+1}^{K} B_k(\theta)$, where
$$B_k(\theta) := \{\lambda \in \Theta \mid \forall j \ne k,\ \lambda_j = \theta_j \text{ and } \mu^*(\theta) < \mu^*(\lambda)\}.$$
3.3 Lower bound for the PBM
Theorem 6. For the PBM, the following lower bound holds for any uniformly efficient algorithm:
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge \sum_{k=L+1}^{K} \min_{l \in \{1, \ldots, L\}} \frac{\Delta_{v_{k,l}}(\theta)}{d(\kappa_l \theta_k, \kappa_l \theta_L)}, \qquad (6)$$
where $v_{k,l} := (1, \ldots, l-1, k, l, \ldots, L-1)$.
Proof. First, note that for the PBM one has $I_l(\theta_k, \lambda_k) = d(\kappa_l \theta_k, \kappa_l \lambda_k)$. To get the expression given
in Theorem 6 from Corollary 5, we proceed as in [6], showing that the optimal coefficients $(c_a)_{a \in \mathcal{A}}$
can be non-zero only for the $K - L$ actions that put the suboptimal arm $k$ in the position $l$ that reaches
the minimum of $\Delta_{v_{k,l}}(\theta)/d(\kappa_l \theta_k, \kappa_l \theta_L)$. Nevertheless, this position does not always coincide with
$L$, the end of the displayed list, contrary to the case of [6] (see Appendix B for details).

The discrete minimization that appears in the r.h.s. of Theorem 6 corresponds to a fundamental
trade-off in the PBM. When trying to discriminate a suboptimal arm $k$ from the $L$ optimal ones, it
is desirable to put it higher in the list to obtain more information, as $d(\kappa_l \theta_k, \kappa_l \theta_L)$ is an increasing
function of $\kappa_l$. On the other hand, the gap $\Delta_{v_{k,l}}(\theta)$ is also increasing as $l$ gets closer to the top
of the list. The fact that $d(\kappa_l \theta_k, \kappa_l \theta_L)$ is not linear in $\kappa_l$ (it is a strictly convex function of $\kappa_l$)
renders the trade-off non trivial. It is easily checked that when $(\theta_1 - \theta_L)$ is very small, i.e. when all
optimal arms are equivalent, the optimal exploratory position is $l = 1$. In contrast, it is equal to $L$
when the gap $(\theta_L - \theta_{L+1})$ becomes very small. Note that by using that for any suboptimal $a \in \mathcal{A}$,
$\Delta_a(\theta) \ge \sum_{k=L+1}^{K} \sum_{l=1}^{L} \mathbb{1}\{a_l = k\}\, \kappa_l (\theta_L - \theta_k)$, one can lower bound the r.h.s. of Theorem 6 by
$\kappa_L \sum_{k=L+1}^{K} (\theta_L - \theta_k)/d(\kappa_L \theta_k, \kappa_L \theta_L)$, which is not tight in general.
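The right-hand side of (6) is simple to evaluate numerically, e.g. to plot the asymptotic lower bound in the experiments; a sketch with our own helper names:

    import numpy as np

    def kl_bern(p, q):
        # Bernoulli KL divergence d(p, q).
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def lower_bound_constant(theta, kappa):
        # Constant multiplying log(T) on the right-hand side of (6).
        L = len(kappa)
        mu_star = float(np.sum(kappa * theta[:L]))
        total = 0.0
        for k in range(L, len(theta)):          # suboptimal arms
            ratios = []
            for l in range(L):                  # candidate exploratory position
                a = list(range(L - 1))
                a.insert(l, k)                  # v_{k,l}: arm k inserted at position l
                gap = mu_star - float(np.sum(kappa * theta[a]))
                ratios.append(gap / kl_bern(kappa[l] * theta[k], kappa[l] * theta[L - 1]))
            total += min(ratios)
        return total

    theta = np.array([0.45, 0.35, 0.25, 0.15, 0.05])
    kappa = np.array([0.9, 0.6, 0.3])
    print(lower_bound_constant(theta, kappa))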
Remark 7. In the uncensored version of the PBM (i.e., if the $Y_l(t)$ were observed), the expression
of $I_a(\theta, \lambda)$ is simpler: it is equal to $\sum_{l=1}^{L} \sum_{k=1}^{K} \mathbb{1}\{A_l(t) = k\}\, \kappa_l\, d(\theta_k, \lambda_k)$ and leads to a lower
bound that coincides with (3). The uncensored PBM is actually statistically very close to the weighted
Cascade model and can be addressed by algorithms that do not assume knowledge of the $(\kappa_l)_l$ but
only of their ordering.
4 Algorithms
In this section we introduce two algorithms for the PBM. The first one uses the CUCB strategy of [4]
and requires a simple upper confidence bound for $\theta_k$ based on the estimator $\hat{\theta}_k(t)$ defined in (2).
The second algorithm is based on the Parsimonious Item Exploration scheme, PIE(L), proposed
in [6] and aims at reaching asymptotically optimal performance. For this second algorithm, termed
PBM-PIE, it is also necessary to use a multi-position analog of the well-known KL-UCB index [10]
that is inspired by a result of [17]. The analysis of PBM-PIE provided below confirms the relevance
of the lower bound derived in Section 3.
PBM-UCB. The first algorithm simply consists in sorting optimistic indices in decreasing order
and pulling the corresponding first $L$ arms [4]. To derive the expression of the required "exploration
bonus" we use an upper confidence bound for $\hat{\theta}_k(t)$ based on Hoeffding's inequality:
$$U_k^{UCB}(t, \delta) = \frac{S_k(t)}{\tilde{N}_k(t)} + \sqrt{\frac{N_k(t)}{\tilde{N}_k(t)}} \sqrt{\frac{\delta}{2 \tilde{N}_k(t)}},$$
for which a coverage bound is given by the next proposition, proven in Appendix C.

Proposition 8. Let $k$ be any arm in $\{1, \ldots, K\}$, then for any $\delta > 0$,
$$\mathbb{P}\big(U_k^{UCB}(t, \delta) \le \theta_k\big) \le e \lceil \delta \log(t) \rceil e^{-\delta}.$$
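In code, the index and its statistics read as follows (a sketch; names are ours):

    import numpy as np

    def pbm_ucb_index(S_k, N_kl, kappa, delta):
        # U_k^{UCB}(t, delta) = S_k / N_tilde_k + sqrt(N_k / N_tilde_k) * sqrt(delta / (2 N_tilde_k)).
        N_k = N_kl.sum()
        N_tilde = float(np.dot(N_kl, kappa))          # bias-corrected count N_tilde_k
        return S_k / N_tilde + np.sqrt(N_k / N_tilde) * np.sqrt(delta / (2.0 * N_tilde))

    # arm shown 100 times at each of 3 positions, 40 clicks in total, delta = log(t)
    print(pbm_ucb_index(S_k=40.0, N_kl=np.array([100.0, 100.0, 100.0]),
                        kappa=np.array([0.9, 0.6, 0.3]), delta=np.log(1000)))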
Following the ideas of [7], it is possible to obtain a logarithmic regret upper bound for this algorithm.
The proof is given in Appendix D.
Theorem 9. Let $C(\kappa) = \min_{1 \le l \le L} \big[\big(\sum_{j=1}^{l} \kappa_j\big)^2/l + \big(\sum_{j=1}^{l} \kappa_j\big)^2\big]/\kappa_L^2$ and $\Delta = \min_{a \in \sigma(a^*) \setminus a^*} \Delta_a$,
where $\sigma(a^*)$ denotes the permutations of the optimal action. Using PBM-UCB with $\delta = (1 +
\epsilon) \log(t)$ for some $\epsilon > 0$, there exists a constant $C_0(\epsilon)$ independent from the model parameters such
that the regret of PBM-UCB is bounded from above by
$$\mathbb{E}[R(T)] \le C_0(\epsilon) + 16(1+\epsilon)\, C(\kappa) \log T \left( \frac{L}{\Delta} + \sum_{k \notin a^*} \frac{1}{\kappa_L(\theta_L - \theta_k)} \right).$$
The presence of the term $L/\Delta$ in the above expression is attributable to limitations of the mathematical
analysis. On the other hand, the absence of the KL-divergence terms appearing in the lower bound (6)
is due to the use of an upper confidence bound based on Hoeffding's inequality.
PBM-PIE. We adapt the PIE($l$) algorithm introduced by [6] for the Cascade Model to the PBM in
Algorithm 1 below. At each round, the learner potentially explores at position $L$ with probability $1/2$,
using the following upper-confidence bound for each arm $k$:
$$U_k(t, \delta) = \sup\Big\{ q \in [\theta_k^{\min}(t), 1] \,:\, \sum_{l=1}^{L} N_{k,l}(t)\, d\Big(\frac{S_{k,l}(t)}{N_{k,l}(t)}, \kappa_l q\Big) \le \delta \Big\}, \qquad (7)$$
where $\theta_k^{\min}(t)$ is the minimum of the convex function $\phi : q \mapsto \sum_{l=1}^{L} N_{k,l}(t)\, d(S_{k,l}(t)/N_{k,l}(t), \kappa_l q)$.
In the other positions, $l = 1, \ldots, L-1$, PBM-PIE selects the arms with the largest estimates $\hat{\theta}_k(t)$.
The resulting algorithm is presented as Algorithm 1 below, denoting by $\mathcal{L}(t)$ the $L$-largest empirical
estimates, referred to as the "leaders" at round $t$.
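Since φ is convex, U_k(t, δ) can be computed by locating the minimizer of φ and then bisecting on [θ_k^min(t), 1]; the sketch below does exactly this, with clipping conventions and helper names of our own.

    import numpy as np
    from scipy.optimize import minimize_scalar, brentq

    def kl_bern(p, q):
        # Bernoulli KL divergence d(p, q), clipped away from {0, 1} for numerical safety.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        q = np.clip(q, 1e-12, 1 - 1e-12)
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def pbm_pie_index(S_kl, N_kl, kappa, delta):
        # U_k(t, delta) of Eq. (7): largest q with phi(q) <= delta, phi convex in q.
        mask = N_kl > 0
        phi = lambda q: np.sum(N_kl[mask] * kl_bern(S_kl[mask] / N_kl[mask], kappa[mask] * q))
        q_min = minimize_scalar(phi, bounds=(1e-6, 1 - 1e-6), method="bounded").x
        if phi(q_min) > delta:     # constraint set empty: fall back to the minimiser
            return q_min
        if phi(1.0) <= delta:      # constraint inactive on the whole interval
            return 1.0
        return brentq(lambda q: phi(q) - delta, q_min, 1.0)

    S_kl = np.array([30.0, 15.0, 5.0])
    N_kl = np.array([100.0, 100.0, 100.0])
    print(pbm_pie_index(S_kl, N_kl, np.array([0.9, 0.6, 0.3]), delta=np.log(1000)))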
Algorithm 1 - PBM-PIE
Require: $K$, $L$, observation probabilities $\kappa$, $\epsilon > 0$
Initialization: first $K$ rounds, play each arm at every position
for $t = K+1, \ldots, T$ do
    Compute $\hat{\theta}_k(t)$ for all $k$
    $\mathcal{L}(t) \leftarrow$ top-$L$ ordered arms by decreasing $\hat{\theta}_k(t)$
    $A_l(t) \leftarrow \mathcal{L}_l(t)$ for each position $l < L$
    $B(t) \leftarrow \{k \mid k \notin \mathcal{L}(t),\ U_k(t, (1+\epsilon)\log(T)) \ge \hat{\theta}_{\mathcal{L}_L(t)}(t)\}$
    if $B(t) = \emptyset$ then
        $A_L(t) \leftarrow \mathcal{L}_L(t)$
    else
        With probability 1/2, select $A_L(t)$ uniformly at random from $B(t)$, else $A_L(t) \leftarrow \mathcal{L}_L(t)$
    end if
    Play action $A(t)$ and observe feedback $Z(t)$; update $N_{k,l}(t+1)$ and $S_{k,l}(t+1)$.
end for
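A direct transcription of the selection step of Algorithm 1 (omitting the index computation and the count updates; names are ours) could look as follows.

    import numpy as np

    def pie_select(theta_hat, index, L, rng):
        # One action selection of Algorithm 1: exploit at positions 1..L-1,
        # possibly explore a challenger from B(t) at position L.
        leaders = np.argsort(-theta_hat)[:L]           # L(t), ordered by decreasing estimate
        action = list(leaders)
        worst_leader = leaders[-1]
        challengers = [k for k in range(len(theta_hat))
                       if k not in leaders and index[k] >= theta_hat[worst_leader]]
        if challengers and rng.random() < 0.5:         # B(t) nonempty: explore w.p. 1/2
            action[-1] = rng.choice(challengers)
        return action

    rng = np.random.default_rng(0)
    print(pie_select(np.array([0.40, 0.30, 0.20, 0.25]),
                     np.array([0.40, 0.30, 0.35, 0.45]), L=2, rng=rng))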
The $U_k(t, \delta)$ index defined in (7) aggregates observations from all positions, as in PBM-UCB, but
allows to build tighter confidence regions as shown by the next proposition proved in Appendix E.

Proposition 10. For all $\delta \ge L + 1$,
$$\mathbb{P}\big(U_k(t, \delta) < \theta_k\big) \le e^{L+1} \left( \frac{\lceil \delta \log(t) \rceil\, \delta}{L} \right)^{L} e^{-\delta}.$$
We may now state the main result of this section that provides an upper bound on the regret of
PBM-PIE.

Theorem 11. Using PBM-PIE with $\delta = (1+\epsilon) \log(t)$ and $\epsilon > 0$, for any $\eta < \min_{k<K} (\theta_k -
\theta_{k+1})/2$, there exist problem-dependent constants $C_1(\eta)$, $C_2(\epsilon, \eta)$, $C_3(\epsilon)$ and $\beta(\epsilon, \eta)$ such that
$$\mathbb{E}[R(T)] \le (1+\epsilon)^2 \log(T) \sum_{k=L+1}^{K} \frac{\kappa_L (\theta_L - \theta_k)}{d\big(\kappa_L \theta_k, \kappa_L (\theta_L - \eta)\big)} + C_1(\eta) + \frac{C_2(\epsilon, \eta)}{T^{\beta(\epsilon, \eta)}} + C_3(\epsilon).$$
The proof of this result is provided in Appendix E. Comparing to the expression in (6), Theorem 11
shows that PBM-PIE reaches asymptotically optimal performance when the optimal exploring
position is indeed located at index $L$. Otherwise, there is a gap, caused by the fact that the
exploring position is fixed beforehand and not adapted from the data.
We conclude this section by a quick description of two other algorithms that will be used in the
experimental section to benchmark our results.
Ranked Bandits (RBA-KL-UCB). The state-of-the-art algorithm for the sequential "learning to
rank" problem was proposed by [18]. It runs one bandit algorithm per position, each one being
entitled to choose the best suited arm at its rank. The underlying bandit algorithm that runs in each
position is left to the choice of the user, the better the policy the lower the regret can be. If the bandit
algorithm at position l selects an arm already chosen at a higher position, it receives a reward of zero.
Consequently, the bandit algorithm operating at position l tends to focus on the estimation of l-th
best arm. In the next section, we use as benchmark the Ranked Bandits strategy using the KL-UCB
algorithm [10] as the per-position bandit.
PBM-TS. The observations $Z_l(t)$ are censored Bernoulli variables, which results in a posterior that does
not belong to a standard family of distributions. [13] suggest a version of Thompson Sampling called
"Bias Corrected Multiple Play TS" (or BC-MP-TS) that approximates the true posterior by a Beta
distribution. We observed in experiments that for parameter values close to one, this algorithm does
not explore enough. In Figure 1(a), we show this phenomenon for $\theta = (0.95, 0.85, 0.75, 0.65, 0.55)$.
The true posterior for the parameter $\theta_k$ at time $t$ may be written as a product of truncated scaled beta
distributions
$$\pi_t(\theta_k) \propto \prod_l \theta_k^{\alpha_{k,l}(t)} (1 - \kappa_l \theta_k)^{\beta_{k,l}(t)},$$
where $\alpha_{k,l}(t) = S_{k,l}(t)$ and $\beta_{k,l}(t) = N_{k,l}(t) - S_{k,l}(t)$. To draw from this exact posterior, we use rejection sampling with proposal distribution $\mathrm{Beta}(\alpha_{k,m}(t), \beta_{k,m}(t))/\kappa_m$, where
$m = \arg\max_{1 \le l \le L} (\alpha_{k,l}(t) + \beta_{k,l}(t))$.
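A possible implementation of this rejection sampler is sketched below; note that, for simplicity, we use Beta(α_{k,m}+1, β_{k,m}+1) as the proposal (a minor variant of the proposal stated above, and our own choice), for which the acceptance ratio in the code is guaranteed to lie in [0, 1].

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_pbm_posterior(S_kl, N_kl, kappa):
        # Rejection sampler for pi_t(theta) proportional to
        # prod_l theta^{alpha_l} * (1 - kappa_l * theta)^{beta_l}.
        # Proposal (our variant): theta = v / kappa_m, v ~ Beta(alpha_m + 1, beta_m + 1),
        # with m = argmax_l (alpha_l + beta_l); the acceptance ratio is then in [0, 1].
        alpha, beta = S_kl, N_kl - S_kl
        m = int(np.argmax(alpha + beta))
        A = alpha.sum()
        others = [l for l in range(len(kappa)) if l != m]
        while True:
            theta = rng.beta(alpha[m] + 1, beta[m] + 1) / kappa[m]
            if theta >= 1.0:
                continue  # proposal fell outside the support of the posterior
            accept = theta ** (A - alpha[m]) * np.prod((1 - kappa[others] * theta) ** beta[others])
            if rng.random() < accept:
                return theta

    print(sample_pbm_posterior(np.array([30.0, 15.0, 5.0]),
                               np.array([100.0, 100.0, 100.0]),
                               np.array([0.9, 0.6, 0.3])))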
5 Experiments

5.1 Simulations
In order to evaluate our strategies, a simple problem is considered in which $K = 5$, $L = 3$,
$\kappa = (0.9, 0.6, 0.3)$ and $\theta = (0.45, 0.35, 0.25, 0.15, 0.05)$. The arm expectations are chosen such that
the asymptotic behavior can be observed after a reasonable time horizon. All results are averaged
over 10,000 independent runs of the algorithm. We present the results in Figure 1(b), where PBM-UCB,
PBM-PIE and PBM-TS are compared to RBA-KL-UCB. The performance of PBM-PIE and
PBM-TS are comparable, the latter even being under the lower bound (a common observation,
see e.g. [13], due to the asymptotic nature of the lower bound). The curves confirm our analysis
[Figure 1: Simulation results for the suggested strategies. (a) Average regret of PBM-TS and BC-MP-TS compared for high parameters; shaded areas show the first and last deciles. (b) Average regret of the various algorithms (PBM-TS, PBM-UCB, PBM-PIE, RBA-KLUCB) on synthetic data under the PBM, together with the lower bound. Both panels plot the regret R(T) against the round t on a logarithmic scale.]
7
800
#records
min ?
max ?
700
5
5
6
6
6
8
11
11
216, 565
68, 179
435, 951
110, 071
147, 214
122, 218
1, 228, 004
391, 951
0.016
0.031
0.025
0.023
0.004
0.108
0.022
0.022
0.077
0.050
0.067
0.069
0.148
0.146
0.149
0.084
600
Regret R(T )
#ads (K)
500
PBM-TS
PBM-UCB
PBM-PIE
RBA-KLUCB
400
300
200
100
0
100
Table 1: Statistics on the queries: each line corresponds
to the sub-dataset associated with a query.
101
102
103
Round t
104
105
Figure 2: Performance of the proposed
algorithms under the PBM on real data.
for PBM-PIE and lets us conjecture that the true Thompson Sampling policy might be asymptotically
optimal. As expected, PBM-PIE shows asymptotically optimal performance, matching the lower
bound after a large enough horizon.
5.2 Real data experiments: search advertising
The dataset was provided for KDD Cup 2012 track 2 [1] and involves session logs of soso.com, a
search engine owned by Tencent. It consists of ads that were inserted among search results. Each of
the 150M lines from the log contains the user ID, the query typed, an ad, a position (1, 2 or 3) at
which it was displayed, and a binary reward (click/no-click). First, for every query, we excluded ads
that were not displayed at least 1,000 times at every position. We also filtered out queries that had fewer
than 5 ads satisfying the previous constraint. As a result, we obtained 8 queries with at least 5 and
up to 11 ads. For each query $q$, we computed the matrix of the average click-through rates (CTR):
$M_q \in \mathbb{R}^{K \times L}$, where $K$ is the number of ads for the query $q$ and $L = 3$ the number of positions. It is
noticeable that the SVD of each $M_q$ matrix has a highly dominating first singular value, thereby
validating the low-rank assumption underlying the PBM. In order to estimate the parameters of the
problem, we used the EM algorithm suggested by [5, 9]. Table 1 reports some statistics about the
bandit models reconstructed for each query: number of arms $K$, amount of data used to compute the
parameters, minimum and maximum values of the $\theta$'s for each model.
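For completeness, here is a sketch of a standard EM scheme of this kind for aggregated click counts: the E-step computes, for each non-click, the posterior probabilities that the position was nevertheless examined or that the item was nevertheless liked, and the M-step re-normalizes. This is our own transcription of the generic PBM EM updates (cf. [5, 9]), not the authors' code; note that (θ, κ) is only identified up to a common rescaling.

    import numpy as np

    def pbm_em(clicks, displays, n_iter=200):
        # EM for the PBM on aggregated counts: clicks[k, l] and displays[k, l].
        # E-step: for each non-click at (k, l), posterior probabilities that the
        # position was examined, resp. that the item was liked. M-step: renormalize.
        K, L = clicks.shape
        theta = np.full(K, 0.5)
        kappa = np.full(L, 0.5)
        for _ in range(n_iter):
            no_click = displays - clicks
            denom = 1.0 - kappa[None, :] * theta[:, None]
            p_exam = kappa[None, :] * (1.0 - theta[:, None]) / denom
            p_like = theta[:, None] * (1.0 - kappa[None, :]) / denom
            kappa = (clicks + no_click * p_exam).sum(axis=0) / displays.sum(axis=0)
            theta = (clicks + no_click * p_like).sum(axis=1) / displays.sum(axis=1)
        return theta, kappa

    clicks = np.array([[90.0, 30.0], [45.0, 15.0]])
    displays = np.array([[1000.0, 1000.0], [1000.0, 1000.0]])
    print(pbm_em(clicks, displays))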
We conducted a series of 2,000 simulations over this dataset. At the beginning of each run, a query
was randomly selected together with the corresponding probabilities of scanning positions and arm
expectations. Although rewards were still simulated, this scenario is more realistic since the values of
the parameters were extracted from a real-world dataset. We show results for the different algorithms
in Figure 2. It is remarkable that RBA-KL-UCB performs slightly better than PBM-UCB. One can
imagine that PBM-UCB does not benefit enough from position aggregations (only 3 positions are
considered) to beat RBA-KL-UCB. Both of them are outperformed by PBM-TS and PBM-PIE.
Conclusion
This work provides the first analysis of the PBM in an online context. The proof scheme used to
obtain the lower bound on the regret is interesting on its own, as it can be generalized to various other
settings. The tightness of the lower bound is validated by our analysis of PBM-PIE but it would be
an interesting future contribution to provide such guarantees for more straightforward algorithms
such as PBM-TS or a "PBM-KLUCB" using the confidence regions of PBM-PIE. In practice, the
algorithms are robust to small variations of the values of the $(\kappa_l)_l$, but it would be preferable to obtain
some control over the regret under uncertainty on these examination parameters.
Acknowledgements
This work was partially supported by the French research project ALICIA (grant ANR-13-CORD0020) and by the Machine Learning for Big Data Chair at T?l?com ParisTech.
References

[1] KDD Cup 2012 track 2. http://www.kddcup2012.org/.
[2] V. Anantharam, P. Varaiya, and J. Walrand. Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays - part I: IID rewards. Automatic Control, IEEE Transactions on, 32(11):968-976, 1987.
[3] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013.
[4] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications. In Proc. of the 30th Int. Conf. on Machine Learning, 2013.
[5] A. Chuklin, I. Markov, and M. d. Rijke. Click models for web search. Synthesis Lectures on Information Concepts, Retrieval, and Services, 7(3):1-115, 2015.
[6] R. Combes, S. Magureanu, A. Proutière, and C. Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proc. of the 2015 ACM SIGMETRICS Int. Conf. on Measurement and Modeling of Computer Systems, 2015.
[7] R. Combes, M. S. T. M. Shahi, A. Proutière, et al. Combinatorial bandits revisited. In Advances in Neural Information Processing Systems, 2015.
[8] N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In Proc. of the Int. Conf. on Web Search and Data Mining. ACM, 2008.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B, pages 1-38, 1977.
[10] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proc. of the Conf. on Learning Theory, 2011.
[11] T. L. Graves and T. L. Lai. Asymptotically efficient adaptive choice of control laws in controlled Markov chains. SIAM Journal on Control and Optimization, 35(3):715-743, 1997.
[12] E. Kaufmann, O. Cappé, and A. Garivier. On the complexity of best arm identification in multi-armed bandit models. Journal of Machine Learning Research, 2015.
[13] J. Komiyama, J. Honda, and H. Nakagawa. Optimal regret analysis of Thompson sampling in stochastic multi-armed bandit problem with multiple plays. In Proc. of the 32nd Int. Conf. on Machine Learning, 2015.
[14] B. Kveton, C. Szepesvári, Z. Wen, and A. Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proc. of the 32nd Int. Conf. on Machine Learning, 2015.
[15] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In Proc. of the 18th Int. Conf. on Artificial Intelligence and Statistics, 2015.
[16] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[17] S. Magureanu, R. Combes, and A. Proutière. Lipschitz bandits: Regret lower bounds and optimal algorithms. In Proc. of the Conf. on Learning Theory, 2014.
[18] F. Radlinski, R. Kleinberg, and T. Joachims. Learning diverse rankings with multi-armed bandits. In Proc. of the 25th Int. Conf. on Machine Learning. ACM, 2008.
[19] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: estimating the click-through rate for new ads. In Proc. of the 16th Int. Conf. on World Wide Web. ACM, 2007.
[20] K. Sumeet, B. Kveton, C. Szepesvári, and Z. Wen. DCM bandits: Learning to rank with multiple clicks. In Proc. of the 33rd Int. Conf. on Machine Learning, 2016.
6,132 | 6,547 | Reward Augmented Maximum Likelihood
for Neural Structured Prediction
Mohammad Norouzi
Samy Bengio
Zhifeng Chen
Navdeep Jaitly
Mike Schuster
Yonghui Wu
Dale Schuurmans
{mnorouzi, bengio, zhifengc, ndjaitly}@google.com
{schuster, yonghui, schuurmans}@google.com
Google Brain
Abstract
A key problem in structured output prediction is direct optimization of the task
reward function that matters for test evaluation. This paper presents a simple and
computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected
reward objectives, we show that an optimal regularized expected reward is achieved
when the conditional distribution of the outputs given the inputs is proportional
to their exponentiated scaled rewards. Accordingly, we present a framework to
smooth the predictive probability of the outputs using their corresponding rewards.
We optimize the conditional log-probability of augmented outputs that are sampled
proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show
notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RML), where the rewards are defined as the negative
edit distance between the outputs and the ground truth labels.
1 Introduction
Structured output prediction is ubiquitous in machine learning. Recent advances in natural language
processing, machine translation, and speech recognition hinge on the development of better discriminative models for structured outputs and sequences. The foundations of learning structured
output models were established by the seminal work on conditional random fields (CRFs) [17] and
structured large margin methods [32], which demonstrate how generalization performance can be
significantly improved when one considers the joint effects of the predictions across multiple output
components. These models have evolved into their deep neural counterparts [29, 1] through the use
of recurrent neural networks (RNN) with LSTM [13] cells and attention mechanisms [2].
A key problem in structured output prediction has always been to enable direct optimization of the
task reward (loss) used for test evaluation. For example, in machine translation one seeks better BLEU
scores, and in speech recognition better word error rates. Not surprisingly, almost all task reward
metrics are not differentiable, hence hard to optimize. Neural sequence models (e.g. [29, 2]) optimize
conditional log-likelihood, i.e. the conditional log-probability of the ground truth outputs given
corresponding inputs. These models do not explicitly consider the task reward during training, hoping
that conditional log-likelihood serves as a good surrogate for the task reward. Such methods make no
distinction between alternative incorrect outputs: log-probability is only measured on the ground truth
input-output pairs, and all alternative outputs are equally penalized through normalization, whether
near or far from the ground truth target. We believe one can improve upon maximum likelihood (ML)
sequence models if the difference in the rewards of alternative outputs is taken into account.
Standard ML training, despite its limitations, has enabled the training of deep RNN models, leading to
revolutionary advances in machine translation [29, 2, 21] and speech recognition [5?7]. A key property
of ML training for locally normalized RNN models is that the objective function factorizes into
individual loss terms, which could be efficiently optimized using stochastic gradient descent (SGD).
This training procedure does not require any form of inference or sampling from the model during
training, leading to computational efficiency and ease of implementation. By contrast, almost all
alternative formulations for training structure prediction models require some form of inference or
sampling from the model at training time which slows down training, especially for deep RNNs
(e.g. see large margin, search-based [8, 39], and expected risk optimization methods).
Our work is inspired by the use of reinforcement learning (RL) algorithms, such as policy gradient [37], to optimize expected task reward [25]. Even though expected task reward seems like a
natural objective, direct policy optimization faces significant challenges: unlike ML, a stochastic
gradient given a mini-batch of training examples is extremely noisy and has a high variance; gradients
need to be estimated via sampling from the model, which is a non-stationary distribution; the reward
is often sparse in a high-dimensional output space, which makes it difficult to find any high value
predictions, preventing learning from getting off the ground; and, finally, maximizing reward does
not explicitly consider the supervised labels, which seems inefficient. In fact, all previous attempts
at direct policy optimization for structured output prediction have started by bootstrapping from a
previously trained ML solution [25, 27], using several heuristics and tricks to make learning stable.
This paper presents a new approach to task reward optimization that combines the computational
efficiency and simplicity of ML with the conceptual advantages of expected reward maximization.
Our algorithm called reward augmented maximum likelihood (RML) simply adds a sampling step
on top of the typical likelihood objective. Instead of optimizing conditional log-likelihood on
training input-output pairs, given each training input, we first sample an output proportionally to its
exponentiated scaled reward. Then, we optimize log-likelihood on such auxiliary output samples
given corresponding inputs. When the reward for an output is defined as its similarity to a ground
truth output, then the output sampling distribution is peaked at the ground truth output, and its
concentration is controlled by a temperature hyper-parameter.
Our theoretical analysis shows that the RML and regularized expected reward objectives optimize a
KL divergence between the exponentiated reward and model distributions, but in opposite directions.
Further, we show that at non-zero temperatures, the gap between the two criteria can be expressed
by a difference of variances measured on interpolating distributions. This observation reveals how
entropy regularized expected reward can be estimated by sampling from exponentiated scaled rewards,
rather than sampling from the model distribution.
Remarkably, we find that the RML approach achieves significantly improved results over state of the
art maximum likelihood RNNs. We show consistent improvement on both speech recognition (TIMIT
dataset) and machine translation (WMT'14 dataset), where output sequences are sampled according
to their edit distance to the ground truth outputs. Surprisingly, we find that the best performance
is achieved with output sampling distributions that shift a lot of the weight away from the ground
truth outputs. In fact, in our experiments, the training algorithm rarely sees the original unperturbed
outputs. Our results give further evidence that models trained with imperfect outputs and their reward
values can improve upon models that are only exposed to a single ground truth output per input
[12, 20].
2 Reward augmented maximum likelihood
Given a dataset of input-output pairs, D ≡ {(x^(i), y*^(i))}_{i=1}^N, structured output models learn a
parametric score function p_θ(y | x), which scores different output hypotheses, y ∈ Y. We assume
that the set of possible outputs, Y, is finite, e.g. English sentences up to a maximum length. In a
probabilistic model, the score function is normalized, while in a large-margin model the score may
not be normalized. In either case, once the score function is learned, given an input x, the model
predicts an output ŷ achieving maximal score,
    ŷ(x) = argmax_y p_θ(y | x) .    (1)
If this optimization is intractable, approximate inference (e.g. beam search) is used. We use a reward
function r(y, y*) to evaluate different proposed outputs against ground-truth outputs. Given a test
dataset D′, one computes Σ_{(x,y*)∈D′} r(ŷ(x), y*) as a measure of empirical reward. Since models
with larger empirical reward are preferred, ideally one hopes to maximize empirical reward during
training.
However, since empirical reward is not amenable to numerical optimization, one often considers
optimizing alternative differentiable objectives. The maximum likelihood (ML) framework tries to
minimize negative log-likelihood of the parameters given the data,
    L_ML(θ; D) = Σ_{(x,y*)∈D} −log p_θ(y* | x) .    (2)
Minimizing this objective increases the conditional probability of the target outputs, log p_θ(y* | x),
while decreasing the conditional probability of alternative incorrect outputs. According to this
objective, all negative outputs are equally wrong, and none is preferred over the others.
By contrast, reinforcement learning (RL) advocates optimizing expected reward (with a maximum
entropy regularizer [38]), which is formulated as minimization of the following objective,
    L_RL(θ; τ, D) = Σ_{(x,y*)∈D} ( −τ H(p_θ(y | x)) − Σ_{y∈Y} p_θ(y | x) r(y, y*) ) ,    (3)
where r(y, y*) denotes the reward function, e.g. negative edit distance or BLEU score, τ controls
the degree of regularization, and H(p) is the entropy of a distribution p, i.e. H(p(y)) =
−Σ_{y∈Y} p(y) log p(y). It is well-known that optimizing L_RL(θ; τ) using SGD is challenging
because of the large variance of the gradients. Below we describe how ML and RL objectives are
related, and propose a hybrid between the two that combines their benefits for supervised learning.
Let us define a distribution in the output space, termed the exponentiated payoff distribution, that is
central in linking ML and RL objectives:
    q(y | y*; τ) = (1 / Z(y*, τ)) exp{r(y, y*)/τ} ,    (4)
where Z(y*, τ) = Σ_{y∈Y} exp{r(y, y*)/τ}. One can verify that the global minimum of L_RL(θ; τ),
i.e. the optimal regularized expected reward, is achieved when the model distribution matches the
exponentiated payoff distribution, i.e. p_θ(y | x) = q(y | y*; τ). To see this, we re-express the
objective function in (3) in terms of a KL divergence between p_θ(y | x) and q(y | y*; τ),
    Σ_{(x,y*)∈D} D_KL(p_θ(y | x) ‖ q(y | y*; τ)) = (1/τ) L_RL(θ; τ) + const ,    (5)
where the constant const on the RHS is Σ_{(x,y*)∈D} log Z(y*, τ). Thus, the minimum of D_KL(p_θ ‖ q)
and L_RL is achieved when p_θ = q. At τ = 0, when there is no entropy regularization, the optimal p_θ
is a delta distribution, p_θ(y | x) = δ(y | y*), where δ(y | y*) = 1 at y = y* and 0 at y ≠ y*. Note
that δ(y | y*) is equivalent to the exponentiated payoff distribution in the limit as τ → 0.
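For intuition, the following minimal sketch (ours, not from the paper) materializes q(y | y*; τ) of equation (4) for a tiny enumerable output space; in practice Y is exponentially large and one only samples from q, as described below in Section 2.2.

import numpy as np

def exponentiated_payoff(rewards, tau):
    # q(y | y*; tau) over a small enumerable output space: a softmax of the
    # rewards r(y, y*) scaled by 1/tau (equation (4)).
    logits = np.asarray(rewards, dtype=float) / tau
    logits -= logits.max()                 # numerical stability
    q = np.exp(logits)
    return q / q.sum()

# Toy example: three candidates with rewards 0, -1, -3 (e.g. negative edit
# distances to y*). Small tau concentrates q on the ground truth.
rewards = [0.0, -1.0, -3.0]
print(exponentiated_payoff(rewards, tau=1.0))    # smooth distribution
print(exponentiated_payoff(rewards, tau=0.05))   # nearly a delta at y = y*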
Returning to the log-likelihood objective, one can verify that (2) is equivalent to a KL divergence in
the opposite direction between a delta distribution δ(y | y*) and the model distribution p_θ(y | x),
    Σ_{(x,y*)∈D} D_KL(δ(y | y*) ‖ p_θ(y | x)) = L_ML(θ) .    (6)
There is no constant on the RHS, as the entropy of a delta distribution is zero, i.e. H(δ(y | y*)) = 0.
We propose a method called reward-augmented maximum likelihood (RML), which generalizes ML
by allowing a non-zero temperature parameter in the exponentiated payoff distribution, while still
optimizing the KL divergence in the ML direction. The RML objective function takes the form,
    L_RML(θ; τ, D) = Σ_{(x,y*)∈D} ( −Σ_{y∈Y} q(y | y*; τ) log p_θ(y | x) ) ,    (7)
which can be re-expressed in terms of a KL divergence as follows,
    Σ_{(x,y*)∈D} D_KL(q(y | y*; τ) ‖ p_θ(y | x)) = L_RML(θ; τ) + const ,    (8)
where the constant const is −Σ_{(x,y*)∈D} H(q(y | y*; τ)). Note that the temperature parameter,
τ ≥ 0, serves as a hyper-parameter that controls the smoothness of the optimal distribution around
correct targets by taking into account the reward function in the output space. The objective functions
L_RL(θ; τ) and L_RML(θ; τ) have the same global optimum of p_θ, but they optimize a KL divergence
in opposite directions. We characterize the difference between these two objectives below, showing
that they are equivalent up to their first order Taylor approximations. For optimization convenience,
we focus on minimizing L_RML(θ; τ) to achieve a good solution for L_RL(θ; τ).
2.1 Optimization
Optimizing the reward augmented maximum likelihood (RML) objective, L_RML(θ; τ), is straightforward if one can draw unbiased samples from q(y | y*; τ). We can express the gradient of L_RML in
terms of an expectation over samples from q(y | y*; τ),
    ∇_θ L_RML(θ; τ) = E_{q(y|y*;τ)} [ −∇_θ log p_θ(y | x) ] .    (9)
Thus, to estimate ∇_θ L_RML(θ; τ) given a mini-batch of examples for SGD, one draws y samples
given the mini-batch y*'s and then optimizes log-likelihood on such samples by following the mean
gradient. At a temperature τ = 0, this reduces to always sampling y*, hence ML training with no
sampling.
By contrast, the gradient of L_RL(θ; τ), based on likelihood ratio methods, takes the form,
    ∇_θ L_RL(θ; τ) = E_{p_θ(y|x)} [ −∇_θ log p_θ(y | x) · r(y, y*) ] .    (10)
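Before comparing the two, note that the RML update (9) reduces to ordinary ML training on sampled targets. The sketch below illustrates one Monte Carlo sample of (9); the callables `model` (returning the sequence log-probability log p_θ(y | x)) and `sample_from_q` (e.g. the edit-distance sampler of Section 2.2) are hypothetical placeholders, not interfaces from the paper.

import torch

def rml_step(model, optimizer, x, y_star, sample_from_q):
    # One Monte Carlo sample of the gradient (9): draw an augmented target
    # y ~ q(y | y*; tau) and take an ordinary maximum likelihood step on it.
    y = sample_from_q(y_star)
    loss = -model(x, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()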
There are several critical differences between (9) and (10) that make SGD optimization of L_RML(θ; τ)
more desirable. First, in (9), one has to sample from a stationary distribution, the so-called exponentiated payoff distribution, whereas in (10) one has to sample from the model distribution as it is
evolving. Not only does sampling from the model potentially slow down training, one also needs
to employ several tricks to get a better estimate of the gradient of L_RL [25]. A body of literature in
reinforcement learning focuses on reducing the variance of (10) by using sophisticated techniques
such as actor-critic methods [30, 9]. Further, the reward is often sparse in a high-dimensional
output space, which makes finding any reasonable prediction challenging when (10) is used to refine
a randomly initialized model. Thus, smart model initialization is needed. By contrast, we initialize
the models randomly and refine them using (9).
2.2 Sampling from the exponentiated payoff distribution
To compute the gradient of the model using the RML approach, one needs to sample auxiliary outputs
from the exponentiated payoff distribution, q(y | y*; τ). This sampling is the price that we have to
pay to learn with rewards. One should contrast this with loss-augmented inference in structured large
margin methods, and sampling from the model in RL. We believe sampling outputs proportional to
exponentiated rewards is more efficient and effective in many cases.
Experiments in this paper use reward values defined by either negative Hamming distance or negative
edit distance. We sample from q(y | y*; τ) by stratified sampling, where we first select a particular
distance, and then sample an output with that distance value. Here we focus on edit distance sampling,
as Hamming distance sampling is a simpler special case. Given a sentence y* of length m, we count
the number of sentences within an edit distance e, where e ∈ {0, . . . , 2m}. Then, we reweight the
counts by exp{−e/τ} and normalize. Let c(e, m) denote the number of sentences at an edit distance
e from a sentence of length m. First, note that a deletion can be thought of as a substitution with a nil
token. This works out nicely because given a vocabulary of size v, for each insertion we have v
options, and for each substitution we have v − 1 options, but including the nil token, there are v
options for substitutions too. When e = 1, there are m possible substitutions and m + 1 insertions.
Hence, in total there are (2m + 1)v sentences at an edit distance of 1. Note that exact computation
of c(e, m) is difficult if we consider all edge cases, for example when there are repetitive words in y*,
but ignoring such edge cases we can come up with approximate counts that are reliable for sampling.
When e > 1, we estimate c(e, m) by
    c(e, m) = Σ_{s=0}^{m} C(m, s) · C(m + e − 2s, e − s) · v^e ,    (11)
where s enumerates over the number of substitutions and C(·, ·) denotes the binomial coefficient. Once s tokens are substituted, then those s
positions lose their significance, and the insertions before and after such tokens could be merged.
Hence, given s substitutions, there are really m − s reference positions for e − s possible insertions.
Finally, one can sample according to BLEU score or other sequence metrics by importance sampling
where the proposal distribution could be edit distance sampling above.
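The counting formula (11) and the first, distance-selection stage of the stratified sampler can be sketched as follows; the edge-case handling and the e = 0 convention are our assumptions, and the counts are only practical at toy sizes since they grow as v^e.

import math
import random

def c(e, m, v):
    # Approximate number of sentences at edit distance e from a length-m
    # sentence over a vocabulary of size v, following equation (11).
    if e == 0:
        return 1
    return sum(math.comb(m, s) * math.comb(m + e - 2 * s, e - s) * v ** e
               for s in range(0, min(m, e) + 1))

def sample_edit_distance(m, v, tau):
    # First stage of the stratified sampler: draw a distance e with
    # probability proportional to c(e, m) * exp(-e / tau).
    weights = [c(e, m, v) * math.exp(-e / tau) for e in range(2 * m + 1)]
    return random.choices(range(2 * m + 1), weights=weights)[0]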
3 RML analysis
In the RML framework, we find the model parameters by minimizing the objective (7) instead of
optimizing the RL objective, i.e. regularized expected reward in (3). The difference lies in minimizing
D_KL(q(y | y*; τ) ‖ p_θ(y | x)) instead of D_KL(p_θ(y | x) ‖ q(y | y*; τ)). For convenience, let's
refer to q(y | y*; τ) as q, and p_θ(y | x) as p. Here, we characterize the difference between the two
divergences, D_KL(q ‖ p) − D_KL(p ‖ q), and use this analysis to motivate the RML approach.
We will initially consider the KL divergence in its more general form as a Bregman divergence, which
will make some of the key properties clearer. A Bregman divergence is defined by a strictly convex,
differentiable, closed potential function F : F → R [3]. Given F and two points p, q ∈ F, the
corresponding Bregman divergence D_F : F × F → R+ is defined by
    D_F(p ‖ q) = F(p) − F(q) − (p − q)^T ∇F(q) ,    (12)
the difference between the strictly convex potential at p and its first order Taylor approximation
expanded about q. Clearly this definition is not symmetric between p and q. By the strict convexity
of F it follows that D_F(p ‖ q) ≥ 0 with D_F(p ‖ q) = 0 if and only if p = q. To characterize the
difference between opposite Bregman divergences, we provide a simple result that relates the two
directions under suitable conditions. Let H_F denote the Hessian of F.
Proposition 1. For any twice differentiable strictly convex closed potential F, and p, q ∈ int(F):
    D_F(q ‖ p) = D_F(p ‖ q) + (1/4) (p − q)^T [H_F(a) − H_F(b)] (p − q)    (13)
for some a = (1 − α)p + αq, (0 ≤ α ≤ 1/2), b = (1 − β)q + βp, (0 ≤ β ≤ 1/2). (see supp. material)
For probability vectors p, q ∈ Δ^{|Y|} and a potential F(p) = −τ H(p), D_F(p ‖ q) = τ D_KL(p ‖ q).
Let f : R^{|Y|} → Δ^{|Y|} denote the normalized exponential (softmax) operator that takes a real-valued logit
vector and turns it into a probability vector. Let r and s denote real-valued logit vectors such that
q = f(r/τ) and p = f(s/τ). Below, we characterize the gap between D_KL(p(y) ‖ q(y)) and
D_KL(q(y) ‖ p(y)) in terms of the difference between s(y) and r(y).
Proposition 2. The KL divergence between p and q in two directions can be expressed as,
    D_KL(p ‖ q) = D_KL(q ‖ p) + (1/(4τ²)) Var_{y∼f(a/τ)}[s(y) − r(y)] − (1/(4τ²)) Var_{y∼f(b/τ)}[s(y) − r(y)]
                < D_KL(q ‖ p) + (1/τ²) ‖s − r‖₂² ,
for some a = (1 − α)s + αr, (0 ≤ α ≤ 1/2), b = (1 − β)r + βs, (0 ≤ β ≤ 1/2). (see supp. material)
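The inequality in Proposition 2 can be illustrated numerically (this is only a sanity check on random logits, not a proof; the symbols mirror the proposition):

import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

softmax = lambda a: np.exp(a - a.max()) / np.exp(a - a.max()).sum()
rng = np.random.default_rng(0)
tau = 0.5
s, r = rng.normal(size=5), rng.normal(size=5)
p, q = softmax(s / tau), softmax(r / tau)           # p = f(s/tau), q = f(r/tau)
lhs = kl(p, q)
rhs = kl(q, p) + np.sum((s - r) ** 2) / tau ** 2    # bound from Proposition 2
assert lhs < rhs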
Given Proposition 2, one can relate the two objectives, L_RL(θ; τ) (5) and L_RML(θ; τ) (8), by
    L_RL = τ L_RML + (1/(4τ)) Σ_{(x,y*)∈D} { Var_{y∼f(a/τ)}[s(y) − r(y)] − Var_{y∼f(b/τ)}[s(y) − r(y)] } + const ,    (14)
where s(y) denotes the τ-scaled logits predicted by the model such that p_θ(y | x) = f(s(y)/τ), and
r(y) = r(y, y*). The gap between regularized expected reward (5) and the τ-scaled RML criterion (8)
is simply a difference of two variances, whose magnitude decreases with increasing regularization.
Proposition 2 also shows an opportunity for learning algorithms: if τ is chosen so that q = f(r/τ)
and f(a/τ) and f(b/τ) have lower variance than p (which can always be achieved for sufficiently
small τ provided p is not deterministic), then the expected regularized reward under p, and its gradient
for training, can be exactly estimated, in principle, by including the extra variance terms and sampling
from more focused distributions than p. Although we have not yet incorporated approximations to
the additional variance terms into RML, this is an interesting research direction.
4 Related Work
The literature on structured output prediction is vast, falling into three broad categories: (a) supervised learning approaches that ignore task reward and use supervision; (b) reinforcement learning
approaches that use only task reward and ignore supervision; and (c) hybrid approaches that attempt
to exploit both supervision and task reward. This paper clearly falls in category (c).
Work in category (a) includes classical conditional random fields [17] and conditional log-likelihood
training of RNNs [29, 2]. It also includes the approaches that attempt to perturb the training inputs
and supervised training structures to improve the robustness (and hopefully the generalization) of
the conditional models (e.g. see [4, 16]). These approaches offer improvements to standard maximum
likelihood estimation, but they are fundamentally limited by not incorporating a task reward.
By contrast, work in category (b) includes reinforcement learning approaches that only consider
task reward and do not use any other supervision. Beyond the traditional reinforcement learning
approaches, such as policy gradient [37, 31], and actor-critic [30], Q-learning [34], this category
includes SEARN [8]. There is some relationship to the work presented here and work on relative
entropy policy search [23], and policy optimization via expectation maximization [35] and KL-divergence [14, 33], however none of these bridge the gap between the two directions of the KL-divergence, nor do they consider any supervision data as we do here.
There is also a substantial body of related work in category (c), which considers how to exploit
supervision information while training with a task reward metric. A canonical example is large
margin structured prediction [32, 11], which explicitly uses supervision and considers an upper
bound surrogate for task loss. This approach requires loss augmented inference that cannot be
efficiently achieved for general task losses. We are not aware of successful large-margin methods
for neural sequence prediction, but a related approach by [39] for neural machine translation builds
on SEARN [8]. Some form of inference during training is still needed, and the characteristics of
the objective are not well studied. We also mentioned the work on maximizing task reward by
bootstrapping from a maximum likelihood policy [25, 27], but such an approach only makes limited
use of supervision. Some work in robotics has considered exploiting supervision as a means to
provide indirect sampling guidance to improve policy search methods that maximize task reward
[18, 19, 26], but these approaches do not make use of maximum likelihood training. An interesting
work is [15] which explicitly incorporates supervision in the policy evaluation phase of a policy
iteration procedure that otherwise seeks to maximize task reward. However, this approach only
considers a greedy policy form that does not lend itself to being represented as a deep RNN, and has
not been applied to structured output prediction. Most relevant are ideas for improving approximate
maximum likelihood training for intractable models by passing the gradient calculation through
an approximate inference procedure [10, 28]. These works, however, are specialized to particular
approximate inference procedures, and, by directly targeting expected reward, are subject to the
variance problems that motivated this work.
One advantage of the RML framework is its computational efficiency at training time. By contrast,
RL and scheduled sampling [4] require sampling from the model, which can slow down the gradient
computation by 2×. Structural SVM requires loss-augmented inference, which is often more expensive
than sampling from the model. Our framework only requires sampling from a fixed exponentiated
payoff distribution, which can be thought of as a form of input pre-processing. This pre-processing can
be parallelized with model training by having a thread handle data loading and augmentation.
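A minimal sketch of such a pipeline, with `dataset` and the augmentation function as placeholders:

import queue
import threading

def augmenting_loader(dataset, augment, out_q):
    # Runs in a background thread: applies the reward-based augmentation to
    # each target while the trainer consumes (x, y) pairs from the queue.
    for x, y_star in dataset:
        out_q.put((x, augment(y_star)))   # blocks when the queue is full

batches = queue.Queue(maxsize=64)
# threading.Thread(target=augmenting_loader,
#                  args=(dataset, augment_fn, batches), daemon=True).start()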
Recently, we were informed of the unpublished work of Volkovs et al. [36] that also proposes an
objective like RML, albeit with a different derivation. No theoretical relation was established to
entropy regularized RL, nor was the method applied to neural nets for sequences, but large gains
were reported over several baselines applying the technique to ranking problems with CRFs.
5 Experiments
We compare our approach, reward augmented maximum likelihood (RML), with standard maximum
likelihood (ML) training on sequence prediction tasks using state-of-the-art attention-based recurrent neural networks [29, 2]. Our experiments demonstrate that the RML approach considerably
outperforms the ML baseline on both speech recognition and machine translation tasks.
5.1 Speech recognition
For experiments on speech recognition, we use the TIMIT dataset, a standard benchmark for clean
phone recognition. This dataset consists of recordings from different speakers reading ten phonetically
rich sentences covering major dialects of American English. We use the standard train / dev / test
splits suggested by the Kaldi toolkit [24].
As the sequence prediction model, we use an attention-based encoder-decoder recurrent model of [5]
with three 256-dimensional LSTM layers for encoding and one 256-dimensional LSTM layer for
decoding. We do not modify the neural network architecture or its gradient computation in any way,
[Figure 1 plot: legend τ = 0.6, 0.7, 0.8, 0.9; horizontal axis ticks 0–16 (number of edits); vertical axis ticks 0–1 (fraction).]
Figure 1: Fraction of different number of edits applied to a sequence of length 20 for different τ. At
τ = 0.9, augmentations with 5 to 9 edits are sampled with a probability > 0.1. [view in color]
Method          | Dev set            | Test set
ML baseline     | 20.87 (−0.2, +0.3) | 22.18 (−0.4, +0.2)
RML, τ = 0.60   | 19.92 (−0.6, +0.3) | 21.65 (−0.5, +0.4)
RML, τ = 0.65   | 19.64 (−0.2, +0.5) | 21.28 (−0.6, +0.4)
RML, τ = 0.70   | 18.97 (−0.1, +0.1) | 21.28 (−0.5, +0.4)
RML, τ = 0.75   | 18.44 (−0.4, +0.4) | 20.15 (−0.4, +0.4)
RML, τ = 0.80   | 18.27 (−0.2, +0.1) | 19.97 (−0.1, +0.2)
RML, τ = 0.85   | 18.10 (−0.4, +0.3) | 19.97 (−0.3, +0.2)
RML, τ = 0.90   | 18.00 (−0.4, +0.3) | 19.89 (−0.4, +0.7)
RML, τ = 0.95   | 18.46 (−0.1, +0.1) | 20.12 (−0.2, +0.1)
RML, τ = 1.00   | 18.78 (−0.6, +0.8) | 20.41 (−0.2, +0.5)
Table 1: Phone error rates (PER) for different methods on TIMIT dev and test sets. Average PER of 4
independent training runs is reported.
but we only change the output targets fed into the network for gradient computation and SGD update.
The input to the network is a standard sequence of 123-dimensional log-mel filter response statistics.
Given each input, we generate new outputs around ground truth targets by sampling according to
the exponentiated payoff distribution. We use negative edit distance as the measure of reward. Our
output augmentation process allows insertions, deletions, and substitutions.
An important hyper-parameter in our framework is the temperature parameter, τ, controlling the
degree of output augmentation. We investigate the impact of this hyper-parameter and report results
for τ selected from a candidate set of τ ∈ {0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0}. At a
temperature of τ = 0, outputs are not augmented at all, but as τ increases, more augmentation is
generated. Figure 1 depicts the fraction of different numbers of edits applied to a sequence of length
20 for different values of τ. These edits typically include a very small number of deletions, and roughly
equal numbers of insertions and substitutions. For insertions and substitutions we uniformly sample
elements from a vocabulary of 61 phones. According to Figure 1, at τ = 0.6, more than 60% of the
outputs remain intact, while at τ = 0.9, almost all target outputs are being augmented, with 5 to 9
edits being sampled with a probability larger than 0.1. We note that the augmentation becomes more
severe as the outputs get longer.
The phone error rates (PER) on both dev and test sets for different values of τ and the ML baseline
are reported in Table 1. Each model is trained and tested 4 times, using different random seeds. In
Table 1, we report the average PER across the runs, and in parentheses the difference of the average
error to the minimum and maximum error. We observe that a temperature of τ = 0.9 provides the best results,
outperforming the ML baseline by 2.9% PER on the dev set and 2.3% PER on the test set. The
results consistently improve when the temperature increases from 0.6 to 0.9, and they get worse
beyond τ = 0.9. It is surprising to us that the model not only trains with such a large amount of
augmentation at τ = 0.9, but also significantly improves upon the baseline. Finally, we note that
previous work [6, 7] suggests several refinements to improve sequence to sequence models on TIMIT
by adding noise to the weights and using more focused forward-moving attention mechanism. While
these refinements are interesting and they could be combined with the RML framework, in this work,
we do not implement such refinements, and focus specifically on a fair comparison between the ML
baseline and the RML method.
Method          | Average BLEU | Best BLEU
ML baseline     | 36.50        | 36.87
RML, τ = 0.75   | 36.62        | 36.91
RML, τ = 0.80   | 36.80        | 37.11
RML, τ = 0.85   | 36.91        | 37.23
RML, τ = 0.90   | 36.69        | 37.07
RML, τ = 0.95   | 36.57        | 36.94
Table 2: Tokenized BLEU score on WMT'14 English to French evaluated on the newstest-2014 set. The
RML approach with different τ considerably improves upon the maximum likelihood baseline.
5.2 Machine translation
We evaluate the effectiveness of the proposed approach on the WMT'14 English to French machine
translation benchmark. Translation quality is assessed using tokenized BLEU score, to be consistent
with previous work on neural machine translation [29, 2, 22]. Models are trained on the full 36M
sentence pairs from the WMT'14 training set, and evaluated on 3003 sentence pairs from the newstest-2014
test set. To keep the sampling process efficient and simple on such a large corpus, we augment the
output sentences only based on Hamming distance (i.e. edit distance without insertion or deletion).
For each sentence we sample a single output at each step. One can consider insertions and deletions or
sampling according to exponentiated sentence BLEU scores, but we leave that to future work.
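Under Hamming distance, the stratified sampler simplifies because there are exactly C(m, d)(v − 1)^d sentences at distance d. A sketch follows; the token-handling details are our assumptions, not code from the paper.

import math
import random

def hamming_augment(y_star, vocab, tau):
    # Sample y ~ q(y | y*; tau) under Hamming distance: draw the number of
    # substitutions d with probability proportional to
    # C(m, d) * (v - 1)**d * exp(-d / tau), then substitute d random positions.
    m, v = len(y_star), len(vocab)
    weights = [math.comb(m, d) * (v - 1) ** d * math.exp(-d / tau)
               for d in range(m + 1)]
    d = random.choices(range(m + 1), weights=weights)[0]
    y = list(y_star)
    for i in random.sample(range(m), d):
        y[i] = random.choice([w for w in vocab if w != y[i]])
    return y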
As the conditional sequence prediction model, we use an attention-based encoder-decoder recurrent
neural network similar to [2], but we use multi-layer encoder and decoder networks consisting of
three layers of 1024 LSTM cells. As suggested by [2], for computing the softmax attention vectors,
we use a feedforward neural network with 1024 hidden units, which operates on the last encoder
and the first decoder layers. In all of the experiments, we keep the network architecture and the
hyper-parameters fixed. All of the models achieve their peak performance after about 4 epochs of
training, once we anneal the learning rates. To reduce the noise in the BLEU score evaluation, we
report both peak BLEU score and BLEU score averaged among about 70 evaluations of the model
while doing the fifth epoch of training. We perform beam search decoding with a beam size of 8.
Table 2 summarizes our experimental results on WMT'14. We note that our ML translation baseline
is quite strong, if not the best among neural machine translation models [29, 2, 22], achieving very
competitive performance for a single model. Even given such a strong baseline, the RML approach
consistently improves the results. Our best model with a temperature τ = 0.85 improves average
BLEU by 0.4, and best BLEU by 0.35 points, which is a considerable improvement. Again we
observe that as we increase the amount of augmentation from τ = 0.75 to τ = 0.85 the results
consistently get better, and then they start to get worse with more augmentation.
Details. We train the models using asynchronous SGD with 12 replicas without momentum. We
use mini-batches of size 128. We initially use a learning rate of 0.5, which we then exponentially
decay to 0.05 after 800K steps. We keep evaluating the models between 1.1 and 1.3 million steps
and report average and peak BLEU scores in Table 2. We use a vocabulary of 200K words for the
source language and 80K for the target language. We only consider training sentences that are up to
80 tokens. We replace rare words with several UNK tokens based on their first and last characters. At
inference time, we replace UNK tokens in the output sentences by copying source words according
to largest attention activations as suggested by [22].
6 Conclusion
We present a learning algorithm for structured output prediction that generalizes maximum likelihood
training by enabling direct optimization of a task reward metric. Our method is computationally
efficient and simple to implement. It only requires augmentation of the output targets used within a
log-likelihood objective. We show how using augmented outputs sampled according to edit distance
improves a maximum likelihood baseline by a considerable margin, on both machine translation and
speech recognition tasks. We believe this framework is applicable to a wide range of probabilistic
models with arbitrary reward functions. In the future, we intend to explore the applicability of this
framework to other probabilistic models on tasks with more complicated evaluation metrics.
References
[1] D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. Globally
normalized transition-based neural networks. arXiv:1603.06042, 2016.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
ICLR, 2015.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 2005.
[4] S. Bengio, O. Vinyals, N. Jaitly, and N. M. Shazeer. Scheduled sampling for sequence prediction with
recurrent neural networks. NIPS, 2015.
[5] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals. Listen, attend and spell. ICASSP, 2016.
[6] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio. End-to-end continuous speech recognition using
attention-based recurrent nn: first results. arXiv:1412.1602, 2014.
[7] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech
recognition. NIPS, 2015.
[8] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Mach. Learn. J., 2009.
[9] T. Degris, P. M. Pilarski, and R. S. Sutton. Model-free reinforcement learning with continuous action in
practice. ACC, 2012.
[10] J. Domke. Generic methods for optimization-based modeling. AISTATS, 2012.
[11] K. Gimpel and N. A. Smith. Softmax-margin crfs: Training log-linear models with cost functions. NAACL,
2010.
[12] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[14] H. J. Kappen, V. Gómez, and M. Opper. Optimal control as a graphical model inference problem. Mach.
Learn. J., 2012.
[15] B. Kim, A. M. Farahmand, J. Pineau, and D. Precup. Learning from limited demonstrations. NIPS, 2013.
[16] A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. Ask
me anything: Dynamic memory networks for natural language processing. ICML, 2016.
[17] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional Random Fields: Probabilistic models for
segmenting and labeling sequence data. ICML, 2001.
[18] S. Levine and V. Koltun. Guided policy search. ICML, 2013.
[19] S. Levine and V. Koltun. Variational policy search via trajectory optimization. NIPS, 2013.
[20] D. Lopez-Paz, B. Schölkopf, L. Bottou, and V. Vapnik. Unifying distillation and privileged information.
ICLR, 2016.
[21] M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine
translation. EMNLP, 2015.
[22] M.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in
neural machine translation. ACL, 2015.
[23] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. AAAI, 2010.
[24] D. Povey, A. Ghoshal, G. Boulianne, et al. The kaldi speech recognition toolkit. ASRU, 2011.
[25] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks.
ICLR, 2016.
[26] S. Shen, Y. Cheng, Z. He, W. He, H. Wu, M. Sun, and Y. Liu. Minimum risk training for neural machine
translation. ACL, 2016.
[27] D. Silver et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
[28] V. Stoyanov, A. Ropson, and J. Eisner. Empirical risk minimization of graphical model parameters given
approximate inference, decoding, and model structure. AISTATS, 2011.
[29] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.
[30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[31] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement
learning with function approximation. NIPS, 2000.
[32] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. NIPS, 2004.
[33] E. Todorov. Linearly-solvable markov decision problems. NIPS, 2006.
[34] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning.
arXiv:1509.06461, 2015.
[35] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning model-free robot control by a Monte Carlo
EM algorithm. Autonomous Robots, 2009.
[36] M. Volkovs, H. Larochelle, and R. Zemel. Loss-sensitive training of probabilistic conditional random
fields. arXiv:1107.1805v1, 2011.
[37] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Mach. Learn. J., 1992.
[38] R. J. Williams and J. Peng. Function optimization using connectionist reinforcement learning algorithms.
Connection Science, 1991.
[39] S. Wiseman and A. M. Rush.
Sequence-to-sequence learning as beam-search optimization.
arXiv:1606.02960, 2016.
6,133 | 6,548 | Catching heuristics are optimal control policies
Boris Belousov*, Gerhard Neumann*, Constantin A. Rothkopf**, Jan Peters*
* Department of Computer Science, TU Darmstadt
** Cognitive Science Center & Department of Psychology, TU Darmstadt
Abstract
Two seemingly contradictory theories attempt to explain how humans move to
intercept an airborne ball. One theory posits that humans predict the ball trajectory
to optimally plan future actions; the other claims that, instead of performing such
complicated computations, humans employ heuristics to reactively choose appropriate actions based on immediate visual feedback. In this paper, we show that
interception strategies appearing to be heuristics can be understood as computational solutions to the optimal control problem faced by a ball-catching agent acting
under uncertainty. Modeling catching as a continuous partially observable Markov
decision process and employing stochastic optimal control theory, we discover
that the four main heuristics described in the literature are optimal solutions if the
catcher has sufficient time to continuously visually track the ball. Specifically, by
varying model parameters such as noise, time to ground contact, and perceptual
latency, we show that different strategies arise under different circumstances. The
catcher's policy switches between generating reactive and predictive behavior based
on the ratio of system to observation noise and the ratio between reaction time
and task duration. Thus, we provide a rational account of human ball-catching
behavior and a unifying explanation for seemingly contradictory theories of target
interception on the basis of stochastic optimal control.
1 Introduction
Humans exhibit impressive abilities of intercepting moving targets as exemplified in sports such as
baseball [6]. Despite the ubiquity of this visuomotor capability, explaining how humans manage to
catch flying objects is a long-standing problem in cognitive science and human motor control. What
makes this problem computationally difficult for humans are the involved perceptual uncertainties,
high sensory noise, and long action delays compared to artificial control systems and robots. Thus,
understanding action generation in human ball interception from a computational point of view
may yield important insights on human visuomotor control. Surprisingly, there is no generally
accepted model that explains empirical observations of human interception of airborne balls. McIntyre et al. [15] and Hayhoe et al. [13] claim that humans employ an internal model of the physical
world to predict where the ball will hit the ground and how to catch it. Such internal models allow for
planning and potentially optimal action generation, e.g., they enable optimal catching strategies where
humans predict the interception point and move there as fast as mechanically possible to await the ball.
Clearly, there exist situations where latencies of the catching task require such strategies (e.g., when
a catcher moves the arm to receive the pitcher's ball). By contrast, Gigerenzer & Brighton [11] argue
that the world is far too complex for sufficiently precise modeling (e.g., a catcher or an outfielder
in baseball would have to take air resistance, wind, and spin of the ball into account to predict its
trajectory). Thus, humans supposedly extract few simple but robust features that suffice for successful
execution of tasks such as catching. Here, immediate feedback is employed to guide action generation
instead of detailed modeling. Policies based on these features are called heuristics and the claim
is that humans possess a bag of such tricks, the "adaptive toolbox". For a baseball outfielder, a
successful heuristic could be "Fix your gaze on the ball, start running, and adjust your running speed
so that the angle of gaze remains constant" [10]. Thus, at the core, finding a unifying computational
account of the human interception of moving targets also contributes to the long-lasting debate about
the nature of human rationality [20].
In this paper, we propose that these seemingly contradictory views can be unified using a single
computational model based on a continuous partially observable Markov decision process
(POMDP). In this model, the intercepting agent is assumed to choose optimal actions that take
uncertainty about future movement into account. This model prescribes that both the catcher and the
outfielder act optimally for their respective situation and uncertainty. We show that an outfielder agent
using a highly stochastic internal model for prediction will indeed resort to purely reactive policies
resembling established heuristics from the literature. The intuitive reason for such short-sighted
behavior being optimal is that ball predictions over sufficiently long time horizons with highly
stochastic models effectively become guessing. Similarly, our model will yield optimally planned
actions based on predictions if the uncertainty encountered by the catcher agent is low while the
latency is non-negligible in comparison to the movement duration. Moreover, we identify catching
scenarios where the only strategy to intercept the ball requires to turn away from it and run as fast as
possible. While such strategies cannot be explained by the heuristics proposed so far, the optimal
control approach yields a plausible policy exhibiting both reactive and feedforward behavior. While
other motor tasks (e.g., reaching movements [9, 22], locomotion [1]) have been explained in terms of
stochastic optimal control theory, to the best of our knowledge this paper is the first to explain ball
catching within this computational framework. We show that the four previously described empirical
heuristics are actually optimal control policies. Moreover, our approach allows predictions for settings
that cannot be explained by heuristics and have not been studied before. As catching behavior has
previously been described as a prime example of humans not following complex computations but
using simple heuristics, this study opens an important perspective on the fundamental question of
human rationality.
2 Related work
A number of heuristics have been proposed to explain how humans catch balls, see [27, 8, 16] for an
overview. We focus on three theories well-supported by experiments: Chapman's theory, the generalized optic acceleration cancellation (GOAC) theory, and the linear optical trajectory (LOT) theory.
Chapman [6] considered a simple kinematic problem (see Figure 1) where the ball B follows a parabolic trajectory B0:N
while the agent C follows C0:N to intercept it. Only the position
of the agent is relevant; his gaze is always directed towards the
ball. Angle α is the elevation angle; angle β is the bearing angle
with respect to direction C0 B0 (or C2 G2, which is parallel). Due
to delayed reaction, the agent starts running when the ball is
already in the air. Chapman proposed two heuristics, i.e., the
optic acceleration cancellation (OAC) that prescribes maintaining d(tan α)/dt = const, and the constant bearing angle (CBA),
which requires β = const. However, Chapman did not explain
how these heuristics cope with disturbances and observations.
To incorporate visual observations, McLeod et al. [16] introduced the field of view of the agent into Chapman's theory and
coupled the agent's running velocity to the location of the ball
in the visual field. Instead of the CBA heuristic, a tracking
heuristic is employed to form the generalized optic acceleration
cancellation (GOAC) theory. This tracking heuristic allows reactions to uncertain observations. In our example in Figure 1,
[Figure 1: Well-known heuristics.]
the agent might have moved from C0 to C2 while maintaining a
constant β. To keep fulfilling this heuristic, the ball needs to arrive at B2 at the same time. However,
if the ball is already at B2′, the agent will see it falling into the right side of his field of view and he
will speed up. Thus, the agent internally tracks the angle γ between CD and C0 B0 and attempts to
adjust β to γ.
In Chapman's theory and the GOAC theory, the elevation angle α and the bearing angle β are
controlled independently. As such, separate control strategies are implausible, therefore McBeath
et al. [14] proposed the linear optical trajectory (LOT) heuristic that controls both angles jointly.
LOT suggests that the catching agent runs such that the projection of the ball trajectory onto the
plane perpendicular to the direction CD remains linear, which implies that ψ = ∠E2 B0 F2 remains
constant. As tan ψ = tan α2 / tan β2 can be observed from the pyramid B0 F2 C2 E2 with the right
angles at F2, there exists a coupling between the elevation angle α and the horizontal optical angle β
(defined as the angle between CB0 and CD), which can be used for directing the agent.
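For concreteness, the signals these heuristics monitor can be computed from the agent and ball positions as sketched below; the geometry conventions are our assumptions, not code from the paper.

import numpy as np

def heuristic_signals(agent_xy, ball_xyz, ref_dir):
    # Quantities monitored by the heuristics at one time step. agent_xy is
    # the agent's position on the ground plane, ball_xyz the ball position,
    # ref_dir a unit vector along the initial direction C0B0.
    to_ball = ball_xyz[:2] - agent_xy
    dist = np.linalg.norm(to_ball)
    tan_alpha = ball_xyz[2] / dist                   # OAC: d(tan alpha)/dt = const
    cos_beta = to_ball @ ref_dir / dist
    beta = np.arccos(np.clip(cos_beta, -1.0, 1.0))   # CBA: beta = const
    return tan_alpha, beta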
In contrast to the literature on the outfielder's catching in baseball, other strands of research in human
motor control have focused on predictive models [17] and optimality of behavior [9, 22]. Tasks
similar to the catcher's in baseball have yielded evidence for prediction. Humans were shown to
anticipate where a tennis ball will hit the floor when thrown with a bounce [13], and humans also
appear to use an internal model of gravity to estimate time-to-contact when catching balls [15].
Optimal control theory has been used to explain reaching movements (with cost functions such
as minimum-jerk [9], minimum-torque-change [23] and minimum end-point variance [12]), motor
coordination [22], and locomotion (as minimizing metabolic energy [1]).
3 Modeling ball catching under uncertainty as an optimal control problem
To parsimoniously model the catching agent, we rely on an optimal control formulation (Sec. 3.1)
where the agent is described in terms of state-transitions, observations and a cost function (Sec. 3.2).
3.1 Optimal control under uncertainty
In optimal control, the interaction of the agent with the environment is described by a stochastic
dynamic model or system (e.g., describing ball flight and odometry). The system's state

x_{k+1} = f(x_k, u_k) + ε_{k+1},   k = 0 … N − 1,   (1)

at the next time step k + 1 is given as a noisy function of the state x_k ∈ R^n and the action u_k ∈ R^m
at the current time step k. The mean state dynamics f are perturbed by zero-mean stationary white
Gaussian noise ε_k ~ N(0, Q) with a constant system noise covariance matrix Q modeling the
uncertainty in the system (e.g., the uncertainty in the agent's and ball's positions).
The state of the system is not always fully observed (e.g., the catching agent can only observe a
ball when he looks at it), lower-dimensional than the system's state (e.g., only ball positions can
directly be observed), and the observations are generally noisy (e.g., visuomotor noise affects ball
position estimates). Thus, at every time step k, sensory input provides a noisy lower-dimensional
measurement z_k ∈ R^p of the true underlying system state x_k ∈ R^n with p < n, described by

z_k = h(x_k) + ν_k,   k = 1 … N,   (2)

where h is a deterministic observation function and ν_k ~ N(0, R_k) is zero-mean non-stationary
white Gaussian noise with a state-dependent covariance matrix R_k = R(x_k). For catching, such
state-dependency is crucial to modeling the effect of the human visual field. When the ball is
at its center, measurements are least uncertain; whereas when the ball is outside the visual field,
observations are maximally uncertain.
The agent obviously can only generate actions based on the observations collected so far, while
affecting his and the environment's true next state. The history of observations allows forming
probability distributions over the state at different time steps, called beliefs. Taking the uncertainty
in (1) and (2) into account, the agent needs to plan and control in the belief space (i.e., the space of
probability distributions over states) rather than in the state space. We approximate the belief b_k about
the state of the system at time k by a Gaussian distribution with mean μ_k and variance Σ_k. For
brevity, we write b_k = (μ_k, Σ_k), associating the belief with its sufficient statistics. The belief dynamics
(b_{k−1}, u_{k−1}, z_k) → b_k are approximated by the extended Kalman filter [21, Chapter 3.3].
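As a reference for this update, a minimal extended Kalman filter step is sketched below in Python; f, h, their Jacobians F_jac and H_jac, and the state-dependent covariance function R_of_x are caller-supplied placeholders, so this illustrates the standard update equations rather than the authors' implementation.

import numpy as np

def ekf_update(mu, Sigma, u, z, f, h, F_jac, H_jac, Q, R_of_x):
    # Predict: push the belief mean through the dynamics, inflate the covariance.
    mu_pred = f(mu, u)
    F = F_jac(mu, u)
    Sigma_pred = F @ Sigma @ F.T + Q
    # Update: correct with the noisy, partial observation z.
    H = H_jac(mu_pred)
    R = R_of_x(mu_pred)                    # state-dependent observation noise
    S = H @ Sigma_pred @ H.T + R
    K = Sigma_pred @ H.T @ np.linalg.inv(S)
    mu_new = mu_pred + K @ (z - h(mu_pred))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new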
A cost function J can be a parsimonious description of the agent's objective. The agent will choose
the next action by optimizing such a cost function with respect to all future actions at every time step.
To make the resulting optimal control computations numerically tractable, future observations need
to be assumed to coincide with their most likely values (see, e.g., [19, 5]). Thus, at every time step,
the agent solves a constrained nonlinear optimization problem

min_{u_{0:N−1}}  J(μ_{0:N}, Σ_{0:N}; u_{0:N−1})
s.t.  u_k ∈ U_feasible,  k = 0 … N − 1,     (3)
      μ_k ∈ X_feasible,  k = 0 … N,
which returns an optimal sequence of controls u_{0:N−1} minimizing the objective function J. The
agent executes the first action, obtains a new observation, and replans again; such an approach is
known as model predictive control. The policy resulting from such computations is sub-optimal
because of open-loop planning and the limited time horizon, but with a growing time horizon it approaches
the optimal policy. Reaction time τ_r can be incorporated by delaying the observations. An interesting
property of this model is that the catching agent decides on his own, in an optimal way, when to gather
information by looking at the ball and when to exploit already acquired knowledge, depending on the
level of uncertainty he agrees to tolerate.
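The resulting receding-horizon loop can be summarized in a few lines; solve_ocp, step_env, and ekf_update are hypothetical callables standing in for the trajectory optimizer of Problem (3), the environment, and the belief update above.

def mpc_episode(b0, N, solve_ocp, step_env, ekf_update):
    # Model predictive control: replan from the current belief, execute only
    # the first planned action, and fold the new observation into the belief.
    b = b0
    for k in range(N):
        u_plan = solve_ocp(b, horizon=N - k)   # open-loop plan from belief b
        u = u_plan[0]                          # execute only the first action
        z = step_env(u)                        # receive a new noisy observation
        b = ekf_update(b, u, z)                # belief update
    return b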
3.2 A computational model of the catching agent for belief-space optimal control
Here we explain the modeling assumptions concerning states, actions, state transitions, and observations. After that we describe the cost function that the agent has to minimize.
States and actions. The state of the system x consists of the location and velocity of the ball in
3D space, the location and velocity of the catching agent in the ground plane, and the agent's gaze
direction, represented by a unit 3D vector. The agent's actions u consist of the force applied to the
center of mass and the rate of change of the gaze direction.
State transitions and observations. Several model components are essential to faithfully describe
catching behavior. First, the state transfer is described by the damped dynamics of the agent's center
of mass r̈_c = F − λ ṙ_c, where r_c = [x, y] are the agent's Cartesian coordinates, F is the applied
force resulting from the agent's actions, and λ is the damping coefficient. Damping ensures that the
catching agent's velocity does not grow without bound when the maximum force is applied. The
magnitude of the maximal force and the friction coefficient are chosen to fit Usain Bolt's sprint
data¹. Second, the gaze vector's direction d is controlled through the first derivatives of the two
angles that define it. These are the angle between d and its projection onto the xy-plane and the
angle between d's projection onto the xy-plane and the x-axis. Such parametrization of the actions
allows for realistically fast changes of gaze direction. Third, the maximal running speed depends
on the gaze direction, e.g., running backwards is slower than running forward or even sideways.
This relationship can be incorporated through dependence of the maximal applicable force F_max
on the direction d. It can be expressed by limiting the magnitude of the maximal applicable force to
|F_max(θ)| = F1 + F2 cos θ, where θ is the angle between F (i.e., the direction into which the catcher
accelerates) and the projection of the catcher's gaze direction d onto the xy-plane. The parameters F1
and F2 are chosen to fit human data on forward and backwards running². The resulting continuous-time
dynamics of agent and ball are converted into discrete-time state transfers using the classical
Runge-Kutta method. Fourth, the observation uncertainty depends on the state, which reflects the
fact that humans' visual resolution falls off across the visual field with increasing distance from the
fovea. When the ball falls to the side of the agent's field of view, the uncertainty about the ball's position
grows according to σ_o² = s (σ_max² (1 − cos φ) + σ_min²), depending on the distance to the ball s and
the angle φ between the gaze direction d and the vector pointing from the agent towards the ball. The
parameters {σ_min, σ_max} control the scale of the noise. The ball is modeled as a parabolic flight
perturbed by Gaussian noise with variance σ_b².
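A small sketch of this state-dependent observation variance, with illustrative variable names and parameter values (not taken from the paper):

import numpy as np

def obs_noise_variance(agent_pos, gaze_dir, ball_pos, sigma_min, sigma_max):
    # sigma_o^2 = s * (sigma_max^2 * (1 - cos phi) + sigma_min^2), where s is
    # the agent-ball distance and phi the angle between gaze and that vector.
    v = ball_pos - agent_pos
    s = np.linalg.norm(v)
    cos_phi = gaze_dir @ v / (np.linalg.norm(gaze_dir) * s)
    return s * (sigma_max**2 * (1.0 - cos_phi) + sigma_min**2)

# Looking straight at the ball (phi = 0) gives the minimal variance s * sigma_min^2.
agent = np.array([0.0, 0.0, 1.8]); ball = np.array([5.0, 0.0, 4.0])
gaze = (ball - agent) / np.linalg.norm(ball - agent)
print(obs_noise_variance(agent, gaze, ball, sigma_min=0.05, sigma_max=1.0))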
Cost function. The catching agent has to trade off success (i.e., catching the ball) with effort.
In other words, he aims at maximizing the probability of catching the ball with minimal effort. A
ball is assumed to be caught if it is within reach, i.e., not further away from the catching agent
than ρ_threshold at the final time. Thus, the probability of catching the ball can be expressed as
Pr(|μ_b − μ_c| ≤ ρ_threshold), where μ_b and μ_c are the predicted positions of the ball and the agent at
the final time (i.e., parts of the belief state of the agent). Since such beliefs are modeled as Gaussians,
this probability has a unique global maximum at μ_b = μ_c and Σ_N → 0+. Therefore, a final cost
J_final = w0 ‖μ_b − μ_c‖₂² + w1 tr Σ_N can approximate the negated log-probability of successfully
catching the ball while rendering the optimal control problem solvable. The weights w0 and w1
are set to optimally approximate this negated log-probability. The desire of the agent to be energy
efficient is encoded as a penalty on the control signals, J_energy = Δt Σ_{k=0}^{N−1} u_k^T M u_k, with the fixed
duration Δt of the discretized time steps and a diagonal weight matrix M to trade off controls. Finally,
we add a term that penalizes the agent's uncertainty at every time step, J_running = Δt w2 Σ_{k=0}^{N−1} tr Σ_k, which
encodes the agent's preference of certainty over uncertainty. It appears naturally in optimal control
problems when the maximum-likelihood-observations assumption is relaxed [24] and captures how
final uncertainty distributes over the preceding time steps, but has to be added explicitly within
the model predictive control framework in order to account for replanning at every time step. The
complete cost function is thus given by the sum

J = J_final + J_running + J_energy
  = w0 ‖μ_b − μ_c‖₂²  +  w1 tr Σ_N  +  Δt w2 Σ_{k=0}^{N−1} tr Σ_k  +  Δt Σ_{k=0}^{N−1} u_k^T M u_k,

where the four terms penalize the final position error, the final uncertainty, the running uncertainty,
and the total energy, respectively; the catching agent has to minimize this cost in order to successfully
intercept the ball.
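For concreteness, a direct transcription of this cost into Python; the arguments are assumed to be NumPy arrays, and the function is a sketch of the stated formula, not the authors' implementation.

import numpy as np

def catching_cost(mu_b, mu_c, Sigmas, us, dt, w0, w1, w2, M):
    # mu_b, mu_c: predicted final ball/agent positions; Sigmas: belief
    # covariances Sigma_0 .. Sigma_N; us: planned controls u_0 .. u_{N-1};
    # dt plays the role of the time-step duration in the text.
    j_final = w0 * np.sum((mu_b - mu_c) ** 2) + w1 * np.trace(Sigmas[-1])
    j_running = dt * w2 * sum(np.trace(S) for S in Sigmas[:-1])
    j_energy = dt * sum(u @ M @ u for u in us)
    return j_final + j_running + j_energy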
¹ Usain Bolt's world record sprint data: http://datagenetics.com/blog/july32013/index.html
² World records for backwards running: http://www.recordholders.org/en/list/backwards-running.html
3.3 Implementation details
To solve Problem (3), we use the covariance-free multiple shooting method [18] for trajectory
optimization [7, 3] in the belief space. Derivatives of the cost function are computed using CasADi [2].
Non-linear optimization is carried out by Ipopt [26]; L-BFGS Hessian approximations and warm-starts are used.
4 Simulated experiments and results
In this section, we present the results of two simulated scenarios and a comparative evaluation. First,
using the optimal control approach, we show that continuous tracking (where the ball always remains
in the field of view of the outfielder) naturally leads to the heuristics from the literature [6, 16, 14] if
the catching agent is sufficiently fast in comparison to the ball, independent of whether he is running
forward, backwards, or sideways. Subsequently, we show that more complex behavior arises when
the ball is too fast to be caught while running only sideways or backwards (e.g., as in soccer or long
passes in American football). Here, tracking is interrupted as the agent needs to turn away from the
ball to run forward. While the heuristics break, our optimal control formulation exhibits plausible
strategies similar to those employed by human catchers. Finally, we systematically study the effects
of noise and time delay on the agent's policy. The optimal control policies arising from our model
switch between reactive and predictive behaviors depending on uncertainty and latency.
4.1 Continuous tracking of an outfielder: heuristics hold

To directly compare our model against empirical catching data that has been described as
resulting from a heuristic, we reproduce the settings from [16], where a ball flew 15 m in
3 s and a human subject starting about 6 m away from the impact point had to intercept it.
The optimal control policy can deal with such situations and yields the behavior observed by McLeod
et al. [16]. In fact, even when doubling all distances, the reactive control policy exhibits all four major
heuristics (OAC, GOAC, CBA and LOT) with approximately the same precision as in the original
human experiments.

Figure 2: A typical simulated trajectory of a successful catch in the continuous tracking scenario as
encountered by the outfielder. The uncertainty in the belief state is kept low by the agent by fixating
the ball. Such empirically observed scenarios [6, 16, 14] have led to the proposition of the heuristics,
which arise naturally from our optimal control formulation. (Legend: catcher's trajectory, catcher's
gaze, ball trajectory, observed ball trajectory, belief trajectory mean and covariance; axes in meters.)

Figure 2
are depicted in green (note that the ball is frequently hidden behind the belief state trajectory). The
agent?s observations and the mean belief trajectory of the ball are represented by magenta crosses and
a magenta line, respectively. The belief uncertainty is indicated by the cyan ellipsoids that capture
95% of the probability mass. The gaze vectors of the agent are shown as red arrows. The catching
agent starts sufficiently close to the interception point to continuously visually track the ball, therefore
he is able to efficiently reduce his uncertainty on the ball's position and successfully intercept it
while keeping it in sight. Note that the agent does not follow a straight trajectory but a curved one in
agreement with human experiments [16].
Figure 3 shows plots of the relevant angles over time to compare the behavior exhibited by human
catchers to the optimal catching policy. The tangent of the elevation angle tan α grows linearly
with time, as predicted by the optic acceleration cancellation heuristic (OAC). The bearing angle
β remains constant (within a 5 deg margin), as predicted by the constant bearing angle heuristic
(CBA). The rotation angle δ oscillates around β, as predicted by the generalized optic acceleration
cancellation theory (GOAC). The tangent of the horizontal optical angle tan γ is proportional to
tan α, as predicted by the linear optical trajectory theory (LOT). The small oscillations in the rotation
angle and in the horizontal optical angle are due to reaction delay and uncertainty; they are also
predicted by GOAC and LOT. Thus, in this well-studied case, the model produces an optimal policy
that exhibits behavior which is fully in accordance with the heuristics.
Figure 3: During simulations of successful catches for the continuous tracking scenario encountered
by the outfielder (shown in Figure 2), the policies resulting from our optimal control formulation
always fulfill the heuristics (OAC, GOAC, CBA, and LOT) from the literature with approximately the
same precision as in the original human experiments. (Panels: OAC, tangent of elevation angle tan α
over time with linear fit; tracking heuristic (part of GOAC), rotation angle δ and bearing angle β over
time; CBA, bearing angle β over time with constant fit; LOT, tangent of horizontal optical angle
tan γ against tan α with linear fit.)
4.2 Interrupted tracking during long passes: heuristics break, but prediction is required

The competing theory to the heuristics claims that a predictive internal model allows humans to
intercept the ball [15, 13]. Brancazio [4] points out that "the best outfielders can even turn their
backs to the ball, run to the landing point, and then turn and wait for the ball to arrive". Similar
behavior is observed in football and American football during long passes. To see whether predictions
become necessary, we reproduced situations where the agent cannot catch the ball when acting purely
reactively. For example, if the running time to the interception point when running backwards (i.e.,
the distance to the interception point divided by the maximal backwards running velocity) is
substantially higher than the flight time of the ball, no backwards running strategy will be successful.
Thus, by varying the initial conditions for the catching agent and the ball, new scenarios can be
generated using our optimal control model. The agent's control policy can be tested for reliance on
predictions, as it is available in the form of a computational model, i.e., if the computed policy makes
use of the belief states at future time steps, the agent clearly employs an internal model to predict the
interception point. By choosing appropriate initial conditions for the ball and the agent, we can
pursue such scenarios.

Figure 4: An interception plan that leads to a successful catch despite violating heuristics. Here, the
agent would not be able to reach the interception point in time while running backwards and, thus,
has to turn forward to run faster. The resulting optimal control policy relies on beliefs about the
future generated by an internal model. (Legend: plan, posterior, prior, prior + posterior, catcher's
gaze; axes in meters.)

Figure 5: For initial conditions (positions of the ball and the agent) which do not allow the agent
to reach the interception point by running backwards or sideways, the optimal policy will include
running forward with maximal velocity (as shown in Figure 4). In this case, the agent cannot
continuously visually track the ball and, expectedly, the heuristics do not hold. (Panels: OAC,
tracking heuristic, CBA, and LOT, as in Figure 3.)

For example, if the ball flies over the agent's head, he has to turn
away from it for a moment in order to gain speed by running forward, instead of running backwards
or sideways and looking at the ball all the time. Figure 4 shows such an interception plan, where
the agent decides to initially speed up and, when sufficiently close, turn around and track the ball
while running sideways. Notice that the future belief uncertainty (i.e., the posterior uncertainty Σ
returned by the extended Kalman filter), represented by red ellipses, grows when the catcher is not
looking at the ball and shrinks otherwise. The prior uncertainty (obtained by integrating out future
observations), shown in yellow, on the other hand, grows towards the end of the trajectory because
future observations are not available at planning time. Similar to [5, 25], we can show for our model
predictive control law that the sum of prior and posterior uncertainties (shown as green circles)
equals the total system uncertainty obtained by propagating the belief state into the future without
incorporating future observations. Figure 5 shows that the heuristics fail to explain this catch, even
in the final time steps where the catching agent is tracking the ball to intercept it. OAC deviates from
linearity, CBA is not constant, the tracking heuristic wildly deviates from the prediction, and LOT
is highly non-linear. GOAC and LOT are affected more dramatically because they directly depend
on the catcher's gaze, in contrast to OAC and CBA. Since the heuristics were not meant to describe
such situations, they predictably do not hold. Only an internal model can explain the reliance of the
optimal policy on the future belief states.
4.3 Switching behaviors when uncertainty and reaction time are varied
The previous experiment has pointed us towards policies that switch between predictive subpolicies
based on internal models and reactive policies based on current observations. To systematically
study what behaviors arise, we use the scenario from Section 4.2 and vary two essential model
parameters: the system-to-observation noise ratio ρ₁ = log(σ_b²/σ_o²) and the reaction-time-to-task-duration
ratio ρ₂ = τ_r/T, where T is the duration of the ball flight. The system-to-observation noise
ratio effectively determines whether predictions based on the internal model of the dynamics are
sufficiently trustworthy for (partially) open-loop behavior or whether reactive control based on
the observations of the current state of the system should be preferred. The reaction-time-to-task-duration
ratio sets the time scale of the problem. For example, an outfielder in baseball may have
about 3 s to catch a ball and his reaction delay of about 200 ms is negligible, whereas a catcher in
baseball often has to act within a fraction of a second, and, thus, the reaction latency becomes crucial.
We run the experiment at different noise levels and time delays and average the results over 10 trials.
In all cases, the agent starts at the point (20, 5) looking towards the origin, while the ball flies from
the origin towards the point (30, 15) in 3 s. All parameters are kept fixed apart from the reaction
time and system noise; in particular, task duration and observation noise are kept fixed. Figure 6
shows how the agent's policy depends on the parameters. Boundaries correspond to contour lines of
the function that equals the number of times the agent turns towards the ball. We count turns by
analyzing trajectories for gaze direction changes and reduction of uncertainty (e.g., in Figure 4 the
agent turns once towards the ball). When reaction delays are long and predictions are reliable, the
agent turns towards the interception points and runs as fast as he can (purely predictive strategies;
lower right corner in Figure 6). When predictions are not sufficiently trustworthy, the agent has to
switch multiple times between a reactive policy to gather information and a predictive feedforward
strategy to successfully fulfill the task (upper left corner). When reaction time and system noise
become sufficiently large, the agent fails to intercept the ball (upper right grayed out area). Thus,
seemingly substantially different behaviors can be explained by means of a single model. Note that
in this figure a purely reactive strategy (as required for only using the heuristics) is not possible.
However, if different initial conditions enabling the purely reactive strategy are used, the upper left
corner is dominated by the purely reactive strategy.

Figure 6: Switches between reactive and feedforward policies are determined by uncertainties and
latency.
5 Discussion and conclusion
We have presented a computational model of human interception of a moving target, such as an
airborne ball, in the form of a continuous state-action partially observable Markov decision problem.
Depending on initial conditions, the optimal control solver either generates continuous tracking
behavior or directs the catching agent to turn away from the ball in order to speed up. Interception
trajectories in the first case turn out to demonstrate all properties that were previously taken as
evidence that humans avoid complex computations by employing simple heuristics. In the second
case, we have shown that different regimes of switches between reactive and predictive behavior
arise depending on relative uncertainty and latency. When the agent has sufficient time to gather
observations (bottom-left in Figure 6), he turns towards the ball as soon as possible and continuously
tracks it till the end (e.g., outfielder in baseball acts in this regime). If he is confident in the interception
point prediction but the task duration is so short relative to the latency that he does not have sufficient
time to gather observations (bottom-right), he will rely entirely on the internal model (e.g., catcher
in baseball may act in this regime). If the agent's interception point prediction is rather uncertain
(e.g., due to system noise), the agent will gather observations more often regardless of time delays.
Conclusions regarding the trade-off between reactive and predictive behaviors may well generalize
beyond ball catching to various motor skills. Assuming an agent has an internal model of a task and
gets noisy delayed partial observations, he has to tolerate a certain level of uncertainty; if moreover
the agent has a limited time to perform the task, he is compelled to act based on prediction instead
of observations. As our optimal control policy can explain both reactive heuristics and predictive
feedforward strategies, as well as switches between these two kinds of subpolicies, it can be viewed
as a unifying explanation for the two seemingly contradictory theories of target interception.
In this paper, we have provided a computational level explanation for a range of observed human
behaviors in ball catching. Importantly, while previous interpretations of whether human catching
behavior is the result of complex computations or the result of simple heuristics have been inconclusive, here we have demonstrated that what looks like simple rules of thumb from a bag of tricks is
actually the optimal solution to a continuous partially observable Markov decision problem. This
result therefore fundamentally contributes to our understanding of human rationality.
Acknowledgements
This project has received funding from the European Union's Horizon 2020 research and innovation
programme under grant agreement No 640554.
References
[1] F. C. Anderson and M. G. Pandy. Dynamic optimization of human walking. Journal of Biomechanical Engineering, 123(5):381–390, 2001.
[2] J. Andersson, J. Åkesson, and M. Diehl. CasADi: A symbolic package for automatic differentiation and optimal control. In Recent Advances in Algorithmic Differentiation, pages 297–307. Springer, 2012.
[3] J. T. Betts. Survey of numerical methods for trajectory optimization. Journal of Guidance, Control, and Dynamics, 21(2):193–207, 1998.
[4] P. J. Brancazio. Looking into Chapman's homer: The physics of judging a fly ball. American Journal of Physics, 53(9):849, 1985.
[5] A. Bry and N. Roy. Rapidly-exploring random belief trees for motion planning under uncertainty. In Proceedings - IEEE ICRA, pages 723–730, 2011.
[6] S. Chapman. Catching a baseball. American Journal of Physics, 36(10):868, 1968.
[7] M. Diehl, H. G. Bock, H. Diedam, and P. B. Wieber. Fast direct multiple shooting algorithms for optimal robot control. In Lecture Notes in Control and Information Sciences, volume 340, pages 65–93, 2006.
[8] P. W. Fink, P. S. Foo, and W. H. Warren. Catching fly balls in virtual reality: a critical test of the outfielder problem. Journal of Vision, 9(13):1–8, 2009.
[9] T. Flash and N. Hogan. The coordination of arm movements: an experimentally confirmed mathematical model. The Journal of Neuroscience, 5(7):1688–1703, 1985.
[10] G. Gigerenzer. Gut Feelings: The Intelligence of the Unconscious. Penguin, 2007.
[11] G. Gigerenzer and H. Brighton. Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1):107–143, 2009.
[12] C. M. Harris and D. M. Wolpert. Signal-dependent noise determines motor planning. Nature, 394(6695):780–784, 1998.
[13] M. M. Hayhoe, N. Mennie, K. Gorgos, J. Semrau, and B. Sullivan. The role of prediction in catching balls. Journal of Vision, 4(8):156–156, 2004.
[14] M. McBeath, D. Shaffer, and M. Kaiser. How baseball outfielders determine where to run to catch fly balls. Science, 268(5210):569–573, 1995.
[15] J. McIntyre, M. Zago, A. Berthoz, and F. Lacquaniti. Does the brain model Newton's laws? Nature Neuroscience, 4(7):693–694, 2001.
[16] P. McLeod, N. Reed, and Z. Dienes. The generalized optic acceleration cancellation theory of catching. Journal of Experimental Psychology: Human Perception and Performance, 32(1):139–148, 2006.
[17] R. C. Miall and D. M. Wolpert. Forward models for physiological motor control, 1996.
[18] S. Patil, G. Kahn, M. Laskey, and J. Schulman. Scaling up Gaussian belief space planning through covariance-free trajectory optimization and automatic differentiation. In Algorithmic Foundations of Robotics XI, pages 515–533, 2015.
[19] R. Platt, R. Tedrake, L. Kaelbling, and T. Lozano-Perez. Belief space planning assuming maximum likelihood observations. In Robotics: Science and Systems, 2010.
[20] H. A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1):99–118, 1955.
[21] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[22] E. Todorov and M. I. Jordan. Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11):1226–1235, 2002.
[23] Y. Uno, M. Kawato, and R. Suzuki. Formation and control of optimal trajectory in human multijoint arm movement: Minimum torque-change model. Biological Cybernetics, 61(2):89–101, 1989.
[24] J. van den Berg, S. Patil, and R. Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263–1278, 2012.
[25] M. P. Vitus and C. J. Tomlin. Closed-loop belief space planning for linear, Gaussian systems. In Proceedings - IEEE ICRA, pages 2152–2159, 2011.
[26] A. Wächter and L. T. Biegler. On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming, 106:25–57, 2006.
[27] M. Zago, J. McIntyre, P. Senot, and F. Lacquaniti. Visuo-motor coordination and internal models for object interception, 2009.
Automated scalable segmentation of neurons from multispectral images
Uygar Sümbül
Grossman Center for the Statistics of Mind
and Dept. of Statistics, Columbia University
Douglas Roossien Jr.
University of Michigan Medical School
Fei Chen
MIT Media Lab and McGovern Institute
Nicholas Barry
MIT Media Lab and McGovern Institute
Edward S. Boyden
MIT Media Lab and McGovern Institute
Dawen Cai
University of Michigan Medical School
John P. Cunningham
Grossman Center for the Statistics of Mind
and Dept. of Statistics, Columbia University
Liam Paninski
Grossman Center for the Statistics of Mind
and Dept. of Statistics, Columbia University
Abstract
Reconstruction of neuroanatomy is a fundamental problem in neuroscience.
Stochastic expression of colors in individual cells is a promising tool, although its
use in the nervous system has been limited due to various sources of variability in
expression. Moreover, the intermingled anatomy of neuronal trees is challenging
for existing segmentation algorithms. Here, we propose a method to automate the
segmentation of neurons in such (potentially pseudo-colored) images. The method
uses spatio-color relations between the voxels, generates supervoxels to reduce
the problem size by four orders of magnitude before the final segmentation, and is
parallelizable over the supervoxels. To quantify performance and gain insight, we
generate simulated images, where the noise level and characteristics, the density
of expression, and the number of fluorophore types are variable. We also present
segmentations of real Brainbow images of the mouse hippocampus, which reveal
many of the dendritic segments.
1 Introduction
Studying the anatomy of individual neurons and the circuits they form is a classical approach
to understanding how nervous systems function since Ramón y Cajal's founding work. Despite
a century of research, the problem remains open due to a lack of technological tools: mapping
neuronal structures requires a large field of view, a high resolution, a robust labeling technique, and
computational methods to sort the data. Stochastic labeling methods have been developed to endow
individual neurons with color tags [1, 2]. This approach to neural circuit mapping can utilize the
light microscope, provides a high-throughput and the potential to monitor the circuits over time, and
complements the dense, small scale connectomic studies using electron microscopy [3] with its large
field-of-view. However, its use has been limited due to its reliance on manual segmentation.
The initial stochastic, spectral labeling (Brainbow) method had a number of limitations for neuroscience applications including incomplete filling of neuronal arbors, disproportionate expression
of the nonrecombined fluorescent proteins in the transgene, suboptimal fluorescence intensity, and
color shift during imaging. Many of these limitations have since improved [4] and developments
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
in various aspects of light microscopy provide further opportunities [5, 6, 7, 8]. Moreover, recent
approaches promise a dramatic increase in the number of (pseudo) color sources [9, 10, 11]. Taken
together, these advances have made light microscopy a much more powerful tool for neuroanatomy
and connectomics. However, existing automated segmentation methods are inadequate due to the
spatio-color nature of the problem, the size of the images, and the complicated anatomy of neuronal
arbors. Scalable methods that take into account the high-dimensional nature of the problem are
needed.
Here, we propose a series of operations to segment 3-D images of stochastically tagged nervous
tissues. Fundamentally, the computational problem arises from insufficient color consistency within
individual cells and from voxels occupied by more than one neuron.
through collaborative filtering [12], and obtain a supervoxel representation that reduces the problem
size by four orders of magnitude. We consider the segmentation of neurons as a graph segmentation
problem [13], where the nodes are the supervoxels. Spatial discontinuities and color inhomogeneities
within segmented neurons are penalized using this graph representation. While we concentrate on
neuron segmentation in this paper, our method should be equally applicable to the segmentation of
other cell classes such as glia.
To study various aspects of stochastic multispectral labeling, we present a basic simulation algorithm
that starts from actual single neuron reconstructions. We apply our method on such simulated images
of retinal ganglion cells, and on two different real Brainbow images of hippocampal neurons, where
one dataset is obtained by expansion microscopy [5].
2 Methods
Successful segmentations of color-coded neural images should consider both the connected nature
of neuronal anatomy and the color consistency of the Brainbow construct. However, the size and
the noise level of the problem prohibit a voxel-level approach (Fig. 1). Methods that are popular in
hyperspectral imaging applications, such as nonnegative matrix factorization [14], are not immediately
suitable either because the number of color channels are too few and it is not easy to model neuronal
anatomy within these frameworks. Therefore, we develop (i) a supervoxelization strategy, (ii)
explicitly define graph representations on the set of supervoxels, and (iii) design the edge weights to
capture the spatio-color relations (Fig. 2a).
2.1 Denoising the image stack
Voxel colors within a neurite can drift along the neurite, exhibit high frequency variations, and differ
between the membrane and the cytoplasm when the expressed fluorescent protein is membranebinding (Fig. 1). Collaborative filtering generates an extra dimension consisting of similar patches
within the stack, and applies filtering in this extra dimension rather than the physical dimensions.
We use the BM4D denoiser [12] on individual channels of the datasets, assuming that the noise is
Gaussian. Figure 2 demonstrates that the boundaries are preserved in the denoised image.
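For readers who want to reproduce the per-channel workflow, the sketch below substitutes a non-local-means filter from scikit-image for BM4D, which is distributed as a separate toolbox; both denoisers exploit self-similar patches, but the substitution is an assumption for illustration only.

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_stack(stack):
    # Denoise each color channel of an (X, Y, Z, C) stack independently,
    # assuming additive Gaussian noise as in the text.
    out = np.empty_like(stack, dtype=float)
    for c in range(stack.shape[-1]):
        vol = stack[..., c].astype(float)
        sigma = float(np.mean(estimate_sigma(vol)))   # rough noise level estimate
        out[..., c] = denoise_nl_means(vol, patch_size=5, patch_distance=6,
                                       h=0.8 * sigma, sigma=sigma, fast_mode=True)
    return out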
2.2 Dimensionality reduction
We make two basic observations to reduce the size of the dataset: (i) Voxels expressing fluorescent
proteins form the foreground, and the dark voxels form the much larger background in typical
Brainbow settings. (ii) The basic promise of Brainbow suggests that nearby voxels within a neurite
have very similar colors. Hence, after denoising, there must be many topologically connected voxel
sets that also have consistent colors.
The watershed transform [15] considers its input as a topographic map and identifies regions associated
with local minima ("catchment basins" in a flooding interpretation of the topographic map). It can
be considered as a minimum spanning forest algorithm, and obtained in linear time with respect
to the input size [16, 17]. For an image volume V = V (x, y, z, c), we propose to calculate the
topographical map T (disaffinity map) as
T(x, y, z) = max_{t ∈ {x, y, z}} max_c |G_t(x, y, z, c)|,   (1)
where x, y, z denote the spatial coordinates, c denotes the color coordinate, and Gx , Gy , Gz denote
the spatial gradients of V (nearest neighbor differencing). That is, any edge with significant deviation
in any color channel will correspond to a ?mountain? in the topographic map. A flooding parameter,
f , assigns the local minima of T to catchment basins, which partition V together with the boundary
voxels. We assign the boundaries to neighboring basins based on color proximity. The background is
2
A
B
1
C
0.8
D
0.9
E
0.7
0.8
0.6
normalized intensity
normalized intensity
0.7
0.6
0.5
0.4
0.5
0.4
0.3
0.3
0.2
0.2
0.1
0.1
0
0
1
F
5
10
x?position (?m)
0
15
0
0.7
G
2
4
2
4
6
8
10
12
14
6
8
10
12
14
y?position (?m)
H
0.9
0.6
0.8
0.5
normalized intensity
normalized intensity
0.7
0.6
0.5
0.4
0.3
0.4
0.3
0.2
0.2
0.1
0.1
0
0
5
10
x?position (?m)
15
0
0
y?position (?m)
Figure 1: Multiple noise sources affect the color consistency in Brainbow images. a, An 85 × 121
Brainbow image patch from a single slice (physical size: 8.5 µm × 12.1 µm). Expression level differs
significantly between the membrane and the cytoplasm along a neurite (arrows). b, A maximum
intensity projection view of the 3-d image stack. Color shifts along a single neurite, which travels
to the top edge and into the page (arrows). c, A 300 × 300 image patch from a single slice of a
different Brainbow image (physical size: 30 µm × 30 µm). d, The intensity variations of the different
color channels along the horizontal line in c. e, Same as d for the vertical line in c. f, The image
patch in c after denoising. g–h, Same as d and e after denoising. For the plots, the range of individual
color channels is [0, 1].
The background is the largest and darkest basin. We call the remaining objects supervoxels [18, 19]. Let F denote the
binary image identifying all of the foreground voxels.
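A minimal sketch of the disaffinity map and watershed step, using SciPy and scikit-image; for brevity it keeps boundary voxels with their basins instead of reassigning them by color proximity, and the h-minima transform is an assumed stand-in for the flooding parameter f.

import numpy as np
from scipy import ndimage
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def supervoxelize(V, f):
    # V: (X, Y, Z, C) denoised stack; f: flooding depth.
    # T(x, y, z) = max over spatial axes and channels of |nearest-neighbor gradient|.
    T = np.zeros(V.shape[:3])
    for axis in range(3):
        G = np.abs(np.diff(V, axis=axis, append=np.take(V, [-1], axis=axis)))
        T = np.maximum(T, G.max(axis=-1))
    # Flood from minima deeper than f; each catchment basin becomes one supervoxel.
    markers, _ = ndimage.label(h_minima(T, f))
    return watershed(T, markers)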
Objects without interior voxels (e.g., single-voxel thick dendritic segments) may not be detected by
Eq. 1 (Supp. Fig. 1). We recover such "bridges" using a topology-preserving warping (in this case,
only shrinking is used) of the thresholded image stack into F [20, 21]:

B = W(I_θ, F),   (2)

where I_θ is binary and obtained by thresholding the intensity image at θ. W returns a binary image
B such that B has the same topology as I_θ and agrees with F as much as possible. Each connected
component of B ∩ F̄ (foreground of B and background of F) is added to a neighboring supervoxel
based on color proximity, and discarded if no spatial neighbors exist (Supp. Text).
We ensure the color homogeneity within supervoxels by dividing non-homogeneous supervoxels (e.g.,
large color variation across voxels) into connected subcomponents based on color until the desired
homogeneity is achieved (Supp. Text). We summarize each supervoxel?s color by its mean color.
We apply local heuristics and spatio-color constraints iteratively to further reduce the data size and
demix overlapping neurons in voxel space (Fig. 2f,g and Supp. Text). Supp. Text provides details on
the parallelization and complexity of these steps and the method in general.
Figure 2: Best viewed digitally. a, A schematic of the processing steps. b, Max. intensity projection
of a raw Brainbow image. c, Max. intensity projection of the denoised image. d, A zoomed-in
version of the patch indicated by the dashed square in b. e, The corresponding denoised image.
f, One-third of the supervoxels in the top-left quadrant (randomly chosen). g, Same as f after the
merging step. h1–h4, Same as b, c, f, g for simulated data. Scale bars, 20 µm.
2.3 Clustering the supervoxel set
We consider the supervoxels as the nodes of a graph and express their spatio-color similarities
through the existence (and the strength) of the edges connecting them, summarized by a highly
sparse adjacency matrix. Removing edges between supervoxels that are not spatio-color neighbors
avoids spurious links. However, this procedure also removes many genuine links due to high color
variability (Fig. 1). Moreover, it cannot identify disconnected segments of the same neuron (e.g., due
to limited field-of-view). Instead, we adjust the spatio-color neighborhoods based on the "reliability"
of the colors of the supervoxels. Let S denote the set of supervoxels in the dataset. We define
the sets of reliable and unreliable supervoxels as S_r = {s ∈ S : n(s) > t_s, h(s) < t_d} and
S_u = S \ S_r, respectively, where n(s) denotes the number of voxels in s, h(s) is a measure of the
color heterogeneity (e.g., the maximum difference between intensities across all color channels), and t_s
and t_d are the corresponding thresholds.
We describe a graph G = (V, E), where V denotes the vertex set (supervoxels) and E = E_s ∪ E_c ∪ E_s'
denotes the edges between them:

E_s  = {(ij) : δ_ij < ε_s, i ≠ j}
E_c  = {(ij) : s_i, s_j ∈ S_r, d_ij < ε_c, i ≠ j}
E_s' = {(ij), (ji) : s_i ∈ S_u, (ij) ∉ E_s, O_i(j) < k_min − K_i, i ≠ j},   (3)

where δ_ij and d_ij are the spatial and color distances between s_i and s_j, respectively, and ε_s and ε_c are
the corresponding maximum distances. An unreliable supervoxel with too few spatial neighbors is
allowed to have up to k_min edges via proximity in color space. Here, O_i(j) is the order of supervoxel
s_j in terms of the color distance from supervoxel s_i, and K_i is the number of ε_s-spatial neighbors of
s_i. (Note the symmetric formulation in E_s'.) Then, we construct the adjacency matrix as

A(i, j) = exp(−β d_ij²)  if (ij) ∈ E,  and 0 otherwise,   (4)

where β controls the decay in affinity with respect to distance in color. We use k-d tree structures
to efficiently retrieve the color neighborhoods [22]. Here, the distance between two supervoxels is
min_{v∈V, u∈U} D(v, u), where V and U are the voxel sets of the two supervoxels and D(v, u) is the
Euclidean distance between voxels v and u.
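A sketch of building the sparse affinity matrix with a k-d tree; it implements the Gaussian affinity of Eq. (4) on color neighbors and on user-supplied spatial neighbors, and omits the special rule for unreliable supervoxels.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import cKDTree

def color_adjacency(colors, spatial_pairs, eps_c, beta):
    # colors: (n, C) array of mean supervoxel colors; spatial_pairs: iterable
    # of (i, j) index pairs that are spatial neighbors.
    n = len(colors)
    A = lil_matrix((n, n))
    tree = cKDTree(colors)
    for i, j in tree.query_pairs(eps_c):      # color neighbors within eps_c
        A[i, j] = A[j, i] = np.exp(-beta * np.sum((colors[i] - colors[j]) ** 2))
    for i, j in spatial_pairs:                # spatial neighbors keep an edge too
        A[i, j] = A[j, i] = np.exp(-beta * np.sum((colors[i] - colors[j]) ** 2))
    return A.tocsr()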
A classical way of partitioning graph nodes that are nonlinearly separable is by minimizing a function
(e.g., the sum or the maximum) of the edge weights that are severed during the partitioning [23].
Here, we use the normalized cuts algorithm [24, 13] with two simple modifications: the k-means step
is weighted by the sizes of the supervoxels and initialized by a few iterations of k-means clustering
of the supervoxel colors only (Supp. Text). The resulting clusters partition the image stack (together
with the background), and represent a segmentation of the individual neurons within the image stack.
An estimate of the number of neurons can be obtained from a Dirichlet process mixture model [25].
While this estimate is often rough [26], the segmentation accuracy appears resilient to imperfect
estimates (Fig. 4c).
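A compact, dense-eigensolver sketch of the size-weighted spectral clustering step; the color-based initialization is omitted, and the weighting uses scikit-learn's sample_weight argument, so this is an illustration rather than the authors' modified normalized cuts.

import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def normalized_cuts(A, k, sizes):
    # A: symmetric affinity matrix; k: number of clusters (estimated neurons);
    # sizes: supervoxel voxel counts used as k-means sample weights.
    L = laplacian(A, normed=True)
    L = L.toarray() if hasattr(L, "toarray") else np.asarray(L)
    vals, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]                                   # k smallest eigenvectors
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    km = KMeans(n_clusters=k, n_init=10).fit(emb, sample_weight=np.asarray(sizes))
    return km.labels_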
2.4 Simulating Brainbow tissues
We create basic simulated Brainbow image stacks from volumetric reconstructions of single neurons
(Algorithm 1). For simplicity, we model the neuron color shifts by a Brownian noise component on
the tree, and the background intensity by a white Gaussian noise component (Supp. Text).
We quantify the segmentation quality of the voxels using the adjusted Rand index (ARI), whose
maximum value is 1 (perfect agreement) and expected value is 0 for random clusters [27] (Supp. Text).
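Computing the ARI itself is a one-liner with scikit-learn; the toy label arrays below are illustrative only.

from sklearn.metrics import adjusted_rand_score

# Ground-truth neuron labels and inferred cluster labels for the same
# (foreground) voxels, flattened into 1-d arrays.
truth = [1, 1, 2, 2, 3, 3]
pred = [5, 5, 7, 7, 7, 9]
print(adjusted_rand_score(truth, pred))   # 1.0 = perfect, ~0 = chance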
Algorithm 1 Brainbow image stack simulation
Require: number of color channels C, set of neural shapes S = {n_i}_i, stack (empty, 3d space + color), neural color variability σ₁, background noise variability σ₂, pre-assignment percentage r, saturation level M
1: for n_i ∈ S do
2:   Shift and rotate neuron n_i to minimize overlap with existing neurons in the stack
3:   Generate a uniformly random color vector v_i of length C
4:   Identify the connected components {c_ij}_j of n_i within the stack
5:   for c_ij ∈ {c_ij}_j do
6:     Pre-assign v_i to r% of the voxels of c_ij
7:     C-dimensional random walk on c_ij with steps N(0, σ₁² I) (Supp. Text)
8:   end for
9:   Add neuron n_i to the stack (with additive colors for shared voxels)
10: end for
11: Add white noise to each voxel, generated by N(0, σ₂² I)
12: if brightness exceeds M then
13:   Saturate at M
14: end if
15: return stack
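A sketch of the color-painting inner loop (steps 6-7 of Algorithm 1); the assumption that a component's voxel indices are ordered so that consecutive entries are spatial neighbors keeps the random walk one-dimensional and the sketch short.

import numpy as np

def paint_component(voxel_colors, component, v, sigma1, r, rng):
    # component: indices of one connected component's voxels (ordering assumed);
    # v: the neuron's base color vector; r: pre-assignment percentage.
    color = v.copy()
    for idx in component:
        if rng.random() < r / 100.0:       # pre-assigned anchor voxels reset to v
            color = v.copy()
        color = color + rng.normal(0.0, sigma1, size=v.shape)  # Brownian drift
        voxel_colors[idx] = np.clip(color, 0.0, None)

rng = np.random.default_rng(0)
colors = np.zeros((100, 4))                # 100 voxels, 4 color channels
paint_component(colors, np.arange(100), rng.random(4), sigma1=0.02, r=5, rng=rng)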
3 Datasets
To simulate Brainbow image stacks, we used volumetric single neuron reconstructions of mouse
retinal ganglion cells in Algorithm 1. The dataset is obtained from previously published studies [28,
29]. Briefly, the voxel size of the images is 0.4 µm × 0.4 µm × 0.5 µm, and the field of view of individual
stacks is 320 µm × 320 µm × 70 µm or larger. We evaluate the effects of different conditions on a central
portion of the simulated image stack.

Both real datasets are images of the mouse hippocampal tissue. The first dataset has 1020 × 1020 × 225
voxels (voxel size: 0.1 × 0.1 × 0.3 µm³), and the tissue was imaged at 4 different frequencies (channels).
The second dataset has 1080 × 1280 × 134 voxels with an effective voxel size of 70 × 70 × 40 nm,
where the tissue was 4× linearly expanded [5], and imaged at 3 different channels. The Brainbow
constructs were delivered virally, and approximately 5% of the neurons express a fluorescence gene.
4 Results
Parameters used in the experiments are reported in Supp. Text.
Fig. 1b, d, and e depict the variability of color within individual neurites in a single slice and through
the imaging plane. Together, they demonstrate that the voxel colors of even a small segment of a
5
Figure 3: Segmentation of a simulated Brainbow image stack. Adjusted Rand index of the foreground
is 0.80. Pseudo-color representation of 4-channel data. Top: maximum intensity projection of the
ground truth. Only the supervoxels that are occupied by a single neuron are shown. Bottom:
maximum intensity projection of the reconstruction. The top-left corners show the whole image
stack. All other panels show the maximum intensity projections of the supervoxels assigned to a
single cluster (inferred neuron).
[Figure 4 panels (plot data omitted): a, adjusted Rand index vs. expression density (ratio of occupied
voxels); b, adjusted Rand index vs. step size σ₁ (per-channel range [0, 1]); c, adjusted Rand index vs.
cluster count (3–12, true count 9); separate curves for 3, 4, and 5 channels.]
Figure 4: Segmentation accuracy of simulated data. a, Expression density (ratio of voxels occupied
by at least one neuron) vs. ARI. b, σ₁ (Algorithm 1) vs. ARI. c, Channel count vs. ARI for a 9-neuron
simulation, where K ∈ [6, 12]. ARI is calculated for the foreground voxels. See Supp. Fig. 7 for
ARI values for all voxels.
neuron's arbor can occupy a significant portion of the dynamic range in color with the state-of-the-art
Brainbow data. Fig. 1c-e show that collaborative denoising removes much of this noise while
preserving the edges, which is crucial for segmentation. Fig. 2b-e and h demonstrate a similar effect
on a larger scale with real and simulated Brainbow images.

Fig. 2 shows the raw and denoised versions of the 1020 × 1020 × 225 image, and a randomly chosen
subset of its supervoxels (one-third). The original set had 6.2 × 10⁴ supervoxels, and the merging
routine decreased this number to 3.9 × 10⁴. The individual supervoxels grew in size while avoiding
mergers with supervoxels of different neurons. This set of supervoxels, together with a (sparse)
spatial connectivity matrix, characterizes the image stack. Similar reductions are obtained for all the
real and simulated datasets.
Fig. 3 shows the segmentation of a simulated 200 × 200 × 100 (physical size: 80 µm × 80 µm × 50 µm) image
patch. (Supp. Fig. 2 shows all three projections, and Supp. Fig. 3 shows the density plot through
the z-axis.) In this particular example, the number of neurons within the image is 9, σ₁ = 0.04,
σ₂ = 0.1, and the simulated tissue is imaged using 4 independent channels. Supp. Fig. 4 shows a
patch from a single slice to visualize the amount of noise. The segmentation has an adjusted Rand
index of 0.80 when calculated for the detected foreground voxels, and 0.73 when calculated for all
voxels. (In some cases, the value based on all voxels is higher.) The ground truth image displays only
those supervoxels all of whose voxels belong to a single neuron. The bottom part of Fig. 3 shows
Figure 5: Segmentation of a Brainbow stack (best viewed digitally). Pseudo-color representation of 4-channel data. The physical size of the stack is 102µm × 102µm × 68µm. The top-left
corner shows the maximum intensity projection of the whole image stack, all other panels show the
maximum intensity projections of the supervoxels assigned to a single cluster (inferred neuron).
that many of these supervoxels are correctly clustered to preserve the connectivity of neuronal arbors.
There are two important mistakes in clusters 4 (merger) and 9 (spurious cluster). These are caused by
aggressive merging of supervoxels (Supp. Fig. 5), and the segmentation quality improves with the
inclusion of an extra imaging channel and more conservative merging (Supp. Fig. 6). We plot the
performance of our method under different conditions in Fig. 4 (and Supp. Fig. 7). We set the noise
standard deviation to σ₁ in the denoiser, and ignored the contribution of σ₂. Increasing the number
of observation channels improves the segmentation performance. The clustering accuracy degrades
gradually with increasing neuron-color noise (σ₁) in the reported range (Fig. 4b). The accuracy does
not seem to degrade when the cluster count is mildly overestimated, while it decays quickly when the
count is underestimated (Fig. 4c).
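The adjusted Rand index used throughout [27] can be computed directly from the contingency table of ground-truth versus inferred labels. A minimal sketch in numpy (the foreground restriction shown in the trailing comment mirrors how the ARI values above are reported; variable names are illustrative):

import numpy as np

def adjusted_rand_index(true_labels, pred_labels):
    # Pairs-counting form of Hubert & Arabie's adjusted Rand index [27].
    choose2 = lambda m: m * (m - 1) / 2.0
    _, t = np.unique(true_labels, return_inverse=True)
    _, p = np.unique(pred_labels, return_inverse=True)
    n_ij = np.zeros((t.max() + 1, p.max() + 1))
    np.add.at(n_ij, (t, p), 1)              # contingency table
    sum_ij = choose2(n_ij).sum()
    sum_a = choose2(n_ij.sum(axis=1)).sum() # row marginals
    sum_b = choose2(n_ij.sum(axis=0)).sum() # column marginals
    expected = sum_a * sum_b / choose2(n_ij.sum())
    return (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)

# Foreground-only scoring, as in Fig. 4 (hypothetical volume arrays):
# mask = true_vol > 0
# ari = adjusted_rand_index(true_vol[mask].ravel(), pred_vol[mask].ravel())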
Fig. 5 displays the segmentation of the 1020 × 1020 × 225 image. While some mistakes can be
spotted by eye, most of the neurites can be identified and simple tracing tools can be used to obtain
final skeletons/segmentations [30, 31]. In particular, the identified clusters exhibit homogeneous
colors and dendritic pieces that either form connected components or miss small pieces that do not
preclude the use of those tracing tools. Some clusters appear empty while a few others seem to
comprise segments from more than one neuron, in line with the simulation image (Fig. 2.4).
Supp. Fig. 8 displays the segmentation of the 4× expanded, 1080 × 1280 × 134 image. While the two
real datasets have different characteristics and voxel sizes, we used essentially the same parameters
for both of them throughout denoising, supervoxelization, merging, and clustering (Supp. Text).
Similar to Fig. 5, many of the processes can be identified easily. On the other hand, Supp. Fig. 8
appears more fragmented, which can be explained by the smaller number of color channels (Fig. 4).
5 Discussion
Tagging individual cells with (pseudo)colors stochastically is an important tool in biological sciences.
The versatility of genetic tools for tagging synapses or cell types and the large field-of-view of light
microscopy positions multispectral labeling as a complementary approach to electron microscopy
based, small-scale, dense reconstructions [3]. However, its use in neuroscience has been limited due
to various sources of variability in expression. Here, we demonstrate that automated segmentation of
neurons in such image stacks is possible. Our approach considers both accuracy and scalability as
design goals.
The basic simulation proposed here (Algo. 1) captures the key aspects of the problem and may
guide the relevant genetics research. Yet, more detailed biophysical simulations represent a valuable
direction for future work. Our simulations suggest that the segmentation accuracy increases significantly with the inclusion of additional color channels, which coincides with ongoing experimental
efforts [9, 10, 11]. We also note that color constancy of individual neurons plays an important role
both in the accuracy of the segmentation (Fig. 4) and the supervoxelized problem size.
While we did not focus on post-processing in this paper, basic algorithms (e.g., reassignment of small,
isolated supervoxels) may improve both the visualization and the segmentation quality. Similarly,
more elaborate formulations of the adjacency relationship between supervoxels can increase the
accuracy. Finally, supervised learning of this relationship (when labeled data is present) is a promising
direction, and our methods can significantly accelerate the generation of training sets.
6 Acknowledgments
The authors thank Suraj Keshri and Min-hwan Oh (Columbia University) for useful conversations.
Funding for this research was provided by ARO MURI W911NF-12-1-0594, DARPA N66001-15-C-4032 (SIMPLEX), and a Google Faculty Research award; in addition, this work was supported
by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior
Business Center (DoI/IBC) contract number D16PC00008. The U.S. Government is authorized to
reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation
thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should
not be interpreted as necessarily representing the official policies or endorsements, either expressed
or implied, of IARPA, DoI/IBC, or the U.S. Government.
References
[1] J Livet et al. Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature, 450(7166):56-62, 2007.
[2] A H Marblestone et al. Rosetta brains: A strategy for molecularly-annotated connectomics. arXiv preprint arXiv:1404.5103, 2014.
[3] Shin-ya Takemura, Arjun Bharioke, Zhiyuan Lu, Aljoscha Nern, Shiv Vitaladevuni, Patricia K Rivlin, William T Katz, Donald J Olbris, Stephen M Plaza, Philip Winston, et al. A visual motion detection circuit suggested by drosophila connectomics. Nature, 500(7461):175-181, 2013.
[4] D Cai et al. Improved tools for the brainbow toolbox. Nature methods, 10(6):540-547, 2013.
[5] F Chen et al. Expansion microscopy. Science, 347(6221):543-548, 2015.
[6] K Chung and K Deisseroth. Clarity for mapping the nervous system. Nat. methods, 10(6):508-513, 2013.
[7] E Betzig et al. Imaging intracellular fluorescent proteins at nanometer resolution. Science, 313(5793):1642-1645, 2006.
[8] M J Rust et al. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (storm). Nature methods, 3(10):793-796, 2006.
[9] A M Zador et al. Sequencing the connectome. PLoS Biol, 10(10):e1001411, 2012.
[10] J H Lee et al. Highly multiplexed subcellular rna sequencing in situ. Science, 343(6177):1360-1363, 2014.
[11] K H Chen et al. Spatially resolved, highly multiplexed rna profiling in single cells. Science, 348(6233):aaa6090, 2015.
[12] M Maggioni et al. Nonlocal transform-domain filter for volumetric data denoising and reconstruction. Image Processing, IEEE Transactions on, 22(1):119-133, 2013.
[13] U Von Luxburg. A tutorial on spectral clustering. Statistics and computing, 17(4):395-416, 2007.
[14] D Lee and H S Seung. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems, pages 556-562, 2001.
[15] F Meyer. Topographic distance and watershed lines. Signal processing, 38(1):113-125, 1994.
[16] F Meyer. Minimum spanning forests for morphological segmentation. In Mathematical morphology and its applications to image processing, pages 77-84. Springer, 1994.
[17] J Cousty et al. Watershed cuts: Minimum spanning forests and the drop of water principle. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(8):1362-1374, 2009.
[18] Xiaofeng Ren and Jitendra Malik. Learning a classification model for segmentation. In Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pages 10-17. IEEE, 2003.
[19] J S Kim et al. Space-time wiring specificity supports direction selectivity in the retina. Nature, 509(7500):331-336, 2014.
[20] Gilles Bertrand and Grégoire Malandain. A new characterization of three-dimensional simple points. Pattern Recognition Letters, 15(2):169-175, 1994.
[21] Viren Jain, Benjamin Bollmann, Mark Richardson, Daniel R Berger, Moritz N Helmstaedter, Kevin L Briggman, Winfried Denk, Jared B Bowden, John M Mendenhall, Wickliffe C Abraham, et al. Boundary learning by optimization with topological constraints. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2488-2495. IEEE, 2010.
[22] J L Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509-517, 1975.
[23] Z Wu and R Leahy. An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 15(11):1101-1113, 1993.
[24] A Y Ng et al. On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2:849-856, 2002.
[25] K Kurihara et al. Collapsed variational dirichlet process mixture models. In IJCAI, volume 7, pages 2796-2801, 2007.
[26] Jeffrey W Miller and Matthew T Harrison. A simple example of dirichlet process mixture inconsistency for the number of components. In Advances in neural information processing systems, pages 199-206, 2013.
[27] Lawrence Hubert and Phipps Arabie. Comparing partitions. Journal of classification, 2(1):193-218, 1985.
[28] U Sümbül et al. A genetic and computational approach to structurally classify neuronal types. Nature communications, 5, 2014.
[29] U Sümbül et al. Automated computation of arbor densities: a step toward identifying neuronal cell types. Frontiers in neuroscience, 2014.
[30] M H Longair et al. Simple neurite tracer: open source software for reconstruction, visualization and analysis of neuronal processes. Bioinformatics, 27(17):2453-2454, 2011.
[31] H Peng et al. V3d enables real-time 3d visualization and quantitative analysis of large-scale biological image data sets. Nature biotechnology, 28(4):348-353, 2010.
6,135 | 655 | A dynamical model of priming and
repetition blindness
Daphne Bavelier
Laboratory of Neuropsychology
The Salk Institute
La J oHa, CA 92037
Michael I. Jordan
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge MA 02139
Abstract
We describe a model of visual word recognition that accounts for
several aspects of the temporal processing of sequences of briefly
presented words. The model utilizes a new representation for written words, based on dynamic time warping and multidimensional
scaling. The visual input passes through cascaded perceptual, comparison, and detection stages. We describe how these dynamical
processes can account for several aspects of word recognition, including repetition priming and repetition blindness.
1 INTRODUCTION
Several psychological phenomena show that the construction of organized and meaningful representations of the visual environment requires establishing separate representations (termed episodic representations) for the different objects viewed. Three
phenomena in the word recognition literature suggest that the segregation of the
visual flow into separate episodic representations can be characterized in terms of
specific temporal constraints. We developed a model to explore the nature of these
constraints.
2 DESCRIPTION OF THE BEHAVIORAL DATA
In a typical priming experiment, subjects are presented with a first word, termed
the "prime," and then asked to name or make a judgment to a second word, termed
the "target." The performance of subjects is compared in conditions in which the
target and prime are related versus conditions in which they are unrelated.
When the prime is presented fast enough so that it cannot be identified (about
40 ms), subjects' performance on the target is facilitated when the prime and the
target are identical compared to the case in which they are unrelated. This effect,
known as "lnasked priming," is very short lasting, appearing only within trials,
and lasting on the order of 100 ms (Humphreys, Evett, Quinlan & Besner, 1987).
If the prime, however, is presented for a period such that it is just identifiable (about
100 ms), subjects' performance on the target is hindered when prime and target
are identical (Kanwisher, 1987; Humphreys et al., 1987). This effect, known as
"repetition blindness," is conditional on the conscious identification of the prime.
The size of the effect decreases as the duration between the two items increases.
Repetition blindness is observed only within trials and vanishes for inter-stimulus
durations on the order of 500 ms.
When the prime is presented long enough to be easily identifiable (about 250 ms or
more), subjects' performance on the target is once again facilitated when prime and
target are identical (Salasoo, Shiffrin & Feustel, 1985). This effect, known as "classical repetition priming," is long lasting, being observed not only within trials,
but between trials and even between sessions. In certain experimental conditions,
it has been observed to last up to a year.
These results implicate two factors influencing word recognition: the time of presentation and whether or not the prime has been identified. We have developed a
model that captures the rather non-intuitive result that as the time of presentation
of the prime increases, recall of the target is first facilitated, then inhibited and then
facilitated again. The two main features of the model are the dynamical properties
of the word representations and the dependence of the detection processes for each
word on previous conscious identification of that word.
3 REPRESENTATION
The representation that we developed for our model is a vector space representation
that allows each word to be represented by a fixed-length vector, even though the
words are of different length. We developed an algorithmic method for finding the
word representations that avoids some of the difficulties with earlier proposals (cf.
Pinker & Prince, 1988).
The algorithm proceeds in three stages. First, dynamic programming (Bellman,
1957) is used to compute an inter-word similarity matrix. The transition costs in
the dynamic programming procedure were based on empirically-determined values
of visual similarity between individual letters (Townsend, 1971). Interestingly, we
found that dynamic programming solutions naturally capture several factors that
are known to be important in human sensitivity to orthographic similarity (for example, orthographic priming increases as a function of the number of letters shared
between the prime and the target in a nonlinear manner, shared end-letters are
more important than shared middle-letters, and relative letter position determines
orthographic similarity (Humphreys et al., 1987)).
After the dynamic programming stage, multidimensional scaling (Torgerson, 1958)
is used to convert the inter-word similarity matrix into a vector space representation
in which distance correlates with similarity.
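A minimal sketch of these two stages; the substitution-cost function here is a stand-in for the empirical letter-similarity values of Townsend (1971), and the gap cost, embedding dimension, and hemisphere radius are illustrative choices, not the paper's settings:

import numpy as np

def word_distance(w1, w2, sub_cost, gap_cost=1.0):
    # Dynamic-programming (edit-distance style) alignment of two words.
    m, n = len(w1), len(w2)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = gap_cost * np.arange(m + 1)
    D[0, :] = gap_cost * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j] + gap_cost,
                          D[i, j - 1] + gap_cost,
                          D[i - 1, j - 1] + sub_cost(w1[i - 1], w2[j - 1]))
    return D[m, n]

def classical_mds(dist, dim):
    # Torgerson scaling: double-center the squared distances, eigendecompose.
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def to_hemisphere(V, radius):
    # Append one coordinate so each word vector has norm `radius` and lies
    # on one side of the hypersphere (radius is chosen to exceed every row
    # norm of V); the origin then naturally represents the "blank".
    extra = np.sqrt(np.maximum(radius ** 2 - (V ** 2).sum(axis=1), 0.0))
    return np.column_stack([V, extra])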
Next, word vectors are normalized by projecting them onto a semi-hypersphere.
This gives the origin of the vector space a meaning, allowing us to use vector
magnitude to represent signal energy.
This representation also yielded natural choices for the "blank" stimulus and the
"mask" stimulus. The "blank" was taken to be the origin of the space and the
"mask" was taken to be a vector on the far side ofthe hypersphere. In the dynamical
model that we describe below, vectors that are far apart have maximally disruptive
effects on each other. A distant stimulus causes the state to move rapidly away
from a particular word vector, thus interfering maximally with its processing.
4 PROCESSING
4.1 FORMALIZATION OF THE PROBLEM AS A SIGNAL DETECTION PROBLEM
We formalize the problem of visual word recognition as a problem of detecting
significant fluctuations of a multidimensional signal embedded in noise. This can
be viewed as a maximum likelihood detection problem in which the onsets and
durations of the signal are not known a priori. Our model has two main levels of
processing: a perceptual stage and a detection stage.
Perceptual Stage
The perceptual stage is a bank of noisy linear filters. Let W_i denote the n-dimensional word vector presented at time t, with components W_{i,k}. The word vector is corrupted with white noise ε[t] to form the input u_k[t]:

u_k[t] = W_{i,k} + ε[t],

and this input is filtered, in the presence of additional white noise η[t]:

r_k[t] = -a_0 r_k[t-1] - a_1 r_k[t-2] + b u_k[t] + η[t].
Detection Stages
The first detection stage in the model is a linear filter whose inverted impulse
response is matched to the impulse response of the perceptual filter:
s_k[t] = -c_0 s_k[t-1] - c_1 s_k[t-2] + d r_k[t].
Such a filter is known as a matched filter, and is known to have optimality properties
that make it an effective preprocessor for a system that utilizes thresholds for making
decisions (van Trees, 1968). The output of the matched filter is projected onto each
of the words in the lexicon to form scalar "word activation" signals x_i[t] that can
be compared to thresholds:

x_i[t] = \sum_{k=1}^{n} W_{i,k}\, s_k[t].
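The three linear stages are a pair of second-order difference equations followed by a projection onto the lexicon; a direct simulation sketch (the filter coefficients and noise level are placeholders, not the fitted parameter values of the model):

import numpy as np

def run_filters(W, stimulus, a0, a1, b, c0, c1, d, noise_std=0.05, seed=0):
    """W: (lexicon, n) word vectors; stimulus: (T, n) presented vector at
    each step (a row of W, the blank at the origin, or the mask vector)."""
    rng = np.random.default_rng(seed)
    T, n = stimulus.shape
    r = np.zeros((T, n))  # perceptual filter output
    s = np.zeros((T, n))  # matched filter output
    for t in range(T):
        u = stimulus[t] + noise_std * rng.standard_normal(n)
        # Negative indices at t = 0, 1 hit rows that are still zero,
        # which acts as zero initial conditions for both filters.
        r[t] = (-a0 * r[t-1] - a1 * r[t-2] + b * u
                + noise_std * rng.standard_normal(n))
        s[t] = -c0 * s[t-1] - c1 * s[t-2] + d * r[t]
    x = s @ W.T  # word activations x_i[t], one column per lexicon entry
    return r, s, x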
[Figure 1 schematic (left): Input → Perceptual Stage (perceptual filter) → Detection Stages (matched filter → word activation → word detection). The panels at the right plot each of these signals against TIME (ms).]
Figure 1: The processing stages of the model. The figures on the right show the
signals in the model projected onto the vector for the word bring. Bring was presented for 100 ms, followed by a 300 ms blank, followed by a second presentation
of bring for 300 ms.
The decision process is a simple binary decision based on a variable baseline µ_i[t]
and a variable threshold θ_i[t]:

y_i[t] = 1 if x_i[t] - µ_i[t] > θ_i[t], and y_i[t] = 0 otherwise.

4.2 DETECTION DYNAMICS
The problem of detecting signals that may overlap in time and have unknown onsets
and unknown durations requires the system to focus on fluctuations rather than the
absolute heights of the activation curves. Moreover, the test for significance of a
fluctuation must be dependent on the state of the detection mechanism and the
state of the filters. Our significance test utilizes two time-varying quantities to
capture this state-dependence: the baseline µ and the threshold θ.
The baseline µ_i[t] varies as follows. On time steps for which the fluctuations are
subthreshold (y_i[t] = 0 for all i), each baseline simply tracks the most recent
minimum value of the corresponding word activation signal:
µ_i[t] = µ_i[t-1] if x_i[t] > µ_i[t-1], and µ_i[t] = x_i[t] otherwise.
When a fluctuation passes threshold (y_i[t] = 1 for some i), the word i is "detected,"
and the baselines of all words are increased:
µ_k[t] = µ_k[t-1] + ξ φ(i, k),

where φ(i, k) is the angle between W_i and W_k and ξ is a positive scaling parameter.
This rule prevents multiple detections during a single presentation and it prevents
the neighbors of a detected word from being detected due to their overlap with the
detected word.
The threshold θ_i is subject to first-order dynamics that serve to increase or decrease the threshold as a function of the recent activation history of the word (a
rudimentary form of adaptation):

θ_i[t] = α θ_i[t-1] + (1 - α) θ^0 - β (x_i[t] - µ_i[t])_+ ,

where α and β are positive numbers and (·)_+ denotes the positive part. This rule has the effect of decreasing the
threshold if the activation of the word is currently above its baseline, and increasing
the threshold toward its nominal value θ^0 otherwise.
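A sketch of the full detection loop, combining the decision rule with the baseline and threshold dynamics above. All constants are placeholders, picking the single strongest suprathreshold word per time step is a simplification of the decision rule, and the angle-based baseline bump follows the text literally:

import numpy as np

def detect(x, W, theta0, alpha, beta, xi):
    """x: (T, K) word activations; W: (K, n) word vectors.
    Returns a list of (time step, word index) detections."""
    T, K = x.shape
    mu = np.zeros(K)               # baselines
    theta = np.full(K, theta0)     # thresholds
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    phi = np.arccos(np.clip(Wn @ Wn.T, -1.0, 1.0))  # pairwise angles
    detections = []
    for t in range(T):
        fired = x[t] - mu > theta
        if fired.any():
            i = int(np.argmax(x[t] - mu - theta))  # strongest detection
            detections.append((t, i))
            mu += xi * phi[i]      # raise all baselines after a detection
        else:
            mu = np.minimum(mu, x[t])  # track the recent minimum
        theta = (alpha * theta + (1 - alpha) * theta0
                 - beta * np.maximum(x[t] - mu, 0.0))
    return detections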
4.3 PARAMETERS
The parameters in the model were determined from the behavioral data and from
the structural assumptions of the model in the following manner. The dynamics of
the perceptual filter were determined by the time constants of masked priming, as
given by the behavioral data. This choice also fixed the dynamics of the matched
filter, since the matched filter was tied to the dynamics of the perceptual filter. The
dynamics of the baseline fl, (i.e., the value 0 were determined by the constraint that
a long presentation of a word not lead to multiple detections of the word. Finally, the
dynamics of the threshold 0 were determined by the dynamics of classical repetition
priming as given by the behavioral data. Note that the behavioral data on repetition
blindness were not used in adjusting the parameters of the model.
5 ACCOUNTS OF THE THREE BASIC PHENOMENA
5.1 MASKED PRIMING
The facilitation observed in masked priming is due to temporal superposition in the
perceptual filter and the matched filter. At the time scale at which masked priming
is observed, the activation due to the first critical word (C1) overlaps with the
activation due to the second critical word (C2) (see Figure 2A), leading to a larger
word activation value when C1 and C2 are identical than when they are different.
5.2 REPETITION BLINDNESS
The temporal superposition that leads to masked priming is also responsible for repetition blindness (see Figure 3). The temporal overlap from the filtering dynamics
[Figure 2, panels 2A and 2B: activation vs. TIME (ms), 0-1000 ms.]
Figure 2: Activation curves at the perceptual level (A) and the matched filter level
(B) for the word bring during the presentation of the sequence bring, character,
bring. Each word was presented for 40 ms.
[Figure 3 plot: activation vs. TIME (ms), 0-1000 ms.]
Figure 3: Activation curves for the word bring during the presentation of the sequence bring, character, bring. Each word was presented for 100 ms.
will prevent the baseline µ from getting reset to a sufficiently small value to allow
a second detection. That is, repetition blindness arises because the fluctuation due
to the brief presentation of C2 is not judged significant against the background of
the recent detection of the word. Note that such a failure to detect the second occurrence will happen only when C1 has been correctly detected, because only then
will the baseline be increased. This dependence of repetition blindness on explicit
detection of the first occurrence also characterizes the behavioral data (Kanwisher,
1987).
5.3 CLASSICAL REPETITION PRIMING
The facilitation observed in classical repetition priming is due to the dynamics of the
threshold θ. The value of θ decreases during significant increases in the activation of
a word; hence a smaller fluctuation in activation is needed for the next occurrence
[Figure 4 plot: activation vs. TIME (ms), 0-1000 ms.]
Figure 4: Activation curves for the word bring during the presentation of the word
bring for 300 ms, followed by a 300 ms blank, followed by bring again for 100 ms.
to be detected (see Figure 4).
6 OTHER DATA ACCOUNTED FOR BY THE MODEL
The model captures most of the specific characteristics of the three basic phenomena
that we have reviewed. For example, it accounts for the finding of masked priming
between orthographic neighbors (Humphreys, et al., 1987). This effect arises in the
model because a distributed representation is used for the words. The model also
captures the finding that the size of repetition blindness decreases as the interval
between the critical stimuli increases (this is due to the fact that the baseline is reset
to increasingly lower values as the inter-stimulus interval increases), as well as the
fact that the size of repetition blindness decreases as the duration of presentation of
C2 increases (because the activation for C2 continues to increase while the baseline
remains fixed). Similarly, the model accounts for the finding that. the manifestation
of repetition blindness is dependent on the conscious identification of the first occurrence, as well the finding of repetition blindness between orthographic neighbors
(Kanwisher, 1987). Specifics of classical repetition priming, such as the finding that
priming is restricted to a word identity, and the fact that its size increases with
the number of repetitions and diminishes as the lag between repetitions increases
(Salasoo, Shiffrin & Feustel, 1985), are also captured by the model.
The model also accounts for other behavioral phenomena described in the literature
on word recognition. Our vector space representation allows us to account naturally
for the fact that the final words in a list are recalled better than the middle words
in the list (the "recency" effect). This occurs because dissimilar words tend to
have large angle between them (and therefore "inhibit" each other dynamically),
whereas the "blank" is at the origin of the space and is relatively "close" to all of the
words. The residual activation for a presented word therefore tends to be stronger
if followed by a blank than by a dissimilar word. The model also captures certain of
the effects of pattern masks on word recognition. For example, "forward" masking,
a condition in which the mask precedes the word to be detected, is known to be
less disruptive than "backward" masking, a condition in which the mask follows
the word to be detected. This occurs in the model because of the dynamics of the
baselines: preceding a word with a mask tends to reset its baseline to lower values
and therefore renders the test for significance relatively more sensitive.
7 CONCLUSIONS
From the point of view of the current model, the fact that the detection of repeated
items is enhanced, then suppressed, then once again enhanced as the duration of
the items is increased finds a natural explanation in the nature of the signal processing task that the word recognition system must solve. The signals that arrive in
temporal sequence for perceptual processing have unknown onset times, unknown
durations, and are corrupted by noise. The fact that signals have unknown onset
times and can superimpose implies that the system must detect fluctuations in signal strength rather than absolute values of signal strength. The presence of noise,
inevitable given the neural hardware and the complex multidimensional nature of
the signal, implies that the system must detect significant fluctuations and must incorporate information about recent events into its significance tests. The real-time
constraints of this detection task and the need to guard against errors imply that
certain of the fluctuations will be missed, a fact that will result in "blindness" to
repeated items at certain time scales.
Acknowledgments
This research was funded by the McDonnell-Pew Centers for Cognitive Neuroscience
at UCSD and MIT, by a grant from the McDonnell-Pew Foundation to Michael I.
Jordan, and by NIDCD Grant 5R01-DC-00128 to Helen Neville.
References
Humphreys, G. W., Evett, L. J., Quinlan, P. T., & Besner, D. (1987). Orthographic
priming. In M. Coltheart (Ed.), Attention and Performance XII (pp. 105-125).
Hillsdale, NJ: Erlbaum.
Kanwisher, N. (1987). Repetition blindness: Type recognition without token individuation. Cognition, 27, 117-143.
Pinker, S. & Prince, A. (1988). On language and connectionism: Analysis of a
parallel distributed processing model of language acquisition. Cognition, 28, 73-193.
Salasoo, A., Shiffrin, R. M., & Feustel, T. C. (1985). Building Permanent Memory Codes: Codification and Repetition Effects in Word Identification. Journal of
Experimental Psychology: General, 114, 50-77.
Torgerson, W. S. (1958). Theory and Methods of Scaling. J. Wiley & Sons: New
York.
Townsend, J. T. (1971). Theoretical analysis of an alphabetic confusion matrix.
Perception and Psychophysics, 9, 40-50. (see also 449-454).
Van Trees, H. L. (1968). Detection, Estimation and Modulation Theory, Part 1. New
York: Wiley.
6,136 | 6,550 | Clustering with Bregman Divergences: an Asymptotic
Analysis
Chaoyue Liu, Mikhail Belkin
Department of Computer Science & Engineering
The Ohio State University
[email protected], [email protected]
Abstract
Clustering, in particular k-means clustering, is a central topic in data analysis.
Clustering with Bregman divergences is a recently proposed generalization of
k-means clustering which has already been widely used in applications. In this
paper we analyze theoretical properties of Bregman clustering when the number
of the clusters k is large. We establish quantization rates and describe the limiting
distribution of the centers as k → ∞, extending well-known results for k-means
clustering.
1
Introduction
Clustering and the closely related problem of vector quantization are fundamental problems in
machine learning and data mining. The aim is to partition similar points into "clusters" in order to
organize or compress the data. In many clustering methods these clusters are represented by their
centers or centroids. The set of these centers is often called ?the codebook" in the vector quantization
literature. In this setting the goal of clustering is to find an optimal codebook, i.e., a set of centers
which minimizes a clustering loss function also known as the quantization error.
There is a vast literature on clustering and vector quantization; see, e.g., [8, 10, 12]. One of the particularly
clustering [16] which aims to minimize the loss function based on the squared Euclidean distance.
This is typically performed using Lloyd's algorithm [15], an iterative optimization
technique. Lloyd's algorithm is simple, easy to implement, and guaranteed to converge in a
finite number of steps. There is an extensive literature on various aspects and properties of k-means
clustering, including applications and theoretical analysis [2, 13, 23]. An important type of analysis is
the asymptotic analysis, which studies the setting when the number of centers is large. This situation
(n ≫ k ≫ 0) arises in many applications related to data compression, as well as in algorithms such as
soft k-means features used in computer vision and other applications, where the number of centers
k is quite large but significantly less than the number of data points n. This situation also arises in
k-means feature-based methods which have seen significant success in computer vision, e.g., [6].
The quantization loss for k-means clustering in the setting k → ∞ is well-known (see [5, 9, 20]). A
less well-known fact shown in [9, 18] is that the discrete set of centers also converges to a measure
closely related to the underlying probability distribution. This fact can be used to reinterpret k-means
feature based methods in terms of a density dependent kernel [21].
More recently, it has been realized that the properties of squared Euclidean distance which make the
Lloyd?s algorithm for k-means clustering so simple and efficient are shared by a class of similarity
measures based on Bregman divergence. In an influential paper [3] the authors introduced clustering
based on Bregman divergences, which generalized k-means clustering to that setting and produced
a corresponding generalized version of Lloyd's algorithm. That work has led to a new line
of research on clustering, including results on multitask Bregman clustering [24], agglomerative Bregman clustering [22], and many others. There has also been some theoretical analysis of Bregman clustering, including [7], which proves the existence of an optimal quantizer and gives convergence results and bounds for the quantization loss in the limit of the data size n → ∞ for fixed k.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this paper we set out to investigate asymptotic properties of Bregman clustering as the number
of centers increases. We provide explicit asymptotic rates for the quantization error of Bregman
clustering as well as the continuous measure which is the limit of the center distribution. Our results
generalize the well-known results for k-means clustering. We believe that these results will be useful
for a better understanding of Bregman-divergence-based clustering algorithms and for algorithm design.
2 Preliminaries and Existing Work
2.1 k-means clustering and its asymptotic analysis
k-means clustering is one of the most popular and well studied clustering problems in data analysis.
Suppose we are given a dataset D = {x_i}_{i=1}^n ⊂ R^d, containing n observations of an R^d-valued
random variable X. k-means clustering aims to find a set of points (centroids) Γ = {a_j}_{j=1}^k ⊂ R^d,
with |Γ| = k initially fixed, that minimizes the squared Euclidean loss function

L(\Gamma) = \frac{1}{n} \sum_{j} \min_{a \in \Gamma} \| x_j - a \|_2^2 .    (1)
Finding the global minimum of the loss function is an NP-hard problem [1, 17]. However, Lloyd's
algorithm [15] is a simple and elegant method to obtain a locally optimal clustering of the data,
corresponding to a local minimum of the loss function. A key reason for the practical utility of
Lloyd's k-means algorithm is the following property of squared Euclidean distance: the arithmetic
mean of a set of points is the unique minimizer of the loss for a single center:

\frac{1}{n} \sum_{i=1}^{n} x_i = \arg\min_{s \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^{n} \| x_i - s \|_2^2 .    (2)
It turns out that this property holds in far greater generality as we will discuss below.
Asymptotic analysis of Euclidean quantization:
In an asymptotic quantization problem, we focus on the limiting case of k → ∞, where the size
of the dataset n ≫ k. In this paper we will assume n = ∞, i.e., that the probability distribution with
density P is given. This setting is in line with the analysis in [9].

Correspondingly, a probability measure ℙ is defined as follows: for a set A ⊂ R^d, ℙ(A) = \int_A P \, d\lambda^d,
where λ^d is the Lebesgue measure on R^d. Equivalently, P = dℙ/dλ^d.

The classical asymptotic results for Euclidean quantization are provided in a more general setting
than Eq.(1), for an arbitrary power of the distance: Euclidean quantization of order r (1 ≤ r < ∞), with
loss

L(\Gamma) = \mathbb{E}_P \min_{a \in \Gamma} \| X - a \|_2^r .    (3)

Note that Lloyd's algorithm is only applicable to the standard case with r = 2.
The output of the k-means algorithm includes the locations of the centroids, which then imply the partition
and the corresponding loss. For large k we are interested in: (1) the asymptotic quantization error,
and (2) the distribution of centroids.
Asymptotic quantization error. The asymptotic quantization error for k-means clustering has
been analyzed in detail in [5, 14, 20]. S. Graf and H. Luschgy [9] show that as k → ∞, the r-th
quantization error decreases at a rate of k^{-r/d}. Furthermore, the coefficient of the term k^{-r/d} is of the
form

Q_r(P) = Q_r([0,1]^d)\, \| P \|_{d/(d+r)} ,    (4)

where Q_r([0,1]^d), a constant independent of P, is geometrically interpreted as the asymptotic Euclidean
quantization error for the uniform distribution on the d-dimensional unit cube [0,1]^d. Here ‖·‖_{d/(d+r)} is
the L_{d/(d+r)} norm of a function: \| f \|_{d/(d+r)} = \big( \int f^{d/(d+r)} \, d\lambda^d \big)^{(d+r)/d}.
Locational distribution of centroids. A less well-known fact is that the locations of the optimal
centroid configuration of k-means converge to a limit distribution closely related to the underlying
density [9, 18]. Specifically, given a discrete set of centroids Γ_k, construct the corresponding
discrete measure

\mathbb{P}_k = \frac{1}{k} \sum_{j=1}^{k} \delta_{a_j} ,    (5)

where δ_a is the Dirac measure centered at a. For an open set A ⊂ R^d, ℙ_k(A) is the ratio of the number of
centroids k_A located within A to the total number of centroids k, namely ℙ_k(A) = k_A/k. We say
that a continuous measure ℙ̃ is the limit distribution of centroids if {ℙ_k} converges weakly to ℙ̃,
specifically

\forall A \subset \mathbb{R}^d, \quad \lim_{k \to \infty} \mathbb{P}_k(A) = \tilde{\mathbb{P}}(A) .    (6)

S. Graf and H. Luschgy [9] gave an explicit expression for this continuous limit distribution of
centroids:

\tilde{\mathbb{P}}_r = \tilde{P}_r \, \lambda^d , \qquad \tilde{P}_r = N \cdot P^{d/(d+r)} ,    (7)

where λ^d is the Lebesgue measure on R^d, P is the density of the probability distribution, and N is
the normalization constant that makes \tilde{P}_r integrate to 1.
2.2 Bregman divergences and Bregman Clustering
In this section we briefly review basics of Bregman divergences and the Bregman clustering algorithm.
Bregman divergences, first proposed in 1967 by L. M. Bregman [4], measure dissimilarity between two
points in a space. The formal definition is as follows:
Definition 1 (Bregman Divergence). Let φ be a strictly convex function on a convex set Ω ⊂ R^d, such
that φ is differentiable on the relative interior of Ω. We define the Bregman divergence D_φ : Ω × Ω → R
with respect to φ as

D_\phi(p, q) = \phi(p) - \phi(q) - \langle p - q, \nabla\phi(q) \rangle ,    (8)

where ⟨·,·⟩ is the inner product in R^d. Ω is the domain of the Bregman divergence.
Note that Bregman divergences are not necessarily true metrics. In general, they do satisfy the basic
properties of non-negativity and identity of indiscernibles, but may not respect the triangle inequality
and symmetry.
Examples: Some popular examples of Bregman divergences include:
Squared Euclidean distance: D_{EU}(p, q) = \|p - q\|_2^2,  (\phi_{EU}(z) = \|z\|^2)
Mahalanobis distance: D_{Mh}(p, q) = (p - q)^T A (p - q), A \in \mathbb{R}^{d \times d},  (\phi_{Mh}(z) = z^T A z)
Kullback-Leibler divergence: KL(p\|q) = \sum_i p_i \ln \frac{p_i}{q_i} - \sum_i (p_i - q_i),  (\phi_{KL}(z) = \sum_i z_i \ln z_i - z_i, \ z_i > 0)
Itakura-Saito divergence: D_{IS}(p\|q) = \sum_i \big( \frac{p_i}{q_i} - \ln \frac{p_i}{q_i} - 1 \big),  (\phi_{IS}(z) = -\sum_i \ln z_i, \ z_i > 0)
Norm-like divergence: D_{NL}(p\|q) = \sum_i \big( p_i^\alpha + (\alpha - 1) q_i^\alpha - \alpha p_i q_i^{\alpha - 1} \big),  (\phi_{NL}(z) = \sum_i z_i^\alpha, \ z_i > 0, \ \alpha \ge 2)    (9)

Domains of Bregman divergences: Ω_{EU} = Ω_{Mh} = R^d, and Ω_{KL} = Ω_{IS} = Ω_{NL} = R^d_+.
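The definition translates directly into code once φ and its gradient are supplied. A small sketch checking two of the examples above (the closed-form gradients are standard calculus, and the test vectors are arbitrary):

import numpy as np

def bregman(phi, grad_phi, p, q):
    # D_phi(p, q) = phi(p) - phi(q) - <p - q, grad phi(q)>
    return phi(p) - phi(q) - np.dot(p - q, grad_phi(q))

phi_eu  = lambda z: np.dot(z, z)
grad_eu = lambda z: 2.0 * z

phi_kl  = lambda z: np.sum(z * np.log(z) - z)   # requires z > 0
grad_kl = lambda z: np.log(z)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
print(bregman(phi_eu, grad_eu, p, q))  # equals ||p - q||^2
print(bregman(phi_kl, grad_kl, p, q))  # equals sum p log(p/q) - sum(p - q)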
Alternative expression: the quadratic form. Suppose that φ ∈ C²(Ω), which holds for the most
popularly used Bregman divergences. Note that φ(q) + ⟨p - q, ∇φ(q)⟩ is simply the first two terms
of the Taylor expansion of φ at q. Thus, Bregman divergences are nothing but the difference between a
function and its linear approximation. By Lagrange's form of the remainder term, there exists ξ with
ξ_i ∈ [min(p_i, q_i), max(p_i, q_i)] (i.e., ξ is in the smallest d-dimensional axis-parallel cube that contains
p and q) such that

D_\phi(p, q) = \frac{1}{2} (p - q)^T \nabla^2 \phi(\xi) (p - q) ,    (10)
where ∇²φ(ξ) denotes the Hessian matrix of φ at ξ.
This form is more compact and will be convenient for further analysis, at the expense of
introducing an unknown point ξ. We will use this form in later discussions.
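In one dimension Eq.(10) can be checked numerically: for φ_KL(z) = z ln z - z we have φ''(ξ) = 1/ξ, so solving D_φ(p, q) = (p - q)²/(2ξ) for ξ must produce a point between p and q. A quick check (the particular values of p and q are arbitrary):

import numpy as np

def d_kl_1d(p, q):
    return p * np.log(p / q) - (p - q)

p, q = 0.9, 0.3
D = d_kl_1d(p, q)
xi = (p - q) ** 2 / (2.0 * D)  # solve D = 0.5 * (1/xi) * (p - q)^2
print(xi, min(p, q) <= xi <= max(p, q))  # prints ~0.463 and True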
The mean as the minimizer. As shown in A. Banerjee et al. [3], the property Eq.(2) still holds if
squared Euclidean distance is substituted by a general Bregman divergence:
\frac{1}{n} \sum_{i=1}^{n} x_i = \arg\min_{s \in \Omega} \sum_{i=1}^{n} D_\phi(x_i, s) .    (11)

That allows Lloyd's method to be generalized to arbitrary Bregman clustering problems, where
the new loss function is defined as

L(\Gamma) = \frac{1}{n} \sum_{i} \min_{a \in \Gamma} D_\phi(x_i, a) .    (12)

This modified version of k-means, called the Bregman hard clustering algorithm (see Algorithm 1 in [3]),
yields a locally optimal quantization as well.
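A minimal sketch of the generalized Lloyd iteration of [3]: points are assigned to the centroid minimizing D_φ, and every centroid update is the plain arithmetic mean of its cluster, exactly as guaranteed by Eq.(11). The random initialization and fixed iteration count are our choices, not part of the algorithm:

import numpy as np

def bregman_hard_clustering(X, k, div, n_iter=100, seed=0):
    """X: (n, d) data; div(X, a) -> (n,) divergences to one center a."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest center under the Bregman divergence.
        dists = np.stack([div(X, a) for a in centers], axis=1)  # (n, k)
        labels = dists.argmin(axis=1)
        # Update step: arithmetic mean, regardless of the divergence used.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Example divergence: Itakura-Saito on positive data.
def itakura_saito(X, a):
    r = X / a
    return np.sum(r - np.log(r) - 1.0, axis=1)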
3 Asymptotic Analysis of Bregman Quantization
We do not distinguish between the terms Bregman quantization and Bregman clustering. In this
section, we analyze the asymptotics of Bregman quantization, allowing a power of Bregman divergences in the loss function. We show expressions for the quantization error and the limiting distribution
of centers.
We start with the following:
Definition 2 (k-th quantization error for P of order r). Suppose a variable X takes values on Ω ⊂ R^d
following a density P, where Ω is the d-dimensional domain of the Bregman divergence D_φ. The k-th
quantization error for P of order r (1/2 ≤ r < ∞) associated with D_φ is defined as

V_{k,r,\phi}(P) = \inf_{\Gamma \subset \mathbb{R}^d, |\Gamma| = k} \mathbb{E}_P \Big[ \min_{a \in \Gamma} D_\phi(X, a)^r \Big] ,    (13)

where Γ ⊂ R^d is a set of representatives of the clusters, corresponding to a certain partition, or quantization, of R^d or of the support of P, and E_P[·] denotes expectation over P.
Remark: (a) The set Γ* that attains the infimum is called the k-optimal set of centers for P of order
r with respect to D_φ(X, a)^r. (b) In this setting, Bregman quantization of order r corresponds to
Euclidean quantization of order 2r, because of Eq.(10).
3.1 Asymptotic Bregman quantization error
We are interested in the asymptotic case, where k → ∞.
First note that the quantization error asymptotically approaches zero, since every point x in the support
of the distribution is arbitrarily close to a centroid with respect to the Bregman divergence when k is
large enough.
Intuition on the convergence rate. We start by providing an informal intuition for the convergence
rate. Assume P has a compact support with finite volume. Suppose each cluster is a (Bregman)
Voronoi cell with typical size ε. Since the total volume of the support does not change, the volume of one
cell should be inversely proportional to the number of clusters, ε^d ∝ 1/k. On the other hand, because
of Eq.(10), the Bregman divergence between two points in one cell is of the order of the square of the cell size,
D_φ(X, a) ∼ ε². That implies

V_{k,r,\phi}(P) \sim k^{-2r/d} \quad \text{asymptotically.}    (14)

We will now focus on making this intuition precise and on deriving an expression for the coefficient of
the leading term k^{-2r/d} in the quantization error. For now we will keep the assumption that P has
compact support, and remove it later on. We only describe the method and display important results
in the following. Please see detailed proofs of these results in the Appendix.
We first mention a few useful facts:
Lemma 1. In the limit k → ∞, each interior point x in the support of P is assigned to an
arbitrarily close centroid in the optimal Bregman quantization setting.
Lemma 2. If the support of P is convex, φ is strictly convex on the support, and ∇²φ is uniformly
continuous on the support, then (a): lim_{k→∞} k^{2r/d} V_{k,r,φ}(P) exists in (0, ∞), denoted Q_{r,φ}(P),
and (b):

Q_{r,\phi}(P) = \lim_{k \to \infty} k^{2r/d} \inf_{\Gamma (|\Gamma| = k)} \mathbb{E}_P \min_{a \in \Gamma} \Big[ \frac{1}{2} (X - a)^T \nabla^2 \phi(a) (X - a) \Big]^r .    (15)
Remark: 1. Since Q_{r,φ}(P) is finite, part (a) of Lemma 2 proves our intuition on the convergence rate,
Eq.(14). 2. In Eq.(15), it does not matter whether ∇²φ takes values at a, at x, or even at any point between
x and a, as long as ∇²φ is finite at that point.
Coefficient of the Bregman quantization error. We evaluate the coefficient of the quantization error
Q_{r,φ}(P) based on Eq.(15). What makes this analysis challenging is that, unlike the Euclidean
quantization error, the general Bregman error does not satisfy translational invariance and scaling properties.
For example, Lemma 3.2 in [9] does not hold for a general Bregman divergence. We follow the
following approach: first, dice the support of P into infinitesimal cubes {A_l} with edges parallel
to the axes, where l indexes the cells. In each cell, approximate the Hessian by a constant matrix
∇²φ(z_l), where z_l is a fixed point located in the cell. Evaluating the Bregman quantization
error within each cell then reduces to a Euclidean (Mahalanobis) quantization problem, with the existing result Eq.(4).
Summing up appropriately over the cubes gives the total quantization error.
We start from Eq.(15) and introduce the following notation: denote s_l = ℙ(A_l), write P(·|A_l) for the
conditional density on A_l, let Γ_l = Γ ∩ A_l be the set of centroids located in A_l, k_l = |Γ_l| its size,
and v_l = k_l/k the corresponding ratio. Following the above intuition and noting that P = \sum_l ℙ(A_l) P(·|A_l),
Q_{r,φ}(P) is approximated by

Q_{r,\phi}(P, \{v_l\}) \approx \sum_{l} s_l\, v_l^{-2r/d}\, Q_{r,Mh,l}\big(P(\cdot|A_l)\big) ,    (16)

Q_{r,Mh,l}\big(P(\cdot|A_l)\big) = \lim_{k_l \to \infty} k_l^{2r/d} \inf_{\Gamma_l (|\Gamma_l| = k_l)} \mathbb{E}_{P(\cdot|A_l)} \min_{a \in \Gamma_l} \Big[ \frac{1}{2} (X - a)^T \nabla^2 \phi(z_l) (X - a) \Big]^r ,    (17)

where Q_{r,Mh,l}(P(·|A_l)) is the coefficient of the asymptotic Mahalanobis quantization error with Mahalanobis matrix ∇²φ(z_l), evaluated on A_l with density P(·|A_l). It can be shown that the approximation
error of Q_{r,φ}(P) converges to zero in the limit of k → ∞ followed by the cell size tending to zero.
In each cell A_l, P(·|A_l) is further approximated by the uniform density U(A_l) = 1/V_l, and the Hessian
∇²φ(z_l), being constant, is absorbed by performing a coordinate transformation. Then Q_{r,Mh,l}(U(A_l))
reduces to a squared-Euclidean quantization error. By applying Eq.(4), we show that

Q_{r,Mh,l}(U(A_l)) = \frac{1}{2^r} Q_{2r}([0,1]^d)\, \Delta^{2r}\, [\det \nabla^2 \phi(z_l)]^{r/d} ,    (18)

where Δ is the edge length of the cube and Q_{2r}([0,1]^d) is again the constant in Eq.(4).
Combining Eq.(17) and Eq.(18), Q_{r,φ}(P) is approximated by

Q_{r,\phi}(P, \{v_l\}) \approx \frac{1}{2^r} Q_{2r}([0,1]^d)\, \Delta^{2r} \sum_{l} s_l\, v_l^{-2r/d}\, [\det \nabla^2 \phi(z_l)]^{r/d} .    (19)
The proportion of centroids v_l within A_l is still undetermined. The following lemma provides an
optimal configuration of {v_l} that minimizes Q_{r,φ}(P, {v_l}):

Lemma 3. Let B = {(v_1, …, v_L) ∈ (0, ∞)^L : Σ_{l=1}^L v_l = 1}, and define

    v_l* = s_l^{d/(d+2r)} [det ∇²φ(z_l)]^{r/(d+2r)} / ( Σ_l s_l^{d/(d+2r)} [det ∇²φ(z_l)]^{r/(d+2r)} );   (20)

then for the function

    F(v_1, …, v_L) = Σ_{l=1}^L s_l v_l^{−2r/d} [det ∇²φ(z_l)]^{r/d},   (21)

we have

    F(v_1*, …, v_L*) = min_{(v_1,…,v_L)∈B} F(v_1, …, v_L) = ( Σ_l s_l^{d/(d+2r)} [det ∇²φ(z_l)]^{r/(d+2r)} )^{(d+2r)/d}.   (22)
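To make Lemma 3 concrete, the following sketch (ours, not from the paper; the cell masses and
Hessian determinants are randomly generated purely for illustration) computes the optimal
proportions v_l* of Eq. (20) and checks numerically that no other feasible configuration attains
a smaller value of F in Eq. (21):

    import numpy as np

    def optimal_proportions(s, det_hess, d, r):
        # v_l^* of Eq. (20): s_l^{d/(d+2r)} [det Hess_l]^{r/(d+2r)}, normalized.
        w = s ** (d / (d + 2 * r)) * det_hess ** (r / (d + 2 * r))
        return w / w.sum()

    def F(v, s, det_hess, d, r):
        # Objective of Eq. (21).
        return np.sum(s * v ** (-2 * r / d) * det_hess ** (r / d))

    rng = np.random.default_rng(0)
    d, r, L = 2, 1, 5
    s = rng.dirichlet(np.ones(L))         # cell masses s_l = P(A_l), summing to 1
    det_hess = rng.uniform(0.5, 2.0, L)   # det of the Hessian of phi at z_l

    v_star = optimal_proportions(s, det_hess, d, r)
    f_star = F(v_star, s, det_hess, d, r)
    for _ in range(1000):                 # random feasible points never beat v*
        assert F(rng.dirichlet(np.ones(L)), s, det_hess, d, r) >= f_star - 1e-12
    # f_star equals the closed form (22):
    # (sum_l s_l^{d/(d+2r)} det_l^{r/(d+2r)})^{(d+2r)/d}.
    print(v_star, f_star)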
Lemma 3 finds the optimal configuration of {v_l} in Eq. (19). Recalling that the quantization error is
defined as the infimum over all possible configurations, we obtain our main result:

Theorem 1. Suppose E‖X‖^{2r+ε} < ∞ for some ε > 0 and ∇²φ(·) is uniformly continuous on the
support of P. Then

    Q_{r,φ}(P) = (1/2^r) Q_{2r}([0, 1]^d) ‖ (det ∇²φ)^{r/d} P ‖_{d/(d+2r)}.   (23)
Remark: 1. In the Euclidean quantization case, where φ(z) = ‖z‖², Eq. (23) reduces to Eq. (4),
noting that ∇²φ = 2I. Bregman quantization, which is more general than Euclidean quantization,
thus has a result quite similar to Eq. (4), differing only by a det ∇²φ-related term.
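As a quick empirical check of Theorem 1 in the Euclidean case (our sketch; the Lloyd solver, sample
size, and range of k are arbitrary choices), one can quantize a 1-d uniform sample for growing k:
the fitted slope of log error against log k should approach −2r/d = −2, and k² times the error should
approach Q₂([0, 1]) = 1/12.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 1.0, 50_000)   # P = U([0, 1]), so d = 1, r = 1

    def quant_error(x, k, iters=50):
        # Lloyd's k-means estimate of V_{k,1}(P) for squared Euclidean distance.
        centers = np.quantile(x, (np.arange(k) + 0.5) / k)
        for _ in range(iters):
            d2 = (x[:, None] - centers[None, :]) ** 2
            idx = d2.argmin(axis=1)
            centers = np.array([x[idx == j].mean() if np.any(idx == j)
                                else centers[j] for j in range(k)])
        d2 = (x[:, None] - centers[None, :]) ** 2
        return d2.min(axis=1).mean()

    ks = np.array([4, 8, 16, 32, 64])
    errs = np.array([quant_error(x, k) for k in ks])
    print("slope:", np.polyfit(np.log(ks), np.log(errs), 1)[0])  # about -2
    print("k^2 * error:", errs * ks ** 2)                        # about 1/12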
3.2 The Limit Distribution of Centroids
Similar to Euclidean clustering, Bregman clustering also outputs k discrete cluster centroids, which
can be interpreted as a discrete measure. Below we show that in the limit this discrete measure
coincides with a continuous measure defined in terms of the probability density P.
Define P_{r,φ} to be the integrand function in Eq. (23) (with a normalization factor N):

    P_{r,φ} = N · (det ∇²φ)^{r/(d+2r)} P^{d/(d+2r)}.   (24)
The following theorem claims that P_{r,φ} is exactly the continuous distribution we are looking for:

Theorem 2. Suppose P is absolutely continuous with respect to the Lebesgue measure λ_d. Let Γ_k be
an asymptotically k-optimal set of centers for P of order r based on D_φ. Define the measure
𝒫_{r,φ} := P_{r,φ} λ_d; then

    (1/k) Σ_{a∈Γ_k} δ_a → 𝒫_{r,φ} (weakly).   (25)

Remark: As before, 𝒫_{r,φ} is the measure while P_{r,φ} is the corresponding density function. The proof
of the theorem can be found in the appendix.
Example 1: Clustering with Squared Euclidean distance (Graf and Luschgy [9]). The squared
Euclidean distance is an instance of Bregman divergence, with φ(z) = Σᵢ zᵢ². Graf and Luschgy proved
that the asymptotic centroid distribution satisfies

    P_{r,EU}(z) ∝ P^{d/(d+2r)}(z).   (26)
Example 2: Clustering with Mahalanobis distance. The Mahalanobis distance is another instance of
Bregman divergence, with φ(z) = zᵀAz for a symmetric positive definite A ∈ R^{d×d}, so the Hessian
matrix ∇²φ = 2A is constant. Since a constant det ∇²φ factor is absorbed by the normalization in
Eq. (24), the asymptotic centroid distribution is the same as that of the squared Euclidean distance:

    P_{r,Mh}(z) ∝ P^{d/(d+2r)}(z).   (27)
Example 3: Clustering with Kullback-Leibler divergence. The convex function used to define the
Kullback-Leibler divergence is the negative Shannon entropy defined on the domain Ω ⊆ R₊^d,

    φ_KL(z) = Σᵢ (zᵢ ln zᵢ − zᵢ),   (28)

with component index i. The Hessian matrix is then

    ∇²φ_KL(z) = diag(1/z₁, 1/z₂, …, 1/z_d).   (29)

According to Eq. (24), the centroid density distribution function is

    P_{r,KL}(z) ∝ P^{d/(d+2r)}(z) (Πᵢ zᵢ)^{−r/(d+2r)}.   (30)
Example 4: Clustering with Itakura-Saito divergence. The Itakura-Saito divergence uses the Burg
entropy as φ,

    φ_IS(z) = −Σᵢ ln zᵢ,  z ∈ R₊^d,   (31)

with component index i. The Hessian matrix is then

    ∇²φ_IS(z) = diag(1/z₁², 1/z₂², …, 1/z_d²).   (32)

According to Eq. (24), the centroid density distribution function is

    P_{r,IS}(z) ∝ P^{d/(d+2r)}(z) (Πᵢ zᵢ²)^{−r/(d+2r)}.   (33)
Example 5: Clustering with Norm-like divergence. Consider the convex function φ(z) = Σᵢ zᵢ^α,
z ∈ R₊^d, with power α ≥ 2. A simple calculation shows that the divergence reduces to the squared
Euclidean distance when α = 2. However, the divergence is no longer Euclidean-like as soon as α > 2:

    D_NL(p, q) = Σᵢ ( pᵢ^α + (α − 1) qᵢ^α − α pᵢ qᵢ^{α−1} ).   (34)

With some calculation, we have

    P_{r,NL}(z) ∝ P^{d/(d+2r)}(z) (Πᵢ zᵢ)^{(α−2)r/(d+2r)}.   (35)
Remark: It is easy to see that Kullback-Leibler and Itakura-Saito quantization tend to move centroids
closer to the axes, while norm-like quantization with α > 2 does the opposite, moving centroids away
from the axes.
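All of the divergences in Examples 1-5 come from the single formula D_φ(p, q) = φ(p) − φ(q) −
⟨∇φ(q), p − q⟩. The sketch below (ours; a direct transcription of this definition, instantiated with
the convex functions used above) makes this concrete:

    import numpy as np

    def bregman(phi, grad_phi, p, q):
        # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
        return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

    # (phi, grad phi) pairs for the example divergences.
    sq_euclid = (lambda z: np.sum(z ** 2),            lambda z: 2 * z)
    kl        = (lambda z: np.sum(z * np.log(z) - z), lambda z: np.log(z))
    itakura   = (lambda z: -np.sum(np.log(z)),        lambda z: -1.0 / z)
    norm_like = (lambda z: np.sum(z ** 3),            lambda z: 3 * z ** 2)  # alpha = 3

    p = np.array([0.2, 0.5, 0.3])
    q = np.array([0.3, 0.4, 0.3])
    for name, (phi, g) in [("EU", sq_euclid), ("KL", kl),
                           ("IS", itakura), ("NL", norm_like)]:
        print(name, bregman(phi, g, p, q))
    # KL reproduces sum_i p_i ln(p_i/q_i) - p_i + q_i (generalized KL);
    # IS reproduces sum_i (p_i/q_i - ln(p_i/q_i) - 1); NL reproduces Eq. (34).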
4 Experiments

In this section, we verify our results, especially the centroid location distribution Eq. (24), using
the Bregman hard clustering algorithm.
Recall that our results are obtained in a limiting case, where we first take the dataset size n → ∞
and then the number of clusters k → ∞. However, the size of real data is finite, and it is also not
practical to apply Bregman clustering algorithms in the asymptotic regime. In this section, we simply
sample data points from a given distribution, with the dataset size large enough compared to k to
avoid early stopping of Bregman clustering. In addition, we only verify the r = 1 cases here, since
the Bregman clustering algorithm, which utilizes Lloyd's method, cannot address Bregman
quantization problems with r ≠ 1.

[Figure 1: first row, the predicted centroid distribution functions of Eqs. (36)-(38), including the
normalized curves (2/3)x^{−1/3} and (4/3)x^{1/3}; second row, experimental histograms of centroid
locations obtained by applying the corresponding Bregman hard clustering algorithms. Panels, left
to right: squared Euclidean, Kullback-Leibler, norm-like (α = 3).]
Case 1 (1-dimensional): Suppose the density P is uniform over [0, 1]. We set the number of clusters
to k = 81 and apply different versions of the Bregman hard clustering algorithm to this sample:
standard k-means, Kullback-Leibler clustering, and norm-like clustering. According to Eq. (27),
Eq. (33) and Eq. (35), the theoretical predictions of the centroid location distribution functions in
this case are:

    P_{1,EU}(z) = 1,        z ∈ [0, 1];   (36)
    P_{1,KL}(z) ∝ z^{−1/3}, z ∈ (0, 1];   (37)
    P_{1,NL}(z) ∝ z^{1/3},  z ∈ [0, 1];   (38)

and P(z) = 0 elsewhere.
Figure 1 shows, in the first row, the theoretical predictions of the centroid distributions and, in the
second row, experimental histograms of centroid locations for the different Bregman quantization
problems.
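For reference, a minimal version of the Bregman hard clustering used in Case 1 (our sketch,
following the Lloyd-style scheme of Banerjee et al. [3]: assign each point to the centroid minimizing
D_φ(x, a), then reset each centroid to its cluster mean, which is the exact Bregman centroid) can be
written as follows. Sorting the resulting KL centroids and raising them to the power 2/3 should
produce a nearly uniform grid, which is the CDF form of the prediction (37):

    import numpy as np

    def kl_div(x, a):
        # Scalar generalized KL divergence D_KL(x, a).
        return x * np.log(x / a) - x + a

    def bregman_hard_cluster(x, k, div, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = np.sort(rng.choice(x, size=k, replace=False))
        for _ in range(iters):
            idx = div(x[:, None], centers[None, :]).argmin(axis=1)
            for j in range(k):
                if np.any(idx == j):       # Bregman centroid = cluster mean
                    centers[j] = x[idx == j].mean()
        return np.sort(centers)

    x = np.random.default_rng(0).uniform(0.0, 1.0, 50_000)  # P = U([0, 1])
    centers = bregman_hard_cluster(x, k=81, div=kl_div)
    # Prediction (37): density z^{-1/3}, hence CDF z^{2/3}; compare with the
    # empirical CDF positions (j - 0.5)/81 of the sorted centroids.
    print(np.max(np.abs(centers ** (2 / 3) - (np.arange(81) + 0.5) / 81)))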
Case 2 (2-dimensional): The density is P = U([0, 1]²). We set k = 81 and apply the same three
Bregman clustering algorithms as in Case 1. The theoretical predictions of the centroid distributions
for this case, by Eq. (27), Eq. (33) and Eq. (35), are as follows (also shown in Figure 2):

    P_{1,EU}(z) = 1,             z = (z₁, z₂) ∈ [0, 1]²;   (39)
    P_{1,KL}(z) ∝ (z₁z₂)^{−1/4}, z = (z₁, z₂) ∈ (0, 1]²;   (40)
    P_{1,NL}(z) ∝ (z₁z₂)^{1/4},  z = (z₁, z₂) ∈ [0, 1]²;   (41)

and P(z) = 0 elsewhere.

Figure 2, in the first row, shows a visualization of the centroid locations generated by the
experiments. For comparison, the second row of Figure 2 presents 3-d plots of the theoretical
predictions of the centroid distributions. In each of the 3-d plots, the function is plotted over the
cube [0, 1]², with the leftmost corner corresponding to the point (0, 0).

[Figure 2: experimental results and theoretical predictions of the centroid distributions for Case 2.
Panels, left to right: squared Euclidean, Kullback-Leibler, norm-like (α = 3). In each 3-d plot, the
function is plotted over the cube [0, 1]², with the leftmost corner corresponding to (0, 0) and the
rightmost corner corresponding to (1, 1).]

It is easy to see that squared Euclidean quantization, in this case, yields a uniform distribution of
centroids, that Kullback-Leibler quantization tends to attract centroids towards the axes, and that
norm-like quantization tends to repel centroids away from the axes.
5 Conclusion

In this paper, we analyzed asymptotic Bregman quantization problems for general Bregman
divergences. We obtained explicit expressions for both the leading order of the asymptotic
quantization error and the limiting distribution of centroid locations, both of which extend the
classical results for k-means quantization. We showed how our results apply to commonly used
Bregman divergences and gave some experimental verification. We hope these results will provide
guidance and insight for further theoretical analysis of Bregman clustering, such as Bregman soft
clustering and other related methods [3, 11], as well as for practical algorithm design and
applications. Our results can also lead to a better understanding of the existing seeding strategies
for Bregman clustering [19] and to new seeding methods.
Acknowledgement

We thank the National Science Foundation for financial support and Brian Kulis for discussions.
References
[1] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering. Machine Learning, 75(2):245–248, 2009.
[2] K. Alsabti, S. Ranka, and V. Singh. An efficient k-means clustering algorithm. IPPS/SPDP Workshop on High Performance Data Mining, 1998.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. The Journal of Machine Learning Research, 6:1705–1749, 2005.
[4] L. M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967.
[5] J. A. Bucklew and G. L. Wise. Multidimensional asymptotic quantization theory with r-th power distortion measures. Information Theory, IEEE Transactions on, 28(2):239–247, 1982.
[6] A. Coates and A. Y. Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pages 561–580. Springer, 2012.
[7] A. Fischer. Quantization and clustering with Bregman divergences. Journal of Multivariate Analysis, 101(9):2207–2221, 2010.
[8] A. Gersho and R. M. Gray. Vector Quantization and Signal Compression, volume 159. Springer Science & Business Media, 2012.
[9] S. Graf and H. Luschgy. Foundations of Quantization for Probability Distributions. Springer, 2000.
[10] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys (CSUR), 31(3):264–323, 1999.
[11] K. Jiang, B. Kulis, and M. I. Jordan. Small-variance asymptotics for exponential family Dirichlet process mixture models. In Advances in Neural Information Processing Systems, pages 3158–3166, 2012.
[12] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis, volume 344. John Wiley & Sons, 2009.
[13] K. Krishna and M. N. Murty. Genetic k-means algorithm. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 29(3):433–439, 1999.
[14] T. Linder. On asymptotically optimal companding quantization. Problems of Control and Information Theory, 20(6):475–484, 1991.
[15] S. P. Lloyd. Least squares quantization in PCM. Information Theory, IEEE Transactions on, 28(2):129–137, 1982.
[16] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. Oakland, CA, USA, 1967.
[17] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. In WALCOM: Algorithms and Computation, pages 274–285. Springer, 2009.
[18] D. E. McClure. Nonlinear segmented function approximation and analysis of line patterns. Quarterly of Applied Mathematics, 33(1):1–37, 1975.
[19] R. Nock, P. Luosto, and J. Kivinen. Mixed Bregman clustering with approximation guarantees. In Joint ECML and KDD, pages 154–169. Springer, 2008.
[20] P. Panter and W. Dite. Quantization distortion in pulse-count modulation with nonuniform spacing of levels. Proceedings of the IRE, 39(1):44–48, 1951.
[21] Q. Que and M. Belkin. Back to the future: Radial basis function networks revisited. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 1375–1383, 2016.
[22] M. Telgarsky and S. Dasgupta. Agglomerative Bregman clustering. arXiv preprint arXiv:1206.6446, 2012.
[23] K. Wagstaff, C. Cardie, S. Rogers, S. Schrödl, et al. Constrained k-means clustering with background knowledge. In ICML, volume 1, pages 577–584, 2001.
[24] J. Zhang and C. Zhang. Multitask Bregman clustering. Neurocomputing, 74(10):1720–1734, 2011.
6,137 | 6,551 | A Comprehensive Linear Speedup Analysis for
Asynchronous Stochastic Parallel Optimization from
Zeroth-Order to First-Order
Xiangru Lian*, Huan Zhang†, Cho-Jui Hsieh‡, Yijun Huang*, and Ji Liu*
*Department of Computer Science, University of Rochester, USA
†Department of Electrical and Computer Engineering, University of California, Davis, USA
‡Department of Computer Science, University of California, Davis, USA
[email protected], [email protected], [email protected], [email protected], [email protected]
Abstract
Asynchronous parallel optimization has received substantial success and extensive attention
recently. One of the core theoretical questions is how much speedup (or benefit) the asynchronous
parallelization can bring to us. This paper provides a comprehensive and generic analysis to study
the speedup property for a broad range of asynchronous parallel stochastic algorithms, from
zeroth-order to first-order methods. Our result recovers or improves the existing analyses on
special cases, provides more insights for understanding asynchronous parallel behaviors, and
suggests a novel asynchronous parallel zeroth-order method for the first time. Our experiments
provide novel applications of the proposed asynchronous parallel zeroth-order method on
hyperparameter tuning and model blending problems.
1 Introduction
Asynchronous parallel optimization has received substantial success and extensive attention
recently, for example, [5, 25, 31, 33, 34, 37]. It has been used to solve various machine learning
problems, such as deep learning [4, 7, 26, 36], matrix completion [25, 28, 34], SVM [15], linear
systems [3, 21], PCA [10], and linear programming [32]. Its main advantage over synchronous
parallel optimization is avoiding the synchronization cost, so it minimizes the system overheads and
maximizes the efficiency of all computation workers.

One of the core theoretical questions is how much speedup (or benefit) the asynchronous
parallelization can bring to us, that is, how much time can we save by employing more computation
resources? More precisely, people are interested in the running time speedup (RTS) with T workers:

    RTS(T) = (running time using a single worker) / (running time using T workers).

Since in asynchronous parallelism all workers keep busy, RTS can be measured roughly by the
computational complexity speedup (CCS) with T workers¹:

    CCS(T) = (total computational complexity using a single worker) / (total computational complexity using T workers) × T.

In this paper, we are mainly interested in the conditions that ensure the linear speedup property.
More specifically, what is the upper bound on T that ensures CCS(T) = Θ(T)?

Existing studies on special cases, such as asynchronous stochastic gradient descent (ASGD) and
asynchronous stochastic coordinate descent (ASCD), have revealed some clues for what factors can
affect the upper bound of T.

¹For simplicity, we assume that the communication cost is not dominant throughout this paper.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: Asynchronous parallel algorithms. "I" and "C" in the "model" column stand for the
inconsistent and consistent read models, respectively, which will be explained later. "Base alg." is
short for base algorithm; the last four rows are the results of this paper.

Asyn. alg. | problem type                    | upper bound of T                 | model | base alg.
ASGD [25]  | smooth, strongly convex         | O(N^{1/4})                       | C     | SGD
ASGD [1]   | smooth, convex                  | O(K^{1/4} min{σ^{3/2}, σ^{1/2}}) | C     | SGD
ASGD [11]  | composite, convex               | O(K^{1/4} σ^{1/2})               | C     | SGD
ASGD [18]  | smooth, nonconvex               | O(N^{1/4} K^{1/2} σ)             | I     | SGD
ASGD [18]  | smooth, nonconvex               | O(K^{1/2} σ)                     | C     | SGD
ARK [21]   | Ax = b                          | O(N)                             | C     | SGD
ASCD [20]  | smooth, convex, unconstrained   | O(N^{1/2})                       | C     | SCD
ASCD [20]  | smooth, convex, constrained     | O(N^{1/4})                       | C     | SCD
ASCD [19]  | composite, convex               | O(N^{1/4})                       | I     | SCD
ASCD [3]   | (1/2)xᵀAx − bᵀx                 | O(N^{1/2})                       | C     | SCD
ASCD [3]   | (1/2)xᵀAx − bᵀx                 | O(N^{1/2})                       | I     | SCD
ASCD [15]  | (1/2)xᵀAx − bᵀx, constrained    | O(N^{1/2})                       | I     | SCD
ASZD       | zeroth order, smooth, nonconvex | O(√(N^{3/2} + KN^{1/2}σ²))       | I     | SGD & SCD
ASGD       | smooth, nonconvex               | O(√(N^{3/2} + KN^{1/2}σ²))       | I     | SGD
ASGD       | smooth, nonconvex               | O(√(Kσ² + 1))                    | C     | SGD
ASCD       | smooth, nonconvex               | O(N^{3/4})                       | I     | SCD
For example, Agarwal and Duchi [1] showed that the upper bound depends
on the variance of the stochastic gradient in ASGD; Niu et al. [25] showed that the upper bound
depends on the data sparsity and the dimension of the problem in ASGD; and Avron et al. [3] and Liu
and Wright [19] found that the upper bound depends on the problem dimension as well as the
diagonal dominance of the Hessian matrix of the objective. However, a comprehensive and generic
analysis that comprehends all these pieces and shows how these factors jointly affect the speedup
property is still missing.
This paper provides a comprehensive and generic analysis to study the speedup property for a broad
range of asynchronous parallel stochastic algorithms from the zeroth order to the first order methods.
To avoid unnecessary complication and cover practical problems and algorithms, we consider the
following nonconvex stochastic optimization problem:
    min_{x∈R^N} f(x) := E_ξ(F(x; ξ)),   (1)

where ξ ∈ Ξ is a random variable, and both F(·; ξ): R^N → R and f(·): R^N → R are smooth but
not necessarily convex functions. This objective function covers a large scope of machine learning
problems, including deep learning. The F(·; ξ)'s are called component functions in this paper. The
most common specification is that Ξ is the index set of all training samples, Ξ = {1, 2, …, n}, and
F(x; ξ) is the loss function with respect to the training sample indexed by ξ.
We highlight the main contributions of this paper in the following:
• We provide a generic analysis for convergence and speedup, which covers many existing algorithms, including ASCD, ASGD (implementation on a parameter server), ASGD (implementation on multicore systems), and others, as its special cases.
• Our generic analysis can recover or improve the existing results on special cases.
• Our generic analysis suggests a novel asynchronous stochastic zeroth-order gradient descent (ASZD) algorithm and provides the analysis of its convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous parallel zeroth-order algorithm.
• The experiments include a novel application of the proposed ASZD method to model blending and hyperparameter tuning for big data optimization.
1.1 Related Works
We first review first-order asynchronous parallel stochastic algorithms. Table 1 summarizes existing
linear speedup results for asynchronous parallel optimization algorithms mostly related to this paper.
The last block of Table 1 shows the results in this paper. Reddi et al. [29] proved the convergence of
the asynchronous variance-reduced stochastic gradient (SVRG) method and its speedup in the sparse
setting. Mania et al. [22] provide a general perspective (or starting point) for analyzing asynchronous
stochastic algorithms, including HOGWILD!, asynchronous SCD, and asynchronous sparse SVRG.
The fundamental difference in our work lies in that we apply a different analysis, and our results can
be directly applied to various special cases, while theirs cannot. In addition, there is a line of research
studying asynchronous ADMM-type methods, which is not in the scope of this paper. We encourage
readers to refer to the recent literature, for example, Hong [14], Zhang and Kwok [35].
We end this section by reviewing zeroth-order stochastic methods. We use N to denote the
dimension of the problem, K to denote the iteration number, and σ the variance of the stochastic
gradient. Nesterov and Spokoiny [24] proved a convergence rate of O(N/√K) for zeroth-order SGD
applied to convex optimization. Based on [24], Ghadimi and Lan [12] proved a convergence rate of
O(√(N/K)) for zeroth-order SGD on nonconvex smooth problems. Jamieson et al. [16] show a lower
bound O(1/√K) for any zeroth-order method with inaccurate evaluation. Duchi et al. [9] proved an
O(N^{1/4}/K + 1/√K) rate for zeroth-order SGD on convex objectives, but with some very different
assumptions compared to our paper. Agarwal et al. [2] proved a regret of O(poly(N)√K) for a
zeroth-order bandit algorithm on convex objectives.

For a more comprehensive review of asynchronous algorithms, please refer to the long version of
this paper at arXiv:1606.00498.
1.2 Notation
• e_i ∈ R^N denotes the i-th natural unit basis vector.
• E(·) means taking the expectation with respect to all random variables, while E_a(·) denotes the
expectation with respect to a random variable a.
• ∇f(x) ∈ R^N is the gradient of f(x) with respect to x. Let S be a subset of {1, …, N}.
∇_S f(x) ∈ R^N is the projection of ∇f(x) onto the index set S, that is, the result of setting the
components of ∇f(x) outside of S to zero. We use ∇_i f(x) ∈ R^N to denote ∇_{{i}} f(x) for short.
• f* denotes the optimal objective value in (1).
2 Algorithm

Algorithm 1 Generic Asynchronous Stochastic Algorithm (GASA)
Require: x_0, K, Y, (μ_1, μ_2, …, μ_N), {γ_k}_{k=0,…,K−1}   (γ_k is the step length for the k-th iteration)
Ensure: {x_k}_{k=0}^{K}
1: for k = 0, …, K − 1 do
2:    Randomly select a component function index ξ_k and a set of coordinate indices S_k, where |S_k| = Y;
3:    x_{k+1} = x_k − γ_k G_{S_k}(x̂_k; ξ_k);
4: end for

We illustrate the asynchronous parallelism by assuming a centralized network: a central node and
multiple child nodes (workers). The central node maintains the optimization variable x. It could be a
parameter server if implemented on a computer cluster [17]; it could be shared memory if
implemented on a multicore machine. Given a base algorithm A, all child nodes run algorithm A
independently and concurrently: read x from the central node (we call the result of this read x̂, and
it is mathematically defined later in (4)), calculate locally using x̂, and modify x on the central node.
There is no need to synchronize the child nodes. Therefore, all child nodes stay busy, and
consequently their efficiency gets maximized; in other words, we have CCS(T) ≈ RTS(T). Note that
due to the asynchronous parallel mechanism the variable x in the central node is not updated exactly
following the protocol of algorithm A, since when a child node returns its computation result, the x
in the central node might have been changed by other child nodes. Thus a new analysis is required.
A fundamental question is under what conditions a linear speedup can be guaranteed; in other
words, under what conditions does CCS(T) = Θ(T), or equivalently RTS(T) = Θ(T), hold?

To provide a comprehensive analysis, we consider a generic algorithm A, the zeroth-order hybrid of
SCD and SGD: iteratively sample a component function² indexed by ξ and a coordinate block
S ⊆ {1, 2, …, N}, where |S| = Y for some constant Y, and update x with

    x ← x − γ G_S(x; ξ),   (2)

where G_S(x; ξ) is an approximation to the block coordinate stochastic gradient N Y^{−1} ∇_S F(x; ξ):

    G_S(x; ξ) := Σ_{i∈S} (N/(2Y μ_i)) (F(x + μ_i e_i; ξ) − F(x − μ_i e_i; ξ)) e_i,  S ⊆ {1, 2, …, N}.   (3)

In the definition of G_S(x; ξ), μ_i is the approximation parameter for the i-th coordinate;
(μ_1, μ_2, …, μ_N) is predefined in practice. We only use function values (the zeroth-order
information) to estimate G_S(x; ξ). It is easy to see that the closer to 0 the μ_i's are, the closer
G_S(x; ξ) and N Y^{−1} ∇_S F(x; ξ) will be. In particular, lim_{μ_i→0, ∀i} G_S(x; ξ) = N Y^{−1} ∇_S F(x; ξ).

²The algorithm and the theoretical analysis that follows can be easily extended to the minibatch version.
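A direct transcription of the estimator (3) reads as follows (our sketch; the quadratic test function
and all parameter values are illustrative only):

    import numpy as np

    def G_S(F, x, xi, S, mu, N, Y):
        # Zeroth-order block gradient estimator of Eq. (3).
        g = np.zeros(N)
        for i in S:
            e = np.zeros(N)
            e[i] = 1.0
            g[i] = (N / (2 * Y * mu[i])) * (F(x + mu[i] * e, xi)
                                            - F(x - mu[i] * e, xi))
        return g

    # Illustration: F(x; xi) = 0.5 ||x - xi||^2, whose gradient is x - xi.
    N, Y = 10, 3
    rng = np.random.default_rng(0)
    F = lambda x, xi: 0.5 * np.sum((x - xi) ** 2)
    x, xi = rng.normal(size=N), rng.normal(size=N)
    S = rng.choice(N, size=Y, replace=False)
    g = G_S(F, x, xi, S, np.full(N, 1e-5), N, Y)
    # On S the estimate recovers (N/Y)(x - xi); elsewhere it is zero. Since
    # each coordinate lands in S with probability Y/N, the estimator matches
    # the gradient in expectation, up to the finite-difference error that
    # vanishes as mu -> 0.
    print(np.allclose(g[S], (N / Y) * (x - xi)[S], atol=1e-6))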
Applying asynchronous parallelism, we propose a generic asynchronous stochastic algorithm in
Algorithm 1. This algorithm essentially characterizes how the value of x is updated in the central
node. γ_k is the predefined steplength (or learning rate), and K is the total number of iterations (note
that this iteration number is counted by the central node, that is, any update on x, no matter from
which child node, increases this counter).

As we mentioned, the key difference of the asynchronous algorithm from the protocol of algorithm A
in Eq. (2) is that x̂_k may not be equal to x_k. In asynchronous parallelism, there are two different
ways to model the value of x̂_k:

• Consistent read: x̂_k is some earlier existing state of x in the central node, that is, x̂_k = x_{k−τ_k} for
some τ_k ≥ 0. This happens if reading x and writing x on the central node by any child node are
atomic operations, for instance, in the implementation on a parameter server [17].

• Inconsistent read: x̂_k could be more complicated when the atomic read of x cannot be guaranteed,
which could happen, for example, in the implementation on a multicore system. It means that while
one child node is reading x in the central node, other child nodes may be performing modifications
on x at the same time. Therefore, different coordinates of x read by any child node may have
different ages. In other words, x̂_k may not be any existing state of x in the central node.

Readers who want to learn more details about consistent read and inconsistent read can refer to
[3, 18, 19]. To cover both cases, we note that x̂_k can be represented in the following generic form:

    x̂_k = x_k − Σ_{j∈J(k)} (x_{j+1} − x_j),   (4)

where J(k) ⊆ {k−1, k−2, …, k−T} is a subset of the indices of earlier iterations, and T is the upper
bound on the staleness. This expression is also considered in [3, 18, 19, 27]. Note that the practical
value of T is usually proportional to the number of involved nodes (or workers). Therefore, the total
number of workers and the upper bound of the staleness are treated as the same in the following
discussion, and the notation T is abused for simplicity.
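The read model (4) can be simulated directly. The toy sketch below (ours; a serial simulation, not
the actual lock-free implementation) forms each read x̂_k by undoing a random subset of the T most
recent updates, exactly as Eq. (4) prescribes, and shows that gradient descent still converges on a
simple objective:

    import numpy as np

    def simulate_gasa(grad, x0, gamma, K, T, seed=0):
        # x_hat_k = x_k - sum_{j in J(k)} (x_{j+1} - x_j), with J(k) a random
        # subset of {k-1, ..., k-T}: the reader "misses" those updates.
        rng = np.random.default_rng(seed)
        x, deltas = x0.copy(), []
        for k in range(K):
            x_hat = x.copy()
            for j in range(max(0, k - T), k):
                if rng.random() < 0.5:
                    x_hat -= deltas[j]      # undo update j, i.e. x_{j+1} - x_j
            d = -gamma * grad(x_hat)
            deltas.append(d)
            x += d
        return x

    # f(x) = 0.5 ||x||^2: stale gradients still drive x to 0 for small gamma.
    x = simulate_gasa(lambda x: x, np.ones(5), gamma=0.05, K=2000, T=10)
    print(np.linalg.norm(x))   # tiny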
3 Theoretical Analysis

Before we show the main results of this paper, let us first make some global assumptions commonly
used for the analysis of stochastic algorithms.³

Bounded Variance of Stochastic Gradient: E_ξ(‖∇F(x; ξ) − ∇f(x)‖²) ≤ σ², ∀x.

Lipschitzian Gradient: The gradients of both the objective and its component functions are
Lipschitzian:⁴

    max{‖∇f(x) − ∇f(y)‖, ‖∇F(x; ξ) − ∇F(y; ξ)‖} ≤ L‖x − y‖,  ∀x, ∀y, ∀ξ.   (5)

Under the Lipschitzian gradient assumption, define two more constants L_s and L_max. Let s be any
positive integer bounded by N. Define L_s to be the minimal constant satisfying the following
inequality: for all ξ, x, and S ⊆ {1, 2, …, N} with |S| ≤ s, and for any z = Σ_{i∈S} α_i e_i,

    max{‖∇f(x) − ∇f(x + z)‖, ‖∇F(x; ξ) − ∇F(x + z; ξ)‖} ≤ L_s ‖z‖.

Define L_(i) for i ∈ {1, 2, …, N} as the minimal constant that satisfies

    max{‖∇_i f(x) − ∇_i f(x + α e_i)‖, ‖∇_i F(x; ξ) − ∇_i F(x + α e_i; ξ)‖} ≤ L_(i) |α|,  ∀ξ, ∀x.   (6)

Define L_max := max_{i∈{1,…,N}} L_(i). It can be seen that L_max ≤ L_s ≤ L.

Independence: All random variables ξ_k, S_k for k = 0, 1, …, K are independent of each other.

Bounded Age: Let T be the global bound for the delay: J(k) ⊆ {k − 1, …, k − T}, ∀k, so |J(k)| ≤ T.
We define the following global quantities for short notation:

    ν := (Σ_{i=1}^N L_(i)² μ_i²)/N,
    Ω₁ := (4 + 4(T√Y + Y^{3/2}T²/√N)) L_T/(L_{2Y}√N),
    Ω₂ := Y/((f(x_0) − f*) L_Y N),
    Ω₃ := (K(Nν + σ²)Ω₂ + 4) L_{2Y}²/L_T².   (7)
Next we show our main result in the following theorem:

³Some underlying assumptions, such as the atomicity of reading and writing a float number, are omitted
here. As pointed out in [25], these behaviors are guaranteed by most modern architectures.
⁴Note that the Lipschitz assumption on the component functions F(x; ξ) can be eliminated when it comes to
first-order methods (i.e., μ → 0) in the following theorems.
Theorem 1 (Generic Convergence Rate for GASA). Choose the steplength γ_k in Algorithm 1 to be
the constant

    γ_k^{−1} = γ^{−1} = 2 L_Y N Y^{−1} ( √( Ω₁²/(K(Nν + σ²)Ω₂ + Ω₁) ) + √( K(Nν + σ²)Ω₂ ) ),  ∀k,

and suppose the age T is bounded by

    T ≤ (√N/(2Y^{1/2})) ( √(1 + 4Y^{−1/2} N^{1/2} Ω₃) − 1 ).

Then we have the following convergence rate:

    (1/K) Σ_{k=0}^{K} E‖∇f(x_k)‖² ≤ 20/(KΩ₂)
        + (1/(KΩ₂)) ( (L_T²/L_{2Y}²) √N Y^{−1} ( √(1 + 4Y^{−1/2} N^{1/2} Ω₃) − 1 )
        + 11 √( (Nν + σ²) K Ω₂ ) ) + Nν.   (8)
Roughly speaking, the first term on the right-hand side of (8) is related to SCD; the second term is
related to the "stochastic" gradient descent; and the last term is due to the zeroth-order
approximation. Although this result looks complicated (or may be less elegant), it is capable of
capturing many important subtle structures, which can be seen in the subsequent discussion. We will
show how to recover and improve existing results, as well as prove the convergence of new
algorithms, using Theorem 1. To make the results more interpretable, we use big-O notation to avoid
explicitly writing down all the constant factors, including all the L's, f(x_0), and f*, in the following
corollaries.
3.1 Asynchronous Stochastic Coordinate Descent (ASCD)

We apply Theorem 1 to study the asynchronous SCD algorithm by taking Y = 1 and μ = 0. S_k = {i_k}
only contains a single randomly sampled coordinate, and μ = 0 (or equivalently μ_i = 0, ∀i). The
essential updating rule on x is x_{k+1} = x_k − γ_k ∇_{i_k} f(x̂_k).

Corollary 2 (ASCD). Let μ = 0, σ = 0, and Y = 1 in Algorithm 1 and Theorem 1. If

    T ≤ O(N^{3/4}),   (9)

then the following convergence rate holds:

    (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O(N/K).   (10)
The proved convergence rate O(N/K) is consistent with the existing analysis of SCD [30] and of
ASCD for smooth optimization [20]. However, our requirement in (9) to ensure the linear speedup
property is better than the one in [20], improving it from T ≤ O(N^{1/2}) to T ≤ O(N^{3/4}). Mania
et al. [22] analyzed ASCD for strongly convex objectives and proved a linear speedup smaller than
O(N^{1/6}), which is also more restrictive than ours.
3.2 Asynchronous Stochastic Gradient Descent (ASGD)

ASGD has been widely used to solve deep learning [7, 26, 36], NLP [4, 13], and many other
important machine learning problems [25]. There are two typical implementations of ASGD. The
first type is implemented on a computer cluster with a parameter server [1, 17]. The parameter
server serves as the central node. It can ensure atomic read and write of the whole vector x, which
leads to the following updating rule for x (setting Y = N and μ_i = 0, ∀i, in Algorithm 1):

    x_{k+1} = x_k − γ_k ∇F(x̂_k; ξ_k).   (11)

Note that a single iteration is defined as modifying the whole vector. The other type is implemented
on a single computer with multiple cores. In this case, the central node corresponds to the shared
memory. Multiple cores (or threads) can access it simultaneously. However, in this model atomic
read and write of x cannot be guaranteed. Therefore, for the purpose of analysis, each update on a
single coordinate accounts for an iteration. This leads to the following updating rule (setting
S_k = {i_k}, that is, Y = 1, and μ_i = 0, ∀i, in Algorithm 1):

    x_{k+1} = x_k − γ_k ∇_{i_k} F(x̂_k; ξ_k).   (12)

Readers can refer to [3, 18, 25] for more details and illustrations of these two implementations.
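In code, the two rules differ only in how much of x a single counted iteration touches (our sketch;
the copies stand in for whichever read model applies):

    import numpy as np

    def asgd_server_step(x, grad_F, xi, gamma):
        # Rule (11): parameter server, atomic update of the whole vector.
        x_hat = x.copy()                  # consistent read
        return x - gamma * grad_F(x_hat, xi)

    def asgd_multicore_step(x, grad_F, xi, i, gamma):
        # Rule (12): shared memory, one coordinate per counted iteration.
        x_hat = x.copy()                  # in reality an inconsistent read
        x[i] -= gamma * grad_F(x_hat, xi)[i]
        return x

Under rule (11) one iteration touches all N coordinates, while under rule (12) it touches one; this
difference in counting is exactly where the extra factor N in Corollary 4 below comes from.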
Corollary 3 (ASGD in (11)). Let μ = 0 (or μ_i = 0, ∀i, equivalently) and Y = N in Algorithm 1 and
Theorem 1. If

    T ≤ O( √(Kσ² + 1) ),   (13)

then the following convergence rate holds:

    (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O( σ/√K + 1/K ).   (14)
First note that the convergence rate in (14) is tight since it is consistent with the serial
(non-parallel) version of SGD [23]. We compare the linear speedup property indicated by (13) with
the results in [1], [11], and [18]. To ensure such a rate, Agarwal and Duchi [1] need T to be bounded
by T ≤ O(K^{1/4} min{σ^{3/2}, σ}), which is inferior to our result in (13). Feyzmahdavian et al. [11]
need T to be bounded by σ^{1/2}K^{1/4} to achieve the same rate, which is also inferior to our result.
Our requirement is consistent with the one in [18]. To the best of our knowledge, it is the best result
so far.
Corollary 4 (ASGD in (12)). Let μ = 0 (or equivalently, μ_i = 0, ∀i) and Y = 1 in Algorithm 1 and
Theorem 1. If

    T ≤ O( √(N^{3/2} + KN^{1/2}σ²) ),   (15)

then the following convergence rate holds:

    (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O( √(N/K) σ + N/K ).   (16)
The additional factor N in (16) (compared to (14)) arises from the different way of counting
iterations. This additional factor also appears in [25] and [18]. We first compare our result with [18],
which requires T to be bounded by O(√(KN^{1/2}σ²)). We can see that our requirement in (15) allows
a larger value of T, especially when σ is small enough that N^{3/2} dominates KN^{1/2}σ². Next we
compare with [25], which assumes that the objective function is strongly convex. Although this is
somewhat comparing "apples" with "oranges", the comparison is still meaningful if one believes that
strong convexity does not affect the linear speedup property, as implied by [22]. In [25], the linear
speedup is guaranteed if T ≤ O(N^{1/4}), under the assumption that the sparsity of the stochastic
gradient is bounded by O(1). In comparison, we do not require a sparsity assumption on the
stochastic gradient and have a better dependence on N. Moreover, beyond the improvement over the
existing analyses in [22] and [18], our analysis provides some interesting insights into asynchronous
parallelism. Niu et al. [25] essentially suggest that a large problem dimension N is beneficial to the
linear speedup, while Lian et al. [18] and many others (for example, Agarwal and Duchi [1],
Feyzmahdavian et al. [11]) suggest that a large stochastic variance σ (which often means the number
of samples is large) is beneficial to the linear speedup. Our analysis shows the combined effect of N
and σ and how they improve the linear speedup jointly.
3.3 Asynchronous Stochastic Zeroth-order Descent (ASZD)

We end this section by applying Theorem 1 to generate a novel asynchronous zeroth-order
stochastic descent algorithm, obtained by setting the block size Y = 1 (or equivalently S_k = {i_k}) in
G_{S_k}(x̂_k; ξ_k):

    G_{S_k}(x̂_k; ξ_k) = G_{{i_k}}(x̂_k; ξ_k) = (F(x̂_k + μ_{i_k} e_{i_k}; ξ_k) − F(x̂_k − μ_{i_k} e_{i_k}; ξ_k))/(2μ_{i_k}) · e_{i_k}.   (17)

To the best of our knowledge, this is the first asynchronous algorithm for zeroth-order optimization.
Corollary 5 (ASZD). Set Y = 1 and all μ_i's to be a constant μ in Algorithm 1. Suppose that μ
satisfies

    μ ≤ O( 1/√K + min{ σ(NK)^{−1/4}, σ/√N } ),   (18)

and T satisfies

    T ≤ O( √(N^{3/2} + KN^{1/2}σ²) ).   (19)

Then we have the following convergence rate:

    (Σ_{k=0}^K E‖∇f(x_k)‖²)/K ≤ O( √(N/K) + √(N/K) σ ).   (20)
We first note that the convergence rate in (20) is consistent with the rate for the serial
(non-parallel) zeroth-order stochastic gradient method in [12]. We then evaluate this result from two
perspectives.

First, consider T = 1, which leads to the serial (non-parallel) zeroth-order stochastic descent. Our
result implies a better dependence on μ compared with [12].⁵ To obtain the convergence rate in (20),
Ghadimi and Lan [12] require μ ≤ O(1/(N√K)), while our requirement in (18) is much less
restrictive. An important insight in our requirement is the dependence on the variance σ: if the
variance σ is large, μ is allowed to take a much larger value. This insight matches common sense: a
large variance means that the stochastic gradient may deviate largely from the true gradient, so we
are allowed to choose a large μ and obtain a less exact estimate of the stochastic gradient without
affecting the convergence rate. From the practical point of view, one always prefers to choose a
large value for μ. Recall that the zeroth-order method uses the function difference at two different
points (e.g., x + μe_i and x − μe_i) to estimate the differential. In a practical system (e.g., a concrete
control system), there usually exists some system noise when querying the function values. If the
two points are too close (in other words, μ is too small), the obtained function difference is
dominated by the noise and does not really reflect the function differential.

⁵Acute readers may notice that our way of estimating the stochastic gradient in (17) differs from the one
used in [12]. Our method only estimates a single coordinate gradient of a sampled component function, while
Ghadimi and Lan [12] estimate the whole gradient of the sampled component function. Our estimate is more
accurate but less aggressive. The proved convergence rate actually improves a small constant in [12].
Second, we consider the case T ≥ 1, which leads to the asynchronous zeroth-order stochastic
descent. To the best of our knowledge, this is the first such algorithm. The upper bound for T in (19)
essentially indicates the requirement for the linear speedup property. The linear speedup property
here also shows that even if Kσ² is much smaller than 1, we still have an O(N^{3/4}) linear speedup,
which reflects a fundamental property of asynchronous stochastic algorithms: N and σ can improve
the linear speedup jointly.
4 Experiment

Since ASCD and the various ASGDs have been extensively validated in recent papers, we conduct
two experiments to validate the proposed ASZD in this section. The first part applies ASZD to
estimate the parameters of a synthetic black box system. The second part applies ASZD to model
combination for the Yahoo Music Recommendation Competition.

4.1 Parameter Optimization for a Black Box

We use a deep neural network to simulate a black box system. The optimization variables are the
weights associated with the neural network. We choose 5 layers (400/100/50/20/10 nodes) for the
neural network, with 46,380 weights (or parameters) in total. The weights are randomly generated
from an i.i.d. Gaussian distribution. The output vector is constructed by applying the network to the
input vector and adding some Gaussian random noise. We use this network to generate 463,800
samples. These synthetic samples are used to optimize the weights for the black box. (We pretend
not to know the structure and weights of this neural network because it is a black box.) To optimize
(estimate) the parameters of this black box, we apply the proposed ASZD method.

The experiment is conducted on a machine (Intel Xeon architecture) with 4 sockets and 10 cores per
socket. We run Algorithm 1 on various numbers of cores from 1 to 32, and the steplength is chosen
as γ = 0.1, based on the best performance of Algorithm 1 running on 1 core to achieve precision
10^{−1} for the objective value.
Table 2: CCS and RTS of ASZD for different numbers of threads (synthetic data).

thr-# | 1 | 4    | 8    | 12   | 16    | 20    | 24    | 28    | 32
CCS   | 1 | 3.87 | 7.91 | 9.97 | 14.74 | 17.86 | 21.76 | 26.44 | 30.86
RTS   | 1 | 3.32 | 6.74 | 8.48 | 12.49 | 15.08 | 18.52 | 22.49 | 26.12
The speedup is reported in Table 2. We observe that the iteration speedup is almost linear, while the
running time speedup is slightly worse than the iteration speedup. We also draw Figure 1 (see the
supplement) to show the curves of the objective value against the number of iterations and the
running time, respectively.
4.2 Asynchronous Parallel Model Combination for the Yahoo Music Recommendation Competition

In KDD-Cup 2011, teams were challenged to predict user ratings in music given the Yahoo! Music
data set [8]. The evaluation criterion is the root mean squared error (RMSE) on the test data set:

    RMSE = √( Σ_{(u,i)∈T₁} (r_{ui} − r̂_{ui})² / |T₁| ),   (21)

where (u, i) ∈ T₁ ranges over all user ratings in the Track 1 test data set (6,005,940 ratings), r_{ui}
is the true rating of user u for item i, and r̂_{ui} is the predicted rating. The winning team from NTU
created more than 200 models using different machine learning algorithms [6], including matrix
factorization, k-NN, restricted Boltzmann machines, etc. They blended these models using neural
networks and binned linear regression on the validation data set (4,003,960 ratings) to create a
model ensemble that achieves a better RMSE.
We were able to obtain the predicted ratings of N = 237 individual models on the KDD-Cup test
data set from the NTU KDD-Cup team, which form a matrix X with 6,005,940 rows (corresponding
to the 6,005,940 test samples) and 237 columns. Each element X_{ij} indicates the j-th model's
predicted rating on the i-th Yahoo! Music test sample. In our experiments, we try to linearly blend
the 237 models using information from the test data set. Thus, our optimization variable is a vector
x ∈ R^N of blending coefficients for the models' predicted ratings. To ensure that our linear blending
does not overfit, we further split X randomly into two equal parts, calling them the "validation" set
(denoted as A ∈ R^{n×N}) for model blending and the true test set.

We define our objective function as the squared RMSE of the blended output on the validation set:
f(x) = ‖Ax − r‖²/n, where r is the vector of corresponding true ratings in the validation set and Ax
is the vector of predicted ratings after blending.

We assume that we cannot see the entries of r directly, and thus cannot compute the gradient of
f(x). In our experiment, we treat f(x) as a black box, and the only information we can get from it is
its value given blending coefficients x. This is similar to submitting a model for KDD-Cup and
obtaining a leaderboard RMSE on the test set; we do not know the actual values of the test set.
Then, we apply our ASZD algorithm to minimize f(x) using zeroth-order information only.
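In outline, the blending experiment looks as follows (our reconstruction in Python; the actual
implementation is in Julia and multithreaded, and the matrix below is a synthetic stand-in for the
237-model prediction matrix):

    import numpy as np

    def make_blackbox(A, r):
        # Validation oracle f(x) = ||Ax - r||^2 / n; callers see only values.
        n = A.shape[0]
        return lambda x: np.sum((A @ x - r) ** 2) / n

    def zeroth_order_descent(f, x, gamma, mu, K, seed=0):
        rng = np.random.default_rng(seed)
        N = len(x)
        for _ in range(K):
            i = rng.integers(N)
            e = np.zeros(N)
            e[i] = 1.0
            g = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)   # Eq. (17), serial
            x[i] -= gamma * g
        return x

    rng = np.random.default_rng(0)
    n, N = 5000, 237
    A = rng.normal(size=(n, N))                       # stand-in predictions
    r = A @ rng.dirichlet(np.ones(N)) + 0.1 * rng.normal(size=n)
    f = make_blackbox(A, r)
    x = zeroth_order_descent(f, np.full(N, 1.0 / N), gamma=0.5, mu=1e-4,
                             K=20_000)
    print("blended RMSE:", np.sqrt(f(x)))             # approaches the noise level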
Table 3: Comparing RMSEs on the test data set with the KDD-Cup winner teams.

     | NTU (1st) | Commendo (2nd) | InnerPeace (3rd) | Our result
RMSE | 21.0004   | 21.0545        | 21.2335          | 21.1241
We implement our algorithm in Julia on a 10-core Xeon E7-4680 machine and run it for the same
number of iterations with different numbers of threads, measuring the running time speedup (RTS)
in Figure 4 (see the supplement). Similar to our experiment on the neural network black box, our
algorithm achieves an almost linear speedup. For completeness, Figure 2 in the supplement shows
the square root of the objective value (RMSE) against the number of iterations and the running time.
After about 150 seconds, our algorithm running with 10 threads achieves an RMSE of 21.1241 on
our test set. Our results are comparable to the KDD-Cup winners, as shown in Table 3. Since our
goal is to show the performance of our algorithm, we assume we can "submit" our solution x an
unlimited number of times, which is unrealistic in a real contest like KDD-Cup. However, even with
very few iterations, our algorithm converges fast to a reasonably small RMSE, as shown in Figure 3.
5 Conclusion

In this paper, we provide a generic linear speedup analysis for zeroth-order and first-order
asynchronous parallel algorithms. Our generic analysis can recover or improve the existing results
on special cases, such as ASCD, ASGD (parameter server implementation), and ASGD (multicore
implementation). Our generic analysis also suggests a novel ASZD algorithm with a guaranteed
convergence rate and speedup property. To the best of our knowledge, this is the first asynchronous
parallel zeroth-order algorithm. The experiments include a novel application of the proposed ASZD
method to model blending and hyperparameter tuning for big data optimization.
Acknowledgements
This project is supported in part by the NSF grant CNS-1548078. We especially thank Chen-Tse
Tsai for providing the code and data for the Yahoo Music Competition.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. NIPS, 2011.
[2] A. Agarwal, D. P. Foster, D. J. Hsu, S. M. Kakade, and A. Rakhlin. Stochastic convex optimization with bandit feedback. In NIPS, pages 1035–1043, 2011.
[3] H. Avron, A. Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate through randomization. Journal of the ACM (JACM), 62(6):51, 2015.
[4] Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.
[5] S. Chaturapruek, J. C. Duchi, and C. Ré. Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care. In NIPS, pages 1531–1539, 2015.
[6] P.-L. Chen, C.-T. Tsai, Y.-N. Chen, K.-C. Chou, C.-L. Li, C.-H. Tsai, K.-W. Wu, Y.-C. Chou, C.-Y. Li, W.-S. Lin, et al. A linear ensemble of individual and blended models for music rating prediction. In KDDCup, 2012.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. NIPS, 2012.
[8] G. Dror, N. Koenigstein, Y. Koren, and M. Weimer. The Yahoo! Music dataset and KDD-Cup'11. In KDDCup, pages 8–18, 2012.
[9] J. C. Duchi, P. L. Bartlett, and M. J. Wainwright. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization, 22(2):674–701, 2012.
[10] J. Fellus, D. Picard, and P. H. Gosselin. Asynchronous gossip principal components analysis. Neurocomputing, 2015. doi: 10.1016/j.neucom.2014.11.076.
[11] H. R. Feyzmahdavian, A. Aytekin, and M. Johansson. An asynchronous mini-batch algorithm for regularized stochastic optimization. arXiv, 2015.
[12] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
[13] K. Gimpel, D. Das, and N. A. Smith. Distributed asynchronous online learning for natural language processing. In CoNLL, pages 213–222, 2010.
[14] M. Hong. A distributed, asynchronous and incremental algorithm for nonconvex optimization: An ADMM based approach. arXiv:1412.6058, 2014.
[15] C. Hsieh, H. Yu, and I. S. Dhillon. PASSCoDe: Parallel ASynchronous Stochastic dual Co-ordinate Descent. In ICML, pages 2370–2379, 2015.
[16] K. G. Jamieson, R. Nowak, and B. Recht. Query complexity of derivative-free optimization. In NIPS, 2012.
[17] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. OSDI, 2014.
[18] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In NIPS, pages 2719–2727, 2015.
[19] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. arXiv:1403.3862, 2014.
[20] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. ICML, 2014.
[21] J. Liu, S. J. Wright, and S. Sridhar. An asynchronous parallel randomized Kaczmarz algorithm. arXiv, 2014.
[22] H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed iterate analysis for asynchronous stochastic optimization. arXiv:1507.06970, 2015.
[23] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[24] Y. Nesterov and V. Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, pages 1–40, 2011.
[25] F. Niu, B. Recht, C. Re, and S. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. NIPS, 2011.
[26] T. Paine, H. Jin, J. Yang, Z. Lin, and T. Huang. GPU asynchronous stochastic gradient descent to speed up neural network training. NIPS, 2013.
[27] Z. Peng, Y. Xu, M. Yan, and W. Yin. ARock: an algorithmic framework for asynchronous parallel coordinate updates. arXiv, 2015.
[28] F. Petroni and L. Querzoni. GASGD: stochastic gradient descent for distributed asynchronous matrix completion via graph partitioning. ACM Conference on Recommender Systems, 2014.
[29] S. J. Reddi, A. Hefny, S. Sra, B. Póczos, and A. J. Smola. On variance reduction in stochastic gradient descent and its asynchronous variants. In NIPS, pages 2629–2637, 2015.
[30] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1–38, 2014.
[31] C. D. Sa, C. Zhang, K. Olukotun, and C. Ré. Taming the wild: A unified analysis of hogwild-style algorithms. In NIPS, pages 2674–2682, 2015.
[32] S. Sridhar, S. Wright, C. Re, J. Liu, V. Bittorf, and C. Zhang. An approximate, efficient LP solver for LP rounding. NIPS, 2013.
[33] J. Wei, W. Dai, A. Kumar, X. Zheng, Q. Ho, and E. P. Xing. Consistent bounded-asynchronous parameter servers for distributed ML. arXiv:1312.7869, 2013.
[34] H. Yun, H.-F. Yu, C.-J. Hsieh, S. Vishwanathan, and I. Dhillon. NOMAD: Non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. arXiv:1312.0193, 2013.
[35] R. Zhang and J. Kwok. Asynchronous distributed ADMM for consensus optimization. ICML, 2014.
[36] S. Zhang, A. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. arXiv, 2014.
[37] S. Zhao and W. Li. Fast asynchronous parallel stochastic gradient descent: A lock-free approach with convergence guarantee. In AAAI, pages 2379–2385, 2016.
6,138 | 6,552 | Visual Dynamics: Probabilistic Future Frame
Synthesis via Cross Convolutional Networks
Tianfan Xue*1 Jiajun Wu*1 Katherine L. Bouman1 William T. Freeman1,2
1 Massachusetts Institute of Technology   2 Google Research
{tfxue, jiajunwu, klbouman, billf}@mit.edu
Abstract
We study the problem of synthesizing a number of likely future frames from a
single input image. In contrast to traditional methods, which have tackled this
problem in a deterministic or non-parametric way, we propose to model future
frames in a probabilistic manner. Our probabilistic model makes it possible for us
to sample and synthesize many possible future frames from a single input image.
To synthesize realistic movement of objects, we propose a novel network structure,
namely a Cross Convolutional Network; this network encodes image and motion
information as feature maps and convolutional kernels, respectively. In experiments,
our model performs well on synthetic data, such as 2D shapes and animated game
sprites, as well as on real-world video frames. We also show that our model can be
applied to visual analogy-making, and present an analysis of the learned network
representations.
1 Introduction
From just a single snapshot, humans are often able to imagine how a scene will visually change
over time. For instance, due to the pose of the girl in Figure 1, most would predict that her arms
are stationary but her leg is moving. However, the exact motion is often unpredictable due to an
intrinsic ambiguity. Is the girl's leg moving up or down? In this work, we study the problem of
visual dynamics: modeling the conditional distribution of future frames given an observed image.
We propose to tackle this problem using a probabilistic, content-aware motion prediction model that
learns this distribution without using annotations. Sampling from this model allows us to visualize
the many possible ways that an input image is likely to change over time.
Modeling the conditional distribution of future frames given only a single image as input is a very
challenging task for a number of reasons. First, natural images come from a very high dimensional
distribution that is difficult to model. Designing a generative model for realistic images is a very
challenging problem. Second, in order to properly predict motion distributions, the model must first
learn about image parts and the correlation of their respective motions in an unsupervised fashion.
In this work, we tackle the visual dynamics problem using a neural network structure, based on a
variational autoencoder [Kingma and Welling, 2014] and our newly proposed cross convolutional layer.
During training, the network observes a set of consecutive image pairs in videos, and automatically
infers the relationship between them without any supervision. During testing, the network then
predicts the conditional distribution, P (J|I), of future RGB images J (Figure 1b) given an RGB
input image I that was not in the training set (Figure 1a). Using this distribution, the network is able
to synthesize multiple different image samples corresponding to possible future frames of the input
image (Figure 1c). Our network contains a number of key components that contribute to its success:
? We use a conditional variational autoencoder to model the complex conditional distribution
of future frames [Kingma and Welling, 2014, Yan et al., 2016]. This allows us to approximate
a sample, J, from the distribution of future images by using a trainable function J = f (I, z).
* indicates equal contributions.
Figure 1: Predicting the movement of an object from a single snapshot is often ambiguous. For instance, is the girl's leg in (a) moving up or down? We propose a probabilistic, content-aware motion prediction model (b) that learns the conditional distribution of future frames. Using this model we are able to synthesize various future frames (c) that are all consistent with the observed input (a). Panels: (a) input image, (b) probabilistic model of the conditional distribution of the future frame, (c) output image samples.
The argument z is a sample from a simple distribution, e.g. Gaussian, which introduces
randomness into the sampling of J. This formulation makes the problem of learning the
distribution much more tractable than explicitly modeling the distribution.
? We model motion using a set of image-dependent convolution kernels operating over an
image pyramid. Unlike normal convolutional layers, these kernels vary between images,
as different images may have different motions. Our proposed cross convolutional layer
convolves image-dependent kernels with feature maps from an observed frame, to synthesize
a probable future frame.
We test the proposed model on two synthetic datasets as well as a dataset generated from real videos.
We show that, given an RGB input image, the algorithm can successfully model a distribution of
possible future frames, and generate different samples that cover a variety of realistic motions. In
addition, we demonstrate that our model can be easily applied to tasks such as visual analogy-making,
and present an analysis of the learned network representations.
2 Related Work
Motion priors Research studying the human visual system and motion priors provides evidence
for low-level statistics of object motion. Pioneering work by Weiss and Adelson [1998] found that
the human visual system prefers slow and smooth motion fields. More recent work by Roth and
Black [2005] analyzed the response of spatial filters applied to optical flow fields. Fleet et al. [2000]
also found that a local motion field can be represented by a linear combination of a small number
of bases. All these works focus on the distribution of a motion field itself without considering any
image information. On the contrary, our context-aware model captures the relationship between an
observed image and its motion field.
Motion or future prediction Our problem is closely related to the motion or feature prediction
problem. Given an observed image or a short video sequence, models have been proposed to predict
a future motion field [Liu et al., 2011, Pintea et al., 2014, Xue et al., 2014, Walker et al., 2015, 2016],
a future trajectory of objects [Walker et al., 2014, Wu et al., 2015], or a future visual representation [Vondrick et al., 2016b]. Most of these works use deterministic prediction models [Pintea et al.,
2014, Vondrick et al., 2016b]. Recently, and concurrently with our own work, Walker et al. [2016]
found that there is an intrinsic ambiguity in deterministic prediction, and propose a probabilistic
prediction framework. Our model is also a probabilistic prediction model, but it directly predicts the
pixel values, rather than motion fields or image features.
Parametric image synthesis Early work in parametric image synthesis mostly focus on texture
synthesis using hand-crafted features [Portilla and Simoncelli, 2000]. More recently, works in
image synthesis have begun to produce impressive results by training variants of neural network
structures to produce novel images [Gregor et al., 2015, Xie et al., 2016a,b, Zhou et al., 2016].
Generative adversarial networks [Goodfellow et al., 2014, Denton et al., 2015, Radford et al., 2016]
and variational autoencoders [Kingma and Welling, 2014, Yan et al., 2016] have been used to model
and sample from natural image distributions. Our proposed algorithm is also based on the variational
autoencoder, but unlike in this previous work, we also model temporal consistency.
Video synthesis Techniques that exploit the periodic structure of motion in videos have also been
successful at generating novel frames from an input sequence. Early work in video textures proposed to shuffle frames from an existing video to generate a temporally consistent, looping image
sequence [Schödl et al., 2000]. These ideas were later extended to generate cinemagraphs [Joshi
et al., 2012], seamlessly looping videos containing a variety of objects with different motion patterns [Agarwala et al., 2005, Liao et al., 2013], or video inpainting [Wexler et al., 2004]. While
Figure 2: A toy world example, with panels (a) a toy world, (b) deterministic motion prediction, (c) motion prior, and (d) probabilistic frame predictor. See Section 3.2 for details.
high-resolution and realistic looking videos are generated using these techniques, they are often
limited to periodic motion and require an input reference video. In contrast, we build an image
generation model that does not require a reference video at test time.
Recently, several network structures have been proposed to synthesize a new frame from observed
frames. They infer the future motion either from multiple previous frames Srivastava et al. [2015],
Mathieu et al. [2016], user-supplied action labels Oh et al. [2015], Finn et al. [2016], or a random
vector Vondrick et al. [2016a]. In contrast to these approaches, our network takes a single frame as
input and learns the distribution of future frames without any supervision.
3 Formulation
3.1 Problem Definition
In this section, we describe how to sample future frames from a current observation image. Here we
focus on next frame synthesis; given an RGB image I observed at time t, our goal is to model the
conditional distribution of possible frames observed at time t + 1.
Formally, let {(I^(1), J^(1)), ..., (I^(n), J^(n))} be the set of image pairs in the training set, where I^(i) and J^(i) are images observed at two consecutive time steps. Using this data, our task is to model the distribution p_θ(J|I) of all possible next frames J for a new, previously unseen test image I, and then to sample new images from this distribution. In practice, we choose not to directly predict the next frame, but instead to predict the difference image v = J − I, also known as the Eulerian motion, between the observed frame I and the future frame J; these two problems are equivalent. The task is then to learn the conditional distribution p_θ(v|I) from a set of training pairs {(I^(1), v^(1)), ..., (I^(n), v^(n))}.
3.2 A Toy Example
Consider a simple toy world that only consists of circles and squares. All circles move vertically,
while all squares move horizontally, as shown in the Figure 2(a). Although in practice we choose v to
be the difference image between consecutive frames, for this toy example we show v as a 2D motion
field for a more intuitive visualization. Consider the three models shown in Figure 2.
(1) Deterministic motion prediction In this structure, the model tries to find a deterministic
relationship between the input image and object motion (Figure 2(b)). To do this, it attempts to find
a function f that minimizes the reconstruction error Σ_i ‖v^(i) − f(I^(i))‖ on a training set. Thus,
it cannot capture the multiple possible motions that a shape can have, and the algorithm can only
learn a mean motion for each object. In the case of zero-mean, symmetric motion distributions, the
algorithm would produce an output frame with almost no motion.
(2) Motion prior A simple way to model the multiple possible motions of future frames is to use a
variational autoencoder [Kingma and Welling, 2014], as shown in Figure 2(c). The network consists
of an encoder network (gray) and a decoder network (yellow), and the latent representation z encodes
the intrinsic dimensionality of the motion fields. A shortcoming of this model is that it does not see
the input image during inference. Therefore, it will only learn a global motion field of both circles
and squares, without distinguishing the particular motion pattern for each class of objects.
(3) Probabilistic frame predictor In this work, we combine the deterministic motion prediction
structure with a motion prior, to model the uncertainty in a motion field and the correlation between
motion and image content. We extend the decoder in (2) to take two inputs, the intrinsic motion
representation z and an image I (see the yellow network in Figure 2(d), which corresponds to
p(v|I, z)). Therefore, instead of modeling a joint distribution of motion v, it will learn a conditional
distribution of motion given the input image I.
In this toy example, since squares and circles only move in one (although different) direction, we
would only need a scalar z ∈ R for encoding the velocity of the object. The model is then able to
infer the location and direction of motion conditioned on the shape that appears in the input image.
3.3 Conditional Variational Autoencoder
In this section, we will formally derive the training objective of our model, following the similar
derivations as those in Kingma and Welling [2014], Kingma et al. [2014], Yan et al. [2016]. Consider
the following generative process that samples a future frame conditioned on an observed image, I.
First, the algorithm samples the hidden variable z from a prior distribution p_z(z); in this work, we assume p_z(z) is a multivariate Gaussian distribution where each dimension is i.i.d. with zero-mean and unit-variance. Then, given a value of z, the algorithm samples the intensity difference image v
from the conditional distribution p_θ(v|I, z). The final image, J = I + v, is then returned as output.
In the training stage, the algorithm attempts to maximize the log-likelihood of the conditional marginal distribution Σ_i log p(v^(i)|I^(i)). Assuming I and z are independent, the marginal distribution is expanded as Σ_i log ∫_z p(v^(i)|I^(i), z) p_z(z) dz. Directly maximizing this marginal distribution is hard; thus we instead maximize its variational lower bound, as proposed by Kingma and Welling [2014]. Each term in the marginal distribution is lower-bounded by

    L(θ, φ, v^(i)|I^(i)) ≃ −D_KL(q_φ(z|v^(i), I^(i)) ∥ p_z(z)) + (1/L) Σ_{l=1}^{L} log p_θ(v^(i)|z^(i,l), I^(i)),        (1)

where D_KL is the KL-divergence, q_φ(z|v^(i), I^(i)) is the variational distribution that approximates the posterior p(z|v^(i), I^(i)), and z^(i,l) are samples from the variational distribution. For simplicity, we refer to the conditional data distribution, p_θ(·), as the generative model, and the variational distribution, q_φ(·), as the recognition model.
We assume Gaussian distributions for both the generative model and the recognition model†, where the mean and variance of the distributions are functions specified by neural networks, that is‡:

    p_θ(v^(i)|z^(i,l), I^(i)) = N(v^(i); f_mean(z^(i,l), I^(i)), σ²I),        (2)
    q_φ(z^(i,l)|v^(i), I^(i)) = N(z^(i,l); g_mean(v^(i), I^(i)), g_var(v^(i), I^(i))),        (3)

where N(·; a, b) is a Gaussian distribution with mean a and variance b. f_mean is a function that
predicts the mean of the generative model, defined by the generative network (the yellow network in
Figure 2(d)). gmean and gvar are functions that predict the mean and variance of the recognition model,
respectively, defined by the recognition network (the gray network in Figure 2(d)). Here we assume
that all dimensions of the generative model have the same variance σ², where σ is a hand-tuned hyperparameter. In the next section, we will describe the details of both network structures.
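To make the objective concrete, the sketch below spells out Eqs. (1)-(3) as a training loss. It is a minimal reading of the formulas, assuming a single Monte Carlo sample (L = 1) and a log-variance parameterization of g_var; the names and tensor shapes are ours, not the authors'.

```python
import torch

def reparameterize(z_mean, z_logvar):
    """Sample z = mean + std * eps, the reparameterization trick of
    [Kingma and Welling, 2014] mentioned in the footnote below."""
    eps = torch.randn_like(z_mean)
    return z_mean + torch.exp(0.5 * z_logvar) * eps

def negative_variational_bound(v, v_pred, z_mean, z_logvar, sigma=1.0):
    """Negative of the bound in Eq. (1) with a single Monte Carlo sample (L = 1).

    v:        ground-truth difference image, shape (B, 3, H, W)
    v_pred:   f_mean(z, I), the generative network's output
    z_mean, z_logvar: recognition-network outputs (g_mean and log of g_var)
    sigma:    hand-tuned standard deviation of the generative model, Eq. (2)
    """
    # Gaussian log-likelihood of Eq. (2) reduces to a scaled L2 term (up to a constant)
    recon = torch.sum((v - v_pred) ** 2) / (2.0 * sigma ** 2)
    # Closed-form KL between N(z_mean, diag(exp(z_logvar))) and the N(0, I) prior
    kl = -0.5 * torch.sum(1.0 + z_logvar - z_mean ** 2 - torch.exp(z_logvar))
    return recon + kl
```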
4 Method
In this section we present a trainable neural network structure, which defines the generative function
fmean and recognition functions gmean , and gvar . Once trained, these functions can be used in conjunction
with an input image to sample future frames. We first describe our newly proposed cross convolutional
layer, which naturally characterizes a layered motion representation [Wang and Adelson, 1993]. We
then explain our network structure and demonstrate how we integrate the cross convolutional layer
into the network for future frame synthesis.
4.1 Layered Motion Representations and Cross Convolutional Networks
Motion can often be decomposed in a layer-wise manner [Wang and Adelson, 1993]. Intuitively,
different semantic segments in an image should have different distributions over all possible motions;
for example, a building is often static, but a river flows.
To model layered motion, we propose a novel cross convolutional network (Figure 3). The network
first decomposes an input image pyramid into multiple feature maps through an image encoder
(Figure 3(c)). It then convolves these maps with different kernels (Figure 3(d)), and uses the outputs
to synthesize a difference image (Figure 3(e)). This network structure naturally fits a layered motion
representation, as each feature map characterizes an image layer (note this is different from a network
† A complicated distribution can be approximated by a function of a simple distribution, e.g. Gaussian, which is referred to as the reparameterization trick in [Kingma and Welling, 2014].
‡ Here the bold I denotes an identity matrix, whereas the normal-font I denotes the observed image.
Figure 3: Our network consists of five components: (a) a motion encoder, (b) a kernel decoder, (c) an
image encoder, (d) a cross convolution layer, and (e) a motion decoder. Our image encoder takes
images at four scales as input. For simplicity, we only show two scales in this figure.
layer) and the corresponding kernel characterizes the motion of that layer. In other words, we model
motions as convolutional kernels, which are applied to feature maps of images at multiple scales.
Unlike a traditional convolutional network, these kernels should not be identical for all inputs, as
different images typically have different motions (kernels). We therefore propose a cross convolutional
layer to tackle this problem. The cross convolutional layer does not learn the weights of the kernels
itself. Instead, it takes both kernel weights and feature maps as input and performs convolution during
a forward pass; for back propagation, it computes the gradients of both convolutional kernels and
feature maps. Concurrent works from Finn et al. [2016], Brabandere et al. [2016] also explored
similar ideas. While they applied the learned kernels on input images, we jointly learn feature maps
and kernels without direct supervision.
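As an illustration of the forward pass just described, a minimal PyTorch sketch follows. Pairing each of the 32 kernels with one of the 32 feature maps (a per-sample depthwise convolution) is our assumption about a detail the text leaves open; under autograd, gradients reach both the kernels and the feature maps, as required.

```python
import torch
import torch.nn.functional as F

def cross_convolution(feature_maps, kernels):
    """Forward pass of a cross convolutional layer.

    feature_maps: (B, C, H, W) maps from the image encoder.
    kernels:      (B, C, k, k) image-dependent kernels from the kernel decoder,
                  assumed here to pair one kernel with one feature map (depthwise).
    """
    B, C, k, _ = kernels.shape
    outputs = []
    for b in range(B):
        # kernels differ per sample, so a shared conv2d weight cannot be used;
        # groups=C applies kernel c to feature map c only
        w = kernels[b].view(C, 1, k, k)
        outputs.append(F.conv2d(feature_maps[b:b + 1], w, padding=k // 2, groups=C))
    return torch.cat(outputs, dim=0)
```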
4.2 Network Structure
As shown in Figure 3, our network consists of five components: (a) a motion encoder, (b) a kernel
decoder, (c) an image encoder, (d) a cross convolutional layer, and (e) a motion decoder. The
recognition functions gmean and gvar are defined by the motion encoder, whereas the generative
function fmean is defined by the remaining network.
During training, our variational motion encoder (Figure 3(a)) takes two adjacent frames in time
as input, both at a resolution of 128 × 128, and outputs a 3,200-dimensional mean vector and a 3,200-dimensional variance vector. The network samples the latent motion representation z using these mean and variance vectors. Next, the kernel decoder (Figure 3(b)) sends the 3,200 = 128 × 5 × 5 tensor into two additional convolutional layers, producing four sets of 32 motion kernels of size 5 × 5. Our image encoder (Figure 3(c)) operates on four different scaled versions of the input image I (256 × 256, 128 × 128, 64 × 64, and 32 × 32). The output sizes of the feature maps in these four channels are 32 × 64 × 64, 32 × 32 × 32, 32 × 16 × 16, and 32 × 8 × 8, respectively. This multi-scale
convolutional network allows us to model both global and local structures in the image, which may
have different motions. See appendix for more details.
The core of our network is a cross convolutional layer (Figure 3(d)) which, as discussed in Section 4.1,
applies the kernels learned by the kernel decoder to the feature maps learned by the image encoder,
respectively. The output size of the cross convolutional layer is identical to that of the image encoder.
Finally, our motion decoder (Figure 3(e)) uses the output of the cross convolutional layer to regress
the output difference image.
Training and testing details During training, the image encoder takes a single frame I^(i) as input, and the motion encoder takes both I^(i) and the difference image v^(i) = J^(i) − I^(i) as input, where J^(i) is the next frame. The network aims to regress the difference image that minimizes the ℓ2 loss.
During testing, the image encoder still sees a single image I; however, instead of using a motion
encoder, we directly sample motion vectors z^(j) from the prior distribution p_z(z). In practice, we use an empirical distribution of z over all training samples as an approximation to the prior, as we find it produces better synthesis results. The network synthesizes possible difference images v^(j) by taking
Figure 4: Results on the shapes dataset containing circles (C), squares (S), and triangles (T). Panels: (a) Frame 1, (b) Frame 2 (ground truth), (c) Frame 2 (reconstruction), (d) Frame 2 (sample 1), (e) Frame 2 (sample 2). For each "Frame 2" we show the RGB image along with an overlay of green and magenta versions of the 2 consecutive frames, to help illustrate motion. See text and our project page for more details and a better visualization.
Figure 5: Left: for each object (circles, squares, triangles, and the correlated circles-triangles pair), comparison between its ground-truth motion distribution over (Vx, Vy) and the distribution predicted by our method. Right: KL divergence D_KL(p_gt ∥ p_pred) between ground-truth distributions and distributions predicted by three different algorithms:

Method | C.   | S.    | T.    | C.-T.
Flow   | 6.77 | 7.07  | 6.07  | 8.42
AE     | 8.76 | 12.37 | 10.36 | 10.58
Ours   | 1.70 | 2.48  | 1.14  | 2.46
the sampled latent representation z^(j) and an RGB image I as input. We then generate a set of future frames {J^(j)} from these difference images: J^(j) = I + v^(j).
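Putting the test-time procedure together, a hypothetical driver might look as follows; the module names are placeholders for the components of Figure 3, and the empirical bank of latents stands in for the approximation to the prior described above.

```python
import numpy as np

def sample_future_frames(image_encoder, kernel_decoder, cross_conv, motion_decoder,
                         I, z_bank, n_samples=5):
    """Test-time sampling: draw z from an empirical bank of training-set latents
    (the approximation to the prior p_z), then synthesize J = I + v."""
    frames = []
    for _ in range(n_samples):
        z = z_bank[np.random.randint(len(z_bank))]         # empirical prior sample
        kernels = kernel_decoder(z)                        # Figure 3(b)
        features = image_encoder(I)                        # Figure 3(c)
        v = motion_decoder(cross_conv(features, kernels))  # Figure 3(d)-(e)
        frames.append(I + v)                               # J^(j) = I + v^(j)
    return frames
```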
5 Evaluations
We now present a series of experiments to evaluate our method. All experimental results, along with
additional visualizations, are also available on our project page§.
Movement of 2D shapes We first evaluate our method using a dataset of synthetic 2D shapes. This
dataset serves to benchmark our model on objects with simple, yet nontrivial, motion distributions. It
contains three types of objects: circles, squares, and triangles. Circles always move vertically, squares
horizontally, and triangles diagonally. The motion of circles and squares are independent, while the
motion of circles and triangles are correlated. The shapes can be heavily occluded, and their sizes,
positions, and colors are chosen randomly. There are 20,000 pairs for training, and 500 for testing.
Results are shown in Figure 4. Figure 4(a) and (b) show a sample of consecutive frames in the dataset,
and Figure 4(c) shows the reconstruction of the second frame after encoding and decoding with the
ground truth images. Figure 4(d) and (e) show samples of the second frame; in these results the
network only takes the first image as input, and the compact motion representation, z, is randomly
sampled. Note that the network is able to capture the distinctive motion pattern for each shape,
including the strong correlation of triangle and circle motion.
To quantitatively evaluate our algorithm, we compare the displacement distributions of circles,
squares, and triangles in the sampled images with their ground truth distributions. We sampled 50,000
images and used the optical flow package by Liu [2009] to calculate the movement of each object. We
compare our algorithm with a simple baseline that copies the optical flow field from the training set
("Flow" in Figure 5 right); for each test image, we find its 10-nearest neighbors in the training set, and
randomly transfer one of the corresponding optical flow fields. To illustrate the advantage of using
a variational autoencoder over a standard autoencoder, we also modify our network by removing
the KL-divergence loss and sampling layer ("AE" in Figure 5 right). Figure 5 shows our predicted
distribution is very close to the ground-truth distribution. It also shows that a variational autoencoder
helps to capture the true distribution of future frames.
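A sketch of this evaluation is below; the bin count and displacement range are illustrative choices, not the paper's.

```python
import numpy as np

def empirical_kl(gt_flows, pred_flows, bins=20, lim=5.0, eps=1e-12):
    """D_KL(p_gt || p_pred) between binned 2D displacement distributions.

    gt_flows, pred_flows: (N, 2) arrays of per-object (Vx, Vy) displacements,
    e.g. estimated with an optical flow package.
    """
    rng = [[-lim, lim], [-lim, lim]]
    p, _, _ = np.histogram2d(gt_flows[:, 0], gt_flows[:, 1], bins=bins, range=rng)
    q, _, _ = np.histogram2d(pred_flows[:, 0], pred_flows[:, 1], bins=bins, range=rng)
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))
```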
§ Our project page: http://visualdynamics.csail.mit.edu
Figure 6: Left: sampling results on the sprites dataset, with panels (a) Frame 1, (b) Frame 2 (ground truth), (c) Frame 2 (sample 1), (d) Frame 2 (sample 2); motion is illustrated using the overlay described in Figure 4. Right: probability that a synthesized result is labeled as real by humans in Mechanical Turk behavioral experiments:

Method | 32×32 labeled real (%) | 64×64 labeled real (%)
Flow   | 29.7 | 21.0
Ours   | 41.2 | 35.7
Figure 7: Left: sampling results on the exercise dataset, with panels (a) Frame 1, (b) Frame 2 (ground truth), (c) Frame 2 (sample 1), (d) Frame 2 (sample 2); motion is illustrated using the overlay described in Figure 4. Right: probability that a synthesized result is labeled as real by humans in Mechanical Turk behavioral experiments:

Method | 32×32 labeled real (%) | 64×64 labeled real (%)
Flow   | 31.3 | 25.5
Ours   | 36.7 | 31.3
Movement of video game sprites We evaluate our framework on a video game sprites dataset¶,
also used by Reed et al. [2015]. The dataset consists of 672 unique characters, and for each character
there are 5 animations (spellcast, thrust, walk, slash, shoot) from 4 different viewpoints. Each
animation ranges from 6 to 13 frames. We collect 102,364 pairs of neighboring frames for training,
and 3,140 pairs for testing. The same character does not appear in both the training and test sets.
Synthesized sample frames are shown in Figure 6. The results show that from a single input frame,
our method can capture various possible motions that are consistent with those in the training set.
For a quantitative evaluation, we conduct behavioral experiments on Amazon Mechanical Turk. We
randomly select 200 images, sample possible next frames using our algorithm, and show them to
multiple human subjects as an animation side by side with the ground truth animation. We then ask
the subject to choose which animation is real (not synthesized). An ideal algorithm should achieve a
success rate of 50%. In our experiments, we present the animation in both the original resolution
(64 × 64) and a lower resolution (32 × 32). We only evaluate on subjects that have a past approval rating of > 95% and also pass our qualification tests. Figure 6 shows that our algorithm significantly outperforms a baseline algorithm that warps an input image by transferring a randomly selected flow field from the training set. Subjects are more easily fooled by the 32 × 32 pixel images, as it is harder
to hallucinate realistic details in high-resolution images.
Movement in real videos captured in the wild To demonstrate that our algorithm can also handle
real videos, we collect 20 workout videos from YouTube, each about 30 to 60 minutes long. We first
apply motion stabilization to the training data as a pre-processing step to remove camera motion.
We then extract 56,838 pairs of frames for training and 6,243 pairs for testing. The training and
testing pairs come from different video sequences. Figure 7 shows that our framework works well in
predicting the movement of the legs and torso. Additionally, Mechanical Turk behavioral experiments
show that the synthesized frames are visually realistic.
Zero-shot visual analogy-making Recently, Reed et al. [2015] studied the problem of inferring the
relationship between a pair of reference images and synthesizing a new analogy-image by applying
the inferred relationship to a test image. Our network is also able to perform this task, without
even requiring supervision. Specifically, we extract the motion vector, z, from two reference frames
using our motion encoder (Figure 3(a)). We then use the extracted motion vector z to synthesize an
analogy-image given a new test image.
Our network successfully transfers the motion in reference pairs to a test image. For example, in
Figure 8(a), it learns that the character leans toward to the right, and in Figure 8(b) it learns that
the girl spreads her feet apart. A quantitative evaluation is also shown in Figure 9. Even without
¶ Liberated pixel cup: http://lpc.opengameart.org
Figure 8: Visual analogy-making (predicted frames are marked in red).
Figure 9: Mean squared pixel error on test analogies, by animation. The first three models (Add, Dis, and Dis+Cls) are from Reed et al. [2015].

Model   | spellcast | thrust | walk | slash | shoot | average
Add     | 41.0      | 53.8   | 55.7 | 52.1  | 77.6  | 56.0
Dis     | 40.8      | 55.8   | 52.6 | 53.5  | 79.8  | 56.5
Dis+Cls | 13.3      | 24.6   | 17.2 | 18.9  | 40.8  | 23.0
Ours    | 9.5       | 11.5   | 11.1 | 28.2  | 19.0  | 15.9
Figure 10: Learned feature maps on the shapes dataset (left), the sprites dataset (top right), and the exercise dataset (bottom right). Each panel shows input images next to selected feature maps at scales 1 and 2.
supervision, our method outperforms the algorithm by Reed et al. [2015], which requires visual
analogy labels during training.
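In code, the zero-shot analogy procedure amounts to a few lines; the two module names are placeholders for the trained components of Figure 3.

```python
def make_analogy(motion_encoder, frame_predictor, ref_first, ref_second, query):
    """Zero-shot analogy-making as described above.

    motion_encoder:  maps (I, v) to (z_mean, z_logvar), Figure 3(a)
    frame_predictor: maps (I, z) to a difference image v, Figure 3(b)-(e)
    """
    v_ref = ref_second - ref_first                # Eulerian motion of the reference pair
    z_mean, _ = motion_encoder(ref_first, v_ref)  # use the mean; no sampling needed
    v_query = frame_predictor(query, z_mean)
    return query + v_query                        # the synthesized analogy frame
```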
Visualizing feature maps We visualize the learned feature maps (see Figure 3(b)) in Figure 10.
Even without supervision, our network learns to detect objects or contours in the image. For example,
we see that the network automatically learns object detectors and edge detectors on the shape dataset.
It also learns a hair detector and a body detector on the sprites and exercise datasets, respectively.
Visualizing latent representations By visualizing the latent representations of z we have found
that each dimension corresponds to a certain type of motion. For instance, in the exercise dataset,
varying one dimension of z causes the girl to stand-up and another causes her to move a leg. Please
refer to our project page for this visualization.
Dimension of latent representation Although our latent motion representation, z, has 3,200
dimensions, its intrinsic dimensionality is much smaller. First, z_mean is very sparse. The non-zero elements of z_mean for each dataset are 299 in shapes, 54 in sprites, and 978 in exercise. Second, the independent components of z are even fewer. We run principal component analysis (PCA) on the z_mean vectors obtained from a set of training images, and find that for each dataset, a small fraction of components cover at least 95% of the variance in z_mean (5 in shapes, 2 in sprites, and 27 in exercise). This indicates
that our network has learned a compact representation of motion in an unsupervised fashion, and
encodes high-level knowledge using a small number of bits, rather than simply memorizing training
samples. The KL-divergence criterion in Eq. 1 forces the latent representation, z, to carry minimal
information, as discussed by Hinton and Van Camp [1993] and concurrently by Higgins et al. [2016].
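The component count reported here can be reproduced with a short SVD-based check; the function below is a generic sketch, not the authors' script.

```python
import numpy as np

def components_for_variance(z_means, threshold=0.95):
    """Number of principal components covering `threshold` of the variance of
    the z_mean vectors (rows of z_means, one per training image)."""
    centered = z_means - z_means.mean(axis=0)
    # singular values of the centered data give per-component standard deviations
    s = np.linalg.svd(centered, compute_uv=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(explained, threshold) + 1)
```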
6 Conclusion
In this paper, we have proposed a novel framework that samples future frames from a single input image. Our method incorporates a variational autoencoder for learning compact motion representations,
and a novel cross convolutional layer for regressing Eulerian motion maps. We have demonstrated
that our framework works well on both synthetic, and real-life videos.
More generally, results suggest that our probabilistic visual dynamics model may be useful for
additional applications, such as inferring objects? higher-order relationships by examining correlations
in their motion distributions. Furthermore, this learned representation could be potentially used as a
sophisticated motion prior in other computer vision and computational photography applications.
Acknowledgement The authors thank Yining Wang for helpful discussions. This work is supported
by NSF Robust Intelligence 1212849, NSF Big Data 1447476, ONR MURI 6923196, Adobe, and
Shell Research. The authors would also like to thank Nvidia for GPU donations.
References
Aseem Agarwala, Ke Colin Zheng, Chris Pal, Maneesh Agrawala, Michael Cohen, Brian Curless, David Salesin,
and Richard Szeliski. Panoramic video textures. ACM TOG, 24(3):821–827, 2005. 2
Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. NIPS, 2016. 5
Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using an laplacian pyramid
of adversarial networks. In NIPS, 2015. 2
Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video
prediction. In NIPS, 2016. 3, 5
David J Fleet, Michael J Black, Yaser Yacoob, and Allan D Jepson. Design and use of linear models for image
motion analysis. IJCV, 36(3):171–193, 2000. 2
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. 2
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent
neural network for image generation. In ICML, 2015. 2
Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed,
and Alexander Lerchner. Early visual concept learning with unsupervised deep learning. arXiv preprint
arXiv:1606.05579, 2016. 8
Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description
length of the weights. In COLT, 1993. 8
Neel Joshi, Sisil Mehta, Steven Drucker, Eric Stollnitz, Hugues Hoppe, Matt Uyttendaele, and Michael Cohen.
Cliplets: juxtaposing still and dynamic imagery. In UIST, 2012. 2
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. 1, 2, 3, 4
Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning
with deep generative models. In NIPS, 2014. 4
Zicheng Liao, Neel Joshi, and Hugues Hoppe. Automated video looping with progressive dynamism. ACM
TOG, 32(4):77, 2013. 2
Ce Liu. Beyond pixels: exploring new representations and applications for motion analysis. PhD thesis,
Massachusetts Institute of Technology, 2009. 6
Ce Liu, Jenny Yuen, and Antonio Torralba. SIFT flow: Dense correspondence across scenes and its applications.
IEEE TPAMI, 33(5):978–994, 2011. 2
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square
error. In ICLR, 2016. 3
Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video
prediction using deep networks in atari games. In NIPS, 2015. 3
Silvia L Pintea, Jan C van Gemert, and Arnold WM Smeulders. Dejavu: Motion prediction in static images. In
ECCV, 2014. 2
Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of complex wavelet
coefficients. IJCV, 40(1):49–70, 2000. 2
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional
generative adversarial networks. In ICLR, 2016. 2
Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In NIPS, 2015. 7, 8
Stefan Roth and Michael J Black. On the spatial statistics of optical flow. In ICCV, 2005. 2
Arno Schödl, Richard Szeliski, David H Salesin, and Irfan Essa. Video textures. ACM TOG, 7(5):489–498,
2000. 2
Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations
using LSTMs. In ICML, 2015. 3
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. NIPS, 2016a.
3
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating visual representations from unlabeled
video. In CVPR, 2016b. 2
Jacob Walker, Abhinav Gupta, and Martial Hebert. Patch to the future: unsupervised visual prediction. In CVPR,
2014. 2
Jacob Walker, Abhinav Gupta, and Martial Hebert. Dense optical flow prediction from a static image. In ICCV,
2015. 2
Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from static
images using variational autoencoders. In ECCV, 2016. 2
John YA Wang and Edward H Adelson. Layered representation for motion analysis. In CVPR, 1993. 4
Yair Weiss and Edward H Adelson. Slow and Smooth: a Bayesian theory for the combination of local motion
signals in human vision. Center for Biological and Computational Learning Paper, 158(1624):1–42, 1998. 2
Yonatan Wexler, Eli Shechtman, and Michal Irani. Space-time video completion. In CVPR, 2004. 2
Jiajun Wu, Ilker Yildirim, Joseph J Lim, William T Freeman, and Joshua B Tenenbaum. Galileo: Perceiving
physical object properties by integrating a physics engine with deep learning. In NIPS, 2015. 2
Jianwen Xie, Song-Chun Zhu, and Ying Nian Wu. Synthesizing dynamic textures and sounds by spatial-temporal
generative convnet. arXiv preprint arXiv:1606.00972, 2016a. 2
Junyuan Xie, Ross Girshick, and Ali Farhadi. Deep3d: Fully automatic 2d-to-3d video conversion with deep
convolutional neural networks. arXiv preprint arXiv:1604.03650, 2016b. 2
Tianfan Xue, Michael Rubinstein, Neal Wadhwa, Anat Levin, Fredo Durand, and William T Freeman. Refraction
wiggles for measuring fluid depth and velocity from video. In ECCV, 2014. 2
Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation
from visual attributes. In ECCV, 2016. 1, 2, 4
Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A Efros. View synthesis by appearance
flow. ECCV, 2016. 2
6,139 | 6,553 | Adaptive Averaging in Accelerated Descent Dynamics
Walid Krichene *
UC Berkeley
Alexandre M. Bayen
UC Berkeley
Peter L. Bartlett
UC Berkeley and QUT
[email protected]
[email protected]
[email protected]
Abstract
We study accelerated descent dynamics for constrained convex optimization. This
dynamics can be described naturally as a coupling of a dual variable accumulating
gradients at a given rate ?(t), and a primal variable obtained as the weighted average
of the mirrored dual trajectory, with weights w(t). Using a Lyapunov argument,
we give sufficient conditions on ? and w to achieve a desired convergence rate. As
an example, we show that the replicator dynamics (an example of mirror descent
on the simplex) can be accelerated using a simple averaging scheme.
We then propose an adaptive averaging heuristic which adaptively computes the
weights to speed up the decrease of the Lyapunov function. We provide guarantees
on adaptive averaging in continuous-time, prove that it preserves the quadratic
convergence rate of accelerated first-order methods in discrete-time, and give
numerical experiments to compare it with existing heuristics, such as adaptive
restarting. The experiments indicate that adaptive averaging performs at least as
well as adaptive restarting, with significant improvements in some cases.
1 Introduction
We study the problem of minimizing a convex function f over a feasible set X, a closed convex subset of E = R^n. We will assume that f is differentiable, that its gradient ∇f is a Lipschitz function with Lipschitz constant L, and that the set of minimizers S = arg min_{x∈X} f(x) is non-empty. We will
focus on the study of continuous-time, first-order dynamics for optimization. First-order methods
have seen a resurgence of interest due to the significant increase in both size and dimensionality of the
data sets typically encountered in machine learning and other applications, which makes higher-order
methods computationally intractable in most cases. Continuous-time dynamics for optimization
have been studied for a long time, e.g. [6, 9, 5], and more recently [20, 2, 1, 3, 11, 23], in which a
connection is made between Nesterov's accelerated methods [14, 15] and a family of continuous-time
ODEs. Many optimization algorithms can be interpreted as a discretization of a continuous-time
process, and studying the continuous-time dynamics is useful for many reasons: The analysis is
often simpler in continuous-time, it can help guide the design and analysis of new algorithms, and
it provides intuition and insight into the discrete process. For example, Su et al. show in [20] that
Nesterov's original method [14] is a discretization of a second-order ODE, and use this interpretation to propose a restarting heuristic which empirically speeds up the convergence. In [11], we generalize this approach to the proximal version of Nesterov's method [15] which applies to constrained convex
problems, and show that the continuous-time ODE can be interpreted as coupled dynamics of a dual
variable Z(t) which evolves in the dual space E*, and a primal variable X(t) which is obtained as the weighted average of a non-linear transformation of the dual trajectory. More precisely,

    Ż(t) = −(t/r) ∇f(X(t)),
    X(t) = [∫_0^t τ^(r−1) ∇ψ*(Z(τ)) dτ] / [∫_0^t τ^(r−1) dτ],
    X(0) = ∇ψ*(Z(0)) = x_0,
* Walid Krichene is currently affiliated with Google. walidk@google.com
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
where r ≥ 2 is a fixed parameter, the initial condition x_0 is a point in the feasible set X, and ∇ψ* is a Lipschitz function that maps from the dual space E* to the feasible set X, which we refer to as the mirror map (such a function can be constructed using standard results from convex analysis, by taking the convex conjugate of a strongly convex function ψ with domain X; see the supplementary material for a brief review of the definition and basic properties of mirror maps). Using a Lyapunov argument,
we show that the solution trajectories of this ODE exhibit a quadratic convergence rate, i.e. if f* is the minimum of f over the feasible set, then f(X(t)) − f* ≤ C/t² for a constant C which depends on
the initial conditions. This formalized an interesting connection between acceleration and averaging,
which had been observed in [8] in the special case of unconstrained quadratic minimization.
A natural question that arises is whether different averaging schemes can be used to achieve the same
rate, or perhaps faster rates. In this article, we provide a positive answer. We study a broad family of
Accelerated Mirror Descent (AMD) dynamics, given by
    Ż(t) = −η(t) ∇f(X(t)),
    X(t) = [X(t_0)W(t_0) + ∫_{t_0}^t w(τ) ∇ψ*(Z(τ)) dτ] / W(t),   with W(t) = ∫_0^t w(τ) dτ,        (AMD_{w,η})
    X(t_0) = ∇ψ*(Z(t_0)) = x_0,        (1)
parameterized by two positive, continuous weight functions w and η, where w is used in the averaging and η determines the rate at which Z accumulates gradients. This is illustrated in Figure 1. In our
formulation we choose to initialize the ODE at t0 > 0 instead of 0 (to guarantee existence and
uniqueness of a solution, as discussed in Section 2). We give a unified study of this ODE using an
appropriate Lyapunov function, given by
    L_r(X, Z, t) = r(t)(f(X) − f*) + D_{ψ*}(Z, z*),        (2)

where D_{ψ*} is the Bregman divergence associated with ψ* (a non-negative function defined on E* × E*), and r(t) is a desired convergence rate (a non-negative function defined on R_+). By construction, L_r is a non-negative function on X × E* × R_+. If t ↦ L_r(X(t), Z(t), t) is a non-increasing function for all solution trajectories (X(t), Z(t)), then L_r is said to be a Lyapunov function for the ODE, in reference to Aleksandr Mikhailovich Lyapunov [12]. We give in Theorem 2 a sufficient condition on η, w and r for L_r to be a Lyapunov function for AMD_{w,η}, and show that under these conditions, f(X(t)) converges to f* at the rate 1/r(t).
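For concreteness, evaluating this Lyapunov function along a trajectory takes only a few lines; the sketch below is ours, with placeholder callables for the problem-specific pieces.

```python
import numpy as np

def bregman(psi_star, grad_psi_star, a, b):
    """D_psi*(a, b) = psi*(a) - psi*(b) - <grad psi*(b), a - b>."""
    return psi_star(a) - psi_star(b) - float(np.dot(grad_psi_star(b), a - b))

def lyapunov_value(f, f_star, r, psi_star, grad_psi_star, X, Z, z_star, t):
    """Evaluate L_r(X, Z, t) of Eq. (2) along a trajectory; checking that it
    never increases certifies the 1/r(t) convergence rate."""
    return r(t) * (f(X) - f_star) + bregman(psi_star, grad_psi_star, Z, z_star)
```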
Figure 1: Illustration of AMD_{w,η}. The dual variable Z evolves in the dual space E*, and accumulates negative gradients at a rate η(t); the primal variable X(t) (green solid line) is obtained by averaging the mirrored trajectory {∇ψ*(Z(τ)), τ ∈ [t_0, t]} (green dashed line), with weights w(τ).
In Section 3, we give an equivalent formulation of AMD_{w,η} written purely in the primal space. We
give several examples of these dynamics for simple constraint sets. In particular, when the feasible
set is the probability simplex, we derive an accelerated version of the replicator dynamics, an ODE
that plays an important role in evolutionary game theory [22] and viability theory [4].
Many heuristics have been developed to empirically speed up the convergence of accelerated methods.
Most of these heuristics consist in restarting the ODE (or the algorithm in discrete time) whenever
a simple condition is met. For example, a gradient restart heuristic is proposed in [17], in which
the algorithm is restarted whenever the trajectory forms an acute angle with the gradient (which
intuitively indicates that the trajectory is not making progress), and a speed restarting heuristic
is proposed in [20], in which the ODE is restarted whenever the speed ‖Ẋ(t)‖ decreases (which
intuitively indicates that progress is slowing). These heuristics are known to empirically improve
the speed of convergence, but provide few guarantees. For example, the gradient restart in [17]
is only studied for unconstrained quadratic problems, and the speed restart in [20] is only studied
for unconstrained strongly convex problems. In particular, it is not guaranteed (to our knowledge)
that these heuristics preserve the original convergence rate of the non-restarted method, when the
objective function is not strongly convex. In Section 4, we propose a new heuristic that provides such
guarantees, and that is based on a simple idea for adaptively computing the weights w(t) along the
solution trajectories. The heuristic simply decreases the time derivative of the Lyapunov function
Lr (X(t), Z(t), t) whenever possible. Thus it preserves the 1/r(t) convergence rate. Other adaptive
methods have been applied to convex optimization, such as Adagrad [7] and Adam [10], which adapt
the learning rate in first-order methods, by maintaining moment estimates of the observed gradients.
They are particularly well suited to problems with sparse gradients. While these methods are similar
in spirit to adaptive averaging, they are not designed for accelerated methods. In Section 5, we give
numerical experiments in which we compare the performance of adaptive averaging and restarting.
The experiments indicate that adaptive averaging compares favorably in all of the examples, and
gives a significant improvement in some cases. We conclude with a brief discussion in Section 6.
2  Accelerated mirror descent with generalized averaging
We start by giving an equivalent form of AMD_{w,η}, which we use to briefly discuss existence
and uniqueness of a solution. Writing the second equation as X(t)W(t) − X(t₀)W(t₀) =
∫_{t₀}^{t} w(τ)∇ψ*(Z(τ))dτ, then taking the time-derivative, we have

$$
\dot X(t)\,W(t) + X(t)\,w(t) = w(t)\,\nabla\psi^*(Z(t)).
$$
Thus the ODE is equivalent to
$$
\mathrm{AMD}'_{w,\eta}\qquad
\begin{cases}
\dot Z(t) = -\eta(t)\,\nabla f(X(t))\\[4pt]
\dot X(t) = \dfrac{w(t)}{W(t)}\,\big(\nabla\psi^*(Z(t)) - X(t)\big)\\[4pt]
X(t_0) = \nabla\psi^*(Z(t_0)) = x_0 .
\end{cases}
$$
The following theorem guarantees existence and uniqueness of the solution.
Theorem 1. Suppose that W(t₀) > 0. Then AMD_{w,η} has a unique maximal (i.e. defined on a
maximal interval) solution (X(t), Z(t)) that is C¹([t₀, +∞)). Furthermore, for all t ≥ t₀, X(t)
belongs to the feasible set X.
Proof. Recall that, by assumption, ∇f and ∇ψ* are both Lipschitz, and w, η are continuous. Furthermore, W(t) is non-decreasing and continuous, as the integral of a non-negative function, thus
w(t)/W(t) ≤ w(t)/W(t₀). This guarantees that on any finite interval [t₀, T), the functions η(t) and
w(t)/W(t) are bounded. Therefore, −η(t)∇f(X) and (w(t)/W(t))(∇ψ*(Z) − X) are Lipschitz functions
of (X, Z), uniformly in t ∈ [t₀, T). By the Cauchy–Lipschitz theorem (e.g. Theorem 2.5 in [21]),
there exists a unique C¹ solution defined on [t₀, T). Since T is arbitrary, this defines a unique solution
on all of [t₀, +∞). Indeed, any two solutions defined on [t₀, T₁) and [t₀, T₂) with T₂ > T₁ coincide
on [t₀, T₁). Finally, feasibility of the solution follows from the fact that X is convex and X(t) is the
weighted average of points in X, specifically, x₀ and the set {∇ψ*(Z(τ)), τ ∈ [t₀, t]}.
Note that in general, it is important to initialize the ODE at t0 and not 0, since W (0) = 0 and
w(t)/W (t) can diverge at 0, in which case one cannot apply the Cauchy-Lipschitz theorem. It is
possible however to prove existence and uniqueness with t0 = 0 for some choices of w, by taking
a sequence of Lipschitz ODEs that approximate the original one, as is done in [20], but this is a
technicality and does not matter for practical purposes.
We now move to our main result for this section. Suppose that r is an increasing, positive, differentiable
function on [t₀, +∞), and consider the candidate Lyapunov function L_r defined in (2), where the
Bregman divergence term is given by

$$
D_{\psi^*}(z, y) := \psi^*(z) - \psi^*(y) - \langle \nabla\psi^*(y), z - y\rangle,
$$

and z* is a point in the dual space such that ∇ψ*(z*) = x* belongs to the set of minimizers S. Let
(X(t), Z(t)) be the unique maximal solution trajectory of AMD_{w,η}.
Taking the derivative of t ↦ L_r(X(t), Z(t), t) = r(t)(f(X(t)) − f*) + D_{ψ*}(Z(t), z*), we have

$$
\begin{aligned}
\frac{d}{dt} L_r(X(t), Z(t), t)
&= r'(t)\,(f(X(t)) - f^*) + r(t)\,\langle \nabla f(X(t)), \dot X(t)\rangle
 + \big\langle \dot Z(t),\, \nabla\psi^*(Z(t)) - \nabla\psi^*(z^*)\big\rangle \\
&= r'(t)\,(f(X(t)) - f^*) + r(t)\,\langle \nabla f(X(t)), \dot X(t)\rangle
 + \Big\langle -\eta(t)\nabla f(X(t)),\, X(t) + \tfrac{W(t)}{w(t)}\dot X(t) - x^*\Big\rangle \\
&\le (f(X(t)) - f^*)\,\big(r'(t) - \eta(t)\big)
 + \langle \nabla f(X(t)), \dot X(t)\rangle\Big(r(t) - \tfrac{\eta(t)W(t)}{w(t)}\Big),
\end{aligned}
\tag{3}
$$

where we used the expressions for Ż and ∇ψ*(Z) from AMD′_{w,η} in the second equality, and
convexity of f in the last inequality. Equipped with this bound, it becomes straightforward to give
sufficient conditions for L_r to be a Lyapunov function.
Theorem 2. Suppose that for all t ∈ [t₀, +∞),

1. η(t) ≥ r′(t), and
2. ⟨∇f(X(t)), Ẋ(t)⟩ (r(t) − η(t)W(t)/w(t)) ≤ 0.

Then L_r is a Lyapunov function for AMD_{w,η}, and for all t ≥ t₀, f(X(t)) − f* ≤ L_r(X(t₀), Z(t₀), t₀)/r(t).
Proof. The two conditions, combined with inequality (3), imply that (d/dt) L_r(X(t), Z(t), t) ≤ 0, thus
L_r is a Lyapunov function. Finally, since D_{ψ*} is non-negative, and L_r is decreasing, we have

$$
f(X(t)) - f^* \le \frac{L_r(X(t), Z(t), t)}{r(t)} \le \frac{L_r(X(t_0), Z(t_0), t_0)}{r(t)},
$$

which proves the claim.
Note that the second condition depends on the solution trajectory X(t), and may be hard to check a
priori. However, we give one special case in which the condition trivially holds.
Corollary 1. Suppose that for all t ∈ [t₀, +∞), η(t) = w(t)r(t)/W(t), and w(t)/W(t) ≥ r′(t)/r(t). Then
L_r is a Lyapunov function for AMD_{w,η}, and for all t ≥ t₀, f(X(t)) − f* ≤ L_r(X(t₀), Z(t₀), t₀)/r(t).
Next, we describe a method to construct weight functions w, η that satisfy the conditions of Corollary 1, given a desired rate r. Of course, it suffices to construct w that satisfies w(t)/W(t) ≥ r′(t)/r(t), then
to set η(t) = w(t)r(t)/W(t). We can reparameterize the weight function by writing w(t)/W(t) = a(t). Then
integrating from t₀ to t, we have W(t)/W(t₀) = e^{∫_{t₀}^{t} a(τ)dτ}, and

$$
w(t) = w(t_0)\,\frac{a(t)}{a(t_0)}\,e^{\int_{t_0}^{t} a(\tau)\,d\tau}.
\tag{4}
$$

Therefore the conditions of the corollary are satisfied whenever w(t) is of the form (4) and a : ℝ₊ →
ℝ₊ is a continuous, positive function with a(t) ≥ r′(t)/r(t). Note that the expression of w is defined up
to the constant w(t₀), which reflects the fact that the condition of the corollary is scale-invariant (if
the condition holds for a function w, then it holds for αw for all α > 0).
Example 1. Let r(t) = t². Then r′(t)/r(t) = 2/t, and we can take a(t) = λ/t with λ ≥ 2. Then

$$
w(t) = \frac{a(t)}{a(t_0)}\, e^{\int_{t_0}^t a(\tau)\,d\tau} = \frac{\lambda/t}{\lambda/t_0}\, e^{\lambda \ln(t/t_0)} = (t/t_0)^{\lambda - 1},
$$

and η(t) = w(t)r(t)/W(t) = λt, and we recover the weighting scheme used in [11].

Example 2. More generally, if r(t) = t^p, p ≥ 1, then r′(t)/r(t) = p/t, and we can take a(t) = λ/t
with λ ≥ p. Then w(t) = (t/t₀)^{λ−1}, and η(t) = w(t)r(t)/W(t) = λt^{p−1}.
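To make the construction concrete, the following is a minimal numerical sketch (our own code, not from the paper; the function names and test values are illustrative) that instantiates r, w, W, and η for r(t) = t^p with a(t) = λ/t, and checks the conditions of Corollary 1 and Theorem 2 on a grid:

```python
import numpy as np

def make_rate_and_weights(p=2.0, lam=3.0, t0=1.0):
    """Instantiate r, w, W, eta for r(t) = t**p and a(t) = lam/t (lam >= p)."""
    assert lam >= p, "need a(t) = lam/t >= r'(t)/r(t) = p/t"
    r = lambda t: t**p
    w = lambda t: (t / t0)**(lam - 1.0)              # from (4), up to scaling
    W = lambda t: t**lam / (lam * t0**(lam - 1.0))   # closed-form integral of w
    eta = lambda t: w(t) * r(t) / W(t)               # equals lam * t**(p-1)
    return r, w, W, eta

r, w, W, eta = make_rate_and_weights(p=2.0, lam=3.0)
ts = np.linspace(1.0, 10.0, 200)
assert np.allclose(w(ts) / W(ts), 3.0 / ts)          # a(t) = lam/t >= r'(t)/r(t) = 2/t
assert np.all(eta(ts) >= 2.0 * ts - 1e-9)            # condition (1): eta(t) >= r'(t)
```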
We also exhibit in the following a second energy function that is guaranteed to decrease under the
same conditions. This energy function, unlike the Lyapunov function L_r, does not guarantee a
specific convergence rate. However, it captures a natural measure of energy in the system. To define
this energy function, we will use the following characterization of the inverse mirror map: by duality
of the subdifferentials (e.g. Theorem 23.5 in [18]), we have for a pair of convex conjugate functions ψ
and ψ* that x ∈ ∂ψ*(x*) if and only if x* ∈ ∂ψ(x). To simplify the discussion, we will assume that
ψ is also differentiable, so that (∇ψ*)^{−1} = ∇ψ (this assumption can be relaxed). In what follows,
we will denote X̃ = ∇ψ(X) and Z̃ = ∇ψ*(Z).

Theorem 3. Let (X(t), Z(t)) be the unique maximal solution of AMD_{w,η}, and let X̃ = ∇ψ(X).
Consider the energy function

$$
E_r(t) = f(X(t)) + \frac{1}{r(t)}\,D_{\psi^*}(Z(t), \tilde X(t)).
\tag{5}
$$

Then if w, η satisfy condition (2) of Theorem 2, E_r is a decreasing function of time.
Proof. To make the notation more concise, we omit the explicit dependence on time in this proof.
We have D_{ψ*}(Z, X̃) = ψ*(Z) − ψ*(X̃) − ⟨X, Z − X̃⟩, since ∇ψ*(X̃) = X. Taking the time-derivative, we have

$$
\frac{d}{dt} D_{\psi^*}(Z, \tilde X)
= \langle \nabla\psi^*(Z), \dot Z\rangle - \langle \nabla\psi^*(\tilde X), \dot{\tilde X}\rangle
- \langle \dot X, Z - \tilde X\rangle - \langle X, \dot Z - \dot{\tilde X}\rangle
= \langle \nabla\psi^*(Z) - X, \dot Z\rangle - \langle \dot X, Z - \tilde X\rangle .
$$

Using the second equation in AMD′_{w,η}, we have ∇ψ*(Z) − X = (1/a)Ẋ, and ⟨Ẋ, Z − X̃⟩ =
a⟨∇ψ*(Z) − ∇ψ*(X̃), Z − X̃⟩ ≥ 0 by monotonicity of ∇ψ*. Combining, we have

$$
\frac{d}{dt} D_{\psi^*}(Z, \tilde X) \le \frac{1}{a}\langle \dot X, \dot Z\rangle = -\frac{\eta}{a}\,\langle \dot X, \nabla f(X)\rangle,
$$

and we can finally bound the derivative of E_r:

$$
\frac{d}{dt} E_r(t)
= \langle \nabla f(X), \dot X\rangle + \frac{1}{r}\frac{d}{dt} D_{\psi^*}(Z, \tilde X) - \frac{r'}{r^2}\,D_{\psi^*}(Z, \tilde X)
\le \langle \nabla f(X), \dot X\rangle\Big(1 - \frac{\eta}{a\,r}\Big).
$$

Therefore condition (2) of Theorem 2 implies that (d/dt) E_r(t) ≤ 0.
This energy function can be interpreted, loosely speaking, as the sum of a potential energy given by
f(X), and a kinetic energy given by (1/r(t)) D_{ψ*}(Z, X̃): indeed, when the problem is unconstrained,
then one can take ψ*(z) = ½‖z‖², in which case ∇ψ* = ∇ψ = I, the identity, and D_{ψ*}(Z, X̃) =
½‖Z − X̃‖² = ½‖Ẋ/a‖², a quantity proportional to the kinetic energy.

3  Primal Representation and Example Dynamics
An equivalent primal representation can be obtained by rewriting the equations in terms of Z̃ =
∇ψ*(Z) and its derivatives (Z̃ is a primal variable that remains in X, since ∇ψ* maps into X).
In this section, we assume that ψ* is twice differentiable on E*. Taking the time derivative of
Z̃(t) = ∇ψ*(Z(t)), we have

$$
\dot{\tilde Z}(t) = \nabla^2\psi^*(Z(t))\,\dot Z(t) = -\eta(t)\,\nabla^2\psi^* \circ \nabla\psi(\tilde Z(t))\,\nabla f(X(t)),
$$

where ∇²ψ*(z) is the Hessian of ψ* at z, defined as ∇²ψ*(z)_{ij} = ∂²ψ*(z)/(∂z_i ∂z_j). Then using the averaging
expression for X, we can write AMD_{w,η} in the following primal form

$$
\mathrm{AMD}^p_{w,\eta}\qquad
\begin{cases}
\dot{\tilde Z}(t) = -\eta(t)\,\nabla^2\psi^* \circ \nabla\psi(\tilde Z(t))\,
\nabla f\!\left(\dfrac{x_0 W(t_0) + \int_{t_0}^{t} w(\tau)\tilde Z(\tau)\,d\tau}{W(t)}\right)\\[6pt]
\tilde Z(t_0) = x_0 .
\end{cases}
\tag{6}
$$

A similar derivation can be made for the mirror descent ODE without acceleration, which can be
written as follows [11] (see also the original derivation of Nemirovski and Yudin in Chapter 3 in [13])

$$
\mathrm{MD}\qquad
\begin{cases}
\dot Z(t) = -\nabla f(X(t))\\
X(t) = \nabla\psi^*(Z(t))\\
X(t_0) = x_0 .
\end{cases}
$$

Note that this can be interpreted as a limit case of AMD_{w,η} with η(t) ≡ 1 and w(t) a Dirac function
at t. Taking the time derivative of X(t) = ∇ψ*(Z(t)), we have Ẋ(t) = ∇²ψ*(Z(t))Ż(t), which
leads to the primal form of the mirror descent ODE

$$
\mathrm{MD}^p\qquad
\begin{cases}
\dot X(t) = -\nabla^2\psi^* \circ \nabla\psi(X(t))\,\nabla f(X(t))\\
X(t_0) = x_0 .
\end{cases}
\tag{7}
$$
The operator ∇²ψ* ∘ ∇ψ appears in both primal representations (6) and (7), and multiplies the
gradient of f. It can be thought of as a transformation of the gradient which ensures that the primal
trajectory remains in the feasible set; this is illustrated in the supplementary material. For some
choices of ψ, ∇²ψ* ∘ ∇ψ has a simple expression. We give two examples below.

We also observe that in its primal form, AMD^p_{w,η} is a generalization of the ODE family studied
in [23], which can be written as

$$
\frac{d}{dt}\,\nabla\psi\big(X(t) + e^{-\alpha(t)}\dot X(t)\big) = -e^{\alpha(t)+\beta(t)}\,\nabla f(X(t)),
$$

for which they prove the convergence rate O(e^{−β(t)}). This corresponds to setting, in our notation, a(t) = e^{α(t)},
r(t) = e^{β(t)}, and taking η(t) = a(t)r(t) (which corresponds to the condition of Corollary 1).
Positive-orthant-constrained dynamics. Suppose that X is the positive orthant ℝⁿ₊, and consider
the negative entropy function ψ(x) = Σᵢ x_i ln x_i. Then its dual is ψ*(z) = Σᵢ e^{z_i−1}, and we have
∇ψ(x)_i = 1 + ln x_i and ∇²ψ*(z)_{i,j} = δ_{ij} e^{z_i−1}, where δ_{ij} is 1 if i = j and 0 otherwise. Thus for all
x ∈ ℝⁿ₊, ∇²ψ* ∘ ∇ψ(x) = diag(x). Therefore, the primal forms (7) and (6) reduce to, respectively,

$$
\begin{cases}
\forall i,\ \dot X_i = -X_i\,\nabla f(X)_i\\
X(0) = x_0
\end{cases}
\qquad
\begin{cases}
\forall i,\ \dot{\tilde Z}_i = -\eta(t)\,\tilde Z_i\,\nabla f(X)_i\\
\tilde Z(t_0) = x_0,
\end{cases}
$$

where for the second ODE we write X compactly to denote the weighted average given by the second
equation of AMD_{w,η}. When f is affine, the mirror descent ODE leads to the Lotka–Volterra equation,
which has applications in economics and ecology. For the mirror descent ODE, one can verify that
the solution remains in the positive orthant since Ẋᵢ tends to 0 as Xᵢ approaches the boundary of the
feasible set. Similarly for the accelerated version, Z̃̇ᵢ tends to 0 as Z̃ approaches the boundary, thus Z̃
remains feasible, and so does X by convexity.
Simplex-constrained dynamics: the replicator equation. Now suppose that X is the n-simplex,
X = Δⁿ = {x ∈ ℝⁿ₊ : Σ_{i=1}^n x_i = 1}. Consider the distance-generating function ψ(x) =
Σ_{i=1}^n x_i ln x_i + δ_X(x), where δ_X(·) is the convex indicator function of the feasible set. Then its
conjugate is ψ*(z) = ln(Σ_{i=1}^n e^{z_i}), defined on E*, and we have ∇ψ(x)_i = 1 + ln x_i, ∇ψ*(z)_i =
e^{z_i}/Σ_k e^{z_k}, and

$$
\nabla^2\psi^*(z)_{ij} = \delta_{ij}\,\frac{e^{z_i}}{\sum_k e^{z_k}} - \frac{e^{z_i}\,e^{z_j}}{\big(\sum_k e^{z_k}\big)^2}.
$$

Then it is simple to calculate ∇²ψ* ∘ ∇ψ(x)_{ij} = δ_{ij}x_i − x_ix_j. Therefore, the primal forms (7) and (6) reduce to, respectively,

$$
\begin{cases}
\forall i,\ \dot X_i + X_i\big(\nabla f(X)_i - \langle X, \nabla f(X)\rangle\big) = 0\\
X(0) = x_0
\end{cases}
\qquad
\begin{cases}
\forall i,\ \dot{\tilde Z}_i + \eta(t)\tilde Z_i\big(\nabla f(X)_i - \langle \tilde Z, \nabla f(X)\rangle\big) = 0\\
\tilde Z(0) = x_0 .
\end{cases}
$$

The first ODE is known as the replicator dynamics [19], and has many applications in evolutionary
game theory [22] and viability theory [4], among others. See the supplementary material for additional
discussion on the interpretation and applications of the replicator dynamics. This example shows that
the replicator dynamics can be accelerated simply by performing the original replicator update on the
variable Z̃, in which (i) the gradient of the objective function is scaled by η(t) at time t, and (ii) the
gradient is evaluated at X(t), the weighted average of the Z̃ trajectory.
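As an illustration, here is a rough forward-Euler integration of the accelerated replicator system above (our own sketch, not the authors' code; the quadratic test objective, step size, and constants are arbitrary choices), with r(t) = t², η(t) = λt, and w(t) = (t/t₀)^{λ−1} as in Example 1:

```python
import numpy as np

def grad_f(x, target):
    # f(x) = 0.5 * ||x - target||^2, an arbitrary smooth test objective.
    return x - target

def accelerated_replicator(x0, target, lam=3.0, t0=1.0, dt=1e-3, T=10.0):
    """Forward-Euler integration of the accelerated replicator ODE with
    eta(t) = lam*t and averaging weights w(t) = (t/t0)**(lam-1)."""
    Z = x0.copy()                 # the Z-tilde variable, stays on the simplex
    X = x0.copy()                 # the weighted average of the Z trajectory
    W0 = t0 / lam                 # W(t0) for this choice of w
    num, den = x0 * 0.0, 0.0      # running integrals of w*Z and of w
    t = t0
    while t < T:
        g = grad_f(X, target)
        Z = Z - dt * lam * t * Z * (g - Z @ g)   # replicator step, gradient at X
        Z = np.clip(Z, 1e-12, None)
        Z /= Z.sum()                             # guard against Euler round-off
        wt = (t / t0)**(lam - 1.0)
        num += dt * wt * Z
        den += dt * wt
        X = (x0 * W0 + num) / (W0 + den)         # averaged primal iterate
        t += dt
    return X

x0 = np.ones(3) / 3.0
print(accelerated_replicator(x0, np.array([0.7, 0.2, 0.1])))  # approaches target
```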
4  Adaptive Averaging Heuristic

In this section, we propose an adaptive averaging heuristic for adaptively computing the weights
w. Note that in Corollary 1, we simply set a(t) = η(t)/r(t) so that ⟨∇f(X(t)), Ẋ(t)⟩(η(t)/r(t) − a(t)) is
identically zero (thus trivially satisfying condition (2) of Theorem 2). However, from the bound (3),
if this term is negative, then this helps further decrease the Lyapunov function L_r (as well as the
energy function E_r). A simple strategy is then to adaptively choose a(t) as follows

$$
\begin{cases}
a(t) = \eta(t)/r(t) & \text{if } \langle \nabla f(X(t)), \dot X(t)\rangle > 0,\\
a(t) \ge \eta(t)/r(t) & \text{otherwise.}
\end{cases}
\tag{8}
$$

If we further have η(t) ≥ r′(t), then the conditions of Theorem 2 and Theorem 3 are satisfied, which
guarantee that L_r is a Lyapunov function and that the energy E_r decreases. In particular, such a
heuristic would preserve the convergence rate r(t) by Theorem 2.
We now propose a discrete version of the heuristic when r(t) = t². We consider the quadratic rate
in particular since in this case the discretization proposed by [11] preserves the quadratic rate, and
corresponds to a first-order accelerated method² for which many heuristics have been developed,
such as the restarting heuristics [17, 20] discussed in the introduction. To satisfy condition (1) of
Theorem 2, we choose η(t) = λt with λ ≥ 2. Note that in this case, η(t)/r(t) = λ/t. In the supplementary
material, we propose a discretization of the heuristic (8), using the correspondence t = k√s, for a
step size s. The resulting algorithm is summarized in Algorithm 1, where ψ is a smooth distance
generating function, and R is a regularizer assumed to be strongly convex and smooth. We give a
bound on the convergence rate of Algorithm 1 in the supplementary material. The proof relies on a
discrete counterpart of the Lyapunov function L_r.

The algorithm keeps a_k = a_{k−1} whenever f(x̃^{(k+1)}) ≤ f(x̃^{(k)}), and sets a_k to λ/(k√s) otherwise. This
results in a non-increasing sequence a_k. It is worth observing that in continuous time, from the
expression (4), a constant a(t) over an interval [t₁, t₂] corresponds to an exponential increase in
the weight w(t) over that interval, while a(t) = λ/t corresponds to a polynomial increase w(t) =
(t/t₀)^{λ−1}. Intuitively, adaptive averaging increases the weights w(t) on portions of the trajectory
which make progress.

Algorithm 1 Accelerated mirror descent with adaptive averaging
1: Initialize x̃^{(0)} = x₀, z̃^{(0)} = x₀, a₁ = λ/√s
2: for k ∈ ℕ do
3:   z̃^{(k+1)} = arg min_{z̃∈X} λks⟨∇f(x^{(k)}), z̃⟩ + D_ψ(z̃, z̃^{(k)})
4:   x̃^{(k+1)} = arg min_{x̃∈X} γs⟨∇f(x^{(k)}), x̃⟩ + R(x̃, x^{(k)})
5:   x^{(k+1)} = λ_{k+1} z̃^{(k+1)} + (1 − λ_{k+1}) x̃^{(k+1)}, with λ_k = √s a_k/(1 + √s a_k)
6:   a_k = a_{k−1}
7:   if f(x̃^{(k+1)}) − f(x̃^{(k)}) > 0 then
8:     a_k = λ/(k√s)
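For concreteness, the following is a minimal NumPy sketch of Algorithm 1 on the probability simplex (our own illustrative code, not the authors' implementation: we take D_ψ entropic so the z̃ step is a multiplicative-weights update, pick a Euclidean regularizer R, and use λ_k in place of λ_{k+1} in the averaging step):

```python
import numpy as np

def simplex_proj(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / (np.arange(len(v)) + 1.0))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def amd_adaptive(f, grad_f, x0, s=1e-2, lam=3.0, gamma=1.0, iters=500):
    """Sketch of Algorithm 1 with entropic D_psi and a Euclidean R."""
    x, xt, zt = x0.copy(), x0.copy(), x0.copy()
    a_prev = lam / np.sqrt(s)                      # a_1 = lam / sqrt(s)
    for k in range(1, iters + 1):
        g = grad_f(x)
        # Step 3: entropic mirror step = multiplicative-weights update.
        zt = zt * np.exp(-lam * k * s * g)
        zt /= zt.sum()
        # Step 4: Euclidean prox step for the regularizer R.
        xt_new = simplex_proj(x - gamma * s * g)
        # Steps 6-8: keep a_k while making progress, otherwise reset it.
        a_k = a_prev if f(xt_new) <= f(xt) else lam / (k * np.sqrt(s))
        lam_k = np.sqrt(s) * a_k / (1.0 + np.sqrt(s) * a_k)
        # Step 5 (using lam_k in place of lam_{k+1} for simplicity).
        x = lam_k * zt + (1.0 - lam_k) * xt_new
        xt, a_prev = xt_new, a_k
    return x
```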
5  Numerical Experiments
In this section, we compare our adaptive averaging heuristic (in its discrete version given in Algorithm 1) to existing restarting heuristics. We consider simplex-constrained problems and take
the distance generating function ψ to be the entropy function, so that the resulting algorithm is a
discretization of the accelerated replicator ODE studied in Section 3. We perform the experiments in
ℝ³ so that we can visualize the solution trajectories (the supplementary material contains additional
experiments in higher dimension). We consider different objective functions: a strongly convex
quadratic given by f(x) = (x − s)ᵀA(x − s) for a positive definite matrix A, a weakly convex
quadratic, a linear function f(x) = cᵀx, and the Kullback–Leibler divergence, f(x) = D_KL(x*, x).
We compare the following methods:

1. The original accelerated mirror descent method (in which the weights follow a predetermined schedule given by a_k = λ/(k√s)),
2. Our adaptive averaging, in which a_k is computed adaptively following Algorithm 1,
3. The gradient restarting heuristic in [17], in which the algorithm is restarted from the current point whenever ⟨∇f(x^{(k)}), x^{(k+1)} − x^{(k)}⟩ > 0,
4. The speed restarting heuristic in [20], in which the algorithm is restarted from the current point whenever ‖x^{(k+1)} − x^{(k)}‖ ≤ ‖x^{(k)} − x^{(k−1)}‖.

The results are shown in Figure 2. Each subfigure is divided into four plots: clockwise from the top
left, we show the value of the objective function, the trajectory on the simplex, the value of the energy
function E_r and the value of the Lyapunov function L_r.
² For faster rates r(t) = t^p, p > 2, it is possible to discretize the ODE and preserve the convergence rate, as
proposed by Wibisono et al. [23]; however this discretization results in a higher-order method such as Nesterov's
cubic accelerated Newton method [16].
The experiments show that adaptive averaging compares favorably to the restarting heuristics on
all these examples, with a significant improvement in the strongly convex case. Additionally, the
experiments confirm that under the adaptive averaging heuristic, the Lyapunov function is decreasing.
This is not the case for the restarting heuristics as can be seen on the weakly convex example. It is
interesting to observe, however, that the energy function Er is non-increasing for all the methods
in our experiments. If we interpret the energy as the sum of a potential and a kinetic term, then this
could be explained intuitively by the fact that restarting keeps the potential energy constant, and
decreases the kinetic energy (since the velocity is reset to zero). It is also worth observing that even
though the Lyapunov function L_r is non-increasing, it will not necessarily converge to 0 when there
is more than one minimizer (its limit will depend on the choice of z* in the definition of L_r).
Finally, we observe that the methods have a different qualitative behavior: The original accelerated
method typically exhibits oscillations around the set of minimizers. The heuristics alleviate these
oscillations in different ways: Intuitively, adaptive averaging acts by increasing the weights on
portions of the trajectory which make the most progress, while the restarting heuristics reset the
velocity to zero whenever the algorithm detects that the trajectory is moving in a bad direction. The
speed restarting heuristic seems to be more conservative in that it restarts more frequently.
(a) Strongly convex quadratic.
(b) Weakly convex function.
(c) Linear function.
(d) KL divergence.
Figure 2: Examples of accelerated descent with adaptive averaging and restarting.
6  Conclusion
Motivated by the averaging formulation of accelerated mirror descent, we studied a family of ODEs
with a generalized averaging scheme, and gave simple sufficient conditions on the weight functions to
guarantee a given convergence rate in continuous time. We showed as an example how the replicator
ODE can be accelerated by averaging. Our adaptive averaging heuristic preserves the convergence
rate (since it preserves the Lyapunov function), and it seems to perform at least as well as other
heuristics for first-order accelerated methods, and in some cases considerably better. This encourages
further investigation into the performance of this adaptive averaging, both theoretically (by attempting
to prove faster rates, e.g. for strongly convex functions), and numerically, by testing it on other
methods, such as the higher-order accelerated methods proposed in [23].
References
[1] H. Attouch and J. Peypouquet. The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM Journal on Optimization, 26(3):1824–1834, 2016.
[2] H. Attouch, J. Peypouquet, and P. Redont. Fast convergence of an inertial gradient-like system with vanishing viscosity. CoRR, abs/1507.04782, 2015.
[3] H. Attouch, J. Peypouquet, and P. Redont. Fast convex optimization via inertial dynamics with Hessian driven damping. CoRR, abs/1601.07113, 2016.
[4] J.-P. Aubin. Viability Theory. Birkhäuser Boston Inc., Cambridge, MA, USA, 1991.
[5] A. Bloch, editor. Hamiltonian and gradient flows, algorithms, and control. American Mathematical Society, 1994.
[6] A. A. Brown and M. C. Bartholomew-Biggs. Some effective methods for unconstrained optimization based on the solution of systems of ordinary differential equations. Journal of Optimization Theory and Applications, 62(2):211–224, 1989.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011.
[8] N. Flammarion and F. R. Bach. From averaging to acceleration, there is only a step-size. In 28th Conference on Learning Theory, COLT, pages 658–695, 2015.
[9] U. Helmke and J. Moore. Optimization and dynamical systems. Communications and control engineering series. Springer-Verlag, 1994.
[10] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.
[11] W. Krichene, A. Bayen, and P. Bartlett. Accelerated mirror descent in continuous and discrete time. In NIPS, 2015.
[12] A. Lyapunov. General Problem of the Stability of Motion. Control Theory and Applications Series. Taylor & Francis, 1992.
[13] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience series in discrete mathematics. Wiley, 1983.
[14] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[15] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[16] Y. Nesterov. Accelerating the cubic regularization of Newton's method on convex problems. Mathematical Programming, 112(1):159–181, 2008.
[17] B. O'Donoghue and E. Candès. Adaptive restart for accelerated gradient schemes. Foundations of Computational Mathematics, 15(3):715–732, 2015. ISSN 1615-3375.
[18] R. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[19] K. Sigmund. Complexity, Language, and Life: Mathematical Approaches, chapter A Survey of Replicator Equations, pages 88–104. Springer Berlin Heidelberg, Berlin, Heidelberg, 1986.
[20] W. Su, S. Boyd, and E. Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In NIPS, 2014.
[21] G. Teschl. Ordinary differential equations and dynamical systems, volume 140. American Mathematical Soc., 2012.
[22] J. W. Weibull. Evolutionary game theory. MIT Press, 1997.
[23] A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. CoRR, abs/1603.04245, 2016.
Finite Sample Prediction and Recovery Bounds
for Ordinal Embedding
Lalit Jain
University of Michigan
Ann Arbor, MI 48109
[email protected]
Kevin Jamieson
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Robert Nowak
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
The goal of ordinal embedding is to represent items as points in a low-dimensional
Euclidean space given a set of constraints like ?item i is closer to item j than
item k?. Ordinal constraints like this often come from human judgments. The
classic approach to solving this problem is known as non-metric multidimensional
scaling. To account for errors and variation in judgments, we consider the noisy
situation in which the given constraints are independently corrupted by reversing
the correct constraint with some probability. The ordinal embedding problem has
been studied for decades, but most past work pays little attention to the question
of whether accurate embedding is possible, apart from empirical studies. This
paper shows that under a generative data model it is possible to learn the correct
embedding from noisy distance comparisons. In establishing this fundamental
result, the paper makes several new contributions. First, we derive prediction error
bounds for embedding from noisy distance comparisons by exploiting the fact
that the rank of a distance matrix of points in ℝ^d is at most d + 2. These bounds
characterize how well a learned embedding predicts new comparative judgments.
Second, we show that the underlying embedding can be recovered by solving a
simple convex optimization. This result is highly non-trivial since we show that
the linear map corresponding to distance comparisons is non-invertible, but there
exists a nonlinear map that is invertible. Third, two new algorithms for ordinal
embedding are proposed and evaluated in experiments.
1  Ordinal Embedding
Ordinal embedding aims to represent items as points in ℝ^d so that the distances between items agree
as well as possible with a given set of ordinal comparisons such as item i is closer to item j than
to item k. In other words, the goal is to find a geometric representation of data that is faithful to
comparative similarity judgments. This problem has been studied and applied for more than 50 years,
dating back to the classic non-metric multidimensional scaling (NMDS) [1, 2] approach, and it is
widely used to gauge and visualize how people perceive similarities.
Despite the widespread application of NMDS and recent algorithmic developments [3, 4, 5, 6, 7],
the fundamental question of whether an embedding can be learned from noisy distance/similarity
comparisons had not been answered. This paper shows that if the data are generated according to
a known probabilistic model, then accurate recovery of the underlying embedding is possible by
solving a simple convex optimization, settling this long-standing open question. In the process of
answering this question, the paper also characterizes how well a learned embedding predicts new
distance comparisons and presents two new computationally efficient algorithms for solving the
optimization problem.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
1.1  Related Work
The classic approach to ordinal embedding is NMDS [1, 2]. Recently, several authors have proposed
new approaches based on more modern techniques. Generalized NMDS [3] and Stochastic Triplet
Embedding (STE) [6] employ hinge or logistic loss measures and convex relaxations of the low-dimensionality (i.e., rank) constraint based on the nuclear norm. These works are most closely related
to the theory and methods in this paper. The Linear partial order embedding (LPOE) method is
similar, but starts with a known Euclidean embedding and learns a kernel/metric in this space based
distance comparison data [7]. The Crowd Kernel [4] and t-STE [6] propose alternative non-convex
loss measures based on probabilistic generative models. The main contributions in these papers are
new optimization methods and experimental studies, but did not address the fundamental question
of whether an embedding can be recovered under an assumed generative model. Other recent work
has looked at the asymptotics of ordinal embedding, showing that embeddings can be learned as the
number of items grows and the items densely populate the embedding space [8, 9, 10]. In contrast,
this paper focuses on the practical setting involving a finite set of items. Finally, it is known that at least
2dn log n distance comparisons are necessary to learn an embedding of n points in ℝ^d [5].
1.2  Ordinal Embedding from Noisy Data
Consider n points x₁, x₂, . . . , xₙ ∈ ℝ^d. Let X = [x₁ ⋯ xₙ] ∈ ℝ^{d×n}. The Euclidean distance
matrix D* is defined to have elements D*_{ij} = ‖x_i − x_j‖²₂. Ordinal embedding is the problem of
recovering X given ordinal constraints on distances. This paper focuses on "triplet" constraints of
the form D*_{ij} < D*_{ik}, where 1 ≤ i ≠ j ≠ k ≤ n. Furthermore, we only observe noisy indications of
these constraints, as follows. Each triplet t = (i, j, k) has an associated probability p_t satisfying

$$
p_t > 1/2 \iff \|x_i - x_j\|^2 < \|x_i - x_k\|^2 .
$$
Let S denote a collection of triplets drawn independently and uniformly at random. And for
each t ∈ S we observe an independent random variable y_t = −1 with probability p_t, and y_t = 1
otherwise. The goal is to recover the embedding X from these data. Exact recovery of D* from such
data requires a known link between p_t and D*. To this end, our main focus is the following problem.
Ordinal Embedding from Noisy Data
Consider n points x₁, x₂, ⋯, xₙ in d-dimensional Euclidean space. Let S denote a collection of
triplets and for each t ∈ S observe an independent random variable

$$
y_t =
\begin{cases}
-1 & \text{w.p. } f(D^*_{ij} - D^*_{ik})\\
\ \ 1 & \text{w.p. } 1 - f(D^*_{ij} - D^*_{ik}),
\end{cases}
$$

where the link function f : ℝ → [0, 1] is known. Estimate X from S, {y_t}, and f.
For example, if f is the logistic function, then for triplet t = (i, j, k)

$$
p_t = \mathbb P(y_t = -1) = f(D^*_{ij} - D^*_{ik}) = \frac{1}{1 + \exp(D^*_{ij} - D^*_{ik})},
\tag{1}
$$

and then D*_{ij} − D*_{ik} = log((1 − p_t)/p_t). However, we stress that we only require the existence of a link
function for exact recovery of D*. Indeed, if one just wishes to predict the answers to unobserved
triplets, then the results of Section 2 hold for arbitrary p_t probabilities. Aspects of the statistical
analysis are related to one-bit matrix completion and rank aggregation [11, 12, 13]. However, we
use novel methods for the recovery of the embedding based on geometric properties of Euclidean
distance matrices.
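For concreteness, a short sketch (our own code, not from the paper) that samples triplets uniformly and generates labels under the logistic link (1):

```python
import numpy as np

def sample_noisy_triplets(X, num, seed=0):
    """X: d x n matrix of points. Returns triplets t = (i, j, k) with j < k
    and labels y_t in {-1, +1}, where P(y_t = -1) = 1/(1 + exp(D_ij - D_ik))."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = ((X[:, :, None] - X[:, None, :])**2).sum(axis=0)  # squared distances
    triplets, labels = [], []
    for _ in range(num):
        i, j, k = rng.choice(n, size=3, replace=False)
        j, k = min(j, k), max(j, k)
        p = 1.0 / (1.0 + np.exp(D[i, j] - D[i, k]))       # P(y = -1), eq. (1)
        labels.append(-1 if rng.random() < p else 1)
        triplets.append((i, j, k))
    return np.array(triplets), np.array(labels)
```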
1.3
Organization of Paper
This paper takes the following approach to ordinal embedding.
1. Our samples are assumed to be independently generated according to a probabilistic model based
on an underlying low-rank distance matrix. We use relatively standard statistical learning theory
techniques to analyze the minimizer of a bounded, Lipschitz loss with a nuclear norm constraint,
and show that an embedding can be learned from the data that predicts nearly as well as the true
embedding with O(dn log n) samples (Theorem 1).
2. Next, assuming the form of the probabilistic generative model is known (e.g., logistic), we show
that if the learned embedding is a good predictor of the ordinal comparisons, then it must also be a
good estimator of the true differences of distances between the embedding points (Theorem 2). This
result hinges on the fact that the (linear) observation model acts approximately like an isometry on
differences of distances.
3. While the true differences of distances can be estimated, the observation process is "blind" to the
mean distance between embedding points. Despite this, we show that the mean is determined by the
differences of distances, due to the special properties of Euclidean distance matrices. Specifically,
the second eigenvalue of the ?mean-centered? distance matrix (well-estimated by the data from the
estimate of the differences of distances, Theorem 3) is proportional to the mean distance (Theorem 4).
This allows us to show that the minimizer of the loss with a nuclear norm constraint indeed recovers
an accurate estimate of the underlying true distance matrix.
1.4  Notation and Assumptions
We will use (D*, G*) to denote the distance and Gram matrices of the latent embedding, and (D, G)
to denote an arbitrary distance matrix and its corresponding Gram matrix. The observations {y_t} carry
information about D*, but distance matrices are invariant to rotation and translation, and therefore
it may only be possible to recover X up to a rigid transformation. Without loss of generality, we
assume the points x₁, . . . , xₙ ∈ ℝ^d are centered at the origin (i.e., Σ_{i=1}^{n} x_i = 0).

Define the centering matrix V := I − (1/n)11ᵀ. If X is centered, XV = X. Note that D* is
determined by the Gram matrix G* = XᵀX. In addition, X can be determined from G* up
to a unitary transformation. Note that if X is centered, the Gram matrix is "centered" so that
V G*V = G*. It will be convenient in the paper to work with both the distance and Gram matrix
representations, and the following identities will be useful to keep in mind. For any distance matrix
D and its centered Gram matrix G,

$$
G = -\tfrac{1}{2}\, V D V,
\tag{2}
$$
$$
D = \mathrm{diag}(G)\mathbf 1^\top - 2G + \mathbf 1\,\mathrm{diag}(G)^\top,
\tag{3}
$$

where diag(G) is the column vector composed of the diagonal of G. In particular this establishes a
bijection between centered Gram matrices and distance matrices. We refer the reader to [14] for an
insightful and thorough treatment of the properties of distance matrices. We also define the set of all
unique triplets

T := {(i, j, k) : 1 ≤ i ≠ j ≠ k ≤ n, j < k}.
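The bijection given by (2) and (3) is easy to verify numerically; the following is a small self-contained sketch (our own code, not from the paper):

```python
import numpy as np

n, d = 8, 3
rng = np.random.default_rng(1)
X = rng.standard_normal((d, n))
X -= X.mean(axis=1, keepdims=True)            # center the points

G = X.T @ X                                   # centered Gram matrix
g = np.diag(G)[:, None]
D = g + g.T - 2 * G                           # eq. (3): diag(G)1^T - 2G + 1 diag(G)^T

V = np.eye(n) - np.ones((n, n)) / n           # centering matrix
G_back = -0.5 * V @ D @ V                     # eq. (2) recovers G from D
assert np.allclose(G, G_back)
```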
Assumption 1. The observed triplets in S are drawn independently and uniformly from T.
2  Prediction Error Bounds
For t ∈ T with t = (i, j, k) we define L_t to be the linear operator satisfying L_t(XᵀX) =
‖x_i − x_j‖² − ‖x_i − x_k‖² for all t ∈ T. In general, for any Gram matrix G,

$$
L_t(G) := G_{jj} - 2G_{ij} - G_{kk} + 2G_{ik}.
$$

We can naturally view L_t as a linear operator on S^n_+, the space of n×n symmetric positive semidefinite
matrices. We can also represent L_t as a symmetric n × n matrix that is zero everywhere except on
the submatrix corresponding to i, j, k, which has the form

$$
\begin{bmatrix} 0 & -1 & 1\\ -1 & 1 & 0\\ 1 & 0 & -1 \end{bmatrix},
$$

and so we will write

L_t(G) := ⟨L_t, G⟩,
where ⟨A, B⟩ = vec(A)ᵀvec(B) for any compatible matrices A, B. Ordering the elements of T
lexicographically, we arrange all the L_t(G) together to define the $n\binom{n-1}{2}$-dimensional vector

$$
L(G) = [L_{123}(G), L_{124}(G), \cdots, L_{ijk}(G), \cdots]^\top.
\tag{4}
$$

Let ℓ(y_t⟨L_t, G⟩) denote a loss function. For example, we can consider the 0–1 loss ℓ(y_t⟨L_t, G⟩) =
1{sign(y_t⟨L_t, G⟩) ≠ 1}, the hinge loss ℓ(y_t⟨L_t, G⟩) = max{0, 1 − y_t⟨L_t, G⟩}, or the logistic loss

$$
\ell(y_t\langle L_t, G\rangle) = \log(1 + \exp(-y_t\langle L_t, G\rangle)).
\tag{5}
$$

Let p_t := P(y_t = −1) and take the expectation of the loss with respect to both the uniformly random
selection of the triplet t and the observation y_t; we have the risk of G

$$
R(G) := \mathbb E[\ell(y_t\langle L_t, G\rangle)] = \frac{1}{|T|}\sum_{t\in T}\big[p_t\,\ell(-\langle L_t, G\rangle) + (1 - p_t)\,\ell(\langle L_t, G\rangle)\big].
$$
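In code, the L_t matrices and the logistic empirical risk look like the following sketch (ours; the function names are illustrative):

```python
import numpy as np

def L_t(n, i, j, k):
    """Symmetric matrix with <L_t, G> = G_jj - 2 G_ij - G_kk + 2 G_ik."""
    M = np.zeros((n, n))
    M[j, j] = 1.0
    M[k, k] = -1.0
    M[i, j] = M[j, i] = -1.0
    M[i, k] = M[k, i] = 1.0
    return M

def empirical_risk(G, triplets, labels):
    """Logistic empirical risk: mean of log(1 + exp(-y_t <L_t, G>))."""
    n = G.shape[0]
    vals = np.array([np.sum(L_t(n, i, j, k) * G) for (i, j, k) in triplets])
    return np.mean(np.log1p(np.exp(-labels * vals)))
```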
Given a set of observations S under the model defined in the problem statement, the empirical risk is

$$
\hat R_S(G) = \frac{1}{|S|}\sum_{t\in S} \ell(y_t\langle L_t, G\rangle),
\tag{6}
$$

which is an unbiased estimator of the true risk: E[R̂_S(G)] = R(G). For any G ∈ S^n_+, let ‖G‖_*
denote the nuclear norm and ‖G‖_∞ := max_{ij}|G_{ij}|. Define the constraint set

$$
\mathcal G_{\lambda,\gamma} := \{G \in S^n_+ : \|G\|_* \le \lambda,\ \|G\|_\infty \le \gamma\}.
\tag{7}
$$

We estimate G* by Ĝ, the solution of the program

$$
\hat G := \operatorname*{arg\,min}_{G \in \mathcal G_{\lambda,\gamma}} \hat R_S(G).
\tag{8}
$$
Since G* is positive semidefinite, we expect the diagonal entries of G* to bound the off-diagonal
entries. So an infinity norm constraint on the diagonal guarantees that the points x₁, . . . , xₙ corresponding to G* live inside a bounded ℓ₂ ball. The ℓ∞ constraint in (7) plays two roles: 1) if our loss
function is Lipschitz, large magnitude values of ⟨L_t, G⟩ can lead to large deviations of R̂_S(G) from
R(G); bounding ‖G‖_∞ bounds |⟨L_t, G⟩|. 2) Later we will define ℓ in terms of the link function
f, and as the magnitude of ⟨L_t, G⟩ increases the magnitude of the derivative of the link function f
typically becomes very small, making it difficult to "invert"; bounding ‖G‖_∞ tends to keep ⟨L_t, G⟩
within an invertible regime of f.
Theorem 1. Fix λ, γ and assume G* ∈ G_{λ,γ}. If the loss function ℓ(·) is L-Lipschitz (or |sup_y ℓ(y)| ≤
L max{1, 12γ}) then with probability at least 1 − δ,

$$
R(\hat G) - R(G^*) \le \frac{4L\lambda}{|S|}\left(\sqrt{\frac{18|S|\log(n)}{n}} + \frac{\sqrt 3}{3}\log n\right)
+ L\gamma\sqrt{\frac{288\log(2/\delta)}{|S|}} .
$$
Proof. The proof follows from standard statistical learning theory techniques; see for instance [15].
By the bounded difference inequality, with probability 1 − δ,

$$
\begin{aligned}
R(\hat G) - R(G^*) &= R(\hat G) - \hat R_S(\hat G) + \hat R_S(\hat G) - \hat R_S(G^*) + \hat R_S(G^*) - R(G^*)\\
&\le 2\sup_{G\in\mathcal G_{\lambda,\gamma}}|\hat R_S(G) - R(G)|
\le 2\,\mathbb E\Big[\sup_{G\in\mathcal G_{\lambda,\gamma}}|\hat R_S(G) - R(G)|\Big] + \sqrt{\frac{2B^2\log(2/\delta)}{|S|}},
\end{aligned}
$$

where sup_{G∈G_{λ,γ}} ℓ(y_t⟨L_t, G⟩) − ℓ(y_{t′}⟨L_{t′}, G⟩) ≤ sup_{G∈G_{λ,γ}} L|⟨y_tL_t − y_{t′}L_{t′}, G⟩| ≤ 12Lγ =: B,
using the facts that L_t has 6 non-zeros of magnitude 1 and ‖G‖_∞ ≤ γ.

Using standard symmetrization and contraction lemmas, we can introduce Rademacher random
variables ε_t ∈ {−1, 1} for all t ∈ S so that

$$
\mathbb E \sup_{G\in\mathcal G_{\lambda,\gamma}} |\hat R_S(G) - R(G)| \le \mathbb E \sup_{G\in\mathcal G_{\lambda,\gamma}} \frac{2L}{|S|}\Big|\sum_{t\in S}\varepsilon_t\langle L_t, G\rangle\Big| .
$$
The right hand side is just the Rademacher complexity of G_{λ,γ}. By definition,

{G : ‖G‖_* ≤ λ} = λ conv({uuᵀ : |u| = 1}),

where conv(U) is the convex hull of a set U. Since the Rademacher complexity of a set is the same
as the Rademacher complexity of its closed convex hull,

$$
\mathbb E \sup_{G\in\mathcal G_{\lambda,\gamma}} \sum_{t\in S}\varepsilon_t\langle L_t, G\rangle
\le \lambda\, \mathbb E \sup_{|u|=1} \sum_{t\in S}\varepsilon_t\langle L_t, uu^\top\rangle
= \lambda\, \mathbb E \sup_{|u|=1} u^\top\Big(\sum_{t\in S}\varepsilon_t L_t\Big)u,
$$

which we recognize is just λ E‖Σ_{t∈S} ε_tL_t‖. By [16, 6.6.1] we can bound the operator norm
‖Σ_{t∈S} ε_tL_t‖ in terms of the variance of Σ_{t∈S} L_t² and the maximal eigenvalue of max_t L_t. These
are computed in Lemma 1 given in the supplemental materials. Combining these results gives

$$
\frac{2L\lambda}{|S|}\,\mathbb E\Big\|\sum_{t\in S}\varepsilon_t L_t\Big\| \le \frac{2L\lambda}{|S|}\left(\sqrt{\frac{18|S|\log(n)}{n}} + \frac{\sqrt 3}{3}\log n\right).
$$
We remark that if G is a rank d < n matrix then

$$
\|G\|_* \le \sqrt d\,\|G\|_F \le \sqrt d\, n\,\|G\|_\infty,
$$

so if G* is low rank, we really only need a bound on the infinity norm of our constraint set. Under
the assumption that G* is rank d with ‖G*‖_∞ ≤ γ and we set λ = γ√d n, then Theorem 1 implies
that for |S| > n log n/161,

$$
R(\hat G) - R(G^*) \le 8L\gamma\sqrt{\frac{18\,d\,n\log(n)}{|S|}} + L\gamma\sqrt{\frac{288\log(2/\delta)}{|S|}}
$$

with probability at least 1 − δ. The above display says that |S| must scale like dn log(n), which is
consistent with known finite sample bounds [5].
3  Maximum Likelihood Embedding
We now turn our attention to recovering metric information about G*. Let S be a collection of
triplets sampled uniformly at random with replacement, and let f : ℝ → (0, 1) be a known probability
function governing the observations. Any link function f induces a natural loss function ℓ_f, namely,
the negative log-likelihood of a solution G given an observation y_t, defined as

$$
\ell_f(y_t\langle L_t, G\rangle) = \mathbf 1_{y_t=-1}\,\log\!\Big(\frac{1}{f(\langle L_t, G\rangle)}\Big) + \mathbf 1_{y_t=1}\,\log\!\Big(\frac{1}{1 - f(\langle L_t, G\rangle)}\Big).
$$

For example, the logistic link function of (1) induces the logistic loss of (5). Recalling that P(y_t =
−1) = f(⟨L_t, G*⟩), we have

$$
\begin{aligned}
\mathbb E[\ell_f(y_t\langle L_t, G\rangle)]
&= f(\langle L_t, G^*\rangle)\log\!\Big(\frac{1}{f(\langle L_t, G\rangle)}\Big) + \big(1 - f(\langle L_t, G^*\rangle)\big)\log\!\Big(\frac{1}{1 - f(\langle L_t, G\rangle)}\Big)\\
&= H(f(\langle L_t, G^*\rangle)) + \mathrm{KL}\big(f(\langle L_t, G^*\rangle)\,\big|\,f(\langle L_t, G\rangle)\big),
\end{aligned}
$$

where H(p) = p log(1/p) + (1−p) log(1/(1−p)) and KL(p, q) = p log(p/q) + (1−p) log((1−p)/(1−q)) are the
entropy and KL divergence of Bernoulli RVs with means p, q. Recall that ‖G‖_∞ ≤ γ controls the
magnitude of ⟨L_t, G⟩, so for the moment assume this is small. Then by a Taylor series f(⟨L_t, G⟩) ≈
1/2 + f′(0)⟨L_t, G⟩, using the fact that f(0) = 1/2, and by another Taylor series we have

$$
\mathrm{KL}\big(f(\langle L_t, G^*\rangle)\,\big|\,f(\langle L_t, G\rangle)\big)
\approx \mathrm{KL}\big(\tfrac12 + f'(0)\langle L_t, G^*\rangle\,\big|\,\tfrac12 + f'(0)\langle L_t, G\rangle\big)
\approx 2f'(0)^2\,\langle L_t, G^* - G\rangle^2 .
$$

Thus, recalling the definition of L(G) from (4), we conclude that if G̃ ∈ arg min_G R(G) with
R(G) = (1/|T|) Σ_{t∈T} E[ℓ_f(y_t⟨L_t, G⟩)], then one would expect L(G̃) ≈ L(G*). Moreover, since
R̂_S(G) is an unbiased estimator of R(G), one expects L(Ĝ) to approximate L(G*). The next
theorem, combined with Theorem 1, formalizes this observation; its proof is found in the appendix.
Theorem 2. Let C_f = min_{t∈T} inf_{G∈G_{λ,γ}} |f′(⟨L_t, G⟩)|, where f′ denotes the derivative of f. Then
for any G ∈ G_{λ,γ},

$$
\frac{2C_f^2}{|T|}\,\|L(G) - L(G^*)\|_F^2 \le R(G) - R(G^*).
$$
Note that if f is the logistic link function of (1), then it is straightforward to show that |f′(⟨L_t, G⟩)| ≥
(1/4) exp(−|⟨L_t, G⟩|) ≥ (1/4) exp(−6‖G‖_∞) for any t, G, so it suffices to take C_f = (1/4) exp(−6γ).

It remains to see that we can recover G* even given L(G*), much less L(Ĝ). To do this, it is more
convenient to work with distance matrices instead of Gram matrices. Analogous to the operators
L_t(G) defined above, we define the operators Δ_t for t ∈ T satisfying

$$
\Delta_t(D) := D_{ij} - D_{ik} \equiv L_t(G) .
$$
We will view the Δ_t as linear operators on the space of symmetric hollow n × n matrices S^n_h, which
includes distance matrices as special cases. As with L, we can arrange all the Δ_t together, ordering
the t ∈ T lexicographically, to define the $n\binom{n-1}{2}$-dimensional vector

$$
\Delta(D) = [D_{12} - D_{13}, \cdots, D_{ij} - D_{ik}, \cdots]^\top .
$$
We will use the fact that L(G) ≡ Δ(D) heavily. Because Δ(D) consists of differences of matrix
entries, Δ has a non-trivial kernel. However, it is easy to see that D can be recovered given Δ(D) and
any one off-diagonal element of D, so the kernel is 1-dimensional. Also, the kernel is easy to identify
by example. Consider the regular simplex in d dimensions. The distances between all n = d + 1
vertices are equal and the distance matrix can easily be seen to be 11ᵀ − I. Thus Δ(D) = 0 in this
case. This gives us the following simple result.

Lemma 2. Let S^n_h denote the space of symmetric hollow matrices, which includes all distance
matrices. For any D ∈ S^n_h, the set of linear functionals {Δ_t(D), t ∈ T} spans an $\binom{n}{2} - 1$
dimensional subspace of S^n_h, and the 1-dimensional kernel is given by the span of 11ᵀ − I.
So we see that the operator Δ is not invertible on S^n_h. Define J := 11ᵀ − I. For any D, let C, the
centered distance matrix, be the component of D orthogonal to the kernel of L (i.e., tr(CJ) = 0).
Then we have the orthogonal decomposition

$$
D = C + \sigma_D\, J, \qquad \text{where } \sigma_D = \mathrm{trace}(DJ)/\|J\|_F^2 .
$$

Since G is assumed to be centered, the value of σ_D has a simple interpretation:

$$
\sigma_D = \frac{1}{\binom{n}{2}}\sum_{1\le i<j\le n} D_{ij} = \frac{2}{n-1}\sum_{1\le i\le n}\langle x_i, x_i\rangle\Big/ n \cdot n = \frac{2\|G\|_*}{n-1},
\tag{9}
$$

the average of the squared distances, or alternatively a scaled version of the nuclear norm of G.
Let D̂ and Ĉ be the distance and centered distance matrices corresponding to Ĝ, the
solution to (8). Though Δ is not invertible on all of S^n_h, it is invertible on the subspace orthogonal to
the kernel, namely J^⊥. So if Δ(D̂) ≈ Δ(D*), or equivalently L(Ĝ) ≈ L(G*), we expect Ĉ to be
close to C*. The next theorem quantifies this.

Theorem 3. Consider the setting of Theorems 1 and 2 and let Ĉ, C* be defined as above. Then

$$
\frac{1}{2\binom{n}{2}}\,\|\hat C - C^*\|_F^2 \le \frac{L\lambda}{C_f^2|S|}\left(\sqrt{\frac{18|S|\log(n)}{n}} + \frac{\sqrt 3}{3}\log n\right)
+ \frac{L\gamma}{4C_f^2}\sqrt{\frac{288\log(2/\delta)}{|S|}} .
$$
Proof. By combining Theorem 2 with the prediction error bounds obtained in Theorem 1, we see that

$$
\frac{2C_f^2}{n\binom{n-1}{2}}\,\|L(\hat G) - L(G^*)\|_F^2 \le \frac{4L\lambda}{|S|}\left(\sqrt{\frac{18|S|\log(n)}{n}} + \frac{\sqrt 3}{3}\log n\right) + L\gamma\sqrt{\frac{288\log(2/\delta)}{|S|}} .
$$

Next we employ the following restricted isometry property of Δ on the subspace J^⊥, whose proof is
in the supplementary materials.

Lemma 3. Let D and D′ be two distance matrices of n points in ℝ^d and ℝ^{d′}, respectively. Let C and
C′ be the components of D and D′ orthogonal to J. Then

$$
n\,\|C - C'\|_F^2 \le \|\Delta(C) - \Delta(C')\|^2 = \|\Delta(D) - \Delta(D')\|^2 \le 2(n-1)\,\|C - C'\|_F^2 .
$$

The result then follows.
This implies that by collecting enough samples, we can recover the centered distance matrix. By
applying the discussion following Theorem 1 when G* is rank d, we can state an upper bound of

$$
\frac{1}{2\binom{n}{2}}\,\|\hat C - C^*\|_F^2 \le O\left(\frac{L\gamma}{C_f^2}\sqrt{\frac{dn\log(n)+\log(1/\delta)}{|S|}}\right).
$$

However, it is still not clear that this is enough to
recover D* or G*. Remarkably, despite this unknown component being in the kernel, we show next
that it can be recovered.
Theorem 4. Let D be a distance matrix of n points in ℝ^d, let C be the component of D orthogonal
to the kernel of L, and let λ₂(C) denote the second largest eigenvalue of C. If n > d + 2, then

$$
D = C - \lambda_2(C)\, J .
\tag{10}
$$

This shows that D is uniquely determined as a function of C. Therefore, since Δ(D) = Δ(C) and
because C is orthogonal to the kernel of Δ, the distance matrix D can be recovered from Δ(D),
even though the linear operator Δ is non-invertible.
We now provide a proof of Theorem 4 in the case where n > d + 3. The result is true in the case
when n > d + 2 but requires a more detailed analysis. This includes the construction of a vector x
such that Dx = 1 and 1ᵀx ≥ 0 for any distance matrix, a result in [17].
Proof. To prove Theorem 4 we need the following lemma, proved in the supplementary materials.

Lemma 4. Let D be a Euclidean distance matrix on n points. Then D is negative semidefinite on
the subspace

$$
\mathbf 1^\perp := \{x \in \mathbb R^n \mid \mathbf 1^\top x = 0\}.
$$

Furthermore, ker(D) ⊂ 1^⊥.

For any matrix M, let λᵢ(M) denote its ith largest eigenvalue. Under the conditions of the theorem,
we show that for σ > 0, λ₂(D − σJ) = −σ. Since C = D − σ_D J, this proves the theorem.
Note that λᵢ(D − σJ) = λᵢ(D − σ11ᵀ) + σ for 1 ≤ i ≤ n and σ arbitrary. So it suffices to show
that λ₂(D − σ11ᵀ) = 0.

By Weyl's theorem,

$$
\lambda_2(D - \sigma\mathbf 1\mathbf 1^\top) \le \lambda_2(D) + \lambda_1(-\sigma\mathbf 1\mathbf 1^\top) .
$$

Since λ₁(−σ11ᵀ) = 0, we have λ₂(D − σ11ᵀ) ≤ λ₂(D). By the Courant–Fischer theorem,

$$
\lambda_2(D) = \min_{U:\,\dim(U)=n-1}\ \max_{x\in U,\,x\neq 0} \frac{x^\top D x}{x^\top x}
\le \max_{x\in \mathbf 1^\perp,\,x\neq 0} \frac{x^\top D x}{x^\top x} \le 0,
$$

since D is negative semidefinite on 1^⊥. Now let vᵢ denote the ith eigenvector of D with eigenvalue
λᵢ = 0. Then

$$
(D - \sigma\mathbf 1\mathbf 1^\top)v_i = Dv_i = 0,
$$

since vᵢᵀ1 = 0 by Lemma 4. So D − σ11ᵀ has at least n − d − 2 zero eigenvalues, since rank D ≤ d + 2.
In particular, if n > d + 3, then D − σ11ᵀ must have at least two eigenvalues equal to 0. Therefore,
λ₂(D − σ11ᵀ) = 0.
The previous theorem along with Theorem 3 guarantees that we can recover G* as we increase
the number of triplets sampled. The final theorem, which follows directly from Theorems 3 and 4,
summarizes this.

Theorem 5. Assume n > d + 2 and consider the setting of Theorems 1 and 2. As |S| → ∞,
D̂ → D*, where D̂ is the distance matrix corresponding to Ĝ (the solution to (8)).

Proof. Recall D̂ = Ĉ − λ₂(Ĉ)J, so as Ĉ → C*, D̂ → D*.
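Theorems 4 and 5 suggest a simple recovery recipe, sketched below (our own code, not from the paper); the sanity check at the end verifies D = C − λ₂(C)J on a random embedding with n > d + 2:

```python
import numpy as np

def center_distance_matrix(D):
    """Split D = C + sigma*J with J = 11^T - I and trace(C J) = 0."""
    n = D.shape[0]
    J = np.ones((n, n)) - np.eye(n)
    sigma = np.sum(D * J) / np.sum(J * J)        # trace(DJ) / ||J||_F^2
    return D - sigma * J, sigma

def recover_from_centered(C):
    """Theorem 4: D = C - lambda_2(C) * J (valid when n > d + 2)."""
    n = C.shape[0]
    lam2 = np.sort(np.linalg.eigvalsh(C))[-2]    # second largest eigenvalue
    return C - lam2 * (np.ones((n, n)) - np.eye(n))

rng = np.random.default_rng(2)
X = rng.standard_normal((2, 10))                 # d = 2, n = 10 > d + 2
X -= X.mean(axis=1, keepdims=True)
g = np.diag(X.T @ X)[:, None]
D = g + g.T - 2 * X.T @ X                        # Euclidean distance matrix
C, _ = center_distance_matrix(D)
assert np.allclose(recover_from_centered(C), D, atol=1e-8)
```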
Figure 1: G* generated with n = 64 points in d = 2 and d = 8 dimensions on the left and right.
4  Experimental Study
This section empirically studies the properties of estimators suggested by our theory. It is not an
attempt to perform an exhaustive empirical evaluation of different embedding techniques; for that see
[18, 4, 6, 3]. In what follows, each of the n points is generated randomly: xᵢ ∼ N(0, (1/(2d)) I_d) ∈ ℝ^d,
i = 1, . . . , n, motivated by the observation that

$$
\mathbb E[|\langle L_t, G^*\rangle|] = \mathbb E\big[\,\big|\|x_i - x_j\|_2^2 - \|x_i - x_k\|_2^2\big|\,\big] \le \mathbb E\big[\|x_i - x_j\|_2^2\big] = 2\,\mathbb E\big[\|x_i\|_2^2\big] = 1
$$

for any triplet t = (i, j, k). We report the prediction error on a holdout set of 10,000 triplets and
the error in Frobenius norm of the estimated Gram matrix over 36 random trials. We minimize the
logistic MLE objective R̂_S(G) = (1/|S|) Σ_{t∈S} log(1 + exp(−y_t⟨L_t, G⟩)).
For each algorithm considered, the domain of the objective variable G is the space of symmetric
positive semi-definite matrices. None of the methods impose the constraint max_{ij}|G_{ij}| ≤ γ (as
done above), since this was used to simplify the analysis and does not have a large impact in practice.
Rank-d Projected Gradient Descent (PGD) performs gradient descent on the objective R̂_S(G) with
line search, projecting onto the subspace spanned by the top d eigenvalues at each step (i.e. setting the
smallest n − d eigenvalues to 0). Nuclear Norm PGD performs gradient descent on R̂_S(G), projecting
onto the nuclear norm ball with radius ‖G*‖_*, where G* is the Gram matrix of the latent embedding.
The nuclear norm projection can have the undesirable effect of shrinking the non-zero eigenvalues
toward the origin. To compensate for this potential bias, we employ Nuclear Norm PGD Debiased,
which takes the biased output of Nuclear Norm PGD, decomposes it into UEUᵀ where U ∈ ℝ^{n×d}
are the top d eigenvectors, and outputs U diag(ŝ)Uᵀ where ŝ = arg min_{s∈ℝ^d} R̂_S(U diag(s)Uᵀ).
This last algorithm is motivated by the observation that methods for minimizing ‖·‖₁ or ‖·‖_* are
good at identifying the true support of a signal, but output biased magnitudes [19]. Rank-d PGD and
Nuclear Norm PGD Debiased are novel ordinal embedding algorithms.
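As an illustration of Rank-d PGD, here is a minimal sketch (ours, not the authors' code; we use a fixed step size rather than the line search used in the experiments, and accumulate the gradient triplet-by-triplet):

```python
import numpy as np

def rank_d_pgd(triplets, labels, n, d, step=1.0, iters=300):
    """Gradient descent on the logistic risk, projecting onto the PSD
    matrices of rank <= d after each step."""
    G = np.zeros((n, n))
    m = len(labels)
    for _ in range(iters):
        grad = np.zeros((n, n))
        for (i, j, k), y in zip(triplets, labels):
            v = G[j, j] - 2 * G[i, j] - G[k, k] + 2 * G[i, k]   # <L_t, G>
            c = -y / (1.0 + np.exp(y * v)) / m    # d/dv of log(1+exp(-y v)), averaged
            grad[j, j] += c
            grad[k, k] -= c
            grad[i, j] -= c; grad[j, i] -= c      # split the -2c symmetrically
            grad[i, k] += c; grad[k, i] += c
        G -= step * grad
        w, V = np.linalg.eigh(G)                  # rank-d PSD projection:
        w[:-d] = 0.0                              # keep only the top d eigenvalues,
        w = np.maximum(w, 0.0)                    # clipped at zero
        G = (V * w) @ V.T
    return G
```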
Figure 1 presents how the algorithms behave for n = 64 and d = 2, 8. We observe that the unbiased
nuclear norm solution behaves near-identically to the rank-d solution, and remark that this was
observed in all of our experiments (see the supplementary materials for other values of n, d, and
scalings of G*). A popular technique for recovering rank-d embeddings is to perform (stochastic)
gradient descent on R̂_S(UUᵀ) with objective variable U ∈ ℝ^{n×d} taken as the embedding [18, 4, 6].
In all of our experiments this method produced Gram matrices nearly identical to those produced by
our Rank-d PGD method, but Rank-d PGD was an order of magnitude faster in our implementation.
Also, in light of our isometry theorem, we can show that the Hessian of E[R̂_S(G)] is nearly a scaled
identity, leading us to hypothesize that a globally optimal linear convergence result for this non-convex optimization may be possible using the techniques of [20, 21]. Finally, we note that previous
literature has reported that nuclear norm optimizations like Nuclear Norm PGD tend to produce less
accurate embeddings than those of non-convex methods [4, 6]. The results imply that Nuclear Norm
PGD Debiased appears to close the performance gap between the convex and non-convex solutions.
Acknowledgments This work was partially supported by the NSF grants CCF-1218189 and IIS1447449, the NIH grant 1 U54 AI117924-01, the AFOSR grant FA9550-13-1-0138, and by ONR
awards N00014-15-1-2620, and N00014-13-1-0129. We would also like to thank Amazon Web
Services for providing the computational resources used for running our simulations.
References
[1] Roger N. Shepard. The analysis of proximities: Multidimensional scaling with an unknown distance function. I. Psychometrika, 27(2):125–140, 1962.
[2] Joseph B. Kruskal. Nonmetric multidimensional scaling: a numerical method. Psychometrika, 29(2):115–129, 1964.
[3] Sameer Agarwal, Josh Wills, Lawrence Cayton, Gert Lanckriet, David J. Kriegman, and Serge Belongie. Generalized non-metric multidimensional scaling. In International Conference on Artificial Intelligence and Statistics, pages 11–18, 2007.
[4] Omer Tamuz, Ce Liu, Ohad Shamir, Adam Kalai, and Serge J. Belongie. Adaptively learning the crowd kernel. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 673–680, 2011.
[5] Kevin G. Jamieson and Robert D. Nowak. Low-dimensional embedding using adaptively selected ordinal data. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pages 1077–1084. IEEE, 2011.
[6] Laurens van der Maaten and Kilian Weinberger. Stochastic triplet embedding. In Machine Learning for Signal Processing (MLSP), 2012 IEEE International Workshop on, pages 1–6. IEEE, 2012.
[7] Brian McFee and Gert Lanckriet. Learning multi-modal similarity. The Journal of Machine Learning Research, 12:491–523, 2011.
[8] Matthäus Kleindessner and Ulrike von Luxburg. Uniqueness of ordinal embedding. In COLT, pages 40–67, 2014.
[9] Yoshikazu Terada and Ulrike V. Luxburg. Local ordinal embedding. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 847–855, 2014.
[10] Ery Arias-Castro. Some theory for ordinal embedding. arXiv preprint arXiv:1501.02861, 2015.
[11] Mark A. Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3), 2014.
[12] Yu Lu and Sahand N. Negahban. Individualized rank aggregation using nuclear norm regularization. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1473–1479. IEEE, 2015.
[13] D. Park, J. Neeman, J. Zhang, S. Sanghavi, and I. Dhillon. Preference completion: Large-scale collaborative ranking from pairwise comparisons. Proc. Int. Conf. Machine Learning (ICML), 2015.
[14] Jon Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2011.
[15] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[16] Joel A. Tropp. An introduction to matrix concentration inequalities, 2015.
[17] Pablo Tarazaga and Juan E. Gallardo. Euclidean distance matrices: new characterization and boundary properties. Linear and Multilinear Algebra, 57(7):651–658, 2009.
[18] Kevin G. Jamieson, Lalit Jain, Chris Fernandez, Nicholas J. Glattard, and Rob Nowak. NEXT: A system for real-world development, evaluation, and application of active learning. In Advances in Neural Information Processing Systems, pages 2638–2646, 2015.
[19] Nikhil Rao, Parikshit Shah, and Stephen Wright. Conditional gradient with enhancement and truncation for atomic norm regularization. In NIPS Workshop on Greedy Algorithms, 2013.
[20] Samet Oymak, Benjamin Recht, and Mahdi Soltanolkotabi. Sharp time–data tradeoffs for linear inverse problems. arXiv preprint arXiv:1507.04793, 2015.
[21] Jie Shen and Ping Li. A tight bound of hard thresholding. arXiv preprint arXiv:1605.01656, 2016.
SDP Relaxation with Randomized Rounding for Energy Disaggregation
Kiarash Shaloudegi
Imperial College London
[email protected]
Csaba Szepesvári
University of Alberta
[email protected]
András György
Imperial College London
[email protected]
Wilsun Xu
University of Alberta
[email protected]
Abstract
We develop a scalable, computationally efficient method for the task of energy
disaggregation for home appliance monitoring. In this problem the goal is to
estimate the energy consumption of each appliance over time based on the total
energy-consumption signal of a household. The current state of the art is to model
the problem as inference in factorial HMMs, and use quadratic programming to
find an approximate solution to the resulting quadratic integer program. Here we
take a more principled approach, better suited to integer programming problems,
and find an approximate optimum by combining convex semidefinite relaxations with
randomized rounding, as well as a scalable ADMM method that exploits the special
structure of the resulting semidefinite program. Simulation results both in synthetic
and real-world datasets demonstrate the superiority of our method.
1 Introduction
Energy efficiency is becoming one of the most important issues in our society. Identifying the
energy consumption of individual electrical appliances in homes can raise awareness of power
consumption and lead to significant saving in utility bills. Detailed feedback about the power
consumption of individual appliances helps energy consumers to identify potential areas for energy
savings, and increases their willingness to invest in more efficient products. Notifying home owners
of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety.
Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters
to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the
mixture of the signals) in households.
The bulk of the research in NILM has mostly concentrated on applying different data mining and
pattern recognition methods to track the footprint of each appliance in total power measurements.
Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012,
Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbor (k-NN)
[Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], or ad-hoc heuristic
methods [Dong et al., 2012] have been employed. Recent works, rather than turning electrical events
into features fed into classifiers, consider the temporal structure of the data [Zia et al., 2011, Kolter
and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015],
resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the
individual appliances by independent hidden Markov models (HMMs), which leads to a factorial
HMM (FHMM) model describing the total consumption.
FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling times series
generated from multiple independent sources, and are great for modeling speech with multiple people
simultaneously talking [Rennie et al., 2009], or energy monitoring which we consider here [Kim et al.,
2011]. Doing exact inference in FHMMs is NP hard; therefore, computationally efficient approximate
methods have been the subject of study. Classic approaches include sampling methods, such as
MCMC or particle filtering [Koller and Friedman, 2009] and variational Bayes methods [Wainwright
and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both methods are nontrivial to make
work and we are not aware of any works that would have demonstrated good results in our application domain, with the type of FHMMs we need to work with, at practical scales.
In this paper we follow the work of Kolter and Jaakkola [2012] to model the NILM problem by
FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of
the output of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions
are small in comparison to the signal length. FHMMs with the first property are called additive. In
this paper we derive an efficient, convex relaxation based method for FHMMs of the above type,
which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting
relaxations to the integer programming formulation of Kolter and Jaakkola [2012]. In particular,
we replace the quadratic programming relaxation of Kolter and Jaakkola, 2012 with a relaxation
to an semi-definite program (SDP), which, based on the literature of relaxations is expected to be
tighter and thus better. While SDPs are convex and could in theory be solved using interior-point
(IP) methods in polynomial time [Malick et al., 2009], IP scales poorly with the size of the problem
and is thus unsuitable to our large scale problem which may involve as many as a million variables. To
address this problem, capitalizing on the structure of our relaxation coming from our FHMM model,
we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization
and combine it with a version of randomized rounding that is inspired by the recent work of
Park and Boyd [2015]. Experiments on synthetic and real data confirm that our method significantly
outperforms other algorithms from the literature, and we expect that it may find its applications in
other FHMM inference problems, too.
1.1 Notation
Throughout the paper, we use the following notation: $\mathbb{R}$ denotes the set of real numbers, $\mathbb{S}^n_+$ denotes the set of $n \times n$ positive semidefinite matrices, $\mathbb{I}_{\{E\}}$ denotes the indicator function of an event $E$ (that is, it is 1 if the event is true and zero otherwise), $\mathbf{1}$ denotes a vector of appropriate dimension whose entries are all 1. For an integer $K$, $[K]$ denotes the set $\{1, 2, \dots, K\}$. $\mathcal{N}(\mu, \Sigma)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. For a matrix $A$, $\operatorname{trace}(A)$ denotes its trace and $\operatorname{diag}(A)$ denotes the vector formed by the diagonal entries of $A$.
2 System Model
Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are $M$ appliances in a household. Each of them is modeled via an HMM: let $P_i \in \mathbb{R}^{K_i \times K_i}$ denote the transition-probability matrix of appliance $i \in [M]$, and assume that for each state $s \in [K_i]$, the energy consumption of the appliance is constant $\mu_{i,s}$ ($\mu_i$ denotes the corresponding $K_i$-dimensional column vector $(\mu_{i,1}, \dots, \mu_{i,K_i})^\top$). Denoting by $x_{t,i} \in \{0,1\}^{K_i}$ the indicator vector of the state $s_{t,i}$ of appliance $i$ at time $t$ (i.e., $x_{t,i,s} = \mathbb{I}_{\{s_{t,i} = s\}}$), the total power consumption at time $t$ is $\sum_{i \in [M]} \mu_i^\top x_{t,i}$, which we assume is observed with some additive zero-mean Gaussian noise of variance $\sigma^2$: $y_t \sim \mathcal{N}\big(\sum_{i \in [M]} \mu_i^\top x_{t,i}, \sigma^2\big)$.¹
Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the log-posterior function
$$\operatorname*{arg\,min}_{x_{t,i}} \; \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top (\log P_i)\, x_{t+1,i} \qquad (1)$$
$$\text{subject to} \quad x_{t,i} \in \{0,1\}^{K_i}, \;\; \mathbf{1}^\top x_{t,i} = 1, \;\; i \in [M] \text{ and } t \in [T],$$
¹ Alternatively, we can assume that the power consumption $y_{t,i}$ of each appliance is normally distributed with mean $\mu_i^\top x_{t,i}$ and variance $\sigma_i^2$, where $\sigma^2 = \sum_{i \in [M]} \sigma_i^2$, and $y_t = \sum_{i \in [M]} y_{t,i}$.
where $\log P_i$ denotes a matrix obtained from $P_i$ by taking the logarithm of each entry.
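As a concrete illustration of this generative model, the sketch below samples a sequence from an additive FHMM. The parameter names and the uniform initial state distribution are our own assumptions for illustration, not part of the model specification above.

```python
import numpy as np

def sample_additive_fhmm(P_list, mu_list, T, sigma, rng=None):
    """Sample observations y_1..y_T from an additive factorial HMM.

    P_list[i]  : (K_i x K_i) row-stochastic transition matrix of appliance i
    mu_list[i] : length-K_i vector of per-state power levels of appliance i
    """
    rng = np.random.default_rng(rng)
    M = len(P_list)
    # assume a uniform initial state distribution for each chain
    states = [rng.integers(len(mu)) for mu in mu_list]
    y = np.empty(T)
    for t in range(T):
        total = sum(mu_list[i][states[i]] for i in range(M))
        y[t] = total + sigma * rng.standard_normal()
        # advance each chain independently according to its transition matrix
        states = [rng.choice(len(mu_list[i]), p=P_list[i][states[i]])
                  for i in range(M)]
    return y
```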
In our particular application, in addition to the signal's temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
Formally, let $\Delta y_t = y_{t+1} - y_t$, $\Delta\mu^{(i)}_{m,k} = \mu_{i,k} - \mu_{i,m}$, and define the matrices $E_{t,i} \in \mathbb{R}^{K_i \times K_i}$ by $(E_{t,i})_{m,k} = -\big(\Delta y_t - \Delta\mu^{(i)}_{m,k}\big)^2 / (2\sigma_{\mathrm{diff}}^2)$, for some constant $\sigma_{\mathrm{diff}} > 0$. Intuitively, $-(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_t$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma_{\mathrm{diff}}^2$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term $-\big(\sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top E_{t,i}\, x_{t+1,i}\big)$ to the objective of (1), arriving at
$$\operatorname*{arg\,min}_{x_{t,i}} \; f(x_1, \dots, x_T) := \sum_{t=1}^{T} \frac{\big(y_t - \sum_{i=1}^{M} x_{t,i}^\top \mu_i\big)^2}{2\sigma^2} \;-\; \sum_{t=1}^{T-1} \sum_{i=1}^{M} x_{t,i}^\top \big(E_{t,i} + \log P_i\big)\, x_{t+1,i} \qquad (2)$$
$$\text{subject to} \quad x_{t,i} \in \{0,1\}^{K_i}, \;\; \mathbf{1}^\top x_{t,i} = 1, \;\; i \in [M] \text{ and } t \in [T].$$
In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.
3 SDP Relaxation and Randomized Rounding
There are two major challenges to solve the optimization problem (2) exactly: (i) the optimization is
over binary vectors xt,i ; and (ii) the objective function f , even when considering its extension to a
convex domain, is in general non-convex (due to the second term). As a remedy we will relax (2) to
make it an integer quadratic programming problem, then apply an SDP relaxation and randomized
rounding to solve approximately the relaxed problem. We start with reviewing the latter methods.
3.1 Approximate Solutions for Integer Quadratic Programming
In this section we consider approximate solutions to the integer quadratic programming problem
$$\text{minimize} \quad f(x) = x^\top D x + 2 d^\top x \qquad \text{subject to} \quad x \in \{0,1\}^n, \qquad (3)$$
where $D \in \mathbb{S}^n_+$ is positive semidefinite, and $d \in \mathbb{R}^n$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large scale problems.
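The exponential scaling is easy to make explicit with a brute-force baseline; the sketch below is our own illustration, usable only for very small $n$.

```python
import numpy as np
from itertools import product

def exact_binary_qp(D, d):
    """Exhaustively minimize x^T D x + 2 d^T x over x in {0,1}^n (O(2^n) time)."""
    n = len(d)
    best_x, best_val = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ D @ x + 2 * d @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```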
One way to avoid exponential running times is to replace (3) with a convex problem with the hope that the solutions of the convex problem can serve as a good starting point to find high-quality solutions to (3). The standard approach to this is to linearize (3) by introducing a new variable $X \in \mathbb{S}^n_+$ tied to $x$ through $X = xx^\top$, so that $x^\top D x = \operatorname{trace}(DX)$, and then relax the nonconvex constraints $X = xx^\top$, $x \in \{0,1\}^n$ to $X \succeq xx^\top$, $\operatorname{diag}(X) = x$, $x \in [0,1]^n$. This leads to the relaxed SDP problem
$$\text{minimize} \quad \operatorname{trace}(D^\top X) + 2 d^\top x \qquad \text{subject to} \quad \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0, \;\; \operatorname{diag}(X) = x, \;\; x \in [0,1]^n. \qquad (4)$$
By introducing $\widetilde{X} = \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix}$ this can be written in the compact SDP form
$$\text{minimize} \quad \operatorname{trace}(\widetilde{D}^\top \widetilde{X}) \qquad \text{subject to} \quad \widetilde{X} \succeq 0, \;\; \mathcal{A}\widetilde{X} = b, \qquad (5)$$
where $\widetilde{D} = \begin{bmatrix} 0 & d^\top \\ d & D \end{bmatrix} \in \mathbb{S}^{n+1}_+$, $b \in \mathbb{R}^m$ and $\mathcal{A} : \mathbb{S}^{n+1}_+ \to \mathbb{R}^m$ is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both the running time and the required memory if either the number of variables or the optimization constraints are large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5.
? =
where D
Note that introducing the new variable X, the problem is projected into a higher dimensional space,
which is computationally more challenging than just simply relaxing the integrality constraint in (3),
but leads to a tighter approximation of the optimum (c.f., Park and Boyd, 2015; see also Lov?sz and
Schrijver, 1991, Burer and Vandenbussche, 2006).
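As an illustration, the relaxation (4) can be stated directly in a modeling language. The sketch below uses the open-source CVXPY package with its default solver, which is our own choice for small instances; the paper's large-scale setting instead calls for the ADMM solver of Section 5.

```python
import cvxpy as cp
import numpy as np

def sdp_relaxation(D, d):
    """Solve the SDP relaxation (4) of the binary quadratic program (3)."""
    n = d.shape[0]
    x = cp.Variable(n)
    X = cp.Variable((n, n), symmetric=True)
    # Schur-complement form of the constraint X >= x x^T
    M = cp.bmat([[np.ones((1, 1)), cp.reshape(x, (1, n))],
                 [cp.reshape(x, (n, 1)), X]])
    constraints = [M >> 0, cp.diag(X) == x, x >= 0, x <= 1]
    prob = cp.Problem(cp.Minimize(cp.trace(D.T @ X) + 2 * d @ x), constraints)
    prob.solve()
    return x.value, X.value
```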
To obtain a feasible point of (3) from the solution of (5), we still need to change the solution $x$ to a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: Instead of letting $x \in [0,1]^n$, the integrality constraint $x \in \{0,1\}^n$ in (3) can be replaced by the inequalities $x_i(x_i - 1) \ge 0$ for all $i \in [n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem
$$\text{minimize} \quad \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\big[w^\top D w + 2 d^\top w\big] \qquad \text{subject to} \quad \mathbb{E}_{w \sim \mathcal{N}(\mu,\Sigma)}\big[w_i(w_i - 1)\big] \ge 0, \;\; i \in [n],$$
is equivalent to
$$\text{minimize} \quad \operatorname{trace}\big((\Sigma + \mu\mu^\top) D\big) + 2 d^\top \mu \qquad \text{subject to} \quad \Sigma_{i,i} + \mu_i^2 - \mu_i \ge 0, \;\; i \in [n], \;\; \mu \in \mathbb{R}^n, \;\; \Sigma \succeq 0, \qquad (6)$$
which is in the form of (4) with $X = \Sigma + \mu\mu^\top$ and $x = \mu$ (above, $\mathbb{E}_{x \sim P}[f(x)]$ stands for $\int f(x)\, dP(x)$). This leads to the rounding procedure: starting from a solution $(x^\star, X^\star)$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^\star, X^\star - x^\star x^{\star\top})$, round $w^{(j)}_i$ to 0 or 1 to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than just naively rounding the coordinates of $x^\star$.
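The following NumPy sketch illustrates this rounding step under our reading of the procedure; the number of samples and the handling of a possibly indefinite covariance (via eigenvalue clipping) are our own assumptions.

```python
import numpy as np

def randomized_rounding(x_star, X_star, D, d, n_samples=100, rng=None):
    """Draw Gaussian samples around the SDP solution, round, keep the best."""
    rng = np.random.default_rng(rng)
    cov = X_star - np.outer(x_star, x_star)
    # clip tiny negative eigenvalues so the covariance is usable for sampling
    w, V = np.linalg.eigh(cov)
    L = V * np.sqrt(np.maximum(w, 0.0))
    best_x, best_val = None, np.inf
    for _ in range(n_samples):
        z = x_star + L @ rng.standard_normal(len(x_star))
        x = np.clip(np.rint(z), 0, 1)          # round to {0, 1}
        val = x @ D @ x + 2 * d @ x            # objective of (3)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```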
4 An Efficient Algorithm for Inference in FHMMs
To arrive at our method we apply the results of the previous subsection to (2). To do so, as mentioned at the beginning of the section, we need to change the problem to a convex one, since the elements of the second term in the objective of (2), $x_{t,i}^\top (E_{t,i} + \log P_i)\, x_{t+1,i}$, are not convex. To address this issue, we relax the problem by introducing new variables $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ and replace the constraint $Z_{t,i} = x_{t,i} x_{t+1,i}^\top$ with two new ones:
$$Z_{t,i}\,\mathbf{1} = x_{t,i} \quad \text{and} \quad Z_{t,i}^\top\,\mathbf{1} = x_{t+1,i}.$$
To simplify the presentation, we will assume that $K_i = K$ for all $i \in [M]$. Then problem (2) becomes
$$\operatorname*{arg\,min}_{x_{t,i}} \; \sum_{t=1}^{T} \frac{1}{2\sigma^2}\big(y_t - x_t^\top \mu\big)^2 \;-\; \sum_{t=1}^{T-1} p_t^\top z_t \qquad (7)$$
subject to
$$x_t \in \{0,1\}^{MK}, \quad \mathbf{1}^\top x_{t,i} = 1, \quad t \in [T] \text{ and } i \in [M],$$
$$z_t \in \{0,1\}^{MKK}, \quad t \in [T-1],$$
$$Z_{t,i}\,\mathbf{1} = x_{t,i}, \quad Z_{t,i}^\top\,\mathbf{1} = x_{t+1,i}, \quad t \in [T-1] \text{ and } i \in [M],$$
Algorithm 1 ADMM-RR: Randomized rounding algorithm for suboptimal solution to (2)
Given: number of iterations: itermax, length of input data: T
Solve the optimization problem (8): run Algorithm 2 to get $X_t^\star$ and $z_t^\star$
Set $x_t^{\text{best}} := z_t^\star$ and $X_t^{\text{best}} := X_t^\star$ for $t = 1, \dots, T$
for $t = 2, \dots, T-1$ do
  Set $x := [x_{t-1}^{\text{best}\top}, x_t^{\text{best}\top}, x_{t+1}^{\text{best}\top}]^\top$
  Set $X := \operatorname{block}(X_{t-1}^{\text{best}}, X_t^{\text{best}}, X_{t+1}^{\text{best}})$, where $\operatorname{block}(\cdot, \cdot, \cdot)$ constructs a block diagonal matrix from its input arguments
  Set $f^{\text{best}} := \infty$
  Form the covariance matrix $\Sigma := X - xx^\top$ and find its Cholesky factorization $LL^\top = \Sigma$.
  for $k = 1, 2, \dots, \text{itermax}$ do
    Random sampling: $z^k := x + Lw$, where $w \sim \mathcal{N}(0, I)$
    Round $z^k$ to the nearest integer point $x^k$ that satisfies the constraints of (7)
    If $f^{\text{best}} > f_t(x^k)$ then update $x_t^{\text{best}}$ and $X_t^{\text{best}}$ from the corresponding entries of $x^k$ and $x^k x^{k\top}$, respectively
  end for
end for
where $x_t^\top = [x_{t,1}^\top, \dots, x_{t,M}^\top]$, $\mu^\top = [\mu_1^\top, \dots, \mu_M^\top]$, $z_t = [\operatorname{vec}(Z_{t,1})^\top, \dots, \operatorname{vec}(Z_{t,M})^\top]^\top$ and $p_t^\top = [\operatorname{vec}(E_{t,1} + \log P_1)^\top, \dots, \operatorname{vec}(E_{t,M} + \log P_M)^\top]$, with $\operatorname{vec}(A)$ denoting the column vector obtained by concatenating the columns of $A$ for a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²
relaxation method of Section 3.1, we get the following SDP problem:2
arg min
Xt ,zt
subject to
T
X
trace(Dt> Xt ) + d>
t zt
t=1
AXt = b,
Xt ? 0,
BXt + Czt + EXt+1 = g,
Xt , zt 0 .
0
(8)
0
K+1
K+1
Here A : SM
! Rm , B, E : SM
! Rm and C 2 RM KK?m are all appropriate linear
+
+
0
operators, ?and the integers m and m are determined by the number of equality constraints, while
0
yt ? >
Dt = 2 1 2
and dt = pt . Notice that (8) is a simple, though huge-dimensional SDP
yt ?
??>
? has a special block structure.
problem in the form of (5) where D
Next we apply the randomized rounding method from Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution $(z^\star, X^\star)$ of (8), and utilizing that we have an SDP problem for each time step $t$, we obtain Algorithm 1 that performs the rounding sequentially for $t = 1, 2, \dots, T$. However, we run the randomized method for three consecutive time steps, since $X_t$ appears at both time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation (9)). Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x^k$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
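A minimal sketch of this greedy refinement is given below; the (T x M) integer state encoding and the exhaustive neighbor enumeration are our own illustrative assumptions.

```python
import numpy as np

def greedy_refine(x, objective, num_states):
    """Flip one appliance's state at one time step while the objective improves.

    x : (T, M) integer array of appliance states; objective maps x to a scalar.
    """
    best = objective(x)
    improved = True
    while improved:
        improved = False
        T, M = x.shape
        for t in range(T):
            for i in range(M):
                old = x[t, i]
                for s in range(num_states):
                    if s == old:
                        continue
                    x[t, i] = s
                    val = objective(x)
                    if val < best:
                        best, old, improved = val, s, True
                x[t, i] = old          # keep the best state found for (t, i)
    return x, best
```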
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems
Given the relaxation and randomized rounding presented in the previous subsection, all that remains is to find $X_t^\star, z_t^\star$ to initialize Algorithm 1. Although interior point methods can solve SDP problems efficiently, even for problems with sparse constraints as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5} \log(1/\epsilon)$ [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case since the number of variables scales linearly with the time horizon $T$.
As an alternative solution, first-order methods can be used for large scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem where the objective function is separable, ADMM is a promising candidate to find a near-optimal solution. To apply ADMM, we use the Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we
quadratic regularization [Malick et al., 2009], which is well suited for the primal formulation we
2
The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
5
Algorithm 2 ADMM for sparse SDPs of the form (8)
Given: length of input data: T , number of iterations: itermax.
Set the initial values to zero. Wt0 , Pt0 , S 0 = 0, 0t = 0, ?t0 = 0, and rt0 , h0t = 0
Set ? = 0.001 {Default step-size value}
for k = 0, 1, . . . , itermax do
for t = 1, 2, . . . , T do
Update Ptk , Wtk , k , Stk , rtk , hkt , and ?tk , respectively, according to (11) (Appendix A).
end for
end for
consider. When implementing ADMM over the variables $(X_t, z_t)_t$, the sparse structure of our constraints allows us to consider the SDP problems for each time step $t$ sequentially:
$$\operatorname*{arg\,min}_{X_t,\, z_t} \; \operatorname{trace}(D_t^\top X_t) + d_t^\top z_t \qquad (9)$$
$$\text{subject to} \quad \mathcal{A} X_t = b, \quad \mathcal{B} X_t + \mathcal{C} z_t + \mathcal{E} X_{t+1} = g, \quad \mathcal{B} X_{t-1} + \mathcal{C} z_{t-1} + \mathcal{E} X_t = g, \quad X_t \succeq 0, \quad X_t, z_t \ge 0.$$
The regularized Lagrangian function for (9) is³
$$\begin{aligned} L_\mu = \; & \operatorname{trace}(D^\top X) + d^\top z + \frac{1}{2\mu}\|X - S\|_F^2 + \frac{1}{2\mu}\|z - r\|_2^2 + \lambda^\top (b - \mathcal{A}X) \\ & + \nu^\top (g - \mathcal{B}X - \mathcal{C}z - \mathcal{E}X_+) + \nu_-^\top (g - \mathcal{B}X_- - \mathcal{C}z_- - \mathcal{E}X) \\ & - \operatorname{trace}(W^\top X) - \operatorname{trace}(P^\top X) - h^\top z, \end{aligned} \qquad (10)$$
where $\lambda$, $\nu$, $W \ge 0$, $P \succeq 0$, and $h \ge 0$ are dual variables, and $\mu > 0$ is a constant. By taking the derivatives of $L_\mu$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each $t$ sequentially, is given by Algorithm 2.
Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2) and
thus also to the inference problem of additive FHMMs.
6 Learning the Model
The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrix, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters.
However, when it comes to the specific application of NILM, the problem of unknown, time-varying bias also needs to be addressed, which appears due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a "generic model" whose contribution to the objective function is downweighted. Surprisingly, incorporating this idea in the FHMM inference creates some unexpected challenges.⁴
Therefore, in this work we come up with a practical, heuristic solution tailored to NILM. First we identify all electric events defined by a large change $\Delta y_t$ in the power usage (using some ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K - 1$ clusters, and an HMM model is built where each cluster is regarded as power usage coming from a single state of the unregistered appliances. We also allow an "off state" with power usage 0.
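A minimal sketch of this event-detection step might look as follows; the threshold value, the use of k-means, and the function names are our own illustrative choices rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def generic_appliance_levels(y, level_changes, K, threshold=100.0):
    """Cluster unexplained power jumps into K-1 levels for a generic HMM."""
    dy = np.diff(y)                                   # Delta y_t = y_{t+1} - y_t
    events = dy[np.abs(dy) > threshold]               # ad-hoc event threshold
    # discard jumps explained by a registered appliance's level change
    known = np.asarray(level_changes)
    unexplained = np.array([e for e in events
                            if np.min(np.abs(known - e)) > threshold / 2])
    labels = KMeans(n_clusters=K - 1).fit(unexplained.reshape(-1, 1))
    # cluster centers give the power levels of the generic model's states
    return np.sort(labels.cluster_centers_.ravel())
```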
³ We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
⁴ For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion of this.
7 Experimental Results
We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, while we used the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012] and that of Zhong et al. [2014]; we shall refer to the last two algorithms as KJ and ZGS, respectively.
7.1 Experimental Results: Synthetic Data
The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use normalized disaggregation error as suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014]. This measures the reconstruction error for each individual appliance. Given the true output $y_{t,i}$ and the estimated output $\hat{y}_{t,i}$ (i.e., $\hat{y}_{t,i} = \mu_i^\top \hat{x}_{t,i}$), the error measure is defined as
$$\mathrm{NDE} = \sqrt{\sum_{t,i} (y_{t,i} - \hat{y}_{t,i})^2 \Big/ \sum_{t,i} (y_{t,i})^2}\,.$$
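For reference, the error measure is straightforward to compute; the array layout below (time x appliance) is our own assumption.

```python
import numpy as np

def normalized_disaggregation_error(y_true, y_hat):
    """NDE over (T x M) arrays of true and estimated per-appliance power."""
    return np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))
```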
Figures 1 and 2 show the performance of the algorithms as the number of HMMs ($M$) (resp., number of states, $K$) is varied. Each plot is a report for $T = 1000$ steps averaged over 100 random models and realizations, showing the mean and standard deviation of NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of each 250 iterations, and chooses the result that has the maximum likelihood. ADMM is the algorithm which applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtained better results than its competitors, KJ coming second and ZGS third.
[Figure 1: two panels plotting normalized error (mean and standard deviation) against the number of HMMs, comparing ADMM-RR, KJ method, ADMM, Variational Approx., and ZGS method; number of states: 3, data length T = 1000, 100 samples.]
Figure 1: Disaggregation error varying the number of HMMs.
[Figure 2: two panels plotting normalized error (mean and standard deviation) against the number of states, for the same algorithms; number of appliances: 5, data length T = 1000, 100 samples.]
Figure 2: Disaggregation error varying the number of states.
7.2 Experimental Results: Real Data
In this section, we also compared the 3 best methods on the real dataset REDD [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e.,
⁵ Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.
Appliance           | ADMM-RR       | KJ method     | ZGS method
1  Oven-3           | 61.70/78.30%  | 27.62/72.32%  |  5.35/15.04%
2  Fridge           | 90.22/97.63%  | 41.20/97.46%  | 46.89/87.10%
3  Microwave        | 12.40/74.74%  | 13.40/96.32%  |  4.55/45.07%
4  Bath. GFI-12     | 50.88/60.25%  | 12.87/51.46%  |  6.16/42.67%
5  Kitch. Out.-15   | 69.23/98.85%  | 16.66/79.47%  |  5.69/26.72%
6  Wash./Dry.-20-A  | 98.23/93.80%  | 70.41/98.19%  | 15.91/35.51%
7  Unregistered-A   | 94.27/87.80%  | 85.35/25.91%  | 57.43/99.31%
8  Oven-4           | 25.41/76.37%  | 13.60/78.59%  |  9.52/12.05%
9  Dishwasher-6     | 54.53/90.91%  | 25.20/98.72%  | 29.42/31.01%
10 Wash./Dryer-10   | 21.92/63.58%  | 18.63/25.79%  |  7.79/3.01%
11 Kitch. Out.-16   | 17.88/79.04%  |  8.87/100%    |  0.00/0.00%
12 Wash./Dry.-20-B  | 98.19/28.31%  | 72.13/77.10%  | 27.44/71.25%
13 Unregistered-B   | 97.78/91.73%  | 96.92/73.97%  | 33.63/99.98%
Average             | 60.97/78.56%  | 38.68/75.02%  | 17.97/36.22%
Table 1: Comparing the disaggregation performance of three different algorithms: precision/recall. Bold numbers represent statistically better performance on both measures.
appliance) is trained separately using the associated circuit level data, and the HMM corresponding
to unregistered appliances is trained using the main panel data. In this set of experiments we monitor
appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, and the local
search is run at the end of each 250 iterations, and the result with the largest likelihood is chosen.
To be able to use the ZGS method on this data, we need to have some prior information about the
usage of each appliance; the authors' suggestion is to use national energy surveys, but in the lack of
this information (also about the number of residents, type of houses, etc.) we used the training data to
extract this prior knowledge, which is expected to help this method.
Detailed results about the precision and recall of estimating which appliances are "on" at any given
time are given in Table 1. In Appendix D we also report the error of the total power usage assigned
to different appliances (Table 2), as well as the amount of assigned power to each appliance as
a percentage of total power (Figure 3). As a summary, we can see that our method consistently
outperformed the others, achieving an average precision and recall of 60.97% and 78.56%, with about
50% better precision than KJ with essentially the same recall (38.68/75.02%), while significantly
improving upon ZGS (17.97/36.22%). Considering the error in assigning the power consumption to
different appliances, our method achieved about 30–35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors.
In our real-data experiments, there are about 1 million decision variables: M = 7 or 6 appliances
(for phase A and B power, respectively) with K = 4 states each and for about T = 30, 000 time
steps for one day, 1 sample every 6 seconds. KJ and ZGS solve quadratic programs, increasing their
memory usage (14GB vs 6GB in our case). On the other hand, our implementation of their method,
using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5
minutes, while our algorithm, which is purely Matlab-based takes 5 hours to finish. We expect that an
optimized C++ version of our method could achieve a significant speed-up compared to our current
implementation.
8 Conclusion
FHMMs are widely used in energy disaggregation. However, the resulting model has a huge
(factored) state space, making standard inference FHMM algorithms infeasible even for only a
handful of appliances. In this paper we developed a scalable approximate inference algorithm, based
on a semidefinite relaxation combined with randomized rounding, which significantly outperformed
the state of the art in our experiments. A crucial component of our solution is a scalable ADMM
method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a
good initialization for randomized rounding. We expect that our method may prove useful in solving
other FHMM inference problems, as well as in large scale integer quadratic programming.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity
Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.
References
A. Anandkumar, D. Hsu, and S. M. Kakade. A Method of Moments for Mixture Models and Hidden Markov Models. In COLT, volume 23, pages 33.1–33.34, 2012.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. FTML, 3(1):1–122, 2011.
S. Burer and D. Vandenbussche. Solving Lift-and-Project Relaxations of Binary Integer Programs. SIAM Journal on Optimization, 16(3):726–750, 2006.
H.-H. Chang, K.-L. Chen, Y.-P. Tsai, and W.-J. Lee. A New Measurement Method for Power Signatures of Nonintrusive Demand Monitoring and Load Identification. IEEE T. on Industry Applications, 48:764–771, 2012.
M. Dong, P. C. M. Meira, W. Xu, and W. Freitas. An Event Window Based Load Monitoring Technique for Smart Meters. IEEE Transactions on Smart Grid, 3(2):787–796, June 2012.
M. Dong, Meira, W. Xu, and C. Y. Chung. Non-Intrusive Signature Extraction for Major Residential Loads. IEEE Transactions on Smart Grid, 4(3):1421–1430, Sept. 2013.
D. Egarter, V. P. Bhuvana, and W. Elmenreich. PALDi: Online Load Disaggregation via Particle Filtering. IEEE Transactions on Instrumentation and Measurement, 64(2):467–477, 2015.
M. Figueiredo, A. de Almeida, and B. Ribeiro. Home Electrical Signal Disaggregation for Non-intrusive Load Monitoring (NILM) Systems. Neurocomputing, 96:66–73, Nov. 2012.
Z. Ghahramani and M. Jordan. Factorial Hidden Markov Models. Machine Learning, 29(2):245–273, 1997.
M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. J. of the ACM, 42(6):1115–1145, 1995.
Z. Guo, Z. J. Wang, and A. Kashani. Home Appliance Load Modeling From Aggregated Smart Meter Data. IEEE Transactions on Power Systems, 30(1):254–262, Jan. 2015.
J. Kelly and W. Knottenbelt. Neural NILM: Deep Neural Networks Applied to Energy Disaggregation. In BuildSys, pages 55–64, 2015.
H. Kim, M. Marwah, M. F. Arlitt, G. Lyon, and J. Han. Unsupervised Disaggregation of Low Frequency Power Measurements. In ICDM, volume 11, pages 747–758, 2011.
D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, 2009.
J. Z. Kolter and T. Jaakkola. Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation. In AISTATS, pages 1472–1482, 2012.
J. Z. Kolter and M. J. Johnson. REDD: A Public Data Set for Energy Disaggregation Research. In Workshop on Data Mining Applications in Sustainability (SIGKDD), pages 59–62, 2011.
J. Z. Kolter, S. Batra, and A. Y. Ng. Energy Disaggregation via Discriminative Sparse Coding. In Advances in Neural Information Processing Systems, pages 1153–1161, 2010.
A. Kontorovich, B. Nadler, and R. Weiss. On Learning Parametric-Output HMMs. In ICML, pages 702–710, 2013.
J. Liang, S. K. K. Ng, G. Kendall, and J. W. M. Cheng. Load Signature Study – Part I: Basic Concept, Structure, and Methodology. IEEE Transactions on Power Delivery, 25(2):551–560, Apr. 2010.
J. Löfberg. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. In CACSD, 2004.
L. Lovász and A. Schrijver. Cones of Matrices and Set-functions and 0-1 Optimization. SIAM Journal on Optimization, 1(2):166–190, 1991.
J. Malick, J. Povh, F. Rendl, and A. Wiegele. Regularization Methods for Semidefinite Programming. SIAM Journal on Optimization, 20(1):336–356, Jan. 2009. ISSN 1052-6234, 1095-7189.
C. Mattfeld. Implementing spectral methods for hidden Markov models with real-valued emissions. arXiv preprint arXiv:1404.7472, 2014.
Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2004.
J. Park and S. Boyd. A Semidefinite Programming Method for Integer Convex Quadratic Minimization. arXiv preprint arXiv:1504.07672, 2015.
A. Prudenzi. A neuron nets based procedure for identifying domestic appliances pattern-of-use from energy recordings at meter panel. In PESW, volume 2, pages 941–946, 2002.
S. J. Rennie, J. R. Hershey, and P. Olsen. Single-channel speech separation and recognition using loopy belief propagation. In ICASSP, pages 3845–3848, 2009.
M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. FTML, 1(1–2):1–305, 2007.
M. Weiss, A. Helfenstein, F. Mattern, and T. Staake. Leveraging smart meter data to recognize home appliances. In PerCom, pages 190–197, 2012.
Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming. Mathematical Programming Computation, 2(3-4):203–230, Dec. 2010.
M. Zhong, N. Goddard, and C. Sutton. Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation. In NIPS, pages 3590–3598, 2014.
T. Zia, D. Bruckner, and A. Zaidi. A hidden Markov model based procedure for identifying household electric loads. In IECON, pages 3218–3223, 2011.
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
Andreas Veit
Michael Wilber
Serge Belongie
Department of Computer Science & Cornell Tech
Cornell University
{av443, mjw285, sjb344}@cornell.edu
Abstract
In this work we propose a novel interpretation of residual networks showing that
they can be seen as a collection of many paths of differing length. Moreover,
residual networks seem to enable very deep networks by leveraging only the short
paths during training. To support this observation, we rewrite residual networks as
an explicit collection of paths. Unlike traditional models, paths through residual
networks vary in length. Further, a lesion study reveals that these paths show
ensemble-like behavior in the sense that they do not strongly depend on each other.
Finally, and most surprising, most paths are shorter than one might expect, and
only the short paths are needed during training, as longer paths do not contribute
any gradient. For example, most of the gradient in a residual network with 110
layers comes from paths that are only 10-34 layers deep. Our results reveal one
of the key characteristics that seem to enable the training of very deep networks:
Residual networks avoid the vanishing gradient problem by introducing short paths
which can carry gradient throughout the extent of very deep networks.
1 Introduction
Most modern computer vision systems follow a familiar architecture, processing inputs from low-level features up to task specific high-level features. Recently proposed residual networks [5, 6]
challenge this conventional view in three ways. First, they introduce identity skip-connections that
bypass residual layers, allowing data to flow from any layers directly to any subsequent layers. This
is in stark contrast to the traditional strictly sequential pipeline. Second, skip connections give rise to
networks that are two orders of magnitude deeper than previous models, with as many as 1202 layers.
This is contrary to architectures like AlexNet [13] and even biological systems [17] that can capture
complex concepts within half a dozen layers.1 Third, in initial experiments, we observe that removing
single layers from residual networks at test time does not noticeably affect their performance. This
is surprising because removing a layer from a traditional architecture such as VGG [18] leads to a
dramatic loss in performance.
In this work we investigate the impact of these differences. To address the influence of identity skipconnections, we introduce the unraveled view. This novel representation shows residual networks
can be viewed as a collection of many paths instead of a single deep network. Further, the perceived
resilience of residual networks raises the question whether the paths are dependent on each other or
whether they exhibit a degree of redundancy. To find out, we perform a lesion study. The results show
ensemble-like behavior in the sense that removing paths from residual networks by deleting layers or
corrupting paths by reordering layers only has a modest and smooth impact on performance. Finally,
we investigate the depth of residual networks. Unlike traditional models, paths through residual
networks vary in length. The distribution of path lengths follows a binomial distribution, meaning
¹ Making the common assumption that a layer in a neural network corresponds to a cortical area.
that the majority of paths in a network with 110 layers are only about 55 layers deep. Moreover, we
show most gradient during training comes from paths that are even shorter, i.e., 10-34 layers deep.
This reveals a tension. On the one hand, residual network performance improves with adding more
and more layers [6]. However, on the other hand, residual networks can be seen as collections of
many paths and the only effective paths are relatively shallow. Our results could provide a first
explanation: residual networks do not resolve the vanishing gradient problem by preserving gradient
flow throughout the entire depth of the network. Rather, they enable very deep networks by shortening
the effective paths. For now, short paths still seem necessary to train very deep networks.
In this paper we make the following contributions:
• We introduce the unraveled view, which illustrates that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network.
• We perform a lesion study to show that these paths do not strongly depend on each other, even though they are trained jointly. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths.
• We investigate the gradient flow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training.
2 Related Work
The sequential and hierarchical computer vision pipeline Visual processing has long been understood to follow a hierarchical process from the analysis of simple to complex features. This
formalism is based on the discovery of the receptive field [10], which characterizes the visual system
as a hierarchical and feedforward system. Neurons in early visual areas have small receptive fields
and are sensitive to basic visual features, e.g., edges and bars. Neurons in deeper layers of the
hierarchy capture basic shapes, and even deeper neurons respond to full objects. This organization
has been widely adopted in the computer vision and machine learning literature, from early neural
networks such as the Neocognitron [4] and the traditional hand-crafted feature pipeline of Malik and
Perona [15] to convolutional neural networks [13, 14]. The recent strong results of very deep neural
networks [18, 20] led to the general perception that it is the depth of neural networks that governs their
expressive power and performance. In this work, we show that residual networks do not necessarily
follow this tradition.
Residual networks [5, 6] are neural networks in which each layer consists of a residual module f_i and a skip connection bypassing f_i; here we consider identity skip connections, though the framework readily generalizes to more complex projection skip connections when downsampling is required. Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of
notation, we omit the initial pre-processing and final classification steps. With y_{i-1} as its input, the output of the i-th block is recursively defined as
y_i ≡ f_i(y_{i-1}) + y_{i-1},    (1)
where fi (x) is some sequence of convolutions, batch normalization [11], and Rectified Linear Units
(ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent
formulation of residual networks [6], fi (x) is defined by
f_i(x) ≡ W_i ∗ σ(B(W_i′ ∗ σ(B(x)))),    (2)
where W_i and W_i′ are weight matrices, ∗ denotes convolution, B(x) is batch normalization, and σ(x) ≡ max(x, 0). Other formulations are typically composed of the same operations, but may differ in their order.
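To make Equations (1) and (2) concrete, here is a minimal sketch of a pre-activation residual block in PyTorch; the channel count, kernel size, and the toy three-block stack are illustrative placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """One residual block: y = x + f(x), with f the pre-activation branch of
    Equation (2): conv(relu(bn(conv(relu(bn(x))))))."""

    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))    # first BN -> ReLU -> conv
        out = self.conv2(torch.relu(self.bn2(out)))  # second BN -> ReLU -> conv
        return x + out                               # identity skip, Equation (1)

# A toy stack of three blocks, as in the unraveled view of Figure 1.
blocks = nn.Sequential(*[PreActResidualBlock(16) for _ in range(3)])
y = blocks(torch.randn(1, 16, 8, 8))
```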
The idea of branching paths in neural networks is not new. For example, in the regime of convolutional
neural networks, models based on inception modules [20] were among the first to arrange layers in
blocks with parallel paths rather than a strict sequential order. We choose residual networks for this
study because of their simple design principle.
Highway networks Residual networks can be viewed as a special case of highway networks [19].
The output of each layer of a highway network is defined as
y_{i+1} ≡ f_{i+1}(y_i) · t_{i+1}(y_i) + y_i · (1 - t_{i+1}(y_i))    (3)
Figure 1: Residual Networks are conventionally shown as (a), which is a natural representation of
Equation (1). When we expand this formulation to Equation (6), we obtain an unraveled view of a
3-block residual network (b). Circular nodes represent additions. From this view, it is apparent that
residual networks have O(2^n) implicit paths connecting input and output and that adding a block
doubles the number of paths.
This follows the same structure as Equation (1). Highway networks also contain residual modules
and skip connections that bypass them. However, the output of each path is attenuated by a gating
function t, which has learned parameters and is dependent on its input. Highway networks are
equivalent to residual networks when t_i(·) = 0.5, in which case data flows equally through both
paths. Given an omnipotent solver, highway networks could learn whether each residual module
should affect the data. This introduces more parameters and more complexity.
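For comparison, a minimal highway layer implementing Equation (3) could look as follows; this is a fully connected sketch with assumed dimensions, with the gate t realized as a learned, input-dependent sigmoid.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """One highway layer, Equation (3): y = f(x) * t(x) + x * (1 - t(x))."""

    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.t = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # learned gate in (0, 1)

    def forward(self, x):
        gate = self.t(x)  # input-dependent: the network can learn to prefer the skip path
        # With gate fixed at 0.5 everywhere, both paths are used equally,
        # recovering the residual special case discussed in the text.
        return self.f(x) * gate + x * (1.0 - gate)

y = HighwayLayer(32)(torch.randn(4, 32))
```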
Investigating neural networks Several investigative studies seek to better understand convolutional
neural networks. For example, Zeiler and Fergus [23] visualize convolutional filters to unveil the
concepts learned by individual neurons. Further, Szegedy et al. [21] investigate the function learned
by neural networks and how small changes in the input called adversarial examples can lead to large
changes in the output. Within this stream of research, the closest study to our work is from Yosinski
et al. [22], which performs lesion studies on AlexNet. They discover that early layers exhibit little
co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the
common thread of exploring specific aspects of neural network performance. In our study, we focus
our investigation on structural properties of neural networks.
Ensembling Since the early days of neural networks, researchers have used simple ensembling
techniques to improve performance. Though boosting has been used in the past [16], one simple
approach is to arrange a committee [3] of neural networks in a simple voting scheme, where the
final output predictions are averaged. Top performers in several competitions use this technique
almost as an afterthought [6, 13, 18]. Generally, one key characteristic of ensembles is their smooth
performance with respect to the number of members. In particular, the performance increase from
additional ensemble members gets smaller with increasing ensemble size. Even though they are not
strict ensembles, we show that residual networks behave similarly.
Dropout Hinton et al. [7] show that dropping out individual neurons during training leads to a
network that is equivalent to averaging over an ensemble of exponentially many networks. Similar
in spirit, stochastic depth [9] trains an ensemble of networks by dropping out entire layers during
training. In this work, we show that one does not need a special training strategy such as stochastic
depth to drop out layers. Entire layers can be removed from plain residual networks without impacting
performance, indicating that they do not strongly depend on each other.
3 The unraveled view of residual networks
Figure 2: (a) Deleting f_2 from unraveled view; (b) ordinary feedforward network. Deleting a layer in residual networks at test time (a) is equivalent to zeroing half of the paths. In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting individual layers alters the only viable path from input to output.
To better understand residual networks, we introduce a formulation that makes it easier to reason about their recursive nature. Consider a residual network with three building blocks from input y_0 to output y_3. Equation (1) gives a recursive definition of residual networks. The output of each stage is based on the combination of two subterms. We can make the shared structure of the residual network apparent by unrolling the recursion into an exponential number of nested terms, expanding one layer at each substitution step:
y_3 = y_2 + f_3(y_2)    (4)
    = [y_1 + f_2(y_1)] + f_3(y_1 + f_2(y_1))    (5)
    = y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0)) + f_3(y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0)))    (6)
We illustrate this expression tree graphically in Figure 1 (b). With subscripts in the function modules
indicating weight sharing, this graph is equivalent to the original formulation of residual networks.
The graph makes clear that data flows along many paths from input to output. Each path is a unique
configuration of which residual module to enter and which to skip. Conceivably, each unique path through the network can be indexed by a binary code b ∈ {0, 1}^n where b_i = 1 iff the input flows through residual module f_i and 0 if f_i is skipped. It follows that residual networks have 2^n paths connecting input to output layers.
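The path-counting argument can be checked numerically. The sketch below uses toy linear residual modules, for which the unraveled expansion is exact, and verifies that summing over all 2^n binary path codes reproduces the recursive computation; for nonlinear modules the 2^n paths are coupled rather than separable, but the indexing is the same.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
# Toy linear residual modules f_i(y) = W_i y; linearity makes the expansion exact.
Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n)]
y0 = rng.normal(size=d)

# Recursive form, Equation (1): y_i = y_{i-1} + f_i(y_{i-1}).
y = y0.copy()
for W in Ws:
    y = y + W @ y

# Unraveled form: one term per binary path code b in {0,1}^n,
# where b_i = 1 iff the path enters module f_i -- 2^n terms in total.
unraveled = np.zeros(d)
for b in itertools.product([0, 1], repeat=n):
    out = y0.copy()
    for W, bit in zip(Ws, b):
        if bit:
            out = W @ out
    unraveled += out

assert np.allclose(y, unraveled)  # the 2^n paths sum to the recursive output
```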
In the classical visual hierarchy, each layer of processing depends only on the output of the previous
layer. Residual networks cannot strictly follow this pattern because of their inherent structure.
Each module f_i(·) in the residual network is fed data from a mixture of 2^{i-1} different distributions generated from every possible configuration of the previous i - 1 residual modules.
Compare this to a strictly sequential network such as VGG or AlexNet, depicted conceptually in
Figure 2 (b). In these networks, input always flows from the first layer straight through to the last in a
single path. Written out, the output of a three-layer feed-forward network is
y_3^{FF} = f_3^{FF}(f_2^{FF}(f_1^{FF}(y_0)))    (7)
where f_i^{FF}(x) is typically a convolution followed by batch normalization and ReLU. In these networks, each f_i^{FF} is only fed data from a single path configuration, the output of f_{i-1}^{FF}(·).
It is worthwhile to note that ordinary feed-forward neural networks can also be "unraveled" using the
above thought process at the level of individual neurons rather than layers. This renders the network
as a collection of different paths, where each path is a unique configuration of neurons from each
layer connecting input to output. Thus, all paths through ordinary neural networks are of the same
length. However, paths in residual networks have varying length. Further, each path in a residual
network goes through a different subset of layers.
Based on these observations, we formulate the following questions and address them in our experiments below. Are the paths in residual networks dependent on each other or do they exhibit a degree
of redundancy? If the paths do not strongly depend on each other, do they behave like an ensemble?
Do paths of varying lengths impact the network differently?
4 Lesion study
Figure 3: Deleting individual layers from VGG and a residual network on CIFAR-10. VGG performance drops to random chance when any one of its layers is deleted, but deleting individual modules from residual networks has a minimal impact on performance. Removing downsampling modules has a slightly higher impact.
Figure 4: Results when dropping individual blocks from residual networks trained on ImageNet are similar to CIFAR results. However, downsampling layers tend to have more impact on ImageNet.
In this section, we use three lesion studies to show that paths in residual networks do not strongly depend on each other and that they behave like an ensemble. All experiments are performed at test time on CIFAR-10 [12]. Experiments on ImageNet [2] show comparable results. We train residual
networks with the standard training strategy, dataset augmentation, and learning rate policy of [6]. For
our CIFAR-10 experiments, we train a 110-layer (54-module) residual network with modules of the
"pre-activation" type, which contain batch normalization as the first step. For ImageNet we use 200 layers
(66 modules). It is important to note that we did not use any special training strategy to adapt the
network. In particular, we did not use any perturbations such as stochastic depth during training.
4.1 Experiment: Deleting individual layers from neural networks at test time
As a motivating experiment, we will show that not all transformations within a residual network are
necessary by deleting individual modules from the neural network after it has been fully trained. To
do so, we remove the residual module from a single building block, leaving the skip connection (or
downsampling projection, if any) untouched. That is, we change y_i = y_{i-1} + f_i(y_{i-1}) to y_i' = y_{i-1}.
We can measure the importance of each building block by varying which residual module we remove.
To compare to conventional convolutional neural networks, we train a VGG network with 15 layers,
setting the number of channels to 128 for all layers to allow the removal of any layer.
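A sketch of the deletion procedure: replacing one block by its skip connection at test time amounts to skipping its residual branch in the forward pass. The toy linear branches below are stand-ins for trained convolutional modules.

```python
import torch
import torch.nn as nn

def forward_with_deletion(blocks, x, skip_index):
    """Forward through a stack of residual blocks, replacing block `skip_index`
    by its skip connection alone: y_i' = y_{i-1} instead of y_{i-1} + f_i(y_{i-1})."""
    for i, f in enumerate(blocks):
        x = x if i == skip_index else x + f(x)
    return x

# Toy residual branches standing in for trained modules f_i (no downsampling).
blocks = nn.ModuleList([nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(6)])
x = torch.randn(2, 16)
with torch.no_grad():
    full = forward_with_deletion(blocks, x, skip_index=-1)   # -1: delete nothing
    lesioned = forward_with_deletion(blocks, x, skip_index=3)
```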
It is unclear whether any neural network can withstand such a drastic change to the model structure.
We expect them to break because dropping any layer drastically changes the input distribution of all
subsequent layers.
The results are shown in Figure 3. As expected, deleting any layer in VGG reduces performance to
chance levels. Surprisingly, this is not the case for residual networks. Removing downsampling
blocks does have a modest impact on performance (peaks in Figure 3 correspond to downsampling
building blocks), but no other block removal leads to a noticeable change. This result shows that
to some extent, the structure of a residual network can be changed at runtime without affecting
performance. Experiments on ImageNet show comparable results, as seen in Figure 4.
Why are residual networks resilient to dropping layers but VGG is not? Expressing residual networks
in the unraveled view provides a first insight. It shows that residual networks can be seen as a
collection of many paths. As illustrated in Figure 2 (a), when a layer is removed, the number of
paths is reduced from 2n to 2n?1 , leaving half the number of paths valid. VGG only contains a
single usable path from input to output. Thus, when a single layer is removed, the only viable path is
corrupted. This result suggests that paths in a residual network do not strongly depend on each other
although they are trained jointly.
4.2 Experiment: Deleting many modules from residual networks at test-time
Having shown that paths do not strongly depend on each other, we investigate whether the collection
of paths shows ensemble-like behavior. One key characteristic of ensembles is that their performance depends smoothly on the number of members.
Figure 5: (a) Error increases smoothly when randomly deleting several modules from a residual network. (b) Error also increases smoothly when re-ordering a residual network by shuffling building blocks. The degree of reordering is measured by the Kendall Tau correlation coefficient. These results are similar to what one would expect from ensembles.
If the collection of paths were to behave like an
ensemble, we would expect test-time performance of residual networks to smoothly correlate with
the number of valid paths. This is indeed what we observe: deleting increasing numbers of residual modules increases error smoothly (Figure 5 (a)). This implies residual networks behave like ensembles. When deleting k residual modules from a network originally of length n, the number of valid paths decreases to O(2^{n-k}). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 2^44 paths. Though the collection is now a factor of roughly 10^{-6} of its original size, there are still many valid paths and error remains around 0.2.
4.3 Experiment: Reordering modules in residual networks at test-time
Our previous experiments were only about dropping layers, which has the effect of removing
paths from the network. In this experiment, we consider changing the structure of the network
by re-ordering the building blocks. This has the effect of removing some paths and inserting new
paths that have never been seen by the network during training. In particular, it moves high-level
transformations before low-level transformations.
To re-order the network, we swap k randomly sampled pairs of building blocks with compatible
dimensionality, ignoring modules that perform downsampling. We graph error with respect to the
Kendall Tau rank correlation coefficient which measures the amount of corruption. The results are
shown in Figure 5 (b). As corruption increases, the error smoothly increases as well. This result is
surprising because it suggests that residual networks can be reconfigured to some extent at runtime.
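The reordering experiment can be sketched as follows, assuming all blocks share one dimensionality so that any swap is compatible; kendalltau from SciPy measures how far the shuffled order is from the original.

```python
import random
import torch
import torch.nn as nn
from scipy.stats import kendalltau

def swap_blocks(num_blocks, k, seed=0):
    """Return a block order obtained by swapping k randomly sampled pairs."""
    order = list(range(num_blocks))
    rng = random.Random(seed)
    for _ in range(k):
        i, j = rng.sample(range(num_blocks), 2)
        order[i], order[j] = order[j], order[i]
    return order

blocks = nn.ModuleList([nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(10)])
order = swap_blocks(len(blocks), k=3)
tau, _ = kendalltau(range(len(order)), order)  # 1.0 = original order; lower = more corrupted

x = torch.randn(2, 16)
for idx in order:  # forward pass in the shuffled order
    x = x + blocks[idx](x)
print(f"Kendall tau after 3 swaps: {tau:.2f}")
```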
5 The importance of short paths in residual networks
Now that we have seen that there are many paths through residual networks and that they do not
necessarily depend on each other, we investigate their characteristics.
Distribution of path lengths Not all paths through residual networks are of the same length. For
example, there is precisely one path that goes through all modules and n paths that go only through a
single module. From this reasoning, the distribution of all possible path lengths through a residual
network follows a Binomial distribution. Thus, we know that the path lengths are closely centered
around the mean of n/2. Figure 6 (a) shows the path length distribution for a residual network with
54 modules; more than 95% of paths go through 19 to 35 modules.
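The path-length distribution is easy to compute directly: entering or skipping each of the n modules independently makes the length Binomial(n, 1/2). A short check for n = 54 (using SciPy) reproduces the numbers quoted above.

```python
import numpy as np
from scipy.stats import binom

n = 54                             # number of residual modules
lengths = np.arange(n + 1)
pmf = binom.pmf(lengths, n, 0.5)   # path length ~ Binomial(n, 1/2)
# The number of distinct paths of length x is C(n, x) = 2^n * pmf[x].
print("mean path length:", n / 2)
print("fraction of paths with 19-35 modules:", pmf[19:36].sum())  # > 0.95
```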
Vanishing gradients in residual networks Generally, data flows along all paths in residual networks.
However, not all paths carry the same amount of gradient. In particular, the length of the paths through
the network affects the gradient magnitude during backpropagation [1, 8]. To empirically investigate
the effect of vanishing gradients on residual networks we perform the following experiment. Starting
from a trained network with 54 blocks, we sample individual paths of a certain length and measure
the norm of the gradient that arrives at the input. To sample a path of length k, we first feed a batch
forward through the whole network. During the backward pass, we randomly sample k residual
blocks. For those k blocks, we only propagate through the residual module; for the remaining n - k blocks, we only propagate through the skip connection. Thus, we only measure gradients that flow through the single path of length k. We sample 1,000 measurements for each length k using random batches from the training set. The results show that the gradient magnitude of a path decreases exponentially with the number of modules it went through in the backward pass, Figure 6 (b).
Figure 6: How much gradient do the paths of different lengths contribute in a residual network? To find out, we first show the distribution of all possible path lengths (a). This follows a Binomial distribution. Second, we record how much gradient is induced on the first layer of the network through paths of varying length (b), which appears to decay roughly exponentially with the number of modules the gradient passes through. Finally, we can multiply these two functions (c) to show how much gradient comes from all paths of a certain length. Though there are many paths of medium length, paths longer than about 20 modules are generally too long to contribute noticeable gradient during training. This suggests that the effective paths in residual networks are relatively shallow.
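A simplified sketch of the path-sampling measurement described above: unlike the paper's procedure, which routes gradients through one branch per block during the backward pass of a full forward pass, this version simply forwards through the selected branch only, which is easier to express but not identical.

```python
import random
import torch
import torch.nn as nn

def sample_path_gradient(blocks, x, y_target, k):
    """Gradient norm at the input along a single path that enters exactly k
    residual branches and takes the skip connection everywhere else."""
    active = set(random.sample(range(len(blocks)), k))
    x = x.clone().requires_grad_(True)
    out = x
    for i, block in enumerate(blocks):
        if i in active:
            out = block(out)  # propagate through the residual module f_i only
        # otherwise: propagate through the skip connection only (identity)
    loss = nn.functional.mse_loss(out, y_target)
    loss.backward()
    return x.grad.norm().item()

# Toy stand-ins for 54 trained residual branches.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(8, 8), nn.Tanh()) for _ in range(54)])
x, y = torch.randn(16, 8), torch.randn(16, 8)
for k in (5, 15, 30, 50):
    print(k, sample_path_gradient(blocks, x, y, k))  # norm shrinks as k grows
```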
The effective paths in residual networks are relatively shallow Finally, we can use these results
to deduce whether shorter or longer paths contribute most of the gradient during training. To find the
total gradient magnitude contributed by paths of each length, we multiply the frequency of each path
length with the expected gradient magnitude. The result is shown in Figure 6 (c). Surprisingly, almost
all of the gradient updates during training come from paths between 5 and 17 modules long. These
are the effective paths, even though they constitute only 0.45% of all paths through this network.
Moreover, in comparison to the total length of the network, the effective paths are relatively shallow.
To validate this result, we retrain a residual network from scratch that only sees the effective paths
during training. This ensures that no long path is ever used. If the retrained model is able to perform
competitively compared to training the full network, we know that long paths in residual networks
are not needed during training. We achieve this by only training a subset of the modules during each
mini batch. In particular, we choose the number of modules such that the distribution of paths during
training aligns with the distribution of the effective paths in the whole network. For the network
with 54 modules, this means we sample exactly 23 modules during each training batch. Then, the
path lengths during training are centered around 11.5 modules, well aligned with the effective paths.
In our experiment, the network trained only with the effective paths achieves a 5.96% error rate,
whereas the full model achieves a 6.10% error rate. There is no statistically significant difference.
This demonstrates that indeed only the effective paths are needed.
6 Discussion
Removing residual modules mostly removes long paths Deleting a module from a residual network
mainly removes the long paths through the network. In particular, when deleting d residual modules
from a network of length n, the fraction of paths remaining per path length x is given by
fraction of remaining paths of length x = \binom{n-d}{x} \Big/ \binom{n}{x}    (8)
Figure 7 illustrates the fraction of remaining paths after deleting 1, 10 and 20 modules from a 54-module network. It becomes apparent that the deletion of residual modules mostly affects the long
paths. Even after deleting 10 residual modules, many of the effective paths between 5 and 17 modules
long are still valid. Since mainly the effective paths are important for performance, this result is in line
with the experiment shown in Figure 5 (a). Performance only drops slightly up to the removal of 10 residual modules; however, for the removal of 20 modules, we observe a severe drop in performance.
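Equation (8) can be evaluated directly to confirm this reading; the sketch below reports how much of the 5-17 module band of effective paths survives deleting d modules.

```python
from math import comb

def remaining_fraction(n, d, x):
    """Equation (8): fraction of length-x paths surviving the deletion of d of n modules."""
    return comb(n - d, x) / comb(n, x) if x <= n - d else 0.0

n = 54
for d in (1, 10, 20):
    # Survival of the effective paths, roughly 5-17 modules long (Figure 6 (c)).
    band = [remaining_fraction(n, d, x) for x in range(5, 18)]
    print(f"d={d}: effective-path survival between {min(band):.2f} and {max(band):.2f}")
```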
Figure 7: Fraction of paths remaining after deleting individual layers. Deleting layers mostly affects long paths through the networks.
Figure 8: Impact of stochastic depth on resilience to layer deletion. Training with stochastic depth only improves resilience slightly, indicating that plain residual networks already don't depend on individual layers. Compare to Fig. 3.
Connection to highway networks In highway networks, t_i(·) multiplexes data flow through the residual and skip connections, and t_i(·) = 0.5 means both paths are used equally. For highway networks in the wild, [19] observe empirically that the gates commonly deviate from t_i(·) = 0.5. In
particular, they tend to be biased toward sending data through the skip connection; in other words, the
network learns to use short paths. Similar to our results, it reinforces the importance of short paths.
Effect of stochastic depth training procedure Recently, an alternative training procedure for residual networks has been proposed, referred to as stochastic depth [9]. In that approach a random subset
of the residual modules is selected for each mini-batch during training. The forward and backward
pass is only performed on those modules. Stochastic depth does not affect the number of paths in the
network because all paths are available at test time. However, it changes the distribution of paths seen
during training. In particular, mainly short paths are seen. Further, by selecting a different subset of
short paths in each mini-batch, it encourages the paths to produce good results independently.
Does this training procedure significantly reduce the dependence between paths? We repeat the
experiment of deleting individual modules for a residual network trained using stochastic depth. The
result is shown in Figure 8. Training with stochastic depth improves resilience slightly; only the
dependence on the downsampling layers seems to be reduced. By now, this is not surprising: we
know that plain residual networks already don't depend on individual layers.
7 Conclusion
What is the reason behind residual networks' increased performance? In the most recent iteration of residual networks, He et al. [6] provide one hypothesis: "We obtain these results via a simple but essential concept: going deeper." While it is true that they are deeper than previous approaches, we
present a complementary explanation. First, our unraveled view reveals that residual networks can be
viewed as a collection of many paths, instead of a single ultra deep network. Second, we perform
lesion studies to show that, although these paths are trained jointly, they do not strongly depend
on each other. Moreover, they exhibit ensemble-like behavior in the sense that their performance
smoothly correlates with the number of valid paths. Finally, we show that the paths through the
network that contribute gradient during training are shorter than expected. In fact, deep paths are
not required during training as they do not contribute any gradient. Thus, residual networks do not
resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of
the network. This insight reveals that depth is still an open research question. These promising
observations provide a new lens through which to examine neural networks.
Acknowledgements
We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank
the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting
the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory
(Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a
Google Focused Research award (Author 3).
References
[1] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
[2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.
[3] Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and other ensemble methods. Neural Computation, 6(6):1289-1301, 1994.
[4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193-202, 1980.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[7] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Institut für Informatik, Technische Universität, München, 1991.
[9] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.
[10] David H. Hubel and Torsten N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106-154, 1962.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
[12] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[15] Jitendra Malik and Pietro Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America, 1990.
[16] Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
[17] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences, 104(15):6424-6429, 2007.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014.
[23] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision - ECCV 2014, pages 818-833. Springer, 2014.
6,143 | 6,557 | Greedy Feature Construction
Dino Oglic
dino.oglic@uni-bonn.de
Institut für Informatik III
Universität Bonn, Germany
Thomas Gärtner
thomas.gaertner@nottingham.ac.uk
School of Computer Science
The University of Nottingham, UK
Abstract
We present an effective method for supervised feature construction. The main
goal of the approach is to construct a feature representation for which a set of
linear hypotheses is of sufficient capacity: large enough to contain a satisfactory
solution to the considered problem and small enough to allow good generalization
from a small number of training examples. We achieve this goal with a greedy
procedure that constructs features by empirically fitting squared error residuals.
The proposed constructive procedure is consistent and can output a rich set of
features. The effectiveness of the approach is evaluated empirically by fitting a
linear ridge regression model in the constructed feature space and our empirical
results indicate a superior performance of our approach over competing methods.
1 Introduction
Every supervised learning algorithm with the ability to generalize from training examples to unseen
data points has some type of inductive bias [5]. The bias can be defined as a set of assumptions that
together with the training data explain the predictions at unseen points [25]. In order to simplify
theoretical analysis of learning algorithms, the inductive bias is often represented by a choice
of a hypothesis space (e.g., the inductive bias of linear regression models is the assumption that
the relationship between inputs and outputs is linear). The fundamental limitation of learning
procedures with an a priori specified hypothesis space (e.g., linear models or kernel methods with
a preselected kernel) is that they can learn good concept descriptions only if the hypothesis space
selected beforehand is large enough to contain a good solution to the considered problem and small
enough to allow good generalization from a small number of training examples. As finding a good
hypothesis space is equivalent to finding a good set of features [5], we propose an effective supervised
feature construction method to tackle this problem. The main goal of the approach is to embed the
data into a feature space for which a set of linear hypotheses is of sufficient capacity. The motivation
for this choice of hypotheses is in the desire to exploit the scalability of existing algorithms for
training linear models. It is for their scalability that these models are frequently a method of choice
for learning on large scale data sets (e.g., the implementation of linear SVM [13] has won the large
scale learning challenge at ICML 2008 and KDD CUP 2010). However, as the set of linear hypotheses
defined on a small or moderate number of input features is usually of low capacity these methods
often learn inaccurate descriptions of target concepts. The proposed approach surmounts this and
exploits the scalability of existing algorithms for training linear models while overcoming their low
capacity on input features. The latter is achieved by harnessing the information contained in the
labeled training data and constructing features by empirically fitting squared error residuals.
We draw motivation for our approach by considering the minimization of the expected squared error
using functional gradient descent (Section 2.1). In each step of the descent, the current estimator
is updated by moving in the direction of the residual function. We want to mimic this behavior by
constructing a feature representation incrementally so that for each step of the descent we add a
feature which approximates well the residual function. In this constructive process, we select our
features from a predetermined set of basis functions which can be chosen so that a high capacity set
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
of linear hypotheses corresponds to the constructed feature space (Section 2.2). In our theoretical
analysis of the approach, we provide a convergence rate for this constructive procedure (Section 2.3)
and give a generalization bound for the empirical fitting of residuals (Section 2.4). The latter is needed
because the feature construction is performed based on an independent and identically distributed
sample of labeled examples. The approach, presented in Section 2.5, is highly flexible and allows for
an extension of a feature representation without complete re-training of the model. As it performs
similar to gradient descent, a stopping criteria based on an accuracy threshold can be devised and the
algorithm can then be simulated without specifying the number of features a priori. In this way, the
algorithm can terminate sooner than alternative approaches for simple hypotheses. The method is
easy to implement and can be scaled to millions of instances with a parallel implementation.
To evaluate the effectiveness of our approach empirically, we compare it to other related approaches
by training linear ridge regression models in the feature spaces constructed by these methods. Our
empirical results indicate a superior performance of the proposed approach over competing methods.
The results are presented in Section 3 and the approaches are discussed in Section 4.
2
Greedy feature construction
In this section, we present our feature construction approach. We start with an overview where we
introduce the problem setting and motivate our approach by considering the minimization of the
expected squared error using functional gradient descent. Following this, we define a set of features
and demonstrate that the approach can construct a rich set of hypotheses. We then show that our
greedy constructive procedure converges and give a generalization bound for the empirical fitting of
residuals. The section concludes with a pseudo-code description of our approach.
2.1 Overview
We consider a learning problem with the squared error loss function where the goal is to find a
mapping from a Euclidean space to the set of reals. In these problems, it is typically assumed that a
sample z = ((x1 , y1 ) , . . . , (xm , ym )) of m examples is drawn independently from a Borel probability
measure ? defined on Z = X ? Y , where X is a compact subset of a finite dimensional Euclidean
space with the dot product h?, ?i and Y ? R. For every x ? X let ? (y | x) be the conditional
probability measure on Y and ?X be the marginal probability measure on X. For
R the sake of brevity,
when it is clear from the context, we will write ? instead of ?X . Let f? (x) = y d? (y | x) be the
bounded target/regression function of the measure ?. Our goal is to construct a feature representation
such that there exists a linear hypothesis on this feature space that approximates well the target
function. For an estimator f of the function f? we measure the goodness of fit with the expected
R
2
squared error in ?, E? (f ) = (f (x) ? y) d?. The empirical counterpart of the error, defined over
Pm
2
1
m
a sample z ? Z , is denoted with Ez (f ) = m
i=1 (f (xi ) ? yi ) .
Having defined the problem setting, we proceed to motivate our approach by considering the minimization of the expected squared error using functional gradient descent. For that, we first review the
definition of functional gradient. For a functional F defined on a normed linear space and an element
p from this space, the functional gradient ∇F(p) is the principal linear part of a change in F after it is perturbed in the direction of q, F(p + q) = F(p) + ψ(q) + ε‖q‖, where ψ(q) is the linear functional with ∇F(p) as its principal linear part, and ε → 0 as ‖q‖ → 0 [e.g., see Section 3.2 in 16]. In our case, the normed space is the Hilbert space of square integrable functions L²_ρ(X) and for the expected squared error functional on this space we have that it holds
E_ρ(f + q) - E_ρ(f) = ⟨2(f - f_ρ), q⟩_{L²_ρ(X)} + O(‖q‖²).
Hence, an algorithm for the minimization of the expected squared error using functional gradient descent on this space could be specified as
f_{t+1} = η f_t + 2(1 - η)(f_ρ - f_t),
where 0 ≤ η ≤ 1 denotes the learning rate and f_t is the estimate at step t. The functional gradient
direction 2(f_ρ - f_t) is the residual function at step t and the main idea behind our approach is to iteratively refine our feature representation by extending it with a new feature that matches the current residual function. In this way, for a suitable choice of learning rate η, the functional descent would be performed through a convex hull of features and in each step we would have an estimate of the target function f_ρ expressed as a convex combination of the constructed features.
2.2 Greedy features
We introduce now a set of features parameterized with a ridge basis function and hyperparameters
controlling the smoothness of these features. As each subset of features corresponds to a set of
hypotheses, in this way we specify a family of possible hypothesis spaces. For a particular choice of
ridge basis function we argue below that the approach outlined in the previous section can construct a
highly expressive feature representation (i.e., a hypothesis space of high capacity).
Let C(X) be the Banach space of continuous functions on X with the uniform norm. For a Lipschitz continuous function φ : R → R, ‖φ‖_∞ ≤ 1, and constants r, s, t > 0 let F_θ ⊂ C(X), θ = (φ, r, s, t), be a set of ridge-wave functions defined on the set X,
F_θ = { a φ(⟨w, x⟩ + b) | w ∈ R^d, a, b ∈ R, |a| ≤ r, ‖w‖₂ ≤ s, |b| ≤ t }.
From this definition, it follows that for all g ∈ F_θ it holds ‖g‖_∞ ≤ r. As a ridge-wave function g ∈ F_θ is bounded and Lipschitz continuous, it is also square integrable in the measure ρ and g ∈ L²_ρ(X). Therefore, F_θ is a subset of the Hilbert space of square integrable functions defined on X with respect to the probability measure ρ, i.e., F_θ ⊂ L²_ρ(X).
Taking φ(·) = cos(·) in the definition of F_θ we obtain a set of cosine-wave features
F_cos = { a cos(⟨w, x⟩ + b) | w ∈ R^d, a, b ∈ R, |a| ≤ r, ‖w‖₂ ≤ s, |b| ≤ t }.
For this set of features the approach outlined in Section 2.1 can construct a rich set of hypotheses. To demonstrate this we make a connection to shift-invariant reproducing kernel Hilbert spaces and show that the approach can approximate any bounded function from any shift-invariant reproducing kernel Hilbert space. This means that a set of linear hypotheses defined by cosine features can be of high capacity and our approach can overcome the problems with the low capacity of linear hypotheses defined on few input features. A proof of the following theorem is provided in Appendix B.
Theorem 1. Let H_k be a reproducing kernel Hilbert space corresponding to a continuous shift-invariant and positive definite kernel k defined on a compact set X. Let ν be the positive and bounded spectral measure whose Fourier transform is the kernel k. For any probability measure μ defined on X, it is possible to approximate any bounded function f ∈ H_k using a convex combination of n ridge-wave functions from F_cos such that the approximation error in ‖·‖_μ decays with rate O(1/√n).
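As an illustration of the feature set F_cos, the following sketch builds a cosine ridge-wave feature map under the constraints ‖w‖₂ ≤ s, |b| ≤ t, |a| ≤ r; the sampling distributions are placeholders (drawing w from the normalized spectral measure of a shift-invariant kernel and b uniformly would recover random Fourier features).

```python
import numpy as np

def cosine_features(X, W, b, a):
    """Evaluate n cosine ridge-wave features a_j * cos(<w_j, x> + b_j) on the rows of X."""
    return a * np.cos(X @ W.T + b)  # shape (m, n)

rng = np.random.default_rng(0)
d, n, m = 5, 200, 100
r, s, t = 1.0, 1.0, np.pi          # bounds |a| <= r, ||w||_2 <= s, |b| <= t from F_theta
W = rng.normal(size=(n, d))
W *= np.minimum(1.0, s / np.linalg.norm(W, axis=1, keepdims=True))  # enforce ||w||_2 <= s
b = rng.uniform(-t, t, size=n)
a = np.full(n, r / n)              # weights of a convex-combination-style estimator
Phi = cosine_features(rng.normal(size=(m, d)), W, b, a)
print(Phi.shape)  # (100, 200)
```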
2.3 Convergence
For the purpose of this paper, it suffices to show the convergence of ε-greedy sequences of functions (see Definition 1) in Hilbert spaces. We, however, choose to provide a stronger result which holds for ε-greedy sequences in uniformly smooth Banach spaces. In the remainder of the paper, co(S) and cl(S) will be used to denote the convex hull of elements from a set S and the closure of S, respectively.
Definition 1. Let B be a Banach space with norm ‖·‖ and let S ⊂ B. An incremental sequence is any sequence {f_n}_{n≥1} of elements of B such that f_1 ∈ S and for each n ≥ 1 there is some g ∈ S so that f_{n+1} ∈ co({f_n, g}). An incremental sequence is greedy with respect to an element f ∈ cl(co(S)) if for all n ∈ N it holds ‖f_{n+1} - f‖ = inf {‖h - f‖ | h ∈ co({f_n, g}), g ∈ S}. Given a positive sequence of allowed slack terms {ε_n}_{n≥1}, an incremental sequence {f_n}_{n≥1} is called ε-greedy with respect to f if for all n ∈ N it holds ‖f_{n+1} - f‖ < inf {‖h - f‖ | h ∈ co({f_n, g}), g ∈ S} + ε_n.
Having introduced the notion of an ε-greedy incremental sequence of functions, let us now relate it to our feature construction approach. In the outlined constructive procedure (Section 2.1), we proposed to select new features corresponding to the functional gradient at the current estimate of the target function. Now, if at each step of the functional gradient descent there exists a ridge-wave function from our set of features which approximates well the residual function (w.r.t. f_ρ), then this sequence of functions defines a descent through co(F_θ) which is an ε-greedy incremental sequence of functions with respect to f_ρ ∈ cl(co(F_θ)). In Section 2.2, we have also demonstrated that F_θ is a subset of the Hilbert space L²_ρ(X) and this is by definition a Banach space.
In accordance with Definition 1, we now consider under what conditions an ε-greedy sequence of functions from this space converges to any target function f_ρ ∈ cl(co(F_θ)). Note that this relates to Theorem 1 which confirms the strength of the result by showing that the capacity of cl(co(F_θ)) is large. Before we show the convergence of our constructive procedure, we need to prove that an ε-greedy
incremental sequence of functions/features can be constructed in our setting. For that, we characterize
the Banach spaces in which it is always possible to construct such sequences of functions/features.
Definition 2. Let B be a Banach space, B* the dual space of B, and f ∈ B, f ≠ 0. A peak functional for f is a bounded linear operator F ∈ B* such that ‖F‖_{B*} = 1 and F(f) = ‖f‖_B. The Banach space B is said to be smooth if for each f ∈ B, f ≠ 0, there is a unique peak functional.
The existence of at least one peak functional for all f ∈ B, f ≠ 0, is guaranteed by the Hahn-Banach theorem [27]. For a Hilbert space H, for each element f ∈ H, f ≠ 0, there exists a unique peak functional F = ⟨f, ·⟩_H / ‖f‖_H. Thus, every Hilbert space is a smooth Banach space. Donahue et al. [12, Theorem 3.1] have shown that in smooth Banach spaces, and in particular in the Hilbert space L²_ρ(X), an ε-greedy incremental sequence of functions can always be constructed. However, not every such sequence of functions converges to the function with respect to which it was constructed. For the convergence to hold, a stronger notion of smoothness is needed.
Definition 3. The modulus of smoothness of a Banach space B is a function τ : R⁺₀ → R⁺₀ defined as τ(r) = (1/2) sup_{‖f‖=‖g‖=1} (‖f + rg‖ + ‖f - rg‖) - 1, where f, g ∈ B. The Banach space B is said to be uniformly smooth if τ(r) ∈ o(r) as r → 0.
We need to observe now that every Hilbert space is a uniformly smooth Banach space [12]. For the sake of completeness, we provide a proof of this proposition in Appendix B.
Proposition 2. For any Hilbert space the modulus of smoothness is equal to τ(r) = √(1 + r²) - 1.
Having shown that Hilbert spaces are uniformly smooth Banach spaces, we proceed with two results
giving a convergence rate of an ε-greedy incremental sequence of functions. What is interesting about
these results is the fact that a feature does not need to match exactly the residual function in a greedy
descent step (Section 2.1); it is only required that condition (ii) from the next theorem is satisfied.
Theorem 3. [Donahue et al., 12] Let B be a uniformly smooth Banach space having modulus of smoothness τ(u) ≤ γu^t, with γ being a constant and t > 1. Let S be a bounded subset of B and let f ∈ cl(co(S)). Let K > 0 be chosen such that ‖f - g‖ ≤ K for all g ∈ S, and let ε > 0 be a fixed slack value. If the sequences {f_n}_{n≥1} ⊂ co(S) and {g_n}_{n≥1} ⊂ S are chosen recursively so that: (i) f_1 ∈ S, (ii) F_n(g_n - f) ≤ 2γ((K+ε)^t - K^t) / (n^{t-1} ‖f_n - f‖^{t-1}), and (iii) f_{n+1} = (n/(n+1)) f_n + (1/(n+1)) g_n, where F_n is the peak functional of f_n - f, then it holds
‖f_n - f‖ ≤ ((2γt)^{1/t} (K+ε) / n^{1-1/t}) · [1 + log₂ n / ((t-1)·2tn)]^{1/t}.
The following corollary gives a convergence rate for an ε-greedy incremental sequence of functions
constructed according to Theorem 3 with respect to f_ρ ∈ cl(co(F_θ)). As this result (a proof is given in Appendix B) holds for all such sequences of functions, it also holds for our constructive procedure.
Corollary 4. Let {f_n}_{n≥1} ⊂ co(F_θ) be an ε-greedy incremental sequence of functions constructed according to the procedure described in Theorem 3 with respect to a function f ∈ cl(co(F_θ)). Then, it holds
‖f_n - f‖_ρ ≤ (K+ε) √(2 + log₂ n / 2n) / √n.
2.4 Generalization bound
In step t + 1 of the empirical residual fitting, based on a sample {(x_i, y_i - f_t(x_i))}_{i=1}^m, the approach selects a ridge-wave function from F_θ that approximates well the residual function (f_ρ - f_t). In the last section, we have specified in which cases such ridge-wave functions can be constructed and provided a convergence rate for this constructive procedure. As the convergence result is not limited to target functions from F_θ and cl(co(F_θ)), we give a bound on the generalization error for hypotheses from F = cl(co(F_θ)), where the closure is taken with respect to C(X).
Before we give a generalization bound, we show that our hypothesis space F is a convex and compact set of functions. The choice of a compact hypothesis space is important because it guarantees that a minimizer of the expected squared error E_ρ and its empirical counterpart E_z exists. In particular, a continuous function attains its minimum and maximum value on a compact set and this guarantees the existence of minimizers of E_ρ and E_z. Moreover, for a hypothesis space that is both convex and compact, the minimizer of the expected squared error is unique as an element of L²_ρ(X). A simple proof of the uniqueness of such a minimizer in L²_ρ(X) and the continuity of the functionals E_ρ and E_z can be found in [9]. For the sake of completeness, we provide a proof in Appendix A as Proposition A.2. The following proposition (a proof is given in Appendix B) shows that our hypothesis space is a convex and compact subset of the metric space C(X).
Algorithm 1 GREEDY DESCENT
Input: sample z = {(x_i, y_i)}_{i=1}^s, initial estimates at sample points {f_{0,i}}_{i=1}^s, ridge basis function φ, maximum number of descent steps p, regularization parameter λ, and precision ε
1: W ← ∅
2: for k = 1, 2, . . . , p do
3:    w_k, c_k ← arg min_{w, c=(c', c'')} Σ_{i=1}^s (c' f_{k−1,i} + c'' φ(w^T x_i) − y_i)² + λ Ω(c, w)
4:    W ← W ∪ {w_k} and f_{k,i} ← c'_k f_{k−1,i} + c''_k φ(w_k^T x_i), i = 1, . . . , s
5:    if |E_z(f_k) − E_z(f_{k−1})| / max{E_z(f_k), E_z(f_{k−1})} < ε then EXIT FOR LOOP end if
6: end for
7: return W
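As a reading aid, a minimal Python sketch of Algorithm 1 follows. It is not the authors' implementation: the coupled problem in line 3 is solved here with a generic L-BFGS call rather than the hyperparameter optimization of [20], and the solver, the random initialization, and the penalty Ω(c, w) = ‖c‖² are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def greedy_descent(X, y, f0, phi=np.cos, p=10, lam=1e-3, eps=1e-4):
        # Sketch of Algorithm 1 (epsilon-greedy functional descent).
        # X: (s, d) inputs, y: (s,) targets, f0: (s,) initial estimates
        # at the sample points, phi: ridge basis function.
        s, d = X.shape
        W, f_prev = [], f0.copy()
        err_prev = np.mean((f_prev - y) ** 2)
        for _ in range(p):
            def objective(theta):
                c1, c2, w = theta[0], theta[1], theta[2:]
                r = c1 * f_prev + c2 * phi(X @ w) - y
                # Omega(c, w) taken as ||c||^2 (the choice simulated in Section 3)
                return np.sum(r ** 2) + lam * (c1 ** 2 + c2 ** 2)
            theta0 = np.concatenate(([1.0, 0.1], np.random.randn(d)))
            res = minimize(objective, theta0, method="L-BFGS-B")
            c1, c2, w = res.x[0], res.x[1], res.x[2:]
            W.append(w)
            f_prev = c1 * f_prev + c2 * phi(X @ w)
            err = np.mean((f_prev - y) ** 2)
            # line 5: stop once the relative decay of the error stalls
            if abs(err - err_prev) / max(err, err_prev, 1e-12) < eps:
                break
            err_prev = err
        return W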
Proposition 5. The hypothesis space F is a convex and compact subset of the metric space C(X).
Moreover, the elements of this hypothesis space are Lipschitz continuous functions.
Having established that the hypothesis space is a compact set, we can now give a generalization
bound for learning with this hypothesis space. The fact that the hypothesis space is compact implies
that it is also a totally bounded set [27], i.e., for all ε > 0 there exists a finite ε-net of F. This, on the
other hand, allows us to derive a sample complexity bound by using the ε-covering number of a space
as a measure of its capacity [21]. The following theorem and its corollary (proofs are provided in
Appendix B) give a generalization bound for learning with the hypothesis space F.
Theorem 6. Let M > 0 be such that, for all f ∈ F, |f(x) − y| ≤ M almost surely. Then, for all ε > 0,

    P[E_ρ(f_z) − E_ρ(f*) ≤ ε] ≥ 1 − N(F, ε/24M, ‖·‖_∞) exp(−εm/288M²),

where f_z and f* are the minimizers of E_z and E_ρ on the set F, z ∈ Z^m, and N(F, ε, ‖·‖_∞) denotes
the ε-covering number of F w.r.t. C(X).
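To make the corollary below concrete, one can invert this tail bound: demanding that the failure mass be at most δ and solving for m (using the exponent −εm/288M² as reconstructed above) gives

    N(F, ε/24M, ‖·‖_∞) exp(−εm/288M²) ≤ δ  ⟺  m ≥ (288M²/ε) [ln N(F, ε/24M, ‖·‖_∞) + ln(1/δ)],

so a sample-complexity statement follows once the covering number of F is bounded in terms of R, L_φ, and the hyperparameters of F_φ.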
Corollary 7. For all ε > 0 and all δ > 0, with probability 1 − δ, a minimizer of the empirical
squared error on the hypothesis space F is (ε, δ)-consistent when the number of samples satisfies
m ∈ O( r (Rs + t) L_φ · (1/ε²) (1 + ln(1/δ)) ). Here, R is the radius of a ball containing the set of instances X in
its interior, L_φ is the Lipschitz constant of the function φ, and r, s, and t are hyperparameters of F_φ.
2.5 Algorithm
Algorithm 1 is a pseudo-code description of the outlined approach. To construct a feature space
with a good set of linear hypotheses the algorithm takes as input a set of labeled examples and an
initial empirical estimate of the target function. A dictionary of features is specified with a ridge
basis function and the smoothness of individual features is controlled with a regularization parameter.
Other parameters of the algorithm are the maximum allowed number of descent steps and a precision
term that defines the convergence of the descent. As outlined in Sections 2.1 and 2.3, the algorithm
works by selecting a feature that matches the residual function at the current estimate of the target
function. For each selected feature the algorithm also chooses a suitable learning rate and performs a
functional descent step (note that we are inferring the learning rate instead of setting it to 1/(n+1) as
in Theorem 3). To avoid solving these two problems separately, we have coupled both tasks into a
single optimization problem (line 3): we fit a linear model to a feature representation consisting
of the current empirical estimate of the target function and a ridge function parameterized with a
d-dimensional vector w. The regularization term Ω is chosen to control the smoothness of the new
feature and avoid over-fitting. The optimization problem over the coefficients of the linear model
and the spectrum of the ridge basis function is solved by casting it as a hyperparameter optimization
problem [20]. For the sake of completeness, we have provided a detailed derivation in Appendix C.
While the hyperparameter optimization problem is in general non-convex, Theorem 3 indicates that a
globally optimal solution is not (necessarily) required and instead specifies a weaker condition. To
account for the non-convex nature of the problem and compensate for the sequential generation of
features, we propose to parallelize the feature construction process by running several instances of
the greedy descent simultaneously. A pseudo-code description of this parallelized approach is given
in Algorithm 2. The algorithm takes as input parameters required for running the greedy descent
and some parameters specific to the parallelization scheme: the number of data passes and available
machines/cores, a regularization parameter for the fitting of linear models in the constructed feature
space, and a cut-off parameter for the elimination of redundant features. The whole process is started
by adding a bias feature and setting the initial empirical estimates at sample points to the mean value
of the outputs (line 1). Following this, the algorithm mimics stochastic gradient descent and makes
Algorithm 2 GREEDY FEATURE CONSTRUCTION (GFC)
Input: sample z = {(x_i, y_i)}_{i=1}^m, ridge basis function φ, number of data passes T, maximum number of greedy descent steps p, number of machines/cores M, regularization parameters λ and ν, precision ε, and feature cut-off threshold θ
1: W ← {0} and f_{0,k} ← (1/m) Σ_{i=1}^m y_i, k = 1, . . . , m
2: for i = 1, . . . , T do
3:    for j = 1, 2, . . . , M parallel do
4:       S_j ∼ U_{{1,2,...,m}} and W ← W ∪ GREEDY DESCENT({(x_k, y_k)}_{k∈S_j}, {f_{i−1,k}}_{k∈S_j}, φ, p, λ, ε)
5:    end for
6:    a* ← arg min_a Σ_{k=1}^m ( Σ_{l=1}^{|W|} a_l φ(w_l^T x_k) − y_k )² + ν ‖a‖₂²
7:    W ← W \ {w_l ∈ W : |a*_l| < θ, 1 ≤ l ≤ |W|} and f_{i,k} ← Σ_{l=1}^{|W|} a*_l φ(w_l^T x_k), k = 1, . . . , m
8: end for
9: return (W, a*)
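A minimal Python sketch of this outer loop is given below; it builds on the greedy_descent sketch above. The chunk sizes, the joblib-based parallelism, and the closed-form ridge solve in line 6 are illustrative choices rather than the paper's prescription.

    import numpy as np
    from joblib import Parallel, delayed

    def gfc(X, y, phi=np.cos, T=5, p=10, M=4, lam=1e-3, nu=1e-3,
            eps=1e-4, theta=1e-3):
        # Sketch of Algorithm 2 (parallel greedy feature construction).
        m, d = X.shape
        W = [np.zeros(d)]                    # bias feature: phi(0) is constant
        f = np.full(m, y.mean())             # line 1: initial estimates
        a = np.ones(1)
        for _ in range(T):
            # lines 3-5: greedy descent on random chunks, one per core
            chunks = [np.random.choice(m, m // M, replace=False) for _ in range(M)]
            found = Parallel(n_jobs=M)(
                delayed(greedy_descent)(X[S], y[S], f[S], phi, p, lam, eps)
                for S in chunks)
            for ws in found:
                W.extend(ws)
            # line 6: l2-penalized least squares over all constructed features
            Phi = phi(X @ np.array(W).T)     # (m, |W|) feature matrix
            A = Phi.T @ Phi + nu * np.eye(len(W))
            a = np.linalg.solve(A, Phi.T @ y)
            # line 7: prune weak features and refresh the estimates
            keep = np.abs(a) >= theta
            W = [w for w, k in zip(W, keep) if k]
            a = a[keep]
            f = phi(X @ np.array(W).T) @ a
        return W, a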
a specified number of passes through the data (line 2). In the first step of each pass, the algorithm
performs greedy functional descent in parallel using a pre-specified number of machines/cores M
(lines 3-5). This step is similar to the splitting step in parallelized stochastic gradient descent [32].
Greedy descent is performed on each of the machines for a maximum number of iterations p and
the estimated parameter vectors are added to the set of constructed features W (line 4). After the
features have been learned the algorithm fits a linear model to obtain the amplitudes (line 6). To fit a
linear model, we use least squares regression penalized with the l2-norm because it can be solved in
a closed form and cross-validation of the capacity parameter involves optimizing a 1-dimensional
objective function [20]. Fitting of the linear model can be understood as averaging of the greedy
approximations constructed on different chunks of the data. At the end of each pass, the empirical
estimates at sample points are updated and redundant features are removed (line 7).
One important detail in the implementation of Algorithm 1 is the data splitting between the training
and validation samples for the hyperparameter optimization. In particular, during the descent we are
more interested in obtaining a good spectrum than the amplitude because a linear model will be fit in
Algorithm 2 over the constructed features and the amplitude values will be updated. For this reason,
during the hyperparameter optimization over a k-fold splitting in Algorithm 1, we choose a single
fold as the training sample and a batch of folds as the validation sample.
3 Experiments
In this section, we assess the performance of our approach (see Algorithm 2) by comparing it to other
feature construction approaches on synthetic and real-world data sets. We evaluate the effectiveness
of the approach with the set of cosine-wave features introduced in Section 2.2. For this set of
features, our approach is directly comparable to random Fourier features [26] and à la carte [31]. The
implementation details of the three approaches are provided in Appendix C. We address here the
choice of the regularization term in Algorithm 1: To control the smoothness of newly constructed
features, we penalize the objective in line 3 so that solutions with a small L²_ρ(X) norm are
preferred. For this choice of regularization term and cosine-wave features, we empirically observe
that the optimization objective is almost exclusively penalized by the l2 norm of the coefficient vector
c. Following this observation, we have simulated the greedy descent with Ω(c, w) = ‖c‖₂².
We now briefly describe the data sets and the experimental setting. The experiments were conducted
on three groups of data sets. The first group contains four UCI data sets on which we performed
parameter tuning of all three algorithms (Table 1, data sets 1-4). The second group contains the data
sets with more than 5000 instances available from Luís Torgo [28]. The idea is to use this group of
data sets to test the generalization properties of the considered algorithms (Table 1, data sets 5-10).
The third group contains two artificial and very noisy data sets that are frequently used in regression
tree benchmark tests. For each considered data set, we split the data into 10 folds; we refer to these
splits as the outer cross-validation folds. In each step of the outer cross-validation, we use nine folds
as the training sample and one fold as the test sample. For the purpose of the hyperparameter tuning
we split the training sample into five folds; we refer to these splits as the inner cross-validation folds.
We run all algorithms on identical outer cross-validation folds and construct feature representations
with 100 and 500 features. The performance of the algorithms is assessed by comparing the root
mean squared error of linear ridge regression models trained in the constructed feature spaces and the
average time needed for the outer cross-validation of one fold.
Table 1: To facilitate the comparison between data sets we have normalized the outputs so that their range
is one. The accuracy of the algorithms is measured using the root mean squared error, multiplied by 100 to
mimic a percentage error (w.r.t. the range of the outputs). The mean and standard deviation of the error are
computed after performing 10-fold cross-validation. The reported walltime is the average time it takes a method
to cross-validate one fold. To assess whether a method performs statistically significantly better than the other
on a particular data set we perform the paired Welch t-test [29] with p = 0.05. GFC is our method
(Algorithm 2); ALC denotes à la carte [31].

n = 100 features:

DATASET                | m      | d   | GFC ERROR     | GFC WALLTIME | ALC ERROR     | ALC WALLTIME
parkinsons tm (total)  | 5875   | 21  | 2.73 (±0.19)  | 00:03:49     | 0.78 (±0.13)  | 00:05:19
ujindoorloc (latitude) | 21048  | 527 | 3.17 (±0.15)  | 00:21:39     | 6.19 (±0.76)  | 01:21:58
ct-slice               | 53500  | 380 | 2.93 (±0.10)  | 00:52:05     | 3.82 (±0.64)  | 03:31:25
Year Prediction MSD    | 515345 | 90  | 10.06 (±0.09) | 01:20:12     | 9.94 (±0.08)  | 05:29:14
delta-ailerons         | 7129   | 5   | 3.82 (±0.24)  | 00:01:23     | 3.73 (±0.20)  | 00:05:13
kinematics             | 8192   | 8   | 5.18 (±0.09)  | 00:04:02     | 5.03 (±0.23)  | 00:11:28
cpu-activity           | 8192   | 21  | 2.65 (±0.12)  | 00:04:23     | 2.68 (±0.27)  | 00:09:24
bank                   | 8192   | 32  | 9.83 (±0.27)  | 00:01:39     | 9.84 (±0.30)  | 00:12:48
pumadyn                | 8192   | 32  | 3.44 (±0.10)  | 00:02:24     | 3.24 (±0.07)  | 00:13:17
delta-elevators        | 9517   | 6   | 5.26 (±0.17)  | 00:00:57     | 5.28 (±0.18)  | 00:07:07
ailerons               | 13750  | 40  | 4.67 (±0.18)  | 00:02:56     | 4.89 (±0.43)  | 00:16:34
pole-telecom           | 15000  | 26  | 7.34 (±0.29)  | 00:10:45     | 7.16 (±0.55)  | 00:20:34
elevators              | 16599  | 18  | 3.34 (±0.08)  | 00:03:16     | 3.37 (±0.55)  | 00:21:20
cal-housing            | 20640  | 8   | 11.55 (±0.24) | 00:05:49     | 12.69 (±0.47) | 00:11:14
breiman                | 40768  | 10  | 4.01 (±0.03)  | 00:02:46     | 4.06 (±0.04)  | 00:13:52
friedman               | 40768  | 10  | 3.29 (±0.09)  | 00:06:07     | 3.37 (±0.46)  | 00:18:43

n = 500 features:

DATASET                | GFC ERROR     | GFC WALLTIME | ALC ERROR     | ALC WALLTIME
parkinsons tm (total)  | 2.20 (±0.27)  | 00:04:15     | 0.31 (±0.17)  | 00:27:15
ujindoorloc (latitude) | 3.04 (±0.19)  | 00:36:49     | 6.99 (±0.97)  | 02:23:15
ct-slice               | 2.59 (±0.10)  | 01:24:41     | 2.73 (±0.29)  | 06:11:12
Year Prediction MSD    | 10.01 (±0.08) | 01:30:28     | 9.92 (±0.07)  | 11:58:41
delta-ailerons         | 3.79 (±0.25)  | 00:01:57     | 3.73 (±0.24)  | 00:25:14
kinematics             | 4.65 (±0.11)  | 00:04:44     | 5.01 (±0.76)  | 00:38:53
cpu-activity           | 2.60 (±0.16)  | 00:04:24     | 2.62 (±0.15)  | 00:25:13
bank                   | 9.83 (±0.30)  | 00:02:01     | 9.87 (±0.42)  | 00:49:48
pumadyn                | 3.30 (±0.06)  | 00:02:27     | 3.42 (±0.15)  | 00:57:33
delta-elevators        | 5.24 (±0.17)  | 00:01:04     | 5.23 (±0.18)  | 00:32:30
ailerons               | 4.51 (±0.12)  | 00:02:11     | 4.77 (±0.40)  | 01:05:07
pole-telecom           | 5.55 (±0.15)  | 00:11:37     | 5.20 (±0.51)  | 01:39:22
elevators              | 3.12 (±0.20)  | 00:04:06     | 3.13 (±0.24)  | 01:20:58
cal-housing            | 11.17 (±0.25) | 00:06:16     | 12.70 (±1.01) | 01:01:37
breiman                | 4.01 (±0.03)  | 00:03:04     | 4.03 (±0.03)  | 01:04:16
friedman               | 3.16 (±0.03)  | 00:07:04     | 3.25 (±0.09)  | 01:39:37
An extensive summary containing the results of experiments with the random Fourier features
approach (corresponding to Gaussian, Laplace, and Cauchy kernels) and different configurations
of à la carte is provided in Appendix D. As the best performing configuration of à la carte on the
development data sets is the one with Q = 5 components, we report in Table 1 the error and walltime
for this configuration. From the walltime numbers we see that our approach is, in both considered
settings (with 100 and 500 features), always faster than à la carte. Moreover, the proposed approach
is able to generate a feature representation with 500 features in less time than required by à la carte
for a representation of 100 features. In order to compare the performance of the two methods with
respect to accuracy, we use the Wilcoxon signed rank test [30, 11]. As our approach with 500 features
is on all data sets faster than à la carte with 100 features, we first compare the errors obtained in these
experiments. For 95% confidence, the threshold value of the Wilcoxon signed rank test with 16 data
sets is T = 30 and from our results we get a T-value of 28. As the T-value is below the threshold,
our algorithm can with 95% confidence generate in less time a statistically significantly better feature
representation than à la carte. For the errors obtained in the settings where both methods have the
same number of features, we obtain T-values of 60 and 42. While in the first case, for the setting
with 100 features, the test is inconclusive, in the second case our approach is with 80% confidence
statistically significantly more accurate than à la carte. To evaluate the performance of the approaches
on individual data sets, we perform the paired Welch t-test [29] with p = 0.05. Again, the results
indicate a good/competitive performance of our algorithm compared to à la carte.
4 Discussion
In this section, we discuss the advantages of the proposed method over the state-of-the-art baselines
in learning fast shift-invariant kernels and other related approaches.
Flexibility. The presented approach is a highly flexible supervised feature construction method. In
contrast to an approach based on random Fourier features [26], the proposed method does not require
a spectral measure to be specified a priori. In the experiments (details can be found in Appendix D),
we have demonstrated that the choice of spectral measure is important as, for the considered measures
(corresponding to Gaussian, Laplace, and Cauchy kernels), the random Fourier features approach
is outperformed on all data sets. The second competing method, à la carte, is more flexible when it
comes to the choice of spectral measure and works by approximating it with a mixture of Gaussians.
However, the number of components and features per component needs to be specified beforehand or
cross-validated. In contrast, our approach mimics functional gradient descent and can be simulated
without specifying the size of the feature representation beforehand. Instead, a stopping criterion
(see, e.g., Algorithm 1) based on the successive decay of the error can be devised. As a result, the
proposed approach terminates sooner than the alternative approaches for simple concepts/hypothesis.
The proposed method is also easy to implement (for the sake of completeness, the hyperparameter
gradients are provided in Appendix C.1) and allows us to extend the existing feature representation
without complete re-training of the model. We note that the approaches based on random Fourier
features are also simple to implement and can be re-trained efficiently with the increase in the number
of features [10]. À la carte, on the other hand, is less flexible in this regard: due to the number of
hyperparameters and the complexity of the gradients, it is not straightforward to implement this method.
Scalability. The fact that our greedy descent can construct a feature in time linear in the number
of instances m and dimension of the problem d makes the proposed approach highly scalable. In
particular, the complexity of the proposed parallelization scheme is dominated by the cost of fitting a
linear model and the whole algorithm runs in time O(T(n³ + n²m + nmd)), where T denotes the
number of data passes (i.e., linear model fits) and n the number of constructed features. To scale this
scheme to problems with millions of instances, it is possible to fit linear models using the parallelized
stochastic gradient descent [32]. As for the choice of T, the standard setting in simulations of
stochastic gradient descent is 5-10 data passes. Thus, the presented approach is quite robust and
can be applied to large scale data sets. In contrast to this, the cost of performing a gradient step in
the hyperparameter optimization of à la carte is O(n³ + n²m + nmd). In our empirical evaluation
using an implementation with 10 random restarts, the approach needed at least 20 steps per restart
to learn an accurate model. The required number of gradient steps and the cost of computing them
hinder the application of à la carte to large scale data sets. In learning with random Fourier features,
which also run in time O(n³ + n²m + nmd), the main cost is the fitting of linear models, one for
each pair of considered spectral and regularization parameters.
Other approaches. Beside fast kernel learning approaches, the presented method is also related to
neural networks parameterized with a single hidden layer. These approaches can be seen as feature
construction methods jointly optimizing over the whole feature representation. A detailed study of
the approximation properties of a hypothesis space of a single layer network with the sigmoid ridge
function has been provided by Barron [4]. In contrast to these approaches, we construct features
incrementally by fitting residuals and we do this with a set of non-monotone ridge functions as a
dictionary of features. Regarding our generalization bound, we note that the past work on single layer
neural networks contains similar results but in the context of monotone ridge functions [1].
As the goal of our approach is to construct a feature space for which linear hypotheses will be of
sufficient capacity, the presented method is also related to linear models working with low-rank kernel
representations. For instance, Fine and Scheinberg [14] investigate a training algorithm for SVMs
using low-rank kernel representations. The difference between our approach and this method is in the
fact that the low-rank decomposition is performed without considering the labels. Side knowledge
and labels are considered by Kulis et al. [22] and Bach and Jordan [3] in their approaches to construct
a low-rank kernel matrix. However, these approaches are not selecting features from a set of ridge
functions, but find a subspace of a preselected kernel feature space with a good set of hypothesis.
From the perspective of the optimization problem considered in the greedy descent (Algorithm 1) our
approach can be related to single index models (SIM) where the goal is to learn a regression function
that can be represented as a single monotone ridge function [19, 18]. In contrast to these models, our
approach learns target/regression functions from the closure of the convex hull of ridge functions.
Typically, these target functions cannot be written as single ridge functions. Moreover, our ridge
functions do not need to be monotone and are more general than the ones considered in SIM models.
In addition to these approaches and considered baseline methods, the presented feature construction
approach is also related to methods optimizing expected loss functions using functional gradient
descent [23]. However, while Mason et al. [23] focus on classification problems and hypothesis spaces
with finite VC dimension, we focus on the estimation of regression functions in spaces with infinite
VC dimension (e.g., see Section 2.2). In contrast to that work, we provide a convergence rate for our
approach. Similarly, Friedman [15] has proposed a gradient boosting machine for greedy function
estimation. In their approach, the empirical functional gradient is approximated by a weak learner
which is then combined with previously constructed learners following a stagewise strategy. This is
different from the stepwise strategy that is followed in our approach where previously constructed
estimators are readjusted when new features are added. The approach in [15] is investigated mainly
in the context of regression trees, but it can be adopted to feature construction. To the best of our
knowledge, theoretical and empirical properties of this approach in the context of feature construction
and shift-invariant reproducing kernel Hilbert spaces have not been considered so far.
Acknowledgment: We are grateful for access to the University of Nottingham High Performance Computing
Facility. A part of this work was also supported by the German Science Foundation (grant number GA 1615/1-1).
References
[1] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge
University Press, 2009.
[2] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Math. Society, 1950.
[3] Francis R. Bach and Michael I. Jordan. Predictive low-rank decomposition for kernel methods. In
Proceedings of the 22nd International Conference on Machine Learning, 2005.
[4] Andrew R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE
Transactions on Information Theory, 39(3), 1993.
[5] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12, 2000.
[6] Alain Bertinet and Thomas C. Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics.
Kluwer Academic Publishers, 2004.
[7] Salomon Bochner. Vorlesungen über Fouriersche Integrale. Akademische Verlagsgesellschaft, 1932.
[8] Bernd Carl and Irmtraud Stephani. Entropy, Compactness, and the Approximation of Operators. Cambridge
University Press, 1990.
[9] Felipe Cucker and Steve Smale. On the mathematical foundations of learning. Bulletin of the American
Mathematical Society, 39, 2002.
[10] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, and Le Song. Scalable kernel
methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems 27, 2014.
[11] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning
Research, 7, 2006.
[12] Michael J. Donahue, Christian Darken, Leonid Gurvits, and Eduardo Sontag. Rates of convex approximation in non-Hilbert spaces. Constructive Approximation, 13(2), 1997.
[13] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library
for large linear classification. Journal of Machine Learning Research, 9, 2008.
[14] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of
Machine Learning Research, 2, 2002.
[15] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. The Annals of
Statistics, 29, 2000.
[16] Israel M. Gelfand and Sergei V. Fomin. Calculus of variations. Prentice-Hall Inc., 1963.
[17] Marc G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine
Learning Research, 2, 2002.
[18] Sham M. Kakade, Varun Kanade, Ohad Shamir, and Adam T. Kalai. Efficient learning of generalized
linear and single index models with isotonic regression. In Advances in Neural Information Processing
Systems 24, 2011.
[19] Adam T. Kalai and Ravi Sastry. The isotron algorithm: High-dimensional isotonic regression. In
Proceedings of the Conference on Learning Theory, 2009.
[20] Sathiya Keerthi, Vikas Sindhwani, and Olivier Chapelle. An efficient method for gradient-based adaptation
of hyperparameters in SVM models. In Advances in Neural Information Processing Systems 19, 2006.
[21] Andrey N. Kolmogorov and Vladimir M. Tikhomirov. ε-entropy and ε-capacity of sets in function spaces.
Uspehi Matematicheskikh Nauk, 14(2), 1959.
[22] Brian Kulis, Mátyás Sustik, and Inderjit Dhillon. Learning low-rank kernel matrices. In Proceedings of the
23rd International Conference on Machine Learning, 2006.
[23] Llew Mason, Jonathan Baxter, Peter L. Bartlett, and Marcus Frean. Functional gradient techniques for
combining hypotheses. In Advances in large margin classifiers. MIT Press, 1999.
[24] Sebastian Mayer, Tino Ullrich, and Jan Vybiral. Entropy and sampling numbers of classes of ridge
functions. Constructive Approximation, 42(2), 2015.
[25] Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[26] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural
Information Processing Systems 20, 2007.
[27] Walter Rudin. Functional Analysis. Int. Series in Pure and Applied Mathematics. McGraw-Hill Inc., 1991.
[28] Luís Torgo. Repository with regression data sets. http://www.dcc.fc.up.pt/~ltorgo/
Regression/DataSets.html, accessed September 22, 2016.
[29] Bernard L. Welch. The generalization of student?s problem when several different population variances are
involved. Biometrika, 34(1/2), 1947.
[30] Frank Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6), 1945.
[31] Zichao Yang, Alexander J. Smola, Le Song, and Andrew G. Wilson. À la carte - learning fast kernels. In
Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[32] Martin A. Zinkevich, Alex J. Smola, Markus Weimer, and Lihong Li. Parallelized stochastic gradient
descent. In Advances in Neural Information Processing Systems 23, 2010.
6,144 | 6,558 | Efficient and Robust Spiking Neural Circuit for
Navigation Inspired by Echolocating Bats
Pulkit Tandon, Yash H. Malviya
Indian Institute of Technology, Bombay
pulkit1495,[email protected]
Bipin Rajendran
New Jersey Institute of Technology
[email protected]
Abstract
We demonstrate a spiking neural circuit for azimuth angle detection inspired by
the echolocation circuits of the Horseshoe bat Rhinolophus ferrumequinum and
utilize it to devise a model for navigation and target tracking, capturing several key
aspects of information transmission in biology. Our network, using only a simple
local-information based sensor implementing the cardioid angular gain function,
operates at biological spike rate of approximately 10 Hz. The network tracks large
angular targets (60°) within 1 s with a 10% RMS error. We study the navigational
ability of our model for foraging and target localization tasks in a forest of obstacles
and show that it requires up to 200X fewer spike-triggered decisions, while suffering
less than 1% loss in performance compared to a proportional-integral-derivative
controller, in the presence of 50% additive noise. Superior performance can be
obtained at higher average spike rates of 100 Hz and 1000 Hz, but even the accelerated networks require 20X and 10X fewer decisions respectively, demonstrating the
superior computational efficiency of bio-inspired information processing systems.
1 Introduction
One of the most remarkable engineering marvels of nature is the ability of many species such as
bats, toothed whales and dolphins to navigate and identify preys and predators by echolocation, i.e.,
emit sounds with complex characteristics, and use neural circuits to discern the location, velocity
and features of obstacles or targets based on the echo of the signal. Echolocation problem can be
sub-divided into estimating range, height and azimuth angle of objects in the environment. These
coordinates are resolved by the bat using separate mechanisms and networks [1, 2]. While the bat?s
height detection capability is obtained through the unique structure of its ear that creates patterns
of interference in the spectrum of incoming echoes [3], the coordinates of range and azimuth are
estimated using specialized neural networks [1, 2].
Artificial neural networks are of great engineering interest, as they are suitable for a wide variety
of autonomous data analytics applications [4]. In spite of their impressive successes in solving
complex cognitive tasks [5], the commonly used neuronal and synaptic models today do not capture
the most crucial aspects of the animal brain where neuronal signals are encoded and transmitted as
spikes or action potentials and the synaptic strength which encodes memory and other computational
capabilities is adjusted autonomously based on the time of spikes [6, 7]. Spiking neural networks
(SNNs) are believed to be computationally more efficient than their second-generation counterparts[8].
Bat?s echolocation behavior has two distinct attributes ? prey catching and random foraging. It is
believed that an ?azimuth echolocation network? in the bat?s brain plays a major role in helping it to
forage randomly as it enables obstacle detection and avoidance, while a ?range detection network?
helps in modulating the sonar vocalizations of the bat which enable better detection, tracking and
catching of prey [1, 2]. In this paper, we focus on the relatively simple azimuth detection network of
the greater horseshoe bat to develop a SNN for object tracking and navigation.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 is a block diagram: an audio source, the two ear sensors (left and right), the 8-neuron SNN, and the head-aim dynamics driven by the output spikes, with additive noise on the incoming signal.]

Figure 1: Schematic diagram of the navigation system based on a spiking neural network (SNN) for
azimuth detection, inspired by bat echolocation. The two input sensors (mimicking the ears) encode
incoming sound signals as spike arrival rates, which are used by the SNN to generate output spikes that
control the head aim. Spikes from the top channel induce an anti-clockwise turn, and the bottom
channel induces a clockwise turn. Thus, the head-aim is directed towards maximum intensity. We
use the small-head approximation (i.e., R_l = R_r, θ_l = π/2 − θ and θ_r = π/2 + θ).
2 System Design
We now discuss the broad overview of the design of our azimuth detection network and the navigation
system, and the organization of the paper. The functioning of our echolocation based navigation
model can be divided into five major parts. Figure 1 illustrates these parts along with a model of the
tracking head and the object to be detected. Firstly, we assume that all objects in the environment
emit sound isotropically in all our simulations. This mimics the echo signal, and is assumed to be
of the same magnitude for simplicity. Let the intensity of an arbitrary source be denoted as I_s. We
assume that the intensity decays in accordance with an inverse square dependence on the distance
from the source. Hence, the intensities at the ears (sensors) at distances R_l and R_r will be given as
    I_l = I_s / R_l²,    I_r = I_s / R_r²        (1)
The emitted sound travels through the environment, where it is corrupted with noise and falls on the
receivers (the bat's ears). In our model, the two receivers are positioned symmetric to the head aim,
180° apart. Like most mammals, we rely on the sound signal received at the two receivers to determine
azimuth information [1]. By distinguishing the sound signals received at the two receivers, the
network formulates a direction for the potential obstacle or target that is emitting the signal.
In our model, we use a cardioid angular gain function as input sensor, described in detail in Section 3.
We filter incoming sound signals using the cardioid, which are then translated to spike domain and
forwarded to the azimuth detection network. The SNN we design (Section 4) is inspired by several
studies that have identified the different neurons that are part of the bat?s azimuth detection network
and how they are inter-connected [1], [2]. We have critically analyzed the internal functioning of this
biological network and identified components that enable the network to function effectively.
The spiking neural network that processes the input sound signals generates an output spike train
which determines the direction in which the head-aim of our artificial bat should turn to avoid
obstacles or track targets. The details of this dynamics are discussed in Section 5. We evaluate the
performance of our system in the presence of ambient noise by adding slowly varying noise signals
to the input (section 6). The simulation results are discussed in Section 7, and the performance of the
model evaluated in Section 8, before summarizing our conclusions.
3 Input and Receiver Modeling
The bat has two ears to record the incoming signal and like most mammals relies on them for
identifying the differences at these two sensors to detect azimuth information [1]. These differences
could either be in the time of arrival or the intensity of the signal detected at the two ears. Since the bat's
head is small (and the ears are only about 1-2 cm apart), the interaural time difference (ITD), defined
as the time difference of echo arrival at the two ears, is very small [9]. Hence, the bat relies on
measurement of the interaural level difference (ILD), also known as interaural intensity difference
Figure 2: (a) The Interaural Level Difference, defined as the relative intensity of input signals received at
the two sensors, is in the bat strongly correlated with the azimuth deviation between the sound source
and the head aim. Adapted from [9]. (b) In our model, the sensitivity of the sensor (in dB) as a function
of the angle with the source obeys a cardioid dependence (readily available in commercial sensors).
(IID) for azimuth angle detection. As shown in Figure 2a, the ILD signal detected by the ears is a
strong function of the azimuth angle; our network is engineered to mimic this characteristic feature.
In most animals, the intensity of the signal detected by the ear depends on the angle between the ear
and the source; this arises due to the directionality of the ear. To model this feature of the receiver, we
use a simple cardioid based receiver gain function as shown in Figure 2b, which is the most common
gain characteristic of audio sensors available in the market. Hence, if θ_{r/l} is the angle between the
source and the normal to the right/left ear, the detected intensity is given as
    I_{d,r/l} = I_{r/l} · 10^{−1+cos(θ_{r/l})}        (2)
We model the output of the receiver as a spike stream whose inter-arrival rate λ encodes this filtered
intensity information:
    λ_{r/l} = k · I_{d,r/l}        (3)
where k is a proportionality constant chosen to ensure a desired average spiking rate in the network.
We chose two different encoding schemes for our network. In the uniform signal encoding scheme,
the inter-arrival time of the spikes at the output of the receiver is a constant and equal to 1/?. In the
Poisson signal encoding scheme, we assume that the spikes are generated according to a Poisson
process with an inter-arrival rate equal to ?. Poisson source represents a more realistic version of
an echo signal observed in biological settings. In order to update the sound intensity space seen by
the bat as it moves, we sample the received sound intensity (I_{d,r/l}) for a duration of 300 ms every
450 ms. The fallow 150 ms between the sampling periods allows the bat to process received signals,
and reduces interference between consecutive samples.
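The following Python sketch ties equations (1)-(3) and the two encoding schemes together. The geometry helpers and the constant k are illustrative; the cardioid gain and the sampling window follow the text.

    import numpy as np

    def spike_rate(source_xy, ear_xy, ear_normal, I_s=1.0, k=10.0):
        # Detected intensity and encoded rate, eqs. (1)-(3).
        diff = source_xy - ear_xy
        R2 = np.dot(diff, diff)                         # squared range, eq. (1)
        cos_t = np.dot(diff, ear_normal) / np.sqrt(R2)  # angle to the ear normal
        I_d = (I_s / R2) * 10.0 ** (-1.0 + cos_t)       # cardioid gain, eq. (2)
        return k * I_d                                  # eq. (3)

    def encode(lam, T=0.3, scheme="poisson", rng=np.random.default_rng(0)):
        # Spike times within one 300 ms sampling window.
        if scheme == "uniform":
            return np.arange(0.0, T, 1.0 / lam)         # constant inter-arrival 1/lam
        n = rng.poisson(lam * T)                        # Poisson count at rate lam
        return np.sort(rng.uniform(0.0, T, size=n))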
4 Spiking Neural Network Model
Figure 3a shows our azimuth detection SNN inspired by the bat. It consists of 16 sub-networks whose
spike outputs are summed up to generate the network output. In each sub-network, Antroventral
Cochlear Nucleus (AVCN) neurons receive the input signal translated to spike domain from the
front-end receiver as modeled above and deliver it to Lateral Superior Olive (LSO) neurons. Except
for the synaptic weights from AVCN layer to LSO layer, the 16 sub-networks are identical. The left
LSO neuron further projects excitatory synapses to the right Dorsal Nucleus of Lateral Lemniscus
(DNLL) neuron and to the right Inferior Colliculus (IC) neuron and inhibitory synapses to the left
DNLL and IC neurons. Additionally, inhibitory synapses connect the DNLL neurons to both IC
neurons which also inhibit each other. The AVCN and IC neurons trigger navigational decisions.
Minor variations in the spike patterns at the input of multi-layered spiking neural networks could result
in vastly divergent spiking behaviors at the output due to the rich variations in synaptic dynamics.
To avoid this, we use 16 clone networks which are identical except for the weights of synapse from
AVCN layer to LSO layer (which are incremented linearly for each clone). These clones operate in
Figure 3: (a) Our azimuth detection SNN consists of 16 sub-networks whose spike outputs are
summed up to generate the network output. Except for the synaptic weights from the AVCN layer to the
LSO layer, the sub-networks are identical (see Supplementary Materials). A higher spike rate at the left
input results in a higher output spike rate of neurons N1 and N8. (b) The top panel shows the normalized
response of the SNN with impulses presented to input neuron N1 at t = 50 ms and N2 at t = 150 ms.
The bottom panel shows that the output spike rate difference of our SNN mimics that in Figure 2a.
parallel and the output spike stream of the left and right IC neurons are merged for the 16 clones,
generating the net output spike train of the network.
We use the adaptive exponential integrate and fire model for all our neurons as they can exhibit
different kinds of spiking behavior seen in biological neurons [10]. All the neurons implemented in
our model are regular spiking (RS), except the IC layer neurons which are chattering neurons (CH).
CH neurons aggregate the input variations over a period and then produce brisk spikes for a fixed
duration, thereby improving accuracy. The weights for the various excitatory and inhibitory synapses
have been derived by parameter space exploration. The selected values enable the network to operate
for the range of spike frequencies considered and allow spike responses to propagate through the
depth of the network (All simulation parameters are listed in Supplementary Materials).
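For reference, a single Euler step of the adaptive exponential integrate-and-fire model can be sketched as follows; the parameter values below are generic regular-spiking settings, not the ones from the Supplementary Materials.

    import numpy as np

    def adex_step(v, w, I, dt=1e-4, C=200e-12, gL=10e-9, EL=-70e-3,
                  VT=-50e-3, DT=2e-3, a=2e-9, tau_w=0.1, b=20e-12, Vr=-58e-3):
        # C dv/dt = -gL(v - EL) + gL*DT*exp((v - VT)/DT) - w + I
        # tau_w dw/dt = a(v - EL) - w
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w + I) / C
        dw = (a * (v - EL) - w) / tau_w
        v, w = v + dt * dv, w + dt * dw
        spiked = v >= 0.0                 # threshold crossing
        if spiked:
            v, w = Vr, w + b              # reset and spike-triggered adaptation
        return v, w, spiked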
An exemplary behavior of the network corresponding to short impulses received by AVCN_left at
t = 50 ms and by AVCN_right at t = 150 ms is shown in Figure 3b. If λ_right of a particular sound input
is higher than λ_left, the right AVCN neuron will have a higher spiking rate than its left counterpart.
This in turn induces a higher spiking rate in the right LSO neuron, while at the same time suppressing
the spikes in left LSO neuron. Thereafter, the LSO neurons excite spikes in the opposite DNLL and
IC neurons, while suppressing any spikes on the DNLL and IC neurons on its side. Consequently, an
input signal with higher λ_right will produce a higher spike rate at the left IC neuron.
It has been proposed that the latter layers enable extraction of useful information by correlating
the past input signals with the current input signals [11]. The LSO neuron that sends excitatory
signals to an IC neuron also inhibits the DNLL neuron which suppresses IC neuron. Inhibition
of DNLL neuron lasts for a few seconds even after the input signal stops. Consequently, for a
short period, the IC neuron receives reduced inhibition. Lack of inhibition changes the network?s
response to future input signals. Hence, depending on the recent history of signals received, the
output spike difference may vary for the same instantaneous input, thus enabling the network to
exhibit proportional-integral-derivative controller like behavior. Figure 4 highlights this feature.
Figure 4: The spike response of the network (blue) depends not only on the variations in the present input
(red) but also on its past history, akin to a proportional-integral-derivative controller. Input spike
trains are fed to neurons N1 and N2. Choosing the input spikes in (a) as reference, in (b) the second half
of the input pattern is modified, whereas in (c) the first half of the input pattern is modified.
5 Head-Rotation Dynamics
The difference in the spike rate of the two output neurons generated by the network indicates angular
deviation between the head-aim and the object detected (Figure 3b). In order to orient the trackinghead in the direction of maximum sound intensity, the head-aim is rotated by a pre-specified angle
for every spike, defined as the Angle of Rotation (AoR). AoR is a function of the operating spike
frequency of the network and the nature of input source coding (Poisson/Uniform). It is an engineered
parameter obtained by minimizing RMS error during constant angle tracking. We have provided AoR
values for a range of SNN frequency operation also ensuring that AoR chosen can be achieved by
commercial motors (Details in Supplementary Materials).
In a biological system, not every spike will necessarily cause a head turn as information transmission
through the neuromuscular junction is stochastic in nature. To model this, we specify that an AoR
turn is executed according to a probability model given as
    Δθ = [(s_l − s_r) p_i − (r_l − r_r) p_j] · AoR        (4)
where s_{l,r} is 1 if a spike is issued in the left (or right) IC neuron and 0 otherwise, and r_{l,r} is 1 if a spike is
issued in the left (or right) AVCN neuron. p_i and p_j are Bernoulli random variables (with mean values
⟨p_i⟩ = 0.5 and ⟨p_j⟩ = 0.0005) denoting the probability that an output and input spike causes a turn,
respectively. The direction and amplitude of the turn is naturally encoded in the spike rates of output
and input neurons. The sign of r_l − r_r is opposite to that of s_l − s_r as a higher spike rate in the right (or left)
AVCN layer implies a higher spike rate in the left (or right) IC layer and hence they should have the same
causal effect on the head aim. We have assigned our "artificial bat" a fixed speed of 15 mph (6.8 m/s),
consistent with biologically observed bat speeds [12].
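A minimal sketch of this update rule, accumulating the stochastic turns of eq. (4) over one sampling window, is given below; the per-spike Bernoulli draws are the only source of randomness.

    import numpy as np

    rng = np.random.default_rng(0)

    def head_turn(s_l, s_r, r_l, r_r, aor, p_i=0.5, p_j=0.0005):
        # s_l/s_r, r_l/r_r: 0/1 arrays marking IC output and AVCN input
        # spikes in one window; returns the total head-aim change, eq. (4).
        d_theta = 0.0
        for sl, sr in zip(s_l, s_r):
            d_theta += (sl - sr) * rng.binomial(1, p_i) * aor
        for rl, rr in zip(r_l, r_r):
            d_theta -= (rl - rr) * rng.binomial(1, p_j) * aor
        return d_theta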
6 Noise Modeling
In order to study the impact of realistic noisy environments on the performance of our network, we
incorporate noise in our simulations by adding a slowly varying component to the source sound
intensity I_s. Hence, (3) is modified as

    λ_{r/l} = k (I_{d,r/l} + n)        (5)
where n is obtained by low-pass filtering uniform noise. Also note that for Poisson input source
encoding, since we are sampling a random signal for a fixed duration, large variations in the stimulus
spike count is possible for the same values of input intensity. We will study the effect of the above
additive uniform noise for both encoding schemes.
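A sketch of the noise generator, implemented as a first-order low-pass filter over uniform samples, is shown below; the noise level and filter constant are illustrative knobs, as eq. (5) only requires that n vary slowly.

    import numpy as np

    def slow_noise(n_steps, level=0.5, alpha=0.01, rng=np.random.default_rng(0)):
        # Low-pass filtered uniform noise for eq. (5).
        u = rng.uniform(-level, level, size=n_steps)
        out = np.empty(n_steps)
        out[0] = u[0]
        for t in range(1, n_steps):
            out[t] = (1 - alpha) * out[t - 1] + alpha * u[t]  # IIR low-pass
        return out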
7 Simulation Results
We first show our system's response to a stair-case input, i.e., the source is moving along a circle with
the "bat" fixed at the center, but free to turn along the central axis (Figure 5). It can be seen that the
network performs reasonably well in tracking the moving source within a second.
Figure 5: Response of the azimuth tracking network for time varying staircase input for Poisson input
encoding at 10 Hz operating frequency.
We now study the step response of our SNN based azimuth tracking system for both uniform and
Poisson source encoding schemes at various operating frequencies of the network, with and without
additive noise (Figure 6). To quantify the performance of our system, we report the following two
metrics: (a) Time of Arrival (ToA) which is the first time when the head aim comes within 5% of
target head aim (source angle); and (b) RMS error in head aim measured in the interval [ToA, 4.5 s].
At t = 0, the network starts tracking a stationary source placed at −60°; the ToA is ≈ 1 s in all cases,
even in the presence of 50% additive noise. The trajectories for 1 kHz Poisson encoding are superior to
those corresponding to the low frequency counterpart. At low frequencies, there are not enough spikes
to distinguish between small changes in angles as the receiver's sampling period is only 300 ms. It
is possible to tune the system to have much better RMS error by increasing the sampling period
or decreasing AoR, but at the cost of larger ToA. Our design parameters are chosen to mimic the
biologically observed ToA while minimizing the RMS error [13]. We observed that uniform source
encoding performs better than Poisson encoding in terms of average jitter after ToA, as there is no
sampling noise present in former.
(a) Poisson source 10 Hz  (b) Poisson source 1 kHz  (c) Uniform source 1 kHz
(d) Poisson source 10 Hz, 50% noise  (e) Poisson source 1 kHz, 50% noise  (f) Uniform source 1 kHz, 50% noise
Figure 6: Step response of our SNN based azimuth tracking system, for five different exemplary
tracks for different input signal encoding schemes, network frequencies and input noise levels. At
t = 0, the network starts tracking a stationary source placed at −60°. The time taken to reach within
5% of the target angle, denoted as Time of Arrival (ToA), is ≈ 1 s for all cases.
We expect RMS error to increase with decrease in operation frequency and increase in percentage
channel noise. Figure 7a clearly shows this behavior for uniform source encoding. With no additive
noise (pink label), the RMS error decreases with increase in frequency. Although RMS error remains
almost constant with varying noise level for 10 Hz (in terms of median error and variance in error),
it clearly increases for the 1 kHz case. This can be attributed to the fact that since our "artificial bat"
moves whenever a spike occurs, at lower frequencies the network itself filters the noise by using its
slowly varying nature and averaging it. At higher frequencies, this averaging effect is reduced, making
(a) RMS error, Uniform source encoding
(b) RMS error, Poisson source encoding
Figure 7: a) RMS error in head aim for Uniform source encoding measured after the ToA during
tracking a constant target angle in response to varying noise levels. At zero noise, increasing the
frequency improves performance due to fine-grained decisions. However, in the presence of additive
noise, increasing the frequency worsens the RMS error, as more error-prone decisions are likely. b)
RMS error with Poisson source encoding: at zero noise, an increase in operation frequency reduces
the RMS error but compared to Figure 7a, the performance even at 1 kHz is unaffected by noise.
the trajectory more susceptible to noise. A trade-off can be seen for 50% noise (red label), where
addition of noise is more dominating and hence the system performs worse when operated at higher
frequencies. Figure 7b reports the frequency dependence of the RMS error for the Poisson encoding
scheme. Performance improves with increase in operation frequency as before, but the effect of added
noise is negligible even at 50% additive noise, showing that this scheme is more noise resilient. It
should however be noted that the performance of Poisson encoding is at best equal to that of uniform encoding.
8 Performance Evaluation
To test the navigational efficiency of our design, we evaluate its ability to track down targets while avoiding
obstacles on its path in a 2D arena (120 × 120 m). The targets and obstacles are modeled as point
sources which emit fixed intensity sound signals. The net detected intensity due to these sources is
calculated as a linear superposition of all the intensities by modifying (2) as
    I_d = Σ_t (I_t / R_t²) · 10^{−1+cos(θ_t)} + Σ_o (I_o / R_o²) · 10^{−1+cos(π+θ_o)}        (6)
where the subscript t refers to targets and o to obstacles. Choosing the effective angle of the obstacles as
π + θ_o has the effect of steering the "bat" 180° away from the obstacles. There are approximately 10
obstacles for every target in the arena, placed at random locations.
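The superposition of eq. (6) can be sketched directly; here targets and obstacles are given as (range, bearing) pairs relative to the head aim, with the π offset repelling the aim from obstacles.

    import numpy as np

    def net_intensity(targets, obstacles, I_t=1.0, I_o=1.0):
        # targets/obstacles: lists of (R, theta) pairs; eq. (6).
        I = sum(I_t / R ** 2 * 10.0 ** (-1.0 + np.cos(th)) for R, th in targets)
        I += sum(I_o / R ** 2 * 10.0 ** (-1.0 + np.cos(np.pi + th))
                 for R, th in obstacles)
        return I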
Neurobiological studies have identified a range detection network which determines the modulation
of the bat's voice signal depending on the distance to its prey [1]. Our model does not include it; we
replace the process of the bat generating sound signals and receiving echoes after reflection from
surrounding objects, by the targets and obstacles themselves emitting sound signals isotropically. It
is known that the bat can differentiate between prey and obstacles by detecting slight differences in
their echoes [14]. This ability is aided by specialized neural networks in the bat's nervous system. Since
our "artificial bat" employs a network which detects azimuth information, we model it artificially.
To benchmark the efficiency of our SNN based navigation model, we compare it with the performance
of a particle that obeys standard second-order PID control system dynamics governed by the equation
    \frac{d^2(\theta - \theta_t)}{dt^2} + k_1 \frac{d(\theta - \theta_t)}{dt} + k_2 (\theta - \theta_t) = 0          (7)
The particle calculates a target angle θ_t, which is chosen to be the angle at which the net detected
intensity calculated using (6) is a maximum. This calculation is performed periodically (every 450 ms,
the SNN sampling period). The above PID controller thus tries to steer the instantaneous angle θ of the
particle towards the desired target angle. The parameters k1 and k2 (see Supplementary material)
have been chosen to match the rise-time and overshoot characteristics of the SNN model.
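A minimal discrete-time sketch of this baseline (our own Euler discretization; the step size and gains are illustrative assumptions, not the paper's exact values, and `target_angle_fn` stands in for the periodic maximization of (6)):

```python
import numpy as np

def simulate_pid_particle(theta0, target_angle_fn, k1=2.0, k2=5.0,
                          dt=1e-3, t_end=18.0, retarget_every=0.45):
    """Euler integration of (7): the heading theta is steered towards a
    target angle theta_t, re-estimated every 450 ms (the SNN sampling period)."""
    n_steps = int(t_end / dt)
    theta, omega = theta0, 0.0            # heading and its time derivative
    theta_t = target_angle_fn(theta)      # initial target angle
    trajectory = np.empty(n_steps)
    for i in range(n_steps):
        if i % int(retarget_every / dt) == 0:
            theta_t = target_angle_fn(theta)     # periodic re-estimation
        err = theta - theta_t
        accel = -k1 * omega - k2 * err           # from (7): err'' = -k1 err' - k2 err
        omega += accel * dt
        theta += omega * dt
        trajectory[i] = theta
    return trajectory
```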
In order to compare performance under noisy conditions we add 50% slowly varying noise to the sound
signal emitted by targets and obstacles, as explained in Section 6. We simulate the trajectory for
18 s (40 sampling periods of the bat) and report the number of successful cases where the particle
"reached" the target without "running" into any obstacles (i.e., the particle-target separation was less than
2 m and the particle-obstacle separation was always more than 2 m). Table 1 summarizes the results
for these scenarios: the SNN model operating at 1000 Hz has significantly higher % Success and
comparable average success time, though the PID particle is highly efficient at avoiding obstacles.
Table 1: Performance Validation Results

              % Success   % No-collision   % Obstacle   Avg. success time (sec)
  SNN 1 kHz   68          2.4              29.6         6.27
  SNN 100 Hz  66.2        3.6              30.2         6.66
  SNN 10 Hz   28.4        21.6             50           6.68
  PID         29.13       60.86            10           5.08
To compare the computational effort of these approaches, we define the "number of decisions" as the number
of changes made in head aim while navigating. The SNN model uses 220x fewer
decisions while suffering a < 1% decrease in % Success and a 31.5% increase in average success time
as compared to the PID particle. Our network operated at 100 Hz (1000 Hz) still retains its efficiency
in terms of decision making, as it incurs 20x (10x) fewer decisions respectively, compared to
the PID particle, while achieving much higher % Success. A closer look at the trajectories traced by
the bat and the PID particle shows that the PID particle has a tendency to get stuck in local maxima
of the sound intensity field, explaining why it shows high % No-collision but poor foraging (Figure 8b).
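A sketch of how this metric can be computed from a logged head-aim trajectory (the array name is ours):

```python
import numpy as np

def count_decisions(head_aim):
    """Number of decisions = number of changes in the head aim over a run."""
    head_aim = np.asarray(head_aim)
    return int(np.count_nonzero(head_aim[1:] != head_aim[:-1]))
```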
Figure 8: a) At 50% slowly-varying additive noise, our network requires up to 220x fewer spike-triggered decisions, while suffering less than 1% loss in performance compared to a PID control
algorithm. Superior performance can be obtained at higher spike rates of ~100 Hz and ~1000 Hz,
and even these accelerated networks require 20x and 10x fewer decisions respectively (a decision
corresponds to a change in the head aim). b) Exemplary tracks traced by the SNN (blue) and the PID
particle (black) in a forest of obstacles (red dots) with sparse targets (green dots).
9 Conclusion
We have devised an azimuth detection spiking neural network for navigation and target tracking,
inspired by the echolocating bat. Our network can track large angular targets (60°) within 1 sec
with a 10% mean RMS error, capturing the main features of observed biological behavior. Our
network's performance is highly resilient to additive noise in the input, and it exhibits efficient decision
making while navigating and tracking targets in a forest of obstacles. Our SNN-based model, which
mimics several aspects of biological information processing, requires over 200x fewer decisions while
suffering < 1% loss in performance compared to a standard proportional-integral-derivative based
controller. We thus demonstrate that appropriately engineered neural information processing systems
can outperform conventional control algorithms in real-life noisy environments.
Acknowledgments
This research was supported in part by the CAMPUSENSE project grant from CISCO Systems Inc.
References
[1] C. F. Moss and S. R. Sinha. Neurobiology of echolocation in bats. Current Opinion in Neurobiology, 13(6):751–758, 2003.
[2] N. Suga. Biosonar and neural computation in bats. Scientific American, 262(6):60–68, 1990.
[3] J. M. Wotton, T. Haresign, M. J. Ferragamo, and J. A. Simmons. Sound source elevation and external
ear cues influence the discrimination of spectral notches by the big brown bat, Eptesicus fuscus.
Journal of the Acoustical Society of America, 100(3):1764–1776, 1996.
[4] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[5] D. Silver, A. Huang, et al. Mastering the game of Go with deep neural networks and tree search.
Nature, 529:484–489, 2016.
[6] E. R. Kandel. Nobel lecture, physiology or medicine, 2000.
[7] H. Markram, W. Gerstner, and P. J. Sjöström. Spike-timing-dependent plasticity: A comprehensive
overview. Frontiers in Synaptic Neuroscience, 4:2, 2012.
[8] W. Maass. Networks of spiking neurons: The third generation of neural network models.
Neural Networks, 10(9):1659–1671, 1997.
[9] R. Z. Shi and T. K. Horiuchi. A neuromorphic VLSI model of bat interaural level difference
processing for azimuthal echolocation. Pages 74–88, 2007.
[10] Romain Brette and Wulfram Gerstner. Adaptive exponential integrate-and-fire model as an
effective description of neuronal activity. Journal of Neurophysiology, 94(5):3637–3642, 2005.
[11] R. M. Burger and G. D. Pollak. Reversible inactivation of the dorsal nucleus of the lateral
lemniscus reveals its role in the processing of multiple sound sources. Journal of Neuroscience, 21(13):4830, 2001.
[12] B. Hayward and R. Davis. Flight speeds in western bats. 45(2):236, 1964.
[13] C. F. Moss and A. Surlykke. Probing the natural scene by echolocation in bats. 2010.
[14] J. Ostwald, H.-U. Schnitzler, and G. Schuller. Target discrimination and target classification in
echolocating bats, page 413. 1988.
| 6558 |@word neurophysiology:1 worsens:1 version:1 proportionality:1 d2:1 simulation:5 r:1 propagate:1 azimuthal:1 incurs:1 mammal:2 thereby:1 n8:1 denoting:1 suppressing:2 past:2 current:1 com:1 gmail:1 readily:1 olive:1 additive:9 realistic:2 periodically:1 plasticity:1 enables:1 motor:1 update:1 discrimination:2 stationary:2 half:2 selected:1 cue:1 nervous:1 short:2 record:1 filtered:1 detecting:1 location:2 firstly:1 five:2 height:2 along:3 consists:2 interaural:5 inter:4 market:1 behavior:7 themselves:1 multi:1 brain:2 inspired:7 detects:1 decreasing:1 snn:21 increasing:3 spain:1 estimating:1 project:2 provided:1 circuit:4 panel:2 hayward:1 burger:1 cm:1 kind:1 suppresses:1 every:5 k2:2 bio:1 control:5 grant:1 before:2 negligible:1 engineering:2 local:2 accordance:1 timing:1 io:1 encoding:18 id:4 subscript:1 path:1 modulation:1 approximately:2 black:1 chose:1 co:3 analytics:1 range:6 suga:1 obeys:2 bat:40 unique:1 directed:1 acknowledgment:1 lecun:1 significantly:1 pre:1 regular:1 refers:1 spite:1 get:1 layered:1 influence:1 conventional:1 center:1 shi:1 go:1 duration:3 simplicity:1 identifying:1 aor:7 avoidance:1 lso:9 coordinate:2 autonomous:1 variation:5 simmons:1 target:25 today:1 tandon:1 play:1 commercial:2 trigger:1 distinguishing:1 romain:1 velocity:1 bottom:2 role:2 observed:5 capture:1 connected:1 autonomously:1 decrease:3 inhibit:1 incremented:1 trade:1 environment:5 dt2:1 dynamic:5 overshoot:1 solving:1 deliver:1 localization:1 creates:1 efficiency:4 translated:2 resolved:1 yash:1 jersey:1 various:2 surrounding:1 train:3 distinct:1 horiuchi:1 effective:2 artificial:5 detected:8 aggregate:1 choosing:2 whose:3 encoded:2 supplementary:4 larger:1 dominating:1 otherwise:1 forwarded:1 ability:4 pollak:1 echo:8 noisy:3 vocalization:1 itself:1 differentiate:1 triggered:1 rr:5 net:3 exemplary:3 ro2:1 description:1 dolphin:1 transmission:2 produce:2 generating:2 silver:1 rotated:1 object:6 help:1 depending:2 develop:1 measured:2 minor:1 received:7 strong:1 implemented:1 implies:1 come:1 quantify:1 direction:4 merged:1 attribute:1 filter:2 stochastic:1 modifying:1 exploration:1 engineered:3 enable:4 material:4 implementing:1 require:1 resilient:2 elevation:1 biological:6 adjusted:1 helping:1 considered:1 ic:14 itd:1 normal:1 great:1 major:2 vary:1 consecutive:1 travel:1 label:2 superposition:1 eptesicus:1 modulating:1 clearly:2 sensor:11 always:1 aim:15 modified:3 inactivation:1 avoid:2 varying:8 encode:1 derived:1 focus:1 kid:1 bernoulli:1 indicates:1 summarizing:1 detect:1 dependent:1 brette:1 vlsi:1 mimicking:1 classification:1 denoted:2 animal:2 spatial:2 summed:2 equal:3 extraction:1 sampling:7 biology:2 whale:1 broad:1 represents:1 identical:3 look:1 fuscus:1 mimic:5 future:1 report:3 stimulus:1 few:1 employ:1 randomly:1 comprehensive:1 fire:2 n1:3 detection:15 organization:1 interest:1 highly:2 evaluation:1 arena:2 navigation:8 analyzed:1 hpi:1 operated:2 stair:1 ambient:1 emit:3 integral:4 closer:1 pulkit:1 tree:1 desired:2 circle:1 causal:1 catching:2 sinha:1 modeling:2 obstacle:20 bombay:1 steer:1 formulates:1 retains:1 neuromorphic:1 cost:1 deviation:2 uniform:12 successful:1 azimuth:21 front:1 connect:1 foraging:3 corrupted:1 clone:4 sensitivity:1 off:1 receiving:1 vastly:1 central:1 ear:18 cisco:1 huang:1 slowly:4 worse:1 cognitive:1 external:1 derivative:4 potential:2 sec:3 coding:1 inc:1 depends:2 stream:2 performed:1 try:1 red:3 start:2 reached:1 capability:2 parallel:1 predator:1 square:1 il:1 ir:2 accuracy:1 variance:1 characteristic:4 identify:1 critically:1 iid:1 
trajectory:4 unaffected:1 history:2 synapsis:4 reach:1 whenever:1 synaptic:5 echolocation:10 frequency:18 naturally:1 attributed:1 gain:4 stop:1 improves:2 amplitude:1 positioned:1 higher:15 dt:1 response:9 specify:1 synapse:1 evaluated:1 though:1 strongly:1 angular:5 flight:1 receives:1 ild:2 reversible:1 lack:1 western:1 impulse:2 effect:5 normalized:1 staircase:1 functioning:2 counterpart:3 former:1 hence:7 assigned:1 brown:1 symmetric:1 maass:1 during:2 hpj:1 game:1 inferior:1 davis:1 noted:1 m:9 demonstrate:2 performs:3 reflection:1 instantaneous:2 superior:5 common:1 specialized:2 rotation:2 spiking:15 rl:6 overview:2 khz:8 discussed:2 echolocating:2 slight:1 measurement:1 refer:1 particle:12 dot:2 moving:2 impressive:1 operating:4 inhibition:3 add:1 recent:1 apart:2 scenario:1 issued:2 success:8 life:1 devise:1 transmitted:1 seen:4 greater:1 steering:1 determine:1 period:7 signal:34 forage:1 clockwise:2 multiple:1 sound:21 reduces:2 match:1 calculation:1 believed:2 divided:2 devised:1 schematic:1 ensuring:1 impact:1 calculates:1 controller:4 metric:1 poisson:17 achieved:1 receive:1 whereas:1 addition:1 fine:1 interval:1 sjostrom:1 diagram:1 median:1 source:33 sends:1 crucial:1 neuromuscular:1 appropriately:1 operate:2 sr:2 hz:14 db:1 emitted:2 presence:4 bengio:1 enough:1 variety:1 identified:3 opposite:2 lesser:4 rms:16 notch:1 effort:1 akin:1 locating:1 cause:2 action:1 deep:2 marvel:1 useful:1 collision:2 listed:1 tune:1 induces:3 schnitzler:1 reduced:2 generate:3 sl:3 avcn:8 percentage:1 outperform:1 inhibitory:3 sign:1 estimated:1 track:6 blue:2 key:1 thereafter:1 demonstrating:1 achieving:1 traced:2 pj:2 prey:5 utilize:1 colliculus:1 orient:1 angle:19 inverse:1 jitter:1 discern:1 almost:1 separation:2 utilizes:1 decision:14 summarizes:1 toa:8 comparable:1 fallow:1 capturing:2 layer:10 distinguish:1 activity:1 strength:1 adapted:1 scene:1 encodes:2 lemniscus:2 generates:1 aspect:3 speed:3 simulate:1 relatively:1 inhibits:1 according:2 poor:1 pink:1 cardioid:5 mastering:1 biologically:2 making:3 explained:1 interference:2 taken:1 computationally:1 pid:10 equation:1 remains:1 turn:9 discus:1 mechanism:1 count:1 fed:1 end:1 available:2 operation:4 junction:1 away:1 spectral:1 rl2:1 voice:1 ferragamo:1 top:2 running:1 ensure:1 include:1 medicine:1 k1:2 move:2 added:1 spike:49 occurs:1 dependence:3 rt:1 exhibit:3 navigating:2 distance:3 separate:1 lateral:3 mph:1 bipin:2 cochlear:1 nobel:1 modeled:2 minimizing:2 executed:1 susceptible:1 rise:1 design:5 neuron:39 benchmark:1 enabling:1 horseshoe:2 anti:1 spiketriggered:1 neurobiology:1 hinton:1 head:21 arbitrary:1 intensity:20 specified:1 barcelona:1 nip:1 pattern:4 navigational:3 green:1 memory:1 suitable:1 natural:1 rely:1 schuller:1 scheme:8 technology:2 axis:1 moss:2 relative:1 loss:3 expect:1 highlight:1 lecture:1 generation:2 proportional:4 filtering:1 remarkable:1 validation:1 nucleus:3 integrate:2 consistent:1 pi:2 prone:1 excitatory:3 placed:3 last:1 free:1 supported:1 side:1 institute:2 wide:1 fall:1 explaining:1 markram:1 sparse:1 depth:1 calculated:2 rich:1 stuck:1 commonly:1 adaptive:2 avg:1 made:1 emitting:2 neurobiological:1 correlating:1 incoming:4 reveals:1 receiver:11 assumed:1 excite:1 spectrum:1 search:1 sonar:1 why:1 table:2 additionally:1 nature:4 channel:3 robust:1 reasonably:1 brisk:1 forest:3 improving:1 gerstner:2 complex:2 necessarily:1 artificially:1 domain:2 main:1 linearly:1 big:1 noise:34 arrival:8 n2:2 suffering:4 neuronal:3 slow:1 probing:1 rr2:1 sub:6 exponential:2 kandel:1 dnll:7 governed:1 third:1 
grained:1 down:1 navigate:1 showing:1 decay:1 divergent:1 adding:2 effectively:1 magnitude:1 illustrates:1 likely:1 snns:1 chattering:1 tracking:14 isotropically:2 ch:2 corresponds:1 determines:2 relies:2 consequently:2 towards:2 replace:1 change:4 directionality:1 aided:1 wulfram:1 except:4 operates:1 averaging:2 specie:1 pas:1 tendency:1 internal:1 latter:1 arises:1 dorsal:2 indian:1 accelerated:2 incorporate:1 evaluate:1 audio:3 avoiding:2 correlated:1 |
6,145 | 6,559 | Sparse Support Recovery with
Non-smooth Loss Functions
Gabriel Peyr?
CNRS, DMA
?cole Normale Sup?rieure
Paris, France 75775
[email protected]
K?vin Degraux
ISPGroup/ICTEAM, FNRS
Universit? catholique de Louvain
Louvain-la-Neuve, Belgium 1348
[email protected]
Jalal M. Fadili
Normandie Univ, ENSICAEN,
CNRS, GREYC,
Caen, France 14050
[email protected]
Laurent Jacques
ISPGroup/ICTEAM, FNRS
Universit? catholique de Louvain
Louvain-la-Neuve, Belgium 1348
[email protected]
Abstract
In this paper, we study the support recovery guarantees of underdetermined sparse
regression using the ℓ1-norm as a regularizer and a non-smooth loss function for
data fidelity. More precisely, we focus in detail on the cases of ℓ1 and ℓ∞ losses,
and contrast them with the usual ℓ2 loss. While these losses are routinely used to
account for either sparse (ℓ1 loss) or uniform (ℓ∞ loss) noise models, a theoretical
analysis of their performance is still lacking. In this article, we extend the existing
theory from the smooth ℓ2 case to these non-smooth cases. We derive a sharp
condition which ensures that the support of the vector to recover is stable to small
additive noise in the observations, as long as the loss constraint size is tuned
proportionally to the noise level. A distinctive feature of our theory is that it also
explains what happens when the support is unstable. While the support is not stable
anymore, we identify an "extended support" and show that this extended support
is stable to small additive noise. To exemplify the usefulness of our theory, we
give a detailed numerical analysis of the support stability/instability of compressed
sensing recovery with these different losses. This highlights different parameter
regimes, ranging from total support stability to progressively increasing support
instability.
1 Introduction
1.1 Sparse Regularization
This paper studies sparse linear regression problems of the form

    y = Φx0 + w,

where x0 ∈ R^n is the unknown vector to estimate, supposed to be non-zero and sparse, w ∈ R^m
is some additive noise and the design matrix Φ ∈ R^{m×n} is in general rank deficient, corresponding to
a noisy underdetermined linear system of equations, i.e., typically in the high-dimensional regime
where m ≪ n. This can also be understood as an inverse problem in imaging sciences, a particular
instance of which being the compressed sensing problem [3], where the matrix Φ is drawn from some
appropriate random matrix ensemble.
In order to recover a sparse vector x0, a popular regularization is the ℓ1-norm, in which case we
consider the following constrained sparsity-promoting optimization problem

    min_{x∈R^n} { ||x||_1  s.t.  ||Φx − y||_α ≤ τ },          (P_τ^α(y))
where for α ∈ [1, +∞], ||u||_α = (Σ_i |u_i|^α)^{1/α} denotes the ℓ_α-norm, and the constraint size τ > 0
should be adapted to the noise level. To avoid trivialities, throughout the paper, we assume that
problem (P_τ^α(y)) is feasible, which is of course the case if τ ≥ ||w||_α. In the special situation where
there is no noise, i.e., w = 0, it makes sense to consider τ = 0 and solve the so-called Lasso [14] or
Basis-Pursuit problem [4], which is independent of α, and reads

    min_x { ||x||_1  s.t.  Φx = Φx0 }.          (P^0(Φx0))
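For concreteness, a minimal sketch of how (P_τ^α(y)) can be solved numerically with an off-the-shelf conic solver (CVXPY here, while the paper uses CVX/MOSEK; the data and sizes are illustrative assumptions):

```python
import cvxpy as cp
import numpy as np

def solve_P_tau_alpha(Phi, y, tau, alpha):
    """Solve min ||x||_1 s.t. ||Phi x - y||_alpha <= tau, alpha in {1, 2, 'inf'}."""
    x = cp.Variable(Phi.shape[1])
    prob = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                      [cp.norm(Phi @ x - y, alpha) <= tau])
    prob.solve()
    return x.value

# Illustrative use on a small random instance
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 20))
x0 = np.zeros(20); x0[:3] = 1.0
x_tau = solve_P_tau_alpha(Phi, Phi @ x0, tau=1e-3, alpha='inf')
```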
The case α = 2 corresponds to the usual ℓ2 loss function, which entails a smooth constraint set, and
has been studied in depth in the literature (see Section 1.6 for an overview). In contrast, the cases
α ∈ {1, +∞} correspond to very different setups, where the loss function || · ||_α is polyhedral and
non-smooth. They are expected to lead to significantly different estimation results and require to
develop novel theoretical results, which is the focus of this paper. The case α = 1 corresponds to
a "robust" loss function, and is important to cope with impulse noise or outliers contaminating the
data (see for instance [11, 13, 9]). At the extreme opposite, the case α = +∞ is typically used to
handle uniform noise such as in quantization (see for instance [10]). This paper studies the stability
of the support supp(x_τ) of minimizers x_τ of (P_τ^α(y)). In particular, we provide a sharp analysis
for the polyhedral cases α ∈ {1, +∞} that allows one to control the deviation of supp(x_τ) from
supp(x0) if ||w||_α is not too large and τ is chosen proportionally to ||w||_α. The general case is studied
numerically in a compressed sensing experiment where we compare supp(x_τ) and supp(x0) for
α ∈ [1, +∞].
1.2 Notations
The support of x0 is noted I = supp(x0) where supp(u) = {i | u_i ≠ 0}. The saturation support
of a vector is defined as sat(u) = {i | |u_i| = ||u||_∞}. The sub-differential of a convex function f
is denoted ∂f. The subspace parallel to a nonempty convex set C is par(C) = R(C − C). A^⊤ is the
transpose of a matrix A and A^+ is the Moore-Penrose pseudo-inverse of A. Id is the identity matrix
and δ_i the canonical vector of index i. For a subspace V ⊆ R^n, P_V is the orthogonal projector onto
V. For sets of indices S and I, we denote Φ_{S,I} the submatrix of Φ restricted to the rows indexed
by S and the columns indexed by I. When all rows or all columns are kept, a dot replaces the
corresponding index set (e.g., Φ_{·,I}). We denote Φ^⊤_{S,I} = (Φ_{S,I})^⊤, i.e. the transposition is applied after
the restriction.
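These notations translate directly into code; a small sketch (the helper names are ours) used in the examples below:

```python
import numpy as np

def supp(u, tol=1e-9):
    """Support: indices of (numerically) non-zero entries."""
    return np.flatnonzero(np.abs(u) > tol)

def sat(u, tol=1e-9):
    """Saturation support: indices where |u_i| reaches ||u||_inf."""
    return np.flatnonzero(np.abs(u) >= np.abs(u).max() - tol)
```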
1.3 Dual Certificates
Before diving into our theoretical contributions, we first give important definitions. Let D_{x0} be the
set of dual certificates (see, e.g., [17]) defined by

    D_{x0} = {p ∈ R^m | Φ^⊤ p ∈ ∂||x0||_1} = {p ∈ R^m | Φ^⊤_{·,I} p = sign(x_{0,I}), ||Φ^⊤ p||_∞ ≤ 1}.    (1)

The first order optimality condition (see, e.g., [12]) states that x0 is a solution of (P^0(Φx0)) if and
only if D_{x0} ≠ ∅. Assuming this is the case, our main theoretical finding (Theorem 1) states that the
stability (and instability) of the support of x0 is characterized by the following specific subset of
certificates

    p_β ∈ Argmin_{p∈D_{x0}} ||p||_β    where    1/α + 1/β = 1.    (2)

We call such a certificate p_β a minimum norm certificate. Note that for 1 < α < +∞, this p_β is
actually unique but that for α ∈ {1, ∞} it might not be the case.
Associated to such a minimal norm certificate, we define the extended support as

    J = sat(Φ^⊤ p_β) = {i ∈ {1, . . . , n} | |(Φ^⊤ p_β)_i| = 1}.    (3)

When the certificate p_β from which J is computed is unclear from the context, we write it explicitly
as an index J_{p_β}. Note that, from the definition of D_{x0}, one always has I ⊆ J. Intuitively, J indicates
the set of indexes that will be activated in the signal estimate when a small noise w is added to the
observation, and thus the situation when I = J corresponds to the case where the support of x0 is
stable.
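A minimal numerical sketch of (2)–(3) (the function name is ours; the saturation tolerance is an assumption needed for floating-point arithmetic):

```python
import cvxpy as cp
import numpy as np

def min_norm_certificate(Phi, x0, beta):
    """Compute p_beta in Argmin ||p||_beta over the dual certificates (1),
    then the extended support J = sat(Phi^T p_beta) as in (3)."""
    I = np.flatnonzero(x0)
    p = cp.Variable(Phi.shape[0])
    constraints = [Phi[:, I].T @ p == np.sign(x0[I]),
                   cp.norm(Phi.T @ p, 'inf') <= 1]
    cp.Problem(cp.Minimize(cp.norm(p, beta)), constraints).solve()
    corr = Phi.T @ p.value
    J = np.flatnonzero(np.abs(corr) > 1 - 1e-6)   # saturation up to tolerance
    return p.value, J
```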
Fig. 1: Model tangent subspace T_β in R^2 for (α, β) = (∞, 1).
1.4 Lagrange multipliers and restricted injectivity conditions
In the case of noiseless observations (w = 0) and when τ > 0, the following general lemma,
whose proof can be found in Section 2, associates to a given dual certificate p_β an explicit solution of
(P_τ^α(Φx0)). This formula depends on a so-called Lagrange multiplier vector v_β ∈ R^n, which will
be instrumental to state our main contribution (Theorem 1). Note that this lemma is valid for any
α ∈ [1, ∞]. Even though this goes beyond the scope of our main result, one can use the same lemma
for an arbitrary ℓ_α-norm for α ∈ [1, ∞] (see Section 3) or for even more general loss functions.
Lemma 1 (Noiseless solution). We assume that x0 is identifiable, i.e. it is a solution to (P^0(Φx0)),
and consider τ > 0. Then there exists a v_β ∈ R^n supported on J such that

    Φ_{·,J} v_{β,J} ∈ ∂||p_β||_β    and    −sign(v_{β,J̃}) = Φ^⊤_{·,J̃} p_β,

where we denoted J̃ = J\I. If τ is such that 0 < τ < x̲ / ||v_{β,I}||_∞, with x̲ = min_{i∈I} |x_{0,i}|, then a
solution x̂_τ of (P_τ^α(Φx0)) with support equal to J is given by

    x̂_{τ,J} = x_{0,J} − τ v_{β,J}.

Moreover, its entries have the same sign as those of x0 on its support I, i.e., sign(x̂_{τ,I}) = sign(x_{0,I}).
An important question that arises is whether v_β can be computed explicitly. For this, let us define the
model tangent subspace T_β = par(∂||p_β||_β)^⊥, i.e., T_β is the orthogonal complement of the subspace parallel to
∂||p_β||_β, which uniquely defines the model vector, e_β = P_{T_β} ∂||p_β||_β, as shown on Figure 1 (see [17]
for details). Using this notation, v_{β,J} is uniquely defined and expressed in closed form as

    v_{β,J} = (P_{T_β} Φ_{·,J})^+ e_β    (4)

if and only if the following restricted injectivity condition holds

    Ker(P_{T_β} Φ_{·,J}) = {0}.    (INJ_α)
For the special case (α, β) = (∞, 1), the following lemma, proved in Section 2, gives easily verifiable
sufficient conditions which ensure that (INJ_α) holds. The notation S = supp(p_1) is used.
Lemma 2 (Restricted injectivity for α = ∞). Assume x0 is identifiable and Φ_{S,J} has full rank. If

    s_J ∉ Im(Φ^⊤_{S′,J})  ∀S′ ⊂ {1, . . . , m}, |S′| < |J|,    and
    q_S ∉ Im(Φ_{S,J′})  ∀J′ ⊂ {1, . . . , n}, |J′| < |S|,

where s_J = Φ^⊤_{·,J} p_1 ∈ {−1, 1}^{|J|} and q_S = sign(p_{1,S}) ∈ {−1, 1}^{|S|}, then |S| = |J| and Φ_{S,J} is
invertible, i.e., since P_{T_1} Φ_{·,J} = Id_{·,S} Φ_{S,J}, (INJ_∞) holds.
Remark 1. If Φ is randomly drawn from a continuous distribution with i.i.d. entries, e.g., Gaussian,
then as soon as x0 is identifiable, the conditions of Lemma 2 hold with probability 1 over the
distribution of Φ.
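In practice, for a Gaussian Φ the conclusion of Lemma 2 can be checked directly; a sketch (assuming p1 and J have been computed, e.g., with the certificate code above):

```python
import numpy as np

def check_restricted_injectivity(Phi, p1, J, tol=1e-9):
    """Check the conclusion of Lemma 2: |S| = |J| and Phi_{S,J} invertible."""
    S = np.flatnonzero(np.abs(p1) > tol)    # S = supp(p_1)
    if len(S) != len(J):
        return False
    sub = Phi[np.ix_(S, J)]                 # the square submatrix Phi_{S,J}
    return np.linalg.matrix_rank(sub) == len(J)
```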
For (α, β) = (1, ∞), we define Z = sat(p_∞),

    Θ = [ sign(p_{∞,Z})^⊤ Id_{Z,·} ; Id_{Z^c,·} ]    and    Φ̃ = Θ Φ_{·,J}.

Following similar reasoning as in Lemma 2 and Remark 1, we can reasonably assume that |Z^c| + 1 =
|J| and Φ̃ is invertible. In that case, (INJ_1) holds as Ker(P_{T_∞} Φ_{·,J}) = Ker(Φ̃). Table 1 summarizes
for the three specific cases α ∈ {1, 2, +∞} the quantities introduced here.
Table 1: Model tangent subspace, restricted injectivity condition and Lagrange multipliers.

  α    T_β                                       (INJ_α)               (P_{T_β} Φ_{·,J})^+      v_{β,J}
  2    R^m                                       Ker(Φ_{·,J}) = {0}    Φ^+_{·,J}                Φ^+_{·,J} p_2 / ||p_2||_2
  ∞    {u | supp(u) = S}                         Ker(Φ_{S,J}) = {0}    Φ^{−1}_{S,J} Id_{S,·}    Φ^{−1}_{S,J} sign(p_{1,S})
  1    {u | u_Z = ρ sign(p_{∞,Z}), ρ ∈ R}        Ker(Φ̃) = {0}          Φ̃^{−1} Θ                 Φ̃^{−1} δ_{|J|}
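For the smooth case α = 2, the first row of Table 1 is essentially a one-liner; a sketch (p2 computed as in (2)):

```python
import numpy as np

def lagrange_multiplier_l2(Phi, J, p2):
    """v_{2,J} = Phi_{.,J}^+ p_2 / ||p_2||_2 (first row of Table 1)."""
    return np.linalg.pinv(Phi[:, J]) @ (p2 / np.linalg.norm(p2))
```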
Fig. 2: (best observed in color) Simulated compressed sensing example showing x_τ (above) for
increasing values of τ and random noise w respecting the hypothesis of Theorem 1, and Φ^⊤ p_β (below),
which predicts the support of x_τ when τ > 0.
1.5 Main result
Our main contribution is Theorem 1 below. A similar result is known to hold in the case of the smooth
ℓ2 loss (α = 2, see Section 1.6). Our paper extends it to the more challenging case of non-smooth
losses α ∈ {1, +∞}. The proof for α = +∞ is detailed in Section 2. It is important to emphasize
that the proof strategy is significantly different from the classical approach developed for α = 2,
mainly because of the lack of smoothness of the loss function. The proof for α = 1 follows a similar
structure, and due to space limitation, it can be found in the supplementary material.
Theorem 1. Let α ∈ {1, 2, +∞}. Suppose that x0 is identifiable, and let p_β be a minimal norm
certificate (see (2)) with associated extended support J (see (3)). Suppose that the restricted injectivity
condition (INJ_α) is satisfied so that v_{β,J} can be explicitly computed (see (4)). Then there exist
constants c1, c2 > 0 depending only on Φ and p_β such that, for any (w, τ) satisfying

    ||w||_α < c1 τ    and    τ ≤ c2 x̲    where    x̲ = min_{i∈I} |x_{0,i}|,    (5)

a solution x_τ of (P_τ^α(Φx0 + w)) with support equal to J is given by

    x_{τ,J} = x_{0,J} + (P_{T_β} Φ_{·,J})^+ w − τ v_{β,J}.    (6)
This theorem shows that if the signal-to-noise ratio is large enough and τ is chosen in proportion
to the noise level ||w||_α, then there is a solution supported exactly on the extended support J. Note
in particular that this solution (6) has the correct sign pattern sign(x_{τ,I}) = sign(x_{0,I}), but might
exhibit outliers if J̃ = J\I ≠ ∅. The special case I = J characterizes the exact support stability
("sparsistency"), and in the case α = 2, the assumptions involving the dual certificate correspond to a
condition often referred to as the "irrepresentable condition" in the literature (see Section 1.6).
In Section 3, we propose numerical simulations to illustrate our theoretical findings on a compressed
sensing (CS) scenario. Using Theorem 1, we are able to numerically assess the degree of support
instability of CS recovery using ℓ_α fidelity. As a prelude to shed light on this result, we show on
Figure 2 a smaller simulated CS example for (α, β) = (∞, 1). The parameters are n = 20, m = 10
and |I| = 4, and x0 and Φ are generated as in the experiment of Section 3; we use CVX/MOSEK
[8, 7] at best precision to solve the optimization programs. First, we observe that x0 is indeed
identifiable by solving (P^0(Φx0)). Then we solve (2) to compute p_β and predict the extended support
J. Finally, we add uniformly distributed noise w with w_i ~ U(−δ, δ) i.i.d. and δ chosen appropriately
to ensure that the hypotheses hold, and we solve (P_τ^α(y)). Observe that as we increase τ, new non-zero
entries appear in x_τ but, because w and τ are small enough, as predicted, we have supp(x_τ) = J.
Let us now comment on the limitations of our analysis. First, this result does not trivially extend to
the general case α ∈ [1, +∞] as there is, in general, no simple closed form for x_τ. A generalization
would require more material and is out of the scope of this paper. Nevertheless, our simulations in
Section 3 stand for arbitrary α ∈ [1, +∞], which is why the general formulation was presented.
Second, the larger noise regime, though interesting, is also out of the scope. Let us note that no other
results in the literature (even for ℓ2) provide any insight about sparsistency in the large noise regime.
In that case, we are only able to provide bounds on the distance between x0 and the recovered vector,
but this is the subject of a forthcoming paper.
Finally, our work is agnostic with respect to the noise models. Being able to distinguish between
different noise models would require further analysis of the constants involved and some additional
constraints on Φ. However, our result is a big step towards the understanding of the solutions' behavior
and can be used in this analysis.
1.6 Relation to Prior Works
To the best of our knowledge, Theorem 1 is the first to study the support stability guarantees of
minimizing the ℓ1-norm with a non-smooth loss function, and in particular here the ℓ1 and ℓ∞ losses.
The smooth case α = 2 is however much more studied, and in particular, the associated support
stability results we state here are now well understood. Note that most of the corresponding literature
studies in general the penalized form, i.e., min_x (1/2)||Φx − y||² + λ||x||_1, instead of our constrained
formulation (P_τ^α(y)). In the case α = 2, since the loss is smooth, this distinction is minor and the
proof is almost the same for both settings. However, for α ∈ {1, +∞}, it is crucial to study the
constrained problems to be able to state our results. The support stability (also called "sparsistency",
corresponding to the special case I = J of our result) of (P_τ^α(y)) in the case α = 2 has been proved
by several authors in slightly different setups. In the signal processing literature, this result can be
traced back to the early work of J-J. Fuchs [6] who showed Theorem 1 when α = 2 and I = J. In
the statistics literature, sparsistency is also proved in [19] in the case where Φ is random, the result of
support stability being then claimed with high probability. The condition that I = J, i.e., that the
minimal norm certificate p_β (for α = β = 2) is saturating only on the support, is often coined the
"irrepresentable condition" in the statistics and machine learning literature. These results have been
extended recently in [5] to the case where the support I is not stable, i.e. I ⊊ J. One could also cite
[15], whose results are somewhat connected but are restricted to the ℓ2 loss and do not hold in our
case. Note that "sparsistency"-like results have been proved for many "low-complexity" regularizers
beyond the ℓ1-norm. Let us quote among others: the group-lasso [1], the nuclear norm [2], the total
variation [16] and a very general class of "partly-smooth" regularizers [17]. Let us also point out
that one of the main sources of application of these results is the analysis of the performance of
compressed sensing problems, where the randomness of Φ allows to derive sharp sample complexity
bounds as a function of the sparsity of x0 and n, see for instance [18]. Let us also stress that these
support recovery results are different from those obtained using tools such as the Restricted Isometry
Property and the like (see for instance [3]) in many respects. For instance, the guarantees they provide
are uniform (i.e., they hold for any sparse enough vector x0), though they usually lead to quite
pessimistic worst-case bounds, and the stability is measured in the ℓ2 sense.
2 Proof of Theorem 1
In this section, we prove the main result of this paper. For the sake of brevity, when part of the proof
becomes specific to a particular choice of α, we will only write the details for α = ∞. The details
of the proof for α = 1 can be found in the supplementary material.
It can be shown that the Fenchel-Rockafellar dual problem to (P_τ^α(y)) is [12]

    min_{p∈R^m} { −⟨y, p⟩ + τ ||p||_β  s.t.  ||Φ^⊤ p||_∞ ≤ 1 }.    (D_τ^β(y))

From the corresponding (primal-dual) extremality relations, one can deduce that (x̂, p̂) is an optimal
primal-dual Kuhn-Tucker pair if, and only if,

    Φ^⊤_{·,Î} p̂ = sign(x̂_Î)    and    ||Φ^⊤ p̂||_∞ ≤ 1,    (7)

where Î = supp(x̂), and

    y − Φx̂ ∈ τ ∂||p̂||_β.    (8)
The first relationship comes from the sub-differential of the ℓ1 regularization term while the second is
specific to the particular choice of α for the ℓ_α-norm data fidelity constraint. We start by proving
Lemma 1 and Lemma 2.
Proof of Lemma 1. Let us rewrite the problem (2) by introducing the auxiliary variable η = Φ^⊤ p
as

    min_{p,η} { ||p||_β + ι_{B_∞}(η) | η = Φ^⊤ p, η_I = sign(x_{0,I}) },    (9)

where ι_{B_∞} is the indicator function of the unit ℓ∞ ball. Define the Lagrange multipliers v and z_I and
the associated Lagrangian function

    L(p, η, v, z_I) = ||p||_β + ι_{B_∞}(η) + ⟨v, η − Φ^⊤ p⟩ + ⟨z_I, η_I − sign(x_{0,I})⟩.

Defining z_{I^c} = 0, the first order optimality conditions (generalized KKT conditions) for p and η read

    Φv ∈ ∂||p||_β    and    −v − z ∈ ∂ι_{B_∞}(η),
From the normal cone of B_∞ at η on its boundary, the second condition is

    −v − z ∈ {u | u_{J^c} = 0, sign(u_J) = η_J },

where J = sat(η) = sat(Φ^⊤ p). Since I ⊆ J, v is supported on J. Moreover, on J̃ = J\I, we
have −sign(v_{J̃}) = η_{J̃}. As p_β is a solution to (9), we can define a corresponding vector of Lagrange
multipliers v_β supported on J such that −sign(v_{β,J̃}) = Φ^⊤_{·,J̃} p_β and Φ_{·,J} v_{β,J} ∈ ∂||p_β||_β.
To prove the lemma, it remains to show that x̂_τ is indeed a solution to (P_τ^α(y)), i.e., it obeys (7) and
(8) for some dual variable p̂. We will show that this is the case with p̂ = p_β. Observe that p_β ≠ 0 as
otherwise, it would mean that x0 = 0, which contradicts our initial assumption of non-zero x0. We
can then directly see that (8) is satisfied. Indeed, noting y0 = Φx0, we can write

    y0 − Φ_{·,J} x̂_{τ,J} = τ Φ_{·,J} v_{β,J} ∈ τ ∂||p_β||_β.

By definition of p_β, we have ||Φ^⊤ p_β||_∞ ≤ 1. In addition, it must satisfy Φ^⊤_{·,J} p_β = sign(x̂_{τ,J}). Outside
I, the condition is always satisfied since −sign(v_{β,J̃}) = Φ^⊤_{·,J̃} p_β. On I, we know that Φ^⊤_{·,I} p_β =
sign(x_{0,I}). The condition on τ is thus |x_{0,i}| > τ |v_{β,i}|, ∀i ∈ I, or equivalently, τ < x̲ / ||v_{β,I}||_∞.
Proof of Lemma 2. As established by Lemma 1, the existence of p_1 and of v_1 is implied by the
identifiability of x0. We have the following:

    ∃p_1 ⟹ ∃p_S, Φ^⊤_{S,J} p_S = s_J ⟹ Φ^⊤_{S,J} is surjective ⟹ |S| ≥ |J|,
    ∃v_1 ⟹ ∃v_J, Φ_{S,J} v_J = q_S ⟹ Φ_{S,J} is surjective ⟹ |J| ≥ |S|.

To clarify, we detail the first line. Since Φ^⊤_{S,J} is full rank, |S| ≥ |J| is equivalent to surjectivity.
Assume Φ^⊤_{S,J} is not surjective so that |S| < |J|; then s_J ∉ Im(Φ^⊤_{S,J}) and the over-determined system
Φ^⊤_{S,J} p_S = s_J has no solution in p_S, which contradicts the existence of p_1. Now assume Φ^⊤_{S,J} is
surjective; then we can take p_S = Φ^{⊤,†}_{S,J} s_J as a solution, where Φ^{⊤,†}_{S,J} is any right-inverse of Φ^⊤_{S,J}. This
proves that Φ_{S,J} is invertible.
We are now ready to prove the main result in the particular case α = ∞.
Proof of Theorem 1 (α = ∞). Our proof consists in constructing a vector supported on J, obeying
the implicit relationship (6), and which is indeed a solution to (P_τ^∞(Φx0 + w)) for an appropriate
regime of the parameters (τ, ||w||_∞). Note that we assume that the hypothesis of Lemma 2 on Φ holds
and in particular, Φ_{S,J} is invertible. When (α, β) = (∞, 1), the first order condition (8), which holds
for any optimal primal-dual pair (x, p), reads, with S_p = supp(p),

    y_{S_p} − Φ_{S_p,·} x = τ sign(p_{S_p})    and    ||y − Φx||_∞ ≤ τ.    (10)

One should then look for a candidate primal-dual pair (x̂, p̂) such that supp(x̂) = J and satisfying

    y_{S_p̂} − Φ_{S_p̂,J} x̂_J = τ sign(p̂_{S_p̂}).    (11)

We now need to show that the first order conditions (7) and (10) hold for some p = p̂ solution of
the "perturbed" dual problem (D_τ^1(Φx0 + w)) with x = x̂. Actually, we will show that under the
conditions of the theorem, this holds for p̂ = p_1, i.e., p_1 is solution of (D_τ^1(Φx0 + w)), so that

    x̂_J = Φ^{−1}_{S,J} y_S − τ Φ^{−1}_{S,J} sign(p_{1,S}) = x_{0,J} + Φ^{−1}_{S,J} w_S − τ v_{1,J}.

Let us start by proving the equality part of (7), Φ^⊤_{S,J} p̂_S = sign(x̂_J). Since Φ_{S,J} is invertible, we have
p̂_S = p_{1,S} if and only if sign(x̂_J) = Φ^⊤_{S,J} p_{1,S}. Noting Id_{I,J} the restriction from J to I, we have

    sign( x_{0,I} + Id_{I,J} (Φ^{−1}_{S,J} w_S − τ v_{1,I}) ) = sign(x_{0,I})

as soon as

    |(Φ^{−1}_{S,J} w_S − τ v_1)_i| < |x_{0,i}|    ∀i ∈ I.

It is sufficient to require

    ||Id_{I,J} (Φ^{−1}_{S,J} w_S − τ v_{1,I})||_∞ < x̲,    i.e.,    ||Φ^{−1}_{S,J}||_{∞,∞} ||w||_∞ + τ ||v_{1,I}||_∞ < x̲,

with x̲ = min_{i∈I} |x_{0,i}|. Injecting the fact that ||w||_∞ < c1 τ (the value of c1 will be derived later), we
get the condition
    τ (b c1 + ν) ≤ x̲,

with b = ||Φ^{−1}_{S,J}||_{∞,∞} and ν = ||v_1||_∞ ≤ b. Rearranging the terms, we obtain

    τ ≤ x̲ / (b c1 + ν) = c2 x̲,

which guarantees sign(x̂_I) = sign(x_{0,I}). Outside I, defining Id_{J̃,J} as the restriction from J to J̃,
we must have

    Φ^⊤_{S,J̃} p_{1,S} = sign( Id_{J̃,J} (Φ^{−1}_{S,J} w_S − τ v_{1,J̃}) ).

From Lemma 1, we know that −sign(v_{1,J̃}) = Φ^⊤_{S,J̃} p_{1,S}, so that the condition is satisfied as soon as

    |(Φ^{−1}_{S,J} w_S)_j| < τ |v_{1,j}|    ∀j ∈ J̃.

Noting v̲ = min_{j∈J̃} |v_{1,j}|, we get the sufficient condition for (7):

    ||Φ^{−1}_{S,J} w_S||_∞ < τ v̲,    i.e.,    ||w||_∞ < τ v̲ / b.    (c1a)
We can now verify (10). From (11) we see that the equality part is satisfied on S. Outside S, we have

    y_{S^c} − Φ_{S^c,·} x̂ = w_{S^c} − Φ_{S^c,J} Φ^{−1}_{S,J} w_S + τ Φ_{S^c,J} v_{1,J},

which must be smaller than τ, i.e.,

    ||w_{S^c} − Φ_{S^c,J} Φ^{−1}_{S,J} w_S + τ Φ_{S^c,J} v_{1,J}||_∞ ≤ τ.

It is thus sufficient to have

    (1 + ||Φ_{S^c,J} Φ^{−1}_{S,J}||_{∞,∞}) ||w||_∞ + τ ρ ≤ τ,

with ρ = ||Φ_{S^c,J} v_{1,J}||_∞. Noting a = ||Φ_{S^c,J} Φ^{−1}_{S,J}||_{∞,∞}, we get

    ||w||_∞ ≤ (1 − ρ) τ / (1 + a).    (c1b)

(c1a) and (c1b) together give the value of c1. This ensures that the inequality part of (10) is satisfied
for x̂, and with that, that x̂ is a solution to (P_τ^∞(Φx0 + w)) and p_1 a solution to (D_τ^1(Φx0 + w)), which
concludes the proof.
Remark 2. From Lemma 1, we know that in all generality ρ ≤ 1. If the inequality were saturated, it
would mean that c1 = 0 and no noise would be allowed. Fortunately, it is easy to prove that under a
mild assumption on Φ, similar to the one of Lemma 2 (which holds with probability 1 for Gaussian
matrices), the inequality is strict, i.e., ρ < 1.
3 Numerical experiments
In order to illustrate support stability in Lemma 1 and Theorem 1, we address numerically the
problem of comparing supp(x_τ) and supp(x0) in a compressed sensing setting. Theorem 1 shows
that supp(x_τ) does not depend on w (as long as it is small enough); simulations thus do not involve
noise. All computations are done in Matlab, using CVX [8, 7] with the MOSEK solver at "best"
precision setting to solve the convex problems. We set n = 1000, m = 900 and generate 200 times a
random sensing matrix Φ ∈ R^{m×n} with Φ_{ij} ~ N(0, 1) i.i.d. For each sensing matrix, we generate
60 different k-sparse vectors x0 with support I, where k = |I| varies from 10 to 600. The non-zero
entries of x0 are randomly picked in {±1} with equal probability. Note that this choice does not
impact the result because the definition of J_{p_β} only depends on sign(x0) (see (1)). It will only affect
the bounds in (5). For each case, we verify that x0 is identifiable and for α ∈ {1, 2, ∞} (which
corresponds to β ∈ {∞, 2, 1}), we compute the minimum ℓ_β-norm certificate p_β, solution to (2), and
in particular, the support excess J̃_{p_β} = sat(Φ^⊤ p_β)\I. It is important to emphasize that there is no
noise in these simulations. As long as the hypotheses of the theorem are satisfied, we can predict that
supp(x_τ) = J̃_{p_β} ∪ I without actually computing x_τ, or choosing τ, or generating w.
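A condensed sketch of this experiment in Python (CVXPY in place of CVX/MOSEK; sizes reduced for illustration, and `min_norm_certificate` is the helper sketched in Section 1.3):

```python
import numpy as np

def support_excess_experiment(n=100, m=90, k=10, beta=1, seed=None):
    """Draw a Gaussian sensing matrix and a k-sparse sign vector, compute the
    minimal-norm certificate p_beta and return the support excess |J \\ I|."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, n))
    I = rng.choice(n, size=k, replace=False)
    x0 = np.zeros(n)
    x0[I] = rng.choice([-1.0, 1.0], size=k)
    p_beta, J = min_norm_certificate(Phi, x0, beta)  # from the Section 1.3 sketch
    return len(set(J) - set(I))
```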
Fig. 3: (best observed in color) Sweep over s_e ∈ {0, 10, ...} of the empirical probability, as a function
of the sparsity k, that x0 is identifiable and |J̃_{p_∞}| ≤ s_e (left), |J̃_{p_2}| ≤ s_e (middle) or |J̃_{p_1}| ≤ s_e
(right). The bluest corresponds to s_e = 0 and the reddest to the maximal empirical value of |J̃_{p_β}|.
Fig. 4: (best observed in color) Sweep over 1/β ∈ [0, 1] of the empirical probability, as a function of k,
that x0 is identifiable and |J̃_{p_β}| ≤ s_e, for three values of s_e. The dotted red line indicates α = 2.
We define a support excess threshold s_e ∈ N varying from 0 to ∞. On Figure 3 we plot the probability
that x0 is identifiable and |J̃_{p_β}|, the cardinality of the predicted support excess, is smaller than or equal
to s_e. It is interesting to note that the probability that |J̃_{p_1}| = 0 (the bluest horizontal curve on the
right plot) is 0, which means that even for extreme sparsity (k = 10) and a relatively high m/n
rate of 0.9, the support is never predicted as perfectly stable for α = ∞ in this experiment. We can
observe, as a rule of thumb, that a support excess of |J̃_{p_1}| ≈ k is much more likely. In comparison, ℓ2
recovery provides a much more likely perfect support stability for k not too large, and the expected
size of J̃_{p_2} increases more slowly with k. Finally, we can comment that the support stability with ℓ1 data
fidelity is in between. It is possible to recover the support perfectly, but the requirement on k is a bit
more restrictive than with ℓ2 fidelity.
As previously noted, Lemma 1 and its proof remain valid for smooth loss functions such as the
ℓ_α-norm when α ∈ (1, ∞). Therefore, it makes sense to compare the results with the ones obtained
for α ∈ (1, ∞). On Figure 4 we display the result of the same experiment but with 1/β as the
vertical axis. To realize the figure, we compute p_β and J̃_{p_β} for β corresponding to 41 equispaced
values of 1/β ∈ [0, 1]. The probability that |J̃_{p_β}| ≤ s_e is represented by the color intensity. The three
different plots correspond to three different values of s_e. On this figure, the yellow-to-blue transition
can be interpreted as the maximal k ensuring, with high probability, that |J̃_{p_β}| does not exceed s_e. It
is always (for all s_e) furthest to the right at α = 2. This means that the ℓ2 data fidelity constraint provides
the highest support stability. Interestingly, we can observe that this maximal k decreases gracefully
as α moves away from 2 in one way or the other. Finally, as already observed on Figure 3, we see
that, especially when s_e is small, the ℓ1 loss function has a small advantage over the ℓ∞ loss.
4 Conclusion
In this paper, we provided sharp theoretical guarantees for stable support recovery under small enough
noise by ℓ1 minimization with non-smooth loss functions. Unlike the classical setting where the data
loss is smooth, our analysis reveals the difficulties arising from non-smoothness, which necessitated a
novel proof strategy. Though we focused here on the case of ℓ_α data loss functions, for α ∈ {1, 2, ∞},
our analysis can be extended to more general non-smooth losses, including coercive gauges. This
will be our next milestone.
Acknowledgments
KD and LJ are funded by the Belgian F.R.S.-FNRS. JF is partly supported by the Institut Universitaire de
France. GP is supported by the European Research Council (ERC project SIGMA-Vision).
References
[1] F.R. Bach. Consistency of the group Lasso and multiple kernel learning. Journal of Machine Learning
Research, 9:1179–1225, 2008.
[2] F.R. Bach. Consistency of trace norm minimization. Journal of Machine Learning Research, 9:1019–1048,
2008.
[3] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
[4] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic Decomposition by Basis Pursuit. SIAM Journal
on Scientific Computing, 20(1):33–61, 1998.
[5] V. Duval and G. Peyré. Sparse spikes deconvolution on thin grids. Preprint 01135200, HAL, 2015.
[6] J.-J. Fuchs. On sparse representations in arbitrary redundant bases. IEEE Transactions on Information
Theory, 50(6):1341–1344, 2004.
[7] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, and
H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information
Sciences, pages 95–110. Springer-Verlag Limited, 2008. http://stanford.edu/~boyd/graph_dcp.
html.
[8] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http:
//cvxr.com/cvx, March 2014.
[9] L. Jacques. On the optimality of a L1/L1 solver for sparse signal recovery from sparsely corrupted
compressive measurements. Technical Report, TR-LJ-2013.01, arXiv preprint arXiv:1303.5097, 2013.
[10] L. Jacques, D. K. Hammond, and Jalal M. Fadili. Dequantizing Compressed Sensing: When Oversampling
and Non-Gaussian Constraints Combine. IEEE Transactions on Information Theory, 57(1):559–571,
2011.
[11] M. Nikolova. A variational approach to remove outliers and impulse noise. Journal of Mathematical
Imaging and Vision, 20(1), 2004.
[12] R. T. Rockafellar. Conjugate duality and optimization, volume 16. SIAM, 1974.
[13] C. Studer, P. Kuppinger, G. Pope, and H. Bolcskei. Recovery of Sparsely Corrupted Signals. IEEE
Transactions on Information Theory, 58(5):3115–3130, 2012.
[14] R. Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society,
Series B: Statistical Methodology, 58(1):267–288, 1996.
[15] Ryan J. Tibshirani. The lasso problem and uniqueness. Electronic Journal of Statistics, 7:1456–1490,
2013.
[16] S. Vaiter, G. Peyré, C. Dossal, and M.J. Fadili. Robust sparse analysis regularization. IEEE Transactions
on Information Theory, 59(4):2001–2016, 2013.
[17] S. Vaiter, G. Peyré, and J. Fadili. Model consistency of partly smooth regularizers. Preprint 00987293,
HAL, 2014.
[18] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained
quadratic programming (lasso). IEEE Transactions on Information Theory, 55(5):2183–2202, 2009.
[19] P. Zhao and B. Yu. On model selection consistency of Lasso. J. Mach. Learn. Res., 7:2541–2563, December
2006.
| 6559 |@word mild:1 version:1 middle:1 norm:15 instrumental:1 proportion:1 simulation:4 decomposition:1 tr:1 initial:1 series:1 tuned:1 interestingly:1 existing:1 recovered:1 comparing:1 com:1 must:3 realize:1 numerical:3 additive:3 remove:1 plot:3 progressively:1 ysp:2 transposition:1 certificate:12 provides:2 idi:3 mathematical:1 c2:3 differential:2 become:1 prove:4 consists:1 combine:1 polyhedral:2 blondel:1 x0:64 indeed:4 expected:2 behavior:1 p1:22 cand:1 uz:1 solver:2 increasing:2 cardinality:1 spain:1 provided:1 notation:3 moreover:2 project:1 agnostic:1 what:1 argmin:1 interpreted:1 developed:1 compressive:1 coercive:1 finding:2 kimura:1 bolcskei:1 guarantee:5 pseudo:1 shed:1 exactly:1 universit:2 rm:5 milestone:1 control:3 unit:1 grant:2 appear:1 before:1 t1:1 understood:2 mach:1 id:5 laurent:2 might:2 studied:3 bc1:2 challenging:1 limited:1 obeys:1 unique:1 acknowledgment:1 atomic:1 ker:6 jan:2 empirical:3 significantly:2 boyd:4 studer:1 get:3 onto:1 irrepresentable:2 selection:2 romberg:1 context:1 instability:4 restriction:3 equivalent:1 projector:1 lagrangian:1 go:1 fadili:5 convex:5 focused:1 recovery:11 pure:1 q:3 insight:1 rule:1 nuclear:1 stability:14 handle:1 proving:2 variation:1 pt:6 suppose:2 exact:1 programming:2 equispaced:1 hypothesis:4 associate:1 satisfying:2 sparsely:2 predicts:1 observed:4 preprint:3 hv:1 worst:1 ensures:2 connected:1 decrease:1 highest:1 degraux:2 ui:3 respecting:1 complexity:2 depend:1 solving:1 rewrite:1 distinctive:1 basis:2 easily:1 routinely:1 represented:1 caen:1 regularizer:1 univ:1 fnrs:3 universitaire:1 kevin:1 outside:3 choosing:1 saunders:1 whose:2 quite:1 pt1:1 solve:5 supplementary:2 larger:1 stanford:1 otherwise:1 compressed:8 statistic:3 gp:1 noisy:2 advantage:1 propose:1 maximal:3 fr:2 supposed:1 p:5 requirement:1 generating:1 perfect:1 derive:2 develop:1 depending:1 illustrate:2 measured:1 ij:1 minor:1 aug:1 p2:5 auxiliary:1 c:3 predicted:3 come:1 kuhn:1 correct:1 material:3 explains:1 require:4 generalization:1 pessimistic:1 ryan:1 underdetermined:2 im:3 clarify:1 hold:14 normal:1 scope:3 predict:2 early:1 belgium:2 uniqueness:1 estimation:1 injecting:1 quote:1 cole:1 council:1 gauge:1 tool:1 minimization:2 always:3 gaussian:3 normale:1 avoid:1 shrinkage:1 varying:1 jalal:3 derived:1 focus:2 rank:3 indicates:2 mainly:1 contrast:2 sense:3 minimizers:1 cnrs:2 inaccurate:1 typically:2 lj:2 w:11 relation:2 france:3 tao:1 fidelity:6 dual:11 among:1 denoted:2 html:1 constrained:4 special:4 equal:4 never:1 look:1 yu:1 thin:1 mosek:2 others:1 nonsmooth:1 report:1 randomly:2 sparsistency:5 ensicaen:2 saturated:1 neuve:2 extreme:2 light:1 activated:1 primal:4 regularizers:3 belgian:1 orthogonal:2 necessitated:1 indexed:2 institut:1 incomplete:1 re:1 theoretical:6 minimal:3 instance:6 column:2 fenchel:1 introducing:1 deviation:1 subset:1 entry:4 uniform:3 usefulness:1 peyr:4 too:2 perturbed:1 varies:1 corrupted:2 dossal:1 siam:2 invertible:5 together:1 satisfied:7 hzi:1 zhao:1 supp:17 account:1 de:3 vaiter:2 rockafellar:2 satisfy:1 jc:1 explicitly:3 depends:2 later:1 picked:1 closed:2 sup:1 characterizes:1 start:2 recover:3 red:1 parallel:2 vin:1 identifiability:1 contribution:3 ass:1 who:1 ensemble:1 correspond:4 identify:1 yellow:1 thumb:1 graph_dcp:1 hammond:1 randomness:1 minj:1 definition:4 involved:1 tucker:1 associated:4 proof:15 proved:4 popular:1 exemplify:1 color:4 knowledge:1 actually:3 back:1 methodology:1 disciplined:1 formulation:2 done:1 though:4 prelude:1 generality:1 implicit:1 horizontal:1 lack:1 defines:1 impulse:2 
scientific:1 greyc:1 hal:2 verify:2 multiplier:5 regularization:4 equality:2 read:3 moore:1 uniquely:2 noted:2 generalized:1 stress:1 l1:2 reasoning:1 ranging:1 variational:1 novel:2 recently:1 overview:1 jp:3 volume:1 extend:2 numerically:3 measurement:2 smoothness:2 trivially:1 consistency:4 grid:1 erc:1 dot:1 funded:1 stable:8 entail:1 deduce:1 add:1 base:1 contaminating:1 isometry:1 showed:1 recent:1 diving:1 rieure:1 scenario:1 claimed:1 verlag:1 inequality:3 minimum:2 injectivity:5 additional:1 somewhat:1 fortunately:1 redundant:1 signal:6 full:2 multiple:1 smooth:17 exceeds:1 technical:1 characterized:1 bach:2 long:3 e1:1 y:2 impact:1 involving:1 regression:3 noiseless:2 vision:2 arxiv:2 kernel:1 c1:10 addition:1 source:1 crucial:1 appropriately:1 unlike:1 strict:1 comment:2 subject:1 deficient:1 december:1 call:1 noting:4 enough:5 easy:1 nikolova:1 xj:2 affect:1 zi:3 forthcoming:1 lasso:7 opposite:1 perfectly:2 whether:1 triviality:1 fuchs:2 remark:3 matlab:2 gabriel:2 proportionally:2 detailed:2 involve:1 se:20 verifiable:1 generate:2 http:2 exist:1 canonical:1 oversampling:1 dotted:1 sign:33 jacques:4 arising:1 tibshirani:2 blue:1 write:3 group:2 nevertheless:1 threshold:2 traced:1 drawn:2 kept:1 v1:15 imaging:2 graph:1 cone:1 inverse:3 extends:1 almost:1 kuppinger:1 electronic:1 cvx:4 summarizes:1 submatrix:1 bit:1 def:23 bound:4 distinguish:1 display:1 duval:1 replaces:1 quadratic:1 identifiable:9 adapted:1 precisely:1 constraint:7 software:1 sake:1 hy:1 min:4 optimality:3 relatively:1 ball:1 march:1 kd:1 conjugate:1 smaller:3 slightly:1 psp:2 contradicts:2 y0:2 wi:1 remain:1 surjectivity:1 happens:1 alike:1 outlier:3 restricted:8 intuitively:1 equation:1 remains:1 previously:1 nonempty:1 know:3 pursuit:2 promoting:1 observe:5 away:1 appropriate:2 anymore:1 slower:1 existence:2 denotes:1 ensure:3 extremality:1 coined:1 restrictive:1 uj:2 prof:1 surjective:4 society:1 classical:2 especially:1 implied:1 sweep:2 move:1 added:1 question:1 quantity:1 already:1 strategy:2 spike:1 usual:2 unclear:1 exhibit:1 minx:1 subspace:6 distance:1 simulated:2 gracefully:1 unstable:1 assuming:1 minn:1 index:5 mini:2 ratio:1 minimizing:1 relationship:2 equivalently:1 setup:2 sigma:1 trace:1 design:1 implementation:1 unknown:1 vertical:1 observation:3 situation:2 extended:8 defining:2 communication:1 peyre:1 rn:4 sharp:5 arbitrary:3 intensity:1 introduced:1 pair:3 paris:1 louvain:4 distinction:1 established:1 barcelona:1 nip:1 address:1 beyond:2 able:4 below:1 pattern:1 usually:1 regime:5 sparsity:5 saturation:1 normandie:1 program:2 including:1 royal:1 wainwright:1 difficulty:1 indicator:1 uclouvain:2 axis:1 ready:1 concludes:1 prior:1 literature:7 understanding:1 tangent:3 lacking:1 loss:27 par:3 highlight:1 lecture:1 interesting:2 limitation:2 degree:1 sufficient:4 article:1 editor:1 pi:2 row:2 course:1 penalized:1 supported:7 transpose:1 soon:3 idj:2 catholique:2 sparse:13 distributed:1 boundary:1 depth:1 curve:1 valid:2 stand:1 transition:1 author:1 cope:1 transaction:5 sj:6 excess:4 emphasize:2 kkt:1 reveals:1 sat:6 xi:2 continuous:1 why:1 table:2 learn:1 reasonably:1 robust:2 rearranging:1 european:1 constructing:1 vj:3 sp:3 main:8 big:1 noise:23 cvxr:1 dma:1 allowed:1 fig:4 icteam:2 referred:1 en:1 pope:1 precision:2 sub:2 pv:1 explicit:1 obeying:1 candidate:1 ix:1 theorem:15 formula:1 specific:4 showing:1 sensing:10 r2:1 deconvolution:1 exists:1 quantization:1 chen:1 likely:2 penrose:1 lagrange:5 expressed:1 saturating:1 springer:1 corresponds:4 cite:1 identity:1 donoho:1 towards:1 jf:1 
feasible:1 determined:1 uniformly:1 lemma:20 bellow:1 total:2 called:3 inj:5 partly:3 duality:1 la:2 support:44 dx0:5 arises:1 brevity:1 d1:3 |
6,146 | 656 | Efficient Pattern Recognition Using a
New Transformation Distance
Patrice Simard
Yann Le Cun
John Denker
AT&T Bell Laboratories, 101 Crawford Corner Road, Holmdel, NJ 07724
Abstract
Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product ... ), which are not particularly meaningful on
pattern vectors. More complex, better suited distance measures are
often expensive and rather ad-hoc (elastic matching, deformable
templates). We propose a new distance measure which (a) can be
made locally invariant to any set of transformations of the input
and (b) can be computed efficiently. We tested the method on
large handwritten character databases provided by the Post Office
and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently
outperformed all other systems tested on the same databases.
1
INTRODUCTION
Distance-based classification algorithms such as radial basis functions or K-nearest
neighbors often rely on simple distances (such as Euclidean distance, Hamming
distance, etc.). As a result, they suffer from a very high sensitivity to simple
transformations of the input patterns that should leave the classification unchanged
(e.g. translation or scaling for 2D images). This is illustrated in Fig. 1 where an
unlabeled image of a "9" must be classified by finding the closest prototype image
out of two images representing respectively a "9" and a "4". According to the
Euclidean distance (sum of the squares of the pixel to pixel differences), the "4"
is closer even though the "9" is much more similar once it has been rotated and
thickened. The result is an incorrect classification. The key idea is to construct a
distance measure which is invariant with respect to some chosen transformations
such as translation, rotation and others. The special case of linear transformations
has been well studied in statistics and is sometimes referred to as Procrustes analysis
50
Efficient Pattern Recognition Using a New Transformation Distance
Pattern to
be classified
prototype A
Prototype B
Figure 1: What is a good similarity measure? According to the Euclidean distance
the pattern to be classified is more similar to prototype B. A better distance measure
would find that prototype A is closer because it differs mainly by a rotation and a
thickness transformation, two transformations which should leave the classification
invariant.
(Sibson, 1978). It has been applied to on-line character recognition (Sinden and
Wilfong, 1992).
This paper considers the more general case of non-linear transformations such as
geometric transformations of gray-level images. Remember that even a simple
image translation corresponds to a highly non-linear transformation in the highdimensional pixel space l . In previous work (Simard et al., 1992b), we showed how
a neural network could be trained to be invariant with respect to selected transformations of the input. VVe now apply similar ideas to distance-based classifiers.
''''hen a pattern P is transformed (e.g. rotated) with a transformation s that depends
on one parameter a (e.g. the angle of the rotation), the set of all the transformed
patterns Sp = {x I 35 such that x = s(5, P)} is a one-dimensional curve in the
vector space of the inputs (see Fig. 2).
In certain cases, such as rotations of
digitized images, this curve must be made continuous using smoothing techniques
(see (Simard et al., 1992b)). When the set of transformations is parameterized by
n parameters ai (rotation, translation, scaling, etc.), Sp is a manifold of at most n
dimensions. The patterns in Sp that are obtained through small transformations
of P, i.e. the part of Sp that is close to P, can be approximated by a plane
tangent to the manifold Sp at the point P. Small transformations of P can be
obtained by adding to P a linear combination of vectors that span the tangent
plane (tangent vectors). The images at the bottom of Fig. 2 were obtained by that
procedure. Tangent vectors for a transformation s can easily be computed by finite
difference (evaluating os(a, P)/oa); more details can be found in (Simard et al.,
1992b; Simard et al., 1992a).
As we mentioned earlier, the Euclidean distance between two patterns P and E
is in general not appropriate because it is sensitive to irrelevant transformations
of P and of E. In contrast, the distance V(E, P) defined to be the minimal distance between the two manifolds Sp and SE is truly invariant with respect to the
transformation used to generate Sp and SE. Unfortunately, these manifolds have
no analytic expression in general, and finding the distance between them is a hard
optimization problem with multiple local minima. Besides, t.rue invariance is not
1 If the ima.ge of a "3" is translated vertica.lly upward, the middle top pixel will oscillate
from black to white three times.
51
52
Simard, Cun, and Denker
[3]
? -15 ?
True rotations of P
-7.5-
p
.7.5 Transformations at p
1/
II
_??_..........._??.Y
a=-O.2
a=-Q.l
p
a=O.l
Pixel space
a=O.2
p
T. V.
Figure 2: Top: Small rotations of an original digitized image of the digit "3".
Middle: Representation of the effect of the rotation in pixel space (if there were
only 3 pixels). Bottom: Images obtained by moving along the tangent to the
transformation curve for the same original digitized image P by adding various
amounts (a) of the tangent vector (T.V.).
necessarily desirable since a rotation of a "6" into a "9" does not preserve the correct
classification.
Our approach consists of approximating the non-linear manifolds S_P and S_E by
linear surfaces and computing the distance D(E, P) defined to be the minimum
distance between them. This solves three problems at once: 1) linear manifolds
have simple analytical expressions which can be easily computed and stored, 2)
finding the minimum distance between linear manifolds is a simple least-squares
problem which can be solved efficiently and, 3) this distance is locally invariant but
not globally invariant. Thus the distance between a "6" and a slightly rotated "6"
is small but the distance between a "6" and a "9" is large. The different distances
between P and E are represented schematically in Fig. 3.
The figure represents two patterns P and E in 3-dimensional space. The manifolds
generated by s are represented by one-dimensional curves going through E and P
respectively. The linear approximations to the manifolds are represented by lines
tangent to the curves at E and P. These lines do not intersect in 3 dimensions and
the shortest distance between them (uniquely defined) is D(E, P). The distance
between the two non-linear transformation curves V(E, P) is also shown in the
figure.
An efficient implementation of the tangent distance D(E, P) will be given in the
Figure 3: Illustration of the Euclidean distance and the tangent distance between
P and E
next section. Although the tangent distance can be applied to any kind of patterns represented as vectors, we have concentrated our efforts on applications to
image recognition. Comparison of tangent distance with the best known competing
method will be described. Finally we will discuss possible variations on the tangent
distance and how it can be generalized to problems other than pattern recognition.
2 IMPLEMENTATION
In this section we describe formally the computation of the tangent distance. Let
the function s, which maps u, a to s(a, u), be a differentiable transformation of the
input space, depending on a vector a of parameters and verifying s(0, u) = u.
If u is a 2-dimensional image for instance, s(a, u) could be a rotation of u by
the angle a. If we are interested in all transformations of images which conserve
distances (isometry), s(a, u) would be a rotation by a_r followed by a translation
by a_x, a_y of the image u. In this case a = (a_r, a_x, a_y) is a vector of parameters of
dimension 3. In general, a = (a_0, ..., a_{m-1}) is of dimension m.
Since s is differentiable, the set S_u = {x | ∃a for which x = s(a, u)} is a differentiable manifold which can be approximated to the first order by a hyperplane T_u.
This hyperplane is tangent to S_u at u and is generated by the columns of the matrix

L_u = ∂s(a, u)/∂a |_{a=0} = [∂s(a, u)/∂a_0, ..., ∂s(a, u)/∂a_{m-1}] |_{a=0}   (1)
which are vectors tangent to the manifold. If E and P are two patterns to be
compared, the respective tangent planes T_E and T_P can be used to define a new
distance D between these two patterns. The tangent distance D(E, P) between E
and P is defined by

D(E, P) = min_{x ∈ T_E, y ∈ T_P} ||x - y||²   (2)
The equations of the tangent planes T_E and T_P are given by:

E'(a_E) = E + L_E a_E   (3)
P'(a_P) = P + L_P a_P   (4)
where L_E and L_P are the matrices containing the tangent vectors (see Eq. 1) and
the vectors a_E and a_P are the coordinates of E' and P' in the corresponding tangent
planes. The quantities L_E and L_P are attributes of the patterns, so in many cases
they can be precomputed and stored.
Computing the tangent distance

D(E, P) = min_{a_E, a_P} ||E'(a_E) - P'(a_P)||²   (5)

amounts to solving a linear least-squares problem. The optimality condition is that
the partial derivatives of D(E, P) with respect to a_P and a_E should be zero:

∂D(E, P)/∂a_E = 2 (E'(a_E) - P'(a_P))^T L_E = 0   (6)
∂D(E, P)/∂a_P = 2 (P'(a_P) - E'(a_E))^T L_P = 0   (7)

Substituting E' and P' by their expressions yields the following linear system of
equations, which we must solve for a_P and a_E:

L_P^T (E - P - L_P a_P + L_E a_E) = 0   (8)
L_E^T (E - P - L_P a_P + L_E a_E) = 0   (9)
The solution of this system is

(L_PE L_EE^{-1} L_E^T - L_P^T)(E - P) = (L_PE L_EE^{-1} L_EP - L_PP) a_P   (10)
(L_EP L_PP^{-1} L_P^T - L_E^T)(E - P) = (L_EE - L_EP L_PP^{-1} L_PE) a_E   (11)

where L_EE = L_E^T L_E, L_PE = L_P^T L_E, L_EP = L_E^T L_P and L_PP = L_P^T L_P. LU
decompositions of L_EE and L_PP can be precomputed. The most expensive part in
solving this system is evaluating L_EP (L_PE can be obtained by transposing L_EP).
It requires m_E x m_P dot products, where m_E is the number of tangent vectors for E
and m_P is the number of tangent vectors for P. Once L_EP has been computed, a_P
and a_E can be computed by solving two (small) linear systems of respectively m_E and
m_P equations. The tangent distance is obtained by computing ||E'(a_E) - P'(a_P)||
using the values of a_P and a_E in equations 3 and 4. If n is the length of vector E (or
P), the algorithm described above requires roughly n(m_E+1)(m_P+1) + 3(m_E³ + m_P³)
multiply-adds. Approximations to the tangent distance can be computed more
efficiently.
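Equivalently, equations (5)-(9) amount to a single linear least-squares problem in the stacked coefficients (a_E, a_P). The sketch below is an assumed but mathematically equivalent formulation that uses a generic least-squares solver in place of the precomputed LU decompositions.

import numpy as np

def tangent_distance(E, P, LE, LP):
    # E, P: flattened patterns of length n; LE (n x mE), LP (n x mP): tangent vectors.
    # Minimizes ||(E - P) + LE @ aE - LP @ aP||^2 over (aE, aP), cf. Eqs. (5)-(9).
    A = np.hstack([LE, -LP])
    coef, *_ = np.linalg.lstsq(A, -(E - P), rcond=None)
    aE, aP = coef[:LE.shape[1]], coef[LE.shape[1]:]
    return np.linalg.norm((E - P) + LE @ aE - LP @ aP)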
3 RESULTS
Before giving the results of handwritten digit recognition experiments, we would
like to demonstrate the property of "local invariance" of tangent distance. A 16 by
16 pixel image similar to the "3" in Fig. 2 was translated by various amounts. The
tangent distance (using the tangent vector corresponding to horizontal translations)
and the Euclidean distance between the original image and its translated version
were measured as a function of the size k (in pixels) of the translation. The result
is plotted in Fig. 4. It is clear that the Euclidean Distance starts increasing linearly
with k while the tangent distance remains very small for translations as large as
two pixels. This indicates that, while Euclidean Distance is not invariant to translation, tangent distance is locally invariant. The extent of the invariance can be
Figure 4: Euclidean and tangent distances between a 16x16 handwritten digit image
and its translated version as a function of the amount of translation measured in
pixels.
increased by smoothing the original image, but significant features may be blurred
away, leading to confusion errors. The figure is not symmetric for large translations
because the translated image is truncated to the 16 by 16 pixel field of the original
image. In the following experiments, smoothing was done by convolution with a
Gaussian of standard deviation σ = 0.75. This value, which was estimated visually,
turned out to be nearly optimal (but not critical).
3.1 Handwritten Digit Recognition
Experiments were conducted to evaluate the performance of tangent distance for
handwritten digit recognition. An interesting characteristic of digit images is that
we can readily identify a set of local transformations which do not affect the identity
of the character, while covering a large portion of the set of possible instances of the
character. Seven such image transformations were identified: X and Y translations,
rotation, scaling, two hyperbolic transformations (which can generate shearing and
squeezing), and line thickening or thinning. The first six transformations were
chosen to span the set of all possible linear coordinate transforms in the image
plane (nevertheless, they correspond to highly non-linear transforms in pixel space).
Additional transformations have been tried with less success.
The simplest possible use of tangent distance is in a Nearest Neighbor classifier. A
set of prototypes is selected from a training set, and stored in memory. When a
test pattern is to be classified, the K nearest prototypes (in terms of tangent distance) are found, and the pattern is given the class that has the majority among the
neighbors. In our applications, the size of the prototype set is in the neighborhood
of 10,000. In principle, classifying a pattern would require computing 10,000 tangent distances, leading to excessive classification times, despite the efficiency of the
tangent distance computation. Fortunately, two patterns that are very far apart in
terms of Euclidean Distance are likely to be far apart in terms of tangent distance.
Therefore we can use Euclidean distance as a "prefilter" , and eliminate prototypes
that are unlikely to be among the nearest neighbors. We used the following 4-step
classification procedure: 1) the Euclidean distance is computed between the test
pattern and all the prototypes, 2) The closest 100 prototypes are selected, 3) the
tangent distance between these 100 prototypes and the test pattern is computed
Figure 5: Comparison of the error rate of tangent nearest neighbors and other
methods on two handwritten digit databases
and 4) the most represented label among the K closest prototypes is output. This
procedure is two orders of magnitude faster than computing all 10,000 tangent
distances, and yields the same performance.
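A minimal sketch of this two-stage rule follows; tangent_distance is the helper sketched in the implementation section, and the remaining names are illustrative.

import numpy as np

def classify(x, prototypes, labels, tangents, Lx, k=3, n_prefilter=100):
    # 1) Euclidean distance between the test pattern and all prototypes.
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    # 2) Keep the closest 100 prototypes.
    cand = np.argsort(d2)[:n_prefilter]
    # 3) Tangent distance between these prototypes and the test pattern.
    td = [tangent_distance(x, prototypes[i], Lx, tangents[i]) for i in cand]
    # 4) Majority vote among the k closest prototypes.
    nearest = cand[np.argsort(td)[:k]]
    vals, counts = np.unique(labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]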
US Postal Service database: In the first experiment, the database consisted of
16 by 16 pixel size-normalized images of handwritten digits, coming from US mail
envelopes. The entire training set of 9709 examples was used as the prototype
set. The test set contained 2007 patterns. The best performance was obtained with
the "one nearest neighbor" rule. The results are plotted in Fig. 5. The error rate
of the method is 2.6%. Two members of our group labeled the test set by hand
with an error rate of 2.5% (using one of their labelings as the truth to test the other
also yielded 2.5% error rate). This is a good indicator of the level of difficulty of
this task 2 . The performance of our best neural network (Le Cun et al., 1990) was
3.3%. The performance of one nearest neighbor with the Euclidean distance was
5.9%. These results show that tangent distance performs substantially better than
both standard K-nearest neighbor and neural networks.
NIST database: The second experiment was a competition organized by the National Institute of Standards and Technology. The object of the competition was
to classify a test set of 59,000 handwritten digits, given a training set of 223,000
patterns. A total of 45 algorithms were submitted by 26 companies from 7 different countries. Since the training set was so big, a very simple procedure was used
to select about 12,000 patterns as prototypes. The procedure consists of creating
a new database (empty at the beginning), and classifying each pattern of the large
database using the new database as a prototype set. Each time an error is made,
the pattern is added to the new database. More than one pass may have to be made
before the new database is stable. Since this filtering process would take too long
with 223,000 prototypes, we split the large database into 22 smaller databases of
10,000 patterns each, filtered those (to about 550 patterns) and concatenated the
result, yielding a database of roughly 12,000 patterns. This procedure has many
drawbacks, and in particular, it is very good at picking up mislabeled characters
in the training set. To counteract this unfortunate effect, a 3 nearest neighbors
procedure was used with tangent distance. The organizers decided to collect the
2 This is an extremely difficult test set. Procedures that achieve less than 0.5% error on
other handwritten digit tasks barely achieve less than 4% on this one.
training set and the test set among two very different populations (census bureau
workers for the training set, high-school students for the test set); we therefore report results on the official NIST test set (named "hard test set"), and on a subset
of the official training set, which we kept aside for test purposes (the "easy test
set"). The results are shown in Fig. 5. The performance is much worse on the
hard test set since the distribution was very different from that of the training set.
Out of the 25 participants who used the NIST training database, tangent distance
finished first. The overall winner did not use the training set provided by NIST (he
used a much larger proprietary training set), and therefore was not affected by the
different distributions in the training set and test set.
4
DISCUSSION
The tangent distance algorithm described in the implementation section can be
improved/adjusted in at least four different ways: 1) approximating the tangent
distance for better speed, 2) modifying the tangent distance itself, 3) changing the
set of transformations/tangent vectors, and 4) using the tangent distance with classification algorithms other than K-nearest neighbors, perhaps in combination, to
minimize the number of prototypes. We will discuss each of these aspects in turn.
Approximation: The distance between two hyperplanes T_E and T_P going through
P and E can be approximated by computing the projection P_E(P) of P onto T_E
and the projection P_P(E) of E onto T_P. The distance ||P_E(P) - P_P(E)|| can be computed in
O(n(m_E + m_P)) multiply-adds and is a fairly good approximation of D(E, P).
This approximation can be improved at very low cost by computing the closest
points between the lines defined by (E, P_E(P)) and (P, P_P(E)). This approximation
was used with no loss of performance to reduce the number of computed tangent
distances from 100 to 20 (this involves an additional "prefilter"). In the case of
images, another time-saving idea is to compute tangent distance on progressively
smaller sets of progressively higher resolution images.
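A sketch of the projection-based approximation is given below; it solves the two small projection problems with a generic least-squares call, which is an assumption; with precomputed orthonormal tangent bases the same projections cost only the O(n(m_E + m_P)) multiply-adds stated above.

import numpy as np

def projected_distance(E, P, LE, LP):
    # Project P onto the tangent plane at E, and E onto the plane at P,
    # then return the distance between the two projections.
    aE, *_ = np.linalg.lstsq(LE, P - E, rcond=None)
    aP, *_ = np.linalg.lstsq(LP, E - P, rcond=None)
    return np.linalg.norm((E + LE @ aE) - (P + LP @ aP))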
Changing the distance: One may worry that the tangent planes of E and P
may be parallel and be very close in a very distant region (a bad side effect of the
linear approximation). This effect can be limited by imposing constraints of the
form ||a_E|| < K_E and ||a_P|| < K_P. This constraint was implemented but did not
yield better results. The reason is that tangent planes are mostly orthogonal in
high dimensional space and the norms of a_E and a_P are already small.
The tangent distance can be normalized by dividing it by the norm of the vectors.
This improves the results slightly because it offsets side effects introduced in some
transformations such as scaling. Indeed, if scaling is a transformation of interest,
there is a potential danger of finding the minimum distance between two images
after they have been scaled down to a single point. The linear approximation of
the scaling transformation does not reach this extreme, but still yields a slight
degradation of the performance. The error rate reported on the USPS database can
be improved to 2.4% using this normalization (which was not tried on NIST).
Tangent distance can be viewed as one iteration of a Newton-type algorithm which
finds the points of minimum distance on the true transformation manifolds. The
vectors a_E and a_P are the coordinates of the two closest points in the respective
tangent spaces, but they can also be interpreted for real (non-linear) transformations. If a_E is the amount of the translation tangent vector that must be added
to E to make it as close as possible to P, we can compute the true translation of
image E by a_E pixels. In other words, E'(a_E) and P'(a_P) are projected onto
close points of S_E and S_P. This involves a resampling but can be done efficiently.
Once this new image has been computed, the corresponding tangent vectors can
be computed for this new image and the process can be repeated. Eventually this
will converge to a local minimum in the distance between the two transformation
manifolds of P and E. The tangent distance needs to be normalized for this iteration
process to work.
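For translations only, the iteration can be sketched as follows; the use of scipy.ndimage.shift for the resampling step, the fixed 16 by 16 image size, and the make_tangents helper (e.g., the finite-difference sketch above restricted to the x- and y-translation columns) are illustrative assumptions, and the normalization required for convergence is omitted.

import numpy as np
from scipy.ndimage import shift

def refined_distance(E, P, make_tangents, n_iter=3, side=16):
    # Newton-type refinement along the (x, y)-translation manifolds of E and P.
    # make_tangents returns the x- and y-translation tangent vectors as columns.
    for _ in range(n_iter):
        LE, LP = make_tangents(E), make_tangents(P)  # recompute tangents at the new points
        A = np.hstack([LE, -LP])
        coef, *_ = np.linalg.lstsq(A, -(E - P), rcond=None)
        aE, aP = coef[:LE.shape[1]], coef[LE.shape[1]:]
        # Resample: apply the true translations suggested by the tangent coordinates
        # (shift takes (row, column), i.e. (y, x) offsets).
        E = shift(E.reshape(side, side), (aE[1], aE[0])).ravel()
        P = shift(P.reshape(side, side), (aP[1], aP[0])).ravel()
    return np.linalg.norm(E - P)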
A priori knowledge: The a priori knowledge used for tangent vectors depends
greatly on the application. For character recognition, thickness was one of the
most important transformations, reducing the error rate from 3.3% to 2.6%. Such
a transformation would be meaningless in, say, speech or face recognition. Other
transformations such as local rubber sheet deformations may be interesting for
character recognition. Transformations can be known a priori or learned from the
data.
Other algorithms, reducing the number of prototypes: Tangent distance is
a general method that can be applied to problems other than image recognition,
with classification methods other than K-nearest neighbors. Many distance-based
classification schemes could be used in conjunction with tangent distance, among
them LVQ (Kohonen, 1984), and radial basis functions. Since all the operators involved in the tangent distance are differentiable, it is possible to compute the partial
derivative of the tangent distance (between an object and a prototype) with respect
to the tangent vectors, or with respect to the prototype. Therefore the tangent
distance operators can be inserted in gradient-descent based adaptive machines (of
which LVQ and RBF are particular cases). The main advantage of learning the
prototypes or the tangent vectors is that fewer prototypes may be needed to reach
the same (or superior) level of performance as, say, regular K-nearest neighbors.
In conclusion, tangent distance can greatly improve many of the distance-based
algorithms. We have used tangent distance in the simple K-nearest neighbor algorithm and outperformed all existing techniques on standard classification tasks.
This surprising success is probably due to the fact that a priori knowledge can be very
effectively expressed in the form of tangent vectors. Fortunately, many algorithms
are based on computing distances and can be adapted to express a priori knowledge
in a similar fashion. Promising candidates include Parzen windows, learning vector
quantization and radial basis functions.
References
Kohonen, T. (1984). Self-Organization and Associative Memory. In Springer Series in
Information Sciences, volume 8. Springer-Verlag.
Le Cun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., and
Jackel, L. D. (1990). Handwritten digit recognition with a back-propagation network. In Touretzky, D., editor, Advances in Neural Information Processing Systems
2 (NIPS*89), Denver, CO. Morgan Kaufmann.
Sibson, R. (1978). Studies in the Robustness of Multidimensional Scaling: Procrustes
Statistics. J. R. Statist. Soc., 40:234-238.
Simard, P. Y., LeCun, Y., Denker, J., and Victorri, B. (1992a). An Efficient Method for
Learning Invariances in Adaptive Classifiers. In International Conference on Pattern
Recognition, volume 2, pages 651-655, The Hague, Netherlands.
Simard, P. Y., Victorri, B., LeCun, Y., and Denker, J. (1992b). Tangent Prop - A formalism for specifying selected invariances in an adaptive network. In Neural Information
Processing Systems, volume 4, pages 895-903, San Mateo, CA.
Sinden, F. and Wilfong, G. (1992). On-line Recognition of Handwritten Symbols. Technical Report 11228-910930-02IM, AT&T Bell Laboratories.
6,147 | 6,560 | Dual Space Gradient Descent for Online Learning
Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
Centre for Pattern Recognition and Data Analytics
Deakin University, Australia
{trung.l, tu.nguyen, v.nguyen, dinh.phung}@deakin.edu.au
Abstract
One crucial goal in kernel online learning is to bound the model size. Common
approaches employ budget maintenance procedures to restrict the model sizes using
removal, projection, or merging strategies. Although projection and merging, in the
literature, are known to be the most effective strategies, they demand extensive computation whilst removal strategy fails to retain information of the removed vectors.
An alternative way to address the model size problem is to apply random features
to approximate the kernel function. This allows the model to be maintained directly
in the random feature space, hence effectively resolving the curse of kernelization.
However, this approach still suffers from a serious shortcoming as it needs to use a
high dimensional random feature space to achieve a sufficiently accurate kernel
approximation. Consequently, it leads to a significant increase in the computational
cost. To address all of these aforementioned challenges, we present in this paper
the Dual Space Gradient Descent (DualSGD), a novel framework that utilizes
random features as an auxiliary space to maintain information from data points
removed during budget maintenance. Consequently, our approach permits the
budget to be maintained in a simple, direct and elegant way while simultaneously
mitigating the impact of the dimensionality issue on learning performance. We
further provide convergence analysis and extensively conduct experiments on five
real-world datasets to demonstrate the predictive performance and scalability of
our proposed method in comparison with the state-of-the-art baselines.
1 Introduction
Online learning represents a family of effective and scalable learning algorithms for incrementally
building a predictive model from a sequence of data samples [1]. Unlike the conventional learning
algorithms, which usually require a costly procedure to retrain the entire dataset when a new instance
arrives [2], the goal of online learning is to utilize new incoming instances to improve the model
given knowledge of the correct answers to previously processed data. The seminal line of work in
online learning, referred to as linear online learning [3, 4], aims to learn a linear predictor in the
input space. The key limitation of this approach lies in its oversimplified assumption in using a linear
hyperplane to represent data that could possibly possess nonlinear dependency as commonly seen
in many real-world applications. This inspires the work of kernel online learning [5, 6] that uses a
linear model in the feature space to capture the nonlinearity of input data.
However, the kernel online learning approach suffers from the so-called curse of kernelization [7],
that is, the model size linearly grows with the data size accumulated over time. A notable approach
to address this issue is to use a budget [8, 9, 7, 10, 11]. The work in [7] leveraged the budgeted
approach with stochastic gradient descent (SGD) [12, 13], wherein SGD drives the learning and a
budget maintenance procedure (e.g., removal, projection, or merging) is employed to
maintain the model size. Although the projection and merging were shown to be effective [7], their
associated computational costs render them impractical for large-scale datasets. An alternative way
to address the curse of kernelization is to use random features [14] to approximate a kernel function
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain
Acknowledgment: This work is partially supported by the Australian Research Council under the Discovery
Project DP160109394.
[15, 16]. The work in [16] proposed to transform data from the input space to the random-feature
space, and then performed SGD in the feature space. However, in order for this approach to achieve
good kernel approximation, excessive number of random features is required, hence could lead to
serious computational issue.
In this paper, we propose the Dual Space Gradient Descent (DualSGD) to address the computational
problem encountered in the projection and merging strategies in the budgeted approach [8, 9, 17, 7]
and the excessive number of random features in the random feature approach [15, 16]. In particular,
the proposed DualSGD utilizes the random-feature space as an auxiliary space to store the information
of the vectors that have been discarded during the budget maintenance process. More specifically, the
DualSGD uses a provision vector in the random-feature space to store the information of all vectors
being removed. This allows us to propose a novel budget maintenance strategy, named k-merging,
which unifies the removal, projection, and merging strategies.
Figure 1: Comparison of DualSGD with BSGD-M and FOGD on the cod-rna dataset. Left: DualSGD
vs. BSGD-M when B is varied. Right: DualSGD vs. FOGD when D is varied.
Our proposed DualSGD advances the existing works in the budgeted and random-feature approaches
in twofold. Firstly, since the goal of using random features is to approximate the original feature
space as much as possible, the proposed k-merging of DualSGD can preserve the information of
the removed vectors more effectively than the existing budget maintenance strategies. For example
comparing with the budgeted SGD using merging strategy (BSGD-M) [7], as shown in Fig. 1 (left),
the DualSGD with a small budget size (B = 5) can attain a significantly better mistake rate than that of
BSGD-M with an 80-fold larger budget size (B = 400). Secondly, since the core part of the model
(i.e., the vectors in the support set) is stored in the feature space and the auxiliary part (i.e., the
removed vectors) is stored in the random-feature space, our DualSGD can significantly reduce the
influence of the number of random features to the learning performance. For example comparing
with the Fourier Online Gradient Descent (FOGD) [16], as shown in Fig. 1 (right), the DualSGD
with a small number of random features (D = 20) can achieve a comparable mistake rate to that of
FOGD with a 40-fold larger number of random features (D = 800) and the DualSGD with a medium
value of the number of random features (D = 100) achieves a predictive performance that would not
be reached by FOGD (a detailed comparison of the computational complexities of our DualSGD and
FOGD can be found in Section 3 of the supplementary material).
To provide theoretical foundation for DualSGD, we develop an extensive convergence analysis for a
wide spectrum of loss functions including Hinge, Logistic, and smooth Hinge [18] for the classification
task, and ℓ1 and ε-insensitive for regression. We conduct extensive experiments on five real-world datasets
to compare the proposed method with the state-of-the-art online learning methods. The experimental
results show that our proposed DualSGD achieves the most optimal predictive results in almost all
cases, whilst its execution time is much faster than the baselines.
2 Dual Space Gradient Descent for Online Learning
2.1 Problem Setting
We propose to solve the following optimization problem: min_w J(w), whose objective function is
defined for the online setting as follows:

J(w) ≜ (λ/2) ||w||² + E_{(x,y)∼p_{X,Y}} [l(w, x, y)]   (1)

where x ∈ R^M is the data vector, y the label, p_{X,Y} denotes the joint distribution over X × Y with
the data domain X and label domain Y, l(w, x, y) is a convex loss function with parameters w,
and λ ≥ 0 is a regularization parameter. A kernelization of the loss function introduces a nonlinear
function Φ that maps x from the input space to a feature space. A classic example is the Hinge loss:
l(w, x, y) = max(0, 1 - y w^T Φ(x)).
2.2 The Key Ideas of the Proposed DualSGD
Our key motivations come from the shortcomings of three current budget maintenance strategies:
removal, projection and merging. The removal strategy fails to retain information of the removed
vectors. Although the projection strategy can overcome this problem, it requires a costly procedure
to compute the inverse of a B × B matrix, wherein B is the budget size, with a complexity typically
cubic in B. On the other hand, the merging strategy needs to estimate the preimage of a vector
in the feature space, leading to a significant information loss and requiring extensive computation.
Our aim is to find an approach to simultaneously retain the information of the removed vectors
accurately, and perform budget maintenance efficiently.
To this end, we introduce k-merging, a new budget maintenance approach that unifies the three
aforementioned budget maintenance strategies under the following interpretation. For k = 1, the
proposed k-merging can be seen as a hybrid strategy of removal and projection. For k = 2, it
can be regarded as the standard merging. Moreover, our proposed k-merging strategy enables an
arbitrary number of vectors to be conveniently merged. Technically, we employ a vector in the
random-feature space [14], called the provision vector w̃, to retain the information of all removed vectors.
When k-merging is invoked, the k most redundant vectors are sorted out, e.g., x_{i_1}, ..., x_{i_k}, and we
increment w̃ as w̃ = w̃ + Σ_{j=1}^k α_{i_j} z(x_{i_j}), where α_{i_j} is the coefficient of the support vector associated
with x_{i_j}, and z(x_{i_j}) denotes the mapping of x_{i_j} from the input space to the random feature space.
The advantage of using the random-feature space as an auxiliary space is twofold: 1) the information
loss is negligible since the random-feature space is designed to approximate the original feature space,
and 2) the operations in the budget maintenance strategy are direct and economic.
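For the Gaussian kernel K(x, x') = exp(-γ ||x - x'||²), the random feature map z(·) of [14] can be sketched as below; the cosine/sine variant and the class interface are assumptions, since the construction admits several standard variants.

import numpy as np

class RandomFourierFeatures:
    # z(x)^T z(x') approximates exp(-gamma * ||x - x'||^2) as D grows.
    def __init__(self, d_in, D, gamma, seed=0):
        rng = np.random.RandomState(seed)
        # Frequencies sampled from the Fourier transform of the Gaussian kernel.
        self.W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d_in, D))
    def transform(self, x):
        proj = x @ self.W
        return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(self.W.shape[1])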
Algorithm 1 The learning of Dual Space Gradient Descent.
Input: Kernel K, regularization parameter λ, budget B, random feature dimension D.
1: ŵ_1 = 0; w̃_1 = 0; b = 0; I_0 = ∅
2: for t = 1, ..., T do
3:   (x_t, y_t) ∼ p_{X,Y}
4:   ŵ_{t+1} = ((t-1)/t) ŵ_t; w̃_{t+1} = ((t-1)/t) w̃_t
5:   if ∇_o l(y_t, o_t^h) ≠ 0 then
6:     I_t = I_{t-1} ∪ {t}
7:     ŵ_{t+1} = ŵ_{t+1} - (1/(λt)) ∇_o l(y_t, o_t^h) Φ(x_t)
8:     if |I_t| > B then
9:       invoke k-merging(I_t, ŵ_{t+1}, w̃_{t+1})
10:    end if
11:  end if
12: end for
Output: w^h_{T+1} = ŵ_{T+1} ⊕ w̃_{T+1}.
2.3 The Proposed Algorithm
In our proposed DualSGD, the model is distributed into two spaces: the feature and random-feature
spaces, with a hybrid vector w_t^h defined as w_t^h ≜ ŵ_t ⊕ w̃_t. Here we note that the kernel part ŵ_t and
the provision part w̃_t lie in two different spaces, thus for convenience we define an abstract operator
⊕ to allow the addition between them, which implies that the decision function crucially depends on
both the kernel and provision parts:

⟨w_t^h, x⟩ ≜ ⟨ŵ_t ⊕ w̃_t, x⟩ ≜ ŵ_t^T Φ(x) + w̃_t^T z(x)

We employ one vector w̃_t in the random-feature space to preserve the information of the discarded vectors,
that is, those outside I_t, the set of indices of all support vectors in ŵ_t. When an instance arrives and
the model size exceeds the budget B, the budget maintenance procedure k-merging(I_t, ŵ_{t+1}, w̃_{t+1})
is invoked to adjust ŵ_{t+1} and w̃_{t+1} accordingly. Our proposed DualSGD is summarized in Algorithm 1, where we note that l(y, o) is another representation of the convex loss function w.r.t. the variable
o (e.g., the Hinge loss given by l(y, o) = max(0, 1 - yo)), and o_t^h = ŵ_t^T Φ(x_t) + w̃_t^T z(x_t) is the
hybrid objective value.
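To make the update concrete, here is a minimal sketch of one DualSGD iteration for the Hinge loss; the function and variable names are illustrative, the RandomFourierFeatures map is the sketch given earlier, and the k_merge helper is sketched after Algorithm 2.

import numpy as np

def rbf(x1, x2, gamma):
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def dualsgd_step(t, x, y, support, alphas, w_tilde, rff, lam, gamma, B):
    # Hybrid prediction o_t^h = w_hat^T Phi(x) + w_tilde^T z(x).
    o = sum(a * rbf(s, x, gamma) for s, a in zip(support, alphas)) + w_tilde @ rff.transform(x)
    scale = (t - 1) / t                     # step 4: shrink both parts
    alphas = [a * scale for a in alphas]
    w_tilde = w_tilde * scale
    if y * o < 1:                           # step 5: Hinge loss has a non-zero (sub)gradient
        support.append(x)                   # steps 6-7: add x_t with coefficient y / (lam * t)
        alphas.append(y / (lam * t))
        if len(support) > B:                # steps 8-9: budget exceeded, invoke k-merging
            support, alphas, w_tilde = k_merge(support, alphas, w_tilde, rff, k=1)
    return support, alphas, w_tilde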
2.4 k-merging Budget Maintenance Strategy
Crucial to our proposed DualSGD in Algorithm 1 is the k-merging routine that allows efficient merging
of k arbitrary vectors. We summarize the key steps of k-merging in Algorithm 2. In particular, we
first select the k support vectors whose corresponding coefficients (α_{i_1}, α_{i_2}, ..., α_{i_k}) have the smallest
absolute values (cf. line 1). We then approximate them by z(x_{i_1}), ..., z(x_{i_k}) and merge them by
updating the provision vector as w̃_{t+1} = w̃_{t+1} + Σ_{j=1}^k α_{i_j} z(x_{i_j}) (cf. line 2). Finally, we remove
the chosen vectors from the kernel part ŵ_{t+1} (cf. line 2).
2.5 Convergence Analysis
In this section, we present the convergence analysis for our proposed algorithm. We first prove that
with a high probability f_t^h(x) (i.e., the hybrid decision function, cf. Eq. (3)) is a good approximation of
f_t(x) for all x and t (cf. Theorem 1). Let w* be the optimal solution of the optimization problem
defined in Eq. (1): w* = argmin_w J(w). We then prove that if {w_t}_{t=1}^∞ is constructed as in Eq. (2),
this sequence rapidly converges to w*, i.e., f_t(x) = w_t^T Φ(x) rapidly approaches the optimal decision
function (cf. Theorems 2, 3). Therefore, the decision function f_t^h(x) also rapidly approaches the
optimal decision function. Our analysis can be generalized to the general k-merging strategy, but for
comprehensibility we present the analysis for the 1-merging case (i.e., k = 1).

We assume that the loss function used in the analysis satisfies the condition |∇_o l(y, o)| ≤ A, ∀y, o,
where A is a positive constant. A wide spectrum of loss functions including Hinge, Logistic, smooth
Hinge [18], ℓ1, and ε-insensitive satisfy this condition and hence are appropriate for this convergence
analysis. We further assume that ||Φ(x)|| = K(x, x)^{1/2} = 1, ∀x. Let ζ_t be a binary random
variable which indicates whether the budget maintenance procedure is performed at iteration t
(i.e., the event ∇_o l(y_t, o_t^h) ≠ 0). We assume that if ζ_t = 1, the vector Φ(x_{i_t}) is selected to move to
the random-feature space. Without loss of generality, we assume that i_t = t, since we can arrange the
data instances so as to realize it. We define

g_t^h = λ w_t + ∇_o l(y_t, f_t^h(x_t)) Φ(x_t)  and  w_{t+1} = w_t - η_t g_t^h   (2)

f_t(x) = w_t^T Φ(x) = Σ_{j=1}^t α_j K(x_j, x)

f_t^h(x) = ŵ_t^T Φ(x) + w̃_t^T z(x) = Σ_{j=1}^t α_j (1 - ζ_j) K(x_j, x) + Σ_{j=1}^t α_j ζ_j K̃(x_j, x)   (3)

where K̃(x, x') = z(x)^T z(x') is the approximate kernel induced by the random-feature space, and the
learning rate is η_t = 1/(λt).

Theorem 1 establishes that f_t^h(·) is a good approximation of f_t(x) with a high probability, followed
by Theorem 2 which establishes the bound on the regret.
Algorithm 2 k-merging Budget Maintenance Procedure.
procedure k-merging(I_t, ŵ_{t+1}, w̃_{t+1})
// Assume that ŵ_{t+1} = Σ_{j∈I_t} α_j Φ(x_j)
1: (i_1, ..., i_k) = k-argmin_{j∈I_t} |α_j|; I_t = I_t \ {i_1, ..., i_k}
2: w̃_{t+1} = w̃_{t+1} + Σ_{j=1}^k α_{i_j} z(x_{i_j}); ŵ_{t+1} = ŵ_{t+1} - Σ_{j=1}^k α_{i_j} Φ(x_{i_j})
end procedure
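In the list-based Python setting of the step sketch above, Algorithm 2 can be written as follows; this is an illustrative sketch, not the authors' implementation.

import numpy as np

def k_merge(support, alphas, w_tilde, rff, k=1):
    # Line 1: pick the k support vectors with smallest |alpha|.
    order = np.argsort(np.abs(alphas))[:k]
    # Line 2: fold them into the provision vector, then drop them from the kernel part.
    for i in sorted(order, reverse=True):
        w_tilde = w_tilde + alphas[i] * rff.transform(support[i])
        del support[i]
        del alphas[i]
    return support, alphas, w_tilde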
Theorem 1. With a probability at least 1 - δ = 1 - 2^8 (σ_p A d_X / ε)² exp(-D ε² / (4(M + 2) A²)), where M is
the dimension of the input space, D is the dimension of the random feature space, d_X denotes the diameter of
the compact set X, and the constant σ_p is defined as in [14], we have
i) |f_t(x) - f_t^h(x)| ≤ ε for all t > 0 and x ∈ X.
ii) E[|f_t(x) - f_t^h(x)|] ≤ A λ^{-1} ε Σ_{j=1}^t j^{-1} ρ_j^{1/2}, where ρ_j = p(ζ_j = 1).
Theorem 1 shows that, with a high probability, f_t^h(x) can approximate f_t(x) with ε-precision.
It also indicates that to decrease the gap |f_t(x) - f_t^h(x)| when performing budget maintenance,
we should choose the vectors whose coefficients have the smallest absolute values to move to the
random-feature space.
Theorem 2. The following statement holds for all T:

E[J(w̄_T)] - J(w*) ≤ E[(1/T) Σ_{t=1}^T (J(w_t) - J(w*))] ≤ 8A²(log T + 1)/(λT) + (W/T) Σ_{t=1}^T E[M_t²]^{1/2}

where w̄_T = (1/T) Σ_{t=1}^T w_t, M_t = ∇_o l(y_t, f_t(x_t)) - ∇_o l(y_t, f_t^h(x_t)), and W = 2A(1 + √5 λ^{-1}).
If a smooth loss function is used, we can quantify the gap in more detail: with a high probability,
the gap is negligible, as shown in Theorem 3.

Theorem 3. Assume that l(y, o) is a γ-strongly smooth loss function. With a probability at least
1 - 2^8 (σ_p A d_X / ε)² exp(-D ε² / (4(M + 2) A²)), we have

E[J(w̄_T)] - J(w*) ≤ E[(1/T) Σ_{t=1}^T J(w_t)] - J(w*)
  ≤ 8A²(log T + 1)/(λT) + (W γ ε / T) Σ_{t=1}^T ((Σ_{j=1}^t ρ_j)/t)^{1/2}
  ≤ 8A²(log T + 1)/(λT) + W γ ε

3 Experiments
In this section, we conduct comprehensive experiments to quantitatively evaluate the performance
of our proposed Dual Space Gradient Descent (DualSGD) on binary classification, multiclass classification and regression tasks under online settings. Our main goal is to examine the scalability,
classification and regression capabilities of DualSGDs by directly comparing them with those of
several recent state-of-the-art online learning approaches using a number of real-world datasets with
a wide range of sizes. In what follows, we present the data statistics, experimental setup, results and
our observations.
3.1 Data Statistics and Experimental Setup
We use 5 datasets: ijcnn1, cod-rna, poker, year, and airlines. The datasets were purposely
selected with various sizes in order to clearly expose the differences among the scalable capabilities
of the models. Three of them are large-scale datasets with hundreds of thousands and millions of
data points (year: 515, 345; poker: 1, 025, 010; and airlines: 5, 929, 413), whilst the rest are medium
size databases (ijcnn1: 141, 691 and cod-rna: 331, 152). These datasets can be downloaded from
LIBSVM1 and UCI2 websites, except the airlines which was obtained from American Statistical
Association (ASA3 ). For the airlines dataset, our aim is to predict whether a flight will be delayed or
not under binary classification setting, and how long (in minutes) the flight will be delayed in terms
of departure time under regression setting. A flight is considered delayed if its delay time is above
15 minutes, and non-delayed otherwise. Following the procedure in [19], we extract 8 features for
flights in the year of 2008, and then normalize them into the range [0,1].
For each dataset, we perform 10 runs on each algorithm with different random permutations of the
training data samples. In each run, the model is trained in a single pass through the data. Its prediction
result and time spent are then reported by taking the average together with the standard deviation over
all runs. For comparison, we employ 11 state-of-the-art online kernel learning methods: perceptron
[5], online gradient descent (OGD) [6], randomized budget perceptron (RBP) [9], forgetron [8],
projectron, projectron++ [20], budgeted passive-aggressive simple (BPAS) [17], budgeted SGD using
merging strategy (BSGD-M) [7], bounded OGD (BOGD) [21], Fourier OGD (FOGD) and Nystrom
OGD (NOGD) [16]. Their implementations are published as a part of LIBSVM, BudgetedSVM4 and
LSOKL5 toolboxes. We use a Windows machine with 3.46GHz Xeon processor and 96GB RAM to
conduct our experiments.
1 https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
2 https://archive.ics.uci.edu/ml/datasets.html
3 http://stat-computing.org/dataexpo/2009/
4 http://www.dabi.temple.edu/budgetedsvm/index.html
5 http://lsokl.stevenhoi.com/
3.2 Model Evaluation on the Effect of Hyperparameters
In the first experiment, we investigate the effect of hyperparameters, i.e., budget size B, merging
size k and random feature dimension D (cf. Section 2) on the performance behavior of DualSGD.
Particularly, we conduct an initial analysis to quantitatively evaluate the sensitivity of these hyperparameters and their impact on the predictive accuracy and wall-clock time. This analysis provides an
approach to find the best setting of hyperparameters. Here the DualSGD with Hinge loss is trained
on the cod-rna dataset under the online classification setting.
Figure 2: The effect of k-merging size on the mistake rate and running time (left). The effect of
budget size B and random feature dimension D on the mistake rate (middle) and running time (right).
First we set B = 200, D = 100, and vary k in the range of 1, 2, 10, 20, 50, 100, 150. For each
setting, we run our models and record the average mistake rates and running time as shown in Fig. 2
(left). There is a pattern that the classification error increases for larger k whilst the wall-clock
time decreases. This represents the trade-off between model discriminative performance and model
computational complexity via the number of merging vectors. In this analysis, we can choose k = 20
to balance the performance and computational cost.
Fixing k = 20, we vary B and D in 4 values doubly increasing from 50 to 400 and from 100 to 800,
respectively, to evaluate the prediction performance and execution time. Fig. 2 depicts the average
mistake rates (middle) and running time in seconds (right) as a heat map of these values. These
visualizations indicate that the higher B and D produce better classification results, but hurt the
training speed of the model. We found that increasing the dimension of random feature space from
100 to 800 at B = 50 significantly reduces the mistake rates by 25%, at the same time increases the
wall-clock time by 76%. The same pattern with less effect is observed when increasing the budget
size B from 50 to 400 at D = 100 (mistake rate decreases by 1.5%, time increases by 54%). For a
good trade-off between classification performance and computational cost, we select B = 100 and
D = 200 which achieves fairly comparable classification result and running time.
3.3 Online Classification
We now examine the performances of DualSGDs in the online classification task. We use four
datasets: cod-rna, ijcnn1, poker and airlines (delayed and non-delayed labels). We create two
versions of our approach: DualSGD with Hinge loss (DualSGD-Hinge) and DualSGD with Logistic
loss (DualSGD-Logit). It is worth mentioning that the Hinge loss is not smooth, having an
undefined gradient at the point where the classification confidence yf(x) = 1. Following the subgradient definition, in our experiment, we compute the gradient under the condition that yf(x) < 1,
and set it to 0 otherwise.
Hyperparameters setting. There are a number of different hyperparameters for all methods. Each
method requires a different set of hyperparameters, e.g., the regularization parameters (λ in DualSGD),
the learning rates (η in FOGD and NOGD), and the RBF kernel width (γ in all methods). Thus, for a
fair comparison, these hyperparameters are specified using cross-validation on a subset of data.
In particular, we further partition the training set into 80% for learning and 20% for validation. For large-scale databases, we use only 1% of the dataset, so that the search can finish within an acceptable time budget. The hyperparameters are varied over certain ranges and
selected for the best performance on the validation set. The ranges are given as follows:
C ∈ {2^{-5}, 2^{-3}, ..., 2^{15}}, λ ∈ {2^{-4}/N, 2^{-2}/N, ..., 2^{16}/N}, γ ∈ {2^{-8}, 2^{-4}, 2^{-2}, 2^0, 2^2, 2^4, 2^8}, and
η ∈ {2^{-4}, 2^{-3}, ..., 2^{-1}, 2^1, 2^2, ..., 2^4}, where N is the number of data points. The budget size B,
merging size k and random feature dimension D of DualSGD are selected following the approach
described in Section 3.2. For the budget size B̂ in NOGD and the Pegasos algorithm, and the feature
dimension D̂ in FOGD for each dataset, we use values identical to those used in Section 7.1.1 of [16].
Table 1: Mistake rate (%) and execution time (seconds). The notation [k | B | D | B̂ | D̂] denotes the
merging size k, the budget sizes B and B̂ of DualSGD-based models and other budgeted algorithms,
and the number of random features D and D̂ of DualSGD and FOGD, respectively.

cod-rna [20 | 100 | 200 | 400 | 1,600]
Algorithm        Mistake Rate     Time
Perceptron       9.79 ± 0.04      1,393.56
OGD              7.81 ± 0.03      2,804.01
RBP              26.02 ± 0.39     85.84
Forgetron        28.56 ± 2.22     102.64
Projectron       11.16 ± 3.61     97.38
Projectron++     17.97 ± 15.60    1,799.93
BPAS             11.97 ± 0.09     92.08
BSGD-M           5.33 ± 0.04      184.58
BOGD             38.13 ± 0.11     104.60
FOGD             7.15 ± 0.03      53.45
NOGD             7.83 ± 0.06      105.18
DualSGD-Hinge    4.92 ± 0.25      28.29
DualSGD-Logit    4.83 ± 0.21      31.96

ijcnn1 [20 | 100 | 200 | 1,000 | 4,000]
Algorithm        Mistake Rate     Time
Perceptron       12.85 ± 0.09     727.90
OGD              10.39 ± 0.06     960.44
RBP              15.54 ± 0.21     54.29
Forgetron        16.17 ± 0.26     60.54
Projectron       12.98 ± 0.23     59.37
Projectron++     9.97 ± 0.09      749.70
BPAS             10.68 ± 0.05     55.44
BSGD-M           9.14 ± 0.18      1,562.61
BOGD             10.87 ± 0.18     55.99
FOGD             9.41 ± 0.03      25.93
NOGD             10.43 ± 0.08     59.36
DualSGD-Hinge    8.35 ± 0.20      12.12
DualSGD-Logit    8.82 ± 0.24      13.30

poker [20 | 100 | 200 | 1,000 | 4,000]
Algorithm        Mistake Rate     Time
FOGD             52.28 ± 0.04     928.89
NOGD             44.90 ± 0.16     4,920.33
DualSGD-Hinge    46.73 ± 0.22     139.87
DualSGD-Logit    46.65 ± 0.14     133.50

airlines [20 | 100 | 200 | 1,000 | 4,000]
Algorithm        Mistake Rate     Time
FOGD             20.98 ± 0.01     1,270.75
NOGD             25.56 ± 0.01     3,553.50
DualSGD-Hinge    19.28 ± 0.00     472.21
DualSGD-Logit    19.28 ± 0.00     523.23
Results. Table 1 reports the average classification results and execution time after the methods see
all data samples. Note that for two biggest datasets (poker, airlines) that consist of millions of data
points, we only include the fast algorithms FOGD, NOGD and DualSGDs. The other methods would
exceed the time limit, which we set to two hours, when running on such data as they suffer from
serious computation issue. From these results, we can draw key observations below.
The budgeted online approaches show their effectiveness with substantially faster computation than
the ones without budgets. More specifically, the execution time of our proposed models is several
orders of magnitude (100 times) lower than that of regular online algorithms (e.g., 28.29 seconds
compared with 2, 804 seconds for cod-rna dataset). Moreover, our models are twice as fast as the
recent fast algorithm FOGD for cod-rna and ijcnn1 datasets, and approximately eight and three times
for vast-sized data poker and airlines. This is because the DualSGDs maintain a sparse budget of
support vectors and a low random feature space, whose size and dimensionality are 10 times and 20
times smaller than those of other methods.
Second, in terms of classification, the DualSGD-Hinge and DualSGD-Logit outperform other methods for almost all datasets except the poker data. In particular, the DualSGD-based methods achieve
the best mistake rates 4.83?0.21, 8.35?0.20, 19.28?0.00 for the cod-rna, ijcnn1 and airlines data,
that are, respectively, 32.4%, 11.3%, 8.8% lower than the error rates of the second best models ?
two recent approaches FOGD and NOGD. For poker dataset, our methods obtain fairly comparable
results with that of the NOGD, but still surpass the FOGD with a large margin. The reason is that the
DualSGD uses a dual space: a kernel space containing core support vectors and a random feature
space keeping the projections of the core vectors that are removed from the budget in kernel space.
This would minimize the information loss when the model performs budget maintenance.
Finally, two versions of DualSGDs demonstrate similar discriminative performances and computational complexities wherein the DualSGD-Logit is slightly slower due to the additional exponential
operators. All of these observations validate the effectiveness and efficiency of our proposed technique. Thus, we believe that our approximation machine is a promising technique for building
scalable online kernel learning algorithms for large-scale classification tasks.
3.4 Online Regression
The last experiment addresses the online regression problem to evaluate the capabilities of our
approach with two proposed loss functions: `1 and ?-insensitive losses. Incorporating these loss
functions creates two versions: DualSGD-?, DualSGD-`1 . We use two datasets: year and airlines
(delay minutes), and six baselines: RBP, Forgetron, Projectron, BOGD, FOGD and NOGD.
Table 2: Root mean squared error (RMSE) and execution time (seconds) of 6 baselines and 2 versions
of our DualSGDs. The notation [k | B | D | B̂ | D̂] denotes the same meaning as in Table 1.

year [20 | 100 | 200 | 400 | 1,600]
Algorithm     RMSE           Time
RBP           0.19 ± 0.00    605.42
Forgetron     0.19 ± 0.00    904.09
Projectron    0.14 ± 0.00    605.19
BOGD          0.20 ± 0.00    596.10
FOGD          0.16 ± 0.00    76.70
NOGD          0.14 ± 0.00    607.37
DualSGD-ε     0.13 ± 0.00    48.01
DualSGD-ℓ1    0.12 ± 0.00    47.29

airlines [20 | 100 | 200 | 1,000 | 2,000]
Algorithm     RMSE           Time
RBP           36.51 ± 0.00   3,418.89
Forgetron     36.51 ± 0.00   5,774.47
Projectron    36.14 ± 0.00   3,834.19
BOGD          35.73 ± 0.00   3,058.96
FOGD          53.16 ± 0.01   646.15
NOGD          34.74 ± 0.00   3,324.38
DualSGD-ε     36.20 ± 0.01   457.30
DualSGD-ℓ1    36.20 ± 0.01   443.39
Hyperparameters setting. We adopt the same hyperparameter search procedure as for the online
classification task in Section 3.3. Furthermore, for the budget size B̂ and the feature dimension
D̂ in FOGD, we follow the same strategy used in Section 7.1.1 of [16]. More specifically, these
hyperparameters are separately set for different datasets as reported in Table 2. They are chosen
such that they are roughly proportional to the number of support vectors produced by the batch SVM
algorithm in LIBSVM running on a small subset. The aim is to achieve competitive accuracy using a
relatively larger budget size for tackling more challenging regression tasks.
Results. Table 2 reports the average regression errors and computation costs after the methods see
all data samples. From these results, we can draw some observations below.
Our proposed models enjoy a significant advantage in computational efficiency whilst achieving better
(on year) or competitive (on airlines) regression results compared with the other methods. The
DualSGD, again, secures the best performance in terms of model sparsity. Among the baselines, the
FOGD is the fastest, so its time costs are the ones comparable with those of our methods,
but its regression performance is worse. The remaining algorithms usually obtain better results, but
at the cost of scalability.

Finally, comparing the capabilities of the two DualSGD variants, both models demonstrate similar
regression capabilities and computational complexities, wherein the DualSGD-ℓ1 is slightly faster due
to its simpler operator in computing the gradient. Besides, its regression scores are also lower than or equal
to those of DualSGD-ε. These observations, once again, verify the effectiveness and efficiency of
our proposed techniques. Therefore the DualSGD is also a promising machine for performing online
regression tasks on large-scale datasets.
4 Conclusion
In this paper, we have proposed Dual Space Gradient Descent (DualSGD) that overcomes the
computational problem in the projection and merging strategies in Budgeted SGD (BSGD) and the
excessive number of random features in Fourier Online Gradient Descent (FOGD). More specifically,
we have employed the random features to form an auxiliary space for storing the vectors being
removed during the budget maintenance process. This makes the operations in budget maintenance
simple and convenient. We have further presented the convergence analysis that is appropriate for a
wide spectrum of loss functions. Finally, we have conducted the extensive experiments on several
benchmark datasets to prove the efficiency and accuracy of the proposed method.
References
[1] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386-408, 1958.
[2] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1-27:27, May 2011.
[3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. J. Mach. Learn. Res., 7:551-585, 2006.
[4] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In International Conference on Machine Learning 2008, pages 264-271, 2008.
[5] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Mach. Learn., 37(3):277-296, December 1999.
[6] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52:2165-2176, August 2004.
[7] Z. Wang, K. Crammer, and S. Vucetic. Breaking the curse of kernelization: Budgeted stochastic gradient descent for large-scale SVM training. J. Mach. Learn. Res., 13(1):3103-3131, 2012.
[8] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The forgetron: A kernel-based perceptron on a fixed budget. In Advances in Neural Information Processing Systems, pages 259-266, 2005.
[9] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Tracking the best hyperplane with a simple budget perceptron. Machine Learning, 69(2-3):143-167, 2007.
[10] T. Le, V. Nguyen, T. D. Nguyen, and D. Phung. Nonparametric budgeted stochastic gradient descent. In The 19th International Conference on Artificial Intelligence and Statistics, May 2016.
[11] T. Le, P. Duong, M. Dinh, T. D. Nguyen, V. Nguyen, and D. Phung. Budgeted semi-supervised support vector machine. In The 32nd Conference on Uncertainty in Artificial Intelligence, June 2016.
[12] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951.
[13] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML 2007, pages 807-814, 2007.
[14] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, 2007.
[15] L. Ming, W. Shifeng, and Z. Changshui. On the sample complexity of random Fourier features for online learning: How many random Fourier features do we need? ACM Trans. Knowl. Discov. Data, 8(3):13:1-13:19, June 2014.
[16] J. Lu, S. C. H. Hoi, J. Wang, P. Zhao, and Z.-Y. Liu. Large scale online kernel learning. J. Mach. Learn. Res., 2015.
[17] Z. Wang and S. Vucetic. Online passive-aggressive algorithms on a budget. In AISTATS, volume 9, pages 908-915, 2010.
[18] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1):567-599, 2013.
[19] J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence, pages 282-290, 2013.
[20] F. Orabona, J. Keshet, and B. Caputo. Bounded kernel-based online learning. J. Mach. Learn. Res., 10:2643-2666, December 2009.
[21] P. Zhao, J. Wang, P. Wu, R. Jin, and S. C. H. Hoi. Fast bounded online gradient descent algorithms for scalable kernel-based online learning. CoRR, 2012.
Improved Dropout for Shallow and Deep Learning
Zhe Li¹, Boqing Gong², Tianbao Yang¹
¹The University of Iowa, Iowa City, IA 52245
²University of Central Florida, Orlando, FL 32816
{zhe-li-1,tianbao-yang}@uiowa.edu, [email protected]
Abstract
Dropout has been witnessed with great success in training deep neural networks by
independently zeroing out the outputs of neurons at random. It has also received
a surge of interest for shallow learning, e.g., logistic regression. However, the
independent sampling for dropout could be suboptimal in terms of convergence. In this paper, we propose to use multinomial sampling for dropout, i.e.,
sampling features or neurons according to a multinomial distribution with different
probabilities for different features/neurons. To exhibit the optimal dropout probabilities, we analyze the shallow learning with multinomial dropout and establish
the risk bound for stochastic optimization. By minimizing a sampling dependent
factor in the risk bound, we obtain a distribution-dependent dropout with sampling
probabilities dependent on the second order statistics of the data distribution. To
tackle the issue of evolving distribution of neurons in deep learning, we propose
an efficient adaptive dropout (named evolutional dropout) that computes the sampling probabilities on-the-fly from a mini-batch of examples. Empirical studies on
several benchmark datasets demonstrate that the proposed dropouts achieve not
only much faster convergence but also a smaller testing error than the standard
dropout. For example, on the CIFAR-100 data, the evolutional dropout achieves
relative improvements over 10% on the prediction performance and over 50% on
the convergence speed compared to the standard dropout.
1 Introduction
Dropout has been widely used to avoid overfitting of deep neural networks with a large number of
parameters [9, 16], which usually samples neurons identically and independently at random and sets
their outputs to be zeros. Extensive experiments [4] have shown that dropout can help obtain the
state-of-the-art performance on a range of benchmark data sets. Recently, dropout has also been
found to improve the performance of logistic regression and other single-layer models for natural
language tasks such as document classification and named entity recognition [21].
In this paper, instead of identically and independently at random zeroing out features or neurons, we
propose to use multinomial sampling for dropout, i.e., sampling features or neurons according to
a multinomial distribution with different probabilities for different features/neurons. Intuitively, it
makes more sense to use non-uniform multinomial sampling than identical and independent sampling
for different features/neurons. For example, in shallow learning if input features are centered, we
can drop out features with small variance more frequently or completely allowing the training to
focus on more important features and consequentially enabling faster convergence. To justify the
multinomial sampling for dropout and reveal the optimal sampling probabilities, we conduct a
rigorous analysis on the risk bound of shallow learning by stochastic optimization with multinomial
dropout, and demonstrate that a distribution-dependent dropout leads to a smaller expected risk (i.e.,
faster convergence and smaller generalization error).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Inspired by the distribution-dependent dropout, we propose a data-dependent dropout for shallow
learning, and an evolutional dropout for deep learning. For shallow learning, the sampling probabilities are computed from the second order statistics of features of the training data. For deep learning,
the sampling probabilities of dropout for a layer are computed on-the-fly from the second-order
statistics of the layer?s outputs based on a mini-batch of examples. This is particularly suited for deep
learning because (i) the distribution of each layer?s outputs is evolving over time, which is known
as internal covariate shift [5]; (ii) passing through all the training data in deep neural networks (in
particular deep convolutional neural networks) is much more expensive than through a mini-batch
of examples. For a mini-batch of examples, we can leverage parallel computing architectures to
accelerate the computation of sampling probabilities.
We note that the proposed evolutional dropout achieves a similar effect to the batch normalization
technique (Z-normalization based on a mini-batch of examples) [5] but with different flavors. Both
approaches can be considered to tackle the issue of internal covariate shift for accelerating the
convergence. Batch normalization tackles the issue by normalizing the output of neurons to zero
mean and unit variance and then performing dropout independently¹. In contrast, our proposed
evolutional dropout tackles this issue from another perspective by exploiting a distribution-dependent
dropout, which adapts the sampling probabilities to the evolving distribution of a layer?s outputs. In
other words, it uses normalized sampling probabilities based on the second order statistics of internal
distributions. Indeed, we notice that for shallow learning with Z-normalization (normalizing each
feature to zero mean and unit variance) the proposed data-dependent dropout reduces to uniform
dropout that acts similarly to the standard dropout. Because of this connection, the presented
theoretical analysis also sheds some lights on the power of batch normalization from the angle
of theory. Compared to batch normalization, the proposed distribution-dependent dropout is still
attractive because (i) it is rooted in theoretical analysis of the risk bound; (ii) it introduces no
additional parameters and layers without complicating the back-propagation and the inference; (iii) it
facilitates further research because its shares the same mathematical foundation as standard dropout
(e.g., equivalent to a form of data-dependent regularizer) [18].
We summarize the main contributions of the paper below.
• We propose a multinomial dropout and demonstrate that a distribution-dependent dropout leads to a faster convergence and a smaller generalization error through the risk bound analysis for shallow learning.
• We propose an efficient evolutional dropout for deep learning based on the distribution-dependent dropout.
• We justify the proposed dropouts for both shallow learning and deep learning by experimental results on several benchmark datasets.
In the remainder, we first review some related work and preliminaries. We present the main results in
Section 4 and experimental results in Section 5.
2 Related Work
In this section, we review some related work on dropout and optimization algorithms for deep
learning.
Dropout is a simple yet effective technique to prevent overfitting in training deep neural networks [16].
It has received much attention recently from researchers to study its practical and theoretical properties.
Notably, Wager et al. [18], Baldi and Sadowski [2] have analyzed the dropout from a theoretical
viewpoint and found that dropout is equivalent to a data-dependent regularizer. The most simple
form of dropout is to multiply hidden units by i.i.d Bernoulli noise. Several recent works also found
that using other types of noise works as well as Bernoulli noise (e.g., Gaussian noise), which could
lead to a better approximation of the marginalized loss [20, 7]. Some works tried to optimize the
hyper-parameters that define the noise level in a Bayesian framework [23, 7]. Graham et al. [3] used
the same noise across a batch of examples in order to speed up the computation. The adaptive dropout
proposed in [1] overlays a binary belief network over a neural network, incurring more computational overhead than dropout because one has to train the additional binary belief network. In contrast,
¹The authors also reported that in some cases dropout is not even necessary.
the present work proposes a new dropout with noise sampled according to distribution-dependent
sampling probabilities. To the best of our knowledge, this is the first work that rigorously studies this
type of dropout with theoretical analysis of the risk bound. It is demonstrated that the new dropout
can improve the speed of convergence.
Stochastic gradient descent with back-propagation has been used a lot in optimizing deep neural
networks. However, it is notorious for its slow convergence especially for deep learning. Recently,
there has emerged a battery of studies trying to accelerate the optimization of deep learning [17, 12, 22, 5, 6],
which tackle the problem from different perspectives. Among them, we notice that the developed
evolutional dropout for deep learning achieves similar effect as batch normalization [5] addressing
the internal covariate shift issue (i.e., evolving distributions of internal hidden units).
3 Preliminaries
In this section, we present some preliminaries, including the framework of risk minimization in
machine learning and learning with dropout noise. We also introduce the multinomial dropout, which
allows us to construct a distribution-dependent dropout as revealed in the next section.
Let $(x, y)$ denote a feature vector and a label, where $x \in \mathbb{R}^d$ and $y \in \mathcal{Y}$. Denote by $P$ the joint distribution of $(x, y)$ and denote by $D$ the marginal distribution of $x$. The goal of risk minimization is to learn a prediction function $f(x)$ that minimizes the expected loss, i.e., $\min_{f \in \mathcal{H}} \mathrm{E}_P[\ell(f(x), y)]$, where $\ell(z, y)$ is a loss function (e.g., the logistic loss) that measures the inconsistency between $z$ and $y$ and $\mathcal{H}$ is a class of prediction functions. In deep learning, the prediction function $f(x)$ is determined by a deep neural network. In shallow learning, one might be interested in learning a linear model $f(x) = w^\top x$. In the following presentation, the analysis will focus on the risk minimization of a linear model, i.e.,
$$\min_{w \in \mathbb{R}^d} L(w) \triangleq \mathrm{E}_P[\ell(w^\top x, y)] \qquad (1)$$
In this paper, we are interested in learning with dropout, i.e., the feature vector $x$ is corrupted by a dropout noise. In particular, let $\epsilon \sim M$ denote a dropout noise vector of dimension $d$, and the corrupted feature vector is given by $\hat{x} = x \circ \epsilon$, where the operator $\circ$ represents the element-wise multiplication. Let $\hat{P}$ denote the joint distribution of the new data $(\hat{x}, y)$ and $\hat{D}$ denote the marginal distribution of $\hat{x}$. With the corrupted data, the risk minimization becomes
$$\min_{w \in \mathbb{R}^d} \hat{L}(w) \triangleq \mathrm{E}_{\hat{P}}[\ell(w^\top (x \circ \epsilon), y)] \qquad (2)$$
In standard dropout [18, 4], the entries of the noise vector $\epsilon$ are sampled independently according to $\Pr(\epsilon_j = 0) = \delta$ and $\Pr(\epsilon_j = \frac{1}{1-\delta}) = 1 - \delta$, i.e., features are dropped with a probability $\delta$ and scaled by $\frac{1}{1-\delta}$ with a probability $1 - \delta$. We can also write $\epsilon_j = \frac{b_j}{1-\delta}$, where $b_j \in \{0, 1\}$, $j \in [d]$ are i.i.d. Bernoulli random variables with $\Pr(b_j = 1) = 1 - \delta$. The scaling factor $\frac{1}{1-\delta}$ is added to ensure that $\mathrm{E}[\hat{x}] = x$. It is obvious that using the standard dropout different features will have equal probabilities to be dropped out or to be selected independently. However, in practice some features could be more informative than the others for learning purposes. Therefore, it makes more sense to assign different sampling probabilities for different features and make the features compete with each other.
To this end, we introduce the following multinomial dropout.
Definition 1 (Multinomial Dropout). A multinomial dropout is defined as $\hat{x} = x \circ \epsilon$, where $\epsilon_i = \frac{m_i}{k p_i}$, $i \in [d]$, and $\{m_1, \ldots, m_d\}$ follow a multinomial distribution $\mathrm{Mult}(p_1, \ldots, p_d; k)$ with $\sum_{i=1}^d p_i = 1$ and $p_i \geq 0$.
Remark: The multinomial dropout allows us to use non-uniform sampling probabilities $p_1, \ldots, p_d$ for different features. The value of $m_i$ is the number of times that the $i$-th feature is selected in $k$ independent trials of selection. In each trial, the probability that the $i$-th feature is selected is given by $p_i$. As in the standard dropout, the normalization by $k p_i$ is to ensure that $\mathrm{E}[\hat{x}] = x$. The parameter $k$ plays the same role as the parameter $1 - \delta$ in standard dropout, which controls the number of features to be dropped. In particular, the expected total number of the kept features using multinomial dropout is $k$ and that using standard dropout is $d(1-\delta)$. In the sequel, to make a fair comparison between the two dropouts, we let $k = d(1-\delta)$. In this case, when a uniform distribution $p_i = 1/d$ is used in multinomial dropout, to which we refer as uniform dropout, then $\epsilon_i = \frac{m_i}{1-\delta}$, which acts similarly to the standard dropout using i.i.d. Bernoulli random variables. Note that another choice to make the sampling probabilities different is still using i.i.d. Bernoulli random variables but with different probabilities for different features. However, multinomial dropout is more suitable because (i) it is easy to control the level of dropout by varying the value of $k$; (ii) it gives rise to natural competition among features because of the constraint $\sum_i p_i = 1$; (iii) it allows us to minimize the sampling dependent risk bound for obtaining a better distribution than uniform sampling.
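As a quick illustration (ours, not from the paper), the following NumPy snippet samples the multinomial dropout of Definition 1 and checks empirically that $\mathrm{E}[\hat{x}] = x$; the dimension and dropout level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4                          # feature dimension and dropout level
x = rng.normal(size=d)               # a feature vector
p = np.full(d, 1.0 / d)              # sampling probabilities (uniform here)

def multinomial_dropout(x, p, k, rng):
    m = rng.multinomial(k, p)        # m_i = number of times feature i is selected
    return x * m / (k * p)           # x_hat = x o eps with eps_i = m_i / (k p_i)

# Since E[m_i] = k p_i, the corruption is unbiased: E[x_hat] = x.
x_hat_mean = np.mean([multinomial_dropout(x, p, k, rng) for _ in range(100000)], axis=0)
print(np.max(np.abs(x_hat_mean - x)))    # close to 0
```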
Dropout is a data-dependent regularizer. Dropout as a regularizer has been studied in [18, 2] for logistic regression, which is stated in the following proposition for ease of discussion later.
Proposition 1. If $\ell(z, y) = \log(1 + \exp(-yz))$, then
$$\mathrm{E}_{\hat{P}}[\ell(w^\top \hat{x}, y)] = \mathrm{E}_P[\ell(w^\top x, y)] + R_{D,M}(w) \qquad (3)$$
where $M$ denotes the distribution of $\epsilon$ and
$$R_{D,M}(w) = \mathrm{E}_{D,M}\left[\log \frac{\exp(w^\top (x \circ \epsilon)/2) + \exp(-w^\top (x \circ \epsilon)/2)}{\exp(w^\top x/2) + \exp(-w^\top x/2)}\right].$$
Remark: It is notable that $R_{D,M} \geq 0$ due to the Jensen inequality. Using the second order Taylor expansion, [18] showed that the following approximation of $R_{D,M}(w)$ is easy to manipulate and understand:
$$\hat{R}_{D,M}(w) = \frac{\mathrm{E}_D[q(w^\top x)(1 - q(w^\top x))\, w^\top C_M(x \circ \epsilon)\, w]}{2} \qquad (4)$$
where $q(w^\top x) = \frac{1}{1 + \exp(-w^\top x/2)}$, and $C_M$ denotes the covariance matrix in terms of $\epsilon$. In particular, if $\epsilon$ is the standard dropout noise, then $C_M[x \circ \epsilon] = \mathrm{diag}(x_1^2 \delta/(1-\delta), \ldots, x_d^2 \delta/(1-\delta))$, where $\mathrm{diag}(s_1, \ldots, s_d)$ denotes a $d \times d$ diagonal matrix with the $i$-th entry equal to $s_i$. If $\epsilon$ is the multinomial dropout noise in Definition 1, we have
$$C_M[x \circ \epsilon] = \frac{1}{k}\mathrm{diag}(x_i^2/p_i) - \frac{1}{k} x x^\top \qquad (5)$$

4 Learning with Multinomial Dropout
In this section, we analyze a stochastic optimization approach for minimizing the dropout loss
in (2). Assume the sampling probabilities are known. We first obtain a risk bound of learning with
multinomial dropout for stochastic optimization. Then we try to minimize the factors in the risk
bound that depend on the sampling probabilities. We would like to emphasize that our goal here is
not to show that using dropout would render a smaller risk than without using dropout, but rather
focus on the impact of different sampling probabilities on the risk. Let the initial solution be $w_1$. At iteration $t$, we sample $(x_t, y_t) \sim P$ and $\epsilon_t \sim M$ as in Definition 1 and then update the model by
$$w_{t+1} = w_t - \eta_t \nabla \ell(w_t^\top (x_t \circ \epsilon_t), y_t) \qquad (6)$$
where $\nabla \ell$ denotes the (sub)gradient in terms of $w_t$ and $\eta_t$ is a step size. Suppose we run the stochastic optimization by $n$ steps (i.e., using $n$ examples) and compute the final solution as $\hat{w}_n = \frac{1}{n}\sum_{t=1}^n w_t$.
We note that another approach of learning with dropout is to minimize the empirical risk by marginalizing out the dropout noise, i.e., replacing the true expectations $\mathrm{E}_P$ and $\mathrm{E}_D$ in (3) with empirical expectations over a set of samples $(x_1, y_1), \ldots, (x_n, y_n)$, denoted by $\mathrm{E}_{P_n}$ and $\mathrm{E}_{D_n}$. Since the data dependent regularizer $R_{D_n,M}(w)$ is difficult to compute, one usually uses an approximation $\hat{R}_{D_n,M}(w)$ (e.g., as in (4)) in place of $R_{D_n,M}(w)$. However, the resulting problem is a non-convex optimization, which together with the approximation error would make the risk analysis much more involved. In contrast, the update in (6) can be considered as a stochastic gradient descent update for solving the convex optimization problem in (2), allowing us to establish the risk bound based on previous results of stochastic gradient descent for risk minimization [14, 15]. Nonetheless, this restriction does not lose generality. Indeed, stochastic optimization is usually employed for solving empirical loss minimization in big data and deep learning.
The following theorem establishes a risk bound of $\hat{w}_n$ in expectation.
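For concreteness, here is a minimal sketch (ours) of the stochastic update (6) for logistic regression with multinomial dropout; the synthetic data, step size, and dropout level are placeholder choices.

```python
import numpy as np

def dropout_sgd(X, y, p, k, eta, rng):
    """Update (6): w <- w - eta * grad ell(w^T (x o eps), y) with
    logistic loss ell(z, y) = log(1 + exp(-y z)), labels y in {-1, +1}."""
    n, d = X.shape
    w, w_sum = np.zeros(d), np.zeros(d)
    for t in range(n):
        eps = rng.multinomial(k, p) / (k * p)        # multinomial dropout noise
        x_hat = X[t] * eps
        grad = -y[t] * x_hat / (1.0 + np.exp(y[t] * (w @ x_hat)))
        w -= eta * grad
        w_sum += w
    return w_sum / n                                  # averaged solution w_hat_n

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = np.sign(X @ rng.normal(size=20))
p = np.full(20, 1.0 / 20)                             # uniform probabilities
w_hat = dropout_sgd(X, y, p, k=10, eta=0.05, rng=rng)
```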
Theorem 1. Let $L(w)$ be the expected risk of $w$ defined in (1). Assume $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2] \leq B^2$ and $\ell(z, y)$ is $G$-Lipschitz continuous. For any $\|w_*\|_2 \leq r$, by appropriately choosing $\eta$, we can have
$$\mathrm{E}[L(\hat{w}_n) + R_{D,M}(\hat{w}_n)] \leq L(w_*) + R_{D,M}(w_*) + \frac{GBr}{\sqrt{n}}$$
where $\mathrm{E}[\cdot]$ is taking expectation over the randomness in $(x_t, y_t, \epsilon_t)$, $t = 1, \ldots, n$.
Remark: In the above theorem, we can choose $w_*$ to be the best model that minimizes the expected risk in (1). Since $R_{D,M}(w) \geq 0$, the upper bound in the theorem above is also an upper bound of the risk of $\hat{w}_n$, i.e., $L(\hat{w}_n)$, in expectation. The proof of the above theorem follows the standard analysis of stochastic gradient descent. The detailed proof of the theorem is included in the appendix.
4.1 Distribution Dependent Dropout
Next, we consider the sampling dependent factors in the risk bounds. From Theorem 1, we can see that there are two terms that depend on the sampling probabilities, i.e., $B^2$, the upper bound of $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$, and $R_{D,M}(w_*) - R_{D,M}(\hat{w}_n)$. We note that the second term also depends on $w_*$ and $\hat{w}_n$, which is more difficult to optimize. We first try to minimize $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ and present the discussion on minimizing $R_{D,M}(w_*)$ later. From Theorem 1, we can see that minimizing $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ would lead to not only a smaller risk (given the same number of total examples, a smaller $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ gives a smaller risk bound) but also a faster convergence (with the same number of iterations, a smaller $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ gives a smaller optimization error).
Due to the limited space, the proofs of Propositions 2, 3, and 4 are included in the supplement. The following proposition simplifies the expectation $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$.
Proposition 2. Let $\epsilon$ follow the distribution $M$ defined in Definition 1. Then
$$\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2] = \frac{1}{k}\sum_{i=1}^d \frac{1}{p_i}\mathrm{E}_D[x_i^2] + \frac{k-1}{k}\sum_{i=1}^d \mathrm{E}_D[x_i^2] \qquad (7)$$
Given the expression of $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ in Proposition 2, we can minimize it over $p$, leading to the following result.
Proposition 3. The solution to $p^* = \arg\min_{p \geq 0,\, p^\top \mathbf{1} = 1} \mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ is given by
$$p_i^* = \frac{\sqrt{\mathrm{E}_D[x_i^2]}}{\sum_{j=1}^d \sqrt{\mathrm{E}_D[x_j^2]}}, \quad i = 1, \ldots, d \qquad (8)$$
Next, we examine $R_{D,M}(w_*)$. Since direct manipulation on $R_{D,M}(w_*)$ is difficult, we try to minimize its second order Taylor expansion $\hat{R}_{D,M}(w_*)$ for the logistic loss. The following proposition establishes an upper bound of $\hat{R}_{D,M}(w_*)$.
Proposition 4. Let $\epsilon$ follow the distribution $M$ defined in Definition 1. We have
$$\hat{R}_{D,M}(w_*) \leq \frac{1}{8k}\|w_*\|_2^2\left(\sum_{i=1}^d \frac{\mathrm{E}_D[x_i^2]}{p_i} - \mathrm{E}_D[\|x\|_2^2]\right)$$
Remark: By minimizing the relaxed upper bound in Proposition 4, we obtain the same sampling probabilities as in (8). We note that a tighter upper bound can be established; however, it will yield sampling probabilities dependent on the unknown $w_*$.
In summary, using the probabilities in (8), we can reduce both $\mathrm{E}_{\hat{D}}[\|x \circ \epsilon\|_2^2]$ and $R_{D,M}(w_*)$ in the risk bound, leading to a faster convergence and a smaller generalization error. In practice, we can use empirical second-order statistics to compute the probabilities, i.e.,
$$p_i = \frac{\sqrt{\frac{1}{n}\sum_{j=1}^n [x_j]_i^2}}{\sum_{i'=1}^d \sqrt{\frac{1}{n}\sum_{j=1}^n [x_j]_{i'}^2}} \qquad (9)$$
where $[x_j]_i$ denotes the $i$-th feature of the $j$-th example, which gives us a data-dependent dropout. We state it formally in the following definition.
Evolutional Dropout for Deep Learning
Input: a batch of outputs of a layer: $X^l = (x_1^l, \ldots, x_m^l)$ and dropout level parameter $k \in [0, d]$
Output: $\hat{X}^l = X^l \circ \Sigma^l$
1. Compute sampling probabilities $p^l = (p_1^l, \ldots, p_d^l)^\top$ by (10)
2. For $j = 1, \ldots, m$: sample $m_j^l \sim \mathrm{Mult}(p_1^l, \ldots, p_d^l; k)$ and construct $\epsilon_j^l = \frac{m_j^l}{k p^l} \in \mathbb{R}^d$
3. Let $\Sigma^l = (\epsilon_1^l, \ldots, \epsilon_m^l)$ and compute $\hat{X}^l = X^l \circ \Sigma^l$
Figure 1: Evolutional Dropout applied to a layer over a mini-batch
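A minimal NumPy rendering (ours) of the procedure in Figure 1, under the assumption that the layer outputs arrive as an $m \times d$ matrix; in an actual deep learning framework this would be fused into the layer's forward pass.

```python
import numpy as np

def evolutional_dropout(X_l, k, rng, tiny=1e-12):
    """Apply evolutional dropout to a mini-batch of layer outputs X_l (m x d)."""
    m, d = X_l.shape
    # Sampling probabilities from second-order statistics of the batch, eq. (10).
    s = np.sqrt(np.mean(X_l ** 2, axis=0)) + tiny   # tiny guards against dead units
    p = s / s.sum()
    # One multinomial noise vector per example in the batch.
    M = rng.multinomial(k, p, size=m)               # m x d selection counts
    Sigma = M / (k * p)                             # eps_j = m_j / (k p)
    return X_l * Sigma                              # X_hat = X o Sigma

rng = np.random.default_rng(0)
X_l = rng.normal(size=(128, 256))                   # a batch of layer outputs
X_hat = evolutional_dropout(X_l, k=128, rng=rng)    # k = 0.5 d keeps half on average
```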
Definition 2 (Data-dependent Dropout). Given a set of training examples $(x_1, y_1), \ldots, (x_n, y_n)$, a data-dependent dropout is defined as $\hat{x} = x \circ \epsilon$, where $\epsilon_i = \frac{m_i}{k p_i}$, $i \in [d]$, and $\{m_1, \ldots, m_d\}$ follow a multinomial distribution $\mathrm{Mult}(p_1, \ldots, p_d; k)$ with $p_i$ given by (9).
Remark: Note that if the data is normalized such that each feature has zero mean and unit variance (i.e., according to Z-normalization), the data-dependent dropout reduces to uniform dropout. It implies that the data-dependent dropout achieves a similar effect as Z-normalization plus uniform dropout. In this sense, our theoretical analysis also explains why Z-normalization usually speeds up the training [13].
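To see the remark concretely, this small snippet (ours, with synthetic data) computes the probabilities (9) and confirms that Z-normalized features yield near-uniform probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Six features with very unequal scales.
X = rng.normal(size=(500, 6)) * np.array([5.0, 2.0, 1.0, 1.0, 0.5, 0.1])

def data_dependent_probs(X):
    s = np.sqrt(np.mean(X ** 2, axis=0))   # sqrt of empirical second moments, eq. (9)
    return s / s.sum()

print(data_dependent_probs(X))             # skewed toward the large-scale features

X_znorm = (X - X.mean(axis=0)) / X.std(axis=0)
print(data_dependent_probs(X_znorm))       # approximately uniform: ~1/6 each
```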
4.2 Evolutional Dropout for Deep Learning
Next, we discuss how to implement the distribution-dependent dropout for deep learning. In training deep neural networks, the dropout is usually added to the intermediate layers (e.g., fully connected layers and convolutional layers). Let $x^l = (x_1^l, \ldots, x_d^l)$ denote the outputs of the $l$-th layer (with the index of data omitted). Adding dropout to this layer is equivalent to multiplying $x^l$ by a dropout noise vector $\epsilon^l$, i.e., feeding $\hat{x}^l = x^l \circ \epsilon^l$ as the input to the next layer. Inspired by the data-dependent dropout, we can generate $\epsilon^l$ according to a distribution given in Definition 1 with sampling probabilities $p_i^l$ computed from $\{x_1^l, \ldots, x_n^l\}$ similar to (9). However, deep learning is usually trained with big data and a deep neural network is optimized by mini-batch stochastic gradient descent. Therefore, at each iteration it would be too expensive to afford the computation to pass through all examples. To address this issue, we propose to use a mini-batch of examples to calculate the second-order statistics, similar to what was done in batch normalization. Let $X^l = (x_1^l, \ldots, x_m^l)$ denote the outputs of the $l$-th layer for a mini-batch of $m$ examples. Then we can calculate the probabilities for dropout by
$$p_i^l = \frac{\sqrt{\frac{1}{m}\sum_{j=1}^m [[x_j^l]_i]^2}}{\sum_{i'=1}^d \sqrt{\frac{1}{m}\sum_{j=1}^m [[x_j^l]_{i'}]^2}}, \quad i = 1, \ldots, d \qquad (10)$$
which defines the evolutional dropout, named as such because the probabilities $p_i^l$ will also evolve as the distribution of the layer's outputs evolves. We describe the evolutional dropout as applied to a layer of a deep neural network in Figure 1.
Finally, we would like to compare the evolutional dropout with batch normalization. Similar to batch
normalization, evolutional dropout can also address the internal covariate shift issue by adapting
the sampling probabilities to the evolving distribution of layers? outputs. However, different from
batch normalization, evolutional dropout is a randomized technique, which enjoys many benefits
as standard dropout, including (i) the back-propagation is simple to implement (just multiplying the gradient of $\hat{X}^l$ by the dropout mask to get the gradient of $X^l$); (ii) the inference (i.e., testing) remains the same²; (iii) it is equivalent to a data-dependent regularizer with a clear mathematical explanation; (iv) it prevents units from co-adapting, which facilitates generalization. Moreover, the evolutional dropout has its root in the distribution-dependent dropout, which has a theoretical guarantee to accelerate the convergence and improve the generalization for shallow learning.
²Different from some implementations of standard dropout which do not scale by $1/(1-\delta)$ in training but scale by $1-\delta$ in testing, here we do scale in training and thus do not need any scaling in testing.
5 Experimental Results
In this section, we present some experimental results to justify the proposed dropouts. In all experiments, we set $\delta = 0.5$ in the standard dropout and $k = 0.5d$ in the proposed dropouts for fair
comparison, where d represents the number of features or neurons of the layer that dropout is applied
to. For the sake of clarity, we divided the experiments into three parts. In the first part, we compare
the performance of the data-dependent dropout (d-dropout) to the standard dropout (s-dropout)
for logistic regression. In the second part, we compare the performance of evolutional dropout
(e-dropout) to the standard dropout for training deep convolutional neural networks. Finally, we
compare e-dropout with batch normalization.
[Figure 2 plots: training/testing error vs. number of iterations for s-dropout and d-dropout on real-sim, news20, and RCV1; test accuracy vs. number of iterations for no BN/no Dropout, BN, BN+Dropout, and Evolutional Dropout on CIFAR-10.]
Figure 2: Left three: data-dependent dropout vs. standard dropout on three data sets (real-sim, news20, RCV1) for logistic regression; Right: Evolutional dropout vs. BN on CIFAR-10 (best seen in color).
5.1 Shallow Learning
We implement the presented stochastic optimization algorithm. To evaluate the performance
of data-dependent dropout for shallow learning, we use the three data sets: real-sim, news20
and RCV1³. In this experiment, we use a fixed step size and tune the step size in
[0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001] and report the best results in terms of convergence
speed on the training data for both standard dropout and data-dependent dropout. The left three
panels in Figure 2 show the obtained results on these three data sets. In each figure, we plot both
the training error and the testing error. We can see that both the training and testing errors using the
proposed data-dependent dropout decrease much faster than using the standard dropout and also a
smaller testing error is achieved by using the data-dependent dropout.
5.2 Evolutional Dropout for Deep Learning
We would like to emphasize that we are not aiming to obtain better prediction performance by trying
different network structures and different engineering tricks such as data augmentation, whitening,
etc., but rather focus on the comparison of the proposed dropout to the standard dropout using
Bernoulli noise on the same network structure. In our experiments, we use the default splitting of
training and testing data in all data sets. We directly optimize the neural networks using all training
images without further splitting it into a validation data to be added into the training in later stages,
which explains some marginal gaps from the literature results that we observed (e.g., on CIFAR-10
compared with [19]).
We conduct experiments on four benchmark data sets for comparing e-dropout and s-dropout: MNIST
[10], SVHN [11], CIFAR-10 and CIFAR-100 [8]. We use the same or similar network structure as in
the literatures for the four data sets. In general, the networks consist of convolution layers, pooling
layers, locally connected layers, fully connected layers, softmax layers and a cost layer. For the
detailed neural network structures and their parameters, please refer to the supplementary materials.
The dropout is added to some fully connected layers or locally connected layers. The rectified linear
activation function is used for all neurons. All the experiments are conducted using the cuda-convnet
library⁴. The training procedure is similar to [9] using mini-batch SGD with momentum (0.9). The
³https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
⁴https://code.google.com/archive/p/cuda-convnet/
[Figure 3 plots: training/testing error vs. number of iterations for s-dropout and e-dropout on (a) MNIST, (b) SVHN, (c) CIFAR-10, and (d) CIFAR-100.]
Figure 3: Evolutional dropout vs. standard dropout on four benchmark datasets for deep learning (best seen in color).
size of mini-batch is fixed to 128. The weights are initialized based on the Gaussian distribution
with mean zero and standard deviation 0.01. The learning rate (i.e., step size) is decreased after a
number of epochs similar to what was done in previous works [9]. We tune the initial learning rates
for s-dropout and e-dropout separately from 0.001, 0.005, 0.01, 0.1 and report the best result on each
data set that yields the fastest convergence.
Figure 3 shows the training and testing error curves in the optimization process on the four data sets
using the standard dropout and the evolutional dropout. For SVHN data, we only report the first
12000 iterations, after which the error curves of the two methods almost overlap. We can see that
using the evolutional dropout generally converges faster than using the standard dropout. On CIFAR-100 data, we have observed a significant speed-up. In particular, the evolutional dropout achieves
relative improvements over 10% on the testing performance and over 50% on the convergence speed
compared to the standard dropout.
5.3 Comparison with the Batch Normalization (BN)
Finally, we make a comparison between the evolutional dropout and the batch normalization. For
batch normalization, we use the implementation in Caffe⁵. We compare the evolutional dropout with
the batch normalization on CIFAR-10 data set. The network structure is from the Caffe package and
can be found in the supplement, which is different from the one used in the previous experiment.
It contains three convolutional layers and one fully connected layer. Each convolutional layer is
followed by a pooling layer. We compare four methods: (1) No BN and No dropout - without using
batch normalization and dropout; (2) BN; (3) BN with standard dropout; (4) Evolutional Dropout.
The rectified linear activation is used in all methods. We also tried BN with the sigmoid activation
function, which gives worse results. For the methods with BN, three batch normalization layers are
inserted before or after each pooling layer following the architecture given in Caffe package (see
supplement). For the evolutional dropout training, only one layer of dropout is added to the the last
convolutional layer. The mini-batch size is set to 100, the default value in Caffe. The initial learning
rates for the four methods are set to the same value (0.001), and they are decreased once by a factor of ten.
The testing accuracy versus the number of iterations is plotted in the right panel of Figure 2, from
which we can see that the evolutional dropout training achieves comparable performance with BN
+ standard dropout, which justifies our claim that evolutional dropout also addresses the internal
covariate shift issue.
6 Conclusion
In this paper, we have proposed a distribution-dependent dropout for both shallow learning and
deep learning. Theoretically, we proved that the new dropout achieves a smaller risk and faster
convergence. Based on the distribution-dependent dropout, we developed an efficient evolutional
dropout for training deep neural networks that adapts the sampling probabilities to the evolving
distributions of layers? outputs. Experimental results on various data sets verified that the proposed
dropouts can dramatically improve the convergence and also reduce the testing error.
Acknowledgments
We thank anonymous reviewers for their comments. Z. Li and T. Yang are partially supported by
National Science Foundation (IIS-1463988, IIS-1545995). B. Gong is supported in part by NSF
(IIS-1566511) and a gift from Adobe.
⁵https://github.com/BVLC/caffe/
References
[1] Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pages 3084–3092, 2013.
[2] Pierre Baldi and Peter J. Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pages 2814–2822, 2013.
[3] Benjamin Graham, Jeremy Reizenstein, and Leigh Robinson. Efficient batchwise dropout training using submatrices. CoRR, abs/1502.02478, 2015.
[4] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[5] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[6] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[7] Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. CoRR, abs/1506.02557, 2015.
[8] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[9] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[10] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[11] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 4. Granada, Spain, 2011.
[12] Behnam Neyshabur, Ruslan R. Salakhutdinov, and Nati Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413–2421, 2015.
[13] Marc'Aurelio Ranzato, Alex Krizhevsky, and Geoffrey E. Hinton. Factored 3-way restricted Boltzmann machines for modeling natural images. In AISTATS, pages 621–628, 2010.
[14] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Stochastic convex optimization. In The 22nd Conference on Learning Theory (COLT), 2009.
[15] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems 23 (NIPS), pages 2199–2207, 2010.
[16] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[17] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1139–1147, 2013.
[18] Stefan Wager, Sida Wang, and Percy S. Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems, pages 351–359, 2013.
[19] Li Wan, Matthew Zeiler, Sixin Zhang, Yann L. Cun, and Rob Fergus. Regularization of neural networks using DropConnect. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1058–1066, 2013.
[20] Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 118–126, 2013.
[21] Sida I. Wang, Mengqiu Wang, Stefan Wager, Percy Liang, and Christopher D. Manning. Feature noising for log-linear structured prediction. In EMNLP, pages 1170–1179, 2013.
[22] Sixin Zhang, Anna Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. arXiv preprint arXiv:1412.6651, 2014.
[23] Jingwei Zhuo, Jun Zhu, and Bo Zhang. Adaptive dropout rates for learning with corrupted features. In IJCAI, pages 4126–4133, 2015.
Communication-Optimal Distributed Clustering∗
6,149 | 6,562 | Communication-Optimal Distributed Clustering?
Jiecao Chen
Indiana University
Bloomington, IN 47401
[email protected]
He Sun
University of Bristol
Bristol, BS8 1UB, UK
[email protected]
David P. Woodruff
IBM Research Almaden
San Jose, CA 95120
[email protected]
Qin Zhang
Indiana University
Bloomington, IN 47401
[email protected]
Abstract
Clustering large datasets is a fundamental problem with a number of applications
in machine learning. Data is often collected on different sites and clustering needs
to be performed in a distributed manner with low communication. We would
like the quality of the clustering in the distributed setting to match that in the
centralized setting for which all the data resides on a single site. In this work, we
study both graph and geometric clustering problems in two distributed models:
(1) a point-to-point model, and (2) a model with a broadcast channel. We give
protocols in both models which we show are nearly optimal by proving almost
matching communication lower bounds. Our work highlights the surprising power
of a broadcast channel for clustering problems; roughly speaking, to spectrally
cluster n points or n vertices in a graph distributed across s servers, for a worst-case
partitioning the communication complexity in a point-to-point model is n · s, while
in the broadcast model it is n + s. A similar phenomenon holds for the geometric
setting as well. We implement our algorithms and demonstrate this phenomenon
on real life datasets, showing that our algorithms are also very efficient in practice.
1 Introduction
Clustering is a fundamental task in machine learning with widespread applications in data mining,
computer vision, and social network analysis. Example applications of clustering include grouping
similar webpages by search engines, finding users with common interests in a social network, and
identifying different objects in a picture or video. For these applications, one can model the objects
that need to be clustered as points in Euclidean space Rd , where the similarities of two objects are
represented by the Euclidean distance between the two points. Then the task of clustering is to choose
k points as centers, so that the total distance between all input points to their corresponding closest
center is minimized. Depending on different distance objective functions, three typical problems
have been studied: k-means, k-median, and k-center.
The other popular approach for clustering is to model the input data as vertices of a graph, and the
similarity between two objects is represented by the weight of the edge connecting the corresponding
vertices. For this scenario, one is asked to partition the vertices into clusters so that the ?highly
connected? vertices belong to the same cluster. A widely-used approach for graph clustering is
spectral clustering, which embeds the vertices of a graph into points in $\mathbb{R}^k$ through the bottom $k$ eigenvectors of the graph's Laplacian matrix, and applies k-means on the embedded points.
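For concreteness, a standard dense-matrix sketch of this pipeline (ours, not part of the protocols in this paper), using NumPy/SciPy and scikit-learn:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    """Embed vertices via the bottom-k eigenvectors of the Laplacian, then run k-means."""
    L = np.diag(A.sum(axis=1)) - A                   # (unnormalized) graph Laplacian
    _, U = eigh(L, subset_by_index=[0, k - 1])       # bottom-k eigenvectors: n x k embedding
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Toy graph: two 4-cliques joined by a single edge.
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)
A[3, 4] = A[4, 3] = 1.0
print(spectral_clustering(A, k=2))                   # recovers {0..3} and {4..7}
```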
∗Full version appears on arXiv, 2017, under the same title.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Both the spectral clustering and the geometric clustering algorithms mentioned above have been
widely used in practice, and have been the subject of extensive theoretical and experimental studies
over the decades. However, these algorithms are designed for the centralized setting, and are not
applicable in the setting of large-scale datasets that are maintained remotely by different sites. In
particular, collecting the information from all the remote sites and performing a centralized clustering
algorithm is infeasible due to high communication costs, and new distributed clustering algorithms
with low communication cost need to be developed.
There are several natural communication models, and we focus on two of them: (1) a point-to-point
model, and (2) a model with a broadcast channel. In the former, sometimes referred to as the message-passing
and the so-called coordinator model can often be used in place; in the coordinator model there is a
centralized site called the coordinator, and all communication goes through the coordinator. This
affects the total communication by a factor of two, since the coordinator can forward a message from
one server to another and therefore simulate a point-to-point protocol. There is also an additional
additive O(log s) bits per message, where s is the number of sites, since a server must specify to the
coordinator where to forward its message. In the model with a broadcast channel, sometimes referred
to as the blackboard model, the coordinator has the power to send a single message which is received
by all s sites at once. This can be viewed as a model for single-hop wireless networks.
In both models we study the total number of bits communicated among all sites. Although the
blackboard model is at least as powerful as the message-passing model, it is often unclear how to
exploit its power to obtain better bounds for specific problems. Also, for a number of problems the
communication complexity is the same in both models, such as computing the sum of s length-n bit
vectors modulo two, where each site holds one bit vector [18], or estimating large moments [20].
Still, for other problems like set disjointness it can save a factor of s in the communication [5].
Our contributions. We present algorithms for graph clustering: for any n-vertex graph whose
edges are arbitrarily partitioned across s sites, our algorithms have communication cost Õ(ns)
in the message passing model, and communication cost Õ(n + s) in the blackboard model,
where the Õ notation suppresses polylogarithmic factors. The algorithm in the message passing
model has each site send a spectral sparsifier of its local data to the coordinator, who then merges
them in order to obtain a spectral sparsifier of the union of the datasets, which is sufficient for
solving the graph clustering problem. Our algorithm in the blackboard model is technically more
involved, as we show a particular recursive sampling procedure for building a spectral sparsifier
can be efficiently implemented using a broadcast channel. It is unclear if other natural ways of
building spectral sparsifiers can be implemented with low communication in the blackboard model.
Our algorithms demonstrate the surprising power of the blackboard model for clustering problems.
Since our algorithms compute sparsifiers, they also have applications to solving symmetric diagonally
dominant linear systems in a distributed model. Any such system can be converted into a system
involving a Laplacian (see, e.g., [1]), from which a spectral sparsifier serves as a good preconditioner.
Next we show that Ω(ns) bits of communication is necessary in the message passing model to even
recover a constant fraction of a cluster, and Ω(n + s) bits of communication is necessary in the
blackboard model. This shows the optimality of our algorithms up to poly-logarithmic factors.
We then study clustering problems in constant-dimensional Euclidean space. We show for any c > 1,
computing a c-approximation for k-median, k-means, or k-center correctly with constant probability
in the message passing model requires Ω(sk) bits of communication. We then strengthen this lower
bound, and show even for bicriteria clustering algorithms, which may output a constant factor more
clusters and a constant factor approximation, our Ω(sk) bit lower bound still holds. Our proofs are
based on communication and information complexity. Our results imply that existing algorithms [3]
for k-median and k-means with Õ(sk) bits of communication, as well as the folklore parallel guessing
algorithm for k-center with Õ(sk) bits of communication, are optimal up to poly-logarithmic factors.
For the blackboard model, we present an algorithm for k-median and k-means that achieves an
O(1)-approximation using Õ(s + k) bits of communication. This again separates the models.
We give empirical results which show that using spectral sparsifiers preserves the quality of spectral
clustering surprisingly well in real-world datasets. For example, when we partition a graph with over
70 million edges (the Sculpture dataset) into 30 sites, only 6% of the input edges are communicated
in the blackboard model and 8% are communicated in the message passing model, while the values
of the normalized cut (the objective function of spectral clustering) given in those two models are
at most 2% larger than the ones given by the centralized algorithm, and the visualized results are
almost identical. This is strong evidence that spectral sparsifiers can be a powerful tool in practical,
distributed computation. When the number of sites is large, the blackboard model incurs significantly
less communication than the message passing model, e.g., in the Twomoons dataset when there are
90 sites, the message passing model communicates 9 times as many edges as communicated in the
blackboard model, illustrating the strong separation between these models that our theory predicts.
Related work. There is a rich literature on spectral and geometric clustering algorithms from various
aspects (see, e.g., [2, 16, 17, 19]). Balcan et al. [3, 4] and Feldman et al. [9] study distributed k-means
([3] also studies k-median). Very recently Guha et al. [10] studied distributed k-median/center/means
with outliers. Cohen et al. [7] study dimensionality reduction techniques for the input data matrices
that can be used for distributed k-means. The main takeaway is that there is no previous work
which develops protocols for spectral clustering in the common message passing and blackboard
models, and lower bounds are lacking as well. For geometric clustering, while upper bounds exist
(e.g., [3, 4, 9]), no provable lower bounds in either model existed, and our main contribution is to
show that previous algorithms are optimal. We also develop a new protocol in the blackboard model.
2 Preliminaries
Let G = (V, E, w) be an undirected graph with n vertices, m edges, and weight function
w : V × V → ℝ≥0. The set of neighbors of a vertex v is represented by N(v), and its degree is
d_v = Σ_{u∼v} w(u, v). The maximum degree of G is defined to be Δ(G) = max_v {d_v}. For any set
S ⊆ V, let vol(S) := Σ_{v∈S} d_v. For any sets S, T ⊆ V, we define w(S, T) := Σ_{u∈S, v∈T} w(u, v)
to be the total weight of edges crossing S and T. For two sets X and Y, the symmetric difference of
X and Y is defined as X△Y := (X \ Y) ∪ (Y \ X).
For any matrix A ∈ ℝ^{n×n}, let λ₁(A) ≤ · · · ≤ λₙ(A) = λ_max(A) be the eigenvalues of A. For any
two matrices A, B ∈ ℝ^{n×n}, we write A ⪯ B to represent that B − A is positive semi-definite (PSD).
Notice that this condition implies that xᵀAx ≤ xᵀBx for any x ∈ ℝⁿ. Sometimes we also use a
weaker notation (1 − ε)A ⪯_r B ⪯_r (1 + ε)A to indicate that (1 − ε)xᵀAx ≤ xᵀBx ≤ (1 + ε)xᵀAx
for all x in the row span of A.
Graph Laplacian. The Laplacian matrix of G is an n × n matrix L_G defined by L_G = D_G − A_G,
where A_G is the adjacency matrix of G defined by A_G(u, v) = w(u, v), and D_G is the n × n diagonal
matrix with D_G(v, v) = d_v for any v ∈ V[G]. Alternatively, we can write L_G with respect to a
signed edge-vertex incidence matrix: we assign every edge e = {u, v} an arbitrary orientation, and
let B_G(e, v) = 1 if v is e's head, B_G(e, v) = −1 if v is e's tail, and B_G(e, v) = 0 otherwise. We
further define a diagonal matrix W_G ∈ ℝ^{m×m}, where W_G(e, e) = w_e for any edge e ∈ E[G].
Then, we can write L_G as L_G = B_Gᵀ W_G B_G. The normalized Laplacian matrix of G is defined by
ℒ_G := D_G^{−1/2} L_G D_G^{−1/2} = I − D_G^{−1/2} A_G D_G^{−1/2}. We sometimes drop the subscript G when the
underlying graph is clear from the context.
Spectral sparsification. For any undirected and weighted graph G = (V, E, w), we say a subgraph
H of G with proper reweighting of the edges is a (1 + ε)-spectral sparsifier if

    (1 − ε)·L_G ⪯ L_H ⪯ (1 + ε)·L_G.                                   (1)
By definition, it is easy to show that, if we decompose the edge set of a graph G = (V, E) into
E₁, . . . , E_ℓ for a constant ℓ and Hᵢ is a spectral sparsifier of Gᵢ = (V, Eᵢ) for any 1 ≤ i ≤ ℓ, then
the graph formed by the union of the edge sets of the Hᵢ is a spectral sparsifier of G. It is known that,
for any undirected graph G of n vertices, there is a (1 + ε)-spectral sparsifier of G with O(n/ε²) edges,
and it can be constructed in almost-linear time [13]. We will show that a spectral sparsifier preserves
the cluster structure of a graph.
Models of computation. We will study distributed clustering in two models for distributed data: the
message passing model and the blackboard model. The message passing model represents those
distributed computation systems with point-to-point communication, and the blackboard model
represents those where messages can be broadcast to all parties.
More precisely, in the message passing model there are s sites P1 , . . . , Ps , and one coordinator.
These sites can talk to the coordinator through a two-way private channel. In fact, this is referred to
as the coordinator model in Section 1, where it is shown to be equivalent to the point-to-point model
up to small factors. The input is initially distributed at the s sites. The computation is in terms of
rounds: at the beginning of each round, the coordinator sends a message to some of the s sites, and
then each of those sites that have been contacted by the coordinator sends a message back to the
coordinator. At the end, the coordinator outputs the answer. In the alternative blackboard model, the
coordinator is simply a blackboard where these s sites P1 , . . . , Ps can share information; in other
words, if one site sends a message to the coordinator/blackboard then all the other s − 1 sites can see
this information without further communication. The order for the sites to speak is decided by the
contents of the blackboard.
For both models we measure the communication cost as the total number of bits sent through the
channels. The two models are now standard in multiparty communication complexity (see, e.g.,
[5, 18, 20]). They are similar to the congested clique model [14] studied in the distributed computing
community; the main difference is that in our models we do not impose any bandwidth limitations at
each channel but instead consider the total number of bits communicated.
3 Distributed graph clustering
In this section we study distributed graph clustering. We assume that the vertex set of the input graph
G = (V, E) can be partitioned into k clusters, where vertices in each cluster S are highly connected to
each other, and there are fewer edges between S and V \ S. To formalize this notion, we define the
conductance of a vertex set S by φ_G(S) := w(S, V \ S)/vol(S). Generalizing the Cheeger constant, we
define the k-way expansion constant of graph G by ρ(k) := min_{partition A₁,...,A_k} max_{1≤i≤k} φ_G(Aᵢ).
Notice that a graph G has k clusters if the value of ρ(k) is small.
Lee et al. [12] relate the value of ρ(k) to λ_k(ℒ_G) by the following higher-order Cheeger inequality:

    λ_k(ℒ_G)/2 ≤ ρ(k) ≤ O(k²)·√(λ_k(ℒ_G)).

Based on this, a large gap between λ_{k+1}(ℒ_G) and ρ(k) implies (i) the existence of a k-way partition
{Sᵢ}_{i=1}^k with smaller value of φ_G(Sᵢ) ≤ ρ(k), and (ii) that any (k + 1)-way partition of G contains a
subset with high conductance ρ(k + 1) ≥ λ_{k+1}(ℒ_G)/2. Hence, a large gap between λ_{k+1}(ℒ_G) and
ρ(k) ensures that G has exactly k clusters.
In the following, we assume that Υ := λ_{k+1}(ℒ_G)/ρ(k) = Ω(k³), as this assumption was used in the
literature for studying graph clustering in the centralized setting [17].
Both algorithms presented in this section are based on the following spectral clustering algorithm:
(i) compute the k eigenvectors f₁, . . . , f_k of ℒ_G associated with λ₁(ℒ_G), . . . , λ_k(ℒ_G); (ii) embed
every vertex v to a point in ℝᵏ through the embedding F(v) = (1/√d_v)·(f₁(v), . . . , f_k(v)); (iii) run
k-means on the embedded points {F(v)}_{v∈V}, and group the vertices of G into k clusters according
to the output of k-means.
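For concreteness, steps (i)-(iii) can be prototyped in a few lines. The sketch below is ours, not the authors' code; it assumes a dense symmetric weight matrix W of a connected graph and uses SciPy's eigensolver and scikit-learn's k-means.

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_clustering(W, k):
    # (i) the k eigenvectors of the normalized Laplacian with smallest eigenvalues
    L = csgraph.laplacian(W, normed=True)
    _, F = eigsh(L, k=k, which='SM')       # simple but slow; fine for a sketch
    # (ii) embed every vertex: F(v) = (1 / sqrt(d_v)) * (f_1(v), ..., f_k(v))
    d = np.asarray(W.sum(axis=1)).ravel()  # assumes no isolated vertices
    F = F / np.sqrt(d)[:, None]
    # (iii) k-means on the embedded points
    return KMeans(n_clusters=k, n_init=10).fit_predict(F)
```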
3.1 The message passing model
We assume the edges of the input graph G = (V, E) are arbitrarily allocated among s sites P₁, . . . , P_s,
and we use Eᵢ to denote the edge set maintained by site Pᵢ. Our proposed algorithm consists of
two steps: (i) every Pᵢ computes a linear-sized (1 + c)-spectral sparsifier Hᵢ of Gᵢ := (V, Eᵢ), for a
small constant c ≤ 1/10, and sends the edge set of Hᵢ, denoted by E′ᵢ, to the coordinator; (ii) the
coordinator runs a spectral clustering algorithm on the union of received graphs H := (V, ∪ᵢ E′ᵢ).
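A minimal mock-up of this two-step protocol is sketched below (ours). The sparsify argument is an assumed black box standing in for any linear-sized spectral sparsification routine returning reweighted (u, v, weight) triples, and spectral_clustering is the routine sketched above.

```python
import numpy as np

def edges_to_matrix(edges, n):
    # edges: iterable of (u, v, weight) triples with vertex ids in [0, n)
    W = np.zeros((n, n))
    for u, v, w in edges:
        W[u, v] += w
        W[v, u] += w
    return W

def distributed_graph_clustering(local_edge_sets, n, k, sparsify, spectral_clustering):
    # step (i): each site P_i sparsifies its local subgraph G_i = (V, E_i) ...
    received = []
    for E_i in local_edge_sets:
        received.extend(sparsify(E_i, n))   # ... and ships the reweighted edge set E'_i
    # step (ii): the coordinator clusters the union H = (V, U_i E'_i)
    return spectral_clustering(edges_to_matrix(received, n), k)
```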
The theorem below summarizes the performance of this algorithm, and shows the approximation
guarantee of this algorithm is as good as the provable guarantee of spectral clustering known in the
centralized setting [17].
Theorem 3.1. Let G = (V, E) be an n-vertex graph with Υ = Ω(k³), and suppose the edges
of G are arbitrarily allocated among s sites. Assume S₁, . . . , S_k is an optimal partition that
achieves ρ(k). Then, the algorithm above computes a partition A₁, . . . , A_k satisfying vol(Aᵢ△Sᵢ) =
O(k³·Υ⁻¹·vol(Sᵢ)) for any 1 ≤ i ≤ k. The total communication cost of this algorithm is Õ(ns) bits.
Our proposed algorithm is very easy to implement, and the next theorem shows that the communication cost of our algorithm is optimal up to a logarithmic factor.
Theorem 3.2. Let G be an undirected graph with n vertices, and suppose the edges of G are
distributed among s sites. Then, any algorithm that correctly outputs a constant fraction of a cluster
in G requires Ω(ns) bits of communication. This holds even if each cluster has constant expansion.
As a remark, it is easy to see that this lower bound also holds for constructing spectral sparsifiers:
for any n × n PSD matrix A whose entries are arbitrarily distributed among s sites, any distributed
algorithm that constructs a (1 + Θ(1))-spectral sparsifier of A requires Ω(ns) bits of communication.
This follows since such a spectral sparsifier can be used to solve the spectral clustering problem.
Spectral sparsification has played an important role in designing fast algorithms from different areas,
e.g., machine learning, and numerical linear algebra. Hence our lower bound result for constructing
spectral sparsifiers may have applications to studying other distributed learning algorithms.
3.2 The blackboard model
Next we present a graph clustering algorithm with Õ(n + s) bits of communication cost in the
blackboard model. Our result is based on the observation that a spectral sparsifier preserves the
structure of clusters, which was used for proving Theorem 3.1. So it suffices to design a distributed
algorithm for constructing a spectral sparsifier in the blackboard model.
Our distributed algorithm is based on constructing a chain of coarse sparsifiers [15], which is described
as follows: for any input PSD matrix K with λ_max(K) ≤ λ_u and all the non-zero eigenvalues of K
at least λ_ℓ, we define d = ⌈log₂(λ_u/λ_ℓ)⌉ and construct a chain of d + 1 matrices

    [K(0), K(1), . . . , K(d)],                                        (2)

where γ(i) = λ_u/2ⁱ and K(i) = K + γ(i)·I. Notice that in the chain above every K(i − 1) is
obtained by adding weights to the diagonal entries of K(i), and K(i − 1) approximates K(i) as long
as the weights added to the diagonal entries are small. We will construct this chain recursively, so that
K(0) has heavy diagonal entries and can be approximated by a diagonal matrix. Moreover, since K
is the Laplacian matrix of a graph G, it is easy to see that d = O(log n) as long as the edge weights
of G are polynomially upper-bounded in n.
Lemma 3.3 ([15]). The chain (2) satisfies the following relations: (1) K ⪯_r K(d) ⪯_r 2K; (2)
K(ℓ) ⪯ K(ℓ − 1) ⪯ 2K(ℓ) for all ℓ ∈ {1, . . . , d}; (3) K(0) ⪯ 2γ(0)·I ⪯ 2K(0).
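Building the chain (2) itself is mechanical once the eigenvalue bounds are fixed; an illustrative helper (ours) is:

```python
import numpy as np

def coarse_sparsifier_chain(K, lam_u, lam_l):
    """Chain (2): K(i) = K + gamma(i) * I with gamma(i) = lam_u / 2^i.
    lam_u upper-bounds lambda_max(K); lam_l lower-bounds the smallest
    non-zero eigenvalue of K, so d = ceil(log2(lam_u / lam_l))."""
    d = int(np.ceil(np.log2(lam_u / lam_l)))
    return [K + (lam_u / 2 ** i) * np.eye(K.shape[0]) for i in range(d + 1)]
```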
Based on Lemma 3.3, we will construct a chain of matrices

    [K̃(0), K̃(1), . . . , K̃(d)]                                        (3)

in the blackboard model, such that every K̃(ℓ) is a spectral sparsifier of K(ℓ), and every K̃(ℓ + 1)
can be constructed from K̃(ℓ). The basic idea behind our construction is to use the relations among
different K(ℓ) shown in Lemma 3.3 and the fact that, for any K = BᵀB, sampling rows of B with
respect to their leverage scores can be used to obtain a matrix approximating K.
Theorem 3.4. Let G be an undirected graph on n vertices, where the edges of G are allocated among
s sites, and the edge weights are polynomially upper bounded in n. Then, a spectral sparsifier of G
can be constructed with Õ(n + s) bits of communication in the blackboard model. That is, the chain
(3) can be constructed with Õ(n + s) bits of communication in the blackboard model.
Proof. Let K = BᵀB be the Laplacian matrix of the underlying graph G, where B ∈ ℝ^{m×n} is the
edge-vertex incidence matrix of G. We will prove that every K̃(i + 1) can be constructed based on
K̃(i) with Õ(n + s) bits of communication. This implies that K̃(d), a (1 + ε)-spectral sparsifier of
K, can be constructed with Õ(n + s) bits of communication, as the length of the chain d = O(log n).

First of all, notice that λ_u ≤ 2n, and the value of n can be obtained with communication cost
Õ(n + s) (different sites sequentially write the new IDs of the vertices on the blackboard). In the
following we assume that λ_u is the upper bound of λ_max that we actually obtained in the blackboard.

Base case of ℓ = 0: By definition, K(0) = K + λ_u·I, and (1/2)·K(0) ⪯ γ(0)·I ⪯ K(0), due
to Statement 3 of Lemma 3.3. Let ⊕ denote appending the rows of one matrix to another. We
define B_{γ(0)} = B ⊕ √(γ(0))·I, and write K(0) = K + γ(0)·I = B_{γ(0)}ᵀ B_{γ(0)}. By defining
τ_i = b_iᵀ (K(0))⁺ b_i for each row b_i of B_{γ(0)}, we have τ_i ≤ b_iᵀ (γ(0)·I)⁺ b_i ≤ 2·τ_i. Let τ̃_i =
b_iᵀ (γ(0)·I)⁺ b_i be the leverage score of b_i approximated using γ(0)·I, and let τ̃ be the vector of
approximate leverage scores, with the leverage scores of the n rows corresponding to √(γ(0))·I
rounded up to 1. Then, with high probability sampling O(ε⁻²·n·log n) rows of B will give a matrix
K̃(0) such that (1 − ε)·K(0) ⪯ K̃(0) ⪯ (1 + ε)·K(0). Notice that, as every row of B corresponds
to an edge of G, the approximate leverage scores τ̃_i for different edges can be computed locally by
the different sites maintaining the edges, and the sites only need to send the information of the sampled
edges to the blackboard, hence the communication cost is Õ(n + s) bits.

Induction step: We assume that (1 − ε)·K(ℓ) ⪯_r K̃(ℓ) ⪯_r (1 + ε)·K(ℓ), and the blackboard maintains
the matrix K̃(ℓ). This implies that ((1 − ε)/(1 + ε))·K(ℓ) ⪯_r (1/(1 + ε))·K̃(ℓ) ⪯_r K(ℓ). Combining
this with Statement 2 of Lemma 3.3, we have that

    ((1 − ε)/(2(1 + ε)))·K(ℓ + 1) ⪯_r (1/(2(1 + ε)))·K̃(ℓ) ⪯ K(ℓ + 1).

We apply the same sampling procedure as in the base case, and obtain a matrix K̃(ℓ + 1) such
that (1 − ε)·K(ℓ + 1) ⪯_r K̃(ℓ + 1) ⪯_r (1 + ε)·K(ℓ + 1). Notice that, since K̃(ℓ) is written on
the blackboard, the probabilities used for sampling individual edges can be computed locally by
different sites, and in each round only the sampled edges will be sent to the blackboard in order for
the blackboard to obtain K̃(ℓ + 1). Hence, the total communication cost in each iteration is Õ(n + s)
bits. Combining this with the fact that the chain length d = O(log n) proves the theorem.
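The sampling primitive used in both the base case and the induction step is standard leverage-score row sampling. The sketch below is our own illustration; the oversampling constant is chosen for readability and does not track the constants in the analysis.

```python
import numpy as np

def sample_rows_by_leverage(B, tau_tilde, eps, seed=0):
    """Keep row b_i of B with probability p_i = min(1, c * tau_tilde_i) and
    rescale it by 1 / sqrt(p_i), so the sampled matrix approximates B^T B."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    c = 4.0 * np.log(n) / eps ** 2           # illustrative oversampling factor
    p = np.minimum(1.0, c * np.asarray(tau_tilde, dtype=float))
    keep = rng.random(m) < p
    return B[keep] / np.sqrt(p[keep])[:, None]
```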
Combining Theorem 3.4 and the fact that a spectral sparsifier preserves the structure of clusters,
we obtain a distributed algorithm in the blackboard model with total communication cost Õ(n + s)
bits, and the performance of our algorithm is the same as in the statement of Theorem 3.1. Notice
that Ω(n + s) bits of communication are needed for graph clustering in the blackboard model,
since the output of a clustering algorithm contains Ω(n) bits of information and each site needs to
communicate at least one bit. Hence the communication cost of our proposed algorithm is optimal up
to a poly-logarithmic factor.
4 Distributed geometric clustering
We now consider geometric clustering, including k-median, k-means and k-center. Let P be a set
of points of size n in a metric space with distance function d(·, ·), and let k ≤ n be an integer. In
the k-center problem we want to find a set C (|C| = k) such that max_{p∈P} d(p, C) is minimized,
where d(p, C) = min_{c∈C} d(p, c). In k-median and k-means we replace the objective function
max_{p∈P} d(p, C) with Σ_{p∈P} d(p, C) and Σ_{p∈P} (d(p, C))², respectively.
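For reference, given a candidate center set the three objectives evaluate as follows (a small NumPy helper of ours, assuming Euclidean distance):

```python
import numpy as np

def clustering_costs(P, C):
    """P: (n, d) points, C: (k, d) centers; returns the k-center, k-median,
    and k-means objective values of C under Euclidean distance."""
    d = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2).min(axis=1)
    return d.max(), d.sum(), (d ** 2).sum()
```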
4.1 The message passing model
As mentioned, for constant-dimensional Euclidean space and a constant c > 1, there are algorithms
that c-approximate k-median and k-means using Õ(sk) bits of communication [3]. For k-center, the
folklore parallel guessing algorithms (see, e.g., [8]) achieve a 2.01-approximation using Õ(sk) bits
of communication.
The following theorem states that the above upper bounds are tight up to logarithmic factors. Due
to space constraints we defer the proof to the full version of this paper. The proof uses tools from
multiparty communication complexity. We in fact can prove a stronger statement that any algorithm
that can differentiate whether we have k points or k + 1 points in total in the message passing model
needs Ω(sk) bits of communication.
Theorem 4.1. For any c > 1, computing a c-approximation for k-median, k-means or k-center
correctly with probability 0.99 in the message passing model needs Ω(sk) bits of communication.
A number of works on clustering consider bicriteria solutions (e.g., [11, 6]). An algorithm is a
(c₁, c₂)-approximation (c₁, c₂ > 1) if, whenever the optimal solution costs W using k centers, the
output of the algorithm costs at most c₁·W while using at most c₂·k centers. We can show that for
k-median and k-means, the Ω(sk) lower bound holds even for algorithms with bicriteria approximations.
The proof of the following theorem can be found in the full version of this paper.
Theorem 4.2. For any c ∈ [1, 1.01], computing a (7.1 − 6c, c)-bicriteria-approximation for k-median
or k-means correctly with probability 0.99 in the message passing model needs Ω(sk) bits of
communication.
4.2 The blackboard model
We can show that there is an algorithm that achieves an O(1)-approximation using Õ(s + k) bits of
communication for k-median and k-means. Due to space constraints we defer the description of the
algorithm to the full version of this paper. For k-center, it is straightforward to implement the parallel
guessing algorithm in the blackboard model using Õ(s + k) bits of communication.
Theorem 4.3. There are algorithms that compute O(1)-approximations for k-median, k-means and
k-center correctly with probability 0.9 in the blackboard model using Õ(s + k) bits of communication.
5 Experiments
In this section we present experimental results for spectral graph clustering in the message passing
and blackboard models. We will compare the following three algorithms. (1) Baseline: each site
sends all the data to the coordinator directly; (2) MsgPassing: our algorithm in the message passing
model (Section 3.1); (3) Blackboard: our algorithm in the blackboard model (Section 3.2).
Besides giving the visualized results of these algorithms on various datasets, we also measure the
qualities of the results via the normalized cut, defined as

    ncut(A₁, . . . , A_k) = (1/2)·Σ_{i∈[k]} w(Aᵢ, V \ Aᵢ)/vol(Aᵢ),

which is a standard objective function to be minimized for spectral clustering algorithms.
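In code, this metric is a few lines (our own helper, assuming a dense symmetric weight matrix W and an integer cluster label per vertex):

```python
import numpy as np

def ncut(W, labels):
    # W: dense symmetric weight matrix; labels: integer cluster id per vertex
    deg = W.sum(axis=1)
    total = 0.0
    for c in np.unique(labels):
        S = labels == c
        total += W[S][:, ~S].sum() / deg[S].sum()  # w(A_i, V \ A_i) / vol(A_i)
    return 0.5 * total
```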
We implemented the algorithms using multiple languages, including Matlab, Python and C++. Our
experiments were conducted on an IBM NeXtScale nx360 M4 server, which is equipped with 2 Intel
Xeon E5-2652 v2 8-core processors, 32GB RAM and 250GB local storage.
Datasets. We test the algorithms in the following real and synthetic datasets.
• Twomoons: this dataset contains n = 14,000 coordinates in ℝ². We consider each point to
  be a vertex. For any two vertices u, v, we add an edge with weight w(u, v) = exp{−‖u − v‖₂²/σ²}
  with σ = 0.1 when one vertex is among the 7000 nearest points of the other. This
  construction results in a graph with about 110,000,000 edges.
• Gauss: this dataset contains n = 10,000 points in ℝ². There are 4 clusters in this dataset,
  each generated using a Gaussian distribution. We construct a complete graph as the similarity
  graph. For any two vertices u, v, we define the weight w(u, v) = exp{−‖u − v‖₂²/σ²} with
  σ = 1. The resulting graph has about 100,000,000 edges.
• Sculpture: a photo of The Greek Slave. We use an 80 × 150 version of this photo where
  each pixel is viewed as a vertex. To construct a similarity graph, we map each pixel to a point
  in ℝ⁵, i.e., (x, y, r, g, b), where the latter three coordinates are the RGB values. For any two
  vertices u, v, we put an edge between u, v with weight w(u, v) = exp{−‖u − v‖₂²/σ²}
  with σ = 0.5 if one of u, v is among the 5000 nearest points of the other. This results in a
  graph with about 70,000,000 edges. A sketch of this similarity-graph construction follows this list.
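The sketch below (ours) shows how such a k-NN RBF similarity graph can be built with scikit-learn; n_neighbors and sigma correspond to the per-dataset choices above, and the final symmetrization implements the "one vertex is among the nearest points of the other" rule. The dense matrix is for clarity only; at the scales above a sparse representation would be required.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_rbf_graph(X, n_neighbors, sigma):
    """w(u, v) = exp(-||u - v||_2^2 / sigma^2) when v is among the n_neighbors
    nearest points of u (or vice versa). Self-loops of weight 1 are left in
    for simplicity."""
    dist, idx = NearestNeighbors(n_neighbors=n_neighbors).fit(X).kneighbors(X)
    n = X.shape[0]
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    W[rows, idx.ravel()] = np.exp(-dist.ravel() ** 2 / sigma ** 2)
    return np.maximum(W, W.T)  # symmetrize: keep an edge if either side selected it
```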
In the distributed model edges are randomly partitioned across s sites.
Results on clustering quality. We visualize the clustered results for the Twomoons dataset in
Figure 1. It can be seen that Baseline, MsgPassing and Blackboard give results of very similar
qualities. For simplicity, here we only present the visualization for s = 15. Similar results were
observed when we varied the values of s.
We also compare the normalized cut (ncut) values of the clustering results of different algorithms.
The results are presented in Figure 2. In all datasets, the ncut values of different algorithms are very
close. The ncut value of MsgPassing slightly decreases when we increase the value of s, while the
ncut value of Blackboard is independent of s.
Figure 1: Visualization of the results on Twomoons; panels (a) Baseline, (b) MsgPassing, (c) Blackboard. In
the message passing model each site samples 5n edges; in the blackboard model all sites jointly sample 10n
edges and the chain has length 18.
Figure 2: Comparisons on normalized cuts; panels (a) Twomoons, (b) Gauss, (c) Sculpture. In the message
passing model, each site samples 5n edges; in each round of the algorithm in the blackboard model, all sites
jointly sample 10n edges (in Twomoons and Gauss) or 20n edges (in Sculpture) and the chain has length 18.
Results on Communication Costs. We compare the communication costs of different algorithms
in Figure 3. We observe that while achieving similar clustering qualities as Baseline, both
MsgPassing and Blackboard are significantly more communication-efficient (by one or two orders
of magnitude in our experiments). We also notice that the value of s does not affect the communication cost of Blackboard, while the communication cost of MsgPassing grows almost linearly with
s; when s is large, MsgPassing uses significantly more communication than Blackboard.
Figure 3: Comparisons on communication costs; panels (a)/(d) Twomoons, (b)/(e) Gauss, (c)/(f) Sculpture. In
the message passing model, each site samples 5n edges; in each round of the algorithm in the blackboard model,
all sites jointly sample 10n (in Twomoons and Gauss) or 20n (in Sculpture) edges and the chain has length 18.
Acknowledgement: Jiecao Chen and Qin Zhang are supported in part by NSF CCF-1525024 and
IIS-1633215. D.W. thanks support from the XDATA program of the Defense Advanced Research
Projects Agency (DARPA), Air Force Research Laboratory contract FA8750-12-C-0323.
References
[1] Alexandr Andoni, Jiecao Chen, Robert Krauthgamer, Bo Qin, David P. Woodruff, and Qin Zhang. On sketching quadratic forms. In ITCS, pages 311–319, 2016.
[2] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In SODA, pages 1027–1035, 2007.
[3] Maria-Florina Balcan, Steven Ehrlich, and Yingyu Liang. Distributed k-means and k-median clustering on general communication topologies. In NIPS, pages 1995–2003, 2013.
[4] Maria-Florina Balcan, Vandana Kanchanapally, Yingyu Liang, and David P. Woodruff. Improved distributed principal component analysis. CoRR, abs/1408.5823, 2014.
[5] Mark Braverman, Faith Ellen, Rotem Oshman, Toniann Pitassi, and Vinod Vaikuntanathan. A tight bound for set disjointness in the message-passing model. In FOCS, pages 668–677, 2013.
[6] Moses Charikar, Samir Khuller, David M. Mount, and Giri Narasimhan. Algorithms for facility location problems with outliers. In SODA, pages 642–651, 2001.
[7] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In STOC, pages 163–172, 2015.
[8] Graham Cormode, S. Muthukrishnan, and Wei Zhuang. Conquering the divide: Continuous clustering of distributed data streams. In ICDE, pages 1036–1045, 2007.
[9] Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In SODA, pages 1434–1453, 2013.
[10] Sudipto Guha, Yi Li, and Qin Zhang. Distributed partial clustering. Manuscript, 2017.
[11] Madhukar R. Korupolu, C. Greg Plaxton, and Rajmohan Rajaraman. Analysis of a local search heuristic for facility location problems. In SODA, pages 1–10, 1998.
[12] James R. Lee, Shayan Oveis Gharan, and Luca Trevisan. Multi-way spectral partitioning and higher-order Cheeger inequalities. In STOC, pages 1117–1130, 2012.
[13] Yin Tat Lee and He Sun. Constructing linear-sized spectral sparsification in almost-linear time. In FOCS, pages 250–269, 2015.
[14] Zvi Lotker, Elan Pavlov, Boaz Patt-Shamir, and David Peleg. MST construction in O(log log n) communication rounds. In SPAA, pages 94–100, 2003.
[15] Gary L. Miller and Richard Peng. Iterative approaches to row sampling. CoRR, abs/1211.2713, 2012.
[16] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[17] Richard Peng, He Sun, and Luca Zanetti. Partitioning well-clustered graphs: Spectral clustering works! In COLT, pages 1423–1455, 2015.
[18] Jeff M. Phillips, Elad Verbin, and Qin Zhang. Lower bounds for number-in-hand multiparty communication complexity, made easy. SIAM J. Comput., 45(1):174–196, 2016.
[19] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[20] David P. Woodruff and Qin Zhang. Tight bounds for distributed functional monitoring. In STOC, pages 941–960, 2012.
6,150 | 6,563 | MoCap-guided Data Augmentation
for 3D Pose Estimation in the Wild
Grégory Rogez
Cordelia Schmid
Inria Grenoble Rhône-Alpes, Laboratoire Jean Kuntzmann, France
Abstract
This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated
with 3D poses. Such data is necessary to train state-of-the-art CNN architectures.
Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis
engine that artificially augments a dataset of real images with 2D human pose
annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose
our algorithm selects for each joint an image whose 2D pose locally matches the
projected 3D pose. The selected images are then combined to generate a new
synthetic image by stitching local image patches in a kinematically constrained
manner. The resulting images are used to train an end-to-end CNN for full-body
3D pose estimation. We cluster the training data into a large number of pose classes
and tackle pose estimation as a K-way classification problem. Such an approach is
viable only with large training sets such as ours. Our method outperforms the state
of the art in terms of 3D pose estimation in controlled environments (Human3.6M)
and shows promising results for in-the-wild images (LSP). This demonstrates that
CNNs trained on artificial images generalize well to real images.
1 Introduction
Convolutional Neural Networks (CNNs) have been very successful for many different tasks in
computer vision. However, training these deep architectures requires large-scale datasets which are
not always available or easily collectable. This is particularly the case for 3D human pose estimation,
for which an accurate annotation of 3D articulated poses in large collections of real images is nontrivial: annotating 2D images with 3D pose information is impractical [6] while large scale 3D pose
capture is only available through marker-based systems in constrained environments [13]. The images
captured in such conditions do not match well real environments. This has limited the development
of end-to-end CNN architectures for in-the-wild 3D pose understanding.
Learning architectures usually augment existing training data by applying synthetic perturbations
to the original images, e.g. jittering exemplars or applying more complex affine or perspective
transformations [15]. Such data augmentation has proven to be a crucial stage, especially for training
deep architectures. Recent work [14, 23, 34, 40] has introduced the use of data synthesis as a solution
to train CNNs when only limited data is available. Synthesis can potentially provide infinite training
data by rendering 3D CAD models from any camera viewpoint [23, 34, 40]. Fisher et al [8] generate
a synthetic ?Flying Chairs? dataset to learn optical flow with a CNN and show that networks trained
on this unrealistic data still generalize very well to existing datasets. In the context of scene text
recognition, Jaderberg et al. [14] trained solely on data produced by a synthetic text generation
engine. In this case, the synthetic data is highly realistic and sufficient to replace real data. Although
synthesis seems like an appealing solution, there often exists a large domain shift from synthetic to
real data [23]. Integrating a human 3D model in a given background in a realistic way is not trivial.
Rendering a collection of photo-realistic images (in terms of color, texture, context, shadow) that
would cover the variations in pose, body shape, clothing and scenes is a challenging task.
Instead of rendering a human 3D model, we propose an image-based synthesis approach that makes
use of Motion Capture (MoCap) data to augment an existing dataset of real images with 2D pose
annotations. Our system synthesizes a very large number of new in-the-wild images showing more
pose configurations and, importantly, it provides the corresponding 3D pose annotations (see Fig. 1).
For each candidate 3D pose in the MoCap library, our system combines several annotated images to
generate a synthetic image of a human in this particular pose. This is achieved by ?copy-pasting? the
image information corresponding to each joint in a kinematically constrained manner. Given this
large ?in-the-wild? dataset, we implement an end-to-end CNN architecture for 3D pose estimation.
Our approach first clusters the 3D poses into K pose classes. Then, a K-way CNN classifier is trained
to return a distribution over probable pose classes given a bounding box around the human in the
image. Our method outperforms state-of-the-art results in terms of 3D pose estimation in controlled
environments and shows promising results on images captured ?in-the-wild?.
Figure 1: Image-based synthesis engine. Input: real images with manual annotation of 2D poses, and 3D poses
captured with a Motion Capture (MoCap) system. Output: 220x220 synthetic images and associated 3D poses.
1.1 Related work
3D human pose estimation in monocular images. Recent approaches employ CNNs for 3D pose
estimation in monocular images [20] or in videos [44]. Due to the lack of large scale training data,
they are usually trained (and tested) on 3D MoCap data in constrained environments [20]. Pose
understanding in natural images is usually limited to 2D pose estimation [7, 36, 37]. Recent work
also tackles 3D pose understanding from 2D poses [2, 10]. Some approaches use as input the 2D
joints automatically provided by a 2D pose detector [32, 38], while others jointly solve the 2D and
3D pose estimation [31, 43]. Most similar to ours is the approach of Iqbal et al. [42] who use a
dual-source approach that combines 2D pose estimation with 3D pose retrieval. Our method uses
the same two training sources, i.e., images with annotated 2D pose and 3D MoCap data. However,
we combine both sources off-line to generate a large training set that is used to train an end-to-end
CNN 3D pose classifier. This is shown to improve over [42], which can be explained by the fact that
training is performed in an end-to-end fashion.
Synthetic pose data. A number of works have considered the use of synthetic data for human pose
estimation. Synthetic data have been used for upper body [29], full-body silhouettes [1], hand-object
interactions [28], full-body pose from depth [30] or egocentric RGB-D scenes [27]. Recently, Zuffi
and Black [45] used a 3D mesh-model to sample synthetic exemplars and fit 3D scans. In [11],
a scene-specific pedestrian detectors was learned without real data while [9] synthesized virtual
samples with a generative model to enhance the classification performance of a discriminative model.
In [12], pictures of 2D characters were animated by fitting and deforming a 3D mesh model. Later,
[25] augmented labelled training images with small perturbations in a similar way. These methods
require a perfect segmentation of the humans in the images. Park and Ramanan [22] synthesized
hypothetical poses for tracking purposes by applying geometric transformations to the first frame of
a video sequence. We also use image-based synthesis to generate images but our rendering engine
combines image regions from several images to create images with associated 3D poses.
2 Image-based synthesis engine
At the heart of our approach is an image-based synthesis engine that artificially generates "in-the-wild"
images with 3D pose annotations. Our method takes as input a dataset of real images with 2D
Figure 2: Synthesis engine. From left to right: for each joint j of a 2D query pose p (centered in a 220 × 220
bounding box), we align all the annotated 2D poses w.r.t. the limb and search for the best pose match, obtaining a
list of n matches {(I′_j, q′_j), j = 1...n} where I′_j is obtained after transforming I_j with T_{q_j→q′_j}. For each
retrieved pair, we compute a probability map p_j[u, v]. These n maps are used to compute index[u, v] ∈ {1...n},
pointing to the image I′_j that should be used for a particular pixel (u, v). Finally, our blending algorithm
computes each pixel value of the synthetic image M[u, v] as the weighted sum over all aligned images I′_j, the
weights being calculated using a histogram of indexes in a squared region R_{u,v} around (u, v).
annotations and a library of 3D Motion Capture (MoCap) data, and generates a large number of
synthetic images with associated 3D poses (Fig. 1). We introduce an image-based rendering engine
that augments the existing database of annotated images with a very large set of photorealistic images
covering more body pose configurations than the original set. This is done by selecting and stitching
image patches in a kinematically constrained manner using the MoCap 3D poses. Our synthesis
process consists of two stages: a MoCap-guided mosaic construction stage that stitches image patches
together and a pose-aware blending process that improves image quality and erases patch seams.
These are discussed in the following subsections. Fig. 2 summarizes the overall process.
2.1 MoCap-guided image mosaicing
Given a 3D pose with n joints P ∈ ℝ^{n×3}, and its projected 2D joints p = {p_j, j = 1...n} in a
particular camera view, we want to find for each joint j ∈ {1...n} an image whose annotated 2D pose
presents a similar kinematic configuration around j. To do so, we define a distance function between
2 different 2D poses p and q, conditioned on joint j, as:

    D_j(p, q) = Σ_{k=1}^n d_E(p_k, q′_k)                                        (1)

where d_E is the Euclidean distance. q′ is the aligned version of q with respect to joint j after applying
a rigid transformation T_{q_j→q′_j}, which respects q′_j = p_j and q′_i = p_i, where i is the farthest directly
connected joint to j in p. This function D_j measures the similarity between 2 joints by aligning and
taking into account the entire poses. To increase the influence of neighboring joints, we weight the
distances d_E between each pair of joints {(p_k, q′_k), k = 1...n} according to their distance to the query
joint j in both poses. Eq. 1 becomes:

    D_j(p, q) = Σ_{k=1}^n (w_k^j(p) + w_k^j(q))·d_E(p_k, q′_k)                  (2)

where the weight w_k^j is inversely proportional to the distance between joint k and the query joint j, i.e.,
w_k^j(p) = 1/d_E(p_k, p_j), normalized so that Σ_k w_k^j(p) = 1. For each joint j of the query pose p,
we retrieve from our dataset Q = {(I₁, q₁), . . . , (I_N, q_N)} of images and annotated 2D poses¹:

    q_j = argmin_{q∈Q} D_j(p, q)   ∀j ∈ {1...n}.                                (3)

We obtain a list of n matches {(I′_j, q′_j), j = 1...n} where I′_j is the cropped image obtained after
transforming I_j with T_{q_j→q′_j}. Note that the same pair (I, q) can appear multiple times in the list of
candidates, i.e., being a good match for several joints.
¹ In practice, we do not search for occluded joints.
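A direct NumPy transcription of Eq. (2) is sketched below (ours). Two practical details are our own assumptions: matching both endpoints q′_j = p_j and q′_i = p_i exactly requires a similarity transform (rotation, translation and scale) rather than a strictly rigid one, and the weight of the query joint itself is excluded to avoid division by zero.

```python
import numpy as np

def D_j(p, q, j, i):
    """Eq. (2): weighted distance between 2D poses p, q ((n, 2) arrays),
    conditioned on joint j; i is the farthest joint directly connected to j."""
    # Align q so that q'_j = p_j and q'_i = p_i (similarity transform,
    # expressed with complex numbers; assumes q_i != q_j).
    zq = (q[:, 0] - q[j, 0]) + 1j * (q[:, 1] - q[j, 1])
    zp = (p[i, 0] - p[j, 0]) + 1j * (p[i, 1] - p[j, 1])
    za = (zp / zq[i]) * zq                       # rotate + scale so joint i lines up
    q_al = np.stack([za.real, za.imag], axis=1) + p[j]

    dE = np.linalg.norm(p - q_al, axis=1)        # d_E(p_k, q'_k)

    def weights(x):                              # w_k^j ~ 1 / d_E(x_k, x_j), normalized
        d = np.maximum(np.linalg.norm(x - x[j], axis=1), 1e-8)
        d[j] = np.inf                            # exclude the query joint itself
        w = 1.0 / d
        return w / w.sum()

    return float(((weights(p) + weights(q_al)) * dE).sum())
```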
Finally, to render a new image, we need to select the candidate images I′_j to be used for each pixel
(u, v). Instead of using regular patches, we compute a probability map p_j[u, v] associated with each
pair (I′_j, q′_j) based on local matches measured by d_E(p_k, q′_k) in Eq. 1. To do so, we first apply a
Delaunay triangulation to the set of 2D joints in {q′_j}, obtaining a partition of the image into triangles,
according to the selected pose. Then, we assign the probability p_j(q′_k) = exp(−d_E(p_k, q′_k)²/σ²)
to each vertex q′_k. We finally compute a probability map p_j[u, v] by interpolating values from these
vertices using barycentric interpolation inside each triangle. The resulting n probability maps are
concatenated and an index map index[u, v] ∈ {1...n} can be computed as follows:

    index[u, v] = argmax_{j∈{1...n}} p_j[u, v],                        (4)

this map pointing to the training image I′_j that should be used for each pixel (u, v). A mosaic M[u, v]
can be generated by "copy-pasting" image information at pixel (u, v) indicated by index[u, v]:

    M[u, v] = I′_{j*}[u, v]  with  j* = index[u, v].                   (5)
2.2 Pose-aware image blending
The mosaic M [u, v] resulting from the previous stage presents significant artifacts at the boundaries
between image regions. Smoothing is necessary to prevent the learning algorithm from interpreting
these artifacts as discriminative pose-related features. We first experimented with off-the-shelf image
filtering and alpha blending algorithms, but the results were not satisfactory. Instead, we propose
a new pose-aware blending algorithm that maintains image information on the human body while
erasing most of the stitching artifacts. For each pixel (u, v), we select a surrounding squared region
R_{u,v} whose size varies with the distance of pixel (u, v) to the pose: R_{u,v} will be larger when far
from the body and smaller nearby. Then, we evaluate how much each image I′_j should contribute to
the value of pixel (u, v) by building a histogram of the image indexes inside the region R_{u,v}:

    w_j[u, v] = Hist(index(R_{u,v}))  ∀j ∈ {1 . . . n},                (6)

where the weights are normalized so that Σ_j w_j[u, v] = 1. The final mosaic M[u, v] (see examples
in Fig. 1) is then computed as the weighted sum over all aligned images:

    M[u, v] = Σ_j w_j[u, v]·I′_j[u, v].                                (7)
This procedure produces plausible images that are kinematically correct and locally photorealistic.
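Eqs. (6)-(7) translate almost line for line into the following sketch (ours). For brevity it uses a fixed half-window instead of the pose-dependent region size R_{u,v} described above; that simplification is our assumption.

```python
import numpy as np

def pose_aware_blend(aligned_imgs, index_map, half_win=8):
    """aligned_imgs: list of n aligned H x W x 3 float images I'_j;
    index_map: H x W integer map from Eq. (4), values in [0, n)."""
    n, (H, W) = len(aligned_imgs), index_map.shape
    stack = np.stack(aligned_imgs, axis=0)                       # (n, H, W, 3)
    out = np.zeros_like(stack[0])
    for u in range(H):
        for v in range(W):
            R = index_map[max(0, u - half_win):u + half_win + 1,
                          max(0, v - half_win):v + half_win + 1]
            w = np.bincount(R.ravel(), minlength=n).astype(float)
            w /= w.sum()                                         # Eq. (6), normalized
            out[u, v] = np.tensordot(w, stack[:, u, v], axes=1)  # Eq. (7)
    return out
```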
3 CNN for full-body 3D pose estimation
Human pose estimation has been addressed as a classification problem in the past [4, 21, 27, 26].
Here, the 3D pose space is partitioned into K clusters and a K-way classifier is trained to return a
distribution over pose classes. Such a classification approach allows modeling multimodal outputs
in ambiguous cases, and produces multiple hypotheses that can be rescored, e.g. using temporal
information. Training such a classifier requires a reasonable amount of data per class which implies a
well-defined and limited pose space (e.g. walking action) [26, 4], a large-scale synthetic dataset [27] or
both [21]. Here, we introduce a CNN-based classification approach for full-body 3D pose estimation.
Inspired by the DeepPose algorithm [37] where the AlexNet CNN architecture [19] is used for
full-body 2D pose regression, we select the same architecture and adapt it to the task of 3D body
pose classification. This is done by adapting the last fully-connected layer to output a distribution of
scores over pose classes as illustrated in Fig. 3. Training such a classifier requires a large amount of
training data that we generate using our image-based synthesis engine.
Given a library of MoCap data and a set of camera views, we synthesize for each 3D pose a 220 ? 220
image. This size has proved to be adequate for full-body pose estimation [37]. The 3D poses are
then aligned with respect to the camera center and translated to the center of the torso. In that way,
we obtain orientated 3D poses that also contain the viewpoint information. We cluster the resulting
3D poses to define our classes which will correspond to groups of similar orientated 3D poses.We
empirically found that K=5000 clusters was a sufficient number of clusters. For evaluation, we return
the average 2D and 3D poses of the top scoring class.
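In a modern framework, the architectural change amounts to swapping AlexNet's final 1000-way layer for a K-way one; a minimal PyTorch sketch of this adaptation is given below. This is our own illustration (the paper predates PyTorch), trained from scratch as the paper does, hence no pretrained weights.

```python
import torch.nn as nn
from torchvision.models import alexnet

def pose_class_net(num_classes=5000):
    net = alexnet(weights=None)  # trained from scratch, as in the paper
    # replace the final 1000-way layer with a K-way layer over pose classes
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net
```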
To compare with [37], we also train a holistic pose regressor, which regresses to 2D and 3D poses
(not only 2D). To do so, we concatenate the 3D coordinates expressed in meters, normalized to the
range [−1, 1], with the 2D pose coordinates, also normalized to the range [−1, 1], following [37].
Figure 3: CNN-based pose classifier. We show the different layers with their corresponding dimensions, with
convolutional layers depicted in blue and fully connected ones in green. The output is a distribution over K pose
classes. Pose estimation is obtained by taking the highest score in this distribution. We show on the right the 3D
poses for the 3 highest scores.
4 Experiments
We address 3D pose estimation in the wild. However, there does not exist a dataset of real-world
images with 3D annotations. We thus evaluate our method in two different settings using existing
datasets: (1) we validate our 3D pose predictions using Human3.6M [13] which provides accurate 3D
and 2D poses for 15 different actions captured in a controlled indoor environment; (2) we evaluate
on Leeds Sport dataset (LSP)[16] that presents in-the-wild images together with full-body 2D pose
annotations. We demonstrate competitive results with state-of-the-art methods for both of them.
Our image-based rendering engine requires two different training sources: 1) a 2D source of images
with 2D pose annotations and 2) a MoCap 3D source. We consider two different datasets for each:
for 3D poses we use the CMU Motion Capture Dataset² and the Human3.6M 3D poses [13], and for
2D pose annotations the MPII-LSP-extended dataset [24] and the Human3.6M 2D poses and images.
MoCap 3D source. The CMU Motion Capture dataset consists of 2500 sequences and a total of
140,000 3D poses. We align the 3D poses w.r.t. the torso and select a subset of 12,000 poses, ensuring
that selected poses have at least one joint 5 cm apart. In that way, we densely populate our pose space
and avoid repeating common poses (e.g. neutral standing or walking poses which are over-represented
in the dataset). For each of the 12,000 original MoCap poses, we sample 180 random virtual views
with azimuth angle spanning 360 degrees and elevation angles in the range [−45, 45]. We generate
over 2 million pairs of 3D/2D pose configurations (articulated poses + camera position and angle).
For Human3.6M, we randomly selected a subset of 190,000 orientated 3D poses, discarding similar
poses, i.e., when the average Euclidean distance of the joints is less than 15mm as in [42].
2D source. For the training dataset of real images with 2D pose annotations, we use the MPII-LSP-extended dataset [24], which is a concatenation of the extended LSP [17] and the MPII dataset [3]. Some of
the poses were manually corrected as a non-negligible number of annotations are not accurate enough
or completely wrong (e.g., right-left inversions or bad ordering of the joints along a limb). We mirror
the images to double the size of the training set, obtaining a total of 80,000 images with 2D pose
annotations. For Human3.6M, we consider the 4 cameras and create a pool of 17,000 images and
associated 2D poses that we also mirror. We ensure that most similar poses have at least one joint 5
cm apart in 3D.
4.1 Evaluation on Human3.6M Dataset (H3.6M)
To compare our results with very recent work in 3D pose estimation [42], we follow the protocol
introduced in [18] and employed in [42]: we consider six subjects (S1, S5, S6, S7, S8 and S9)
for training, use every 64th frame of subject S11 for testing and evaluate the 3D pose error (mm)
averaged over the 13 joints. We refer to this protocol by P1. As in [42], we consider a 3D pose error
that measures accuracy of aligned pose by a rigid transformation but also report the absolute error.
We first evaluate the impact of our synthetic data on the performances for both the regressor and
classifier. The results are reported in Tab. 1. We can observe that when considering few training
images (17,000), the regressor clearly outperforms the classifier which, in turn, reaches better
performances when trained on larger sets. This can be explained by the fact that the classification
approach requires a sufficient amount of examples. We, then, compare results when training both
regressor and classifier on the same 190,000 poses considering a) synthetic data generating from
H3.6M, b) the real images corresponding to the 190,000 poses and c) the synthetic and real images
² http://mocap.cs.cmu.edu
Table 1: 3D pose estimation results on Human3.6M (protocol P1).

    Method   Type of images   2D source size   3D source size   Error (mm)
    Reg.     Real             17,000           17,000           112.9
    Class.   Real             17,000           17,000           149.7
    Reg.     Synth            17,000           190,000          101.9
    Class.   Synth            17,000           190,000          97.2
    Reg.     Real             190,000          190,000          139.6
    Class.   Real             190,000          190,000          97.7
    Reg.     Synth + Real     207,000          190,000          125.5
    Class.   Synth + Real     207,000          190,000          88.1
Table 2: Comparison with state-of-the-art results on Human3.6M. The average 3D pose error (mm) is
reported before (Abs.) and after rigid 3D alignment for 2 different protocols. See text for details.

Method                | Abs. Error (P1) | Error (P1) | Abs. Error (P2) | Error (P2)
Bo & Sminchisescu [5] | -               | -          | -               | 117.9
Kostrikov & Gall [18] | -               | 115.7      | -               | -
Iqbal et al. [42]     | -               | 108.3      | -               | -
Li et al. [20]        | -               | -          | -               | 121.31
Tekin et al. [35]     | -               | -          | -               | 124.97
Zhou et al. [44]      | -               | -          | -               | 113.01
Ours                  | 126             | 88.1       | 121.2           | 87.3
We observe that the classifier has similar performance when trained on synthetic or real
images, which means that our image-based rendering engine synthesizes useful data. Furthermore,
we can see that the classifier performs much better when trained on synthetic and real images together.
This means that our data is different from the original data and allows the classifier to learn better
features. Note that we retrain Alexnet from scratch. We found that it performed better than just
fine-tuning a model pre-trained on Imagenet (3D error of 88.1mm vs 98.3mm with fine-tuning).
In Tab. 2, we compare our results to state-of-the-art approaches. We also report results for a second
protocol (P2) employed in [20, 44, 35] where all the frames from subjects S9 and S11 are used
for testing and only S1, S5, S6, S7 and S8 are used for training. Our best classifier, trained with
a combination of synthetic and real data, outperforms state-of-the-art results in terms of 3D pose
estimation for single frames. Zhou et al. [44] report better performance, but they integrate temporal
information. Note that our method estimates absolute pose (including orientation w.r.t. the camera),
which is not the case for other methods such as Bo et al. [5], who estimate a relative pose and do not
provide 3D orientation.
4.2 Evaluation on Leeds Sport Dataset (LSP)
We now train our pose classifier using different combinations of training sources and use them to
estimate 3D poses on images captured in-the-wild, i.e., LSP. Since 3D pose evaluation is not possible
on this dataset, we instead compare 2D pose errors expressed in pixels and measure this error on the
normalized 220 × 220 images following [44]. We compute the average 2D pose error over the 13
joints on both LSP and H3.6M (see Table 3).
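A small sketch of this normalized metric; the person bounding-box convention used for the 220 × 220 rescaling is an assumption.

```python
import numpy as np

def normalized_2d_error(pred2d, gt2d, bbox, size=220.0):
    """pred2d, gt2d: (13, 2) joint positions in image pixels;
    bbox: (x0, y0, w, h) person crop rescaled to size x size."""
    x0, y0, w, h = bbox
    scale = np.array([size / w, size / h])
    p = (pred2d - np.array([x0, y0])) * scale
    g = (gt2d - np.array([x0, y0])) * scale
    return np.mean(np.linalg.norm(p - g, axis=1))
```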
As expected, we observe that when using a pool of in-the-wild images to generate the synthetic
data, the performance increases on LSP and drops on H3.6M, showing the importance of realistic
images for good performance in-the-wild and the lack of generalizability of models trained on constrained
indoor images. The error slightly increases in both cases when using the same number (190,000)
of CMU 3D poses. The same drop was observed by [42] and can be explained by the fact that the
CMU data covers a larger portion of the 3D pose space, resulting in a worse fit. The results improve
on both test sets when considering more poses and synthetic images (2 million). The larger drop
in Abs 3D error and 2D error compared to 3D error means that a better camera view is estimated
when using more synthetic data. In all cases, the performance (in pixels) is lower on LSP than on
H3.6M due to the fact that the poses observed in LSP are more different from the ones in the CMU
MoCap data. In Fig. 4, we visualize the 2D pose error on LSP and Human3.6M 1) for different pools
of annotated 2D images, 2) varying the number of synthesized training images and 3) considering
different numbers of pose classes K. As expected, using a bigger set of annotated images improves the performance in-the-wild.
Table 3: Pose error on LSP and H3.6M using different sources for rendering the synthetic images.

2D source | 3D source | Num. of 3D poses | H3.6M Abs Error (mm) | H3.6M Error (mm) | H3.6M Error (pix) | LSP Error (pix)
H3.6M     | H3.6M     | 190,000          | 130.1                | 97.2             | 8.8               | 31.1
MPII+LSP  | H3.6M     | 190,000          | 248.9                | 122.1            | 17.3              | 20.7
MPII+LSP  | CMU       | 190,000          | 320.0                | 150.6            | 19.7              | 22.4
MPII+LSP  | CMU       | 2,000,000        | 216.5                | 138.0            | 11.2              | 13.8
Table 4: State-of-the-art results on LSP (2D pose error in pixels on normalized 220 × 220 images).

Method                 | Feet | Knees | Hips | Hands | Elbows | Shoulder | Head | All
Wei et al. [39]        | 6.6  | 5.3   | 4.8  | 8.6   | 7.0    | 5.2      | 5.3  | 6.2
Pishchulin et al. [24] | 10.0 | 6.8   | 5.0  | 11.1  | 8.2    | 5.7      | 5.9  | 7.6
Chen & Yuille [7]      | 15.7 | 11.5  | 8.1  | 15.6  | 12.1   | 8.6      | 6.8  | 11.5
Yang et al. [41]       | 15.5 | 11.5  | 8.0  | 14.7  | 12.2   | 8.9      | 7.4  | 11.5
Ours (Alexnet)         | 19.1 | 13.0  | 4.9  | 21.4  | 16.6   | 10.5     | 10.3 | 13.8
Ours (VGG)             | 16.2 | 10.6  | 4.1  | 17.7  | 13.0   | 8.4      | 9.8  | 11.5
Pose error converges on both LSP and H3.6M when using 1.5 million
images; using more than K = 5000 classes does not further improve the performance.
Figure 4: 2D pose error on LSP and Human3.6M using different pools of annotated images to generate 2 million
synthetic training images (left), varying the number of synthetic training images (center) and considering
different numbers of pose classes K (right).
To further improve the performance, we also experiment with fine-tuning a VGG-16 architecture
[33] for pose classification. By doing so, the average (normalized) 2D pose error decreases by 2.3
pixels. In Table 4, we compare our results on LSP to the state-of-the-art 2D pose estimation methods.
Although our approach is designed to estimate a coarse 3D pose, its performance is comparable to
recent 2D pose estimation methods [7, 41].
The qualitative results in Fig. 5 show that our algorithm correctly estimates the global 3D pose. After
a visual analysis of the results, we found that failures occur in two cases: 1) when the observed pose
does not belong to the MoCap training database, which is a limitation of purely holistic approaches,
or 2) when there is a possible right-left or front-back confusion. We observed that in this latter case,
subsequent top-scoring poses are often correct. This highlights a property of our approach: it
can keep multiple pose hypotheses, which could be rescored adequately, for instance, using temporal
information in videos.
5 Conclusion
In this paper, we introduce an approach for creating a synthetic training dataset of ?in-the-wild?
images and their corresponding 3D pose. Our algorithm artificially augments a dataset of real images
with new synthetic images showing new poses and, importantly, with 3D pose annotations. We
show that CNNs can be trained on artificial images and generalize well to real images. We train
an end-to-end CNN classifier for 3D pose estimation and show that, with our synthetic training
images, our method outperforms state-of-the-art results in terms of 3D pose estimation in controlled
environments and shows promising results for in-the-wild images (LSP). In this paper, we have
estimated a coarse 3D pose by returning the average pose of the top-scoring cluster. In future work,
we will investigate how the top-scoring classes could be re-ranked and also how the pose could be refined.

Figure 5: Qualitative results on LSP. We show correct 3D pose estimations (top 2 rows) and typical failure
cases (bottom row) corresponding to unseen poses or right-left and front-back confusions.
Acknowledgments. This work was supported by the European Commission under FP7 Marie Curie
IOF grant (PIOF-GA-2012-328288) and partially supported by ERC advanced grant Allegro. We
acknowledge the support of NVIDIA with the donation of the GPUs used for this research. We thank
P. Weinzaepfel for his help and the anonymous reviewers for their comments and suggestions.
References
[1] A. Agarwal and B. Triggs. Recovering 3D human pose from monocular images. PAMI, 28(1):44-58, 2006.
[2] I. Akhter and M. Black. Pose-conditioned joint angle limits for 3D human pose reconstruction. In CVPR,
2015.
[3] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and
state-of- the-art analysis. In CVPR, 2014.
[4] A. Bissacco, M.-H. Yang, and S. Soatto. Detecting humans via their pose. In NIPS, 2006.
[5] L. Bo and C. Sminchisescu. Twin Gaussian processes for structured prediction. IJCV, 87(1-2):28-52,
2010.
[6] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3D human pose annotations. In ICCV,
2009.
[7] X. Chen and A. L. Yuille. Articulated pose estimation by a graphical model with image dependent pairwise
relations. In NIPS, 2014.
[8] A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and
T. Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, 2015.
[9] M. Enzweiler and D. M. Gavrila. A mixed generative-discriminative framework for pedestrian classification.
In CVPR, 2008.
[10] X. Fan, K. Zheng, Y. Zhou, and S. Wang. Pose locality constrained representation for 3D human pose
reconstruction. In ECCV, 2014.
[11] H. Hattori, V. N. Boddeti, K. M. Kitani, and T. Kanade. Learning scene-specific pedestrian detectors
without real data. In CVPR, 2015.
[12] A. Hornung, E. Dekkers, and L. Kobbelt. Character animation from 2D pictures and 3D motion data. ACM
Trans. Graph., 26(1), 2007.
[13] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive
methods for 3D human sensing in natural environments. PAMI, 36(7):1325-1339, 2014.
[14] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional
neural networks. IJCV, 116(1):1-20, 2016.
[15] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. NIPS,
2015.
8
[16] S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose
estimation. In BMVC, 2010.
[17] S. Johnson and M. Everingham. Learning effective human pose estimation from inaccurate annotation. In
CVPR, 2011.
[18] I. Kostrikov and J. Gall. Depth sweep regression forests for estimating 3D human pose from images. In
BMVC, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[20] S. Li, W. Zhang, and A. B. Chan. Maximum-margin structured learning with deep networks for 3D human
pose estimation. In ICCV, 2015.
[21] R. Okada and S. Soatto. Relevant feature selection for human pose estimation and localization in cluttered
images. In ECCV, 2008.
[22] D. Park and D. Ramanan. Articulated pose estimation with tiny synthetic videos. In CVPRW, 2015.
[23] X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3D models. In ICCV, 2015.
[24] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and B. Schiele. Deepcut:
Joint subset partition and labeling for multi person pose estimation. CVPR, 2016.
[25] L. Pishchulin, A. Jain, M. Andriluka, T. Thormählen, and B. Schiele. Articulated people detection and
pose estimation: Reshaping the future. In CVPR, 2012.
[26] G. Rogez, J. Rihan, C. Orrite, and P. Torr. Fast human pose detection using randomized hierarchical
cascades of rejectors. IJCV, 99(1):25-52, 2012.
[27] G. Rogez, J. Supancic, and D. Ramanan. First-person pose recognition using egocentric workspaces. In
CVPR, 2015.
[28] J. Romero, H. Kjellstrom, and D. Kragic. Hands in action: real-time 3D reconstruction of hands in
interaction with objects. In ICRA, 2010.
[29] G. Shakhnarovich, P. A. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In
ICCV, 2003.
[30] J. Shotton, A. W. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake.
Real-time human pose recognition in parts from single depth images. In CVPR, 2011.
[31] E. Simo-Serra, A. Quattoni, C. Torras, and F. Moreno-Noguer. A joint model for 2D and 3D pose estimation
from a single image. In CVPR, 2013.
[32] E. Simo-Serra, A. Ramisa, G. Alenyà, C. Torras, and F. Moreno-Noguer. Single image 3D human pose
estimation from noisy observations. In CVPR, 2012.
[33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
CoRR, abs/1409.1556, 2014.
[34] H. Su, C. Ruizhongtai Qi, Y. Li, and L. J. Guibas. Render for CNN: viewpoint estimation in images using
CNNs trained with rendered 3D model views. In ICCV, 2015.
[35] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct prediction of 3D body poses from
motion compensated sequences. In CVPR, 2016.
[36] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical
model for human pose estimation. In NIPS, 2014.
[37] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. In CVPR, 2014.
[38] C. Wang, Y. Wang, Z. Lin, A. L. Yuille, and W. Gao. Robust estimation of 3D human poses from a single
image. In CVPR, 2014.
[39] S-E Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[40] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D shapenets: A deep representation for
volumetric shapes. In CVPR, 2015.
[41] W. Yang, W. Ouyang, H. Li, and X. Wang. End-to-end learning of deformable mixture of parts and deep
convolutional neural networks for human pose estimation. In CVPR, 2016.
[42] H. Yasin, U. Iqbal, B. Krüger, A. Weber, and J. Gall. A dual-source approach for 3D pose estimation from
a single image. In CVPR, 2016.
[43] F. Zhou and F. De la Torre. Spatio-temporal matching for human detection in video. In ECCV, 2014.
[44] X. Zhou, M. Zhu, S. Leonardos, K. Derpanis, and K. Daniilidis. Sparseness meets deepness: 3D human
pose estimation from monocular video. In CVPR, 2016.
[45] S. Zuffi and M. J. Black. The stitched puppet: A graphical model of 3D human shape and pose. In CVPR,
2015.
6,151 | 6,564 | A Bio-inspired Redundant Sensing Architecture
Anh Tuan Nguyen, Jian Xu and Zhi Yang*
Department of Biomedical Engineering
University of Minnesota
Minneapolis, MN 55455
* [email protected]
Abstract
Sensing is the process of deriving signals from the environment that allows artificial systems to interact with the physical world. The Shannon theorem specifies
the maximum rate at which information can be acquired [1]. However, this upper bound is hard to achieve in many man-made systems. The biological visual
systems, on the other hand, have highly efficient signal representation and processing mechanisms that allow precise sensing. In this work, we argue that redundancy is one of the critical characteristics for such superior performance. We
show architectural advantages by utilizing redundant sensing, including correction
of mismatch error and significant precision enhancement. For a proof-of-concept
demonstration, we have designed a heuristic-based analog-to-digital converter - a
zero-dimensional quantizer. Through Monte Carlo simulation with the error probabilistic distribution as a priori, the performance approaching the Shannon limit
is feasible. In actual measurements without knowing the error distribution, we
observe at least 2-bit extra precision. The results may also help explain biological
processes including the dominance of binocular vision, the functional roles of the
fixational eye movements, and the structural mechanisms allowing hyperacuity.
1 Introduction
Visual systems have perfected the art of sensing through billions of years of evolution. As an example, with roughly 100 million photoreceptors absorbing light and 1.5 million retinal ganglion cells
transmitting information [2, 3, 4], a human can see images in three-dimensional space with great
details and unparalleled resolution. Anatomical studies determine the spatial density of the photoreceptors on the retina, which limits the peak foveal angular resolution to 20-30 arcseconds according
to Shannon theory [1, 2]. There are also other imperfections due to nonuniform distribution of cells?
shape, size, location, and sensitivity that further constrain the precision. However, experimental data
have shown that humans can achieve an angular separation close to 1 arcminute in a two-point acuity test [5]. In certain conditions, it is even possible to detect an angular misalignment of only 2-5
arcseconds [6], which surpasses this seemingly impossible physical barrier. This ability, known as
hyperacuity, has baffled scientists for decades: what kind of mechanism allows a human to read an
undistorted image with such a blunt instrument?
Among the approaches to explain this astonishing feat of human vision, redundant sensing is a
promising candidate. It is well-known that redundancy is an important characteristic of many biological systems, from DNA coding to neural network [7]. Previous studies [8, 9] suggest there is
a connection between hyperacuity and binocular vision - the ability to see images using two eyes
with overlapping field of vision. Also known as stereopsis, it presents a passive form of redundant sensing. In addition to the obvious advantage of seeing objects in three-dimensional space,
the binocular vision has been proven to increase visual dynamic range, contrast, and signal-to-noise
ratio [10]. It is evident that seeing with two eyes enables us to sense a higher level of information
Figure 1: Illustration of n-dimensional quantizers without (ideal) and with mismatch error. (a) Two-dimensional quantizers for image sensing. (b) Zero-dimensional quantizers for analog-to-digital
data conversion.
as well as to correct many intrinsic errors and imperfections. Furthermore, the eyes continuously
and involuntarily engage in a complex micro-fixational movement known as microsaccade, which
suggests an active form of redundant sensing [11]. During microsaccade, the image projected on the
retina is shifted across a few photoreceptors in a pseudo-random manner. Empirical studies [12] and
computational models [13] suggest that the redundancy created by these micro-movements allows
efficient sampling of spatial information that can surpass the static diffraction limitation.
Both biological and artificial systems encounter similar challenges to achieve precise sensing in the
presence of non-ideal imperfections. One of those is mismatch error. At a high resolution, even a
small degree of mismatch error can degrade the performance of many man-made sensors [14, 15].
For example, it is not uncommon for a 24-bit analog-to-digital converter (ADC) to have 18-20 bits
effective resolution [16]. Inspired by the human visual system, we explore a new computational
framework to remedy mismatch error based on the principle of redundant sensing. The proposed
mechanism resembles the visual systems? binocular architecture and is designed to increase the
precision of a zero-dimensional data quantization process. By assuming the error probabilistic distribution as a priori, we show that precise data conversion approaching the Shannon limit can be
accomplished.
As a proof-of-concept demonstration, we have designed and validated a high-resolution ADC integrated circuit. The device utilizes a heuristic approach that allows unsupervised estimation and
calibration of mismatch error. Simulation and measurement results have demonstrated the efficacy
of the proposed technique, which can increase the effective resolution by 2-5 bits and linearity by
4-6 times without penalties in chip area and power consumption.
2 Mismatch Error
2.1 Quantization & Shannon Limit
Data quantization is the partition of a continuous n-dimensional vector space into M subspaces,
Ω_0, ..., Ω_{M−1}, called quantization regions, as illustrated in Figure 1. For example, an eye is a two-dimensional biological quantizer while an ADC is a zero-dimensional artificial quantizer, where the
partition occurs in a spatial, temporal and scalar domain. Each quantization region is assigned a
representative value, d_0, ..., d_{M−1}, which uniquely encodes the quantized information. While the
representative values are well-defined in the abstract domain, the actual partition often depends on
the physical properties of the quantization device and has a limited degree of freedom for adjustment.
An optimal data conversion is achieved with a set of uniformly distributed quantization regions. In
practice, it is difficult to achieve due to the physical constraints in the partition process. For example,
individual pixel cells can deviate from the ideal morphology, location, and sensitivity. These relative
differences, referred to as mismatch error, contribute to the data conversion error.
Figure 2: (a) Degeneration of entropy, i.e., maximum effective resolution, due to mismatch error
versus the quantizer's intrinsic resolution. (b) The proportion of data conversion error measured
by the mismatch-to-quantization ratio (MQR). With a conventional architecture, mismatch error is the
dominant source, especially in the high-resolution domain. The proposed method allows suppressing
mismatch error below quantization noise and approaching the Shannon limit.

In this paper, we consider a zero-dimensional (scalar) quantizer, which is the mathematical equivalent
of an ADC device. An N-bit quantizer divides the continuous conversion full-range (FR = [0, 2^N])
into 2^N quantization regions, Ω_0, ..., Ω_{2^N−1}, with nominal unity length E(|Ω_i|) = Δ = 1
least-significant-bit (LSB). The quantization regions are defined by a set of discrete references¹,
S_R = {τ_0, ..., τ_{2^N}}, where 0 = τ_0 < τ_1 < ... < τ_{2^N} = 2^N. An input signal x is assigned
the digital code d(x) = i ∈ S_D = {0, 1, 2, ..., 2^N − 1} if it falls into the region Ω_i, defined by

$$x \mapsto d(x) = i \iff x \in \Omega_i \iff \tau_i \le x < \tau_{i+1}. \qquad (1)$$
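With sorted references, Eq. (1) reduces to an interval lookup; a minimal sketch using numpy.searchsorted:

```python
import numpy as np

def quantize(x, tau):
    """tau: 2^N + 1 sorted references with tau[0] = 0 and tau[-1] = 2**N.
    Returns d(x) = i such that tau[i] <= x < tau[i+1]."""
    d = np.searchsorted(tau, x, side="right") - 1
    return int(np.clip(d, 0, len(tau) - 2))

N = 4
tau_ideal = np.arange(2**N + 1, dtype=float)  # mismatch-free references, tau_i = i
assert quantize(3.7, tau_ideal) == 3
```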
The Shannon entropy of an N-bit quantizer [17, 18] quantifies the maximum amount of information
that can be acquired by the data conversion process:

$$H = -\log_2 \sqrt{12 \cdot M}, \qquad (2)$$

where M is the normalized total mean square error integrated over each digital code:

$$M = \frac{1}{2^{3N}} \int_0^{2^N} \left[x - d(x) - \tfrac{1}{2}\right]^2 dx = \frac{1}{2^{3N}} \sum_{i=0}^{2^N-1} \int_{\tau_i}^{\tau_{i+1}} \left(x - i - \tfrac{1}{2}\right)^2 dx. \qquad (3)$$
In this work, we consider both quantization noise and mismatch error. The Shannon limit is generally
referred to as the maximum rate at which information can be acquired without any mismatch error,
where τ_i = i for all i, i.e., S_R\{2^N} = S_D; in that case M is equal to the total quantization noise
Q = 2^{−2N}/12, and the entropy is equal to the quantizer's intrinsic resolution, H = N. The differences
between S_R\{2^N} and S_D are caused by mismatch error and result in the degeneration of entropy. Figure
2(a) shows the entropy, i.e., the maximum effective resolution, versus the quantizer's intrinsic resolution
with fixed mismatch ratios σ_0 = 1% and σ_0 = 10%. Figure 2(b) describes the proportion of error
contributed by each source, as measured by the mismatch-to-quantization ratio (MQR):

$$\mathrm{MQR} = \frac{M - Q}{Q}. \qquad (4)$$
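Each inner integral in Eq. (3) has the closed form [(τ_{i+1} − i − 1/2)³ − (τ_i − i − 1/2)³]/3, so H, M and MQR can be evaluated exactly for any reference set; a short numerical sketch, verifying that ideal references reach H = N:

```python
import numpy as np

def entropy_and_mqr(tau, N):
    i = np.arange(2**N)
    a, b = tau[:-1] - i - 0.5, tau[1:] - i - 0.5
    M = np.sum(b**3 - a**3) / 3.0 / 2**(3 * N)   # Eq. (3), per-code closed form
    Q = 2.0**(-2 * N) / 12.0                     # total quantization noise
    H = -np.log2(np.sqrt(12.0 * M))              # Eq. (2)
    return H, (M - Q) / Q                        # entropy and MQR, Eq. (4)

N = 8
H, mqr = entropy_and_mqr(np.arange(2**N + 1, dtype=float), N)
assert abs(H - N) < 1e-9 and abs(mqr) < 1e-12
```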
It is evident that at a high resolution, mismatch error is the dominant source causing data conversion error. The Shannon theory implies that mismatch error is the fundamental problem relating to
the physical distribution of the reference set. [19, 20] have proposed post-conversion calibration
methods, which are ineffective in removing mismatch error without altering the reference set itself.
A standard workaround solution is to use larger components and thus obtain better matching characteristics;
however, this incurs penalties concerning cost and power consumption. As a rule of thumb, 1-bit
increase in resolution requires a 4-time increase of resources [14]. To further advance the system
performance, a design solution that is robust to mismatch error must be realized.
¹ τ_{2^N} = 2^N is a dummy reference to define the conversion full-range.
Figure 3: Simulated distribution of mismatch error in terms of (a) the expected absolute error |P_E(i)|
and (b) the expected differential error P_D(i) in a 16-bit quantizer with 10% mismatch ratio. (c, d)
Optimal mismatch error distribution in the proposed strategy. At the maximum redundancy, 16 −
(15, 1), mismatch error becomes negligible.
2.2 Mismatch Error Model
For artificial systems, binary coding is popularly used to encode the reference set. It involves partitioning the array of unit cells into a set of binary-weighted components S_C, and assembling different
components in S_C to form the needed references. The precision of the data conversion is tied
to the precise matching of these unit cells, which can take the form of comparators, capacitors, resistors, or transistors, etc. Due to fabrication variations, undesirable parasitics, and environmental
interference, each unit cell follows a probabilistic distribution, which is the basis of mismatch error.
We consider the situation where the distribution of mismatch error is known a priori. Each unit
cell, c_u, is assumed to be normally distributed with mismatch ratio σ_0: c_u ∼ N(1, σ_0²). S_C is then a
collection of the binary-weighted components c_i, each of which has 2^i independent and identically
distributed unit cells:

$$S_C = \{c_i \mid c_i \sim \mathcal{N}(2^i,\ 2^i \sigma_0^2)\}, \quad \forall i \in [0,\, N-1]. \qquad (5)$$
Each reference τ_i is associated with a unique assembly X_i of the components²:

$$S_R \setminus \{2^N\} = \left\{ \tau_i = (2^N - 1)\,\frac{\sum_{c_k \in X_i} c_k}{\sum_{j=0}^{N-1} c_j} \ \middle|\ X_i \in \mathcal{P}(S_C) \right\}, \quad \forall i \in [0,\, 2^N - 1], \qquad (6)$$
where P(S_C) is the power set of S_C. Binary coding allows the shortest data length to encode the
references: N control signals are required to generate the 2^N elements of S_R. However, because each
reference is bijectively associated with an assembly of components, it is not possible to rectify the
mismatch error, due to the random distribution of the components' weights, without physically altering
the components themselves.
² The dummy reference τ_{2^N} = 2^N is exempted. Other references are normalized over the total weight to
define the conversion full-range of FR = [0, 2^N].

Figure 4: Associating and exchanging the information between individual pixels in the same field of
vision generates an exponential number of combinations and allows efficient spatial data acquisition
beyond physical constraints. Inspired by this process, we propose a redundant sensing strategy that
involves blending components between two imperfect sets to gain extra precision.

The error density function, defined as P_E(i) = τ_i − i, quantifies the mismatch error at each digital
code. Figure 3(a) shows the distribution of |P_E(i)| at 10% mismatch ratio through Monte Carlo
simulations, where there is noticeably larger error associated with the middle-range codes. In fact, it
can be shown that if the unit cells are independent and identically distributed, P_E(i) approximates a
normal distribution as follows:

$$P_E(i) = \tau_i - i \approx \mathcal{N}\Bigg(0,\ \bigg(\sum_{j=0}^{N-1} 2^{j-1} D_j - \frac{i^2}{2^{N+1} - 2}\bigg)\,\sigma_0^2\Bigg), \quad i \in [0,\, 2^N - 1], \qquad (7)$$

where i = D_{N−1} ... D_1 D_0 (D_j ∈ {0, 1}, ∀j) is the binary representation of i.
Another drawback of binary coding is that it can create differential "gaps" between the references.
Figure 3(b) presents the estimated distribution of the differential gap P_D(i) = τ_{i+1} − τ_i at 10% mismatch ratio. When a gap exceeds two unit-lengths, signals that should be mapped to two or more
codes collapse into a single code, resulting in a loss of information. This phenomenon is commonly
known as a wide code, a situation unrecoverable by any post-conversion calibration method. Also,
wide gaps tend to appear at two adjacent codes that have a large Hamming distance, e.g., 01111 and
10000. Subsequently, the amount of information loss can be signal dependent and amplified at
certain parts of the data conversion range.
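A Monte Carlo sketch of this model, combining Eqs. (5) and (6) as reconstructed above (the normalization constant follows that reconstruction), to draw one mismatched reference set and inspect P_E(i) and the gaps P_D(i):

```python
import numpy as np

def binary_references(N, sigma0, rng):
    # Component c_i is a sum of 2^i unit cells, each ~ N(1, sigma0^2), Eq. (5).
    comps = np.array([rng.normal(1.0, sigma0, 2**i).sum() for i in range(N)])
    codes = np.arange(2**N)
    bits = (codes[:, None] >> np.arange(N)) & 1          # binary digits D_j of i
    tau = (2**N - 1) * (bits @ comps) / comps.sum()      # Eq. (6)
    return np.concatenate([tau, [2.0**N]])               # dummy reference tau_{2^N}

rng = np.random.default_rng(0)
N, sigma0 = 16, 0.10
tau = binary_references(N, sigma0, rng)
P_E = tau[:-1] - np.arange(2**N)                         # error density function
P_D = np.diff(tau[:-1])                                  # differential gaps
print("max |P_E|:", np.abs(P_E).max(), "| wide codes (gap > 2):", np.sum(P_D > 2.0))
```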
3 Proposed Strategy
The proposed general strategy is to incorporate redundancy into the quantization process such that
one reference τ_i can be generated by a large number of distinct component assemblies X_i, each
yielding a different amount of mismatch. Among the numerous options that lead to the same goal, the
optimal reference set is the collection of assemblies with the least mismatch error over every digital
code.
Furthermore, we propose that such a redundant characteristic can be achieved by resembling the visual
systems' binocular structure. It involves a secondary component set that has overlapping weights
with the primary component set. By exchanging the components with similar weights between the
two sets, an excessive number of redundant component assemblies can be realized. We hypothesize that a similar mechanism may be employed in the brain that allows associating information between
individual pixels on the same field of vision in each eye, as illustrated in Figure 4. Because such
association creates an exponential number of combinations, even a small percentage of the 100 million
photoreceptors and 1.5 million retinal ganglion cells being "interchangeable" could result in a
significant degree of redundancy.
Figure 5: The distribution of the number of assemblies N_A(i) with different geometrical identities
in (a) the 2-component-set design and (b) the 3-component-set design. A higher assembly count, i.e., a larger
level of redundancy, is allocated to digital codes with larger mismatch error.

The design of the primary and secondary component sets, S_{C,0} and S_{C,1}, specifies the level and
distribution of redundancy. Specifically, S_{C,1} is derived by subtracting from the conventional
binary-weighted set S_C, while the remainders form the primary component set S_{C,0}. The total
nominal weight remains unchanged, as Σ_{c_{i,j} ∈ (S_{C,0} ∪ S_{C,1})} c_{i,j} = 2^{N_0} − 1, where N_0 is the resolution of the
quantizer as well as of the primary component set. It is worth mentioning that mismatch error is mostly
contributed by the most-significant-bit (MSB) rather than the least-significant-bit (LSB) as implied
by Equation (5). Subsequently, to optimize the level and distribution of redundancy, the secondary
set should advantageously consist of binary-weighted components that are derived from the MSB.
S_{C,0} and S_{C,1} can be described as follows:

$$\text{Primary: } S_{C,0} = \left\{c_{0,i} \ \middle|\ c_{0,i} = \begin{cases} 2^i, & \text{if } i < N_0 - N_1 \\ 2^i - c_{1,i-N_0+N_1}, & \text{otherwise} \end{cases}\right\}, \quad \forall i \in [0,\, N_0 - 1],$$

$$\text{Secondary: } S_{C,1} = \left\{c_{1,i} \ \middle|\ c_{1,i} = 2^{N_0 - N_1 + i - s_1}\right\}, \quad \forall i \in [0,\, N_1 - 1], \qquad (8)$$
where N_1 is the resolution of S_{C,1} and s_1 is a scaling factor satisfying 1 ≤ N_1 ≤ N_0 − 1 and
1 ≤ s_1 ≤ N_0 − N_1. Different values of N_1 and s_1 result in different degrees and distributions
of redundancy. Any design within this framework can be represented by its unique geometrical
identity: N_0 − (N_1, s_1). The total number of component assemblies is |P(S_{C,0} ∪ S_{C,1})| = 2^{N_0+N_1},
which is much greater than the cardinality of the reference set, |S_R| = 2^{N_0}, thus implying a high
level of intrinsic redundancy.
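A short sketch of the construction in Eq. (8); with nominal (mismatch-free) weights, the two sets indeed sum to 2^{N_0} − 1, as stated above:

```python
def component_sets(N0, N1, s1):
    """Nominal weights of the primary (S_C0) and secondary (S_C1) sets, Eq. (8)."""
    assert 1 <= N1 <= N0 - 1 and 1 <= s1 <= N0 - N1
    sc1 = [2**(N0 - N1 + i - s1) for i in range(N1)]            # secondary
    sc0 = [2**i if i < N0 - N1 else 2**i - sc1[i - (N0 - N1)]   # primary
           for i in range(N0)]
    return sc0, sc1

sc0, sc1 = component_sets(8, 7, 1)        # geometrical identity 8 - (7, 1)
assert sum(sc0) + sum(sc1) == 2**8 - 1    # total nominal weight is preserved
```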
N_A(i) is defined as the number of assemblies that represent the same reference τ_i and is an essential
indicator that specifies the redundancy distribution:

$$N_A(i) = \left|\left\{X \ \middle|\ X \in \mathcal{P}(S_{C,0} \cup S_{C,1}) \wedge \sum_{c_{j,k} \in X} c_{j,k} = i\right\}\right|, \quad i \in [0,\, 2^{N_0} - 1]. \qquad (9)$$
Figure 5(a) shows N_A(i) versus the digital codes with N_0 = 8 and multiple combinations of
(N_1, s_1). The design of S_{C,1} should generate more options for the middle-range codes, which suffer from larger mismatch error. Simulations suggest that N_1 decides the total number of assemblies,
Σ_{i=0}^{2^{N_0}−1} N_A(i) = |P(S_{C,0} ∪ S_{C,1})| = 2^{N_0+N_1}, while s_1 defines the morphology of the redundancy
distribution. A larger value of s_1 gives a more spread-out distribution.
Removing mismatch error is equivalent to searching for the optimal component assembly X_{op,i} that
generates the reference τ_i with the least amount of mismatch:

$$X_{op,i} = \operatorname*{argmin}_{X \in \mathcal{P}(S_{C,0} \cup S_{C,1})} \left| i - \sum_{c_{j,k} \in X} c_{j,k} \right|, \quad i \in [0,\, 2^{N_0} - 1]. \qquad (10)$$
The optimal reference set S_{R,op} is then the collection of all references generated by X_{op,i}. In this
work, we do not attempt to find X_{op,i}, as it is an NP optimization problem with a complexity of
O(2^{N_0+N_1}) that may not have a solution in polynomial space. Instead, this section focuses
on showing the achievable precision with the proposed architecture, while Section 4 describes a
heuristic approach. The simulation results in Figure 2(b) demonstrate that our technique can suppress
mismatch error below quantization noise, thus approaching the Shannon limit even at high resolution
and large mismatch ratio. In this simulation, the secondary set is chosen as N_1 = N_0 − 1 for
maximum redundancy. Figure 3(c, d) shows the distribution of mismatch error after correction.
Even at the minimum redundancy (N_1 = 1), a significant degree of mismatch is rectified. At the
maximum redundancy (N_1 = N_0 − 1), the mismatch error becomes negligible compared with the
quantization noise.
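For toy sizes, Eq. (10) can be solved by exhaustive enumeration, which makes this suppression easy to check numerically. The sketch below reuses component_sets from the previous snippet and uses a simplified mismatch model (per-component Gaussian error, no full-range normalization):

```python
import itertools
import numpy as np

def optimal_references(weights, N0):
    """Eq. (10) by brute force: for every code i, the assembly sum closest to i."""
    sums = np.array([np.dot(mask, weights)
                     for mask in itertools.product([0, 1], repeat=len(weights))])
    codes = np.arange(2**N0)
    return sums[np.argmin(np.abs(codes[:, None] - sums[None, :]), axis=1)]

rng = np.random.default_rng(1)
sc0, sc1 = component_sets(6, 5, 1)                      # toy identity 6 - (5, 1)
nominal = np.array(sc0 + sc1, dtype=float)
actual = rng.normal(nominal, 0.05 * np.sqrt(nominal))   # sigma0 = 5% unit cells
tau_op = optimal_references(actual, 6)
print("max |tau_op(i) - i|:", np.abs(tau_op - np.arange(2**6)).max())
```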
Based on the same principles, an n-component-set design (n = 3, 4, ...) can be realized, which gives
an increased level of redundancy and a more complex distribution, as shown in Figure 5(b), where n = 3
and the geometrical identity is N_0 − (N_1, s_1) − (N_2, s_2). With different combinations of N_k and
s_k (k = 1, 2, ...), N_A(i) can be catered to a known mismatch error distribution and yield better
performance. However, adding more component set(s) increases the computational burden, as
the complexity grows rapidly with every additional set: O(2^{N_0+N_1+N_2+...}). Given that mismatch
error can be well rectified with a two-set implementation over a wide range of resolutions, n > 2
might be unnecessary.
Similarly, three or more eyes may give better vision. However, the brain circuits and control network
would become much more complicated to integrate signals and information. In fact, stereopsis is an
advanced feature to human and animals with well-developed neural capacity [7]. Despite possessing
two eyes, many reptiles, fishes and other mammals, have their eyes located on the opposite sides of
the head, which limits the overlapping region thus stereopsis, in exchange for a wider field of vision.
Certain species, such as arachnids, can possess from six to eight eyes. However, studies have
pointed out that their eyes do not function in synchrony to resolve fine details [21].
It is not a coincidence that at least 30% of the human brain cortex is directly or indirectly involved
in processing visual data [7]. We conjecture that this computational limitation is a major reason that
many higher-order animals evolved to have two eyes, thus keeping the cyclops and triclops
in the realm of mythology: no fewer, as it would sacrifice visual processing precision, yet no more, as
it would overload the brain's circuit complexity.
4 Practical Implementation & Results
A mixed-signal ADC integrated circuit has been designed and fabricated to demonstrate the feasibility of the proposed architecture. The nature of hardware implementation limits the deployment
of sophisticated learning algorithms. Instead, the circuit relies on a heuristic approach to efficiently
estimate the mismatch error and adaptively reconfigure its components in an unsupervised manner.
The detailed hardware algorithm and circuit implementation are presented separately. In this paper,
we only briefly summarize the techniques and results.
The ADC design is based on a successive-approximation register (SAR) architecture and features
redundant sensing with the geometrical identity 14 − (13, 1). The component set S_C is a binary-weighted capacitor array. We have chosen the smallest capacitance available in the CMOS process to
implement the unit cell in order to reduce circuit power and area. However, this introduces large capacitor
mismatch ratios of up to 5%, which limits the effective resolution to 10 bits or below for previous works
reported in the literature [14, 19, 20].
The resolution of the secondary array is chosen as N_1 = N_0 − 1 to maximize the exchange capacity
between the two component sets:

$$c_{0,i} = c_{1,i-1} = \tfrac{1}{2}\, c_{0,i+1}, \quad i \in [1,\, N - 2]. \qquad (11)$$
In the auto-calibration mode, the mismatch error of each component is estimated by comparing the
capacitors with similar nominal values implied by Equation (11). The procedure is unsupervised
and fully automatic. The result is a reduced dimensional set of parameters that characterize the
distribution of mismatch error. In the data conversion mode, a heuristic algorithm is employed that
utilizes the estimated parameters to generate the component assembly with near-minimal mismatch
error for each reference. A key technique is to shift the capacitor utilization towards the MSB by
exchanging components with similar weights and then to compensate for the left-over error using the
LSBs. Although the algorithm has a complexity of O(N_0 + N_1), parallel implementation allows
the computation to finish within a single clock cycle.
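The exact on-chip algorithm is presented separately; what follows is only an illustrative greedy stand-in under stated assumptions (the equal-nominal-weight pairs of Eq. (11), and measured component weights available from calibration):

```python
def heuristic_reference(target, pairs, lsb):
    """pairs: (primary, secondary, nominal) measured weights for the duplicated
    MSB weights of Eq. (11), largest nominal first; lsb: (measured, nominal)
    for the remaining low-order components. Runs in O(N0 + N1)."""
    residue, total, err = int(target), 0.0, 0.0
    for a, b, nominal in pairs:
        if residue >= nominal:
            # pick the copy that best cancels the mismatch accumulated so far
            pick = a if abs(err + a - nominal) <= abs(err + b - nominal) else b
            total += pick
            err += pick - nominal
            residue -= nominal
    for w, nominal in lsb:                 # binary fill with the LSB components
        if residue >= nominal:
            total += w
            err += w - nominal
            residue -= nominal
    return total                           # approximates the reference tau_target
```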
Figure 6: High-resolution ADC implementation. (a) Monte Carlo simulations of the unsupervised
error estimation and calibration technique. (b) The chip micrograph. (c) Differential nonlinearity
(DNL) and (d) integral nonlinearity (INL) measurement results.

By assuming the LSB components contribute an insignificant level of mismatch error, as implied by
Equation (5), this heuristic approach trades accuracy for speed. However, the excessive amount of
redundancy guarantees the convergence to an adequate near-optimal solution. Figure 6(a) shows
simulated plots of effective-number-of-bits (ENOB) versus the unit-capacitor mismatch ratio, σ_0(C_u).
With the proposed method, the effective resolution is shown to approach the Shannon limit even with
large mismatch ratios. It is worth mentioning that we also take the mismatch error associated with
the bridge capacitor, σ_0(C_b), into consideration. Figure 6(b) shows the chip micrograph. Figure
6(c, d) gives the measurement results for the standard ADC performance merits, differential
nonlinearity (DNL) and integral nonlinearity (INL). The results demonstrate that a 4-6 fold increase
in linearity is feasible.
5 Conclusion
This work presents a redundant sensing architecture inspired by the binocular structure of the human visual system. We show architectural advantages of using redundant sensing in removing mismatch error and enhancing sensing precision. A high resolution, zero-dimensional data quantizer
is presented as a proof-of-concept demonstration. Through Monte Carlo simulation with the error
probabilistic distribution as a priori, we find the precision can approach the Shannon limit. In actual
measurements without knowing the error probabilistic distribution, a gain of 2 extra bits of precision
and a 4-6 times improvement in linearity is observed. We envision that the framework can be generalized to handle
higher dimensional data and apply to a variety of applications such as digital imaging, functional
magnetic resonance imaging (fMRI), 3D data acquisition, etc. Moreover, engineering such bio-inspired artificial systems may help better understand biological processes such as stereopsis,
microsaccade, and hyperacuity.
Acknowledgment
The authors would like to thank Phan Minh Nguyen for his valuable comments.
References
[1] Shannon, C.E. (1948) A Mathematical Theory of Communication. Bell System Technical Journal, vol.
27(3), pp. 379-423.
[2] Curcio, C.A., Sloan, K.R., Kalina, R.E., Hendrickson, A.E. (1990) Human photoreceptor topography.
Journal of Comparative Neurology, vol. 292(4), pp. 497-523.
[3] Curcio, C. A., Allen, K. A. (1990) Topography of ganglion cells in human retina. Journal of Comparative
Neurology, vol. 300(1), pp. 5-25.
[4] Read, J.C. (2015) What is stereoscopic vision good for? Proc. SPIE 9391, Stereoscopic Displays and
Applications XXVI, pp. 93910N.
[5] Westheimer, G. (1977) Spatial frequency and light-spread descriptions of visual acuity and hyperacuity.
Journal of the Optical Society of America, vol. 67(2), pp. 207-212.
[6] Beck, J., Schwartz, T. (1979) Vernier acuity with dot test objects. Vision Research, vol. 19(3), pp. 313-319.
[7] Reece, J.B., Urry, L.A, Cain, M.L., Wasserman, S.A, Minorsky, P.V., Jackson R.B., Campbell, N.A.
(2010) Campbell biology, 9th Ed. Boston: Benjamin Cummings/Pearson.
[8] Westheimer, G., McKee, S.P. (1978) Stereoscopic acuity for moving retinal images. Journal of the Optical
Society of America, vol. 68(4), pp. 450-455.
[9] Crick, F.H., Marr, D.C., Poggio, T. (1980) An information processing approach to understanding the
visual cortex. The Organization of the Cerebral Cortex, MIT Press, pp. 505-533.
[10] Cagenello, R., Arditi, A., Halpern, D. L. (1993) Binocular enhancement of visual acuity. Journal of the
Optical Society of America A, vol. 10(8), pp. 1841-1848.
[11] Martinez-Conde, S., Otero-Millan, J., Macknik, S.L. (2013) The impact of microsaccades on vision:
towards a unified theory of saccadic function. Nature Reviews Neuroscience, vol. 14(2), pp. 83-96.
[12] Hicheur, H., Zozor, S., Campagne, A., Chauvin, A. (2013) Microsaccades are modulated by both attentional demands of a visual discrimination task and background noise. Journal of vision, vol. 13(13), pp.
18-18.
[13] Hennig, M.H., Wörgötter, F. (2004) Eye micro-movements improve stimulus detection beyond the
Nyquist limit in the peripheral retina. Advances in Neural Information Processing Systems.
[14] Murmann, B. (2008) A/D converter trends: Power dissipation, scaling and digitally assisted architectures.
Custom Integrated Circuits Conference, 2008. CICC 2008. IEEE, pp. 105-112.
[15] Nguyen, A.T., Xu, J., Yang, Z. (2015) A 14-bit 0.17 mm2 SAR ADC in 0.13?m CMOS for high precision
nerve recording. Custom Integrated Circuits Conference (CICC), 2015 IEEE, pp. 1-4.
[16] Analog Devices (2016) 24-Bit Delta-Sigma ADC with Low Noise PGA. AD1555/1556 datasheet.
[17] Frey, M., Loeliger., H.A. (2007) On the static resolution of digitally corrected analog-to-digital and
digital-to-analog converters with low-precision components. Circuits and Systems I: Regular Papers,
IEEE Transactions on, vol. 54(1), pp. 229-237.
[18] Biveroni, J., Loeliger, H.A. (2008) On sequential analog-to-digital conversion with low-precision components. Information Theory and Applications Workshop, 2008. IEEE, pp. 185-187.
[19] Um, J.Y., Kim, Y.J., Song, E.W., Sim, J.Y., Park, H.J. (2013) A digital-domain calibration of splitcapacitor DAC for a differential SAR ADC without additional analog circuits. Circuits and Systems I:
Regular Papers, IEEE Transactions on, vol. 60(11), pp. 2845-2856.
[20] Xu, R., Liu, B., Yuan, J. (2012) Digitally calibrated 768-kS/s 10-b minimum-size SAR ADC array with
dithering. Solid-State Circuits, IEEE Journal of, vol. 47(9), pp. 2129-2140.
[21] Land, M.F. (1985) The morphology and optics of spider eyes. Neurobiology of arachnids, pp. 53-78,
Springer Berlin Heidelberg.
6,152 | 6,565 | Learning Supervised PageRank with Gradient-Based
and Gradient-Free Optimization Methods
Lev Bogolubsky1,2 , Gleb Gusev1,5 , Andrei Raigorodskii5,2,1,8 , Aleksey Tikhonov1 , Maksim Zhukovskii1,5
Yandex1 , Moscow State University2 , Buryat State University8
{bogolubsky, gleb57, raigorodsky, altsoph, zhukmax}@yandex-team.ru
Pavel Dvurechensky3,4 , Alexander Gasnikov4,5
Weierstrass Institute3 , Institute for Information Transmission Problems RAS4 ,
Moscow Institute of Physics and Technology5
[email protected], [email protected]
Yurii Nesterov6,7
Center for Operations Research and Econometrics6 ,
Higher School of Economics7
[email protected]
Abstract
In this paper, we consider a non-convex loss-minimization problem of learning
Supervised PageRank models, which can account for features of nodes and edges.
We propose gradient-based and random gradient-free methods to solve this problem.
Our algorithms are based on the concept of an inexact oracle, and unlike the state-of-the-art gradient-based method we manage to provide theoretical convergence
rate guarantees for both of them. Finally, we compare the performance of the
proposed optimization methods with the state of the art applied to a ranking task.
1 INTRODUCTION
The most widely acknowledged methods for measuring the importance of nodes in graphs are based on random walk models. In particular, PageRank [18], HITS [11], and their variants [8, 9, 19] are originally based on a discrete-time Markov random walk on a link graph. Despite the undeniable advantages of PageRank and its mentioned modifications, these algorithms miss important aspects of the graph that are not described by its structure. In contrast, a number of approaches allow one to account for different properties of nodes and of the edges between them by encoding these properties in restart and transition probabilities (see [3, 4, 6, 10, 12, 20, 21]). These properties may include, e.g., statistics about users' interactions with the nodes (in web graphs [12] or graphs of social networks [2]), types of edges (such as URL redirecting in web graphs [20]), or histories of changes of nodes and edges [22].
In the general ranking framework called Supervised PageRank [21], the weights of nodes and edges in a graph are linear combinations of their features, with the coefficients as model parameters. The existing method [21] for learning these parameters and the optimization methods proposed in the present paper have two levels. On the lower level, the following problem is solved: estimate the value of the loss function (in the case of a zero-order oracle) and its derivatives (in the case of a first-order oracle) for a given parameter vector. On the upper level, the estimates obtained on the lower level (which we also call inexact oracle information) are used to tune the parameters by an iterative algorithm. Following [6], the authors of Supervised PageRank consider a non-convex loss-minimization problem for learning the parameters and solve
it by a two-level gradient-based method. On the lower level of this algorithm, an estimate of the stationary distribution of the considered Markov random walk is obtained by the classical power method, and estimates of its derivatives w.r.t. the parameters of the random walk are obtained by the power method introduced in [23, 24]. On the upper level, the obtained gradient of the stationary distribution is exploited by a gradient descent algorithm. As both power methods give imprecise values of the stationary distribution and its derivatives, there was no proof of convergence of the state-of-the-art gradient-based method to a stationary point.
The considered non-convex loss-minimization problem [21] cannot be solved by existing optimization methods such as [16] and [7], due to the presence of constraints on the parameter vector and the impossibility of calculating the exact value of the loss function. Moreover, standard global optimization methods cannot be applied, because they require unbiased estimates of the loss function.
In our paper, we propose two two-level methods to solve the problem [21]. On the lower level of these methods, we use the linearly convergent method [17] to calculate an approximation to the stationary distribution of the Markov random walk. We show that this method allows us to approximate the value of the loss function to any given accuracy, and that it has the lowest proved complexity bound among the methods surveyed in [5]. We develop a gradient method for general constrained non-convex optimization problems with an inexact oracle and estimate its convergence rate to a stationary point of the problem. We exploit this gradient method on the upper level of the two-level algorithm for learning Supervised PageRank. Our contribution to the gradient-free methods framework consists in adapting the approach of [16] to constrained optimization problems in which the value of the function is calculated with some known accuracy. We prove a convergence theorem for this method and exploit it on the upper level of the second two-level algorithm.
Another contribution consists in investigating, for both the gradient and the gradient-free methods, the trade-off between the accuracy of the lower-level algorithm, which is controlled by the number of iterations of the method in [17] and of its generalization (for estimating derivatives), and the computational complexity of the two-level algorithm as a whole. Finally, we estimate the complexity of the whole two-level algorithms for solving the loss-minimization problem with a given accuracy.

In the experiments, we apply our algorithms to learning Supervised PageRank on a real ranking task. Summing up, both two-level methods, unlike the state-of-the-art method [21], have theoretical guarantees on the convergence rate, and they outperform it in ranking quality in our experiments. The main advantages of the first, gradient-based algorithm are that its convergence guarantees do not require convexity and that it has fewer input parameters than the gradient-free one. The main advantage of the second, gradient-free algorithm is that it avoids calculating the derivative for each element of a large matrix.
2 MODEL DESCRIPTION
We consider the following random walk on a directed graph Γ = (V, E), introduced in [21]. Assume that each node i ∈ V and each edge i → j ∈ E is represented by a vector of features V_i ∈ R^{m₁}₊ and a vector of features E_{ij} ∈ R^{m₂}₊ respectively. A surfer starts from a random page v₀ of a seed set U ⊆ V. The restart probability that v₀ = i equals

[π⁰]_i = ⟨φ₁, V_i⟩ / Σ_{l∈U} ⟨φ₁, V_l⟩,  i ∈ U,  (2.1)

and [π⁰]_i = 0 for i ∈ V \ U, where φ₁ ∈ R^{m₁} is a parameter which conducts the random walk. We assume that Σ_{l∈U} ⟨φ₁, V_l⟩ is non-zero.

At each step, the surfer makes a restart with probability α ∈ (0, 1) (originally [18], α = 0.15) or traverses an outgoing edge (makes a transition) with probability 1 − α. In the former case, the surfer chooses a vertex according to the distribution π⁰. In the latter case, the transition probability of traversing an edge i → j ∈ E is

[P]_{i,j} = ⟨φ₂, E_{ij}⟩ / Σ_{l: i→l} ⟨φ₂, E_{il}⟩,  (2.2)

where φ₂ ∈ R^{m₂} is a parameter and the current position i has non-zero outdegree, and [P(φ)]_{i,j} = [π⁰(φ)]_j for all j ∈ V if the outdegree of i is zero (thus the surfer always makes a restart in this case). We assume that Σ_{l: i→l} ⟨φ₂, E_{il}⟩ is non-zero for all i with non-zero outdegree.
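To make the construction concrete, the following sketch builds π⁰(φ) and a row-stochastic P(φ) from node and edge features according to Equations 2.1 and 2.2. It is a minimal illustration, not the authors' implementation; the array layout and all names (node_feats, edges, etc.) are our own assumptions.

```python
import numpy as np

def restart_distribution(node_feats, seed_nodes, phi1):
    """Restart probabilities [pi0]_i of Eq. 2.1: proportional to
    <phi1, V_i> on the seed set U, zero elsewhere."""
    pi0 = np.zeros(node_feats.shape[0])
    scores = node_feats[seed_nodes] @ phi1         # <phi1, V_i>, i in U
    pi0[seed_nodes] = scores / scores.sum()        # normalize over U
    return pi0

def transition_matrix(edges, edge_feats, n_nodes, phi2, pi0):
    """Row-stochastic P of Eq. 2.2; rows of zero-outdegree nodes
    are replaced by pi0 (the surfer is forced to restart)."""
    P = np.zeros((n_nodes, n_nodes))
    for (i, j), E_ij in zip(edges, edge_feats):
        P[i, j] = phi2 @ E_ij                      # <phi2, E_ij>
    row_sums = P.sum(axis=1)
    dangling = row_sums == 0
    P[~dangling] /= row_sums[~dangling, None]
    P[dangling] = pi0                              # restart rows
    return P
```

Since the features and φ are positive by construction, both normalizations are well defined.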
By Equations 2.1 and 2.2, the total probability of choosing vertex j ∈ V, conditioned on the surfer being at vertex i, equals α[π⁰(φ)]_j + (1 − α)[P(φ)]_{i,j}, where φ = (φ₁, φ₂)^T and we use π⁰(φ) and P(φ) to express the dependence of π⁰, P on the parameters.

The stationary distribution π(φ) ∈ R^p of the described Markov process is a solution of the system

π = απ⁰(φ) + (1 − α)P^T(φ)π.  (2.3)

In this paper, we learn an algorithm which ranks nodes i according to the scores [π(φ)]_i.

Let Q be a set of queries, and let a set of nodes V_q ⊆ V be associated to each query q. For example, vertices in V_q may represent web pages visited by users after submitting query q. For each q ∈ Q, some nodes of V_q are manually judged by relevance labels 1, ..., ℓ. Our goal is to learn the parameter vector φ of a ranking algorithm π_q = π_q(φ) which minimizes the discrepancy of its ranking scores [π_q]_i, i ∈ V_q, from the assigned labels. We consider the square loss function [12, 21, 22]

f(φ) = (1/|Q|) Σ_{q=1}^{|Q|} ‖(A_q π_q(φ))₊‖₂².  (2.4)

Each row of the matrix A_q ∈ R^{r_q×p_q} corresponds to some pair of pages i₁, i₂ ∈ V_q such that the label of i₁ is strictly greater than the label of i₂ (we denote by r_q the number of all such pairs from V_q, and p_q := |V_q|). The i₁-th element of this row is equal to −1, the i₂-th element is equal to 1, and all other elements are equal to 0. The vector x₊ has components [x₊]_i = max{x_i, 0}.
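As an illustration, the loss (2.4) can be evaluated without forming A_q explicitly: each judged pair contributes the squared positive part of the score difference. A minimal sketch, assuming judged pairs are given as (i₁, i₂) index tuples with label(i₁) > label(i₂):

```python
import numpy as np

def pairwise_loss(pi_q, judged_pairs):
    """||(A_q pi_q)_+||_2^2 of Eq. 2.4 for one query: a pair (i1, i2)
    is penalized by max(0, [pi_q]_{i2} - [pi_q]_{i1})^2, i.e. whenever
    the less relevant page i2 is scored above the more relevant i1."""
    return sum(max(0.0, pi_q[i2] - pi_q[i1]) ** 2 for i1, i2 in judged_pairs)

def loss(pi_by_query, pairs_by_query):
    """f(phi) of Eq. 2.4, averaged over queries."""
    losses = [pairwise_loss(pi, prs)
              for pi, prs in zip(pi_by_query, pairs_by_query)]
    return float(np.mean(losses))
```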
To make the ranking scores (2.3) query-dependent, we assume that π is defined on a query-dependent graph Γ_q = (V_q, E_q) with query-dependent feature vectors V_i^q, i ∈ V_q, and E_{ij}^q, i → j ∈ E_q. For example, these features may reflect different aspects of query-page relevance. For a given q ∈ Q, we consider all the objects related to the graph Γ_q introduced above: U_q := U, π_q⁰ := π⁰, P_q := P, π_q := π. In this way, the ranking scores π_q depend on the query via the query-dependent features, but the parameters of the model φ and α are not query-dependent. In what follows, we use the following notations throughout the paper: n_q := |U_q|, m = m₁ + m₂, r = max_{q∈Q} r_q, p = max_{q∈Q} p_q, n = max_{q∈Q} n_q, s = max_{q∈Q} s_q, where s_q = max_{i∈V_q} |{j : i → j ∈ E_q}|. In order to guarantee that the probabilities in (2.1) and (2.2) are correctly defined, we need to appropriately choose a set Φ of possible values of the parameters φ. We choose some φ̂ and R > 0 such that Φ = {φ ∈ R^m : ‖φ − φ̂‖₂ ≤ R} lies in the set of vectors with positive components R^m₊₊.¹ In this paper, we solve the following loss-minimization problem:

min_{φ∈Φ} f(φ),  Φ = {φ ∈ R^m : ‖φ − φ̂‖₂ ≤ R}.  (2.5)
3 NUMERICAL CALCULATION OF f(φ) AND ∇f(φ)
Our goal is to provide methods for solving Problem 2.5 with guarantees on the rate of convergence and with complexity bounds. The calculation of the values of f(φ) and its gradient ∇f(φ) is problematic, since it requires calculating those of the |Q| vectors π_q(φ) defined by Equation 2.3. While the exact values are impossible to derive in general, existing methods provide estimates of π_q(φ) and its derivatives dπ_q(φ)/dφ^T in an iterative way, with a trade-off between time and accuracy. To be able to guarantee convergence of our optimization algorithms in this inexact-oracle setting, we consider numerical methods that calculate approximations of π_q(φ) and its derivatives with any required accuracy. We have analysed the state-of-the-art methods summarized in the review [5], as well as the power method used in [18, 2, 21], and have found that the method of [17] is the most suitable.
It constructs a sequence π_k and outputs π̃_q(φ, N) by the following rule (the integer N > 0 is a parameter):

π₀ = π_q⁰(φ),  π_{k+1} = P_q^T(φ)π_k,  π̃_q(φ, N) = (α / (1 − (1 − α)^{N+1})) Σ_{k=0}^{N} (1 − α)^k π_k.  (3.1)
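A direct transcription of Method 3.1 is given below. It is a sketch under the assumption that P_q(φ) is available as a dense array; in practice one would use sparse matrix-vector products.

```python
import numpy as np

def nesterov_nemirovski_pi(pi0, P, alpha, N):
    """Method 3.1: geometric averaging of the power-method iterates
    pi_{k+1} = P^T pi_k, which approximates the stationary
    distribution of Eq. 2.3 with a linear rate driven by (1 - alpha)."""
    pi_k = pi0.copy()
    acc = pi_k.copy()                              # k = 0 term
    for k in range(1, N + 1):
        pi_k = P.T @ pi_k
        acc += (1.0 - alpha) ** k * pi_k
    return alpha / (1.0 - (1.0 - alpha) ** (N + 1)) * acc
```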
¹ As the probabilities [π_q⁰(φ)]_i, i ∈ V_q, and [P_q(φ)]_{ĩ,i}, ĩ → i ∈ E_q, are scale-invariant (π_q⁰(βφ) = π_q⁰(φ), P_q(βφ) = P_q(φ)), in our experiments we consider the set Φ = {φ ∈ R^m : ‖φ − e_m‖₂ ≤ 0.99}, where e_m ∈ R^m is the vector of all ones, which has a large intersection with the simplex {φ ∈ R^m₊₊ : ‖φ‖₁ = 1}.

Lemma 1. Assume that, for some Δ₁ > 0, Method 3.1 with N = ⌈α⁻¹ ln(8r/Δ₁)⌉ − 1 is used to calculate the vector π̃_q(φ, N) for every q ∈ Q. Then f̃(φ, Δ₁) = (1/|Q|) Σ_{q=1}^{|Q|} ‖(A_q π̃_q(φ, N))₊‖₂² satisfies |f̃(φ, Δ₁) − f(φ)| ≤ Δ₁. Moreover, the calculation of f̃(φ, Δ₁) requires not more than |Q|(3mps + 3psN + 6r) a.o.

The proof of Lemma 1 is in the Supplementary Materials.
Let p_i(φ) be the i-th column of the matrix P_q^T(φ). Our generalization of the method [17] for the calculation of dπ_q(φ)/dφ^T for any q ∈ Q is the following. Choose some non-negative integer N₁ and calculate π̃_q(φ, N₁) using (3.1). Choose some N₂ ≥ 0, calculate Π_k, k = 0, ..., N₂, and Π̃_q(φ, N₂):

Π₀ = α dπ_q⁰(φ)/dφ^T + (1 − α) Σ_{i=1}^{p_q} dp_i(φ)/dφ^T [π̃_q(φ, N₁)]_i,  (3.2)

Π_{k+1} = P_q^T(φ)Π_k,  Π̃_q(φ, N₂) = (1 / (1 − (1 − α)^{N₂+1})) Σ_{k=0}^{N₂} (1 − α)^k Π_k.  (3.3)
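The derivative estimate (3.2)-(3.3) reuses the same recursion and averaging, now applied to a p × m matrix. The sketch below assumes Π₀ has already been assembled from the feature derivatives as in (3.2); all names are our own.

```python
import numpy as np

def nesterov_nemirovski_dpi(Pi0, P, alpha, N2):
    """Methods 3.2-3.3: the derivative estimate follows the same
    recursion Pi_{k+1} = P^T Pi_k and geometric averaging as
    Method 3.1, but is seeded with the matrix Pi_0 of Eq. 3.2 and
    uses the prefactor 1 / (1 - (1 - alpha)^{N2+1})."""
    Pi_k = Pi0.copy()
    acc = Pi_k.copy()
    for k in range(1, N2 + 1):
        Pi_k = P.T @ Pi_k
        acc += (1.0 - alpha) ** k * Pi_k
    return acc / (1.0 - (1.0 - alpha) ** (N2 + 1))
```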
In what follows, we use the following norm on the space of matrices A ∈ R^{n₁×n₂}: ‖A‖₁ = max_{j=1,...,n₂} Σ_{i=1}^{n₁} |a_{ij}|.

Lemma 2. Let β₁ be some explicitly computable constant (see the Supplementary Materials). Assume that Method 3.1 with N₁ = ⌈α⁻¹ ln(24β₁r/(αΔ₂))⌉ − 1 is used for every q ∈ Q to calculate the vector π̃_q(φ, N₁), and that Method 3.2, 3.3 with N₂ = ⌈α⁻¹ ln(8β₁r/(αΔ₂))⌉ − 1 is used for every q ∈ Q to calculate the matrix Π̃_q(φ, N₂) (3.3). Then the vector g̃(φ, Δ₂) = (2/|Q|) Σ_{q=1}^{|Q|} Π̃_q(φ, N₂)^T A_q^T (A_q π̃_q(φ, N₁))₊ satisfies ‖g̃(φ, Δ₂) − ∇f(φ)‖_∞ ≤ Δ₂. Moreover, the calculation of g̃(φ, Δ₂) requires not more than |Q|(10mps + 3psN₁ + 3mpsN₂ + 7r) a.o.

The proof of Lemma 2 can be found in the Supplementary Materials.
4 RANDOM GRADIENT-FREE OPTIMIZATION METHODS
In this section, we first describe a general framework of random gradient-free methods with an inexact oracle, and then apply it to Problem 2.5. Lemma 1 allows us to control the accuracy of the inexact zero-order oracle and hence to apply random gradient-free methods with an inexact oracle.
4.1 GENERAL FRAMEWORK
Below we extend the framework of random gradient-free methods [1, 16, 7] to the situation where the value of the objective function of a general optimization problem is computed with a uniformly bounded error of unknown nature. Unlike [16], we consider a constrained optimization problem and a randomization on a Euclidean sphere, which seems to give better large-deviation bounds and does not require the assumption that the objective function can be calculated at any point of R^m.

Let E be an m-dimensional vector space and E* its dual. In this subsection, we consider a general function f(x) : E → R and denote its argument by x or y to avoid confusion with other sections. We denote the value of a linear function g ∈ E* at x ∈ E by ⟨g, x⟩. We choose some norm ‖·‖ in E and say that f ∈ C^{1,1}_L(‖·‖) iff |f(x) − f(y) − ⟨∇f(y), x − y⟩| ≤ (L/2)‖x − y‖², ∀x, y ∈ E. The problem of our interest is to find min_{x∈X} f(x), where f ∈ C^{1,1}_L(‖·‖), X is a closed convex set, and there exists a number D ∈ (0, +∞) such that diam X := max_{x,y∈X} ‖x − y‖ ≤ D. We also assume that the inexact zero-order oracle for f(x) returns a value f̃(x, δ) = f(x) + δ̃(x), where δ̃(x) is the error satisfying, for some known δ > 0, |δ̃(x)| ≤ δ for all x ∈ X. Let x* ∈ argmin_{x∈X} f(x) and denote f* = min_{x∈X} f(x).

Unlike [16], we define the biased gradient-free oracle g_τ(x, δ) = (m/τ)(f̃(x + τξ, δ) − f̃(x, δ))ξ, where ξ is a random vector uniformly distributed over the unit sphere S = {t ∈ R^m : ‖t‖₂ = 1} and τ is a smoothing parameter.
Algorithm 1 Gradient-type method
Input: point x₀ ∈ X, stepsize h > 0, number of steps M.
Set k = 0.
repeat
  Generate ξ_k and calculate the corresponding g_τ(x_k, δ).
  Calculate x_{k+1} = π_X(x_k − h g_τ(x_k, δ)) (π_X(·) is the Euclidean projection onto the set X).
  Set k = k + 1.
until k > M
Output: the point y_M = argmin_x {f(x) : x ∈ {x₀, ..., x_M}}.
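A compact sketch of Algorithm 1 with a Euclidean-ball feasible set (matching the set Φ used later) follows. Here f_noisy stands for the inexact zero-order oracle f̃(·, δ); using the noisy values to select the output point is our own simplification of the rule y_M = argmin f, and all names are assumptions.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto {x : ||x - center||_2 <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius / n * d

def gradient_free_method(f_noisy, x0, center, radius, tau, h, M, rng):
    """Algorithm 1: a biased finite-difference estimate
    g = (m / tau) * (f(x + tau * xi) - f(x)) * xi along a random
    sphere direction xi replaces the true gradient."""
    m = x0.size
    x, best, best_val = x0.copy(), x0.copy(), f_noisy(x0)
    for _ in range(M):
        xi = rng.standard_normal(m)
        xi /= np.linalg.norm(xi)                   # uniform on the sphere
        g = (m / tau) * (f_noisy(x + tau * xi) - f_noisy(x)) * xi
        x = project_ball(x - h * g, center, radius)
        val = f_noisy(x)
        if val < best_val:
            best, best_val = x.copy(), val
    return best
```

Theorem 1 below suggests the stepsize h = 1/(8mL) for convex f.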
Theorem 1. Let f ∈ C^{1,1}_L(‖·‖₂) be convex. Assume that x* ∈ int X, and that the sequence x_k is generated by Algorithm 1 with h = 1/(8mL). Then for any M ≥ 0, we have

E_{Ξ_{M−1}} f(y_M) − f* ≤ 8mLD²/(M + 1) + τ²L(m + 8)/8 + δmD/(4τ) + δ²m/(Lτ²).

Here Ξ_k = (ξ₀, ..., ξ_k) is the history of realizations of the vector ξ.
The full proof of the theorem is in Supplementary Materials.
4.2 SOLVING THE LEARNING PROBLEM
Now, we apply the results of Subsection 4.1 to solve Problem 2.5. Note that the presence of constraints and oracle inexactness do not allow us to directly apply the results of [16]. We assume that there is a local minimum φ*, and that Φ is a small vicinity of φ* in which f(φ) (2.4) is convex (in general, it is non-convex). We choose the desired accuracy ε for approximating the optimal value f*, in the sense that E_{Ξ_{M−1}} f(y_M) − f* ≤ ε. In accordance with Theorem 1, ε gives the number of steps M of Algorithm 1, the value of τ, and the value of the required accuracy δ of the inexact zero-order oracle. The value δ, by Lemma 1, gives the number of steps N of Method 3.1 required to calculate a δ-approximation f̃(φ, δ) of f(φ). Then the inexact zero-order oracle f̃(φ, δ) is used to make a step of Algorithm 1. Theorem 1 and the choice of the feasible set Φ as a Euclidean ball make it natural to choose the ‖·‖₂-norm in the space R^m of the parameter φ. It is easy to see that in this norm diam Φ ≤ 2R. Algorithm 2 in the Supplementary Materials is a formal record of these ideas.

The most computationally expensive operations on each iteration of the main cycle of this method are the calculations of f̃(φ_k + τξ_k, δ) and f̃(φ_k, δ). Using Lemma 1, we obtain the complexity of each iteration and the following result, which gives the complexity of Algorithm 2.
Theorem 2. Assume that the set Φ in (2.5) is chosen in such a way that f(φ) is convex on Φ and that some φ* ∈ argmin_{φ∈Φ} f(φ) also belongs to int Φ. Then the mean total number of arithmetic operations of Algorithm 2 for the accuracy ε (i.e. for the inequality E_{Ξ_{M−1}} f(φ̂_M) − f(φ*) ≤ ε to hold) is not more than

768 mps|Q| · (LR²/ε) · ( m + α⁻¹ ln( 128 mrR √(L(m + 8)) / (√2 ε^{3/2}) ) + 6r ).
5 GRADIENT-BASED OPTIMIZATION METHODS
In this section, we first develop a general framework of gradient methods with an inexact oracle for non-convex problems from a rather general class, and then apply it to the particular Problem 2.5. Lemma 1 and Lemma 2 allow us to control the accuracy of the inexact first-order oracle and hence to apply the proposed framework.
5.1 GENERAL FRAMEWORK
In this subsection, we generalize the approach of [7] to constrained non-convex optimization problems. Our main contribution consists in developing this framework for an inexact first-order oracle and an unknown "Lipschitz constant" of this oracle.

We consider a composite optimization problem of the form min_{x∈X} {ψ(x) := f(x) + h(x)}, where X ⊂ E is a closed convex set and h(x) is a simple convex function, e.g. ‖x‖₁. We assume that f(x) is a general function endowed with an inexact first-order oracle in the following sense. There exists a number L ∈ (0, +∞) such that for any δ ≥ 0 and any x ∈ X one can calculate f̃(x, δ) ∈ R and g̃(x, δ) ∈ E* satisfying

|f(y) − (f̃(x, δ) + ⟨g̃(x, δ), y − x⟩)| ≤ (L/2)‖x − y‖² + δ  (5.1)

for all y ∈ X. The constant L can be considered a "Lipschitz constant" because, for the exact first-order oracle of a function f ∈ C^{1,1}_L(‖·‖), Inequality 5.1 holds with δ = 0. This is a generalization of the concept of a (δ, L)-oracle considered in [25] for convex problems.

We choose a prox-function d(x) which is continuously differentiable and 1-strongly convex on X with respect to ‖·‖. This means that, for any x, y ∈ X, d(y) − d(x) − ⟨∇d(x), y − x⟩ ≥ ½‖y − x‖². We also define the corresponding Bregman distance V(x, z) = d(x) − d(z) − ⟨∇d(z), x − z⟩.
Algorithm 2 Adaptive projected gradient algorithm
Input: point x₀ ∈ X, number L₀ > 0.
Set k = 0, z = +∞.
repeat
  Set M_k = L_k, flag = 0.
  repeat
    Set δ = ε/(16M_k). Calculate f̃(x_k, δ) and g̃(x_k, δ).
    Find w_k = argmin_{x∈X} {⟨g̃(x_k, δ), x⟩ + M_k V(x, x_k) + h(x)} and calculate f̃(w_k, δ).
    If the inequality f̃(w_k, δ) ≤ f̃(x_k, δ) + ⟨g̃(x_k, δ), w_k − x_k⟩ + (M_k/2)‖w_k − x_k‖² + ε/(8M_k) holds, set flag = 1. Otherwise set M_k = 2M_k.
  until flag = 1
  Set x_{k+1} = w_k, L_{k+1} = M_k/2.
  If ‖M_k(x_k − x_{k+1})‖ < z, set z = ‖M_k(x_k − x_{k+1})‖, K = k.
  Set k = k + 1.
until z ≤ ε
Output: the point x_{K+1}.
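The inner loop of Algorithm 2 is a standard backtracking search on the "Lipschitz constant" M_k. For the Euclidean prox-function d(x) = ½‖x‖₂² used later and h ≡ 0, the auxiliary problem reduces to a projected gradient step, which gives the following sketch; the oracle and projection callables are our own assumed interface, and the stopping rule is simplified.

```python
import numpy as np

def adaptive_projected_gradient(oracle, project, x0, L0, eps, max_iter):
    """Sketch of Algorithm 2 for h = 0 and Euclidean prox: the
    auxiliary problem is a projected gradient step with stepsize
    1 / M_k.  `oracle(x, delta)` must return (f_tilde, g_tilde)
    satisfying inequality (5.1); `project` maps onto X."""
    x, L = np.asarray(x0, dtype=float).copy(), L0
    for _ in range(max_iter):
        M = L
        while True:                                 # backtracking on M_k
            delta = eps / (16.0 * M)
            f_x, g_x = oracle(x, delta)
            w = project(x - g_x / M)
            f_w, _ = oracle(w, delta)
            if f_w <= (f_x + g_x @ (w - x)
                       + 0.5 * M * np.dot(w - x, w - x) + eps / (8.0 * M)):
                break
            M *= 2.0
        # simplified stopping: the paper tracks the smallest value of
        # ||M_k (x_k - x_{k+1})|| over all iterations k
        if M * M * np.dot(x - w, x - w) <= eps:
            return w
        x, L = w, M / 2.0
    return x
```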
Theorem 3. Assume that f(x) is endowed with the inexact first-order oracle in the sense of (5.1), and that there exists a number ψ* > −∞ such that ψ(x) ≥ ψ* for all x ∈ X. Then after M iterations of Algorithm 2 it holds that

‖M_K(x_K − x_{K+1})‖² ≤ 4L(ψ(x₀) − ψ*)/(M + 1) + ε/2.

Moreover, the total number of inexact oracle calls is not more than 2M + 2 log₂(2L/L₀).
The full proof of the theorem is in Supplementary Materials.
5.2 SOLVING THE LEARNING PROBLEM
In this subsection, we return to Problem 2.5 and apply the results of the previous subsection. Note that we cannot directly apply the results of [7], due to the inexactness of the oracle. For this problem, h(φ) ≡ 0. It is easy to show that in the 1-norm diam Φ ≤ 2R√m. For any δ > 0, Lemma 1 with Δ₁ = δ/2 allows us to obtain f̃(φ, Δ₁) such that the inequality |f̃(φ, Δ₁) − f(φ)| ≤ Δ₁ holds, and Lemma 2 with Δ₂ = δ/(4R√m) allows us to obtain g̃(φ, Δ₂) such that the inequality ‖g̃(φ, Δ₂) − ∇f(φ)‖_∞ ≤ Δ₂ holds. Similar to [25], since f ∈ C^{1,1}_L(‖·‖₂), these two inequalities lead to Inequality 5.1, with f̃(φ, Δ₁) in the role of f̃(x, δ), g̃(φ, Δ₂) in the role of g̃(x, δ), and ‖·‖₂ in the role of ‖·‖.

We choose the desired accuracy ε for approximating a stationary point of Problem 2.5. This accuracy gives the required accuracy δ of the inexact first-order oracle for f(φ) on each step of the inner cycle of Algorithm 2. Knowing the value Δ₁ = δ/2 and using Lemma 1, we choose the number of steps N of Method 3.1 and thus approximate f(φ) with the required accuracy Δ₁ by f̃(φ, Δ₁). Knowing the value Δ₂ = δ/(4R√m) and using Lemma 2, we choose the number of steps N₁ of Method 3.1 and the number of steps N₂ of Method 3.2, 3.3, and obtain the approximation g̃(φ, Δ₂) of ∇f(φ) with the required accuracy Δ₂. Then we use the inexact first-order oracle (f̃(φ, Δ₁), g̃(φ, Δ₂)) to perform a step of Algorithm 2. Since Φ is a Euclidean ball, it is natural to set E = R^m and ‖·‖ = ‖·‖₂, and to choose the prox-function d(φ) = ½‖φ‖₂². Then the Bregman distance is V(φ, ω) = ½‖φ − ω‖₂². Algorithm 4 in the Supplementary Materials is a formal record of the above ideas.
The most computationally consuming operations of the inner cycle of Algorithm 4 are the calculations of f̃(φ_k, Δ₁), f̃(w_k, Δ₁) and g̃(φ_k, Δ₂). Using Lemma 1 and Lemma 2, we obtain the complexity of each iteration. From Theorem 3 we obtain the following result, which gives the complexity of Algorithm 4.
Theorem 4. The total number of arithmetic operations in Algorithm 4 for the accuracy ε (i.e. for the inequality ‖M_K(φ_K − φ_{K+1})‖₂² ≤ ε to hold) is not more than

( 8L(f(φ₀) − f*)/ε + log₂(2L/L₀) ) · ( 7r|Q| + (6mps|Q|/α) ln( 1024 β₁ rRL√m / (αε) ) ).
6 EXPERIMENTAL RESULTS
In this section, we compare our gradient-free and gradient-based methods with the state-of-the-art gradient-based method [21] on the web page ranking problem. In the next subsection, we describe the dataset. In Section 6.2, we report the results of the experiments.
6.1 DATA
We consider the user web browsing graph Γ_q = (V_q, E_q), q ∈ Q, introduced in [12]. Unlike a link graph, a user browsing graph is query-dependent. The set of vertices V_q consists of all different pages visited by users during sessions started from q. The set of directed edges E_q represents all ordered pairs of neighboring elements (ĩ, i) from such sessions. We add a page i to the seed set U_q if and only if there is a session where i is the first page visited after submitting query q.

All experiments are performed with data of a popular commercial search engine, Yandex². We chose a random set of 600 queries Q and collected user sessions started with them. There are ≈ 11.7K vertices and ≈ 7.5K edges in the graphs Γ_q, q ∈ Q, in total. For each query, a set of pages was labelled by professional assessors with standard 5 relevance grades (≈ 1.7K labeled query-document pairs in total). We divide our data into two parts: on the first part Q₁ (50% of the set of queries Q) we train the parameters, and on the second part Q₂ we test the algorithms. For each q ∈ Q and i ∈ V_q, a vector V_i^q of size m₁ = 26 encodes features of the query-document pair (q, i). The vector E^q_{ĩi} of m₂ = 52 features for an edge ĩ → i ∈ E_q is obtained as the concatenation of V_ĩ^q and V_i^q.

To study the dependence of the efficiency of the algorithms on the sizes of the graphs, we sort the sets Q₁, Q₂ in ascending order of the sizes of the respective graphs. The sets Q_j^1, Q_j^2, Q_j^3 contain the first (in terms of this order) 100, 200, 300 elements respectively, for j ∈ {1, 2}.
6.2 PERFORMANCES OF THE OPTIMIZATION ALGORITHMS
We optimized the parameters φ by three methods: our gradient-free method GFN (Algorithm 2), our gradient-based method GBN (Algorithm 4), and the state-of-the-art gradient-based method GBP. The values of the hyperparameters are the following: the Lipschitz constant L = 10⁻⁴ in GFN (and L₀ = 10⁻⁴ in GBN), the accuracy ε = 10⁻⁶ (in both GBN and GFN), and the radius R = 0.99 (in both GBN and GFN). On all sets of queries, we compared the final values of the loss function for GBN with L₀ ∈ {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1}; the differences are less than 10⁻⁷. We choose L in GFN to be equal to L₀ (we show how the choice of L influences the output of the gradient-free algorithm in the Supplementary Materials, Figure 2). Moreover, we evaluate both our gradient-based and gradient-free algorithms for different values of the accuracy. The outputs of the algorithms differ insignificantly on all test sets Q_2^i, i ∈ {1, 2, 3}, when ε ≤ 10⁻⁶. On the lower level of the state-of-the-art gradient-based algorithm, the stochastic matrix and its derivative are raised to the power 100. We evaluate GBP for different values of the step size (50, 100, 200, 500). We stop the GBP algorithm when the difference between the values of the loss function on the next step and the current step is less than 10⁻⁵ on the test sets.
² yandex.com
In Table 1, we present the performance of the optimization algorithms in terms of the loss function f (2.4). We also compare the algorithms with the untuned Supervised PageRank (φ = φ₀ = e_m). In Figure 1, we give the outputs of the optimization algorithms on each iteration of the upper levels of the learning processes on the test set Q_2^3; similar results were obtained for the sets Q_2^1, Q_2^2.
Table 1: Comparison of the algorithms on the test sets.

            Q_2^1            Q_2^2            Q_2^3
Meth.       loss     steps   loss     steps   loss     steps
PR          .00357   0       .00354   0       .0033    0
GBN         .00279   12      .00305   12      .00295   12
GFN         .00274   106     .00297   106     .00292   106
GBP 50s.    .00282   16      .00307   31      .00295   40
GBP 100s.   .00282   8       .00307   16      .00295   20
GBP 200s.   .00283   4       .00308   7       .00295   9
GBP 500s.   .00283   2       .00308   2       .00295   3

Figure 1: Values of the loss function on each iteration of the optimization algorithms on the test set Q_2^3.
GFN significantly outperforms the state-of-the-art algorithm on all test sets. GBN significantly outperforms the state-of-the-art algorithm on Q_2^1 (we obtain the p-values of paired t-tests for all the above differences on the test sets of queries; all these values are less than 0.005). Moreover, GBN requires fewer iterations of the upper level (until it stops) than GBP with step sizes 50 and 100 on Q_2^2, Q_2^3. Finally, we show that the Nesterov-Nemirovski method converges to the stationary distribution faster than the power method (in the Supplementary Materials, Figure 2, we demonstrate the dependence of the value of the loss function on Q_1^1 for both methods of computing the untuned Supervised PageRank φ = φ₀ = e_m).
7 CONCLUSION
We propose a gradient-free optimization method for general convex problems with an inexact zero-order oracle, and an adaptive gradient method for possibly non-convex general composite optimization problems with an inexact first-order oracle. For both methods, we provide a convergence rate analysis. We also apply our new methods to the known problem of learning a web-page ranking algorithm. Our new algorithms not only outperform existing algorithms, but are also guaranteed to solve this learning problem. In practice, this means that these algorithms can increase the reliability and speed of a search engine. Also, to the best of our knowledge, this is the first time that the ideas of random gradient-free and gradient optimization methods have been combined with an efficient method for huge-scale optimization using the concept of an inexact oracle.

Acknowledgments. The research by P. Dvurechensky and A. Gasnikov presented in Section 4 of this paper was conducted at IITP RAS and supported by the Russian Science Foundation grant (project 14-50-00150); the research presented in Section 5 was supported by RFBR.
References
[1] A. Agarwal, O. Dekel and L. Xiao, Optimal algorithms for online convex optimization with multi-point bandit feedback, 2010, 23rd Annual Conference on Learning Theory (COLT).
[2] L. Backstrom and J. Leskovec, Supervised random walks: predicting and recommending links in social networks, 2011, WSDM.
[3] Na Dai and Brian D. Davison, Freshness Matters: In Flowers, Food, and Web Authority, 2010, SIGIR.
[4] N. Eiron, K. S. McCurley and J. A. Tomlin, Ranking the web frontier, 2004, WWW.
[5] A. Gasnikov and D. Dmitriev, Efficient randomized algorithms for PageRank problem, Comp. Math. & Math. Phys., 2015, 55(3): 1-18.
[6] B. Gao, T.-Y. Liu, W. W. Huazhong, T. Wang and H. Li, Semi-supervised ranking on very large graphs with rich metadata, 2011, KDD.
[7] S. Ghadimi and G. Lan, Stochastic first- and zeroth-order methods for nonconvex stochastic programming, SIAM Journal on Optimization, 2014, 23(4): 2341-2368.
[8] T. H. Haveliwala, Efficient computation of PageRank, Stanford University, 1999.
[9] T. H. Haveliwala, Topic-Sensitive PageRank, 2002, WWW.
[10] G. Jeh and J. Widom, Scaling Personalized Web Search, 2003, WWW.
[11] J. M. Kleinberg, Authoritative sources in a hyperlinked environment, 1998, SODA.
[12] Y. Liu, B. Gao, T.-Y. Liu, Y. Zhang, Z. Ma, S. He and H. Li, BrowseRank: Letting Web Users Vote for Page Importance, 2008, SIGIR.
[13] J. Matyas, Random optimization, Automation and Remote Control, 1965, 26: 246-253.
[14] Yu. Nesterov, Introductory Lectures on Convex Optimization, Springer, New York, 2004.
[15] Yu. Nesterov, Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM Journal on Optimization, 2012, 22(2): 341-362.
[16] Yu. Nesterov and V. Spokoiny, Random Gradient-Free Minimization of Convex Functions, Foundations of Computational Mathematics, 2015, 1-40.
[17] Yu. Nesterov and A. Nemirovski, Finding the stationary states of Markov chains by iterative methods, Applied Mathematics and Computation, 2015, 255: 58-65.
[18] L. Page, S. Brin, R. Motwani and T. Winograd, The PageRank citation ranking: Bringing order to the web, Stanford InfoLab, 1999.
[19] M. Richardson and P. Domingos, The intelligent surfer: Probabilistic combination of link and content information in PageRank, 2002, NIPS.
[20] M. Zhukovskii, G. Gusev and P. Serdyukov, URL Redirection Accounting for Improving Link-Based Ranking Methods, 2013, ECIR.
[21] M. Zhukovskii, G. Gusev and P. Serdyukov, Supervised Nested PageRank, 2014, CIKM.
[22] M. Zhukovskii, A. Khropov, G. Gusev and P. Serdyukov, Fresh BrowseRank, 2013, SIGIR.
[23] A. L. Andrew, Convergence of an iterative method for derivatives of eigensystems, Journal of Computational Physics, 1978, 26: 107-112.
[24] A. Andrew, Iterative computation of derivatives of eigenvalues and eigenvectors, IMA Journal of Applied Mathematics, 1979, 24(2): 209-218.
[25] O. Devolder, F. Glineur and Yu. Nesterov, First-order methods of smooth convex optimization with inexact oracle, Mathematical Programming, 2013, 146(1): 37-75.
[26] Yu. Nesterov and B. T. Polyak, Cubic regularization of Newton method and its global performance, Mathematical Programming, 2006, 108(1): 177-205.
[27] Yu. Nesterov, Gradient methods for minimizing composite functions, Mathematical Programming, 2012, 140(1): 125-161.
6,153 | 6,566 | Stochastic Optimization for
Large-scale Optimal Transport
Aude Genevay
CEREMADE, Université Paris-Dauphine
INRIA - Mokaplan project-team
[email protected]

Gabriel Peyré
CNRS and DMA, École Normale Supérieure
INRIA - Mokaplan project-team
[email protected]

Marco Cuturi
CREST, ENSAE
Université Paris-Saclay
[email protected]

Francis Bach
INRIA - Sierra project-team
DI, ENS
[email protected]
Abstract
Optimal transport (OT) defines a powerful framework to compare probability
distributions in a geometrically faithful way. However, the practical impact of OT
is still limited because of its computational burden. We propose a new class of
stochastic optimization algorithms to cope with large-scale OT problems. These
methods can handle arbitrary distributions (either discrete or continuous) as long
as one is able to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities,
while giving access to provably convergent methods that output the correct distance
without discretization error. These algorithms rely on two main ideas: (a) the
dual OT problem can be re-cast as the maximization of an expectation; (b) the
entropic regularization of the primal OT problem yields a smooth dual optimization
which can be addressed with algorithms that have a provably faster convergence.
We instantiate these ideas in three different setups: (i) when comparing a discrete
distribution to another, we show that incremental stochastic optimization schemes
can beat Sinkhorn's algorithm, the current state-of-the-art finite-dimensional OT solver; (ii) when comparing a discrete distribution to a continuous density, a semi-discrete reformulation of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, apart from computing OT on finite samples. We back up these claims on a set of discrete,
semi-discrete and continuous benchmark problems.
1 Introduction
Many problems in computational sciences require comparing probability measures or histograms. As a set of representative examples, let us quote: bag-of-visual-words comparison in computer
vision [17], color and shape processing in computer graphics [21], bag-of-words for natural language
processing [11] and multi-label classification [9]. In all of these problems, a geometry between the
features (words, visual words, labels) is usually known, and can be leveraged to compare probability
distributions in a geometrically faithful way. This underlying geometry might be for instance the
planar Euclidean domain for 2-D shapes, a perceptual 3D color metric space for image processing
or a high-dimensional semantic embedding for words. Optimal transport (OT) [24] is the canonical
way to automatically lift this geometry to define a metric for probability distributions. That metric is
known as the Wasserstein or earth mover?s distance. As an illustrative example, OT can use a metric
between words to build a metric between documents that are represented as frequency histograms of
words (see [11] for details). All the above-cited lines of work advocate, among others, that OT is the
natural choice to solve these problems, and that it leads to performance improvement when compared
to geometrically-oblivious distances such as the Euclidean or ?2 distances or the Kullback-Leibler
divergence. However, these advantages come at the price of an enormous computational overhead.
This is especially true because current OT solvers require to sample beforehand these distributions
on a pre-defined set of points, or on a grid. This is both inefficient (in term of storage and speed)
and counter-intuitive. Indeed, most high-dimensional computational scenarios naturally represent
distributions as objects from which one can sample, not as density functions to be discretized.
Our goal is to alleviate these shortcomings. We propose a class of provably convergent stochastic
optimization schemes that can handle both discrete and continuous distributions through sampling.
Previous works. The prevalent way to compute OT distances is by solving the so-called Kantorovich problem [10] (see Section 2 for a short primer on the basics of OT formulations), which boils down to a large-scale linear program when dealing with discrete distributions (i.e., finite weighted sums of
Dirac masses). This linear program can be solved using network flow solvers, which can be further
refined to assignment problems when comparing measures of the same size with uniform weights [3].
Recently, regularized approaches that solve OT with an entropic penalization [6] have been shown to be extremely efficient for approximating OT solutions at a very low computational cost. These regularized approaches have supported recent applications of OT to computer graphics [21] and machine learning [9]. These methods apply the celebrated Sinkhorn algorithm [20], and can be extended to solve more exotic transportation-related problems such as the computation of barycenters [21]. Their chief computational advantage over competing solvers is that each iteration boils down to matrix-vector multiplications, which can be easily parallelized, stream extremely well on GPUs, and enjoy linear-time implementations on regular grids or triangulated domains [21].
These methods are however purely discrete and cannot cope with continuous densities. The only known class of methods that can overcome this limitation are the so-called semi-discrete solvers [1], which can be implemented efficiently using computational geometry primitives [12]. They can compute the distance between a discrete distribution and a continuous density. Nonetheless, they are restricted to
the Euclidean squared cost, and can only be implemented in low dimensions (2-D and 3-D). Solving
these semi-discrete problems efficiently could have a significant impact on applications to density fitting with an OT loss [2] in machine learning; see [13]. Lastly, let us point out that
there is currently no method that can compute OT distances between two continuous densities, which
is thus an open problem we tackle in this article.
Contributions. This paper introduces stochastic optimization methods to compute large-scale optimal transport in all three possible settings: discrete OT, to compare a discrete vs. another discrete measure; semi-discrete OT, to compare a discrete vs. a continuous measure; and continuous OT, to compare a continuous vs. another continuous measure. These methods can be used to solve classical OT
problems, but they enjoy faster convergence properties when considering their entropic-regularized
versions. We show that the discrete regularized OT problem can be tackled using incremental
algorithms, and we consider in particular the stochastic averaged gradient (SAG) method [19]. Each
iteration of that algorithm requires N operations (N being the size of the supports of the input
distributions), which makes it scale better in large-scale problems than the state-of-the-art Sinkhorn
algorithm, while still enjoying a convergence rate of O(1/k), k being the number of iterations. We
show that the semi-discrete OT problem can be solved using averaged stochastic gradient descent (SGD), whose convergence rate is O(1/√k). This approach is numerically advantageous over the brute-force approach consisting in first sampling the continuous density and then solving a discrete OT problem. Lastly, for continuous optimal transport, we propose a novel method which makes use of an
expansion of the dual variables in a reproducing kernel Hilbert space (RKHS). This allows us for
the first time to compute with a converging algorithm OT distances between two arbitrary densities,
under the assumption that the two potentials belong to such an RKHS.
Notations. In the following we consider two metric spaces X and Y. We denote by M¹₊(X) the set of positive Radon probability measures on X, and by C(X) the space of continuous functions on X. Let µ ∈ M¹₊(X), ν ∈ M¹₊(Y); we define

Π(µ, ν) := { π ∈ M¹₊(X × Y) ; ∀(A, B) ⊂ X × Y, π(A × Y) = µ(A), π(X × B) = ν(B) },

the set of joint probability measures on X × Y with marginals µ and ν. The Kullback-Leibler divergence between joint probabilities is defined as

∀(π, ξ) ∈ M¹₊(X × Y)²,  KL(π|ξ) := ∫_{X×Y} ( log(dπ/dξ(x, y)) − 1 ) dπ(x, y),

where dπ/dξ denotes the relative density of π with respect to ξ, and by convention KL(π|ξ) := +∞ if π does not have a density with respect to ξ. The Dirac measure at point x is δ_x. For a set C, ι_C(x) = 0 if x ∈ C and ι_C(x) = +∞ otherwise. The probability simplex of N bins is Σ_N = { λ ∈ R^N₊ ; Σ_i λ_i = 1 }. Element-wise multiplication of vectors is denoted by ⊙, and K^T denotes the transpose of a matrix K. We denote 1_N = (1, ..., 1)^T ∈ R^N and 0_N = (0, ..., 0)^T ∈ R^N.
2 Optimal Transport: Primal, Dual and Semi-dual Formulations
We consider the optimal transport problem between two measures µ ∈ M¹₊(X) and ν ∈ M¹₊(Y), defined on metric spaces X and Y. No particular assumption is made on the form of µ and ν; we only assume that they both can be sampled from, to be able to apply our algorithms.

Primal, Dual and Semi-dual Formulations. The Kantorovich formulation [10] of OT and its entropic regularization [6] can be conveniently written in a single convex optimization problem as follows: for (µ, ν) ∈ M¹₊(X) × M¹₊(Y),

W_ε(µ, ν) := min_{π∈Π(µ,ν)} ∫_{X×Y} c(x, y) dπ(x, y) + ε KL(π | µ ⊗ ν).  (P_ε)

Here c ∈ C(X × Y), and c(x, y) should be interpreted as the "ground cost" to move a unit of mass from x to y. This c is typically application-dependent, and reflects some prior knowledge on the data to process. We refer to the introduction for a list of previous works where various examples (in imaging, vision, graphics or machine learning) of such costs are given.

When X = Y, ε = 0 and c = d^p for p ≥ 1, where d is a distance on X, then W₀(µ, ν)^{1/p} is known as the p-Wasserstein distance on M¹₊(X). Note that this definition can be used for any type of measure, both discrete and continuous. When ε > 0, problem (P_ε) is strongly convex, so that the optimal π is unique, and algebraic properties of the KL regularization result in computations that can be tackled using the Sinkhorn algorithm [6].
For any c ∈ C(X × Y), we define the following constraint set

U_c := { (u, v) ∈ C(X) × C(Y) ; ∀(x, y) ∈ X × Y, u(x) + v(y) ≤ c(x, y) },

and define its indicator function as well as its "smoothed" approximation

ι^ε_{U_c}(u, v) = { ι_{U_c}(u, v)                                                    if ε = 0,
                    ε ∫_{X×Y} exp( (u(x) + v(y) − c(x, y)) / ε ) dµ(x) dν(y)         if ε > 0.   (1)

For any v ∈ C(Y), we define its c-transform and its "smoothed" approximation

∀x ∈ X,  v^{c,ε}(x) = { min_{y∈Y} c(x, y) − v(y)                                     if ε = 0,
                        −ε log ∫_Y exp( (v(y) − c(x, y)) / ε ) dν(y)                 if ε > 0.   (2)
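For a discrete ν = Σ_j ν_j δ_{y_j}, the smoothed c-transform (2) is a soft minimum that should be evaluated with a stabilized log-sum-exp; a minimal sketch with our own variable names (c_x holds the row of costs c(x, y_j)):

```python
import numpy as np

def smoothed_c_transform(v, c_x, nu, eps):
    """v^{c,eps}(x) of Eq. (2) for a discrete nu: a soft minimum of
    c(x, y_j) - v_j; with eps = 0 it is the hard minimum."""
    if eps == 0.0:
        return float(np.min(c_x - v))
    z = (v - c_x) / eps
    zmax = z.max()                                 # log-sum-exp stabilization
    return -eps * (zmax + np.log(np.sum(nu * np.exp(z - zmax))))
```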
The proposition below describes two dual problems. It is central to our analysis and paves the way for the application of stochastic optimization methods.

Proposition 2.1 (Dual and semi-dual formulations). For ε ≥ 0, one has

W_ε(µ, ν) = max_{u∈C(X), v∈C(Y)} F_ε(u, v) := ∫_X u(x) dµ(x) + ∫_Y v(y) dν(y) − ι^ε_{U_c}(u, v),  (D_ε)

= max_{v∈C(Y)} H_ε(v) := ∫_X v^{c,ε}(x) dµ(x) + ∫_Y v(y) dν(y) − ε,  (S_ε)

where ι^ε_{U_c} is defined in (1) and v^{c,ε} in (2). Furthermore, u solving (D_ε) is recovered from an optimal v solving (S_ε) as u = v^{c,ε}. For ε > 0, the solution π of (P_ε) is recovered from any (u, v) solving (D_ε) as dπ(x, y) = exp( (u(x) + v(y) − c(x, y)) / ε ) dµ(x) dν(y).

Proof. Problem (D_ε) is the convex dual of (P_ε), and is derived using Fenchel-Rockafellar's theorem. The relation between u and v is obtained by writing the first-order optimality condition for v in (D_ε). Plugging this expression back into (D_ε) yields (S_ε).
Problem (P_ε) is called the primal, while (D_ε) is its associated dual problem. We refer to (S_ε) as the "semi-dual" problem, because in the special case ε = 0, (S_ε) boils down to the so-called semi-discrete OT problem [1]. Both dual problems are concave maximization problems. The optimal dual variables (u, v), known as Kantorovich potentials, are not unique, since for any solution (u, v) of (D_ε), (u + λ, v − λ) is also a solution for any λ ∈ R. When ε > 0, they can be shown to be unique up to this scalar translation [6]. We refer to the supplementary material for a discussion (and proofs) of the convergence of the solutions of (P_ε), (D_ε) and (S_ε) towards those of (P₀), (D₀) and (S₀) as ε → 0.

A key advantage of (S_ε) over (D_ε) is that, when ν is a discrete measure (but not necessarily µ), (S_ε) is a finite-dimensional concave maximization problem, which can thus be solved using stochastic programming techniques, as highlighted in Section 4. By contrast, when both µ and ν are continuous densities, these dual problems are intrinsically infinite-dimensional, and we propose in Section 5 more advanced techniques based on RKHSs.
Stochastic Optimization Formulations. The fundamental property needed to apply stochastic programming is that both dual problems (D_ε) and (S_ε) can be rephrased as the maximization of expectations:

∀ε > 0, F_ε(u, v) = E_{X,Y}[ f_ε(X, Y, u, v) ]  and  ∀ε ≥ 0, H_ε(v) = E_X[ h_ε(X, v) ],  (3)

where the random variables X and Y are independent and distributed according to µ and ν respectively, and where, for (x, y) ∈ X × Y and (u, v) ∈ C(X) × C(Y),

∀ε > 0,  f_ε(x, y, u, v) := u(x) + v(y) − ε exp( (u(x) + v(y) − c(x, y)) / ε ),

∀ε ≥ 0,  h_ε(x, v) := ∫_Y v(y) dν(y) + v^{c,ε}(x) − ε.

This reformulation is at the heart of the methods detailed in the remainder of this article. Note that the dual problem (D_ε) cannot be cast as an unconstrained expectation maximization problem when ε = 0, because of the constraint on the potentials which arises in that case.
When ν is discrete, i.e. ν = Σ_{j=1}^J ν_j δ_{y_j}, the potential v is a J-dimensional vector (v_j)_{j=1,...,J} and we can compute the gradient of h_ε. When ε > 0, the gradient reads ∇_v h_ε(v, x) = ν − χ(x), and the Hessian is given by ∇²_v h_ε(v, x) = (1/ε)( χ(x)χ(x)^T − diag(χ(x)) ), where χ(x)_i = ν_i exp( (v_i − c(x, y_i))/ε ) ( Σ_{j=1}^J ν_j exp( (v_j − c(x, y_j))/ε ) )^{−1}. The eigenvalues of the Hessian can be upper-bounded by 1/ε, which guarantees a Lipschitz gradient, and lower-bounded by 0, which does not ensure strong convexity. In the unregularized case, h₀ is not smooth, and a subgradient is given by ∇_v h₀(v, x) = ν − χ̃(x), where χ̃(x)_i = 1_{i=j*} and j* = argmin_{j∈{1,...,J}} c(x, y_j) − v_j (when several elements are in the argmin, we arbitrarily choose one of them to be j*). We insist on the lack of strong convexity of the semi-dual problem, as it impacts the convergence properties of the stochastic algorithms (stochastic averaged gradient and stochastic gradient descent) described below.
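In code, χ(x) is a ν-weighted softmax, so the (sub)gradient of h_ε can be evaluated as follows (a sketch with our own naming; c_x holds the costs c(x, y_j)):

```python
import numpy as np

def grad_h_eps(v, c_x, nu, eps):
    """Gradient nabla_v h_eps(v, x) = nu - chi(x) for one sample x,
    with chi the nu-weighted softmax of (v_j - c(x, y_j)) / eps;
    for eps = 0 it returns the subgradient nu - e_{j*}."""
    if eps == 0.0:
        chi = np.zeros_like(v)
        chi[np.argmin(c_x - v)] = 1.0
        return nu - chi
    z = (v - c_x) / eps
    w = nu * np.exp(z - z.max())                   # stabilized softmax weights
    return nu - w / w.sum()
```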
3 Discrete Optimal Transport
We assume in this section that both µ and ν are discrete measures, i.e. finite sums of Diracs, of the form µ = Σ_{i=1}^I µ_i δ_{x_i} and ν = Σ_{j=1}^J ν_j δ_{y_j}, where (x_i)_i ⊂ X and (y_j)_j ⊂ Y, and the histogram weight vectors are µ ∈ Σ_I and ν ∈ Σ_J. These discrete measures may come from the evaluation of continuous densities on a grid, from counting features in a structured object, or be empirical measures based on samples. This setting is relevant for several applications, including all known applications of the earth mover's distance. We show in this section that our stochastic formulation can prove extremely efficient for comparing measures with a large number of points.

Discrete Optimization and Sinkhorn. In this setup, the primal (P_ε), dual (D_ε) and semi-dual (S_ε) problems can be rewritten as finite-dimensional optimization problems involving the cost matrix
c ∈ R^{I×J}₊ defined by c_{i,j} = c(x_i, y_j):

W_ε(µ, ν) = min_{π∈R^{I×J}₊} { Σ_{i,j} c_{i,j} π_{i,j} + ε Σ_{i,j} ( log( π_{i,j} / (µ_i ν_j) ) − 1 ) π_{i,j} ; π 1_J = µ, π^T 1_I = ν },  (P̄_ε)

= max_{u∈R^I, v∈R^J} Σ_i u_i µ_i + Σ_j v_j ν_j − ε Σ_{i,j} exp( (u_i + v_j − c_{i,j}) / ε ) µ_i ν_j,  (for ε > 0)  (D̄_ε)

= max_{v∈R^J} H̄_ε(v) = Σ_{i∈I} h̄_ε(x_i, v) µ_i,  where  (S̄_ε)

h̄_ε(x, v) = Σ_{j∈J} v_j ν_j + { −ε log( Σ_{j∈J} exp( (v_j − c(x, y_j)) / ε ) ν_j ) − ε   if ε > 0,
                                min_j ( c(x, y_j) − v_j )                               if ε = 0.   (4)
The state-of-the-art method to solve the discrete regularized OT problem (i.e. when ε > 0) is Sinkhorn's algorithm [6, Alg. 1], which has a linear convergence rate [8]. It corresponds to a block coordinate maximization, successively optimizing (D̄_ε) with respect to either u or v. Each iteration of this algorithm is however costly, because it requires a matrix-vector multiplication. Indeed, this corresponds to a "batch" method where all the samples (x_i)_i and (y_j)_j are used at each iteration, which thus has complexity O(N²), where N = max(I, J). We now detail how to alleviate this issue using online stochastic optimization methods.
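For reference, a compact sketch of these Sinkhorn iterations in scaling form, written for the standard kernel K = exp(−C/ε); the paper's formulation with reference measure µ⊗ν differs only by a fixed rescaling of K (and a corresponding offset of the potentials), and all variable names are ours.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter):
    """Sinkhorn iterations: alternately rescale K = exp(-C / eps) so
    that the coupling diag(a) K diag(b) has marginals mu and nu.
    Each iteration costs two matrix-vector products, i.e. O(I * J)."""
    K = np.exp(-C / eps)
    a = np.ones_like(mu)
    for _ in range(n_iter):
        b = nu / (K.T @ a)                         # block update in v
        a = mu / (K @ b)                           # block update in u
    pi = a[:, None] * K * b[None, :]               # primal coupling
    return pi, eps * np.log(a), eps * np.log(b)    # potentials up to rescaling
```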
Incremental Discrete Optimization when ? > 0. Stochastic gradient descent (SGD), in which an
index k is drawn from distribution ? at each iteration can be used to minimize the finite sum that
? ? (xk , ?) can be used as a proxy for the full gradient in a
appears in in S?? . The gradient of that term h
? ?.
standard gradient ascent step to maximize H
When ε > 0, the finite sum appearing in (S̄_ε) suggests to use incremental gradient methods, rather
than purely stochastic ones, which are known to converge faster than SGD. We propose to use the
stochastic averaged gradient (SAG) [19]. As SGD, SAG operates at each iteration by sampling a
point x_k from μ, to compute the gradient corresponding to that sample for the current estimate v.
Unlike SGD, SAG keeps in memory a copy of that gradient. Another difference is that SAG applies
a fixed-length update, in the direction of the average of all gradients stored so far, which provides a
better proxy of the gradient corresponding to the entire sum. This improves the convergence rate to
|H̄_ε(v*) − H̄_ε(v_k)| = O(1/k), where v* is a minimizer of H̄_ε, at the expense of storing the gradient
for each of the I points. This expense can be mitigated by considering mini-batches instead of
individual points. Note that the SAG algorithm is adaptive to strong-convexity and will be linearly
convergent around the optimum. The pseudo-code for SAG is provided in Algorithm 1, and we defer
more details on SGD to Section 4, in which it will be shown to play a crucial role. Note that the
Lipschitz constant of all these terms is upper-bounded by L = max_i μ_i / ε.

Algorithm 1 SAG for Discrete OT
  Input: C
  Output: v
  v ← 0_J ; d ← 0_J ; ∀i, g_i ← 0_J
  for k = 1, 2, . . . do
    Sample i ∈ {1, 2, . . . , I} uniformly.
    d ← d − g_i
    g_i ← μ_i ∇_v h̄_ε(x_i, v)
    d ← d + g_i ; v ← v + C d
  end for
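A minimal NumPy rendering of Algorithm 1 follows. It assumes the gradient formula ∇_v h̄_ε(x_i, v) = ν − (softmax-weighted ν) implied by (4); the helper name and the log-sum-exp stabilization shift are ours, not the paper's.

```python
import numpy as np

def sag_discrete_ot(mu, nu, C_cost, eps, step, n_iter=100000, seed=0):
    # Stochastic averaged gradient (Algorithm 1) on the semi-dual (S_eps).
    rng = np.random.default_rng(seed)
    I, J = C_cost.shape
    v = np.zeros(J)
    g = np.zeros((I, J))   # one stored gradient per sample x_i
    d = np.zeros(J)        # running sum of the stored gradients
    for _ in range(n_iter):
        i = rng.integers(I)
        z = (v - C_cost[i]) / eps
        z -= z.max()                          # numerical stabilization (ours)
        w = np.exp(z) * nu
        grad_i = mu[i] * (nu - w / w.sum())   # mu_i * grad_v hbar_eps(x_i, v)
        d += grad_i - g[i]
        g[i] = grad_i
        v += step * d                         # fixed step, e.g. step = 3/L
    return v
```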
Numerical Illustrations on Bags of Word-Embeddings. Comparing texts using a Wasserstein
distance on their representations as clouds of word embeddings has been recently shown to yield
state-of-the-art accuracy for text classification [11]. The authors of [11] have however highlighted
that this accuracy comes at a large computational cost. We test our stochastic approach to discrete
OT in this scenario, using the complete works of 35 authors (names in the supplementary material). We
use GloVe word embeddings [14] to represent words, namely X = Y = R^300. We discard the 1,000
most frequent words, which appear at the top of the file glove.840B.300d provided on the authors'
website. We sample N = 20,000 words (found within the remaining huge dictionary of relatively
rare words) from each author's complete work. Each author is thus represented as a cloud of 20,000
points in R^300. The cost function c between the word embeddings is the squared Euclidean distance,
re-scaled so that it has a unit empirical median on 2,000 points sampled randomly among all vector
embeddings. We set ε to 0.01 (other values are considered in the supplementary material). We
compute all (35 × 34/2 = 595) pairwise regularized Wasserstein distances using both the Sinkhorn
algorithm and SAG. Following the recommendations in [19], SAG's stepsize is tested for 3 different
settings, 1/L, 3/L and 5/L. The convergence of each algorithm is measured by computing the ℓ1
norm of the gradient of the full sum (which also corresponds to the marginal violation of the primal
transport solution that can be recovered with these dual variables [6]), as well as the ℓ2 norm of the
deviation to the optimal scaling found after 4,000 passes for any of the three methods. Results are
presented in Fig. 1 and suggest that SAG can be more than twice as fast as Sinkhorn on average
for all tolerance thresholds. Note that SAG retains exactly the same parallel properties as Sinkhorn:
all of these computations can be streamlined on GPUs. We used 4 Tesla K80 cards to compute both
SAG and Sinkhorn results. For each computation, all 4,000 passes take less than 3 minutes (far fewer
are needed if the goal is only to approximate the Wasserstein distance itself, as proposed in [11]).

Figure 1: We compute all 595 pairwise word mover's distances [11] between 35 very large corpora
of text, each represented as a cloud of I = 20,000 word embeddings. We compare the Sinkhorn
algorithm with SAG, tuned with different stepsizes. Each pass corresponds to an I × I matrix-vector
product. We used minibatches of size 200 for SAG. Left plot: convergence of the gradient ℓ1 norm
(average and ± standard deviation error bars). A stepsize of 3/L achieves a substantial speed-up
of ≈ 2.5, as illustrated in the boxplots in the center plot. Convergence to v* (the best dual variable
across all variables after 4,000 passes) in ℓ2 norm is given in the right plot, up to 2,000 ≈ 2^11 steps.
4 Semi-Discrete Optimal Transport

In this section, we assume that μ is an arbitrary measure (in particular, it need not be discrete) and
that ν = Σ_{j=1}^{J} ν_j δ_{y_j} is a discrete measure. This corresponds to the semi-discrete OT problem [1, 12].
The semi-dual problem (S_ε) is then a finite-dimensional maximization problem, written in expectation
form as

  W_ε(μ, ν) = max_{v ∈ R^J} E_X [ h̄_ε(X, v) ],   where X ∼ μ and h̄_ε is defined in (4).
Stochastic Semi-discrete Optimization. Since the expectation is taken over an arbitrary measure,
neither the Sinkhorn algorithm nor incremental algorithms such as SAG can be used. An alternative
is to approximate μ by an empirical measure μ̂_N := (1/N) Σ_{i=1}^{N} δ_{x_i}, where (x_i)_{i=1,...,N} are i.i.d.
samples from μ, and computing W_ε(μ̂_N, ν) using the discrete methods (Sinkhorn or SAG) detailed
in Section 3. However this introduces a discretization noise in the solution, as the discrete problem
is now different from the original one and thus has a different solution. Averaged SGD on the other
hand does not require μ to be discrete and is thus perfectly adapted to this semi-discrete setting. The
algorithm is detailed in Algorithm 2 (the expression for ∇h̄_ε being given in Equation 4). The
convergence rate is O(1/√k) thanks to the averaging of the iterates ṽ_k [15].

Algorithm 2 Averaged SGD for Semi-Discrete OT
  Input: C
  Output: v
  ṽ ← 0_J ; v ← ṽ
  for k = 1, 2, . . . do
    Sample x_k from μ
    ṽ ← ṽ + (C/√k) ∇_v h̄_ε(x_k, ṽ)
    v ← ((k − 1)/k) v + (1/k) ṽ
  end for
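For concreteness, a NumPy sketch of Algorithm 2 is given below. Here `sample_mu` and `cost_to_all` are assumed callables (ours, not from the paper): the first draws one x ∼ μ, the second returns the vector (c(x, y_j))_j of costs to the support of ν; the last line implements the running average v ← ((k−1)/k) v + (1/k) ṽ.

```python
import numpy as np

def averaged_sgd_semidiscrete(sample_mu, nu, cost_to_all, eps, C, n_iter=100000):
    # Averaged SGD (Algorithm 2) for semi-discrete entropic OT.
    J = len(nu)
    v_tilde = np.zeros(J)
    v_avg = np.zeros(J)
    for k in range(1, n_iter + 1):
        x = sample_mu()
        z = (v_tilde - cost_to_all(x)) / eps
        z -= z.max()                                       # stabilization (ours)
        w = np.exp(z) * nu
        v_tilde += (C / np.sqrt(k)) * (nu - w / w.sum())   # ascent on hbar_eps(x, .)
        v_avg += (v_tilde - v_avg) / k                     # Polyak averaging
    return v_avg
```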
Numerical Illustrations. Simulations are performed in X = Y = R³. Here μ is a Gaussian mixture
(continuous density) and ν = (1/J) Σ_{j=1}^{J} δ_{y_j} with J = 10, where the (y_j)_j are i.i.d. samples from another
Gaussian mixture. Each mixture is composed of three Gaussians whose means are drawn randomly in
[0, 1]³, and their correlation matrices are constructed as Σ = 0.01(R^T + R) + 3I₃, where R is 3 × 3
with random entries in [0, 1]. In the following, we denote by v*_ε a solution of (S_ε), which is approximated
by running SGD for 10⁷ iterations, 100 times more than those plotted, to ensure reliable convergence
curves. Both plots are averaged over 50 runs; lighter lines show the variability in a single run.
Figure 2: (a) Plot of ‖v_k − v*_0‖₂ / ‖v*_0‖₂ as a function of k, for SGD and different values of ε
(ε = 0 being un-regularized). (b) Plot of ‖v_k − v*_ε‖₂ / ‖v*_ε‖₂ as a function of k, for SGD and SAG
with different numbers N of samples, for regularized OT using ε = 10⁻².
Figure 2 (a) shows the evolution of ‖v_k − v*_0‖₂ / ‖v*_0‖₂ as a function of k. It highlights the influence
of the regularization parameter ε on the iterates of SGD. While the regularized iterates converge
faster, they do not converge to the correct unregularized solution. This figure also illustrates the
convergence theorem on solutions of (S_ε) toward those of (S_0) when ε → 0, which can be found in the
supplementary material. Figure 2 (b) shows the evolution of ‖v_k − v*_ε‖₂ / ‖v*_ε‖₂ as a function of
k, for a fixed regularization parameter value ε = 10⁻². It compares SGD to SAG using different
numbers N of samples for the empirical measures μ̂_N. While SGD converges to the true solution of
the semi-discrete problem, the solution computed by SAG is biased because of the approximation
error which comes from the discretization of μ. This error decreases when the sample size N is
increased, as the approximation of μ by μ̂_N becomes more accurate.
5 Continuous optimal transport using RKHS

In the case where neither μ nor ν is discrete, problem (S_ε) is infinite-dimensional, so it cannot be
solved directly using SGD. We propose in this section to solve the initial dual problem (D_ε), using
expansions of the dual variables in two reproducing kernel Hilbert spaces (RKHS). Choosing dual
variables (or test functions) in a RKHS is the fundamental assumption underlying the Maximum
Mean Discrepancy (MMD) [22]. It is thus tempting to draw parallels between the approach in this
section and the MMD. The two methods do not, however, share much beyond using RKHSs. Indeed,
unlike the MMD, problem (D_ε) involves two different dual (test) functions u and v, one for each
measure; these are furthermore linked through the regularizer ι^ε_{U_c}. Recall finally that contrarily to the
semi-discrete setting, we can only solve the regularized problem here (i.e. ε > 0), since (D_ε) cannot
be cast as an expectation maximization problem when ε = 0.

Stochastic Continuous Optimization. We consider two RKHS H and G defined on X and on Y,
with kernels κ and ℓ, associated with norms ‖·‖_H and ‖·‖_G. Recall the two main properties of a
RKHS: (a) if u ∈ H, then u(x) = ⟨u, κ(·, x)⟩_H, and (b) κ(x, x′) = ⟨κ(·, x), κ(·, x′)⟩_H.
The dual problem (D_ε) is conveniently re-written in (3) as the maximization of the expectation of
f_ε(X, Y, u, v) with respect to the random variables (X, Y) ∼ μ ⊗ ν. The SGD algorithm applied to
this problem reads, starting with u_0 = 0 and v_0 = 0,

  (u_k, v_k) := (u_{k−1}, v_{k−1}) + (C/√k) ∇f_ε(x_k, y_k, u_{k−1}, v_{k−1}) ∈ H × G,    (5)

where (x_k, y_k) are i.i.d. samples from μ ⊗ ν. The following proposition shows that these (u_k, v_k)
iterates can be expressed as finite sums of kernel functions, with a simple recursion formula.
Proposition 5.1. The iterates (u_k, v_k) defined in (5) satisfy

  (u_k, v_k) = Σ_{i=1}^{k} (C/√i) α_i ( κ(·, x_i), ℓ(·, y_i) ),   where   α_i := Π_{B_r} ( 1 − e^{(u_{i−1}(x_i) + v_{i−1}(y_i) − c(x_i, y_i))/ε} ),    (6)

where (x_i, y_i)_{i=1,...,k} are i.i.d. samples from μ ⊗ ν and Π_{B_r} is the projection on the centered ball of
radius r. If the solutions of (D_ε) are in H × G and if r is large enough, the iterates (u_k, v_k)
converge to a solution of (D_ε).

The proof of Proposition 5.1 can be found in the supplementary material.
Figure 3: (a) Plot of dμ/dx and dν/dx. (b) Plot of ‖u_k − û*‖₂ / ‖û*‖₂ as a function of k with SGD in the
RKHS, for regularized OT using ε = 10⁻¹. (c) Plot of the iterates u_k for k = 10³, 10⁴, 10⁵ and the
proxy for the true potential û*, evaluated on a grid where μ has non-negligible mass.
Algorithm 3 describes our kernel SGD approach, in which both potentials u and v are approximated
by a linear combination of kernel functions. The main cost lies in the computation of the terms
u_{k−1}(x_k) and v_{k−1}(y_k), which imply a quadratic complexity O(k²). Several methods exist to
alleviate the running time complexity of kernel algorithms, e.g. random Fourier features [16] or
incremental incomplete Cholesky decomposition [25].

Algorithm 3 Kernel SGD for continuous OT
  Input: C, kernels κ and ℓ
  Output: (α_k, x_k, y_k)_{k=1,...}
  for k = 1, 2, . . . do
    Sample x_k from μ
    Sample y_k from ν
    u_{k−1}(x_k) := Σ_{i=1}^{k−1} α_i κ(x_k, x_i)
    v_{k−1}(y_k) := Σ_{i=1}^{k−1} α_i ℓ(y_k, y_i)
    α_k := (C/√k) Π_{B_r} ( 1 − e^{(u_{k−1}(x_k) + v_{k−1}(y_k) − c(x_k, y_k))/ε} )
  end for
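Below is a minimal Python sketch of Algorithm 3 using a Gaussian kernel of bandwidth sigma on both spaces (a universal choice, cf. the next paragraph), with the radius-r projection omitted for brevity; the O(k) inner loop gives the O(k²) total cost mentioned above. The function and argument names are ours, and `sample_mu`/`sample_nu` are assumed to return NumPy vectors.

```python
import numpy as np

def kernel_sgd_ot(sample_mu, sample_nu, cost, eps, C, sigma, n_iter=10000):
    # Kernel SGD (Algorithm 3); iteration k does O(k) work, O(n_iter^2) total.
    gauss = lambda a, B: np.exp(-np.sum((B - a) ** 2, axis=1) / sigma ** 2)
    xs, ys, alphas = [], [], []
    for k in range(1, n_iter + 1):
        x, y = sample_mu(), sample_nu()
        if alphas:
            al = np.asarray(alphas)
            u = al @ gauss(x, np.asarray(xs))   # u_{k-1}(x_k)
            v = al @ gauss(y, np.asarray(ys))   # v_{k-1}(y_k)
        else:
            u = v = 0.0
        alphas.append((C / np.sqrt(k)) * (1.0 - np.exp((u + v - cost(x, y)) / eps)))
        xs.append(x)
        ys.append(y)
    return alphas, xs, ys
```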
Kernels that are associated with dense RKHSs are called universal [23] and can approach any arbitrary
potential. In Euclidean spaces X = Y = R^d, where d > 0, a natural choice of universal kernel is the
kernel defined by κ(x, x′) = exp(−‖x − x′‖²/σ²). Tuning its bandwidth σ is crucial to obtain a good
convergence of the algorithm.

Finally, let us note that, while entropic regularization of the primal problem (P_ε) was instrumental
to be able to apply semi-discrete methods in Sections 3 and 4, this is not the case here. Indeed,
since the kernel SGD algorithm is applied to the dual (D_ε), it is possible to replace KL(π | μ ⊗ ν)
appearing in (P_ε) by other regularizing divergences. A typical example would be a χ² divergence
∫_{X×Y} ( dπ/(dμ dν)(x, y) )² dμ(x) dν(y) (with positivity constraints on π).
Numerical Illustrations. We consider optimal transport in 1D between a Gaussian μ and a Gaussian
mixture ν whose densities are represented in Figure 3 (a). Since there is no existing benchmark for
continuous transport, we use the solution of the semi-discrete problem W_ε(μ, ν̂_N) with N = 10³
computed with SGD as a proxy for the solution, and we denote it by û*. We focus on the convergence
of the potential u, as it is continuous in both problems contrarily to v. Figure 3 (b) represents the plot
of ‖u_k − û*‖₂ / ‖û*‖₂, where u is the evaluation of u on a sample (x_i)_{i=1,...,N′} drawn from μ. This
gives more emphasis to the norm on points where μ has more mass. The convergence is rather slow
but still noticeable. The iterates u_k are plotted on a grid for different values of k in Figure 3 (c), to
emphasize the convergence to the proxy û*. We can see that the iterates computed with the RKHS
converge faster where μ has more mass, which is actually where the value of u has the greatest impact
in F_ε (u being integrated against μ).
Conclusion

We have shown in this work that the computations behind (regularized) optimal transport can be
considerably alleviated, or simply enabled, using a stochastic optimization approach. In the discrete
case, we have shown that incremental gradient methods can surpass the Sinkhorn algorithm in
terms of efficiency, taking for granted that the (constant) stepsize has been correctly selected, which
should be possible in practical applications. We have also proposed the first known methods that can
address the challenging semi-discrete and continuous cases. All of these three settings can open new
perspectives for the application of OT to high-dimensional problems.

Acknowledgements GP was supported by the European Research Council (ERC SIGMA-Vision); AG by
Région Île-de-France; MC by JSPS grant 26700002.
References
[1] F. Aurenhammer, F. Hoffmann, and B. Aronov. Minkowski-type theorems and least-squares clustering. Algorithmica, 20(1):61–76, 1998.
[2] F. Bassetti, A. Bodini, and E. Regazzini. On minimum Kantorovich distance estimators. Statistics & Probability Letters, 76(12):1298–1302, 2006.
[3] R. Burkard, M. Dell'Amico, and S. Martello. Assignment Problems. SIAM, 2009.
[4] G. Carlier, V. Duval, G. Peyré, and B. Schmitzer. Convergence of entropic schemes for optimal transport and gradient flows. arXiv preprint arXiv:1512.02783, 2015.
[5] R. Cominetti and J. San Martin. Asymptotic analysis of the exponential penalty trajectory in linear programming. Mathematical Programming, 67(1-3):169–187, 1994.
[6] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Adv. in Neural Information Processing Systems, pages 2292–2300, 2013.
[7] A. Dieuleveut and F. Bach. Non-parametric stochastic approximation with large step sizes. arXiv preprint arXiv:1408.0361, 2014.
[8] J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra and its Applications, 114:717–735, 1989.
[9] C. Frogner, C. Zhang, H. Mobahi, M. Araya, and T. Poggio. Learning with a Wasserstein loss. In Adv. in Neural Information Processing Systems, pages 2044–2052, 2015.
[10] L. Kantorovich. On the transfer of masses (in Russian). Doklady Akademii Nauk, 37(2):227–229, 1942.
[11] M. J. Kusner, Y. Sun, N. I. Kolkin, and K. Q. Weinberger. From word embeddings to document distances. In ICML, 2015.
[12] Q. Mérigot. A multiscale approach to optimal transport. Comput. Graph. Forum, 30(5):1583–1592, 2011.
[13] G. Montavon, K.-R. Müller, and M. Cuturi. Wasserstein training of restricted Boltzmann machines. In Adv. in Neural Information Processing Systems, 2016.
[14] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. Proc. of Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543, 2014.
[15] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992.
[16] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. in Neural Information Processing Systems, pages 1177–1184, 2007.
[17] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. IJCV, 40(2):99–121, November 2000.
[18] F. Santambrogio. Optimal Transport for Applied Mathematicians. Birkhäuser, NY, 2015.
[19] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 2016.
[20] R. Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Statist., 35:876–879, 1964.
[21] J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (SIGGRAPH), 34(4):66:1–66:11, 2015.
[22] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, G. R. G. Lanckriet, et al. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012.
[23] I. Steinwart and A. Christmann. Support Vector Machines. Springer Science & Business Media, 2008.
[24] C. Villani. Topics in Optimal Transportation. Graduate Studies in Math. AMS, 2003.
[25] G. Wu, E. Chang, Y. K. Chen, and C. Hughes. Incremental approximate matrix factorization for speeding up support vector machines. In Proc. of the 12th ACM SIGKDD Intern. Conf. on Knowledge Discovery and Data Mining, pages 760–766, 2006.
Mistake Bounds for Binary Matrix Completion
Mark Herbster
University College London
Department of Computer Science
London WC1E 6BT, UK
[email protected]
Stephen Pasteris
University College London
Department of Computer Science
London WC1E 6BT, UK
[email protected]
Massimiliano Pontil
Istituto Italiano di Tecnologia
16163 Genoa, Italy
and
University College London
Department of Computer Science
London WC1E 6BT, UK
[email protected]
Abstract
We study the problem of completing a binary matrix in an online learning setting.
On each trial we predict a matrix entry and then receive the true entry. We propose
a Matrix Exponentiated Gradient algorithm [1] to solve this problem. We provide a
mistake bound for the algorithm, which scales with the margin complexity [2, 3] of
the underlying matrix. The bound suggests an interpretation where each row of
the matrix is a prediction task over a finite set of objects, the columns. Using this
we show that the algorithm makes a number of mistakes which is comparable up
to a logarithmic factor to the number of mistakes made by the Kernel Perceptron
with an optimal kernel in hindsight. We discuss applications of the algorithm to
predicting as well as the best biclustering and to the problem of predicting the
labeling of a graph without knowing the graph in advance.
1 Introduction
We consider the problem of predicting online the entries in an m × n binary matrix U. We formulate
this as the following game: nature queries an entry (i₁, j₁); the learner predicts ŷ₁ ∈ {−1, 1} as the
matrix entry; nature presents a label y₁ = U_{i₁,j₁}; nature queries the entry (i₂, j₂); the learner predicts
ŷ₂; and so forth. The learner's goal is to minimize the total number of mistakes M = |{t : ŷ_t ≠ y_t}|.
If nature is adversarial, the learner will always mispredict, but if nature is regular or simple, there is
hope that a learner may make only a few mispredictions.
In our setting we are motivated by the following interpretation of matrix completion. Each of the
m rows represents a task (or binary classifier) and each of the n columns is associated with an
object (or input). A task is the problem of predicting the binary label of each of the objects. For a
single task, if we were given a kernel matrix between the objects in advance we could then use the
Kernel Perceptron algorithm to sequentially label the objects, and this algorithm would incur O(1/γ²)
mistakes, where γ is the margin of the best linear classifier in the inner product space induced by
the kernel. Unfortunately, in our setup, we do not know a good kernel in advance. However, we will
show that a remarkable property of our algorithm is that it enjoys, up to logarithmic factors, a mistake
bound of O(1/γ²) per task, where γ is the largest possible margin (over the choice of the kernel)
which is achieved on all tasks.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
The problem of predicting online the labels of a finite set of objects under the assumption that the
similarity between objects can be described by a graph was introduced in [4], building upon earlier
work in the batch setting [5, 6]. In this and later research the common assumption is that two objects
are similar if there is an edge in the graph connecting them and the aim is to predict well when there
are few edges between objects with disagreeing labels. Lower bounds and an optimal algorithm (up
to logarithmic factors) for this problem were given in [7, 8]. The problem of predicting well when
the graph is unknown was previously addressed in [9, 10]. That research took the approach that when
receiving a vertex to predict, edges local to that vertex were then revealed. In this paper we take a
different approach - the graph structure is never revealed to the learner. Instead, we have a number of
tasks over the same unknown graph, and the hope is to perform comparably to the case in which the
graph is known in advance.
The general problem of matrix completion has been studied extensively in the batch statistical i.i.d.
setting, see for example [11, 12, 13] and references therein. These studies are concerned either with
Rademacher bounds or statistical oracle inequalities, both of which are substantially different from the
focus of the present paper. In the online mistake-bound setting a special form of matrix completion
was previously considered as the problem of learning a binary relation [14, 15] (see Section 5). In
a more general online setting, with minimal assumptions on the loss function [16, 17] bounded the
regret of the learner in terms of the trace-norm of the underlying matrix. Instead our bounds are
with respect to the margin complexity of the matrix. As a result, although our bounds have a more
restricted applicability, they have the advantage that they become non-trivial after only Ω̃(n) matrix
entries are observed,¹ as opposed to the required Ω̃(n^{3/2}) in [16] and Ω̃(n^{7/4}) in [17]. The notion
of margin complexity in machine learning was introduced in [2], where it was used to study the
learnability of concept classes via linear embeddings, and further studied in [3], where it was linked
to the γ₂ norm. Here we adopt the terminology in [11] and refer to the γ₂ norm as the max-norm.
The margin complexity seems to be a more natural parameter as opposed to the trace-norm for the
0-1 loss, as it only depends on the signs of the underlying comparator matrix. To the best of our
knowledge the bounds contained herein are the first online matrix completion bounds in terms of the
margin complexity.
To obtain our results, we use an online matrix multiplicative weights algorithm, e.g., see [1, 18, 17, 19]
and references therein. These kinds of algorithms have been applied in a number of learning
scenarios, including online PCA [20], online variance minimization [21], solving SDPs [18], and
online prediction with switching sequences [22]. These algorithms update a new hypothesis matrix
on each trial by trading off fidelity to the previous hypothesis and the incorporation of the new label
information. The tradeoff is computed as an approximate spectral regularization via the quantum
relative entropy (see [1, Section 3.1]). The particular matrix multiplicative weights algorithm we
apply is Matrix Winnow [19]; we adapt this algorithm and its mistake bound analysis for our purposes
via selection of comparator, threshold, and appropriate "progress inequalities".
The paper is organized as follows. In Section 2 we introduce basic notions used in the paper. In
Section 3 we present our algorithm and derive a mistake bound, also comparing it to related bounds
in the literature. In Section 4 we observe that our algorithm is able to exploit matrix structure to
perform comparably to the Kernel Perceptron with the best kernel known in advance. Finally, in
Section 5 we discuss the example of biclustered matrices, and argue that our bound is optimal up to a
polylogarithmic factor. The appendix contains proofs of the results only stated in the main body of
the paper, and other auxiliary results.
2 Preliminaries

We denote the set of the first m positive integers as N_m = {1, . . . , m}. We denote the inner
product of vectors x, w ∈ R^n as ⟨x, w⟩ = Σ_{i=1}^{n} x_i w_i and the norm as |w| = √⟨w, w⟩. We
let R^{m×n} be the set of all m × n real-valued matrices. If X ∈ R^{m×n} then X_i denotes the i-th
n-dimensional row vector and the (i, j) entry in X is X_{ij}. The trace of a square matrix X ∈ R^{n×n}
is Tr(X) = Σ_{i=1}^{n} X_{ii}. The trace norm of a matrix X ∈ R^{m×n} is ‖X‖₁ = Tr(√(X^T X)), where
√· indicates the unique positive square root of a positive semi-definite matrix. For every matrix
U ∈ {−1, 1}^{m×n}, we define SP(U) = {V ∈ R^{m×n} : ∀ij V_{ij} U_{ij} > 0}, the set of matrices which
are sign consistent with U. We also define SP1(U) = {V ∈ R^{m×n} : ∀ij V_{ij} U_{ij} ≥ 1}, that is the set
of matrices which are sign consistent with U with a margin of at least one.

¹ For simplicity we assume m ∈ Θ(n).

The max-norm (or γ₂ norm [3]) of a matrix U ∈ R^{m×n} is defined by the formula

  ‖U‖_max := inf_{PQ^T = U} { max_{1≤i≤m} |P_i| · max_{1≤j≤n} |Q_j| },    (1)

where the infimum is over all matrices P ∈ R^{m×k} and Q ∈ R^{n×k} and every integer k. The margin
complexity of a matrix U ∈ R^{m×n} is

  mc(U) := inf_{PQ^T ∈ SP(U)} max_{ij} |P_i||Q_j| / |⟨P_i, Q_j⟩|.
P Q 2SP(U ) ij |hP i , Qj i|
This quantity plays a central role in the analysis of our algorithm. If we interpret the rows of U as m
different binary classification tasks, and the columns as a finite set of objects which we wish to label,
the ?min-max? margin with respect to an embedding is smallest of the m maximal margins over the
tasks. The quantity 1/ mc(U ) is then the maximum ?min-max? margin with respect to all possible
embeddings. Specifically, the rows of matrix P represent the ?weights? of the binary classifiers
|hP ,Q i|
and the rows of matrix Q the ?input vectors? associated with the objects. The quantity |P ii||Qj |
j
is the margin of the i-th classifier on the j-th input. Observe that margin complexity depends only
on the sign pattern of the matrix and not the magnitudes. The margin complexity is equivalently
mc(U ) = minV 2SP1 (U ) kV kmax , see e.g., [3, Lemma 3.1].
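Since mc(U) is an infimum over factorizations, any explicit pair (P, Q) that is sign consistent with U certifies an upper bound on it; the small NumPy helper below (ours, for illustration) evaluates that bound for a given factorization.

```python
import numpy as np

def margin_complexity_ub(P, Q):
    """Upper bound on mc(U) from one factorization with sign(P @ Q.T) = U:
    max_{ij} |P_i||Q_j| / |<P_i, Q_j>| (assumes no inner product is zero)."""
    G = P @ Q.T                                   # inner products <P_i, Q_j>
    norms = np.outer(np.linalg.norm(P, axis=1),
                     np.linalg.norm(Q, axis=1))   # |P_i| * |Q_j|
    return np.max(norms / np.abs(G))
```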
In our online setting we are concerned with predicting an (example) sequence
((i₁, j₁), y₁), . . . , ((i_T, j_T), y_T) ∈ (N_m × N_n) × {−1, 1}. A sequence must be consistent, that is,
given examples ((i, j), y) and ((i′, j′), y′), if (i, j) = (i′, j′) then y = y′. We define the set of
sign-consistent matrices with a sequence S as cons(S) := {M ∈ R^{m×n} : 0 < y M_{ij}, ((i, j), y) ∈ S}.
We extend the notion of margin complexity to sequences via mc(S) := inf_{U ∈ cons(S)} mc(U).
The number of margin violations in a sequence S at complexity γ is defined to be

  merr(S, γ) := inf_{PQ^T ∈ cons(S)} | { ((i, j), y) ∈ S : |P_i||Q_j| / |⟨P_i, Q_j⟩| > 1/γ } |.    (2)

In particular, note that merr(S, γ) = 0 if γ ≤ 1/mc(S).
Finally, we introduce the following quantity, which plays a central role in the amortized analysis of
our algorithm.

Definition 2.1. The quantum relative entropy of symmetric positive semidefinite square matrices A
and B is

  Δ(A, B) := Tr( A log(A) − A log(B) + B − A ).
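For intuition, Δ(A, B) is straightforward to evaluate numerically; the sketch below (ours) uses SciPy's `logm` and assumes strictly positive definite inputs so that the matrix logarithm is defined.

```python
import numpy as np
from scipy.linalg import logm

def quantum_relative_entropy(A, B):
    # Delta(A, B) = Tr(A log A - A log B + B - A) for symmetric PD A, B.
    return np.trace(A @ logm(A) - A @ logm(B) + B - A).real
```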
3 Algorithm and Analysis

Algorithm 1 presents an adaptation of the Matrix Exponentiated Gradient algorithm [1, 17, 18, 19] to
our setting. This algorithm is a matrix analog of the Winnow algorithm [19]; we refer to the above
papers for more insights into this family of algorithms.

The following theorem provides a mistake bound for the algorithm.

Theorem 3.1. The number of mistakes, M, on sequence S made by Algorithm 1 with parameter
0 < γ ≤ 1 is upper bounded by

  M ≤ c ( (m + n) log(m + n) (1/γ²) + merr(S, γ) ),    (3)

where c = 1/(3 − e) ≈ 3.55 and the quantity merr(S, γ) is given in equation (2).
Proof. Given U ∈ R^{m×n}, let P ∈ R^{m×k} and Q ∈ R^{n×k} be such that PQ^T = U. For every
i ∈ N_m, we denote by P_i the i-th row vector of P and for every j ∈ N_n, we denote by Q_j the j-th
row vector of Q. We construct the (m + n) × k matrix

  R := diag( 1/|P₁|, . . . , 1/|P_m|, 1/|Q₁|, . . . , 1/|Q_n| ) [ P ; Q ],

where [P ; Q] denotes P stacked on top of Q.
Algorithm 1 Predicting a binary matrix.
  Parameters: Learning rate 0 < γ ≤ 1.
  Initialization: W^{(0)} ← I/(m+n), where I is the (m + n) × (m + n) identity matrix.
  For t = 1, . . . , T
    • Get pair (i_t, j_t) ∈ N_m × N_n.
    • Define X^{(t)} := ½ (e_{i_t} + e_{m+j_t})(e_{i_t} + e_{m+j_t})^T, where e_k is the k-th basis vector of R^{m+n}.
    • Predict
        ŷ_t = 1 if Tr(W^{(t−1)} X^{(t)}) ≥ 1/(m+n), and ŷ_t = −1 otherwise.
    • Receive label y_t ∈ {−1, 1} and if ŷ_t ≠ y_t update
        W^{(t)} ← exp( log W^{(t−1)} + (γ/2)(y_t − ŷ_t) X^{(t)} ).
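A single trial of Algorithm 1 can be implemented with one symmetric eigendecomposition, since W stays positive definite; the NumPy sketch below is ours and is only meant to mirror the update rule above.

```python
import numpy as np

def matrix_winnow_step(W, i, j, y, gamma, m, n):
    """One trial of Algorithm 1 (sketch). W is the (m+n)x(m+n) PD weight
    matrix, initialized as np.eye(m + n) / (m + n)."""
    e = np.zeros(m + n)
    e[i] = e[m + j] = 1.0
    X = 0.5 * np.outer(e, e)
    y_hat = 1 if np.trace(W @ X) >= 1.0 / (m + n) else -1
    if y_hat != y:
        lam, V = np.linalg.eigh(W)             # W = V diag(lam) V^T, lam > 0
        logW = V @ np.diag(np.log(lam)) @ V.T  # matrix logarithm
        A = logW + 0.5 * gamma * (y - y_hat) * X
        lam2, V2 = np.linalg.eigh(A)
        W = V2 @ np.diag(np.exp(lam2)) @ V2.T  # matrix exponential
    return y_hat, W
```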
We then construct Ū := (1/(m+n)) R R^T. Define the matrix X^{(t)} := ½ (e_{i_t} + e_{m+j_t})(e_{i_t} + e_{m+j_t})^T,
where e_k is the k-th basis vector of R^{m+n}.

Note that Tr(X^{(t)}) = 1, Tr(Ū) = 1 (since every row of R is normalized) and

  Tr(Ū X^{(t)}) = (1/(n+m)) · ½ · Tr( (R R^T)(e_{i_t} + e_{m+j_t})(e_{i_t} + e_{m+j_t})^T )
              = (1/(2(n+m))) (e_{i_t} + e_{m+j_t})^T R R^T (e_{i_t} + e_{m+j_t})
              = (1/(2(n+m))) ( R^T(e_{i_t} + e_{m+j_t}) )^T ( R^T(e_{i_t} + e_{m+j_t}) )
              = (1/(2(n+m))) ( P_{i_t}/|P_{i_t}| + Q_{j_t}/|Q_{j_t}| ) ( P_{i_t}/|P_{i_t}| + Q_{j_t}/|Q_{j_t}| )^T
              = (1/(n+m)) ( 1 + ⟨P_{i_t}, Q_{j_t}⟩ / (|P_{i_t}| |Q_{j_t}|) ).

For a trial t we say there is a margin violation if |P_{i_t}||Q_{j_t}| / |⟨P_{i_t}, Q_{j_t}⟩| > 1/γ. Let M⁺ denote the
number of mistakes made in trials with margin violations and let M⁻ denote the number of mistakes
made in trials without margin violations.
From Lemma A.3 in the appendix we have

  Δ(Ū, W^{(t−1)}) − Δ(Ū, W^{(t)}) ≥ (γ/2)(y_t − ŷ_t) Tr(Ū X^{(t)}) + ( 1 − e^{(γ/2)(y_t − ŷ_t)} ) Tr(W^{(t−1)} X^{(t)}),

then substituting in the above we have that

  Δ(Ū, W^{(t−1)}) − Δ(Ū, W^{(t)}) ≥ (γ/2)(y_t − ŷ_t) (1/(n+m)) ( 1 + ⟨P_{i_t}, Q_{j_t}⟩/(|P_{i_t}||Q_{j_t}|) )
                                   + ( 1 − e^{(γ/2)(y_t − ŷ_t)} ) Tr(W^{(t−1)} X^{(t)}).

To further simplify the above we use Lemma A.4 presented in the appendix, which gives

  Δ(Ū, W^{(t−1)}) − Δ(Ū, W^{(t)}) ≥ (c′ − 1) γ²/(n+m)   if there is a margin violation,
  Δ(Ū, W^{(t−1)}) − Δ(Ū, W^{(t)}) ≥ c′ γ²/(n+m)          otherwise,

where c′ = 3 − e.
Using a telescoping sum, this gives

  Δ(Ū, W^{(0)}) ≥ Δ(Ū, W^{(0)}) − Δ(Ū, W^{(T)}) ≥ ( c′ M⁻ + (c′ − 1) M⁺ ) γ²/(n+m),

and hence

  M⁻ ≤ (1/c′) (n+m)/γ² · Δ(Ū, W^{(0)}) + ((1 − c′)/c′) M⁺.

We conclude that

  M = M⁺ + M⁻ ≤ (1/c′) (n+m)/γ² · Δ(Ū, W^{(0)}) + (1/c′) M⁺.

We also have that

  Δ(Ū, W^{(0)}) = Tr(Ū log(Ū)) − Tr(Ū log(W^{(0)})) + Tr(W^{(0)}) − Tr(Ū)
              = Tr(Ū log(Ū)) − Tr(Ū log(W^{(0)})) + 1 − 1
              = Tr(Ū log(Ū)) − Tr(Ū log(W^{(0)})).

Write the eigen-decomposition of Ū as Σ_{i=1}^{m+n} λ_i θ_i θ_i^T. Now we have Σ_{i=1}^{m+n} λ_i = Tr(Ū) = 1,
so all eigenvalues λ_i are in the range [0, 1], meaning log(λ_i) ≤ 0, so λ_i log(λ_i) ≤ 0; these are the
eigenvalues of Ū log(Ū), meaning that Tr(Ū log(Ū)) ≤ 0. Also, log(W^{(0)}) = log(1/(n+m)) I, so
Ū log(W^{(0)}) = log(1/(n+m)) Ū and hence Tr(Ū log(W^{(0)})) = log(1/(n+m)) Tr(Ū) = −log(m + n).
So by the above we have

  Δ(Ū, W^{(0)}) ≤ log(m + n),

and hence putting everything together we get

  M ≤ (m+n) log(m + n) / (c′ γ²) + (1/c′) M⁺.

Since each mistake in a margin-violating trial is counted among the margin violations, M⁺ ≤ merr(S, γ),
and the bound (3) follows with c = 1/c′ = 1/(3 − e).
Observe that in the simplifying case when we have no margin errors (merr(S, γ) = 0) and the
learning rate is γ := 1/mc(S), the number of mistakes of Algorithm 1 is bounded by
Õ((n + m) mc²(S)). More generally, although the learning rate is fixed in advance, we may use a
"doubling trick" to avoid the need to tune γ.
Corollary 3.2. For any value of γ*, the number of mistakes M made by the following algorithm:

  DOUBLING ALGORITHM:
  Set γ̂ ← √2 and loop over
    1. Run Algorithm 1 with γ = 1/γ̂ until it has made ⌈2c(m + n) log(m + n) γ̂²⌉ mistakes
    2. Set γ̂ ← γ̂ √2

is upper bounded by

  M ≤ 12c ( (m + n) log(m + n) (1/(γ*)²) + merr(S, γ*) ),

with c = 1/(3 − e) ≈ 3.55.
See the appendix for a proof. We now compare our bound to other online learning algorithms for
matrix completion. The algorithms of [16, 17] address matrix completion in a significantly more
general setting. Both algorithms operate with weak assumptions on the loss function, while our
algorithm is restricted to the 0-1 loss (mistake counting). Those papers present regret bounds,
whereas we apply the stronger assumption that there exists a consistent predictor. As a regret bound
is not possible for a deterministic predictor with the 0-1 loss, we compare Theorem 3.1 to their
bound when their algorithm is allowed to predict ŷ ∈ [−1, 1] and uses the absolute loss. For clarity in
our discussion we will assume that m ∈ Θ(n).

Under the above assumptions, the regret bound in [17, Corollary 7] becomes
2√( ‖U‖₁ (m + n)^{1/2} log(m + n) T ). For simplicity we consider the simplified setting in
which each entry is predicted, that is T = mn; then, absorbing polylogarithmic factors, their bound is
Õ(n^{5/4} ‖U‖₁^{1/2}). From Theorem 3.1 we have a bound of Õ(n mc²(U)). Using [11, Theorem 10], we
may upper bound the margin complexity in terms of the trace norm,

  mc(U) ≤ 3 min_{V ∈ SP1(U)} ‖V‖₁^{1/3} ≤ 3 ‖U‖₁^{1/3}.    (4)

Substituting this into Theorem 3.1, our bound is Õ(n ‖U‖₁^{2/3}). Since the trace norm may be bounded
as n ≤ ‖U‖₁ ≤ n^{3/2}, both bounds become vacuous when ‖U‖₁ = n^{3/2}; however if the trace norm
is bounded away from n^{3/2}, the bound of Theorem 3.1 is smaller by a polynomial factor. An aspect
of the bounds which this comparison fails to capture is the fact that since [17, Corollary 7] is a regret
bound it will degrade more smoothly under adversarial noise than Theorem 3.1.
The algorithm in [16] is probabilistic and the regret bound is of Õ(√(‖U‖₁ n)). Unlike [17], the setting
of [16] is transductive, that is each matrix entry is seen only once, and thus less general. If we use the
upper bound from [11, Theorem 10] as in the discussion of [17] then [16] improves uniformly on
our bound and the bound in [17]. However, using this upper bound oversimplifies the comparison,
as 1 ≤ mc²(U) ≤ n while n ≤ ‖U‖₁ ≤ n^{3/2} for U ∈ {−1, 1}^{m×n}. In other words, we have been
very conservative in our comparison; the bound (4) may be loose and our algorithm may often have a
much smaller bound. A specific example is provided by the class of (k, ℓ)-biclustered matrices (see
also the discussion in Section 5 below) where mc²(U) ≤ min(k, ℓ), in which case our bound becomes
nontrivial after Ω̃(min(k, ℓ) n) examples, while the bounds in [16] and [17] become nontrivial after
at least Ω̃(n^{3/2}) and Ω̃(n^{7/4}) examples, respectively.
With respect to computation, our algorithm on each trial requires a single eigenvalue decomposition
of a PSD matrix, whereas the algorithm of [17] requires multiple eigenvalue decompositions per trial.
Although [16] does not discuss the complexity of their algorithm beyond the fact that it is polynomial,
in [17] it is conjectured that it requires at a minimum Ω(n⁴) time per trial.
4 Comparison to the Best Kernel Perceptron

In this section, we observe that Algorithm 1 has a mistake bound that is comparable to Novikoff's
bound [23] for the Kernel Perceptron with an optimal kernel in hindsight. To explain our observation,
we interpret the rows of matrix U as m different binary classification tasks, and the columns as a finite
set of objects which we wish to label; think for example of the users/movies matrix in recommendation
systems. If we solve the tasks independently using a Kernel Perceptron algorithm, we will make
O(1/γ²) mistakes per task, where γ is the largest margin of a consistent hypothesis. If every task has
a margin larger than γ we will make O(m/γ²) mistakes in total. This algorithm and the parameter γ
crucially depend on the kernel used: if there exists a kernel which makes γ large for all (or most of)
the tasks, then the Kernel Perceptron will incur a small number of mistakes on all (or most of) the
tasks. We now argue that our bound mimics this "oracle", without knowing the kernel in advance.
Without loss of generality, we assume m ≥ n (otherwise apply the same reasoning below to the matrix
U^T). In this scenario, Theorem 3.1 upper bounds the number of mistakes as

  O( m log m / γ² ),

where γ is chosen so that merr(S, γ) = 0. To further illustrate our idea, we define the task complexity
of a matrix U ∈ R^{m×n} as

  τ(U) = min { h(V) : V ∈ SP1(U) },   where   h(V) = inf_{K ≻ 0} max_{1≤i≤m} V_i K⁻¹ V_i^T · max_{1≤j≤n} K_{jj}.    (5)

Note that the quantity V_i K⁻¹ V_i^T max_{1≤j≤n} K_{jj} is exactly the bound in Novikoff's Theorem on
the number of mistakes of the Kernel Perceptron on the i-th task with kernel K. Hence the quantity
h(V) represents the best upper bound on the number of mistakes made by a Kernel Perceptron on
the worst (since we take the maximum over i) task.
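For a fixed kernel K, the inner quantity in (5) is straightforward to evaluate; the helper below (ours) computes the per-task Novikoff terms, of which h(V) takes the maximum before minimizing over PSD kernels K.

```python
import numpy as np

def novikoff_bound_terms(V, K):
    # Per-task terms V_i K^{-1} V_i^T * max_j K_jj for a fixed invertible K.
    Kinv = np.linalg.inv(K)
    quad = np.einsum('ij,jk,ik->i', V, Kinv, V)   # V_i K^{-1} V_i^T
    return quad * np.max(np.diag(K))
```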
Proposition 4.1. For every U ∈ R^{m×n}, it holds that mc²(U) = τ(U).

Proof. The result follows by Lemma A.6 presented in the appendix and by the formula mc(U) =
min_{V ∈ SP1(U)} ‖V‖_max, see, e.g., [3, Lemma 3.1].
Returning to the interpretation of the bound in Theorem 3.1, we observe that if no more than r out of
the m tasks have margin smaller than a threshold γ̄, then running Algorithm 1 with parameter γ = γ̄,
Theorem 3.1 gives a bound of

  O( (m − r) log m / γ̄² + r n ).

Thus we essentially "pay" linearly for every object in a difficult task. Since we assume n ≤ m,
provided r is small the bound is "robust" to the presence of bad tasks.
We specialize the above discussion to the case where each of the m tasks is a binary labeling of an
unknown underlying connected graph G := (V, E) with n vertices, and assume that m ≥ n. We
let U ∈ {−1, 1}^{m×n} be the matrix the rows of which are different binary labelings of the graph.
For every i ∈ N_m, we interpret U_i, the i-th row of matrix U, as the i-th labeling of the graph
and let Φ_i be the corresponding cutsize, namely, Φ_i := |{(j, j′) ∈ E : U_{ij} ≠ U_{ij′}}|, and define
Φ_max := max_{1≤i≤m} Φ_i. In order to apply Theorem 3.1, we need to bound the margin complexity of
U. Using the above analysis (Proposition 4.1), this quantity is upper bounded by

  mc²(U) ≤ max_{1≤i≤m} U_i K⁻¹ U_i^T · max_{1≤j≤n} K_{jj}.    (6)

We choose the kernel K := L⁺ + R 11^T, where L is the graph Laplacian of G, the vector 1
has all components equal to one, and R = max_j L⁺_{jj}. Since the graph is connected, 1 is the
only eigenvector of L with zero eigenvalue. Hence K is invertible and K⁻¹ = L + (R 11^T)⁺ =
L + (1/(R n²)) 11^T. Then, using the formula Φ_i = ¼ U_i L U_i^T, we obtain from (6) that

  mc²(U) ≤ max_{1≤i≤m} ( 4Φ_i + 1/R ) R.

Theorem 3.1 then gives a bound of M ≤ O((1 + Φ_max R) m log m). The quantity R may be further
upper bounded by the graph resistance diameter, see for example [24].
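The kernel used in this construction is easy to assemble from an adjacency matrix; the sketch below (ours, assuming a connected graph) follows the definition K = L⁺ + R 11^T.

```python
import numpy as np

def graph_kernel(A):
    """Kernel K = L^+ + R * 11^T from the adjacency matrix A of a
    connected graph."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    L_pinv = np.linalg.pinv(L)
    R = np.max(np.diag(L_pinv))
    n = A.shape[0]
    return L_pinv + R * np.ones((n, n))
```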
5 Biclustering and Near Optimality

The problem of learning a (k, ℓ)-binary-biclustered matrix corresponds to the assumption that the
row indices and column indices represent k and ℓ distinct object types, and that there exists a binary
relation on these objects which determines the matrix entry. Formally we have the following.

Definition 5.1. The class of (k, ℓ)-binary-biclustered matrices is defined as

  B^{m,n}_{k,ℓ} = { U ∈ R^{m×n} : r ∈ N_k^m, c ∈ N_ℓ^n, F ∈ {−1, 1}^{k×ℓ}, U_{ij} = F_{r_i, c_j}, i ∈ N_m, j ∈ N_n }.

The intuition is that a matrix is (k, ℓ)-biclustered if, after a permutation of the rows and columns, the
resulting matrix is a k × ℓ grid of rectangles and all entries in a given rectangle are either 1 or −1.
The problem of determining a (k, ℓ)-biclustered matrix with a minimum number of "violated" entries
given a subset of entries was shown to be NP-hard in [25]. Thus, although we do not give an algorithm
that provides a biclustering, we provide a bound in terms of the best consistent biclustering.
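For experimentation, a (k, ℓ)-biclustered matrix is simple to sample directly from Definition 5.1; the generator below is ours.

```python
import numpy as np

def random_biclustered(m, n, k, l, rng=None):
    """Sample a (k, l)-binary-biclustered matrix U with U_ij = F[r_i, c_j]."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.integers(k, size=m)               # row types
    c = rng.integers(l, size=n)               # column types
    F = rng.choice([-1, 1], size=(k, l))      # type-level binary relation
    return F[np.ix_(r, c)]
```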
Lemma 5.2. If U ∈ B^{m,n}_{k,ℓ} then mc²(U) ≤ min(k, ℓ).

Proof. We use Proposition 4.1 to upper bound mc²(U) by h(U), where the function h is given
in equation (5). We further upper bound h(U) by choosing a kernel matrix in the underlying
optimization problem. By Definition 5.1, there exist r ∈ N_k^m, c ∈ N_ℓ^n and F ∈ {−1, 1}^{k×ℓ}
such that U_{ij} = F_{r_i, c_j}, for every i ∈ N_m and every j ∈ N_n. Then we choose the kernel matrix
K = (K_{jj′})_{1≤j,j′≤n} such that

  K_{jj′} := δ_{c_j, c_{j′}} + ε δ_{j, j′}.

One verifies that U_i K⁻¹ U_i^T ≤ ℓ for every i ∈ {1, . . . , m}; hence by taking the limit ε → 0,
Proposition 4.1 gives that mc²(U) ≤ ℓ. By the symmetry of our construction we can swap ℓ with k,
giving the bound.
Using this lemma with Theorem 3.1 gives us the following upper bound on the number of mistakes.

Corollary 5.3. The number of mistakes of Algorithm 1 applied to sequences generated by a (k, ℓ)-binary-biclustered matrix is upper bounded by O(min(k, ℓ)(m + n) log(m + n)).

A special case of the setting in this corollary was first studied in the mistake bound setting in [14].
In [15] the bound was improved and generalized to include robustness to noise (for simplicity we do
not compare in the noisy setting). In both papers the underlying assumption is that there are k distinct
row types and no restrictions on the number of columns, thus ℓ = n. In this case they obtained an
upper bound of kn + min( (m²/(2e)) log₂ e, m √(3n log₂ k) ). Comparing the two bounds we can see that
when k < n^{1/2−ε} the bound in Corollary 5.3 improves over [15, Corollary 1] by a polynomial factor,
and on the other hand when k ≥ n^{1/2} we are no worse than by a polylogarithmic factor.
We now establish that the mistake bound (3) is tight up to a poly-logarithmic factor.

Theorem 5.4. Given an online algorithm A that predicts the entries of a matrix U ∈ {−1, 1}^{m×n}
and given an ℓ ∈ N_n, there exists a sequence S constructed by an adversary with margin complexity
mc(S) ≤ √ℓ. On this sequence the algorithm A will make at least ℓ · m mistakes.

See the appendix for a proof.
6 Conclusion

In this paper, we presented a Matrix Exponentiated Gradient algorithm for completing the entries of a
binary matrix in an online learning setting. We established a mistake bound for this algorithm, which
is controlled by the margin complexity of the underlying binary matrix. We discussed improvements
of the bound over related bounds for matrix completion. Specifically, we noted that our bound requires
fewer examples before it becomes non-trivial, as compared to the bounds in [16, 17]. Here we require
only Ω̃(m + n) examples, as opposed to the required Ω̃((m + n)^{3/2}) in [16] and Ω̃((m + n)^{7/4}) in [17],
respectively. Thus although our bound is more sensitive to noise, it captures structure more quickly
in the underlying matrix. When interpreting the rows of the matrix as binary tasks, we argued that
our algorithm performs comparably (up to logarithmic factors) to the Kernel Perceptron with the
optimal kernel in retrospect. Finally, we highlighted the example of completing a biclustered matrix
and noted that this is instrumental in showing the optimality of the algorithm in Theorem 5.4.

We observed that Algorithm 1 has a per-trial computational cost which is smaller than currently
available algorithms for matrix completion with online guarantees. In the future it would be valuable
to study if improvements in this computation are possible by exploiting the special structure in our
algorithm. Furthermore, it would be very interesting to study a modification of our analysis to the
case in which the tasks (rows of matrix U) grow over time, a setting which resembles the lifelong
learning frameworks in [26, 27].
Acknowledgements. We wish to thank the anonymous reviewers for their useful comments. This work was
supported in part by EPSRC Grants EP/P009069/1, EP/M006093/1, and by the U.S. Army Research Laboratory
and the U.K. Defence Science and Technology Laboratory, and was accomplished under Agreement Number
W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should
not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research
Laboratory, the U.S. Government, the U.K. Defence Science and Technology Laboratory or the U.K. Government.
The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes
notwithstanding any copyright notation herein.
References
[1] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projection. Journal of Machine Learning Research, 6:995–1018, 2005.
[2] S. Ben-David, N. Eiron, and H. U. Simon. Limitations of learning via embeddings in Euclidean half spaces. Journal of Machine Learning Research, 3:441–461, 2003.
[3] N. Linial, S. Mendelson, G. Schechtman, and A. Shraibman. Complexity measures of sign matrices. Combinatorica, 27(4):439–463, 2007.
[4] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In Proceedings of the 22nd International Conference on Machine Learning, pages 305–312, 2005.
[5] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. 20th International Conference on Machine Learning, pages 912–919, 2003.
[6] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56:209–239, 2004.
[7] N. Cesa-Bianchi, C. Gentile, and F. Vitale. Fast and optimal prediction of a labeled tree. In Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[8] N. Cesa-Bianchi, C. Gentile, F. Vitale, and G. Zappella. Random spanning trees and the prediction of weighted graphs. Journal of Machine Learning Research, 14(1):1251–1284, 2013.
[9] N. Cesa-Bianchi, C. Gentile, and F. Vitale. Predicting the labels of an unknown graph via adaptive exploration. Theoretical Computer Science, 412(19):1791–1804, 2011.
[10] C. Gentile, M. Herbster, and S. Pasteris. Online similarity prediction of networked data from known and unknown graphs. In Proceedings of the 26th Annual Conference on Learning Theory, 2013.
[11] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560, 2005.
[12] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theor., 56(5):2053–2080, May 2010.
[13] A. Maurer and M. Pontil. Excess risk bounds for multitask learning with trace norm regularization. In Proceedings of the 27th Conference on Learning Theory (COLT), pages 55–76, 2013.
[14] S. A. Goldman, R. L. Rivest, and R. E. Schapire. Learning binary relations and total orders. SIAM J. Comput., 22(5), 1993.
[15] S. A. Goldman and M. K. Warmuth. Learning binary relations using weighted majority voting. In Proceedings of the 6th Annual Conference on Computational Learning Theory, pages 453–462, 1993.
[16] N. Cesa-Bianchi and O. Shamir. Efficient online learning via randomized rounding. In Advances in Neural Information Processing Systems 24, pages 343–351, 2011.
[17] E. Hazan, S. Kale, and S. Shalev-Shwartz. Near-optimal algorithms for online matrix prediction. In Proc. 23rd Annual Conference on Learning Theory, volume 23:38.1–38.13. JMLR W&CP, 2012.
[18] S. Arora and S. Kale. A combinatorial, primal-dual approach to semidefinite programs. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 227–236, 2007.
[19] M. K. Warmuth. Winnowing subspaces. In Proceedings of the 24th International Conference on Machine Learning, pages 999–1006, 2007.
[20] J. Nie, W. Kotłowski, and M. K. Warmuth. Online PCA with optimal regrets. In Proceedings of the 24th International Conference on Algorithmic Learning Theory, pages 98–112, 2013.
[21] M. K. Warmuth and D. Kuzmin. Online variance minimization. Machine Learning, 87(1):1–32, 2012.
[22] M. Herbster, S. Pasteris, and S. Pontil. Predicting a switching sequence of graph labelings. Journal of Machine Learning Research, 16:2003–2022, 2015.
[23] A. B. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, pages 615–622, 1962.
[24] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. In Advances in Neural Information Processing Systems 19, pages 577–584, 2006.
[25] S. Wulff, R. Urner, and S. Ben-David. Monochromatic bi-clustering. In Proc. 30th International Conference on Machine Learning, volume 28, pages 145–153. JMLR W&CP, 2013.
[26] P. Alquier, T.-T. Mai, and M. Pontil. Regret bounds for lifelong learning. Preprint, 2016.
[27] M.-F. Balcan, A. Blum, and S. Vempala. Efficient representations for lifelong learning and autoencoding. In Proc. 28th Conference on Learning Theory, pages 191–210, 2015.
[28] R. Bhatia. Matrix Analysis. Springer Verlag, New York, 1997.
6,155 | 6,568 | A Powerful Generative Model Using Random Weights
for the Deep Image Representation
Kun He*†, Yan Wang*
Department of Computer Science and Technology
Huazhong University of Science and Technology, Wuhan 430074, China
[email protected], [email protected]
John Hopcroft
Department of Computer Science
Cornell University, Ithaca 14850, NY, USA
[email protected]
* The three authors contributed equally. † Corresponding author.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Abstract
To what extent is the success of deep visualization due to the training? Could
we do deep visualization using untrained, random weight networks? To address
this issue, we explore new and powerful generative models for three popular deep
visualization tasks using untrained, random weight convolutional neural networks.
First we invert representations in feature spaces and reconstruct images from white
noise inputs. The reconstruction quality is statistically higher than that of the same
method applied on well trained networks with the same architecture. Next we
synthesize textures using scaled correlations of representations in multiple layers
and our results are almost indistinguishable with the original natural texture and
the synthesized textures based on the trained network. Third, by recasting the
content of an image in the style of various artworks, we create artistic images with
high perceptual quality, highly competitive to the prior work of Gatys et al. on
pretrained networks. To our knowledge this is the first demonstration of image
representations using untrained deep neural networks. Our work provides a new
and fascinating tool to study the representation of deep network architecture and
sheds light on new understandings on deep visualization. It may possibly lead to a
way to compare network architectures without training.
1 Introduction
In recent years, Deep Neural Networks (DNNs), especially Convolutional Neural Networks (CNNs),
have demonstrated highly competitive results on object recognition and image classification [1, 2, 3, 4].
With advances in training, there is a growing trend towards understanding the inner working of these
deep networks. By training on a very large image data set, DNNs develop a representation of images
that makes object information increasingly explicit at various levels of the hierarchical architecture.
Significant visualization techniques have been developed to understand the deep image representations
on trained networks [5, 6, 7, 8, 9, 10, 11].
Inversion techniques have been developed to create synthetic images with feature representations
similar to the representations of an original image in one or several layers of the network. Feature
representations are a function $\Phi$ of the source image $x_0$. An approximate inverse $\Phi^{-1}$ is used to construct a new image $x$ from the code $\Phi(x_0)$ by reducing some statistical discrepancy between $\Phi(x)$ and $\Phi(x_0)$. Mahendran et al. [7] use the pretrained CNN AlexNet [2] and define a squared
Euclidean loss on the activations to capture the representation differences and reconstruct the image.
Gatys et al. [8, 12] define a squared loss on the correlations between feature maps of some layers
and synthesize natural textures of high perceptual quality using the pretrained CNN called VGG [3].
Gatys et al. [13] then combine the loss on the correlations as a proxy to the style of a painting and the
loss on the activations to represent the content of an image, and successfully create artistic images
by converting the artistic style to the content image, inspiring several followups [14, 15]. Another
stream of visualization aims to understand what each neuron has learned in a pretrained network
and synthesize an image that maximally activates individual features [5, 9] or the class prediction
scores [6]. Nguyen et al. further try multifaceted visualization to separate and visualize different
features that a neuron learns [16].
Feature inversion and neural activation maximization both start from a white noise image and calculate
the gradient via backpropagation to morph the white noise image and output a natural image. In
addition, some regularizers are incorporated as a natural image prior to improve the visualization
quality, including the α-norm [6], total variation [7], jitter [7], Gaussian blur [9], data-driven patch
priors [17], etc. The method of visualizing the feature representation on the intermediate layers sheds
light on the information represented at each layer of the pretrained CNN.
A third set of researchers trains a separate feed-forward CNN with deconvolutional layers using
representations or correlations of the feature maps produced in the original network as the input and
the source image as the target to learn the inversion of the original network. The philosophy is to
train another neural network to inverse the representation and speedup the visualization on image
reconstruction [10, 18], texture synthesis [19] or even style transfer [15]. Instead of designing a
natural prior, some researchers incorporate adversarial training [20] to improve the realism of the
generated images [18]. Their trained deconvolutional network could give similar qualitative results as
the inversion technique does and is two or three orders of magnitude faster, as the previous inversion
technique needs a forward and backward pass through the pretrained network. This technique is
slightly different from the previous two in that it does not focus on understanding representations
encoded in the original CNN but on the visualization of original images by training another network.
It is well recognized that deep visualization techniques conduct a direct analysis of the visual information contained in image representations, and help us understand the representation encoded
at the intermediate layers of the well trained DNNs. In this paper, we raise a fundamental issue
that other researchers rarely address: Could we do deep visualization using untrained, random
weight DNNs? What kind of deep visualization could be applied on random weight DNNs?
This would allow us to separate the contribution of training from the contribution of the network structure. It might even give us a method to evaluate deep network architectures without
spending days and significant computing resources in training networks so that we could compare them. Also, it will be useful not to have to store the weights, which can have significant
impact for mobile applications. Though Gray et al. demonstrated that the VGG architecture with
random weights failed in generating textures and resulted in white noise images in an experiment
indicating the trained filters might be crucial for texture generation [8], we conjecture the success
of deep visualization mainly originates from the intrinsic nonlinearity and complexity of the deep
network hierarchical structure rather than from the training, and that the architecture itself may
cause the inversion invariant to the original image. Gatys et al.?s unsuccessful attempt on the texture
synthesis using the VGG architecture with random weights may be due to their inappropriate scale of
the weighting factors.
To verify our hypothesis, we try three popular inversion tasks for visualization using the CNN
architecture with random weights. Our results strongly suggest that this is true. Applying inversion
techniques on the untrained VGG with random weights, we reconstruct high perceptual quality
images. The results are qualitatively better than the reconstructed images produced on the pretrained
VGG with the same architecture. Then, we try to synthesize natural textures using the random weight
VGG. With automatic normalization to scale the squared correlation loss for different activation
layers, we succeed in generating similar textures as the prior work of Gatys et al. [8] on well-trained
VGG. Furthermore, we continue the experiments on style transfer, combining the content of an image
and the style of an artwork, and create artistic imagery using random weight CNN.
To our knowledge this is the first demonstration of image representations using untrained deep neural
networks. Our work provides a new and fascinating tool to study the perception and representation of
deep network architecture, and shed light on new understandings on deep visualization. Our work
will inspire more possibilities of using the generative power of CNNs with random weights, which
do not need long training time on multi-GPUs. Furthermore, it is very hard to prove why trained
deep neural networks work so well. Based on the networks with random weights, we might be able
to prove some properties of the deep networks. Our work using random weights shows a possible
way to start developing a theory of deep learning since with well-trained weights, theorems might be
impossible.
2 Methods
In order to better understand the deep representation in the CNN architecture, we focus on three
tasks: inverting the image representation, synthesizing texture, and creating artistic style images.
Our methods are similar in spirit to existing methods [7, 8, 13]. The main difference is that we
use untrained weights instead of trained weights, and we apply weighting factors determined by a
pre-process to normalize the different impact scales of different activation layers on the input layer.
Compared with purely random weight CNN, we select a random weight CNN among a set of random
weight CNNs to get slightly better results.
For the reference network, we choose VGG-19 [3], a convolutional neural network trained on the
1.3 million-image ILSVRC 2012 ImageNet dataset [1] using the Caffe-framework [22]. The VGG
architecture has 16 convolutional and 5 pooling layers, followed by 3 fully connected layers. Gatys
et al. re-train the VGG-19 network using average pooling instead of maximum pooling, which they
suggest could improve the gradient flow and obtain slightly better results [8]. They only consider
the convolutional and pooling layers for texture synthesis, and they rescale the weights such that the
mean activation of each filter over the images and positions is 1. Their trained network is denoted as
VGG in the following discussion. We adopt the same architecture, replacing the weights with purely
random values from a Gaussian distribution $\mathcal{N}(0, \sigma)$. The standard deviation $\sigma$ is set to a small
number like 0.015 in the experiments. The VGG-based random weight network created as described
in the following subsection is used as our reference network, denoted as ranVGG in the following
discussion.
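As a concrete illustration, the sketch below (our own, not the authors' released code) samples untrained filters from $\mathcal{N}(0, 0.015)$ with NumPy; the channel widths shown are only the first two VGG-19 convolutional blocks, for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv_weights(out_channels, in_channels, k=3, sigma=0.015):
    """Sample untrained 'ranVGG'-style filters from a Gaussian N(0, sigma)."""
    return rng.normal(0.0, sigma, size=(out_channels, in_channels, k, k))

# Illustrative channel widths for the first two VGG-19 convolutional blocks.
widths = [(64, 3), (64, 64), (128, 64), (128, 128)]
weights = [random_conv_weights(o, i) for o, i in widths]
print([w.shape for w in weights])
```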
Inverting deep representations. Given a representation function $F^l : \mathbb{R}^{H\times W\times C} \to \mathbb{R}^{N_l\times M_l}$ for the $l$-th layer of a deep network and $F^l(x_0)$ for an input image $x_0$, we want to reconstruct an image $x$ that minimizes the L2 loss between the representations of $x_0$ and $x$:
$$x^* = \operatorname*{argmin}_{x\in\mathbb{R}^{H\times W\times C}} L_{content}(x, x_0, l) = \operatorname*{argmin}_{x\in\mathbb{R}^{H\times W\times C}} \frac{\alpha_l}{2 N_l M_l}\,\|F^l(x) - F^l(x_0)\|_2^2 \qquad (1)$$
Here $H$ and $W$ denote the size of the image, $C = 3$ the color channels, and $\alpha_l$ the weighting factor. We regard the feature map matrix $F^l$ as the representation of the $l$-th layer, which has $N_l \times M_l$ dimensions, where $N_l$ is the number of distinct feature maps, each of size $M_l$ when vectorized. $F^l_{ik}$ denotes the activation of the $i$-th filter at position $k$.
The representations are a chain of non-linear filter banks even if untrained random weights are applied to the network. We initialize the pre_image with white noise, and apply the L_BFGS gradient descent using standard error backpropagation to morph the input pre_image to the target:
$$x_{t+1} = x_t - \frac{\partial L(x, x_0, l)}{\partial F^l}\cdot\frac{\partial F^l}{\partial x}\bigg|_{x_t} \qquad (2)$$
$$\frac{\partial L(x, x_0, l)}{\partial F^l_{i,k}}\bigg|_{x_t} = \frac{\alpha_l}{N_l M_l}\big(F^l(x_t) - F^l(x_0)\big)_{i,k} \qquad (3)$$
The weighting factor $\alpha_l$ is applied to normalize the gradient impact on the morphing image $x$. We use a pre-processing procedure to determine the value of $\alpha_l$. For the current layer $l$, we approximately calculate the maximum possible gradient by Equation (4), and back-propagate the gradient to the input layer. Then we regard the reciprocal of the absolute mean gradient over all pixels and RGB channels as the value of $\alpha_l$, such that the gradient impact of different layers is approximately of the same scale. This normalization does not affect the reconstruction from the activations of a single layer, but is added for the combination of content and style in the style transfer task:
$$\frac{1}{\alpha_l} = \frac{1}{WHC}\sum_{i=1}^{W}\sum_{j=1}^{H}\sum_{k=1}^{C}\left|\frac{\partial L(x_0, x_0, l)}{\partial x_{i,j,k}}\right|_{F^l(x)=0} \qquad (4)$$
To stabilize the reconstruction quality, we apply a greedy approach to build a "stacked" random weight network, ranVGG, based on the VGG-19 architecture. Select one single image as the reference image and, starting from the first convolutional layer, build the stacked random weight VGG by sampling, selecting and fixing the weights of each layer in forward order. For the current layer $l$, fix the weights of the previous $l-1$ layers and sample several sets of random weights connecting the $l$-th layer. Then reconstruct the target image using the rectified representation of layer $l$, and
choose weights yielding the smallest loss. Experiments in the next section show our success on the
reconstruction by using the untrained, random weight CNN, ranVGG.
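To make the inversion step concrete, here is a minimal runnable sketch (ours, assuming SciPy's L-BFGS and a toy two-layer random ReLU feature extractor standing in for ranVGG; the helper names are hypothetical, not from the released code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, h = 256, 512                         # toy "image" size and feature width
W1 = rng.normal(0, 0.015, (h, d))       # fixed, untrained random weights
W2 = rng.normal(0, 0.015, (h, h))

def features(x):
    a1 = np.maximum(W1 @ x, 0.0)        # rectified representations, as in the paper
    return np.maximum(W2 @ a1, 0.0), a1

x0 = rng.uniform(0, 1, d)               # "source image"
F0, _ = features(x0)

def loss_and_grad(x):
    F, a1 = features(x)
    r = F - F0
    loss = 0.5 * np.dot(r, r) / F.size  # squared Euclidean loss of Eq. (1)
    g2 = (r / F.size) * (F > 0)         # backpropagate through both ReLU layers
    g1 = (W2.T @ g2) * (a1 > 0)
    return loss, W1.T @ g1

x_init = rng.normal(0, 0.1, d)          # white-noise initialization
res = minimize(loss_and_grad, x_init, jac=True, method="L-BFGS-B",
               options={"maxiter": 500})
print("reconstruction loss:", res.fun)
```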
Texture synthesis. Can we synthesize natural textures based on the feature space of an untrained deep network? To address this issue, we refer to the method proposed by Gatys et al. [8] and use the correlations between feature responses on each layer as the texture representation. The inner product between pairwise feature maps $i$ and $j$ within each layer $l$, $G^l_{ij} = \sum_k F^l_{ik} F^l_{jk}$, defines a Gram matrix $G^l = F^l (F^l)^T$. We seek a texture image $x$ that minimizes the L2 loss between the correlations of the representations of several candidate layers for $x$ and a ground-truth image $x_0$:
$$x^* = \operatorname*{argmin}_{x\in\mathbb{R}^{H\times W\times C}} L_{texture} = \operatorname*{argmin}_{x\in\mathbb{R}^{H\times W\times C}} \sum_{l\in L} \beta_l\, E(x, x_0, l)\,, \qquad (5)$$
where the contribution of layer $l$ to the total loss is defined as
$$E(x, x_0, l) = \frac{1}{4 N_l^2 M_l^2}\,\|G^l(F^l(x)) - G^l(F^l(x_0))\|_2^2\,. \qquad (6)$$
The derivative of $E(x, x_0, l)$ with respect to the activations $F^l$ in layer $l$ is [8]:
$$\frac{\partial E(x, x_0, l)}{\partial F^l_{i,k}} = \frac{1}{N_l^2 M_l^2}\left\{(F^l(x))^T\big[G^l(F^l(x)) - G^l(F^l(x_0))\big]\right\}_{i,k} \qquad (7)$$
The weighting factor $\beta_l$ is defined similarly to $\alpha_l$, but here we use the loss contribution $E(x, x_0, l)$ of layer $l$ to get its gradient impact on the input layer:
$$\frac{1}{\beta_l} = \frac{1}{WHC}\sum_{i=1}^{W}\sum_{j=1}^{H}\sum_{k=1}^{C}\left|\frac{\partial E(x_0, x_0, l)}{\partial x_{i,j,k}}\right|_{F^l(x)=0} \qquad (8)$$
We then perform the L_BFGS gradient descent using standard error backpropagation to morph the
input image to a synthesized texture image using the untrained ranVGG.
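A minimal sketch of the Gram-matrix texture loss of Equations (6)-(7) follows, assuming NumPy and a feature-map matrix of shape $(N_l, M_l)$; the function names are ours:

```python
import numpy as np

def gram(F):
    """Gram matrix G^l = F^l (F^l)^T for an (N_l, M_l) feature-map matrix."""
    return F @ F.T

def texture_loss_and_grad(F, F0):
    """Layer contribution E(x, x0, l) of Eq. (6) and its gradient w.r.t. F^l (Eq. 7)."""
    N, M = F.shape
    D = gram(F) - gram(F0)
    loss = np.sum(D ** 2) / (4 * N ** 2 * M ** 2)
    grad = (D @ F) / (N ** 2 * M ** 2)   # D is symmetric, so D @ F == (F.T @ D).T
    return loss, grad

F0 = np.random.default_rng(0).normal(size=(8, 64))   # ground-truth feature maps
print(texture_loss_and_grad(np.zeros_like(F0), F0)[0])
```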
Style transfer. Can we use the untrained deep network to create artistic images? Referring to the prior work of Gatys et al. [13] on the feature responses of VGG trained on ImageNet, we use an untrained VGG and succeed in separating and recombining the content and style of arbitrary images. The objective requires terms for content and style, respectively, with suitable combination factors. For content we use the method of reconstruction on middle-layer representations, and for style we use the method of synthesizing texture on lower- through higher-layer representation correlations. Let $x_c$ be the content image and $x_s$ the style image. We combine the content of the former and the style of the latter by optimizing the following objective:
$$x^* = \operatorname*{argmin}_{x\in\mathbb{R}^{H\times W\times C}} \alpha\,L_{content}(x, x_c) + \beta\,L_{texture}(x, x_s) + \gamma\,R(x) \qquad (9)$$
Here $\alpha$ and $\beta$ are the contributing factors for content and style, respectively. We apply a regularizer $R(x)$, the total variation (TV) [7], defined as the squared sum of the differences of adjacent pixels of $x$, to encourage spatial smoothness in the output image.
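The sketch below (ours) illustrates the combined objective of Equation (9) with the TV regularizer; `content_loss` and `style_loss` stand for the losses of Equations (1) and (5) and are passed in as callables, an interface of our own choosing:

```python
import numpy as np

def tv(x):
    """Total-variation regularizer R(x): squared differences of adjacent pixels."""
    dh = x[1:, :] - x[:-1, :]
    dw = x[:, 1:] - x[:, :-1]
    return np.sum(dh ** 2) + np.sum(dw ** 2)

def transfer_objective(x, content_loss, style_loss,
                       alpha=100.0, beta=1.0, gamma=1000.0):
    """Eq. (9): alpha * L_content + beta * L_texture + gamma * R(x),
    using the alpha:beta:gamma = 100:1:1000 ratio of the experiments."""
    return alpha * content_loss(x) + beta * style_loss(x) + gamma * tv(x)

x = np.zeros((4, 4))   # dummy image and dummy losses, just to exercise the code
print(transfer_objective(x, content_loss=lambda x: 0.0, style_loss=lambda x: 0.0))
```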
3 Experiments
This section evaluates the results obtained by our model using the untrained network ranVGG.³
³ https://github.com/mileyan/random_weights
The input image is required to be of size 256 × 256 if we want to invert the representation of the fully connected layers. Otherwise, the input can be of arbitrary size.
Inverting deep representations. We select several source images from the ILSVRC 2012 challenge [1] validation data as examples for the inversion task, and choose a monkey image as the
reference image to build the stacked ranVGG (note that using another image as the reference image also returns similar results). As compared with the inversion technique of Mahendran et al. [7], we
only consider the Euclidean loss over the activations and ignore the regularizer they used to capture
the natural image prior. ranVGG contains 19 layers of random weights (16 convolutional layers and 3
fully connected layers), plus 5 pooling layers. Mahendran et al. use a reference network AlexNet [2]
which contains 8 layers of trained weights (5 convolutional layers and 3 fully connected layers), plus
3 pooling layers.
Figure 1 shows that we reach higher perceptive reconstructions. The reason may lie in the fact that the VGG architecture uses filters with a small receptive field of 3 × 3 and we adopt average pooling. Though shallower than VGG, their reference network, AlexNet, adopts larger filters and uses maximum pooling, which makes it harder to invert images well and easily leads to spikes. That's why they used regularizers to polish the reconstructed image. Figure 2 shows more examples
(house, flamingo, girl).
Figure 3 shows the variations on an example image, the girl. As compared with the VGG with purely
random weights, ranVGG (the VGG with stacked random weights) exhibits lower variations and
lower reconstruction distances. As compared with the trained VGG, both stacked ranVGG and VGG
with purely random weights exhibit lower reconstruction distance with lower variations. ranVGG
demonstrates more stable and higher performance for the inversion task and is slightly better than a purely random VGG. So we will use ranVGG for the following experiments.
To compare the convergence of ranVGG and VGG, Figure 4 shows the loss (average Euclidean
distance) along the gradient descent iterations on an example image, the house. The reconstruction
converges much quicker on ranVGG and yields higher perceptual quality results. Note that the
reconstruction on VGG remains the same even if we double the iteration limits to 4000 iterations.
Texture synthesis. Figure 5 shows the textures synthesized by our model on ranVGG for several
natural texture images (fifth row) selected from a texture website⁴ and the artwork Starry Night by Vincent van Gogh (1889). Each row of images was generated using an increasing number of convolutional layers to constrain the gradient descent: conv1_1 for the first row, conv1_1 and conv2_1 for the second row, etc. (the labels at each row indicate the top-most layer included). The joint matching of conv1_1, conv2_1, and conv3_1 (third row) already exhibits high quality texture
representations. Adding one more layer of conv4_1 (fourth row) could slightly improve the natural
textures. By comparison, results of Gatys et al.[8] on the trained VGG using four convolutional layers
up to conv4_1 are shown in the bottom row.
Our experiments show that with suitable weighting factors, calculated automatically by our method, ranVGG can synthesize complex natural textures that are almost indistinguishable from the original texture and the textures synthesized on the trained VGG. The trained VGG generates slightly better textures on neatly arranged original textures (cargo, second column of Figure 5).
Style transfer. We select conv2_2 as the content layer, and use the combination of conv1_1,
conv2_1, ..., conv5_1 as the style. We set the ratio of $\alpha : \beta : \gamma = 100 : 1 : 1000$ in the experiments.
We first compare our style transfer results with the prior work of Gatys et al. [13] on several well-known artworks for the style: Starry Night by Vincent van Gogh (1889), Der Schrei by Edvard Munch (1893), Picasso by Pablo Picasso (1907), Woman with a Hat by Henri Matisse (1905), and Meadow with Poplars by Claude Monet (1875). As shown in Figure 6 (second row), by recasting the content of a university image in the style of the five artworks, we obtain different artistic images based on the untrained ranVGG. Our results are comparable to their work [13] on the pretrained VGG (third row), and are of the same order of magnitude; theirs have slightly smoother lines and textures, which may be attributed to the training. We further try the content and style combination on some Chinese paintings and scenery photographs, as shown in Figure 7, and create high perceptual quality artistic Chinese paintings that combine the style of the painting with the content of the scenery.
⁴ http://www.textures.com/
[Figure 1: image grid; columns pool1, pool2, pool3/conv3, pool4/conv4, pool5; rows: Ours on ranVGG, Ours on VGG, [7] on AlexNet.]
Figure 1: Reconstructions from layers of ranVGG (top), the pretrained VGG (middle), and [7] (bottom). As AlexNet only contains 3 pooling layers, we compare their results on conv3 and conv4 with ours on pool3 and pool4. Our method on ranVGG demonstrates a higher perceptive quality, especially on the higher layers. Note that VGG is much deeper than AlexNet even when we compare on the same pooling layer.
[Figure 2: image grid; rows pool1, pool3, pool5; columns alternating ranVGG and VGG.]
Figure 2: Reconstructions from different pooling layers of the untrained ranVGG and the pretrained VGG. ranVGG demonstrates a higher perceptive quality, especially on the higher layers. The pretrained VGG can rarely reconstruct even the contours from representations of the fifth pooling layer.
Figure 3: Variations in samples on the girl image, with maximum, minimum, mean and quartiles.
Figure 4: Reconstruction qualities of conv5_1 during the gradient descent iterations.
[Figure 5: texture grid; columns Camouflage, Cargo, Floors, Flowers, Leaves, Starry Night; rows conv1_1, conv2_1, conv3_1, conv4_1, original, trained conv4_1.]
Figure 5: Generated textures using random weights. Each row corresponds to a different processing stage in ranVGG. Considering only the lowest layer, conv1_1, the synthesized textures are of lowest granularity, showing very local structure. Increasing the number of layers on which we match the texture representation (conv1_1 plus conv2_1 for the second row, etc.), we obtain higher organizations of the previous local structure. The third and fourth rows show high-quality synthesized textures of the original images. The lowest row corresponds to the result of using the trained VGG to match the texture representation from conv1_1, conv2_1, conv3_1 and conv4_1.
[Figure 6: image grid; first row: the original photograph and the five style artworks (Starry Night, Der Schrei, Picasso, Woman with a Hat, Meadow with Poplars); middle row: ours on ranVGG; bottom row: [13] on VGG.]
Figure 6: Artistic style images of ours on the untrained ranVGG (middle row) and of Gatys et al. [13] on the pretrained VGG (bottom row). We select a university image (first row, center) and several well-known artworks for the style (first row, other images). The third column under the photograph is for the Picasso. We obtain similar quality results as compared with Gatys et al. [13].
[Figure 7: three columns; Chinese painting, photograph, created image.]
Figure 7: Style transfer of Chinese paintings on the untrained ranVGG. We select several Chinese paintings for the style (first column), including The Great Wall by Songyan Qian (1975), a painting by an anonymous author, and Beautiful Landscape by Ping Yang. We select mountain photographs (second column) as the content images. The images created by the untrained ranVGG are shown in the third column; the network seems to have learned how to paint the rocks and clouds from the paintings of the first column and transfers the style to the content to "draw" Chinese landscape paintings.
4 Discussion
Our work offers a testable hypothesis about the representation of image appearance based only on
the network structure. The success of deep visualization on untrained, random weight networks
raises several fundamental questions in the area of deep learning. Researchers have developed many
visualization techniques to understand the representation of well trained deep networks. However, if
we could do the same or similar visualization using an untrained network, then the understanding
is not for the training but for the network architecture. What is the difference between a trained network and a random weight network with the same architecture, and how could we explore that difference? What else could one do using the generative power of untrained, random weight networks? Exploring other visualization tasks in computer vision developed on well-trained networks, such as image morphing [23], would be a promising direction.
Training deep neural networks not only requires a long time but also significant high performance
computing resources. The VGG network, which contains 11-19 weight layers depending on the
typical architecture [3], takes 2 to 3 weeks on a system equipped with 4 NVIDIA Titan Black GPUs
for training a single net. The residual network ResNet, which achieved state-of-the-art results in
image classification and detection in 2015 [4], takes 3.5 days for the 18-layer model and 14 days for
the 101-layer model using 4 NVIDIA Kepler GPU.5 Could we evaluate a network structure without
taking a long time to train it? There are some prior works to deal with this issue but they deal with
much shallow networks [21]. In future work, we will address this issue by utilizing the untrained
network to attempt to compare networks quickly without having to train them.
Acknowledgments
This research work was supported by the US Army Research Office (W911NF-14-1-0477), the National Science Foundation of China (61472147), and the National Science Foundation of Hubei Province (2015CFB566).
⁵ http://torch.ch/blog/2016/02/04/resnets.html
References
[1] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
[2] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
[5] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Université de Montréal Technical Report 4323, 2009.
[6] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR, 2014.
[7] Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In CVPR, pages 5188–5196, 2015.
[8] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In NIPS, pages 262–270, May 2015.
[9] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. In Deep Learning Workshop at ICML, 2015.
[10] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In CVPR, pages 4829–4837, 2016.
[11] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, 2015.
[12] L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. arXiv:1505.07376, 2015.
[13] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv:1508.06576, 2015.
[14] Yaroslav Nikulin and Roman Novak. Exploring the neural algorithm of artistic style. arXiv:1602.07188, 2016.
[15] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[16] Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. arXiv:1602.03616, 2016.
[17] Donglai Wei, Bolei Zhou, Antonio Torralba, and William T. Freeman. Understanding intra-class knowledge inside CNN. arXiv:1507.02379, 2015.
[18] Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
[19] Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016.
[20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
[21] Andrew Saxe, Pang W. Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Y. Ng. On random weights and unsupervised feature learning. In ICML, pages 1089–1096, 2011.
[22] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678, 2014.
[23] Jacob R. Gardner, Paul Upchurch, Matt J. Kusner, Yixuan Li, Kilian Q. Weinberger, and John E. Hopcroft. Deep manifold traversal: Changing labels with convolutional features. arXiv:1511.06421, 2015.
6,156 | 6,569 | PAC-Bayesian Theory Meets Bayesian Inference
Pascal Germain*, Francis Bach*, Alexandre Lacoste†, Simon Lacoste-Julien*
* INRIA Paris - École Normale Supérieure, [email protected]
† Google, [email protected]
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Abstract
We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the
Bayesian marginal likelihood. That is, for the negative log-likelihood loss function, we show that the minimization of PAC-Bayesian generalization risk bounds
maximizes the Bayesian marginal likelihood. This provides an alternative explanation to the Bayesian Occam's razor criteria, under the assumption that the data
is generated by an i.i.d. distribution. Moreover, as the negative log-likelihood is
an unbounded loss function, we motivate and propose a PAC-Bayesian theorem
tailored for the sub-gamma loss family, and we show that our approach is sound on
classical Bayesian linear regression tasks.
1
Introduction
Since its early beginning [24, 34], the PAC-Bayesian theory claims to provide "PAC guarantees to Bayesian algorithms" (McAllester [24]). However, despite the amount of work dedicated to this statistical learning theory (many authors improved the initial results [8, 21, 25, 30, 35] and/or generalized them for various machine learning setups [4, 12, 15, 20, 28, 31, 32, 33]), it is mostly used as a frequentist method. That is, under the assumption that the learning samples are i.i.d.-generated by a data-distribution, this theory expresses probably approximately correct (PAC) bounds on the generalization risk. In other words, with probability $1-\delta$, the generalization risk is at most $\varepsilon$ away from the training risk. The Bayesian side of PAC-Bayes comes mostly from the fact that these bounds are expressed on the averaging/aggregation/ensemble of multiple predictors (weighted by a posterior distribution) and incorporate prior knowledge. Although it is still sometimes referred to as a theory that bridges the Bayesian and frequentist approaches [e.g., 16], it has been merely used to justify Bayesian methods until now.¹
¹ Some existing connections [3, 6, 14, 19, 29, 30, 36] are discussed in Appendix A.1.
In this work, we provide a direct connection between Bayesian inference techniques [summarized
by 5, 13] and PAC-Bayesian risk bounds in a general setup. Our study is based on a simple
but insightful connection between the Bayesian marginal likelihood and PAC-Bayesian bounds
(previously mentioned by Grünwald [14]) obtained by considering the negative log-likelihood loss function (Section 3). By doing so, we provide an alternative explanation for the Bayesian Occam's razor criteria [18, 22] in the context of model selection, expressed as the complexity-accuracy
trade-off appearing in most PAC-Bayesian results. In Section 4, we extend PAC-Bayes theorems
to regression problems with unbounded loss, adapted to the negative log-likelihood loss function.
Finally, we study the Bayesian model selection from a PAC-Bayesian perspective (Section 5), and
illustrate our finding on classical Bayesian regression tasks (Section 6).
2 PAC-Bayesian Theory
We denote the learning sample $(X, Y)=\{(x_i, y_i)\}_{i=1}^n \in (\mathcal{X}\times\mathcal{Y})^n$, which contains $n$ input-output pairs. The main assumption of frequentist learning theories, including PAC-Bayes, is that $(X, Y)$ is randomly sampled from a data-generating distribution that we denote $\mathcal{D}$. Thus, we denote by $(X, Y)\sim\mathcal{D}^n$ the i.i.d. observation of $n$ elements. From a frequentist perspective, we consider in this work loss functions $\ell : \mathcal{F}\times\mathcal{X}\times\mathcal{Y} \to \mathbb{R}$, where $\mathcal{F}$ is a (discrete or continuous) set of predictors $f : \mathcal{X}\to\mathcal{Y}$, and we write the empirical risk on the sample $(X, Y)$ and the generalization error on distribution $\mathcal{D}$ as
$$\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f, x_i, y_i)\,; \qquad \mathcal{L}^{\ell}_{\mathcal{D}}(f) = \mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}} \ell(f, x, y)\,.$$
The PAC-Bayesian theory [24, 25] studies an averaging of the above losses according to a posterior distribution $\hat\rho$ over $\mathcal{F}$. That is, it provides probably approximately correct generalization bounds on the (unknown) quantity $\mathbb{E}_{f\sim\hat\rho}\,\mathcal{L}^{\ell}_{\mathcal{D}}(f) = \mathbb{E}_{f\sim\hat\rho}\,\mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell(f,x,y)$, given the empirical estimate $\mathbb{E}_{f\sim\hat\rho}\,\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f)$ and some other parameters. Among these, most PAC-Bayesian theorems rely on the Kullback-Leibler divergence $\mathrm{KL}(\hat\rho\,\|\,\pi) = \mathbb{E}_{f\sim\hat\rho} \ln[\hat\rho(f)/\pi(f)]$ between a prior distribution $\pi$ over $\mathcal{F}$ (specified before seeing the learning sample $X, Y$) and the posterior $\hat\rho$ (typically obtained by feeding a learning process with $(X, Y)$).
Two appealing aspects of PAC-Bayesian theorems are that they provide data-driven generalization bounds that are computed on the training sample (i.e., they do not rely on a testing sample), and that they are uniformly valid for all $\hat\rho$ over $\mathcal{F}$. This explains why many works study them as model selection criteria or as an inspiration for learning algorithm conception. Theorem 1, due to Catoni [8], has been used to derive or study learning algorithms [10, 17, 26, 27].
Theorem 1 (Catoni [8]). Given a distribution $\mathcal{D}$ over $\mathcal{X}\times\mathcal{Y}$, a hypothesis set $\mathcal{F}$, a loss function $\ell' : \mathcal{F}\times\mathcal{X}\times\mathcal{Y} \to [0,1]$, a prior distribution $\pi$ over $\mathcal{F}$, a real number $\delta\in(0,1]$, and a real number $\beta > 0$, with probability at least $1-\delta$ over the choice of $(X,Y)\sim\mathcal{D}^n$, we have
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho} \mathcal{L}^{\ell'}_{\mathcal{D}}(f) \;\le\; \frac{1}{1-e^{-\beta}}\left[1-\exp\left(-\beta \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell'}_{X,Y}(f) - \frac{1}{n}\Big[\mathrm{KL}(\hat\rho\|\pi)+\ln\frac{1}{\delta}\Big]\right)\right]. \qquad (1)$$
Theorem 1 is limited to loss functions mapping to the range $[0,1]$. Through a straightforward rescaling we can extend it to any bounded loss, i.e., $\ell : \mathcal{F}\times\mathcal{X}\times\mathcal{Y} \to [a,b]$, where $[a,b]\subset\mathbb{R}$. This is done by using $\beta := b-a$ and the rescaled loss function $\ell'(f,x,y) := (\ell(f,x,y)-a)/(b-a) \in [0,1]$. After a few arithmetic manipulations, we can rewrite Equation (1) as
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho} \mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; a + \frac{b-a}{1-e^{a-b}}\left[1-\exp\left(-\mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + a - \frac{1}{n}\Big[\mathrm{KL}(\hat\rho\|\pi)+\ln\frac{1}{\delta}\Big]\right)\right]. \qquad (2)$$
From an algorithm design perspective, Equation (2) suggests optimizing a trade-off between the empirical expected loss and the Kullback-Leibler divergence. Indeed, for fixed $\pi$, $X$, $Y$, $n$, and $\delta$, minimizing Equation (2) is equivalent to finding the distribution $\hat\rho$ that minimizes
$$n \mathop{\mathbb{E}}_{f\sim\hat\rho} \widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \mathrm{KL}(\hat\rho\|\pi)\,. \qquad (3)$$
It is well known [1, 8, 10, 21] that the optimal Gibbs posterior $\hat\rho^*$ is given by
$$\hat\rho^*(f) = \frac{1}{Z_{X,Y}}\,\pi(f)\, e^{-n \widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f)}\,, \qquad (4)$$
where $Z_{X,Y}$ is a normalization term. Notice that the constant $\beta$ of Equation (1) is now absorbed in the loss function as the rescaling factor, setting the trade-off between the expected empirical loss and $\mathrm{KL}(\hat\rho\|\pi)$.
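For a finite hypothesis set, both the optimal Gibbs posterior of Equation (4) and the bound of Equation (2) can be computed exactly; the sketch below (our own illustration, assuming NumPy and losses lying in $[a,b]$) does so:

```python
import numpy as np

def gibbs_posterior(prior, emp_losses, n):
    """Optimal Gibbs posterior of Eq. (4): rho*(f) proportional to pi(f) exp(-n Lhat(f))."""
    logw = np.log(prior) - n * emp_losses
    logw -= logw.max()                      # numerical stability
    w = np.exp(logw)
    return w / w.sum()

def catoni_bound(prior, emp_losses, n, delta, a=0.0, b=1.0):
    """Right-hand side of Eq. (2), evaluated at the optimal Gibbs posterior."""
    rho = gibbs_posterior(prior, emp_losses, n)
    kl = np.sum(rho * (np.log(rho) - np.log(prior)))
    inner = -np.dot(rho, emp_losses) + a - (kl + np.log(1 / delta)) / n
    return a + (b - a) / (1 - np.exp(a - b)) * (1 - np.exp(inner))

prior = np.full(5, 0.2)                     # uniform prior over 5 predictors
emp = np.array([0.10, 0.15, 0.30, 0.45, 0.50])
print(catoni_bound(prior, emp, n=200, delta=0.05))
```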
3 Bridging Bayes and PAC-Bayes
In this section, we show that by choosing the negative log-likelihood loss function, minimizing the
PAC-Bayes bound is equivalent to maximizing the Bayesian marginal likelihood. To obtain this
result, we first consider the Bayesian approach that starts by defining a prior $p(\theta)$ over the set of possible model parameters $\theta$. This induces a set of probabilistic estimators $f_\theta \in \mathcal{F}$, mapping $x$ to a probability distribution over $\mathcal{Y}$. Then, we can estimate the likelihood of observing $y$ given $x$ and $\theta$, i.e., $p(y|x,\theta) \triangleq f_\theta(y|x)$.² Using Bayes' rule, we obtain the posterior $p(\theta|X,Y)$:
$$p(\theta|X,Y) = \frac{p(\theta)\, p(Y|X,\theta)}{p(Y|X)} \;\propto\; p(\theta)\, p(Y|X,\theta)\,, \qquad (5)$$
where $p(Y|X,\theta) = \prod_{i=1}^{n} p(y_i|x_i,\theta)$ and $p(Y|X) = \int_\Theta p(\theta)\, p(Y|X,\theta)\, d\theta$.
² To stay aligned with the PAC-Bayesian setup, we only consider the discriminative case in this paper. One can extend to the generative setup by considering a likelihood of the form $p(y,x|\theta)$ instead.
To bridge the Bayesian approach with the PAC-Bayesian framework, we consider the negative log-likelihood loss function [3], denoted $\ell_{\mathrm{nll}}$ and defined by
$$\ell_{\mathrm{nll}}(f_\theta, x, y) \;\triangleq\; -\ln p(y|x,\theta)\,. \qquad (6)$$
Then, we can relate the empirical loss $\widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}$ of a predictor to its likelihood:
$$\widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \ell_{\mathrm{nll}}(\theta, x_i, y_i) = -\frac{1}{n}\sum_{i=1}^{n}\ln p(y_i|x_i,\theta) = -\frac{1}{n}\ln p(Y|X,\theta)\,,$$
or, the other way around,
$$p(Y|X,\theta) = e^{-n \widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta)}\,. \qquad (7)$$
Unfortunately, existing PAC-Bayesian theorems work with bounded loss functions or in very specific contexts [e.g., 9, 36], and $\ell_{\mathrm{nll}}$ spans the whole real axis in its general form. In Section 4, we explore PAC-Bayes bounds for unbounded losses. Meanwhile, we consider priors with bounded likelihood. This can be done by assigning a prior of zero to any $\theta$ yielding $\ln\frac{1}{p(y|x,\theta)} \notin [a,b]$.
Now, using Equation (7), the optimal posterior (Equation 4) simplifies to
$$\hat\rho^*(\theta) = \frac{\pi(\theta)\, e^{-n \widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta)}}{Z_{X,Y}} = \frac{p(\theta)\, p(Y|X,\theta)}{p(Y|X)} = p(\theta|X,Y)\,, \qquad (8)$$
where the normalization constant $Z_{X,Y}$ corresponds to the Bayesian marginal likelihood:
$$Z_{X,Y} \;\triangleq\; p(Y|X) \;=\; \int_\Theta \pi(\theta)\, e^{-n \widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta)}\, d\theta\,. \qquad (9)$$
This shows that the optimal PAC-Bayes posterior given by the generalization bound of Theorem 1 coincides with the Bayesian posterior, when one chooses $\ell_{\mathrm{nll}}$ as loss function and $\beta := b-a$ (as in Equation 2). Moreover, using the posterior of Equation (8) inside Equation (3), we obtain
$$n \mathop{\mathbb{E}}_{\theta\sim\hat\rho^*} \widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta) + \mathrm{KL}(\hat\rho^*\|\pi)$$
$$= n\int_\Theta \frac{\pi(\theta)\,e^{-n\widehat{\mathcal{L}}(\theta)}}{Z_{X,Y}}\,\widehat{\mathcal{L}}(\theta)\, d\theta + \int_\Theta \frac{\pi(\theta)\,e^{-n\widehat{\mathcal{L}}(\theta)}}{Z_{X,Y}} \ln\!\left[\frac{\pi(\theta)\,e^{-n\widehat{\mathcal{L}}(\theta)}}{\pi(\theta)\, Z_{X,Y}}\right] d\theta$$
$$= \int_\Theta \frac{\pi(\theta)\,e^{-n\widehat{\mathcal{L}}(\theta)}}{Z_{X,Y}} \Big[n\widehat{\mathcal{L}}(\theta) - n\widehat{\mathcal{L}}(\theta) - \ln Z_{X,Y}\Big] d\theta = -\frac{1}{Z_{X,Y}}\, Z_{X,Y} \ln Z_{X,Y} = -\ln Z_{X,Y}\,, \qquad (10)$$
where we write $\widehat{\mathcal{L}}(\theta)$ for $\widehat{\mathcal{L}}^{\,\ell_{\mathrm{nll}}}_{X,Y}(\theta)$ to lighten the notation.
In other words, minimizing the PAC-Bayes bound is equivalent to maximizing the marginal likelihood. Thus, from the PAC-Bayesian standpoint, the latter encodes a trade-off between the averaged
negative log-likelihood loss function and the prior-posterior Kullback-Leibler divergence. Note that
Equation (10) has been mentioned by Grünwald [14], based on an earlier observation of Zhang [36]. However, the PAC-Bayesian theorems proposed by the latter do not bound the generalization loss directly, as do the "classical" PAC-Bayesian results [8, 24, 29] that we extend to regression in the forthcoming Section 4 (see the corresponding remarks in Appendix A.1).
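Equation (10) is straightforward to verify numerically on a discrete parameter set; in the sketch below (ours, assuming NumPy), both sides agree up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 50, 8
prior = rng.dirichlet(np.ones(K))           # prior pi over K parameter values
nll = rng.uniform(0.2, 2.0, K)              # empirical losses Lhat_nll(theta)

logw = np.log(prior) - n * nll
logZ = np.log(np.sum(np.exp(logw - logw.max()))) + logw.max()   # log marginal likelihood
rho = np.exp(logw - logZ)                   # Bayesian posterior of Eq. (8)

lhs = n * np.dot(rho, nll) + np.sum(rho * (np.log(rho) - np.log(prior)))
print(lhs, -logZ)                           # the two sides of Eq. (10) coincide
```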
We conclude this section by proposing a compact form of Theorem 1 by expressing it in terms of the
marginal likelihood, as a direct consequence of Equation (10).
Corollary 2. Given a data distribution $\mathcal{D}$, a parameter set $\Theta$, a prior distribution $\pi$ over $\Theta$, and $\delta\in(0,1]$, if $\ell_{\mathrm{nll}}$ lies in $[a,b]$, we have, with probability at least $1-\delta$ over the choice of $(X,Y)\sim\mathcal{D}^n$,
$$\mathop{\mathbb{E}}_{\theta\sim\hat\rho^*} \mathcal{L}^{\ell_{\mathrm{nll}}}_{\mathcal{D}}(\theta) \;\le\; a + \frac{b-a}{1-e^{a-b}}\left[1 - e^{a}\sqrt[n]{\delta\, Z_{X,Y}}\right],$$
where $\hat\rho^*$ is the Gibbs optimal posterior (Eq. 8) and $Z_{X,Y}$ is the marginal likelihood (Eq. 9).
In Section 5, we exploit the link between PAC-Bayesian bounds and the Bayesian marginal likelihood to expose similarities between both frameworks in the context of model selection. Beforehand, Section 4 extends the PAC-Bayesian generalization guarantees to unbounded loss functions. This is mandatory to make our study fully valid, as the negative log-likelihood loss function is in general unbounded (as are other common regression losses).
4 PAC-Bayesian Bounds for Regression
This section aims to extend the PAC-Bayesian results of Section 3 to real-valued unbounded losses.
These results are used in forthcoming sections to study `nll , but they are valid for broader classes of
loss functions. Importantly, our new results are focused on regression problems, as opposed to the
usual PAC-Bayesian classification framework.
The new bounds are obtained through a recent theorem of Alquier et al. [1], stated below (we provide
a proof in Appendix A.2 for completeness).
Theorem 3 (Alquier et al. [1]). Given a distribution $\mathcal{D}$ over $\mathcal{X}\times\mathcal{Y}$, a hypothesis set $\mathcal{F}$, a loss function $\ell : \mathcal{F}\times\mathcal{X}\times\mathcal{Y} \to \mathbb{R}$, a prior distribution $\pi$ over $\mathcal{F}$, a $\delta\in(0,1]$, and a real number $\lambda > 0$, with probability at least $1-\delta$ over the choice of $(X,Y)\sim\mathcal{D}^n$, we have
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho}\mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \frac{1}{\lambda}\left[\mathrm{KL}(\hat\rho\|\pi) + \ln\frac{1}{\delta} + \Psi_{\ell,\pi,\mathcal{D}}(\lambda, n)\right], \qquad (11)$$
where
$$\Psi_{\ell,\pi,\mathcal{D}}(\lambda, n) = \ln \mathop{\mathbb{E}}_{f\sim\pi}\mathop{\mathbb{E}}_{X',Y'\sim\mathcal{D}^n} \exp\left[\lambda\Big(\mathcal{L}^{\ell}_{\mathcal{D}}(f) - \widehat{\mathcal{L}}^{\,\ell}_{X',Y'}(f)\Big)\right]. \qquad (12)$$
Alquier et al. used Theorem 3 to design a learning algorithm for $\{0,1\}$-valued classification losses. Indeed, a bounded loss function $\ell : \mathcal{F}\times\mathcal{X}\times\mathcal{Y} \to [a,b]$ can be used along with Theorem 3 by applying Hoeffding's lemma to Equation (12), which gives $\Psi_{\ell,\pi,\mathcal{D}}(\lambda, n) \le \lambda^2(b-a)^2/(2n)$. More specifically, with $\lambda := n$, we obtain the following bound:
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho}\mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \frac{1}{n}\left[\mathrm{KL}(\hat\rho\|\pi) + \ln\frac{1}{\delta}\right] + \frac{1}{2}(b-a)^2\,. \qquad (13)$$
Note that the latter bound leads to the same trade-off as Theorem 1 (expressed by Equation 3). However, the choice $\lambda := n$ has the inconvenience that the bound value is at least $\frac12(b-a)^2$, even in the limit $n\to\infty$. With $\lambda := \sqrt{n}$, the bound converges (a result similar to Equation (14) is also formulated by Pentina and Lampert [28]):
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho}\mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \frac{1}{\sqrt{n}}\left[\mathrm{KL}(\hat\rho\|\pi) + \ln\frac{1}{\delta} + \frac{1}{2}(b-a)^2\right]. \qquad (14)$$
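The small computation below (ours) contrasts the two choices: with $\lambda = n$ the bound of Equation (13) plateaus at $\frac12(b-a)^2$, while with $\lambda = \sqrt{n}$ the bound of Equation (14) converges to the empirical term:

```python
import numpy as np

def alquier_bound(emp, kl, n, lam, delta=0.05, a=0.0, b=1.0):
    """RHS of Eq. (11) with the Hoeffding bound Psi <= lam^2 (b-a)^2 / (2n)."""
    return emp + (kl + np.log(1 / delta) + lam ** 2 * (b - a) ** 2 / (2 * n)) / lam

emp, kl = 0.2, 5.0
for n in [100, 1000, 10000, 100000]:
    print(n, alquier_bound(emp, kl, n, lam=n), alquier_bound(emp, kl, n, lam=np.sqrt(n)))
```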
Sub-Gaussian losses. In a regression context, it may be restrictive to consider strictly bounded loss functions. Therefore, we extend Theorem 3 to sub-Gaussian losses. We say that a loss function $\ell$ is sub-Gaussian with variance factor $s^2$ under a prior $\pi$ and a data-distribution $\mathcal{D}$ if it can be described by a sub-Gaussian random variable $V = \mathcal{L}^{\ell}_{\mathcal{D}}(f) - \ell(f,x,y)$, i.e., its moment generating function is upper bounded by the one of a normal distribution of variance $s^2$ (see Boucheron et al. [7, Section 2.3]):
$$\Psi_V(\lambda) = \ln \mathbb{E}\, e^{\lambda V} = \ln \mathop{\mathbb{E}}_{f\sim\pi}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}} \exp\left[\lambda\Big(\mathcal{L}^{\ell}_{\mathcal{D}}(f) - \ell(f,x,y)\Big)\right] \le \frac{\lambda^2 s^2}{2}\,,\quad \forall\lambda\in\mathbb{R}\,. \qquad (15)$$
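As a quick sanity check (our own, not from the paper), one can Monte Carlo-estimate $\Psi_V(\lambda)$ for a Gaussian $V$, for which inequality (15) holds with equality:

```python
import numpy as np

rng = np.random.default_rng(3)
s = 0.8
V = rng.normal(0.0, s, size=1_000_000)      # a Gaussian V is sub-Gaussian with factor s^2

for lam in [0.5, 1.0, 2.0]:
    psi = np.log(np.mean(np.exp(lam * V)))  # Monte Carlo estimate of Psi_V(lam)
    print(lam, psi, lam ** 2 * s ** 2 / 2)  # the estimate matches lam^2 s^2 / 2
```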
The above sub-Gaussian assumption corresponds to the Hoeffding assumption of Alquier et al. [1], and allows us to obtain the following result.
Corollary 4. Given $\mathcal{D}$, $\mathcal{F}$, $\ell$, $\pi$ and $\delta$ defined in the statement of Theorem 3, if the loss is sub-Gaussian with variance factor $s^2$, we have, with probability at least $1-\delta$ over the choice of $(X,Y)\sim\mathcal{D}^n$,
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho}\mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \frac{1}{n}\left[\mathrm{KL}(\hat\rho\|\pi) + \ln\frac{1}{\delta}\right] + \frac{s^2}{2}\,.$$
Proof. For $i = 1,\dots,n$, we denote by $\ell_i$ an i.i.d. realization of the random variable $\mathcal{L}^{\ell}_{\mathcal{D}}(f) - \ell(f,x,y)$. Then
$$\Psi_{\ell,\pi,\mathcal{D}}(\lambda, n) = \ln \mathbb{E} \exp\!\Big[\tfrac{\lambda}{n}\textstyle\sum_{i=1}^n \ell_i\Big] = \ln \prod_{i=1}^n \mathbb{E} \exp\!\big[\tfrac{\lambda}{n}\ell_i\big] = \sum_{i=1}^n \Psi_{\ell_i}\!\big(\tfrac{\lambda}{n}\big) \le n\,\frac{\lambda^2 s^2}{2 n^2} = \frac{\lambda^2 s^2}{2n}\,,$$
where the inequality comes from the sub-Gaussian loss assumption (Equation 15). The result is then obtained from Theorem 3, with $\lambda := n$.
Sub-gamma losses. We say that an unbounded loss function $\ell$ is sub-gamma with variance factor $s^2$ and scale parameter $c$, under a prior $\pi$ and a data-distribution $\mathcal{D}$, if it can be described by a sub-gamma random variable $V$ (see Boucheron et al. [7, Section 2.4]), that is,
$$\Psi_V(\lambda) \;\le\; \frac{s^2}{c^2}\big(-\ln(1-\lambda c) - \lambda c\big) \;\le\; \frac{\lambda^2 s^2}{2(1-c\lambda)}\,, \quad \forall\lambda\in(0,\tfrac1c)\,. \qquad (16)$$
Under this sub-gamma assumption, we obtain the following new result, which is necessary to study linear regression in the next sections.
Corollary 5. Given $\mathcal{D}$, $\mathcal{F}$, $\ell$, $\pi$ and $\delta$ defined in the statement of Theorem 3, if the loss is sub-gamma with variance factor $s^2$ and scale $c < 1$, we have, with probability at least $1-\delta$ over $(X,Y)\sim\mathcal{D}^n$,
$$\forall\hat\rho \text{ on } \mathcal{F}:\quad \mathop{\mathbb{E}}_{f\sim\hat\rho}\mathcal{L}^{\ell}_{\mathcal{D}}(f) \;\le\; \mathop{\mathbb{E}}_{f\sim\hat\rho}\widehat{\mathcal{L}}^{\,\ell}_{X,Y}(f) + \frac{1}{n}\left[\mathrm{KL}(\hat\rho\|\pi) + \ln\frac{1}{\delta}\right] + \frac{s^2}{2(1-c)}\,. \qquad (17)$$
As a special case, with $\ell := \ell_{\mathrm{nll}}$ and $\hat\rho := \hat\rho^*$ (Equation 8), we have
$$\mathop{\mathbb{E}}_{\theta\sim\hat\rho^*}\mathcal{L}^{\ell_{\mathrm{nll}}}_{\mathcal{D}}(\theta) \;\le\; \frac{s^2}{2(1-c)} - \frac{1}{n}\ln\big(\delta\, Z_{X,Y}\big)\,. \qquad (18)$$
Proof. Following the same path as in the proof of Corollary 4 (with $\lambda := n$), we have
$$\Psi_{\ell,\pi,\mathcal{D}}(n, n) = \ln \mathbb{E} \exp\!\Big[\textstyle\sum_{i=1}^n \ell_i\Big] = \ln \prod_{i=1}^n \mathbb{E} \exp[\ell_i] = \sum_{i=1}^n \Psi_{\ell_i}(1) \le \frac{n\, s^2}{2(1-c)}\,,$$
where the inequality comes from the sub-gamma loss assumption, with $1\in(0,\tfrac1c)$.
Squared loss. The parameters s and c of Corollary 5 rely on the chosen loss function and prior,
and the assumptions concerning the data distribution. As an example, consider a regression problem
where $\mathcal{X}\times\mathcal{Y} \subseteq \mathbb{R}^d\times\mathbb{R}$, a family of linear predictors $f_w(x) = w\cdot x$, with $w\in\mathbb{R}^d$, and a Gaussian prior $\mathcal{N}(0, \sigma_\pi^2 I)$. Let us assume that the input examples are generated by $x\sim\mathcal{N}(0, \sigma_x^2 I)$ with label $y = w^*\!\cdot x + \epsilon$, where $w^*\in\mathbb{R}^d$ and $\epsilon\sim\mathcal{N}(0, \sigma_\epsilon^2)$ is a Gaussian noise. Under the squared loss function
$$\ell_{\mathrm{sqr}}(w, x, y) = (w\cdot x - y)^2\,, \qquad (19)$$
we show in Appendix A.4 that Corollary 5 is valid with $s^2 \ge 2\sigma_x^2(\sigma_\pi^2 d + \|w^*\|^2) + \sigma_\epsilon^2(1-c)$ and $c \ge 2\sigma_x^2\sigma_\pi^2$. As expected, the bound degrades when the noise increases.
order to predict the label of an example x 2 X , the Gibbs predictor first draws a hypothesis h 2 F
according to ??, and then returns h(x). Maurer [23] shows that we can generalize PAC-Bayesian
bounds on the generalization risk of the Gibbs classifier to any loss function with output between
zero and one. Provided that y 2 { 1, 1} and h(x) 2 [ 1, 1], a common choice is to use the
1
linear loss function `001 (h, x, y) = 12
2 y h(x). The Gibbs generalization loss is then given by
0
RD (G??) = E(x,y)?D Eh??? `01 (h, x, y) . Many PAC-Bayesian works use RD (G??) as a surrogate
loss to study the zero-one classification loss of the majority vote classifier RD (B??):
?
?
h
i
RD (B??) = Pr
y E h(x) < 0 =
E I y E h(x) < 0 ,
(20)
(x,y)?D
h???
(x,y)?D
h???
where I[?] being the indicator function. Given a distribution ??, an upper bound on the Gibbs risk
is converted to an upper bound on the majority vote risk by RD (B??) ? 2RD (G??) [20]. In some
situations, this factor of two may be reached, i.e., RD (B??) ' 2RD (G??). In other situations, we
may have RD (B??) = 0 even if RD (G??) = 12 ? (see Germain et al. [11] for an extensive study).
Indeed, these bounds obtained via the Gibbs risk are exposed to be loose and/or unrepresentative of
the majority vote generalization error.3
In the current work, we study regression losses instead of classification ones. That is, the provided results express upper bounds on E_{f∼ρ̂} L_D^ℓ(f) for any (bounded, sub-Gaussian, or sub-gamma) losses. Of course, one may want to bound the regression loss of the averaged regressor F_ρ̂(x) = E_{f∼ρ̂} f(x). In this case, if the loss function ℓ is convex (as the squared loss is), Jensen's inequality gives L_D^ℓ(F_ρ̂) ≤ E_{f∼ρ̂} L_D^ℓ(f). Note that a strict inequality replaces the factor two mentioned above for the classification case, due to the non-convex indicator function of Equation (20).
Now that we have generalization bounds for real-valued loss functions, we can continue our study
linking PAC-Bayesian results to Bayesian inference. In the next section, we focus on model selection.
³ It is noteworthy that the best PAC-Bayesian empirical bound values are so far obtained by considering a majority vote of linear classifiers, where the prior and posterior are Gaussian [2, 10, 20], similarly to the Bayesian linear regression analyzed in Section 6.

5  Analysis of Model Selection
We consider L distinct models {M_i}_{i=1}^L, each one defined by a set of parameters Θ_i. The PAC-Bayesian theorems naturally suggest selecting the model that is best adapted for the given task by evaluating the bound for each model {M_i}_{i=1}^L and selecting the one with the lowest bound [2, 25, 36]. This is closely linked with the Bayesian model selection procedure, as we showed in Section 3 that minimizing the PAC-Bayes bound amounts to maximizing the marginal likelihood. Indeed, given a collection of L optimal Gibbs posteriors (one for each model) given by Equation (8),
p(θ|X, Y, M_i) ≜ ρ̂*_i(θ) = (1/Z_{X,Y,i}) π_i(θ) e^{−n L̂^{ℓnll}_{X,Y}(θ)},  for θ ∈ Θ_i,   (21)
the Bayesian Occam's razor criterion [18, 22] chooses the one with the higher model evidence
p(Y|X, M_i) ≜ Z_{X,Y,i} = ∫_{Θ_i} π_i(θ) e^{−n L̂^{ℓnll}_{X,Y}(θ)} dθ.   (22)
Corollary 6 below formally links the PAC-Bayesian and the Bayesian model selection. To obtain this result, we simply use the bound of Corollary 5 L times, together with ℓ_nll and Equation (10). From the union bound (a.k.a. Bonferroni inequality), it is mandatory to compute each bound with a confidence parameter of δ/L, to ensure that the final conclusion is valid with probability at least 1 − δ.
Corollary 6. Given a data distribution D, a family of model parameters {Θ_i}_{i=1}^L and associated priors {π_i}_{i=1}^L (where π_i is defined over Θ_i), and δ ∈ (0, 1], if the loss is sub-gamma with parameters s² and c < 1, then, with probability at least 1 − δ over (X, Y) ∼ Dⁿ,
∀i ∈ {1, …, L}:  E_{θ∼ρ̂*_i} L_D^{ℓnll}(θ) ≤ s²/(2(1 − c)) − (1/n) ln( Z_{X,Y,i} δ / L ),
where ρ̂*_i is the Gibbs optimal posterior (Eq. 21) and Z_{X,Y,i} is the marginal likelihood (Eq. 22).
Hence, under the uniform prior over the L models, choosing the one with the best model evidence is
equivalent to choosing the one with the lowest PAC-Bayesian bound.
Hierarchical Bayes. To perform proper inference on hyperparameters, we have to rely on the Hierarchical Bayes approach. This is done by considering a hyperprior p(η) over the set of hyperparameters H. Then, the prior p(θ|η) can be conditioned on a choice of hyperparameter η. The Bayes rule of Equation (5) becomes p(θ, η|X, Y) = p(η) p(θ|η) p(Y|X, θ) / p(Y|X).
Under the negative log-likelihood loss function, we can rewrite the results of Corollary 5 as a generalization bound on E_{η∼ρ̂'} E_{θ∼ρ̂_η} L_D^{ℓnll}(θ), where ρ̂'(η) ∝ π'(η) Z_{X,Y,η} is the hyperposterior on H and π' the hyperprior. Indeed, Equation (18) becomes
E_{θ∼ρ̂} L_D^{ℓnll}(θ) = E_{η∼ρ̂'} E_{θ∼ρ̂_η} L_D^{ℓnll}(θ) ≤ s²/(2(1 − c)) + (1/n)[ ln(1/δ) − ln E_{η∼π'} Z_{X,Y,η} ].   (23)
To relate to the bound obtained in Corollary 6, we consider the case of a discrete hyperparameter set H = {η_i}_{i=1}^L, with a uniform hyperprior π'(η_i) = 1/L (from now on, we regard each hyperparameter η_i as the specification of a model Θ_i). Then, Equation (23) becomes
E_{θ∼ρ̂} L_D^{ℓnll}(θ) = E_{η∼ρ̂'} E_{θ∼ρ̂_η} L_D^{ℓnll}(θ) ≤ s²/(2(1 − c)) − (1/n) ln( Σ_{i=1}^L Z_{X,Y,η_i} δ / L ).
This bound is now a function of Σ_{i=1}^L Z_{X,Y,η_i} instead of max_i Z_{X,Y,η_i} as in the bound given by the "best" model in Corollary 6. This yields a tighter bound, corroborating the Bayesian wisdom that model averaging performs best. Conversely, when selecting a single hyperparameter η* ∈ H, the hierarchical representation is equivalent to choosing a deterministic hyperposterior, satisfying ρ̂'(η*) = 1 and 0 for every other value. We then have
KL(ρ̂‖π) = KL(ρ̂'‖π') + E_{η∼ρ̂'} KL(ρ̂_η‖π_η) = ln(L) + KL(ρ̂_{η*}‖π_{η*}).
With the optimal posterior for the selected η*, we have
n E_{θ∼ρ̂} L̂^{ℓnll}_{X,Y}(θ) + KL(ρ̂‖π) = n E_{θ∼ρ̂*_{η*}} L̂^{ℓnll}_{X,Y}(θ) + KL(ρ̂*_{η*}‖π_{η*}) + ln(L) = −ln(Z_{X,Y,η*}) + ln(L) = −ln( Z_{X,Y,η*} / L ).
Inserting this result into Equation (17), we fall back on the bound obtained in Corollary 6. Hence, by comparing the values of the bounds, one can get an estimate on the consequence of performing model selection instead of model averaging.
6  Linear Regression
In this section, we perform Bayesian linear regression using the parameterization of Bishop [5]. The output space is Y := R and, for an arbitrary input space X, we use a mapping function φ : X → R^d.
The model. Given (x, y) ∈ X × Y and model parameters θ := ⟨w, σ⟩ ∈ R^d × R⁺, we consider the likelihood p(y|x, ⟨w, σ⟩) = N(y | w·φ(x), σ²). Thus, the negative log-likelihood loss is
ℓ_nll(⟨w, σ⟩, x, y) = −ln p(y|x, ⟨w, σ⟩) = (1/2) ln(2πσ²) + (1/(2σ²)) (y − w·φ(x))².   (24)
For a fixed σ², minimizing Equation (24) is equivalent to minimizing the squared loss function of Equation (19). We also consider an isotropic Gaussian prior of mean 0 and variance σ_π²: p(w|σ_π) = N(w | 0, σ_π² I). For the sake of simplicity, we consider fixed parameters σ² and σ_π². The Gibbs optimal posterior (see Equation 8) is then given by
ρ̂*(w) ≜ p(w | X, Y, σ, σ_π) = p(w|σ_π) p(Y|X, w, σ) / p(Y|X, σ, σ_π) = N(w | ŵ, A⁻¹),   (25)
where A := (1/σ²) ΦᵀΦ + (1/σ_π²) I ;  ŵ := (1/σ²) A⁻¹ Φᵀ y ;  Φ is an n×d matrix such that the ith line is φ(xᵢ) ;  y := [y₁, …, yₙ] is the labels-vector ; and the negative log marginal likelihood is
−ln p(Y|X, σ, σ_π) = n L̂^{ℓnll}_{X,Y}(ŵ) + (1/(2σ_π²)) ‖ŵ‖² + (1/2) ln|A| + d ln σ_π
= [ (1/(2σ²)) ‖y − Φŵ‖² + (n/2) ln(2πσ²) + (1/(2σ²)) tr(ΦᵀΦ A⁻¹) ] + [ (1/(2σ_π²)) ‖ŵ‖² − d/2 + (1/(2σ_π²)) tr(A⁻¹) + (1/2) ln|A| + d ln σ_π ],
where the first bracketed term equals n E_{w∼ρ̂*} L̂^{ℓnll}_{X,Y}(w) and the second equals KL( N(ŵ, A⁻¹) ‖ N(0, σ_π² I) ). To obtain the second equality, we substitute (1/(2σ²)) ‖y − Φŵ‖² + (n/2) ln(2πσ²) = n L̂^{ℓnll}_{X,Y}(ŵ) and insert
(1/(2σ²)) tr(ΦᵀΦ A⁻¹) + (1/(2σ_π²)) tr(A⁻¹) = (1/2) tr( ((1/σ²) ΦᵀΦ + (1/σ_π²) I) A⁻¹ ) = (1/2) tr(A A⁻¹) = d/2.
This exhibits how the Bayesian regression optimization problem is related to the minimization of a PAC-Bayesian bound, expressed by a trade-off between E_{w∼ρ̂*} L̂^{ℓnll}_{X,Y}(w) and KL( N(ŵ, A⁻¹) ‖ N(0, σ_π² I) ). See Appendix A.5 for detailed calculations.
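As a sanity check of this identity, here is a small numeric sketch (our own, with an assumed synthetic setup; not code from the paper) verifying that the negative log marginal likelihood equals n E_{w∼ρ̂*} L̂^{ℓnll}_{X,Y}(w) plus KL( N(ŵ, A⁻¹) ‖ N(0, σ_π² I) ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma2, sigma_pi2 = 40, 3, 0.5, 2.0
Phi = rng.normal(size=(n, d))
y = Phi @ rng.normal(size=d) + rng.normal(scale=np.sqrt(sigma2), size=n)

A = Phi.T @ Phi / sigma2 + np.eye(d) / sigma_pi2
w_hat = np.linalg.solve(A, Phi.T @ y) / sigma2
A_inv = np.linalg.inv(A)

# n * E_{w~rho*} Lhat_nll(w): Gaussian expectation of the quadratic loss
gibbs = (np.sum((y - Phi @ w_hat) ** 2) / (2 * sigma2)
         + n / 2 * np.log(2 * np.pi * sigma2)
         + np.trace(Phi.T @ Phi @ A_inv) / (2 * sigma2))

# KL between the posterior N(w_hat, A^{-1}) and the prior N(0, sigma_pi^2 I)
kl = (np.trace(A_inv) / sigma_pi2 + w_hat @ w_hat / sigma_pi2 - d
      + np.linalg.slogdet(A)[1] + d * np.log(sigma_pi2)) / 2

# direct evaluation: marginally, y ~ N(0, sigma_pi^2 Phi Phi^T + sigma2 I)
S = sigma_pi2 * Phi @ Phi.T + sigma2 * np.eye(n)
neg_log_evidence = (y @ np.linalg.solve(S, y)
                    + np.linalg.slogdet(S)[1] + n * np.log(2 * np.pi)) / 2

print(np.isclose(neg_log_evidence, gibbs + kl))   # True
```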
Model selection experiment. To produce Figures 1a and 1b, we reimplemented the toy experiment of Bishop [5, Section 3.5.1]. That is, we generated a learning sample of 15 data points according to y = sin(x) + ε, where x is uniformly sampled in the interval [0, 2π] and ε ∼ N(0, 1/4) is a Gaussian noise. We then learn seven different polynomial models applying Equation (25). More precisely, for a polynomial model of degree d, we map input x ∈ R to a vector φ(x) = [1, x¹, x², …, x^d] ∈ R^{d+1}, and we fix parameters σ_π² = 1/0.005 and σ² = 1/2. Figure 1a illustrates the seven learned models. Figure 1b shows the negative log marginal likelihood computed for each polynomial model, and is designed to reproduce Bishop [5, Figure 3.14], where it is explained that the marginal likelihood correctly indicates that the polynomial model of degree d = 3 is "the simplest model which gives a good explanation for the observed data". We show that this claim is well quantified by the trade-off intrinsic to our PAC-Bayesian approach: the complexity KL term keeps increasing with the parameter d ∈ {1, 2, …, 7}, while the empirical risk drastically decreases from d = 2 to d = 3, and only slightly afterward. Moreover, we show that the generalization risk (computed on a test sample of size 1000) tends to increase with complex models (for d ≥ 4).
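A rough re-creation of this model-comparison loop (our sketch; details such as the random seed are assumptions) scores each polynomial degree by its log marginal likelihood ln Z_{X,Y}, computed from the Gaussian marginal y ∼ N(0, σ_π² ΦΦᵀ + σ² I):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 15
x = rng.uniform(0.0, 2 * np.pi, n)
y = np.sin(x) + rng.normal(scale=0.5, size=n)     # eps ~ N(0, 1/4)
sigma2, sigma_pi2 = 1 / 2, 1 / 0.005              # as fixed in the text

for d in range(1, 8):
    Phi = np.vander(x, d + 1, increasing=True)    # phi(x) = [1, x, ..., x^d]
    S = sigma_pi2 * Phi @ Phi.T + sigma2 * np.eye(n)
    lnZ = -0.5 * (y @ np.linalg.solve(S, y)
                  + np.linalg.slogdet(S)[1] + n * np.log(2 * np.pi))
    print(f"degree {d}: ln Z = {lnZ:.2f}")
# ln Z should peak around d = 3, matching Bishop's Occam's razor illustration.
```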
Empirical comparison of bound values. Figure 1c compares the values of the PAC-Bayesian bounds presented in this paper on a synthetic dataset, where each input x ∈ R²⁰ is generated by a Gaussian x ∼ N(0, I). The associated output y ∈ R is given by y = w*·x + ε, with ‖w*‖ = 1/2, ε ∼ N(0, σ_ε²), and σ_ε² = 1/9. We perform Bayesian linear regression in the input space, i.e., φ(x) = x, fixing σ_π² = 1/100 and σ² = 2. That is, we compute the posterior of Equation (25) for training samples of sizes from 10 to 10⁶. For each learned model, we compute the empirical negative log-likelihood loss of Equation (24), and the three PAC-Bayes bounds, with a confidence parameter of δ = 1/20. Note that this loss function is an affine transformation of the squared loss studied in Section 4 (Equation 19), i.e., ℓ_nll(⟨w, σ⟩, x, y) = (1/2) ln(2πσ²) + (1/(2σ²)) ℓ_sqr(w, x, y). It turns out that ℓ_nll is sub-gamma with parameters s² ≥ (1/σ²)[ σ_x²(σ_π² d + ‖w*‖²) + σ_ε²(1 − c) ] and c ≥ (1/σ²) σ_x² σ_π², as shown in Appendix A.6. The bounds of Corollary 5 are computed using the above mentioned values of ‖w*‖, d, σ, σ_x, σ_π, σ_ε, δ, leading
[Figure 1a: the seven learned polynomial models (d = 1, …, 7) plotted against sin(x) over [0, 2π], with the 15 training points shown. Figure 1b: for each model degree d, the decomposition of −ln Z_{X,Y} into KL(ρ̂*‖π) and n E_{θ∼ρ̂*} L̂^{ℓnll}_{X,Y}(θ), alongside the generalization risk n E_{θ∼ρ̂*} L_D^{ℓnll}(θ).]
(a) Predicted models. Black dots are the 15 training samples. (b) Decomposition of the marginal likelihood into the empirical loss and KL-divergence.
[Figure 1c: bound values versus the number of training samples n (from 10¹ to 10⁵, log scale), comparing Alquier et al.'s [a, b] bound (Theorem 3 + Eq. 14), Catoni's [a, b] bound (Corollary 2), the sub-gamma bound (Corollary 5), the test loss E_{θ∼ρ̂*} L_D^{ℓnll}(θ), and the train loss E_{θ∼ρ̂*} L̂^{ℓnll}_{X,Y}(θ).]
(c) Bound values on a synthetic dataset according to the number of training samples.
Figure 1: Model selection experiment (a-b); and comparison of bound values (c).
to s² ≈ 0.280 and c ≈ 0.005. As the two other bounds of Figure 1c are not suited for unbounded loss, we compute their value using a loss cropped to the interval [a, b] = [1, 4]. Different parameter values could have been chosen, sometimes leading to another picture: a large value of s degrades our sub-gamma bound, as a larger [a, b] interval does for the other bounds.
In the studied setting, the bound of Corollary 5 (the one we have developed for unbounded sub-gamma losses) gives tighter guarantees than the two results for [a, b]-bounded losses (up to n = 10⁶). However, our new bound always maintains a gap of s²/(2(1 − c)) between its value and the generalization loss. The result of Corollary 2 (adapted from Catoni [8]) for bounded losses suffers from a similar gap, while having higher values than our sub-gamma result. Finally, the result of Theorem 3 (Alquier et al. [1]), combined with λ = √n (Eq. 14), converges to the expected loss, but it provides good guarantees only for large training samples (n ≳ 10⁵). Note that the latter bound is not directly minimized by our "optimal posterior", as opposed to the one with λ = n (Eq. 13), for which we observe values between 5.8 (for n = 10⁶) and 6.4 (for n = 10), not displayed on Figure 1c.
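For reference, the following few lines reproduce the quoted constants under the formulas above (whose exact form, absent Appendix A.6, is our reconstruction):

```python
d, sigma_x2, sigma_pi2, sigma_eps2, sigma2 = 20, 1.0, 1 / 100, 1 / 9, 2.0
w_star_norm2 = 0.5 ** 2                       # ||w*|| = 1/2

c = sigma_x2 * sigma_pi2 / sigma2
s2 = (sigma_x2 * (sigma_pi2 * d + w_star_norm2) + sigma_eps2 * (1 - c)) / sigma2
print(f"s^2 = {s2:.3f}, c = {c:.3f}")         # ~0.280 and 0.005, matching the text

gap = s2 / (2 * (1 - c))                      # the irreducible gap of Corollary 5
print(f"s^2 / (2(1 - c)) = {gap:.3f}")
```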
7  Conclusion
The first contribution of this paper is to bridge the concepts underlying the Bayesian and the PAC-Bayesian approaches; under proper parameterization, the minimization of the PAC-Bayesian bound
maximizes the marginal likelihood. This study motivates the second contribution of this paper, which
is to prove PAC-Bayesian generalization bounds for regression with unbounded sub-gamma loss
functions, including the squared loss used in regression tasks.
In this work, we studied model selection techniques. On a broader perspective, we would like to
suggest that both Bayesian and PAC-Bayesian frameworks may have more to learn from each other
than what has been done lately (even if other works paved the way [e.g., 6, 14, 30]). Predictors
learned from the Bayes rule can benefit from strong PAC-Bayesian frequentist guarantees (under the
i.i.d. assumption). Also, the rich Bayesian toolbox may be incorporated in PAC-Bayesian driven
algorithms and risk bounding techniques.
Acknowledgments
We thank Gabriel Dubé and Maxime Tremblay for having proofread the paper and supplemental.
References
[1] Pierre Alquier, James Ridgway, and Nicolas Chopin. On the properties of variational approximations of Gibbs posteriors. JMLR, 17(239):1-41, 2016.
[2] Amiran Ambroladze, Emilio Parrado-Hernández, and John Shawe-Taylor. Tighter PAC-Bayes bounds. In NIPS, 2006.
[3] Arindam Banerjee. On Bayesian bounds. In ICML, pages 81-88, 2006.
[4] Luc Bégin, Pascal Germain, François Laviolette, and Jean-Francis Roy. PAC-Bayesian theory for transductive learning. In AISTATS, pages 105-113, 2014.
[5] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[6] P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016.
[7] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: a nonasymptotic theory of independence. Oxford University Press, 2013. ISBN 978-0-19-953525-5.
[8] Olivier Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning, volume 56. Institute of Mathematical Statistics, 2007.
[9] Arnak S. Dalalyan and Alexandre B. Tsybakov. Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Machine Learning, 72(1-2):39-61, 2008.
[10] Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In ICML, pages 353-360, 2009.
[11] Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand, and Jean-Francis Roy. Risk bounds for the majority vote: From a PAC-Bayesian analysis to a learning algorithm. JMLR, 16, 2015.
[12] Pascal Germain, Amaury Habrard, François Laviolette, and Emilie Morvant. A new PAC-Bayesian perspective on domain adaptation. In ICML, pages 859-868, 2016.
[13] Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521:452-459, 2015.
[14] Peter Grünwald. The safe Bayesian - learning the learning rate via the mixability gap. In ALT, 2012.
[15] Peter D. Grünwald and Nishant A. Mehta. Fast rates with unbounded losses. CoRR, abs/1605.00252, 2016.
[16] Isabelle Guyon, Amir Saffari, Gideon Dror, and Gavin C. Cawley. Model selection: Beyond the Bayesian/frequentist divide. JMLR, 11:61-87, 2010.
[17] Tamir Hazan, Subhransu Maji, Joseph Keshet, and Tommi S. Jaakkola. Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions. In NIPS, pages 1887-1895, 2013.
[18] William H. Jeffreys and James O. Berger. Ockham's razor and Bayesian analysis. American Scientist, 1992.
[19] Alexandre Lacoste. Agnostic Bayes. PhD thesis, Université Laval, 2015.
[20] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In NIPS, pages 423-430, 2002.
[21] Guy Lever, François Laviolette, and John Shawe-Taylor. Tighter PAC-Bayes bounds through distribution-dependent priors. Theor. Comput. Sci., 473:4-28, 2013.
[22] David J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415-447, 1992.
[23] Andreas Maurer. A note on the PAC-Bayesian theorem. CoRR, cs.LG/0411099, 2004.
[24] David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355-363, 1999.
[25] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5-21, 2003.
[26] David McAllester and Joseph Keshet. Generalization bounds and consistency for latent structural probit and ramp loss. In NIPS, pages 2205-2212, 2011.
[27] Asf Noy and Koby Crammer. Robust forward algorithms via PAC-Bayes and Laplace distributions. In AISTATS, 2014.
[28] Anastasia Pentina and Christoph H. Lampert. A PAC-Bayesian bound for lifelong learning. In ICML, 2014.
[29] Matthias Seeger. PAC-Bayesian generalization bounds for Gaussian processes. JMLR, 3:233-269, 2002.
[30] Matthias Seeger. Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations. PhD thesis, University of Edinburgh, 2003.
[31] Yevgeny Seldin and Naftali Tishby. PAC-Bayesian analysis of co-clustering and beyond. JMLR, 11, 2010.
[32] Yevgeny Seldin, Peter Auer, François Laviolette, John Shawe-Taylor, and Ronald Ortner. PAC-Bayesian analysis of contextual bandits. In NIPS, pages 1683-1691, 2011.
[33] Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. In UAI, 2012.
[34] John Shawe-Taylor and Robert C. Williamson. A PAC analysis of a Bayesian estimator. In COLT, 1997.
[35] Ilya O. Tolstikhin and Yevgeny Seldin. PAC-Bayes-empirical-Bernstein inequality. In NIPS, 2013.
[36] Tong Zhang. Information-theoretic upper and lower bounds for statistical estimation. IEEE Trans. Information Theory, 52(4):1307-1321, 2006.
| 6569 |@word polynomial:4 mehta:1 d2:1 decomposition:1 tr:6 ld:17 it1:1 moment:1 ndez:1 contains:1 series:1 selecting:3 initial:1 existing:2 current:1 com:1 comparing:1 contextual:1 assigning:1 john:7 ronald:1 designed:1 generative:1 selected:1 intelligence:1 parameterization:2 amir:1 isotropic:1 beginning:1 inconvenience:1 ith:1 provides:3 completeness:1 lx:7 zhang:2 unbounded:11 mathematical:1 dn:7 along:1 direct:2 c2:1 prove:1 inside:1 indeed:5 expected:4 p1:1 considering:4 increasing:1 becomes:3 spain:1 provided:2 bounded:9 moreover:3 maximizes:2 underlying:1 agnostic:1 lowest:2 what:1 minimizes:1 dror:1 developed:1 proposing:1 supplemental:1 finding:1 transformation:1 nj:1 guarantee:5 every:1 xd:1 universit:1 k2:1 classifier:4 yn:1 arnak:1 before:1 scientist:1 tends:1 limit:1 consequence:2 despite:1 oxford:1 meet:1 path:1 interpolation:1 approximately:2 noteworthy:1 inria:2 black:1 lugosi:1 studied:3 quantified:1 suggests:1 conversely:1 christoph:1 co:1 limited:1 range:1 averaged:2 acknowledgment:1 testing:1 union:1 procedure:1 empirical:11 word:2 confidence:2 seeing:1 suggest:2 zoubin:1 get:1 selection:14 risk:15 context:5 applying:2 equivalent:6 deterministic:1 map:1 maximizing:3 straightforward:1 dalalyan:1 convex:2 focused:1 simplicity:1 decomposable:1 estimator:2 rule:3 holmes:1 importantly:1 laplace:1 olivier:1 hypothesis:3 element:1 roy:2 satisfying:1 recognition:1 updating:1 observed:1 trade:7 rescaled:1 decrease:1 mentioned:4 complexity:2 motivate:1 rewrite:2 exposed:1 mization:1 various:1 maji:1 train:1 distinct:1 fast:1 artificial:1 paved:1 choosing:4 y2r:1 jean:2 larger:1 valued:3 say:2 ramp:1 statistic:2 transductive:1 final:1 nll:33 isbn:1 matthias:2 propose:1 asf:1 fr:1 adaptation:1 inserting:1 aligned:1 realization:1 secaucus:1 ridgway:1 ky:2 francois:1 produce:1 generating:2 converges:2 phane:1 illustrate:1 derive:1 fixing:1 eq:7 strong:2 predicted:1 ois:6 come:3 c:1 tommi:1 safe:1 closely:1 correct:2 stochastic:2 mcallester:4 saffari:1 explains:1 feeding:1 fix:1 generalization:20 tighter:4 theor:1 strictly:1 pl:1 insert:1 around:1 gavin:1 normal:1 exp:7 mapping:3 predict:1 claim:2 early:1 estimation:1 label:3 expose:1 pacbayesian:3 cole:1 bridge:3 weighted:1 minimization:2 gaussian:16 always:1 aim:1 normale:1 pn:4 broader:2 jaakkola:1 corollary:18 focus:1 likelihood:32 indicates:1 seeger:2 inst:1 inference:4 posteriori:1 typically:1 bandit:1 reproduce:1 chopin:1 subhransu:1 among:1 classification:8 pascal:6 denoted:1 colt:1 special:1 mackay:1 marginal:16 having:2 kw:4 koby:1 icml:4 minimized:1 few:1 ortner:1 randomly:1 gamma:14 divergence:4 n1:4 william:1 ab:1 tolstikhin:1 analyzed:1 nl:7 yielding:1 beforehand:1 necessary:1 maurer:2 hyperprior:2 taylor:6 divide:1 earlier:1 habrard:1 predictor:8 uniform:2 gr:4 tishby:1 synthetic:2 chooses:2 combined:1 st:1 stay:1 probabilistic:2 off:7 regressor:1 together:1 ilya:1 squared:6 thesis:2 lever:1 cesa:1 opposed:2 hoeffding:2 guy:1 american:1 leading:2 rescaling:2 return:1 toy:1 converted:1 nonasymptotic:1 summarized:1 wk:2 inc:1 linked:1 hazan:1 mario:2 observing:1 doing:1 francis:3 sup:1 bayes:22 aggregation:2 start:1 reached:1 simon:1 maintains:1 contribution:2 ni:1 accuracy:1 variance:6 sqr:2 ensemble:1 yield:1 wisdom:1 generalize:1 bayesian:92 bor:1 dub:1 zx:27 emilie:1 suffers:1 james:2 naturally:1 proof:4 mi:4 associated:2 sampled:2 dataset:2 knowledge:1 ea:1 back:1 auer:2 alexandre:5 higher:2 supervised:1 methodology:1 improved:1 done:4 until:1 langford:1 christopher:1 banerjee:1 google:2 usa:1 alquier:7 concept:1 
hence:2 inspiration:1 equality:1 boucheron:3 leibler:3 sin:2 bonferroni:1 lastname:1 razor:4 naftali:1 coincides:1 criterion:4 generalized:1 theoretic:1 performs:1 dedicated:1 variational:1 ef:8 arindam:1 common:2 laval:1 volume:1 extend:6 discussed:1 linking:1 expressing:1 isabelle:1 gibbs:12 rd:16 consistency:1 similarly:1 shawe:6 dot:1 specification:1 similarity:1 posterior:19 recent:1 showed:1 perspective:5 optimizing:1 driven:2 rieure:1 manipulation:1 mandatory:2 verlag:1 inequality:8 continue:1 yi:5 proofread:1 arithmetic:1 nwald:4 multiple:1 sound:1 emilio:1 calculation:1 bach:1 concerning:1 amiran:1 regression:20 sometimes:2 tailored:1 normalization:2 maxime:1 cropped:1 want:1 cawley:1 interval:2 walker:1 standpoint:1 probably:2 strict:1 massart:1 structural:1 bernstein:1 conception:1 pentina:2 independence:1 forthcoming:2 andreas:1 simplifies:1 bridging:1 peter:4 york:1 remark:1 gabriel:1 detailed:1 amount:2 tsybakov:1 induces:1 simplest:1 notice:1 correctly:1 discrete:2 write:1 hyperparameter:4 express:2 lacoste:3 merely:1 distributiondependent:1 extends:1 family:3 guyon:1 fran:6 draw:1 appendix:6 bound:72 replaces:1 marchand:2 adapted:3 precisely:1 x2:4 encodes:1 sake:1 aspect:1 span:1 performing:1 according:4 slightly:1 appealing:1 joseph:2 jeffreys:1 explained:1 pr:1 ln:42 equation:30 previously:1 hern:1 turn:1 loose:1 ambroladze:1 observe:1 hierarchical:3 away:1 pierre:1 appearing:1 frequentist:7 alternative:2 substitute:1 clustering:1 ensure:1 laviolette:7 exploit:1 restrictive:1 ghahramani:1 classical:4 society:1 mixability:1 quantity:1 degrades:2 concentration:1 usual:1 anastasia:1 surrogate:1 exhibit:2 gin:1 link:3 thank:1 sci:1 majority:5 seven:2 berger:1 mini:1 minimizing:6 setup:4 mostly:2 unfortunately:1 lg:1 statement:2 relate:2 robert:1 negative:13 stated:2 design:2 proper:2 motivates:1 unknown:1 perform:3 bianchi:1 upper:5 observation:2 ockham:1 lacasse:2 displayed:1 defining:1 situation:2 incorporated:1 y1:1 lb:1 arbitrary:1 sharp:1 david:4 germain:6 paris:1 pair:1 kl:22 connection:3 specified:1 extensive:1 toolbox:1 learned:3 nishant:1 barcelona:1 nip:7 trans:1 beyond:2 below:2 reimplemented:1 firstname:1 pattern:1 sparsity:1 gideon:1 including:2 royal:1 explanation:3 belief:1 rely:4 eh:1 indicator:2 thermodynamics:1 julien:1 picture:1 axis:1 lately:1 prior:20 nicol:1 loss:78 fully:1 probit:1 afterward:1 versus:1 degree:3 affine:1 amaury:1 occam:3 course:1 drastically:1 side:1 fall:1 lifelong:1 sparse:1 benefit:1 regard:1 edinburgh:1 valid:5 evaluating:1 rich:1 qn:3 tamir:1 author:1 collection:1 forward:1 far:1 compact:1 kullback:3 keep:1 uai:1 corroborating:1 conclude:1 xi:6 discriminative:1 continuous:1 latent:1 parrado:1 why:1 learn:2 nature:1 robust:1 nicolas:1 williamson:1 complex:1 meanwhile:1 domain:1 aistats:2 main:1 whole:1 s2:17 lampert:2 noise:3 n2:2 hyperparameters:2 bounding:1 yevgeny:4 x1:1 referred:1 martingale:1 tong:1 sub:22 lbx:17 exponential:1 comput:1 lie:1 dnll:8 jmlr:5 weighting:1 hw:3 theorem:26 specific:1 bishop:4 pac:71 insightful:1 jensen:1 maxi:1 alt:1 evidence:2 intrinsic:1 corr:2 kwk:2 keshet:2 catoni:5 phd:2 conditioned:1 illustrates:1 margin:1 gap:3 suited:1 simply:1 explore:1 seldin:4 absorbed:1 expressed:4 springer:1 aa:1 corresponds:2 formulated:1 unrepresentative:1 luc:1 fw:1 specifically:1 generalisation:1 uniformly:2 averaging:4 justify:1 lemma:1 vote:5 ew:2 formally:1 latter:4 crammer:1 morvant:1 incorporate:1 |
6,157 | 657 | Optimal Depth Neural Networks for Multiplication
and Related Problems
Kai-Yeung Siu
Dept. of Electrical & Compo Engineering
University of California, Irvine
Irvine, CA 92717
Vwani Roychowdhury
School of Electrical Engineering
Purdue University
West Lafayette, IN 47907
Abstract
An artificial neural network (ANN) is commonly modeled by a threshold
circuit, a network of interconnected processing units called linear threshold
gates. The depth of a network represents the number of unit delays or the
time for parallel computation. The size of a circuit is the number of gates
and measures the amount of hardware. It was known that traditional logic
circuits consisting of only unbounded fan-in AND, OR, NOT gates would
require at least Ω(log n/log log n) depth to compute common arithmetic
functions such as the product or the quotient of two n-bit numbers, unless
we allow the size (and fan-in) to increase exponentially (in n). We show in
this paper that ANNs can be much more powerful than traditional logic
circuits. In particular, we prove that iterated addition can be computed by depth-2 ANN, and multiplication and division can be computed
by depth-3 ANNs with polynomial size and polynomially bounded integer
weights, respectively. Moreover, it follows from known lower bound results that these ANNs are optimal in depth. We also indicate that these
techniques can be applied to construct polynomial-size depth-3 ANN for
powering, and depth-4 ANN for multiple product.
1  Introduction
Recent interest in the application of artificial neural networks [10, 11] has spurred
research interest in the theoretical study of such networks. In most models of neural networks, the basic processing unit is a Boolean gate that computes a linear
threshold function, or an analog element that computes a sigmoidal function. Artificial neural networks can be viewed as circuits of these processing units which are
massively interconnected together.
While neural networks have found wide application in many areas, the behavior
and the limitation of these networks are far from being understood. One common
model of a neural network is a threshold circuit. Incidentally, the study of threshold
circuits, motivated by some other complexity theoretic issues, has also gained much
interest in the area of computer science. Threshold circuits are Boolean circuits in
which each gate computes a linear threshold function, whereas in the classical model
of unbounded fan-in Boolean circuits only AND, OR, NOT gates are allowed. A
Boolean circuit is usually arranged in layers such that all gates in the same layer are
computed concurrently and the circuit is computed layer by layer in some increasing
depth order. We define the depth as the number of layers in the circuit. Thus each
layer represents a unit delay and the depth represents the overall delay in the
computation of the circuit .
2  Related Work
Theoretical computer scientists have used unbounded fan-in Boolean circuits as
a model to understand fundamental issues of parallel computation. To be more
specific, this computational model should be referred to as unbounded fan-in parallelism, since the number of inputs to each gate in the Boolean circuit is not bounded
by a constant. The theoretical study of unbounded fan-in parallelism may give us
insights into devising faster algorithms for various computational problems than
would be possible with bounded fan-in parallelism. In fact, any nondegenerate
Boolean function of n variables requires at least O(log n) depth to compute in a
bounded fan-in circuit. On the other hand, in some practical situations, (for example large fan-in circuits such as programmable logic arrays (PLAs) or multiple
processors simultaneously accessing a shared bus), unbounded fan-in parallelism
seems to be a natural model. For example, a PLA can be considered as a depth-2
AND/OR circuit.
In the Boolean circuit model, the amount of resources is usually measured by the
number of gates, and is considered to be 'reasonable' as long as it is bounded
by a polynomial (as opposed to exponential) in the number of the inputs. For
example, a Boolean circuit for computing the sum of two n-bit numbers with O(n 3 )
gates is 'reasonable', though circuit designers might consider the size of the circuit
impractical for moderately large n. One of the most important theoretical issues in
parallel computation is the following: Given that the number of gates in the Boolean
circuit is bounded by a polynomial in the size of inputs, what is the minimum depth
(i.e. number of layers) that is needed to compute certain functions?
A first step toward answering this important question was taken by Furst et al. [4]
and independently by Ajtai [2]. It follows from their results that for many basic
functions, such as the parity and the majority of n Boolean variables, or the multiplication of two n-bit numbers, any constant depth (i. e. independent of n) classical
Boolean circuit of unbounded fan-in AND/OR gates computing these functions
must have more than a polynomial (in n) number of gates. This lower bound on
the size was subsequently improved by Yao [18] and Hastad [7]; it was proved that
Optimal Depth Neural Networks for Multiplication and Related Problems
indeed an exponential number of AND/OR gates are needed. So functions such as
parity and majority are computationally 'hard' with respect to constant depth and
polynomial size classical Boolean circuits. Another way of interpreting these results
is that circuits of AND/OR gates computing these 'hard' functions which use polynomial amount of chip area must have unbounded delay (i. e. delay that increases
with n). In fact, the lower bound results imply that the minimum possible delay
for multipliers (with polynomial number of AND/OR gates) is O(logn/loglogn).
These results also give theoretical justification why it is impossible for circuit designers to implement fast parity circuit or multiplier in small chip area using AND,
OR gates as the basic building blocks.
One of the 'hard' functions mentioned above is the majority function, a special case
of a threshold function in which the weights or parameters are restricted. A natural
extension is to study Boolean circuits that contain majority gates. This type of
Boolean circuit is called a threshold circuit and is believed to capture some aspects
of the computation in our brain [12]. In the rest of the paper, the term 'neural
networks' refers to the threshold circuits model.
With the addition of majority gates, the resulting Boolean circuit model seems
much more powerful than the classical one. Indeed, it was first shown by Muroga
[13] three decades ago that any symmetric Boolean function (e.g. parity) can be
computed by a two-layer neural network with (n + 1) gates. Recently, Chandra
et al. [3] showed that multiplication of two n-bit numbers and sorting of n n-bit
numbers can be computed by neural networks with 'constant' depth and polynomial
size. These 'constants' have been significantly reduced by Siu and Bruck [14, 15] to
4 in both cases, whereas a lower bound of depth-3 was proved by Hajnal et al. [6]
in the case of multiplication. It is now known [8] that the size of the depth-4 neural
networks for multiplication can be reduced to O(n 2 ). However, the existence of
depth-3 and polynomial-size neural networks for multiplication was left as an open
problem [6, 5, 15] since the lower bound result in [6]. In [16], some depth-efficient
neural networks were constructed for division and related arithmetic problems; the
networks in [16] do not have optimal depth.
Our main contribution in this paper is to show that small constant depth neural
networks for multiplication, division and related problems can be constructed. For
the problems such as iterated addition, multiplication, and division, the neural networks constructed can be shown to have optimal depth. These results have the
following implication on their practical significance: Suppose we can use analog devices to build threshold gates with a cost (in terms of delay and chip area) that is
comparable to that of AND, OR, logic gates, then we can compute many basic functions much faster than using traditional circuits. Clearly, the particular weighting
of depth, fan-in, and size that gives a realistic measure of a network's cost and speed
depends on the technology used to build it. One case where circuit depth would
seem to be the most important parameter is when the circuit is implemented using
optical devices. We refer those who are interested in the optical implementation of
neural networks to [1].
Due to space limitations, we shall only state some of the important results; further
results and detailed proofs will appear in the journal version of this paper [17].
3  Main Results
Definition 1. Given n n-bit integers zᵢ = Σ_{j=0}^{n−1} z_{i,j} 2ʲ, i = 1, …, n, z_{i,j} ∈ {0, 1}, we define iterated addition to be the problem of computing the (n + log n)-bit sum Σ_{i=1}^n zᵢ of the n integers.
Definition 2. Given 2 n-bit integers x = Σ_{i=0}^{n−1} xᵢ 2ⁱ and y = Σ_{i=0}^{n−1} yᵢ 2ⁱ, we define multiplication to be the problem of computing the (2n)-bit product of x and y.
Using the notations of [15], let us denote the class of depth-d polynomial-size neural networks where the (integer) weights are polynomially bounded by LT̂_d, and the corresponding class where the weights are unrestricted by LT_d. It is easy to see that if iterated addition can be computed in LT̂₂, then multiplication can be computed in LT̂₃. We first prove the result on iterated addition. Our result hinges on a recent striking result of Goldmann, Håstad and Razborov [5]. The key observation is that iterated addition can be computed as a sum of polynomially many linear threshold (LT₁) functions (with exponential weights). Let us first state the result of Goldmann, Håstad and Razborov [5].
Lemma 1 [5]. Let L̃T_d denote the class of depth-d polynomial-size neural networks where the weights at the output gate are polynomially bounded integers (with no restriction on the weights of the other gates). Then L̃T_d = LT̂_d for any fixed integer d ≥ 1.
The following lemma is a generalization of the result in [13]. Informally, the result says that if a function is 1 when a weighted sum (possibly exponential) of its inputs lies in one of polynomially many intervals, and is 0 otherwise, then the function can be computed as a sum of polynomially many LT₁ functions.
Lemma 2. Let S = Σ_{i=1}^n wᵢxᵢ and let f(X) be a function such that f = 1 if S ∈ [lᵢ, uᵢ] for some i = 1, …, N and f = 0 otherwise, where N is polynomially bounded. Then f can be computed as a sum of polynomially many LT₁ functions and thus f ∈ LT̂₂.
Combining the above two lemmas yields a depth-2 neural network for iterated addition.
Theorem 1. Iterated addition is in LT̂₂.
It is also easy to see that iterated addition cannot be computed in LT₁: simply observe that the first bit of the sum is the parity function, which does not belong to LT₁. Thus the above neural network for iterated addition has minimum possible depth.
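To illustrate what such a depth-2 threshold circuit looks like, here is a small sketch (ours, following Muroga's classic construction for symmetric functions mentioned in Section 2, not code from the paper) that computes parity with n + 1 linear threshold gates and checks it exhaustively for small n:

```python
from itertools import product

def threshold_gate(weights, bias, x):
    """Linear threshold gate: outputs 1 iff sum_i w_i * x_i >= bias."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= bias)

def parity_depth2(x):
    n = len(x)
    # layer 1: n gates, g_t(x) = 1[x_1 + ... + x_n >= t] for t = 1, ..., n
    g = [threshold_gate([1] * n, t, x) for t in range(1, n + 1)]
    # layer 2: alternating +1/-1 weights; with k ones set, the pre-activation
    # is sum_{t <= k} (-1)^(t+1), which equals 1 iff k is odd
    return threshold_gate([(-1) ** (t + 1) for t in range(1, n + 1)], 1, g)

n = 5
assert all(parity_depth2(x) == sum(x) % 2 for x in product([0, 1], repeat=n))
```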
Theorem 2. Multiplication of 2 n-bit integers can be computed in LT̂₃.
It follows from the results in [6] that the depth-3 neural network for multiplication
stated in the above theorem has optimal depth.
We can further apply the results in [5] to construct small depth neural networks for
division, powering and multiple product. Let us give a formal definition of these
problems.
Definition 3. Let X be an input n-bit integer ≥ 0. We define powering to be the n²-bit representation of Xⁿ.
Definition 4. Given n n-bit integers zᵢ, i = 1, …, n, we define multiple product to be the n²-bit representation of Π_{i=1}^n zᵢ.
Suppose we want to compute the quotient of two integers. Some quotient in binary representation might require infinitely many bits, however, a circuit can only
compute the most significant bits of the quotient. If a number has both finite and
infinite binary representation (for example 0.1 = 0.0111 ... ), we shall always express
the number in its finite binary representation. We are interested in computing the
truncated quotient, defined below:
Definition 5. Let X and Y ≥ 1 be two input n-bit integers. Let X/Y = Σ_{i=−∞}^{n−1} zᵢ 2ⁱ be the quotient of X divided by Y. We define DIV_k(X/Y) to be X/Y truncated to the (n + k)-bit number, i.e., DIV_k(X/Y) = Σ_{i=−k}^{n−1} zᵢ 2ⁱ. In particular, DIV₀(X/Y) is ⌊X/Y⌋, the greatest integer ≤ X/Y.
Theorem 3.
1. Powering can be computed in LT̂₃.
2. DIV_k(X/Y) can be computed in LT̂₃.
3. Multiple product can be computed in LT̂₄.
It can be shown from the lower-bound results in [9] that the neural networks for
division are optimal in depth.
References
[1] Y. S. Abu-Mostafa and D. Psaltis. Optical Neural Computers. Scientific American, 256(3):88-95, 1987.
[2] M. Ajtai. Σ¹₁-formulae on finite structures. Annals of Pure and Applied Logic, 24:1-48, 1983.
[3] A. K. Chandra, L. Stockmeyer, and U. Vishkin. Constant depth reducibility. SIAM J. Comput., 13:423-439, 1984.
[4] M. Furst, J. B. Saxe, and M. Sipser. Parity, Circuits and the Polynomial-Time
Hierarchy. IEEE Symp. Found. Comp. Sci., 22:260-270, 1981.
[5] M. Goldmann, J. Håstad, and A. Razborov. Majority Gates vs. General Weighted
Threshold Gates. preprint, 1991.
[6] A. Hajnal, W. Maass, P. Pudlák, M. Szegedy, and G. Turán. Threshold circuits of bounded depth. IEEE Symp. Found. Comp. Sci., 28:99-110, 1987.
[7] J. Håstad and M. Goldmann. On the power of small-depth threshold circuits. In Proceedings of the 31st IEEE FOCS, pp. 610-618, 1990.
[8] T. Hofmeister, W. Hohberg and S. Köhling. Some notes on threshold circuits and
multiplication in depth 4. Information Processing Letters, 39:219-225, 1991.
[9] T. Hofmeister and P. Pudlák. A proof that division is not in TC⁰₂. Forschungsbericht Nr. 447, 1992, Uni Dortmund.
[10] J. J. Hopfield. Neural Networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79:2554-2558,
1982.
[11] J. L. McClelland D. E. Rumelhart and the PDP Research Group. Parallel Distributed
Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, 1986.
[12] W. S. McCulloch and W. Pitts. A Logical Calculus of Ideas Immanent in Nervous
Activity. Bulletin of Mathematical Biophysics, 5:115-133, 1943.
[13] S. Muroga. The principle of majority decision logic elements and the complexity of
their circuits. Intl. Conf. on Information Processing, Paris, France, June 1959.
[14] K. Y. Siu and J. Bruck. Neural Computation of Arithmetic Functions. Proc. IEEE,
78, No. 10:1669-1675, October 1990. Special Issue on Neural Networks.
[15] K.-Y. Siu and J. Bruck. On the Power of Threshold Circuits with Small Weights.
SIAM J. Discrete Math., 4(3):423-435, August 1991.
[16] K.-Y. Siu, J. Bruck, T. Kailath, and T. Hofmeister. Depth-Efficient Neural Networks
for Division and Related Problems . to appear in IEEE Trans. Information Theory,
1993.
[17] K.-Y. Siu and V. Roychowdhury. On Optimal Depth Threshold Circuits for Multiplication and Related Problems. To appear in SIAM J. Discrete Math.
[18] A. Yao. Separating the polynomial-time hierarchy by oracles. IEEE Symp. Found. Comp. Sci., pages 1-10, 1985.
| 657 |@word version:1 polynomial:14 seems:2 open:1 calculus:1 must:2 realistic:1 hajnal:2 v:1 devising:1 device:2 nervous:1 compo:4 math:2 sigmoidal:1 unbounded:8 mathematical:1 constructed:3 focs:1 prove:2 symp:3 indeed:2 behavior:1 brain:1 increasing:1 bounded:10 moreover:1 circuit:45 notation:1 mcculloch:1 what:1 turan:1 impractical:1 unit:5 appear:3 engineering:2 understood:1 scientist:1 might:2 lafayette:1 practical:2 pla:1 block:1 implement:1 pudlak:1 area:5 significantly:1 refers:1 cannot:1 impossible:1 restriction:1 independently:1 pure:1 insight:1 array:1 justification:1 razborov:3 annals:1 hierarchy:2 suppose:2 element:2 rumelhart:1 preprint:1 electrical:2 capture:1 mentioned:1 accessing:1 complexity:2 moderately:1 division:8 hopfield:1 chip:3 emergent:1 various:1 fast:1 artificial:3 kai:1 say:1 otherwise:2 ability:1 vishkin:1 interconnected:2 product:6 combining:1 plas:1 academy:1 incidentally:1 oo:1 measured:1 school:1 implemented:1 quotient:6 indicate:1 subsequently:1 exploration:1 saxe:1 require:2 microstructure:1 generalization:1 extension:1 considered:2 cognition:1 pitt:1 mostafa:1 furst:2 proc:1 psaltis:1 weighted:2 mit:1 concurrently:1 clearly:1 always:1 powering:4 vwani:1 june:1 lj:3 nand:1 france:1 interested:2 issue:4 overall:1 l7:1 logn:1 special:2 construct:2 represents:3 muroga:2 simultaneously:1 national:1 lt3:2 consisting:1 interest:3 implication:1 unless:1 theoretical:5 boolean:17 hastad:3 cost:2 siu:9 delay:7 st:1 fundamental:1 siam:3 together:1 yao:2 opposed:1 possibly:1 american:1 szegedy:1 depends:1 sipser:1 ated:1 hofmeister:3 parallel:4 contribution:1 who:1 yield:1 i2i:1 iterated:9 dortmund:1 processor:1 ago:1 anns:3 definition:6 pp:1 proof:2 con:1 irvine:2 proved:2 wixi:1 logical:1 stockmeyer:1 improved:1 arranged:1 though:1 hand:1 scientific:1 building:1 contain:1 multiplier:2 symmetric:1 maass:1 theoretic:1 interpreting:1 recently:1 common:2 physical:1 exponentially:1 analog:2 belong:1 refer:1 significant:1 recent:2 showed:1 massively:1 certain:1 binary:3 minimum:3 unrestricted:1 ud:1 arithmetic:3 ii:1 multiple:5 faster:2 believed:1 long:1 divided:1 biophysics:1 basic:4 chandra:2 yeung:1 loglogn:1 addition:11 whereas:2 want:1 interval:1 rest:1 seem:1 integer:13 easy:2 zi:7 idea:1 ajtai:2 motivated:1 ltd:4 programmable:1 detailed:1 informally:1 amount:3 hardware:1 mcclelland:1 reduced:2 roychowdhury:5 designer:2 discrete:2 shall:2 vol:1 express:1 group:1 abu:1 key:1 threshold:18 lti:3 sum:7 letter:1 powerful:2 striking:1 reasonable:2 decision:1 comparable:1 bit:19 bound:6 layer:8 fan:12 oracle:1 activity:1 aspect:1 speed:1 optical:3 lt4:1 lt2:2 restricted:1 inti:1 taken:1 computationally:1 resource:1 bus:1 needed:2 goldmann:4 apply:1 observe:1 gate:26 existence:1 spurred:1 hinge:1 build:2 classical:4 question:1 traditional:3 nr:1 sci:3 separating:1 majority:7 toward:1 modeled:1 october:1 stated:1 implementation:1 collective:1 observation:1 purdue:1 finite:3 truncated:2 situation:1 pdp:1 august:1 paris:1 california:1 trans:1 usually:2 parallelism:4 below:1 power:2 greatest:1 natural:2 bruck:4 technology:1 imply:1 reducibility:1 multiplication:16 limitation:2 principle:1 nondegenerate:1 parity:6 formal:1 allow:1 understand:1 wide:1 bulletin:1 distributed:1 depth:42 xn:1 computes:3 commonly:1 far:1 polynomially:8 uni:1 logic:6 decade:1 why:1 ca:1 significance:1 main:2 immanent:1 allowed:1 west:1 referred:1 lt1:1 exponential:4 comput:1 lie:1 answering:1 weighting:1 theorem:4 formula:1 specific:1 gained:1 sorting:1 tc:1 lt:1 simply:1 infinitely:1 viewed:1 
kailath:1 ann:4 shared:1 hard:3 infinite:1 lemma:4 called:2 dept:1 |
6,158 | 6,570 | Total Variation Classes Beyond 1d: Minimax Rates,
and the Limitations of Linear Smoothers
Veeranjaneyulu Sadhanala
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Yu-Xiang Wang
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Ryan J. Tibshirani
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We consider the problem of estimating a function defined over n locations on a
d-dimensional grid (having all side lengths equal to n^{1/d}). When the function is
constrained to have discrete total variation bounded by Cn , we derive the minimax
optimal (squared) `2 estimation error rate, parametrized by n, Cn . Total variation
denoising, also known as the fused lasso, is seen to be rate optimal. Several simpler
estimators exist, such as Laplacian smoothing and Laplacian eigenmaps. A natural
question is: can these simpler estimators perform just as well? We prove that these
estimators, and more broadly all estimators given by linear transformations of the
input data, are suboptimal over the class of functions with bounded variation. This
extends fundamental findings of Donoho and Johnstone [12] on 1-dimensional total
variation spaces to higher dimensions. The implication is that the computationally
simpler methods cannot be used for such sophisticated denoising tasks, without
sacrificing statistical accuracy. We also derive minimax rates for discrete Sobolev
spaces over d-dimensional grids, which are, in some sense, smaller than the total
variation function spaces. Indeed, these are small enough spaces that linear estimators can be optimal?and a few well-known ones are, such as Laplacian smoothing
and Laplacian eigenmaps, as we show. Lastly, we investigate the adaptivity of the
total variation denoiser to these smaller Sobolev function spaces.
1  Introduction
Let G = (V, E) be a d-dimensional grid graph, i.e., lattice graph, with equal side lengths. Label the nodes as V = {1, …, n}, and edges as E = {e₁, …, e_m}. Consider data y = (y₁, …, yₙ) ∈ Rⁿ observed over the nodes, from a model
yᵢ ∼ N(θ_{0,i}, σ²),  i.i.d., for i = 1, …, n,   (1)
where θ₀ = (θ_{0,1}, …, θ_{0,n}) ∈ Rⁿ is an unknown mean parameter to be estimated, and σ² > 0 is the marginal noise variance. It is assumed that θ₀ displays some kind of regularity over the grid G, e.g., θ₀ ∈ T_d(C_n) for some C_n > 0, where
T_d(C_n) = { θ : ‖Dθ‖₁ ≤ C_n },   (2)
and D ∈ R^{m×n} is the edge incidence matrix of G. This has ℓth row D_ℓ = (0, …, −1, …, 1, …, 0), with a −1 in the ith location, and 1 in the jth location, provided that the ℓth edge is e_ℓ = (i, j) with i < j. Equivalently, L = DᵀD is the graph Laplacian matrix of G, and thus
‖Dθ‖₁ = Σ_{(i,j)∈E} |θᵢ − θⱼ|,  and  ‖Dθ‖₂² = θᵀLθ = Σ_{(i,j)∈E} (θᵢ − θⱼ)².
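For concreteness, here is a small helper (ours, not from the paper) that materializes D and L = DᵀD for a 2d grid, with the sign convention described above:

```python
import numpy as np
import scipy.sparse as sp

def grid_incidence_2d(N):
    """Edge incidence matrix D of an N x N grid: row e has -1 at node i and
    +1 at node j for the edge e = (i, j) with i < j, as described above."""
    idx = np.arange(N * N).reshape(N, N)
    edges = [(i, j) for row in idx for i, j in zip(row[:-1], row[1:])]      # horizontal
    edges += [(i, j) for col in idx.T for i, j in zip(col[:-1], col[1:])]   # vertical
    m = len(edges)
    rows = np.repeat(np.arange(m), 2)
    cols = np.array(edges).ravel()
    vals = np.tile([-1.0, 1.0], m)
    return sp.csr_matrix((vals, (rows, cols)), shape=(m, N * N))

D = grid_incidence_2d(8)
L = D.T @ D                                   # graph Laplacian
# sanity check: diagonal of L holds node degrees (2 at corners, 3 on sides, 4 inside)
assert sorted(set(L.diagonal().astype(int))) == [2, 3, 4]
```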
We will refer to the class in (2) as a discrete total variation (TV) class, and to the quantity ‖Dθ₀‖₁ as the discrete total variation of θ₀, though for simplicity we will often drop the word "discrete".
The problem of estimating θ₀ given a total variation bound as in (2) is of great importance in both
nonparametric statistics and signal processing, and has many applications, e.g., changepoint detection
for 1d grids, and image denoising for 2d and 3d grids. There has been much methodological and
computational work devoted to this problem, resulting in practically efficient estimators in dimensions
1, 2, 3, and beyond. However, theoretical performance, and in particular optimality, is only really well-understood in the 1-dimensional setting. This paper seeks to change that, and offers theory in d-dimensions that parallels more classical results known in the 1-dimensional case.
Estimators under consideration. Central to our work is the total variation (TV) denoising or fused lasso estimator (e.g., [21, 25, 7, 15, 27, 23, 2]), defined by the convex optimization problem
θ̂^TV = argmin_{θ∈Rⁿ} ‖y − θ‖₂² + λ‖Dθ‖₁,   (3)
where λ ≥ 0 is a tuning parameter. Another pair of methods that we study carefully are Laplacian smoothing and Laplacian eigenmaps, which are most commonly seen in the context of clustering, dimensionality reduction, and semi-supervised learning, but are also useful tools for estimation in a regression setting like ours (e.g., [3, 4, 24, 30, 5, 22]). The Laplacian smoothing estimator is given by
θ̂^LS = argmin_{θ∈Rⁿ} ‖y − θ‖₂² + λ‖Dθ‖₂²,  i.e.,  θ̂^LS = (I + λL)⁻¹ y,   (4)
for a tuning parameter λ ≥ 0, where in the second expression we have written θ̂^LS in closed-form (this is possible since it is the minimizer of a convex quadratic). For Laplacian eigenmaps, we must introduce the eigendecomposition of the graph Laplacian, L = VΛVᵀ, where Λ = diag(λ₁, …, λₙ) with 0 = λ₁ < λ₂ ≤ … ≤ λₙ, and where V = [V₁, V₂, …, Vₙ] ∈ R^{n×n} has orthonormal columns. The Laplacian eigenmaps estimator is
θ̂^LE = V_{[k]} V_{[k]}ᵀ y,  where  V_{[k]} = [V₁, V₂, …, V_k] ∈ R^{n×k},   (5)
where now k ∈ {1, …, n} acts as a tuning parameter.
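The following sketch (ours; an assumed small-scale setup reusing D and L from the snippet above) implements the three estimators (3)-(5). TV denoising has no closed form, so we hand it to a generic convex solver (cvxpy, assumed installed); that is fine for toy grids but is no substitute for the specialized algorithms cited below.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_smoothing(y, L, lam):
    # closed form of (4): theta_hat = (I + lam * L)^{-1} y
    return spla.spsolve(sp.eye(y.size, format="csr") + lam * L, y)

def laplacian_eigenmaps(y, L, k):
    # (5): project y onto the k lowest-frequency Laplacian eigenvectors.
    # Dense eigendecomposition is fine for small n; on grids one would use
    # the DCT-based fast transforms mentioned below instead.
    _, V = np.linalg.eigh(L.toarray())
    Vk = V[:, :k]
    return Vk @ (Vk.T @ y)

def tv_denoise(y, D, lam):
    # (3) handed to a generic convex solver; no closed form exists
    import cvxpy as cp
    theta = cp.Variable(y.size)
    cp.Problem(cp.Minimize(cp.sum_squares(y - theta)
                           + lam * cp.norm1(D @ theta))).solve()
    return theta.value

rng = np.random.default_rng(0)
y = rng.normal(size=D.shape[1])            # D, L from the previous snippet
theta_ls = laplacian_smoothing(y, L, lam=5.0)
theta_le = laplacian_eigenmaps(y, L, k=10)
```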
Laplacian smoothing and Laplacian eigenmaps are appealing because they are (relatively) simple:
they are just linear transformations of the data y. Indeed, as we are considering G to be a grid, both
estimators in (4), (5) can be computed very quickly, in nearly O(n) time, since the columns of V
here are discrete cosine transform (DCT) basis vectors when d = 1, or Kronecker products thereof,
when d ≥ 2 (e.g., [9, 17, 20, 28]). The TV denoising estimator in (3), on the other hand, cannot be expressed in closed-form, and is much more difficult to compute, especially when d ≥ 2, though
several advances have been made over the years (see the references above, and in particular [2] for an
efficient operator-splitting algorithm and nice literature survey). Importantly, these computational
difficulties are often worth it: TV denoising often practically outperforms ℓ₂-regularized estimators
like Laplacian smoothing (and also Laplacian eigenmaps) in image denoising tasks, as it is able to
better preserve sharp edges and object boundaries (this is now widely accepted, early references are,
e.g., [1, 10, 8]). See Figure 1 for an example, using the often-studied ?cameraman? image.
In the 1d setting, classical theory from nonparametric statistics draws a clear distinction between the
performance of TV denoising and estimators like Laplacian smoothing and Laplacian eigenmaps.
Perhaps surprisingly, this theory has not yet been fully developed in dimensions d ≥ 2. Arguably, the
comparison between TV denoising and Laplacian smoothing and Laplacian eigenmaps is even more
interesting in higher dimensions, because the computational gap between the methods is even larger
(the former method being much more expensive, say in 2d and 3d, than the latter two). Shortly, we
review the 1d theory, and what is known in d-dimensions, for d ≥ 2. First, we introduce notation.
Notation. For deterministic (nonrandom) sequences aₙ, bₙ we write aₙ = O(bₙ) to denote that aₙ/bₙ is upper bounded for all n large enough, and aₙ ≍ bₙ to denote that both aₙ = O(bₙ) and aₙ⁻¹ = O(bₙ⁻¹). Also, for random sequences Aₙ, Bₙ, we write Aₙ = O_P(Bₙ) to denote that Aₙ/Bₙ is bounded in probability. We abbreviate a ∧ b = min{a, b} and a ∨ b = max{a, b}. For an estimator θ̂ of the parameter θ₀ in (1), we define its mean squared error (MSE) to be
MSE(θ̂, θ₀) = (1/n) ‖θ̂ − θ₀‖₂².
[Figure 1 panels: Noisy image; Laplacian smoothing; TV denoising.]
Figure 1: Comparison of Laplacian smoothing and TV denoising for the common "cameraman" image. TV denoising provides a more visually appealing result, and also achieves about a 35% reduction in MSE compared to Laplacian smoothing (MSE being measured to the original image). Both methods were tuned optimally.
The risk of θ̂ is the expectation of its MSE, and for a set K ⊆ Rⁿ, we define the minimax risk and minimax linear risk to be
R(K) = inf_{θ̂} sup_{θ₀∈K} E[ MSE(θ̂, θ₀) ]  and  R_L(K) = inf_{θ̂ linear} sup_{θ₀∈K} E[ MSE(θ̂, θ₀) ],
respectively, where the infimum in the first expression is over all estimators θ̂, and in the second expression over all linear estimators θ̂, meaning that θ̂ = Sy for a matrix S ∈ R^{n×n}. We will also refer to linear estimators as linear smoothers. Note that both Laplacian smoothing in (4) and Laplacian eigenmaps in (5) are linear smoothers, but TV denoising in (3) is not. Lastly, in somewhat of an abuse of nomenclature, we will often call the parameter θ₀ in (1) a function, and a set of possible values for θ₀ as in (2) a function space; this comes from thinking of the components of θ₀ as the evaluations of an underlying function over n locations on the grid. This embedding has no formal importance, but it is convenient notationally, and matches the notation in nonparametric statistics.
Review: TV denoising in 1d. The classical nonparametric statistics literature [13, 12, 18] provides
a more or less complete story for estimation under total variation constraints in 1d. See also [26] for
a translation of these results to a setting more consistent (notationally) with that in the current paper.
Assume that d = 1 and C_n = C > 0, a constant (not growing with n). The results in [12] imply that
R(T₁(C)) ≍ n^{−2/3}.   (6)
Furthermore, [18] proved that the TV denoiser θ̂^TV in (3), with λ ≍ n^{1/3}, satisfies
MSE(θ̂^TV, θ₀) = O_P(n^{−2/3}),   (7)
for all θ₀ ∈ T₁(C), and is thus minimax rate optimal over T₁(C). (In assessing rates here and throughout, we do not distinguish between convergence in expectation versus convergence in probability.) Wavelet denoising, under various choices of wavelet bases, also achieves the minimax rate. However, many simpler estimators do not. To be more precise, it is shown in [12] that
R_L(T₁(C)) ≍ n^{−1/2}.   (8)
Therefore, a substantial number of commonly used nonparametric estimators (such as running mean estimators, smoothing splines, kernel smoothing, Laplacian smoothing, and Laplacian eigenmaps, which are all linear smoothers) have a major deficiency when it comes to estimating functions of bounded variation. Roughly speaking, they will require many more samples to estimate θ₀ within the same degree of accuracy as an optimal method like TV or wavelet denoising (on the order of ε^{−1/2} times more samples to achieve an MSE of ε). Further theory and empirical examples (e.g., [11, 12, 26]) offer the following perspective: linear smoothers cannot cope with functions in T₁(C) that have spatially inhomogeneous smoothness, i.e., that vary smoothly at some locations and vary
wildly at others. Linear smoothers can only produce estimates that are smooth throughout, or wiggly
throughout, but not a mix of the two. They can hence perform well over smaller, more homogeneous
function classes like Sobolev or Hölder classes, but not larger ones like total variation classes (or
more generally, Besov and Triebel classes), and for these, one must use more sophisticated, nonlinear
techniques. A motivating question: does such a gap persist in higher dimensions, between optimal
nonlinear and linear estimators, and if so, how big is it?
Review: TV denoising in multiple dimensions. Recently, [29] established rates for TV denoising over various graph models, including grids, and [16] made improvements, particularly in the case of d-dimensional grids with d ≥ 2. We can combine Propositions 4 and 6 of [16] with Theorem 3 of [29] to give the following result: if d ≥ 2, and C_n is an arbitrary sequence (potentially unbounded with n), then the TV denoiser θ̂^TV in (3) satisfies, over all θ₀ ∈ T_d(C_n),
MSE(θ̂^TV, θ₀) = O_P( C_n log n / n ) for d = 2,  and  MSE(θ̂^TV, θ₀) = O_P( C_n √(log n) / n ) for d ≥ 3,   (9)
with λ ≍ log n for d = 2, and λ ≍ √(log n) for d ≥ 3. Note that, at first glance, this is a very different result from the 1d case. We expand on this next.
2  Summary of results
A gap in multiple dimensions. For estimation of θ₀ in (1) when d ≥ 2, consider, e.g., the simplest possible linear smoother: the mean estimator, θ̂^mean = ȳ1 (where 1 = (1, …, 1) ∈ Rⁿ, the vector of all 1s). Lemma 4, given below, implies that over θ₀ ∈ T_d(C_n), the MSE of the mean estimator is bounded in probability by C_n² log n/n for d = 2, and C_n²/n for d ≥ 3. Compare this to (9). When C_n = C > 0 is a constant, i.e., when the TV of θ₀ is assumed to be bounded (which is assumed for the 1d results in (6), (7), (8)), this means that the TV denoiser and the mean estimator converge to θ₀ at the same rate, basically (ignoring log terms), the "parametric rate" of 1/n, for estimating a finite-dimensional parameter! That TV denoising and such a trivial linear smoother perform comparably
over 2d and 3d grids could not be farther from the story in 1d, where TV denoising is separated by an
unbridgeable gap from all linear smoothers, as shown in (6), (7), (8).
Our results in Section 3 clarify this conundrum, and can be summarized by three points.
• We argue in Section 3.1 that there is a proper "canonical" scaling for the TV class defined in (2). E.g., when d = 1, this yields C_n ≍ 1, a constant, but when d = 2, this yields C_n ≍ √n, and C_n also diverges with n for all d ≥ 3. Sticking with d = 2 as an interesting example, we see that under such a scaling, the MSE rates achieved by TV denoising and the mean estimator, respectively, are drastically different; ignoring log terms, these are
C_n/n ∨ 1/n  and  C_n²/n ∧ 1,   (10)
respectively. Hence, TV denoising has an MSE rate of 1/√n, in a setting where the mean estimator has a constant rate, i.e., a setting where it is not even known to be consistent.
• We show in Section 3.3 that our choice to study the mean estimator here is not somehow
"unlucky" (it is not a particularly bad linear smoother, nor is the upper bound on its MSE
loose): the minimax linear risk over Td(Cn) is on the order Cn²/n, for all d ≥ 2. Thus, even
the best linear smoothers have the same poor performance as the mean over Td(Cn).

• We show in Section 3.2 that the TV estimator is (essentially) minimax optimal over Td(Cn),
as the minimax risk over this class scales as Cn/n (ignoring log terms).
To summarize, these results reveal a significant gap between linear smoothers and optimal estimators
like TV denoising, for estimation over Td(Cn) in d dimensions, with d ≥ 2, as long as Cn scales
appropriately. Roughly speaking, the TV classes encompass a challenging setting for estimation
because they are very broad, containing a wide array of functions: both globally smooth functions,
said to have homogeneous smoothness, and functions with vastly different levels of smoothness at
different grid locations, said to have heterogeneous smoothness. Linear smoothers cannot handle
heterogeneous smoothness, and only nonlinear methods can enjoy good estimation properties over
the entirety of Td(Cn). To reiterate, a telling example is d = 2 with the canonical scaling Cn ≍ √n,
where we see that TV denoising achieves the optimal 1/√n rate (up to log factors); meanwhile, the
best linear smoothers have max risk that is constant over T2(√n). See Figure 2 for an illustration.
Minimax rates over smaller function spaces, and adaptivity. Sections 4 and 5 are focused on
different function spaces, discrete Sobolev spaces, which are ℓ2 analogs of discrete TV spaces as we
have defined them in (2). Under the canonical scaling of Section 3.1, Sobolev spaces are contained in
[Figure 2: two log-log MSE plots over a 2d grid. Left panel, trivial scaling Cn ≍ 1: TV denoising
(fitted slope -0.88), Laplacian smoothing (fitted slope -0.99), and the mean estimator (fitted slope
-1.01) all track the trivial rate n^{-1}. Right panel, canonical scaling Cn ≍ √n: TV denoising (fitted
slope -0.84) tracks the minimax rate n^{-1/2}, while Laplacian smoothing (fitted slope -0.01) and the
mean estimator (fitted slope 0.00) stay flat.]

Figure 2: MSE curves for estimation over a 2d grid, under two very different scalings of Cn: constant and √n.
The parameter θ0 was a "one-hot" signal, with all but one component equal to 0. For each n, the results were
averaged over 5 repetitions, and Laplacian smoothing and TV denoising were tuned for optimal average MSE.
TV spaces, and the former can be roughly thought of as containing functions of more homogeneous
smoothness. The story now is more optimistic for linear smoothers, and the following is a summary.
• In Section 4, we derive minimax rates for Sobolev spaces, and prove that linear smoothers,
in particular Laplacian smoothing and Laplacian eigenmaps, are optimal over these spaces.

• In Section 5, we discuss an interesting phenomenon, a phase transition of sorts, at d = 3
dimensions. When d = 1 or 2, the minimax rates for a TV space and its inscribed Sobolev
space match; when d ≥ 3, they do not, and the inscribed Sobolev space has a faster minimax
rate. Aside from being an interesting statement about the TV and Sobolev function spaces
in high dimensions, this raises an important question of adaptivity over the smaller Sobolev
function spaces. As the minimax rates match for d = 1 and 2, any method optimal over TV
spaces in these dimensions, such as TV denoising, is automatically optimal over the inscribed
Sobolev spaces. But the question remains open for d ≥ 3: does, e.g., TV denoising adapt
to the faster minimax rate over Sobolev spaces? We present empirical evidence to suggest
that this may be true, and leave a formal study to future work.
Other considerations and extensions. There are many problems related to the one that we study
in this paper. Clearly, minimax rates for the TV and Sobolev classes over general graphs, not just
d-dimensional grids, are of interest. Our minimax lower bounds for TV classes actually apply to
generic graphs with bounded max degree, though it is unclear to what extent they are sharp
beyond grids; a detailed study will be left to future work. Another related topic is that of higher-order
smoothness classes, e.g., classes containing functions whose derivatives are of bounded variation.
The natural extension of TV denoising here is called trend filtering, defined via the regularization of
discrete higher-order derivatives. In the 1d setting, minimax rates, the optimality of trend filtering,
and the suboptimality of linear smoothers are already well-understood [26]. Trend filtering has been
defined and studied to some extent on general graphs [29], but no notions of optimality have been
investigated beyond 1d. This will also be left to future work. Lastly, it is worth mentioning that there
are other estimators (i.e., other than the ones we study in detail) that attain or nearly attain minimax
rates over various classes we consider in this paper. E.g., wavelet denoising is known to be optimal
over TV classes in 1d [12]; and comparing recent upper bounds from [19, 16] with the lower bounds
in this work, we see that wavelet denoising is also nearly minimax in 2d (ignoring log terms).
3 Analysis over TV classes

3.1 Canonical scalings for TV and Sobolev classes
We start by establishing what we call a "canonical" scaling for the radius Cn of the TV ball Td(Cn)
in (2), as well as the radius Cn′ of the Sobolev ball Sd(Cn′), defined as

\[ S_d(C_n') = \big\{ \theta : \|D\theta\|_2 \le C_n' \big\}. \tag{11} \]
Proper scalings for Cn, Cn′ will be critical for properly interpreting our new results in d dimensions,
in a way that is comparable to known results for d = 1 (which are usually stated in terms of the 1d
scalings Cn ≍ 1, Cn′ ≍ 1/√n). To study (2), (11), it helps to introduce a third function space,

\[ H_d(1) = \big\{ \theta : \theta_i = f(i_1/\ell, \ldots, i_d/\ell), \ i = 1, \ldots, n, \ \text{for some } f \in H_d^{\mathrm{cont}}(1) \big\}. \tag{12} \]

Above, we have mapped each location i on the grid to a multi-index (i1, . . . , id) ∈ {1, . . . , ℓ}^d, where
ℓ = n^{1/d}, and Hd^cont(1) denotes the (usual) continuous Hölder space on [0, 1]^d, i.e., functions that are
1-Lipschitz with respect to the ℓ∞ norm. We seek an embedding that is analogous to the embedding
of continuous Hölder, Sobolev, and total variation spaces in 1d functional analysis, namely,

\[ H_d(1) \subseteq S_d(C_n') \subseteq T_d(C_n). \tag{13} \]

Our first lemma provides a choice of Cn, Cn′ that makes the above true. Its proof, as with all proofs
in this paper, can be found in the supplementary document.

Lemma 1. For d ≥ 1, the embedding in (13) holds with choices Cn ≍ n^{1−1/d} and Cn′ ≍ n^{1/2−1/d}.
Such choices are called the canonical scalings for the function classes in (2), (11).

As a sanity check, both the (usual) continuous Hölder and Sobolev function spaces in d dimensions
are known to have minimax risks that scale as n^{−2/(2+d)}, in a standard nonparametric regression
setup (e.g., [14]). Under the canonical scaling Cn′ ≍ n^{1/2−1/d}, our results in Section 4 show that the
discrete Sobolev class Sd(n^{1/2−1/d}) also admits a minimax rate of n^{−2/(2+d)}.
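As a quick illustration of the TV side of Lemma 1, the following sketch (our own code, not from the
paper) samples a 1-Lipschitz function on a d-dimensional grid and checks numerically that ‖Dθ‖₁
grows like n^{1−1/d}; the helper grid_tv and the test function are hypothetical stand-ins.

```python
# Numerical check of the canonical TV scaling C_n ~ n^(1 - 1/d) from Lemma 1.
import numpy as np

def grid_tv(theta):
    """||D theta||_1 on the grid: sum of absolute differences along each axis."""
    return sum(np.abs(np.diff(theta, axis=ax)).sum() for ax in range(theta.ndim))

d = 2
for l in [8, 16, 32, 64, 128]:
    n = l ** d
    # f(x) = (x_1 + ... + x_d) / d is 1-Lipschitz w.r.t. the l-infinity norm
    axes = np.meshgrid(*[np.arange(1, l + 1) / l] * d, indexing="ij")
    theta = sum(axes) / d
    print(f"n={n:6d}  ||D theta||_1={grid_tv(theta):9.1f}  n^(1-1/d)={n ** (1 - 1 / d):8.1f}")
```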
3.2 Minimax rates over TV classes

The following is a lower bound for the minimax risk of the TV class Td(Cn) in (2).
Theorem 2. Assume n ≥ 2, and denote dmax = 2d. Then, for constants c > 0, ρ1 ∈ (2.34, 2.35),

\[ R(T_d(C_n)) \;\ge\; c \cdot \begin{cases} \dfrac{\sigma C_n}{d_{\max} n} \sqrt{1 + \log\big(\sigma d_{\max} n / C_n\big)} & \text{if } C_n \in \big[\sigma d_{\max} \sqrt{\log n},\ \sigma d_{\max} n / \sqrt{\rho_1}\big] \\[6pt] C_n^2/(d_{\max}^2 n) \,\wedge\, \sigma^2/n & \text{if } C_n < \sigma d_{\max} \sqrt{\log n} \\[6pt] \sigma^2/\rho_1 & \text{if } C_n > \sigma d_{\max} n / \sqrt{\rho_1} \end{cases} \tag{14} \]

The proof uses a simplifying reduction of the TV class, via Td(Cn) ⊇ B1(Cn/dmax), the latter set
denoting the ℓ1 ball of radius Cn/dmax in Rn. It then invokes a sharp characterization of the minimax
risk in normal means problems over ℓp balls due to [6]. Several remarks are in order.
Remark 1. The first line on the right-hand side in (14) often provides the most useful lower bound.
To see this, recall that under the canonical scaling for TV classes, we have Cn = n^{1−1/d}. For all
d ≥ 2, this certainly implies Cn ∈ [σdmax√(log n), σdmax n/√ρ1], for large n.
Remark 2. Even though its construction is very simple, the lower bound on the minimax risk in (14)
is sharp or nearly sharp in many interesting cases. Assume that Cn ∈ [σdmax√(log n), σdmax n/√ρ1].
The lower bound rate is Cn√(log(n/Cn))/n. When d = 2, we see that this is very close to the upper
bound rate of Cn log n/n achieved by the TV denoiser, as stated in (9). These two differ by at most a
log n factor (achieved when Cn ≍ n). When d ≥ 3, we see that the lower bound rate is even closer
to the upper bound rate of Cn√(log n)/n achieved by the TV denoiser, as in (9). These two now differ
by at most a √(log n) factor (again achieved when Cn ≍ n). We hence conclude that the TV denoiser
is essentially minimax optimal in all dimensions d ≥ 2.
Remark 3. When d = 1, and (say) Cn ≍ 1, the lower bound rate of √(log n)/n given by Theorem 2
is not sharp; we know from [12] (recall (6)) that the minimax rate over T1(1) is n^{−2/3}. The result in
the theorem (and also Theorem 3) in fact holds more generally, beyond grids: for an arbitrary graph
G, its edge incidence matrix D, and Td(Cn) as defined in (2), the result holds for dmax equal to the
max degree of G. It is unclear to what extent this is sharp, for different graph models.
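The containment Td(Cn) ⊇ B1(Cn/dmax) used in the proof of Theorem 2 admits a one-line check
(our rendering of a standard argument): each coordinate θi appears in at most dmax of the differences
making up Dθ, so by the triangle inequality

\[ \|D\theta\|_1 = \sum_{(i,j) \in E} |\theta_i - \theta_j| \;\le\; \sum_{(i,j) \in E} \big(|\theta_i| + |\theta_j|\big) \;\le\; d_{\max} \|\theta\|_1, \]

and hence ‖θ‖₁ ≤ Cn/dmax implies ‖Dθ‖₁ ≤ Cn.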
3.3 Minimax linear rates over TV classes

We now turn to a lower bound on the minimax linear risk of the TV class Td(Cn) in (2).
Theorem 3. Recall the notation dmax = 2d. Then

\[ R_L(T_d(C_n)) \;\ge\; \frac{\sigma^2 C_n^2}{C_n^2 + \sigma^2 d_{\max}^2 n} \,\vee\, \frac{\sigma^2}{n} \;\ge\; \frac{1}{2}\Big( \frac{C_n^2}{d_{\max}^2 n} \wedge \sigma^2 \Big) \vee \frac{\sigma^2}{n}. \tag{15} \]
The proof relies on an elegant meta-theorem on minimax rates from [13], which uses the concept of a
"quadratically convex" set, whose minimax linear risk is the same as that of its hardest rectangular
subproblem. An alternative proof can be given entirely from first principles.

Remark 4. When Cn² grows with n, but not too fast (scales as n, at most), the lower bound rate in
(15) will be Cn²/n. Compared to the Cn/n minimax rate from Theorem 2 (ignoring log terms), we
see a clear gap between optimal nonlinear and linear estimators. In fact, under the canonical scaling
Cn ≍ n^{1−1/d}, for any d ≥ 2, this gap is seemingly huge: the lower bound for the minimax linear
rate will be a constant, whereas the minimax rate from Theorem 2 (ignoring log terms) will be n^{−1/d}.
We now show that the lower bound in Theorem 3 is essentially tight, and remarkably, it is certified by
analyzing two trivial linear estimators: the mean estimator and the identity estimator.

Lemma 4. Let Mn denote the largest column norm of D†. For the mean estimator θ̂mean = ȳ·1,

\[ \sup_{\theta_0 \in T_d(C_n)} \mathbb{E}\,\big[\mathrm{MSE}(\hat\theta^{\mathrm{mean}}, \theta_0)\big] \;\le\; \frac{\sigma^2 + C_n^2 M_n^2}{n}. \]

From Proposition 4 in [16], we have Mn = O(√(log n)) when d = 2 and Mn = O(1) when d ≥ 3.

The risk of the identity estimator θ̂id = y is clearly σ². Combining this logic with Lemma 4 gives the
upper bound RL(Td(Cn)) ≤ (σ² + Cn²Mn²)/n ∧ σ². Comparing this with the lower bound described
in Remark 4, we see that the two rates basically match, modulo the Mn² factor in the upper bound,
which only provides an extra log n factor when d = 2. The takeaway message: in the sense of max
risk, the best linear smoother does not perform much better than the trivial estimators.

Additional empirical experiments, similar to those shown in Figure 2, are given in the supplement.
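For readers who want to reproduce the flavor of these comparisons, here is a minimal numerical
sketch (our own code, not the authors'); it assumes scikit-image's denoise_tv_chambolle as a
stand-in TV solver, uses a one-hot spike scaled so that Cn ≍ √n, and tunes the TV weight by oracle
search over a small grid.

```python
# Mean estimator vs. (oracle-tuned) TV denoising on 2d grids, one-hot signal.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
sigma = 1.0

for side in [16, 32, 64, 128]:
    n = side * side
    theta0 = np.zeros((side, side))
    theta0[side // 2, side // 2] = np.sqrt(n)  # spike of height sqrt(n): TV ~ 4*sqrt(n)
    y = theta0 + sigma * rng.standard_normal((side, side))

    mse_mean = np.mean((y.mean() - theta0) ** 2)  # mean estimator: constant MSE
    mse_tv = min(                                  # TV denoiser: roughly n^(-1/2) MSE
        np.mean((denoise_tv_chambolle(y, weight=w) - theta0) ** 2)
        for w in [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
    )
    print(f"n={n:6d}  mean MSE={mse_mean:7.3f}  TV MSE={mse_tv:7.3f}")
```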
4 Analysis over Sobolev classes

Our first result here is a lower bound on the minimax risk of the Sobolev class Sd(Cn′) in (11).
Theorem 5. For a universal constant c > 0,

\[ R(S_d(C_n')) \;\ge\; \frac{c}{n} \Big( (n\sigma^2)^{\frac{2}{d+2}} (C_n')^{\frac{2d}{d+2}} \,\wedge\, n\sigma^2 \,\wedge\, n^{2/d} (C_n')^2 \Big) + \frac{\sigma^2}{n}. \]

Elegant tools for minimax analysis from [13], which leverage the fact that the ellipsoid Sd(Cn′) is
orthosymmetric and quadratically convex (after a rotation), are used to prove the result.
The next theorem gives upper bounds, certifying that the above lower bound is tight, and showing
that Laplacian eigenmaps and Laplacian smoothing, both linear smoothers, are optimal over Sd(Cn′).

Theorem 6. For Laplacian eigenmaps, θ̂LE in (5), with k ≍ ((n(Cn′)^d)^{2/(d+2)} ∨ 1) ∧ n, we have

\[ \sup_{\theta_0 \in S_d(C_n')} \mathbb{E}\,\big[\mathrm{MSE}(\hat\theta^{\mathrm{LE}}, \theta_0)\big] \;\le\; \frac{c}{n} \Big( (n\sigma^2)^{\frac{2}{d+2}} (C_n')^{\frac{2d}{d+2}} \,\wedge\, n\sigma^2 \,\wedge\, n^{2/d} (C_n')^2 \Big) + \frac{c\sigma^2}{n}, \]

for a universal constant c > 0, and n large enough. When d = 1, 2, or 3, the same bound holds for
Laplacian smoothing θ̂LS in (5), with λ ≍ (n/(Cn′)²)^{2/(d+2)} (and a possibly different constant c).
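To make the two linear smoothers concrete, here is a small sketch (our code; the estimators are
defined in the paper's equation (5), which we render here in the standard way): Laplacian smoothing
solves a Laplacian-ridge problem, and Laplacian eigenmaps projects onto the bottom-k eigenvectors
of the grid Laplacian.

```python
# Laplacian smoothing and Laplacian eigenmaps on a 2d grid graph.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def grid_laplacian(side):
    """Graph Laplacian L = D^T D of a side-by-side 2d grid."""
    D1 = sp.diags([1, -1], [0, 1], shape=(side - 1, side))  # 1d differences
    I = sp.identity(side)
    D = sp.vstack([sp.kron(I, D1), sp.kron(D1, I)])         # edge incidence matrix
    return (D.T @ D).tocsc()

side = 32
L = grid_laplacian(side)
y = np.random.default_rng(1).standard_normal(side * side)

# Laplacian smoothing: argmin ||y - theta||_2^2 + lam * theta^T L theta.
lam = 5.0
theta_ls = spla.spsolve(sp.identity(side * side, format="csc") + lam * L, y)

# Laplacian eigenmaps: project y onto the k lowest-frequency eigenvectors of L.
k = 25
_, evecs = spla.eigsh(L, k=k, which="SM")
theta_le = evecs @ (evecs.T @ y)
```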
5 A phase transition, and adaptivity
The TV and Sobolev classes in (2) and (11), respectively, display a curious relationship. We reflect on
Theorems 2 and 5, using, for concreteness, the canonical scalings Cn ≍ n^{1−1/d} and Cn′ ≍ n^{1/2−1/d}
(that, recall, guarantee Sd(Cn′) ⊆ Td(Cn)). When d = 1, both the TV and Sobolev classes have a
minimax rate of n^{−2/3} (this TV result is actually due to [12], as stated in (6), not Theorem 2). When
d = 2, both the TV and Sobolev classes again have the same minimax rate of n^{−1/2}, the caveat being
that the rate for the TV class has an extra √(log n) factor. But for all d ≥ 3, the rates for the canonical TV
and Sobolev classes differ, and the smaller Sobolev spaces have faster rates than their inscribing TV
spaces. This may be viewed as a phase transition at d = 3; see Table 1.

We may paraphrase to say that 2d is just like 1d, in that expanding the Sobolev ball into a larger TV
ball does not hurt the minimax rate, and methods like TV denoising are automatically adaptive, i.e.,
Function class                 | Dimension 1 | Dimension 2       | Dimension d ≥ 3
TV ball Td(n^{1−1/d})          | n^{−2/3}    | n^{−1/2}√(log n)  | n^{−1/d}√(log n)
Sobolev ball Sd(n^{1/2−1/d})   | n^{−2/3}    | n^{−1/2}          | n^{−2/(2+d)}

Table 1: Summary of rates for canonically-scaled TV and Sobolev spaces.
[Figure 3: two log-log MSE plots for a "linear" signal. Left panel, 2d: TV denoising (fitted slope
-0.54) and Laplacian smoothing (fitted slope -0.62); here the TV-ball and Sobolev-ball minimax
rates coincide at n^{-1/2}. Right panel, 3d: TV denoising (fitted slope -0.44) and Laplacian smoothing
(fitted slope -0.50); here the TV-ball minimax rate is n^{-1/3} and the Sobolev-ball minimax rate is
n^{-2/5}.]

Figure 3: MSE curves for estimating a "linear" signal, a very smooth signal, over 2d and 3d grids. For each n,
the results were averaged over 5 repetitions, and Laplacian smoothing and TV denoising were tuned for best
average MSE performance. The signal was set to satisfy ‖Dθ0‖2 ≍ n^{1/2−1/d}, matching the canonical scaling.
optimal over both the bigger and smaller classes. However, as soon as we enter the 3d world, it is no
longer clear whether TV denoising can adapt to the smaller, inscribed Sobolev ball, whose minimax
rate is faster, n^{−2/5} versus n^{−1/3} (ignoring log factors). Theoretically, this is an interesting open
problem that we do not approach in this paper and leave to future work.

We do, however, investigate the matter empirically: see Figure 3, where we run Laplacian smoothing
and TV denoising on a highly smooth "linear" signal θ0. This is constructed so that each component
θi is proportional to i1 + i2 + . . . + id (using the multi-index notation (i1, . . . , id) of (12) for grid
location i), and the Sobolev norm is ‖Dθ0‖2 ≍ n^{1/2−1/d}. Arguably, these are among the "hardest"
types of functions for TV denoising to handle. The left panel, in 2d, is a case in which we know that
TV denoising attains the minimax rate; the right panel, in 3d, is a case in which we do not, though
empirically, TV denoising surely seems to be doing better than the slower minimax rate of n^{−1/3}
(ignoring log terms) that is associated with the larger TV ball.
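A small sketch of this "linear" test signal (our construction, written to match the description above):
every axis-aligned grid edge difference equals the slope c, so ‖Dθ‖₂ = c·√(#edges), and we pick c
to hit the canonical Sobolev norm.

```python
# Construct theta_i ∝ i1 + ... + id with ||D theta||_2 = n^(1/2 - 1/d).
import itertools
import numpy as np

d, l = 3, 17
n = l ** d
idx = np.array(list(itertools.product(range(1, l + 1), repeat=d)))

n_edges = d * (l - 1) * l ** (d - 1)         # axis-aligned edges of the l^d grid
c = n ** (0.5 - 1.0 / d) / np.sqrt(n_edges)  # slope: each edge difference equals c
theta = c * idx.sum(axis=1)
```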
Even if TV denoising is shown to be minimax optimal over the inscribed Sobolev balls when d ≥ 3,
note that this does not necessarily mean that we should scrap Laplacian smoothing in favor of TV
denoising, in all problems. Laplacian smoothing is the unique Bayes estimator in a normal means
model under a certain Markov random field prior (e.g., [22]); statistical decision theory therefore tells
us that it is admissible, i.e., no other estimator (TV denoising included) can uniformly dominate it.
6 Discussion
We conclude with a quote from Albert Einstein: "Everything should be made as simple as possible,
but no simpler". In characterizing the minimax rates for TV classes, defined over d-dimensional grids,
we have shown that simple methods like Laplacian smoothing and Laplacian eigenmaps (or even, in
fact, all linear estimators) must be passed up in favor of more sophisticated, nonlinear estimators,
like TV denoising, if one wants to attain the optimal max risk. Such a result was previously known
when d = 1; our work has extended it to all dimensions d ≥ 2. We also characterized the minimax
rates over discrete Sobolev classes, revealing an interesting phase transition where the optimal rates
over TV and Sobolev spaces, suitably scaled, match when d = 1 and 2 but diverge for d ≥ 3. It is an
open question as to whether an estimator like TV denoising can be optimal over both spaces, for all d.
Acknowledgements. We thank Jan-Christian Hütter and Philippe Rigollet, whose paper [16] inspired
us to think carefully about problem scalings (i.e., radii of TV and Sobolev classes) in the first place.
YW was supported by NSF Award BCS-0941518 to CMU Statistics, a grant by Singapore NRF under
its International Research Centre @ Singapore Funding Initiative, and a Baidu Scholarship. RT was
supported by NSF Grants DMS-1309174 and DMS-1554123.
References
[1] Robert Acar and Curtis R. Vogel. Analysis of total variation penalty methods. Inverse Problems, 10:1217-1229, 1994.
[2] Álvaro Barbero and Suvrit Sra. Modular proximal optimization for multidimensional total-variation regularization. arXiv: 1411.0589, 2014.
[3] Mikhail Belkin and Partha Niyogi. Using manifold structure for partially labelled classification. Advances in Neural Information Processing Systems, 15, 2002.
[4] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.
[5] Mikhail Belkin and Partha Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Conference on Learning Theory (COLT-05), 18, 2005.
[6] Lucien Birgé and Pascal Massart. Gaussian model selection. Journal of the European Mathematical Society, 3(3):203-268, 2001.
[7] Antonin Chambolle and Jerome Darbon. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84:288-307, 2009.
[8] Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167-188, 1997.
[9] Samuel Conte and Carl de Boor. Elementary Numerical Analysis: An Algorithmic Approach. McGraw-Hill, New York, 1980. International Series in Pure and Applied Mathematics.
[10] David Dobson and Fadil Santosa. Recovery of blocky images from noisy and blurred data. SIAM Journal on Applied Mathematics, 56(4):1181-1198, 1996.
[11] David Donoho and Iain Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425-455, 1994.
[12] David Donoho and Iain Johnstone. Minimax estimation via wavelet shrinkage. Annals of Statistics, 26(8):879-921, 1998.
[13] David Donoho, Richard Liu, and Brenda MacGibbon. Minimax risk over hyperrectangles, and implications. Annals of Statistics, 18(3):1416-1437, 1990.
[14] László Györfi, Michael Kohler, Adam Krzyzak, and Harro Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, New York, 2002.
[15] Holger Hoefling. A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4):984-1006, 2010.
[16] Jan-Christian Hütter and Philippe Rigollet. Optimal rates for total variation denoising. In Conference on Learning Theory (COLT-16), 2016. To appear.
[17] Hans Künsch. Robust priors for smoothing and image restoration. Annals of the Institute of Statistical Mathematics, 46(1):1-19, 1994.
[18] Enno Mammen and Sara van de Geer. Locally adaptive regression splines. Annals of Statistics, 25(1):387-413, 1997.
[19] Deanna Needell and Rachel Ward. Stable image reconstruction using total variation minimization. SIAM Journal on Imaging Sciences, 6(2):1035-1058, 2013.
[20] Michael Ng, Raymond Chan, and Wun-Cheung Tang. A fast algorithm for deblurring models with Neumann boundary conditions. SIAM Journal on Scientific Computing, 21(3):851-866, 1999.
[21] Leonid Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60:259-268, 1992.
[22] James Sharpnack and Aarti Singh. Identifying graph-structured activation patterns in networks. Advances in Neural Information Processing Systems, 13, 2010.
[23] James Sharpnack, Alessandro Rinaldo, and Aarti Singh. Sparsistency of the edge lasso over graphs. Proceedings of the International Conference on Artificial Intelligence and Statistics, 15:1028-1036, 2012.
[24] Alexander Smola and Risi Kondor. Kernels and regularization on graphs. Proceedings of the Annual Conference on Learning Theory, 16, 2003.
[25] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67(1):91-108, 2005.
[26] Ryan J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. Annals of Statistics, 42(1):285-323, 2014.
[27] Ryan J. Tibshirani and Jonathan Taylor. The solution path of the generalized lasso. Annals of Statistics, 39(3):1335-1371, 2011.
[28] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248-272, 2008.
[29] Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan J. Tibshirani. Trend filtering on graphs. Journal of Machine Learning Research, 2016. To appear.
[30] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. International Conference on Machine Learning (ICML-03), 20, 2003.
6,159 | 6,571 | Exponential Family Embeddings
Maja Rudolph
Columbia University
Francisco J. R. Ruiz
Univ. of Cambridge
Columbia University
Stephan Mandt
Columbia University
David M. Blei
Columbia University
Abstract
Word embeddings are a powerful approach for capturing semantic similarity among
terms in a vocabulary. In this paper, we develop exponential family embeddings,
a class of methods that extends the idea of word embeddings to other types of
high-dimensional data. As examples, we studied neural data with real-valued
observations, count data from a market basket analysis, and ratings data from
a movie recommendation system. The main idea is to model each observation
conditioned on a set of other observations. This set is called the context, and
the way the context is defined is a modeling choice that depends on the problem.
In language the context is the surrounding words; in neuroscience the context is
close-by neurons; in market basket data the context is other items in the shopping
cart. Each type of embedding model defines the context, the exponential family of
conditional distributions, and how the latent embedding vectors are shared across
data. We infer the embeddings with a scalable algorithm based on stochastic
gradient descent. On all three applications (neural activity of zebrafish, users'
shopping behavior, and movie ratings) we found exponential family embedding
models to be more effective than other types of dimension reduction. They better
reconstruct held-out data and find interesting qualitative structure.
1 Introduction
Word embeddings are a powerful approach for analyzing language (Bengio et al., 2006; Mikolov et al.,
2013a,b; Pennington et al., 2014). A word embedding method discovers distributed representations of
words; these representations capture the semantic similarity between the words and reflect a variety of
other linguistic regularities (Rumelhart et al., 1986; Bengio et al., 2006; Mikolov et al., 2013c). Fitted
word embeddings can help us understand the structure of language and are useful for downstream
tasks based on text.
There are many variants, adaptations, and extensions of word embeddings (Mikolov et al., 2013a,b;
Mnih and Kavukcuoglu, 2013; Levy and Goldberg, 2014; Pennington et al., 2014; Vilnis and McCallum, 2015), but each reflects the same main ideas. Each term in a vocabulary is associated with
two latent vectors, an embedding and a context vector. These two types of vectors govern conditional
probabilities that relate each word to its surrounding context. Specifically, the conditional probability
of a word combines its embedding and the context vectors of its surrounding words. (Different methods combine them differently.) Given a corpus, we fit the embeddings by maximizing the conditional
probabilities of the observed text.
In this paper we develop the exponential family embedding (ef-emb), a class of models that generalizes
the spirit of word embeddings to other types of high-dimensional data. Our motivation is that other
types of data can benefit from the same assumptions that underlie word embeddings, namely that
a data point is governed by the other data in its context. In language, this is the foundational idea
that words with similar meanings will appear in similar contexts (Harris, 1954). We use the tools of
exponential families (Brown, 1986) and generalized linear models (glms) (McCullagh and Nelder,
1989) to adapt this idea beyond language.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
As one example beyond language, we will study computational neuroscience. Neuroscientists measure
sequential neural activity across many neurons in the brain. Their goal is to discover patterns in these
data with the hope of better understanding the dynamics and connections among neurons. In this
example, a context can be defined as the neural activities of other nearby neurons, or as neural activity
in the past. Thus, it is plausible that the activity of each neuron depends on its context. We will use
this idea to fit latent embeddings of neurons, representations of neurons that uncover hidden features
which help suggest their roles in the brain.
Another example we study involves shoppers at the grocery store. Economists collect shopping
data (called "market basket data") and are interested in building models of purchase behavior for
downstream econometric analysis, e.g., to predict demand and market changes. To build such models,
they seek features of items that are predictive of when they are purchased and in what quantity. Similar
to language, purchasing an item depends on its context, i.e., the other items in the shopping cart. In
market basket data, Poisson embeddings can capture important econometric concepts, such as items
that tend not to occur together but occur in the same contexts (substitutes) and items that co-occur,
but never one without the other (complements).
We define an ef-emb, such as one for neuroscience or shopping data, with three ingredients. (1)
We define the context, which specifies which other data points each observation depends on. (2) We
define the conditional exponential family. This involves setting the appropriate distribution, such as a
Gaussian for real-valued data or a Poisson for count data, and the way to combine embeddings and
context vectors to form its natural parameter. (3) We define the embedding structure, how embeddings
and context vectors are shared across the conditional distributions of each observation. These three
ingredients enable a variety of embedding models.
We describe ef-emb models and develop efficient algorithms for fitting them. We show how existing methods, such as continuous bag of words (cbow) (Mikolov et al., 2013a) and negative
sampling (Mikolov et al., 2013b), can each be viewed as an ef-emb. We study our methods on
three different types of data?neuroscience data, shopping data, and movie ratings data. Mirroring the success of word embeddings, ef-emb models outperform traditional dimension reduction,
such as exponential family principal component analysis (pca) (Collins et al., 2001) and Poisson
factorization (Gopalan et al., 2015), and find interpretable features of the data.
Related work. ef-emb models generalize cbow (Mikolov et al., 2013a) in the same way that
exponential family pca (Collins et al., 2001) generalizes pca, glms (McCullagh and Nelder, 1989)
generalize regression, and deep exponential families (Ranganath et al., 2015) generalize sigmoid belief
networks (Neal, 1990). A linear ef-emb (which we define precisely below) relates to context-window-based embedding methods such as cbow or the vector log-bilinear language model (vlbl) (Mikolov
et al., 2013a; Mnih and Kavukcuoglu, 2013), which model a word given its context. The more
general ef-emb relates to embeddings with a nonlinear component, such as the skip-gram (Mikolov
et al., 2013a) or the inverse vector log-bilinear language model (ivlbl) (Mnih and Kavukcuoglu,
2013). (These methods might appear linear but, when viewed as a conditional probabilistic model,
the normalizing constant of each word induces a nonlinearity.)
Researchers have developed different approximations of the word embedding objective to scale the
procedure. These include noise contrastive estimation (Gutmann and Hyvärinen, 2010; Mnih and Teh,
2012), hierarchical softmax (Mikolov et al., 2013b), and negative sampling (Mikolov et al., 2013a).
We explain in Section 2.2 and Supplement A how negative sampling corresponds to biased stochastic
gradients of an ef-emb objective.
2 Exponential Family Embeddings
We consider a matrix x = x_{1:I} of I observations, where each x_i is a D-vector. As one example, in
language x_i is an indicator vector for the word at position i and D is the size of the vocabulary. As
another example, in neural data x_i is the neural activity measured at index pair i = (n, t), where n
indexes a neuron and t indexes a time point; each measurement is a scalar (D = 1).
The goal of an exponential family embedding (ef-emb) is to derive useful features of the data. There
are three ingredients: a context function, a conditional exponential family, and an embedding structure.
These ingredients work together to form the objective. First, the ef-emb models each data point
conditional on its context; the context function determines which other data points are at play. Second,
the conditional distribution is an appropriate exponential family, e.g., a Gaussian for real-valued data.
Its parameter is a function of the embeddings of both the data point and its context. Finally, the
embedding structure determines which embeddings are used when the ith point appears, either as
data or in the context of another point. The objective is the sum of the log probabilities of each data
point given its context. We describe each ingredient, followed by the ef-emb objective. Examples
are in Section 2.1.
Context. Each data point i has a context ci , which is a set of indices of other data points. The
ef-emb models the conditional distribution of xi given the data points in its context.
The context is a modeling choice; different applications will require different types of context. In
language, the data point is a word and the context is the set of words in a window around it. In neural
data, the data point is the activity of a neuron at a time point and the context is the activity of its
surrounding neurons at the same time point. (It can also include neurons at future time or in the past.)
In shopping data, the data point is a purchase and the context is the other items in the cart.
Conditional exponential family. An ef-emb models each data point x_i conditional on its context
x_{c_i}. The distribution is an appropriate exponential family,

\[ x_i \mid x_{c_i} \sim \mathrm{ExpFam}\big(\eta_i(x_{c_i}),\, t(x_i)\big), \tag{1} \]

where η_i(x_{c_i}) is the natural parameter and t(x_i) is the sufficient statistic. In language modeling, this
family is usually a categorical distribution. Below, we will study Gaussian and Poisson.
We parameterize the conditional with two types of vectors, embeddings and context vectors. The
embedding of the ith data point helps govern its distribution; we denote it ρ[i] ∈ R^{K×D}. The context
vector of the ith data point helps govern the distribution of data for which i appears in their context;
we denote it α[i] ∈ R^{K×D}.
How to define the natural parameter as a function of these vectors is a modeling choice. It captures
how the context interacts with an embedding to determine the conditional distribution of a data
point. Here we focus on the linear embedding, where the natural parameter is a function of a linear
combination of the latent vectors,

\[ \eta_i(x_{c_i}) = f_i\Big( \rho[i]^\top \sum_{j \in c_i} \alpha[j]\, x_j \Big). \tag{2} \]
Following the nomenclature of generalized linear models (glms), we call f_i(·) the link function. We
will see several examples of link functions in Section 2.1.
This is the setting of many existing word embedding models, though not all. Other models, such as
the skip-gram, determine the probability through a "reverse" distribution of context words given the
data point. These non-linear embeddings are still instances of an ef-emb.
Embedding structure. The goal of an ef-emb is to find embeddings and context vectors that
describe features of the data. The embedding structure determines how an ef-emb shares these
vectors across the data. It is through sharing the vectors that we learn an embedding for the object of
primary interest, such as a vocabulary term, a neuron, or a supermarket product. In language the same
parameters ρ[i] = ρ and α[i] = α are shared across all positions i. In neural data, observations share
parameters when they describe the same neuron. Recall that the index connects to both a neuron and
time point i = (n, t). We share parameters with ρ[i] = ρ_n and α[i] = α_n to find embeddings and
context vectors that describe the neurons. Other variants might tie the embedding and context vectors
to find a single set of latent variables, ρ[i] = α[i].
The objective function. The ef-emb objective sums the log conditional probabilities of each data
point, adding regularizers for the embeddings and context vectors.¹ We use log probability functions
as regularizers, e.g., a Gaussian probability leads to ℓ2 regularization. We also use regularizers to
constrain the embeddings, e.g., to be non-negative. Thus, the objective is

\[ \mathcal{L}(\rho, \alpha) = \sum_{i=1}^{I} \Big( \eta_i^\top t(x_i) - a(\eta_i) \Big) + \log p(\rho) + \log p(\alpha). \tag{3} \]

¹ One might be tempted to see this as a probabilistic model that is conditionally specified. However, in general
it does not have a consistent joint distribution (Arnold et al., 2001).
We maximize this objective with respect to the embeddings and context vectors. In Section 2.2 we
explain how to fit it with stochastic gradients.
Equation (3) can be seen as a likelihood function for a bank of glms (McCullagh and Nelder, 1989).
Each data point is modeled as a response conditional on its "covariates," which combine the context
vectors and context, e.g., as in Equation (2); the coefficient for each response is the embedding itself.
We use properties of exponential families and results around glms to derive efficient algorithms for
ef-emb models.
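As a concrete instance of this, for the linear embedding in (2) with an identity link, the exponential
family identity ∇_η a(η) = E[t(x)] gives per-observation gradient contributions of the form (a sketch
in our notation, writing s_i = Σ_{j∈c_i} α[j] x_j for the context sum):

\[ \nabla_{\rho[i]} \big( \eta_i^\top t(x_i) - a(\eta_i) \big) \;=\; s_i \,\big( t(x_i) - \mathbb{E}[\,t(x_i) \mid \eta_i\,] \big)^\top, \]

so each term pushes the embedding along its context, weighted by the residual of the sufficient
statistic, exactly as in a glm.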
2.1 Examples
We highlight the versatility of ef-emb models with three example models and their variations. We
develop the Gaussian embedding (g-emb) for analyzing real observations from a neuroscience
application; we also introduce a nonnegative version, the nonnegative Gaussian embedding (ng-emb). We develop two Poisson embedding models, Poisson embedding (p-emb) and additive Poisson
embedding (ap-emb), for analyzing count data; these have different link functions. We present
a categorical embedding model that corresponds to the continuous bag of words (cbow) word
embedding (Mikolov et al., 2013a). Finally, we present a Bernoulli embedding (b-emb) for binary
data. In Section 2.2 we explain how negative sampling (Mikolov et al., 2013b) corresponds to biased
stochastic gradients of the b-emb objective. For convenience, these acronyms are in Table 1.
ef-emb   exponential family embedding
g-emb    Gaussian embedding
ng-emb   nonnegative Gaussian embedding
p-emb    Poisson embedding
ap-emb   additive Poisson embedding
b-emb    Bernoulli embedding

Table 1: Acronyms used for exponential family embeddings.
Example 1: Neural data and Gaussian observations. Consider the (calcium) expression of a large
population of zebrafish neurons (Ahrens et al., 2013). The data are processed to extract the locations
of the N neurons and the neural activity x_i = x_{(n,t)} across location n and time t. The goal is to
model the similarity between neurons in terms of their behavior, to embed each neuron in a latent
space such that neurons with similar behavior are close to each other.

We consider two neurons similar if they behave similarly in the context of the activity pattern of
their surrounding neurons. Thus we define the context for data index i = (n, t) to be the indices
of the activity of nearby neurons at the same time. We find the K-nearest neighbors (knn) of each
neuron (using a Ball-tree algorithm) according to their spatial distance in the brain. We use this set to
construct the context c_i = c_{(n,t)} = {(m, t) : m ∈ knn(n)}. This context varies with each neuron, but
is constant over time.
With the context defined, each data point x_i is modeled with a conditional Gaussian. The conditional
mean is the inner product from Equation (2), where the context is the simultaneous activity of the
nearest neurons and the link function is the identity. The conditionals of two observations share
parameters if they correspond to the same neuron. The embedding structure is thus ρ[i] = ρ_n and
α[i] = α_n for all i = (n, t). Similar to word embeddings, each neuron has two distinct latent vectors:
the neuron embedding ρ_n ∈ R^K and the context vector α_n ∈ R^K.

These ingredients, along with a regularizer, combine to form a neural embedding objective. g-emb
uses ℓ2 regularization (i.e., a Gaussian prior); ng-emb constrains the vectors to be nonnegative (ℓ2
regularization on the logarithm, i.e., a log-normal prior).
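A minimal numpy sketch of this conditional (our code, not the authors'; rho, alpha, and the random
stand-in for the knn context are hypothetical names), computing the g-emb Gaussian log-likelihood
with the mean from Equation (2):

```python
# Gaussian embedding (g-emb) log-likelihood for scalar neural activity (D = 1).
import numpy as np

N, T, K = 100, 500, 10
rng = np.random.default_rng(0)
x = rng.standard_normal((N, T))            # lagged activity matrix
rho = 0.1 * rng.standard_normal((N, K))    # neuron embeddings
alpha = 0.1 * rng.standard_normal((N, K))  # context vectors
context = [rng.choice(N, size=10, replace=False) for _ in range(N)]  # stand-in knn

def gemb_log_lik(x, rho, alpha, context, sigma=1.0):
    """Gaussian log-lik (up to constants): mean = rho_n . sum_m alpha_m x[m,t]."""
    ll = 0.0
    for n in range(len(context)):
        mean_t = rho[n] @ (alpha[context[n]].T @ x[context[n]])  # shape (T,)
        ll -= 0.5 * np.sum((x[n] - mean_t) ** 2) / sigma ** 2
    return ll

print(gemb_log_lik(x, rho, alpha, context))
```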
Example 2: Shopping data and Poisson observations. We also study data about people shopping.
The data contains the individual purchases of anonymous users in chain grocery and drug stores.
There are N different items and T trips to the stores among all households. The data is a sparse
N × T matrix of purchase counts. The entry x_i = x_{(n,t)} indicates the number of units of item n that
was purchased on trip t. Our goal is to learn a latent representation for each product that captures the
similarity between them.
We consider items to be similar if they tend to be purchased together with similar groups of other items.
The context for observation x_i is thus the other items in the shopping basket on the same trip. For the
purchase count at index i = (n, t), the context is c_i = {j = (m, t) : m ≠ n}.
We use conditional Poisson distributions to model the count data. The sufficient statistic of the Poisson
is t(x_i) = x_i, and its natural parameter is the logarithm of the rate (i.e., the mean). We set the natural
parameter as in Equation (2), with the link function defined below. The embedding structure is the
same as in g-emb, producing embeddings for the items.

We explore two choices for the link function. p-emb uses an identity link function. Since the
conditional mean is the exponentiated natural parameter, this implies that the context items contribute
multiplicatively to the mean. (We use ℓ2-regularization on the embeddings.) Alternatively, we can
constrain the parameters to be nonnegative and set the link function f(·) = log(·). This is ap-emb, a
model with an additive mean parameterization. (We use ℓ2-regularization in log-space.) ap-emb
only captures positive correlations between items.
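Concretely, these two link choices give conditional means of the form (our rendering of the two
parameterizations):

\[ \text{p-emb:}\quad \mathbb{E}[x_i \mid x_{c_i}] = \exp\Big( \rho[i]^\top \sum_{j \in c_i} \alpha[j]\, x_j \Big), \qquad \text{ap-emb:}\quad \mathbb{E}[x_i \mid x_{c_i}] = \rho[i]^\top \sum_{j \in c_i} \alpha[j]\, x_j, \]

with ρ, α constrained nonnegative in ap-emb. The exponential turns sums over the context into
products of per-item factors, while the additive form can only accumulate positive contributions.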
Example 3: Text modeling and categorical observations. ef-embs are inspired by word embeddings, such as cbow (Mikolov et al., 2013a). cbow is a special case of an ef-emb; it is equivalent
to a multivariate ef-emb with categorical conditionals. In the notation here, each x_i is an indicator
vector of the ith word. Its dimension is the vocabulary size. The context of the ith word are the other
words in a window around it (of size w), c_i = {j ≠ i : i − w ≤ j ≤ i + w}.

The distribution of x_i is categorical, conditioned on the surrounding words x_{c_i}; this is a softmax
regression. It has natural parameter as in Equation (2) with an identity link function. The embedding
structure imposes that parameters are shared across all observed words. The embeddings are shared
globally (ρ[i] = ρ, α[i] = α ∈ R^{N×K}). The word and context embedding of the nth word is the nth
row of ρ and α respectively. cbow does not use any regularizer.
Example 4: Text modeling and binary observations. One way to simplify the cbow objective is
with a model of each entry of the indicator vectors. The data are binary and indexed by i = (n, v),
where n is the position in the text and v indexes the vocabulary; the variable x_{n,v} is the indicator that
word n is equal to term v. (This model relaxes the constraint that for any n only one x_{n,v} will be on.)
With this notation, the context is c_i = {(j, v′) : ∀v′, j ≠ n, n − w ≤ j ≤ n + w}; the embedding
structure is ρ[i] = ρ[(n, v)] = ρ_v and α[i] = α[(n, v)] = α_v.

We can consider different conditional distributions in this setting. As one example, set the conditional
distribution to be a Bernoulli with an identity link; we call this the b-emb model for text. In
Section 2.2 we show that biased stochastic gradients of the b-emb objective recover negative
sampling (Mikolov et al., 2013b). As another example, set the conditional distribution to Poisson with
link f(·) = log(·). The corresponding embedding model relates closely to Poisson approximations of
distributed multinomial regression (Taddy et al., 2015).
2.2 Inference and Connection to Negative Sampling

We fit the embeddings ρ[i] and context vectors α[i] by maximizing the objective function in Equation (3). We use stochastic gradient descent (sgd) with Adagrad (Duchi et al., 2011). We can derive
the analytic gradient of the objective function using properties of the exponential family (see the
Supplement for details). The gradients linearly combine the data in summations we can approximate
using subsampled minibatches of data. This reduces the computational cost.
When the data is sparse, we can split the gradient into the summation of two terms: one term
corresponding to all data entries i for which x_i ≠ 0, and one term corresponding to those data entries
x_i = 0. We compute the first term of the gradient exactly (when the data is sparse there are not many
summations to make) and we estimate the second term by subsampling the zero entries. Compared
to computing the full gradient, this reduces the complexity when most of the entries xi are zero. But
it retains the strong information about the gradient that comes from the non-zero entries.
This relates to negative sampling, which is used to approximate the skip-gram objective (Mikolov
et al., 2013b). Negative sampling re-defines the skip-gram objective to distinguish target (observed)
words from randomly drawn words, using logistic regression. The gradient of the stochastic objective
is identical to a noisy but biased estimate of the gradient for a b-emb model. To obtain the equivalence,
preserve the terms for the non-zero data and subsample terms for the zero data. While an unbiased
Model          | Single neuron held out        | 25% of neurons held out
               | K = 10         K = 100        | K = 10         K = 100
fa             | 0.290 ± 0.003  0.275 ± 0.003  | 0.290 ± 0.003  0.276 ± 0.003
g-emb (c=10)   | 0.239 ± 0.006  0.239 ± 0.005  | 0.246 ± 0.004  0.245 ± 0.003
g-emb (c=50)   | 0.227 ± 0.002  0.222 ± 0.002  | 0.235 ± 0.003  0.232 ± 0.003
ng-emb (c=10)  | 0.263 ± 0.004  0.261 ± 0.004  | 0.250 ± 0.004  0.261 ± 0.004

Table 2: Analysis of neural data: mean squared error and standard errors of neural activity (on the test
set) for different models. Both ef-emb models significantly outperform fa; g-emb is more accurate
than ng-emb.
stochastic gradient would rescale the subsampled terms, negative sampling does not. Thus, negative
sampling corresponds to a biased estimate, which down-weights the contribution of the zeros. See
the Supplement for the mathematical details.
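A rough sketch of this biased subsampling scheme (our construction, for illustration): keep every
gradient term from nonzero entries, draw a handful of "negative" terms from the zeros, and, like
negative sampling, do not rescale the subsampled part.

```python
# Biased zero-subsampling for sparse ef-emb gradients (illustrative only).
import numpy as np

def subsampled_grad_indices(x_flat, n_negatives, rng):
    """Indices contributing to one stochastic gradient: all nonzeros + sampled zeros."""
    nonzero = np.flatnonzero(x_flat)                 # exact part of the gradient
    zeros = np.flatnonzero(x_flat == 0)
    negatives = rng.choice(zeros, size=n_negatives, replace=False)
    return nonzero, negatives

rng = np.random.default_rng(0)
x = rng.poisson(0.05, size=1000)                     # sparse count vector
pos, neg = subsampled_grad_indices(x, n_negatives=50, rng=rng)
# Unbiased SGD would rescale the `neg` sum by len(zeros) / len(neg);
# negative sampling skips this, down-weighting the zeros' contribution.
```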
3 Empirical Study

We study exponential family embedding (ef-emb) models on real-valued and count-valued data, and
in different application domains: computational neuroscience, shopping behavior, and movie ratings.
We present quantitative comparisons to other dimension reduction methods and illustrate how we can
glean qualitative insights from the fitted embeddings.
3.1 Real Valued Data: Neural Data Analysis
Data. We analyze the neural activity of a larval zebrafish, recorded at single cell resolution for
3000 time frames (Ahrens et al., 2013). Through genetic modification, individual neurons express a
calcium indicator when they fire. The resulting calcium imaging data is preprocessed by a nonnegative
matrix factorization to identify neurons, their locations, and the fluorescence activity x_t ∈ R^N of the
individual neurons over time (Friedrich et al., 2015). Using this method, our data contains 10,000
neurons (out of a total of 200,000).

We fit all models on the lagged data Δx_t = x_t − x_{t−1} to filter out correlations based on calcium decay
and preprocessing.² The calcium levels can be measured with great spatial resolution but the temporal
resolution is poor; the neuronal firing rate is much higher than the sampling rate. Hence we ignore
all "temporal structure" in the data and model the simultaneous activity of the neurons. We use the
Gaussian embedding (g-emb) and nonnegative Gaussian embedding (ng-emb) from Section 2.1 to
model the lagged activity of the neurons conditional on the lags of surrounding neurons. We study
context sizes c ∈ {10, 50} and latent dimension K ∈ {10, 100}.
Models. We compare ef-emb to probabilistic factor analysis (fa), fitting K-dimensional factors for
each neuron and K-dimensional factor loadings for each time frame. In fa, each entry of the data
matrix is Gaussian distributed, with mean equal to the inner product of the corresponding factor and
factor loading.
Evaluation. We train each model on a random sample of 90% of the lagged time frames and hold
out 5% each for validation and testing. With the test set, we use two types of evaluation. (1) Leave
one out: For each neuron xi in the test set, we use the measurements of the other neurons to form
predictions. For fa this means the other neurons are used to recover the factor loadings; for ef-emb
this means the other neurons are used to construct the context. (2) Leave 25% out: We randomly split
the neurons into 4 folds. Each neuron is predicted using the three sets of neurons that are out of its
fold. (This is a more difficult task.) Note in ef-emb, the missing data might change the size of the
context of some neurons. See Table 5 in Supplement C for the choice of hyperparameters.
Results. Table 2 reports both types of evaluation. The ef-emb models significantly outperform fa in
terms of mean squared error on the test set. g-emb obtains the best results with 100 components and a
context size of 50. Figure 1 illustrates how to use the learned embeddings to hypothesize connections
between nearby neurons.
² We also analyzed unlagged data but all methods resulted in better reconstruction on the lagged data.
Figure 1: Top view of the zebrafish brain, with blue circles at the location of the individual neurons.
We zoom on 3 neurons and their 50 nearest neighbors (small blue dots), visualizing the "synaptic
weights" learned by a g-emb model (K = 100). The edge color encodes the inner product of the
neural embedding vector and the context vectors, ρ_n^⊤ α_m for each neighbor m. Positive values are
green, negative values are red, and the transparency is proportional to the magnitude. With these
weights we can hypothesize how nearby neurons interact.
(a) Market basket analysis.
Model        | K = 20          | K = 100
p-emb        | −7.497 ± 0.007  | −7.199 ± 0.008
p-emb (dw)   | −7.110 ± 0.007  | −6.950 ± 0.007
ap-emb       | −7.868 ± 0.005  | −8.414 ± 0.003
hpf          | −7.740 ± 0.008  | −7.626 ± 0.007
Poisson pca  | −8.314 ± 0.009  | −11.01 ± 0.01

(b) Movie ratings.
Model        | K = 20          | K = 100
p-emb        | −5.691 ± 0.006  | −5.726 ± 0.005
p-emb (dw)   | −5.790 ± 0.003  | −5.798 ± 0.003
ap-emb       | −5.964 ± 0.003  | −6.118 ± 0.002
hpf          | −5.787 ± 0.006  | −5.859 ± 0.006
Poisson pca  | −5.908 ± 0.006  | −7.50 ± 0.01

Table 3: Comparison of predictive log-likelihood between p-emb, ap-emb, hierarchical Poisson
factorization (hpf) (Gopalan et al., 2015), and Poisson principal component analysis (pca) (Collins
et al., 2001) on held out data. The p-emb model outperforms the matrix factorization models in both
applications. For the shopping data, downweighting the zeros improves the performance of p-emb.
3.2 Count Data: Market Basket Analysis and Movie Ratings
We study the Poisson models Poisson embedding (p-emb) and additive Poisson embedding (ap-emb)
on two applications: shopping and movies.
Market basket data. We analyze the IRI dataset³ (Bronnenberg et al., 2008), which contains the purchases of anonymous households in chain grocery and drug stores. It contains 137,632 trips in 2012. We remove items that appear fewer than 10 times, leaving a dataset with 7,903 items. The
context for each purchase is the other purchases from the same trip.
MovieLens data. We also analyze the MovieLens-100K dataset (Harper and Konstan, 2015), which
contains movie ratings on a scale from 1 to 5. We keep only positive ratings, defined to be ratings of
3 or more (we subtract 2 from all ratings and set the negative ones to 0). The context of each rating
is the other movies rated by the same user. After removing users who rated fewer than 20 movies
and movies that were rated fewer than 50 times, the dataset contains 777 users and 516 movies; the
sparsity is about 5%.
Models. We fit the p-emb and the ap-emb models using number of components K ∈ {20, 100}. For each K we select the Adagrad constant based on best predictive performance on the validation set. (The parameters we used are in Table 5.) In these datasets, the distribution of the context size is heavy tailed. To handle larger context sizes we pick a link function for the ef-emb model which rescales the sum over the context in Equation (2) by the context size (the number of terms in the sum). We also fit a p-emb model that artificially downweights the contribution of the zeros in the objective function by a factor of 0.1, as done by Hu et al. (2008) for matrix factorization. We denote it as "p-emb (dw)".
³ We thank IRI for making the data available. All estimates and analysis in this paper, based on data provided by IRI, are by the authors and not by IRI.
Maruchan chicken ramen     | Yoplait strawberry yogurt           | Mountain Dew soda        | Dean Foods 1% milk
M. creamy chicken ramen    | Yoplait apricot mango yogurt        | Mtn. Dew orange soda     | Dean Foods 2% milk
M. oriental flavor ramen   | Yoplait strawberry orange smoothie  | Mtn. Dew lemon lime soda | Dean Foods whole milk
M. roast chicken ramen     | Yoplait strawberry banana yogurt    | Pepsi classic soda       | Dean Foods chocolate milk

Table 4: Top 3 similar items to given example query words (bold face). The p-emb model successfully captures similarities.
We compare the predictive performance with hpf (Gopalan et al., 2015) and Poisson pca (Collins
et al., 2001). Both hpf and Poisson pca factorize the data into K-dimensional positive vectors of user
preferences, and K-dimensional positive vectors of item attributes. ap-emb and hpf parameterize
the mean additively; p-emb and Poisson pca parameterize it multiplicatively. For the ef-emb models
and Poisson pca, we use stochastic optimization with ℓ₂ regularization. For hpf, we use variational
inference. See Table 5 in Supplement C for details.
Evaluation. For the market basket data we hold out 5% of the trips to form the test set, also removing trips with fewer than two distinct purchased items. In the MovieLens data we hold out 20% of
the ratings and set aside an additional 5% of the non-zero entries from the test for validation. We
report prediction performance based on the normalized log-likelihood on the test set. For p-emb and
ap-emb, we compute the likelihood as the Poisson mean of each nonnegative count (be it a purchase
quantity or a movie rating) divided by the sum of the Poisson means for all items, given the context.
To evaluate hpf and Poisson pca at a given test observation we recover the factor loadings using the
other test entries we condition on, and we use the factor loading to form the prediction.
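A minimal sketch of that normalization (helper name is ours):

```python
import numpy as np

def normalized_log_likelihood(rates, held_out):
    """Log of the Poisson mean of the held-out item, normalized by the sum
    of the Poisson means of all items given the context.

    rates: array of shape (n_items,) of Poisson means; held_out: item index.
    """
    return float(np.log(rates[held_out]) - np.log(rates.sum()))
```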
Predictive performance. Table 3 summarizes the test log-likelihood of the four models, together
with the standard errors across entries in the test set. In both applications the p-emb model outperforms
hpf and Poisson pca. On shopping data p-emb with K = 100 provides the best predictions; on MovieLens p-emb with K = 20 is best. For p-emb on shopping data, downweighting the contribution of the zeros gives more accurate estimates.
Item similarity in the shopping data. Embedding models can capture qualitative aspects of the
data as well. Table 4 shows four example products and their three most similar items, where similarity
is calculated as the cosine distance between embedding vectors. (These vectors are from p-emb with downweighted zeros and K = 100.) For example, the most similar items to a soda are other sodas;
the most similar items to a yogurt are (mostly) other yogurts.
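A minimal sketch of this retrieval (helper name is ours):

```python
import numpy as np

def most_similar(embeddings, query, k=3):
    """Top-k items by cosine similarity of embedding vectors to item `query`."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit[query]
    sims[query] = -np.inf  # exclude the query itself
    return np.argsort(-sims)[:k]
```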
The p-emb model can also identify complementary and substitutable products. To see this, we
compute the inner products of the embedding and the context vectors for all item pairs. A high value
of the inner product indicates that the probability of purchasing one item is increased if the second
item is in the shopping basket (i.e., they are complements). A low value indicates the opposite effect
and the items might be substitutes for each other.
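A minimal sketch of this computation, assuming ρ holds the embedding vectors and α the context vectors as rows of numpy arrays:

```python
def interaction_matrix(rho, alpha):
    """Inner products rho_i . alpha_j for all item pairs (a sketch).

    rho: embedding vectors, shape (n_items, K); alpha: context vectors, same shape.
    Large positive entries suggest complements; large negative entries suggest
    substitutes or items that are rarely purchased together.
    """
    return rho @ alpha.T
```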
We find that items that tend to be purchased together have high value of the inner product (e.g., potato
chips and beer, potato chips and frozen pizza, or two different types of soda), while items that are
substitutes have negative value (e.g., two different brands of pasta sauce, similar snacks, or soups
from different brands). Other items with negative value of the inner product are not substitutes, but
they are rarely purchased together (e.g., toast crunch and laundry detergent, milk and a toothbrush).
Supplement D gives examples of substitutes and complements.
Topics in the movie embeddings. The embeddings from MovieLens data identify thematically
similar movies. For each latent dimension k, we sort the context vectors by the magnitude of the kth
component. This yields a ranking of movies for each component. In Supplement E we show two
example rankings. (These are from a p-emb model with K = 50.) The first one contains children's
movies; the second contains science-fiction/action movies.
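A one-line sketch of this ranking (helper name is ours):

```python
import numpy as np

def rank_by_component(alpha, k, top=10):
    """Rank items by the magnitude of component k of their context vectors."""
    return np.argsort(-np.abs(alpha[:, k]))[:top]
```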
Acknowledgments
This work is supported by the EU H2020 programme (Marie Skłodowska-Curie grant agreement 706760), NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, DARPA N66001-15-C-4032, Adobe, the John Templeton Foundation, and the Sloan Foundation.
References
Ahrens, M. B., Orger, M. B., Robson, D. N., Li, J. M., and Keller, P. J. (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413–420.
Arnold, B. C., Castillo, E., Sarabia, J. M., et al. (2001). Conditionally specified distributions: an introduction (with comments and a rejoinder by the authors). Statistical Science, 16(3):249–274.
Bengio, Y., Schwenk, H., Senécal, J.-S., Morin, F., and Gauvain, J.-L. (2006). Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. Springer.
Bronnenberg, B. J., Kruger, M. W., and Mela, C. F. (2008). Database paper: The IRI marketing data set. Marketing Science, 27(4):745–748.
Brown, L. D. (1986). Fundamentals of statistical exponential families with applications in statistical decision theory. Lecture Notes-Monograph Series, 9:i–279.
Collins, M., Dasgupta, S., and Schapire, R. E. (2001). A generalization of principal components analysis to the exponential family. In Neural Information Processing Systems, pages 617–624.
Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.
Friedrich, J., Soudry, D., Paninski, L., Mu, Y., Freeman, J., and Ahrens, M. (2015). Fast constrained non-negative matrix factorization for whole-brain calcium imaging data. In NIPS workshop on Neural Systems.
Gopalan, P., Hofman, J., and Blei, D. M. (2015). Scalable recommendation with hierarchical Poisson factorization. In Uncertainty in Artificial Intelligence.
Gutmann, M. and Hyvärinen, A. (2010). Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Journal of Machine Learning Research.
Harper, F. M. and Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19.
Harris, Z. S. (1954). Distributional structure. Word, 10(2-3):146–162.
Hu, Y., Koren, Y., and Volinsky, C. (2008). Collaborative filtering for implicit feedback datasets. Data Mining.
Levy, O. and Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Neural Information Processing Systems, pages 2177–2185.
McCullagh, P. and Nelder, J. A. (1989). Generalized linear models, volume 37. CRC Press.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. ICLR Workshop Proceedings. arXiv:1301.3781.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Neural Information Processing Systems, pages 3111–3119.
Mikolov, T., Yih, W.-T., and Zweig, G. (2013c). Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746–751.
Mnih, A. and Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation. In Neural Information Processing Systems, pages 2265–2273.
Mnih, A. and Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. In International Conference on Machine Learning, pages 1751–1758.
Neal, R. M. (1990). Learning stochastic feedforward networks. Department of Computer Science, University of Toronto.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Conference on Empirical Methods on Natural Language Processing, volume 14, pages 1532–1543.
Ranganath, R., Tang, L., Charlin, L., and Blei, D. M. (2015). Deep exponential families. Artificial Intelligence and Statistics.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323:533–536.
Taddy, M. et al. (2015). Distributed multinomial regression. The Annals of Applied Statistics, 9(3):1394–1414.
Vilnis, L. and McCallum, A. (2015). Word representations via Gaussian embedding. In International Conference on Learning Representations.
On Regularizing Rademacher Observation Losses
Richard Nock
Data61, The Australian National University & The University of Sydney
[email protected]
Abstract
It has recently been shown that supervised learning linear classifiers with two of
the most popular losses, the logistic and square loss, is equivalent to optimizing an
equivalent loss over sufficient statistics about the class: Rademacher observations
(rados). It has also been shown that learning over rados brings solutions to two
prominent problems for which the state of the art of learning from examples can be
comparatively inferior and in fact less convenient: (i) protecting and learning from
private examples, (ii) learning from distributed datasets without entity resolution.
Bis repetita placent: the two proofs of equivalence are different and rely on specific
properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can
be fit into the same theory for the equivalence between example and rado losses. As
a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which
regularizing the loss over examples is equivalent to regularizing the rados (i.e. the
data) in the equivalent rado loss, in such a way that an efficient algorithm for one
regularized rado loss may be as efficient when changing the regularizer. This is our
third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boost with any of the ridge, lasso, SLOPE, `? , or elastic
net regularizer, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples
we obtain the first efficient proxy to the minimization of the regularized logistic
loss over examples using such a wide spectrum of regularizers. Experiments with a
readily available code display that regularization significantly improves rado-based
learning and compares favourably with example-based learning.
1 Introduction

What kind of data should we use to train a supervised learner? A recent result has shown that minimising the popular logistic loss over examples with linear classifiers (in supervised learning) is equivalent to the minimisation of the exponential loss over sufficient statistics about the class known as Rademacher observations (rados, [Nock et al., 2015]), for the same classifier. In short, we fit a classifier over data that is different from examples, and the same classifier generalizes well to new observations. It has been shown that rados offer solutions for two problems for which the state of the art involving examples can be comparatively significantly inferior:

• protection of the examples' privacy from various algebraic, geometric, statistical and computational standpoints, and learning from private data [Nock et al., 2015];
• learning from a large number of distributed datasets without having to perform entity resolution between datasets [Patrini et al., 2016].
Quite remarkably, the training time of the algorithms involved can be smaller than it would be on
examples, by orders of magnitude [Patrini et al., 2016]. Two key problems remain however: the
accuracy of learning from rados can compete experimentally with that of learning from examples, yet
there is a gap to reduce for rados to be not just a good material to learn from in a privacy/distributed
setting, but also a serious alternative to learning from examples at large, yielding new avenues to
supervised learning. Second, theoretically speaking, it is now known that two widely popular losses
over examples admit an equivalent loss in the rado world: the logistic loss and the square loss [Nock
et al., 2015, Patrini et al., 2016]. This inevitably suggests that this property may hold for more losses,
yet barely anything displays patterns of generalizability in the existing proofs.
Our contributions: in this paper, we provide answers to these two questions, with three main
contributions. Our first contribution is to show that this generalization indeed holds: other example
losses admit equivalent losses in the rado world, meaning in particular that their minimiser classifier
is the same, regardless of the dataset of examples. The technique we use exploits a two-player zero-sum game representation of convex losses, which has been very useful to analyse boosting algorithms [Schapire, 2003, Telgarsky, 2012], with one key difference: payoffs are non-linear convex, possibly non-differentiable. These also resemble the entropic dual losses [Reid et al., 2015], with the difference
that we do not enforce conjugacy over the simplex. The conditions of the game are slightly different
for examples and rados. We provide necessary and sufficient conditions for the resulting losses over
examples and rados to be equivalent. Informally, equivalence happens iff the convex functions of the
games satisfy a symmetry relationship and the weights satisfy a linear system of equations. Some
popular losses fit in the equivalence [Nair and Hinton, 2010, Gentile and Warmuth, 1998, Nock and
Nielsen, 2008, Telgarsky, 2012, Vapnik, 1998, van Rooyen et al., 2015].
Our second contribution came unexpectedly through this equivalence. Regularizing a loss is standard in machine learning [Bach et al., 2011]. We show a sufficient condition for the equivalence under which regularizing the example loss is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, namely making a Minkowski sum of the rado set with a classifier-based set. This property is independent of the regularizer, and incidentally happens to hold for all our cases of equivalence (cf. first contribution). A regularizer added to a loss over examples thus transfers to data in the rado world, in essentially the same way for all regularizers, and if one can solve the non-trivial computational and optimization problem that this data modification poses for one regularized rado loss, then, basically, "a good optimization algorithm for this regularized rado loss may fit other regularizers as well".
Our third contribution exemplifies this. We propose an iterative boosting algorithm, Ω-R.AdaBoost, that learns a classifier from rados using the exponential regularized rado loss, with the regularization choice belonging to the ridge, lasso, ℓ∞, or the recently coined SLOPE [Bogdan et al., 2015]. Since rado regularization would theoretically require modifying the data at each iteration, such schemes are computationally non-trivial. We show that this modification can in fact be bypassed for the exponential rado loss, and the algorithm, Ω-R.AdaBoost, is as fast as AdaBoost. Ω-R.AdaBoost has, however, a key advantage over AdaBoost that to our knowledge is new in the boosting world: for any of these four regularizers, Ω-R.AdaBoost is a boosting algorithm. Thus, because of the equivalence between the minimization of the logistic loss over examples and the minimization of the exponential rado loss, Ω-R.AdaBoost is in fact an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers, and by extension, linear combinations of them (e.g., elastic net regularization [Zou and Hastie, 2005]). We are not aware of any formal boosting algorithm for the regularized logistic loss with such a wide spectrum of regularizers. Extensive experiments validate this property: Ω-R.AdaBoost is all the better vs AdaBoost (unregularized or regularized) as the domain gets larger, and is able to rapidly learn both accurate and sparse classifiers, making it an especially good contender for supervised learning at large on big domains.
The rest of this paper is as follows. Sections 2, 3 and 4 respectively present the equivalence between example and rado losses, its extension to regularized learning, and Ω-R.AdaBoost. Sections 5 and 6 respectively present experiments and conclude. In order not to overload the paper's body, a Supplementary Material (SM) contains the proofs and additional theoretical and experimental results.
2 Games and equivalent example/rado losses

To avoid notational load, we briefly present our learning setting to point the key quantity in our formulation of the general two-player game. Let [m] ≐ {1, 2, ..., m} and Σ_m ≐ {−1, 1}^m, for m > 0. The classical (batch) supervised learner is example-based: it is given a set of examples S = {(x_i, y_i), i ∈ [m]} where x_i ∈ R^d, y_i ∈ Σ_1, ∀i ∈ [m]. It returns a classifier h : R^d → R from a predefined set H. Let z_i(h) ≐ y_i h(x_i) and abbreviate z(h) by z for short. The learner fits h to the minimization of a loss. Table 1, column ℓ_e, presents some losses that can be used: we remark that h appears only through z, so let us consider in this section that the learner rather fits vector z ∈ R^m.
We can now define our two-player game setting. Let φ_e : R → R and φ_r : R → R be two convex and lower-semicontinuous generators. We define functions L_e : R^m × R^m → R and L_r : R^{2^m} × R^m → R:

    L_e(p, z) ≐ Σ_{i∈[m]} p_i z_i + μ_e Σ_{i∈[m]} φ_e(p_i) ,                       (1)

    L_r(q, z) ≐ Σ_{I⊆[m]} q_I Σ_{i∈I} z_i + μ_r Σ_{I⊆[m]} φ_r(q_I) ,               (2)
where μ_e, μ_r > 0 do not depend on z. For the notation to be meaningful, the coordinates in q are assumed (wlog) to be in bijection with 2^[m]. The dependence of both problems on their respective generators is implicit and shall be clear from context. The adversary's goal is to fit

    p*(z) ≐ argmin_{p ∈ R^m} L_e(p, z) ,                                           (3)
    q*(z) ≐ argmin_{q ∈ H^{2^m}} L_r(q, z) ,                                       (4)

with H^{2^m} ≐ {q ∈ R^{2^m} : 1^⊤ q = 1}, so as to attain

    L_e(z) ≐ L_e(p*(z), z) ,                                                       (5)
    L_r(z) ≐ L_r(q*(z), z) ,                                                       (6)

and let ∂L_e(z) and ∂L_r(z) denote their subdifferentials. We view the learner's task as the problem of maximising the corresponding problems in eq. (5) (with examples; this is already sketched above) or (6) (with what we shall call Rademacher observations, or rados), or equivalently minimising the negative of the corresponding function, and then resorting to a loss function. The question of when these two problems are equivalent from the learner's standpoint motivates the following definition.
Definition 1 Two generators φ_e, φ_r are said proportionate iff ∀m > 0, there exists (μ_e, μ_r) such that

    L_e(z) = L_r(z) + b ,  ∀z ∈ R^m .                                              (7)

(b does not depend on z.) ∀m ∈ N*, let

    G_m ≐ [ 0_{2^{m−1}}^⊤   1_{2^{m−1}}^⊤ ]
          [ G_{m−1}         G_{m−1}       ]    (∈ {0, 1}^{m × 2^m})                (8)

if m > 1, and G_1 ≐ [0 1] otherwise (notation z_d indicates a vector in R^d).
Theorem 2 φ_e, φ_r are proportionate iff the optima p*(z) and q*(z) to eqs (3) and (4) satisfy:

    p*(z) ∈ ∂L_r(z) ,                                                              (9)
    G_m q*(z) ∈ ∂L_e(z) .                                                          (10)

If φ_e, φ_r are differentiable and strictly convex, they are proportionate iff p*(z) = G_m q*(z).

We can alleviate the fact that convexity is strict, which results in a set-valued identity for φ_e, φ_r to be proportionate. This gives a necessary and sufficient condition for two generators to be proportionate. It does not say how to construct one from the other, if possible. We now show that it is indeed possible and prune the search space: if φ_e is proportionate to some φ_r, then it has to be a "symmetrized" version of φ_r, according to the following definition.

Definition 3 Let φ_r be such that dom(φ_r) ⊇ (0, 1). φ_{s(r)}(z) ≐ φ_r(z) + φ_r(1 − z) is the symmetrisation of φ_r.

Lemma 4 If φ_e and φ_r are proportionate, then φ_e(z) = (μ_r/μ_e) · φ_{s(r)}(z) + (b/μ_e) (b is in (7)).
#   | ℓ_e(z, μ_e)                   | ℓ_r(z, μ_r)                         | φ_r(z)              | a_e   | μ_e and μ_r
I   | Σ_{i∈[m]} log(1 + exp(z_i^e)) | Σ_{I⊆[m]} exp(z_I^r)                | z log z − z         | μ_e   | ∀μ_e = μ_r
II  | Σ_{i∈[m]} (1 + z_i^e)²        | −(E_I[−z_I^r] − μ_r · V_I[−z_I^r])  | (1/2) · z²          | μ_e/4 | ∀μ_e = μ_r
III | Σ_{i∈[m]} max{0, z_i^e}       | max{0, max_{I⊆[m]} z_I^r}           | ι_[0,1](z)          | μ_e   | ∀μ_e, μ_r
IV  | Σ_{i∈[m]} z_i^e               | E_I[z_I^r]                          | ι_[1/2^m, 1/2](z)   | μ_e   | ∀μ_e, μ_r

Table 1: Examples of equivalent example and rado losses. Names of the rado-losses ℓ_r(z, μ_r) are respectively the Exponential (I), Mean-variance (II), ReLU (III) and Unhinged (IV) rado loss. We use shorthands z_i^e ≐ −(1/μ_e) · z_i and z_I^r ≐ −(1/μ_r) · Σ_{i∈I} z_i. Parameter a_e appears in eq. (14). Column "μ_e and μ_r" gives the constraints for the equivalence to hold. E_I and V_I are the expectation and variance over uniform sampling of sets I ⊆ [m] (see text for details).
To summarize, φ_e and φ_r are proportionate iff (i) they meet the structural property that φ_e is (proportional to) the symmetrized version of φ_r (according to Definition 3), and (ii) the optimal solutions p*(z) and q*(z) to problems (1) and (2) satisfy the conditions of Theorem 2. Depending on the direction, we have two cases to craft proportionate generators. First, if we have φ_r, then necessarily φ_e ∝ φ_{s(r)}, so we merely have to check Theorem 2. Second, if we have φ_e, then it matches Definition 3¹. In this case, we have to find φ_r = f + g where g(z) = −g(1 − z) and φ_e(z) = f(z) + f(1 − z).

We now come back to L_e(z), L_r(z) (Definition 1), and make the connection with example and rado losses. In the next definition, an e-loss ℓ_e(z) is a function defined over the coordinates of z, and an r-loss ℓ_r(z) is a function defined over the subsets of sums of coordinates. Functions can depend on other parameters as well.

Definition 5 Suppose e-loss ℓ_e(z) and r-loss ℓ_r(z) are such that there exist (i) f_e : R → R and f_r : R → R, both strictly increasing and such that ∀z ∈ R^m,

    −L_e(z) = f_e(ℓ_e(z)) ,                                                        (11)
    −L_r(z) = f_r(ℓ_r(z)) ,                                                        (12)

where L_e(z) and L_r(z) are defined via two proportionate generators φ_e and φ_r (Definition 1). Then the couple (ℓ_e, ℓ_r) is called a couple of equivalent example-rado losses.

Following is the main Theorem of this Section, which summarizes all the cases of equivalence between example and rado losses, and shows that the theory developed on example/rado losses with proportionate generators encompasses the specific proofs and cases already known [Nock et al., 2015, Patrini et al., 2016]. Table 1 also displays generator φ_r.

Theorem 6 In each row of Table 1, ℓ_e(z, μ_e) and ℓ_r(z, μ_r) are equivalent for μ_e and μ_r as indicated.

The proof (SM, Subsection 2.3) details for each case the proportionate generators φ_e and φ_r.

3 Learning with (rado) regularized losses
We now detail further the learning setting. In the preceding Section, we defined z_i(h) ≐ y_i h(x_i), which we plug into the losses of Table 1 to obtain the corresponding example and rado losses. Losses simplify conveniently when H consists of linear classifiers, h(x) ≐ θ^⊤ x for some θ ∈ Θ ⊆ R^d. In this case, the example loss can be described using edge vectors S_e ≐ {y_i · x_i, i = 1, 2, ..., m} since z_i = θ^⊤(y_i · x_i), and the rado loss can be described using Rademacher observations [Nock et al., 2015], since Σ_{i∈I} z_i = θ^⊤ π_σ for σ_i = y_i iff i ∈ I (and σ_i = −y_i otherwise) and π_σ ≐ (1/2) · Σ_i (σ_i + y_i) · x_i. Let us define S_r^* ≐ {π_σ, σ ∈ Σ_m}, the set of all Rademacher observations.
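For concreteness, here is a minimal numpy sketch of this construction; the sampling helper and its uniform choice of sign vectors are our illustration, not a prescription from the text:

```python
import numpy as np

def rado(x, y, sigma):
    """Rademacher observation pi_sigma = (1/2) * sum_i (sigma_i + y_i) * x_i.

    x: examples, shape (m, d); y: labels in {-1, +1}, shape (m,);
    sigma: signs in {-1, +1}, shape (m,). Only examples with sigma_i == y_i
    contribute, so pi_sigma sums the edge vectors y_i * x_i over a subset I.
    """
    return 0.5 * ((sigma + y)[:, None] * x).sum(axis=0)

def sample_rados(x, y, n, rng=np.random.default_rng(0)):
    """Sample n rados with uniformly random sign vectors."""
    m = len(y)
    return np.stack([rado(x, y, rng.choice([-1, 1], size=m)) for _ in range(n)])
```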
We rewrite any couple of equivalent example and rado losses as ℓ_e(S_e, θ) and ℓ_r(S_r^*, θ) respectively², omitting parameters μ_e and μ_r, assumed to be fixed beforehand for the equivalence to hold (see Table 1). Let us regularize the example loss, so that the learner's goal is to minimize

    ℓ_e(S_e, θ, Ω) ≐ ℓ_e(S_e, θ) + Ω(θ) ,                                          (13)

¹ Alternatively, −φ_e is permissible [Kearns and Mansour, 1999].
² To prevent notational overload, we blend notions of (pointwise) loss and (samplewise) risk, as just "losses".
Algorithm 1 Ω-R.AdaBoost
Input: set of rados S_r ≐ {π_1, π_2, ..., π_n}; T ∈ N*; parameters γ ∈ (0, 1), ω ∈ R₊;
Step 1: let θ_0 ← 0, w_0 ← (1/n)·1;
Step 2: for t = 1, 2, ..., T
  Step 2.1: call the weak learner: (ι(t), r_t) ← Ω-WL(S_r, w_t, γ, ω, θ_{t−1});
  Step 2.2: compute update parameters α_{ι(t)} and δ_t (here, π_{*k} ≐ max_j |π_{jk}|):

      α_{ι(t)} ← (1/(2π_{*ι(t)})) · log((1 + r_t)/(1 − r_t))  and
      δ_t ← ω · (Ω(θ_t) − Ω(θ_{t−1})) ;                                            (16)

  Step 2.3: update and normalize weights: for j = 1, 2, ..., n,

      w_{tj} ← w_{(t−1)j} · exp(−α_{ι(t)} π_{jι(t)} + δ_t)/Z_t ;                    (17)

Return θ_T;
with Ω a regularizer [Bach et al., 2011]. The following shows that when f_e in eq. (11) is linear, there is a rado-loss equivalent to this regularized loss, regardless of Ω.

Theorem 7 Suppose H contains linear classifiers. Let (ℓ_e(S_e, θ), ℓ_r(S_r^*, θ)) be any couple of equivalent example-rado losses such that f_e in eq. (11) is linear:

    f_e(z) = a_e · z + b_e ,                                                       (14)

for some a_e > 0, b_e ∈ R. Then for any regularizer Ω(.) (assuming wlog Ω(0) = 0), the regularized example loss ℓ_e(S_e, θ, Ω) is equivalent to the rado loss ℓ_r(S_r^{θ,Ω,*}, θ) computed over regularized rados:

    S_r^{θ,Ω,*} ≐ S_r^* ⊕ {−Ω̃(θ) · θ} .                                            (15)

Here, ⊕ is Minkowski sum and Ω̃(θ) ≐ a_e · Ω(θ)/‖θ‖₂² if θ ≠ 0 (and 0 otherwise).
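Theorem 7 suggests a very simple data transformation; a minimal sketch, assuming the rados are stored as rows of a numpy array and that Ω(θ) has already been evaluated (helper names are ours):

```python
import numpy as np

def regularized_rados(rados, theta, omega_of_theta, a_e=1.0):
    """Shift every rado by -Omega~(theta) * theta, as in eq. (15).

    rados: array (n, d); theta: classifier (d,); omega_of_theta: Omega(theta);
    a_e: the constant from eq. (14).
    """
    norm2 = float(theta @ theta)
    if norm2 == 0.0:
        return rados  # Omega~(0) = 0 by convention
    return rados - (a_e * omega_of_theta / norm2) * theta
```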
Theorem 7 applies to all rado losses (I–IV) in Table 1. The effect of regularization on rados is intuitive from the margin standpoint: assume that a "good" classifier θ is one that ensures lowerbounded inner products θ^⊤ z ≥ τ for some margin threshold τ. Then any good classifier on a regularized rado π_σ shall actually meet, over examples, Σ_{i : y_i = σ_i} θ^⊤(y_i · x_i) ≥ τ + a_e · Ω(θ). This inequality ties an "accuracy" of θ (edges, left-hand side) and its sparsity (right-hand side). Clearly, Theorem 7 has an unfamiliar shape since regularisation modifies data in the rado world: a different θ, or a different Ω, yields a different S_r^{θ,Ω,*}, and therefore it may seem very tricky to minimize such a regularized loss. Even more, iterative algorithms like boosting algorithms look at first glance a poor choice, since any update on θ implies an update on the rados as well. What we show in the following Section is essentially the opposite for the exponential rado loss, and a generalization of the RadoBoost algorithm of Nock et al. [2015], which does not modify rados, is a formal boosting algorithm for a broad set of regularizers. Also, remarkably, only the high-level code of the weak learner depends on the regularizer; that of the strong learner is not affected.
4 Boosting with (rado) regularized losses

Ω-R.AdaBoost presents our approach to learning with rados regularized with regularizer Ω, to minimise the loss ℓ_r^exp(S_r, θ, Ω) in eq. (45). Classifier θ_t is defined as θ_t ≐ Σ_{t'=1}^t α_{ι(t')} · 1_{ι(t')}, where 1_k is the k-th canonical basis vector. The expected edge r_t used to compute α_t in eq. (16) is based on the following basis assignation:

    r_{ι(t)} ← (1/π_{*ι(t)}) · Σ_{j=1}^n w_{tj} π_{jι(t)}    (∈ [−1, 1]) .          (19)

The computation of r_t may be tweaked by the weak learner, as displayed in Algorithm 2 (Ω-WL).
Algorithm 2 Ω-WL, for Ω ∈ {‖.‖₁, ‖.‖²_Γ, ‖.‖∞, ‖.‖Φ}
Input: set of rados S_r ≐ {π_1, π_2, ..., π_n}; weights w ∈ △_n; parameters γ ∈ (0, 1), ω ∈ R₊; classifier θ ∈ R^d;
Step 1: pick weak feature ι* ∈ [d];
  Optional: use preference order: ι ⪰ ι' ⟺ |r_ι| − δ_ι ≥ |r_ι'| − δ_ι'
  // δ_ι ≐ ω · (Ω(θ + α_ι · 1_ι) − Ω(θ)); r_ι is given in (19) and α_ι in (16)
Step 2: if Ω = ‖.‖²_Γ then

      r_{ι*} ← r̃_{ι*} if r̃_{ι*} ∈ [−γ, γ], and r_{ι*} ← sign(r̃_{ι*}) · γ otherwise;   (18)

  else r_{ι*} ← r̃_{ι*};
Return (ι*, r_{ι*});
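Putting the two boxes together, the following is a compact sketch of the main loop; the weak learner is simplified (it greedily maximises |r_ι|, ignoring the optional preference order and the clipping used for the ridge case), and all helper names are ours:

```python
import numpy as np

def omega_r_adaboost(rados, T, omega, w_cost):
    """A compact sketch of Algorithm 1 with a simplified weak learner.

    rados: array (n, d); omega: callable theta -> Omega(theta);
    w_cost: the omega parameter weighting the regularizer in delta_t.
    """
    n, d = rados.shape
    theta = np.zeros(d)
    w = np.full(n, 1.0 / n)
    # pi_{*k} = max_j |pi_{jk}|; the floor guards against all-zero features.
    pi_star = np.maximum(np.abs(rados).max(axis=0), 1e-12)
    for _ in range(T):
        edges = (w @ rados) / pi_star                      # eq. (19), in [-1, 1]
        k = int(np.argmax(np.abs(edges)))                  # Step 2.1, simplified
        r = float(np.clip(edges[k], -1 + 1e-12, 1 - 1e-12))
        alpha = np.log((1 + r) / (1 - r)) / (2 * pi_star[k])   # eq. (16)
        new_theta = theta.copy()
        new_theta[k] += alpha
        delta = w_cost * (omega(new_theta) - omega(theta))     # eq. (16)
        theta = new_theta
        # eq. (17); delta is constant in j, so it cancels in the normalization,
        # but we keep it to mirror the update.
        w = w * np.exp(-alpha * rados[:, k] + delta)
        w /= w.sum()
    return theta
```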
We investigate four choices for Ω. For each of them, we prove the boosting ability of Ω-R.AdaBoost (Γ is symmetric positive definite, S_d is the symmetric group of order d, |θ| is the vector whose coordinates are the absolute values of the coordinates of θ):

    Ω(θ) ≐  ‖θ‖₁ ≐ |θ|^⊤ 1                        (Lasso)
            ‖θ‖²_Γ ≐ θ^⊤ Γ θ                      (Ridge)
            ‖θ‖∞ ≐ max_k |θ_k|                    (ℓ∞)                             (20)
            ‖θ‖Φ ≐ max_{M∈S_d} (M|θ|)^⊤ η         (SLOPE)
[Bach et al., 2011, Bogdan et al., 2015, Duchi and Singer, 2009, Su and Candès, 2015]. The coordinates of η in SLOPE are η_k ≐ Φ^{−1}(1 − kq/(2d)), where Φ^{−1}(.) is the quantile of the standard normal distribution and q ∈ (0, 1); thus, the largest coordinates (in absolute value) of θ are more penalized.
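A sketch of these four choices, with the SLOPE weights η_k built from the quantile formula just given (assuming scipy for the normal quantile; function and argument names are ours):

```python
import numpy as np
from scipy.stats import norm

def omega(theta, kind, Gamma=None, q=0.1):
    """Evaluate the regularizers of eq. (20): 'lasso', 'ridge', 'linf', 'slope'."""
    a = np.abs(theta)
    if kind == "lasso":
        return float(a.sum())
    if kind == "ridge":
        return float(theta @ Gamma @ theta)
    if kind == "linf":
        return float(a.max())
    if kind == "slope":
        d = len(theta)
        eta = norm.ppf(1 - np.arange(1, d + 1) * q / (2 * d))
        # max over permutations pairs sorted |theta| with sorted eta
        # (rearrangement inequality), so sort |theta| in decreasing order.
        return float(np.sort(a)[::-1] @ eta)
    raise ValueError(kind)
```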
We now establish the boosting ability of Ω-R.AdaBoost. We give no direction for Step 1 in Ω-WL, which is consistent with the definition of a weak learner in boosting theory: all we require from the weak learner is |r.| no smaller than some weak learning threshold γ_WL > 0.

Definition 8 Fix any constant γ_WL ∈ (0, 1). Ω-WL is said to be a γ_WL-Weak Learner iff the feature ι(t) it picks at iteration t satisfies |r_{ι(t)}| ≥ γ_WL, for any t = 1, 2, ..., T.

We also provide an optional step for the weak learner in Ω-WL, which we exploit in the experiments; it gives a total preference order on features to further optimise Ω-R.AdaBoost.
Theorem 9 (boosting with ridge). Take Ω(.) = ‖.‖²_Γ. Fix any 0 < a < 1/5, and suppose that ω and the number of iterations T of Ω-R.AdaBoost are chosen so that

    ω < (2a min_k max_j π²_{jk})/(T λ_Γ) ,                                         (21)

where λ_Γ > 0 is the largest eigenvalue of Γ. Then there exists some γ > 0 (depending on a, and given to Ω-WL) such that for any fixed 0 < γ_WL < γ, if Ω-WL is a γ_WL-Weak Learner, then Ω-R.AdaBoost returns at the end of the T boosting iterations a classifier θ_T which meets:

    ℓ_r^exp(S_r, θ_T, ‖.‖²_Γ) ≤ exp(−aγ²_WL T/2) .                                 (22)

Furthermore, if we fix a = 1/7, then we can fix γ = 0.98, and if a = 1/10, then we can fix γ = 0.999.

Two remarks are in order. First, the cases a = 1/7, 1/10 show that Ω-WL can still obtain large edges in eq. (19), so even a "strong" weak learner might fit in for Ω-WL, without clamping edges. Second, the right-hand side of ineq. (21) may be very large if we consider that min_k max_j π²_{jk} may be proportional to m². So the constraint on ω is in fact loose.
Theorem 10 (boosting with lasso or ℓ∞). Take Ω(.) ∈ {‖.‖₁, ‖.‖∞}. Suppose Ω-WL is a γ_WL-Weak Learner for some γ_WL > 0. Suppose ∃ 0 < a < 3/11 s.t. ω satisfies:

    ω = aγ_WL min_k max_j |π_{jk}| .                                               (23)

Then Ω-R.AdaBoost returns at the end of the T boosting iterations a classifier θ_T which meets:

    ℓ_r^exp(S_r, θ_T, Ω) ≤ exp(−T̃ γ²_WL/2) ,                                       (24)

where T̃ = aγ_WL T if Ω = ‖.‖₁, and T̃ = (T − T_∞) + aγ_WL · T_∞ if Ω = ‖.‖∞; T_∞ is the number of iterations where the feature computing the ℓ∞ norm was updated³.
We finally investigate the SLOPE choice. The Theorem is proven for ω = 1 in Ω-R.AdaBoost, for two reasons: it matches the original definition [Bogdan et al., 2015] and furthermore it unveils an interesting connection between boosting and SLOPE properties.

Theorem 11 (boosting with SLOPE). Take Ω(.) ≐ ‖.‖Φ. Let a ≐ min{3γ_WL/11, Φ^{−1}(1 − q/(2d))/min_k max_j |π_{jk}|}. Suppose wlog |θ_{Tk}| ≥ |θ_{T(k+1)}|, ∀k, and fix ω = 1. Suppose (i) Ω-WL is a γ_WL-Weak Learner for some γ_WL > 0, and (ii) the q-value is chosen to meet:

    q ≥ 2 · max_k { 1 − Φ( (3γ_WL/11) · (k/d) · max_j |π_{jk}| ) } .

Then classifier θ_T returned by Ω-R.AdaBoost at the end of the T boosting iterations satisfies:

    ℓ_r^exp(S_r, θ_T, ‖.‖Φ) ≤ exp(−aγ²_WL T/2) .                                   (25)
Constraint (ii) on q is interesting in the light of the properties of SLOPE [Su and Candès, 2015]. Modulo some assumptions, SLOPE yields a control of the false discovery rate (FDR), i.e., of negligible coefficients in the "true" linear model θ* that are found significant in the learned θ. Constraint (ii) links the "small" achievable FDR (upperbounded by q) to the "boostability" of the data: the fact that each feature k can be chosen by the weak learner for a "large" γ_WL, or has max_j |π_{jk}| large, precisely flags potential significant features, thus reducing the risk of sparsity errors, and allowing small q, which is constraint (ii). Using the second-order approximation of normal quantiles [Su and Candès, 2015], a sufficient condition for (ii) is that, for some K > 0,

    γ_WL min_k max_j |π_{jk}| ≥ K · √(log d + log q⁻¹) ;                           (26)

but min_k max_j |π_{jk}| is proportional to m, so ineq. (26), and thus (ii), may hold even for small samples and q-values. An additional Theorem, deferred to the SM for space considerations, shows that for any applicable choice of regularization (eq. (20)), the regularized log-loss of θ_T over examples enjoys with high probability a monotonically decreasing upperbound with T of the form ℓ_e^log(S_e, θ, Ω) ≤ log 2 − κ·T + τ(m), with τ(m) → 0 when m → ∞ (and τ does not depend on T), and κ > 0 does not depend on T. Hence, Ω-R.AdaBoost is an efficient proxy to boost the regularized log-loss over examples, using whichever of the ridge, lasso, ℓ∞ or SLOPE regularization (establishing the first boosting algorithm for this choice), or linear combinations of the choices, e.g. for elastic nets. If we were to compare Theorems 9–11 (eqs (22, 24, 25)), then the convergence looks best for ridge (the unsigned exponent is Õ(γ²_WL)) while it looks slightly worse for ℓ∞ and SLOPE (the unsigned exponent is now Õ(γ³_WL)), the lasso being in between.
5 Experiments

We have implemented Ω-WL⁴ using the suggested preference order to retrieve the topmost feature. Hence, the weak learner returns the feature maximising |r_ι| − δ_ι. The rationale for this comes from the proofs of Theorems 9–11, showing that Π_t exp(−(r²_{ι(t)}/2 − δ_{ι(t)})) is an upperbound on the exponential regularized rado-loss. We do not clamp the weak learner for Ω(.) = ‖.‖²_Γ, so the weak learner is restricted to Step 1 in Ω-WL⁵.

The objective of these experiments is to evaluate Ω-R.AdaBoost as a contender for supervised learning per se. We compared Ω-R.AdaBoost to AdaBoost/ℓ₁-regularized-AdaBoost [Schapire and Singer, 1999, Xi et al., 2009]. All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept. Notice therefore that rado-based classifiers are evaluated on the training set which computes the rados.

³ If several features match this criterion, T_∞ is the total number of iterations for all these features.
⁴ Code available at: http://users.cecs.anu.edu.au/~rnock/
⁵ The values for ω that we test, in {10⁻ᵘ, u ∈ {0, 1, 2, 3, 4, 5}}, are small with respect to the upperbound in ineq. (21) given the number of boosting steps (T = 1000), and would yield on most domains a maximal γ ≈ 1.
To obtain very sparse solutions for regularized-AdaBoost, we pick its regularization parameter ω (cf. [Xi et al., 2009]) in {10⁻⁴, 1, 10⁴}. The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain "Give me some credit"), with up to d = 500+ features and m = 100,000+ examples. Two tables in the SM (Tables 1 and 2 in Section 3) report respectively the test errors and sparsity of classifiers, whose summary is given here in Table 2. The experimental setup is a ten-fold stratified cross-validation for all algorithms and each domain. AdaBoost/regularized-AdaBoost is trained using the complete training fold. When the domain size m ≤ 40000, the number of rados n used for Ω-R.AdaBoost is a random subset of rados of size equal to that of the training fold. When the domain size exceeds 40000, a random set of n = 10000 rados is computed from the training fold. Thus, (i) there is no optimisation of the examples chosen to compute rados, (ii) we always keep a very small number of rados compared to the maximum available, and (iii) when the domain size gets large, we keep a comparatively tiny number of rados. Hence, the performances of Ω-R.AdaBoost do not stem from any optimization in the choice or size of the rado sample.
          | Ada | ∅  | ‖.‖²_Id | ‖.‖₁ | ‖.‖∞ | ‖.‖Φ
Ada       |  -  | 11 |   10    |  10  |   8  |   9
∅         |  9  |  - |    3    |   3  |   2  |   1
‖.‖²_Id   | 10  | 17 |    -    |  11  |   9  |   7
‖.‖₁      | 10  | 17 |    7    |   -  |   7  |   4
‖.‖∞      | 11  | 18 |    9    |   9  |   -  |   8
‖.‖Φ      | 10  | 19 |   10    |  10  |  11  |   -

Table 2: Number of domains for which the algorithm in row beats the algorithm in column (Ada = best result of AdaBoost, ∅ = Ω-R.AdaBoost not regularized, see text).

Experiments support several key observations. First, regularization consistently reduces the test error of Ω-R.AdaBoost, by more than 15% on Magic, and 20% on Kaggle. In Table 2, Ω-R.AdaBoost unregularized ("∅") is virtually always beaten by its SLOPE regularized version. Second, Ω-R.AdaBoost is able to obtain both very sparse and accurate classifiers (Magic, Hardware, Marketing, Kaggle). Third, Ω-R.AdaBoost competes with or beats AdaBoost on all domains, and is all the better as the domain gets bigger. Even qualitatively, as seen in Table 2, the best result obtained by AdaBoost (regularized or not) does not manage to beat any of the regularized versions of Ω-R.AdaBoost on the majority of the domains. Fourth, it is important to have several choices of regularizers at hand. On domain Statlog, the difference in test error between the worst and the best regularization of Ω-R.AdaBoost exceeds 15%. Fifth, as already remarked [Nock et al., 2015], significantly subsampling rados (e.g. Marketing, Kaggle) still yields very accurate classifiers. Sixth, regularization in Ω-R.AdaBoost successfully reduces sparsity to learn more accurate classifiers on several domains (Spectf, Transfusion, Hill-noise, Winered, Magic, Marketing), achieving efficient adaptive sparsity control. Last, the comparatively extremely poor results of AdaBoost on the biggest domains seem to come from another advantage of rados that the theory developed so far does not take into account: on domains for which some features are significantly correlated with the class and for which we have a large number of examples, the concentration of the expected feature value in rados seems to provide leveraging coefficients that tend to have much larger (absolute) value than in AdaBoost, making the convergence of Ω-R.AdaBoost significantly faster than AdaBoost. For example, we have checked that it takes much more than the T = 1000 iterations for AdaBoost to start converging to the results of regularized Ω-R.AdaBoost on Hardware or Kaggle.
6 Conclusion

We have shown that the recent equivalences between two example and rado losses can be unified and generalized via a principled representation of a loss function in a two-player zero-sum game. Furthermore, we have shown that this equivalence extends to regularized losses, where the regularization in the rado loss is performed over the rados themselves with Minkowski sums. Our theory and experiments on Ω-R.AdaBoost with prominent regularizers (including ridge, lasso, ℓ∞, SLOPE) indicate that when such a simple regularized form of the rado loss is available, it may help to devise accurate and efficient workarounds to boost a regularized loss over examples via the rado loss, even when the regularizer is significantly more involved, like e.g. for group norms [Bach et al., 2011].

Acknowledgments
Thanks are due to Stephen Hardy and Giorgio Patrini for stimulating discussions around this material.
References
F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4:1–106, 2011.
K. Bache and M. Lichman. UCI machine learning repository, 2013.
M. Bogdan, E. van den Berg, C. Sabatti, W. Su, and E.-J. Candès. SLOPE – adaptive variable selection via convex optimization. Annals of Applied Statistics, 2015. Also arXiv:1310.1969v2.
J.-C. Duchi and Y. Singer. Efficient learning using forward-backward splitting. In NIPS*22, pages 495–503, 2009.
C. Gentile and M. Warmuth. Linear hinge loss and average margin. In NIPS*11, pages 225–231, 1998.
M.J. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. J. Comp. Syst. Sc., 58:109–128, 1999.
V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. In 27th ICML, pages 807–814, 2010.
R. Nock and F. Nielsen. On the efficient minimization of classification-calibrated surrogates. In NIPS*21, pages 1201–1208, 2008.
R. Nock, G. Patrini, and A. Friedman. Rademacher observations, private data, and boosting. In 32nd ICML, pages 948–956, 2015.
G. Patrini, R. Nock, S. Hardy, and T. Caetano. Fast learning from distributed datasets without entity matching. In 26th IJCAI, 2016.
M.-D. Reid, R.-M. Frongillo, R.-C. Williamson, and N.-A. Mehta. Generalized mixability via entropic duality. In 28th COLT, pages 1501–1522, 2015.
R.-E. Schapire. The boosting approach to machine learning: An overview. In D.-D. Denison, M.-H. Hansen, C.-C. Holmes, B. Mallick, and B. Yu, editors, Nonlinear Estimation and Classification, volume 171 of Lecture Notes in Statistics, pages 149–171. Springer Verlag, 2003.
R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. MLJ, 37:297–336, 1999.
W. Su and E.-J. Candès. SLOPE is adaptive to unknown sparsity and asymptotically minimax. CoRR, abs/1503.08393, 2015.
M. Telgarsky. A primal-dual convergence analysis of boosting. JMLR, 13:561–606, 2012.
B. van Rooyen, A. Menon, and R.-C. Williamson. Learning with symmetric label noise: The importance of being unhinged. In NIPS*28, 2015.
V. Vapnik. Statistical Learning Theory. John Wiley, 1998.
Y.-T. Xi, Z.-J. Xiang, P.-J. Ramadge, and R.-E. Schapire. Speed and sparsity of regularized boosting. In 12th AISTATS, pages 615–622, 2009.
H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B, 67:301–321, 2005.
6,161 | 6,573 | Binarized Neural Networks
Itay Hubara1 *
[email protected]
Matthieu Courbariaux2 *
[email protected]
Ran El-Yaniv1
[email protected]
Daniel Soudry3
[email protected]
Yoshua Bengio2,4
[email protected]
(1) Technion, Israel Institute of Technology.
(3) Columbia University.
(*) Indicates equal contribution.
(2) Université de Montréal.
(4) CIFAR Senior Fellow.
Abstract
We introduce a method to train Binarized Neural Networks (BNNs) - neural
networks with binary weights and activations at run-time. At train-time the binary
weights and activations are used for computing the parameter gradients. During the
forward pass, BNNs drastically reduce memory size and accesses, and replace most
arithmetic operations with bit-wise operations, which is expected to substantially
improve power-efficiency. To validate the effectiveness of BNNs, we conducted
two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs
achieved nearly state-of-the-art results over the MNIST, CIFAR-10 and SVHN
datasets. We also report our preliminary results on the challenging ImageNet
dataset. Last but not least, we wrote a binary matrix multiplication GPU kernel
with which it is possible to run our MNIST BNN 7 times faster than with an
unoptimized GPU kernel, without suffering any loss in classification accuracy. The
code for training and running our BNNs is available on-line.
Introduction
Deep Neural Networks (DNNs) have substantially pushed Artificial Intelligence (AI) limits in a wide
range of tasks (LeCun et al., 2015). Today, DNNs are almost exclusively trained on one or many very
fast and power-hungry Graphic Processing Units (GPUs) (Coates et al., 2013). As a result, it is often
a challenge to run DNNs on target low-power devices, and substantial research efforts are invested in
speeding up DNNs at run-time on both general-purpose (Gong et al., 2014; Han et al., 2015b) and
specialized computer hardware (Chen et al., 2014; Esser et al., 2015).
This paper makes the following contributions:
• We introduce a method to train Binarized-Neural-Networks (BNNs), neural networks with binary
weights and activations, at run-time, and when computing the parameter gradients at train-time
(see Section 1).
• We conduct two sets of experiments, each implemented on a different framework, namely Torch7 and Theano, which show that it is possible to train BNNs on MNIST, CIFAR-10 and SVHN and achieve near state-of-the-art results (see Section 2). Moreover, we report preliminary results on the challenging ImageNet dataset.
• We show that during the forward pass (both at run-time and train-time), BNNs drastically reduce memory consumption (size and number of accesses), and replace most arithmetic operations with bit-wise operations, which potentially lead to a substantial increase in power-efficiency (see Section 3). Moreover, a binarized CNN can lead to binary convolution kernel repetitions; we argue that dedicated hardware could reduce the time complexity by 60%.
• Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy (see Section 4).
The code for training and running our BNNs is available on-line, in both the Theano (https://github.com/MatthieuCourbariaux/BinaryNet) and Torch (https://github.com/itayhubara/BinaryNet) frameworks.
1 Binarized Neural Networks
In this section, we detail our binarization function, show how we use it to compute the parameter
gradients, and how we backpropagate through it.
Deterministic vs Stochastic Binarization When training a BNN, we constrain both the weights
and the activations to either +1 or −1. Those two values are very advantageous from a hardware
perspective, as we explain in Section 4. In order to transform the real-valued variables into those
two values, we use two different binarization functions, as in (Courbariaux et al., 2015). Our first
binarization function is deterministic:
    x_b = Sign(x) = +1 if x ≥ 0, −1 otherwise,   (1)
where x_b is the binarized variable (weight or activation) and x the real-valued variable. It is very
straightforward to implement and works quite well in practice. Our second binarization function is
stochastic:
    x^b = +1 with probability p = σ(x), −1 with probability 1 − p,   (2)
where σ is the "hard sigmoid" function:
    σ(x) = clip((x + 1)/2, 0, 1) = max(0, min(1, (x + 1)/2)).   (3)
The stochastic binarization is more appealing than the sign function, but harder to implement as it requires the hardware to generate random bits when quantizing. As a result, we mostly use the deterministic binarization function (i.e., the sign function), with the exception of activations at train-time in some of our experiments.
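The three functions above are small enough to sketch directly; the following NumPy snippet (our own illustrative code, not the authors' released implementation) mirrors Eqs. (1)-(3):

import numpy as np

def hard_sigmoid(x):
    # sigma(x) = clip((x + 1)/2, 0, 1), Eq. (3)
    return np.clip((x + 1.0) / 2.0, 0.0, 1.0)

def binarize_deterministic(x):
    # x_b = Sign(x): +1 if x >= 0, -1 otherwise, Eq. (1)
    return np.where(x >= 0, 1.0, -1.0)

def binarize_stochastic(x, rng=None):
    # x_b = +1 with probability p = sigma(x), -1 with probability 1 - p, Eq. (2)
    rng = np.random.default_rng() if rng is None else rng
    p = hard_sigmoid(x)
    return np.where(rng.random(np.shape(x)) < p, 1.0, -1.0)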
Gradient Computation and Accumulation Although our BNN training method uses binary
weights and activation to compute the parameter gradients, the real-valued gradients of the weights
are accumulated in real-valued variables, as per Algorithm 1. Real-valued weights are likely required for Stochastic Gradient Descent (SGD) to work at all. SGD explores the space of parameters in small
and noisy steps, and that noise is averaged out by the stochastic gradient contributions accumulated
in each weight. Therefore, it is important to maintain sufficient resolution for these accumulators,
which at first glance suggests that high precision is absolutely required.
Moreover, adding noise to weights and activations when computing the parameter gradients provide
a form of regularization that can help to generalize better, as previously shown with variational
weight noise (Graves, 2011), Dropout (Srivastava et al., 2014) and DropConnect (Wan et al., 2013).
Our method of training BNNs can be seen as a variant of Dropout, in which instead of randomly
setting half of the activations to zero when computing the parameter gradients, we binarize both the
activations and the weights.
Propagating Gradients Through Discretization The derivative of the sign function is zero almost
everywhere, making it apparently incompatible with back-propagation, since the exact gradient of
the cost with respect to the quantities before the discretization (pre-activations or weights) would
be zero. Note that this remains true even if stochastic quantization is used. Bengio (2013) studied
the question of estimating or propagating gradients through stochastic discrete neurons. He found in
his experiments that the fastest training was obtained when using the "straight-through estimator," previously introduced in Hinton's lectures (Hinton, 2012). We follow a similar approach but use the version of the straight-through estimator that takes into account the saturation effect, and does use deterministic rather than stochastic sampling of the bit. Consider the sign function quantization q = Sign(r), and assume that an estimator g_q of the gradient ∂C/∂q has been obtained (with the straight-through estimator when needed).
Algorithm 1: Training a BNN. C is the cost function for minibatch, λ the learning rate decay factor and L the number of layers. ∘ indicates element-wise multiplication. The function Binarize() specifies how to (stochastically or deterministically) binarize the activations and weights, and Clip() specifies how to clip the weights. BatchNorm() specifies how to batch-normalize the activations, using either batch normalization (Ioffe & Szegedy, 2015) or its shift-based variant we describe in Algorithm 3. BackBatchNorm() specifies how to backpropagate through the normalization. Update() specifies how to update the parameters when their gradients are known, using either ADAM (Kingma & Ba, 2014) or the shift-based AdaMax we describe in Algorithm 2.
Require: a minibatch of inputs and targets (a_0, a*), previous weights W, previous BatchNorm parameters θ, weight initialization coefficients from (Glorot & Bengio, 2010) γ, and previous learning rate η.
Ensure: updated weights W^{t+1}, updated BatchNorm parameters θ^{t+1} and updated learning rate η^{t+1}.
  {1. Computing the gradients:}
  {1.1. Forward propagation:}
  for k = 1 to L do
    W_k^b ← Binarize(W_k),  s_k ← a_{k−1}^b W_k^b
    a_k ← BatchNorm(s_k, θ_k)
    if k < L then a_k^b ← Binarize(a_k)
  {1.2. Backward propagation:}
  {Please note that the gradients are not binary.}
  Compute g_{a_L} = ∂C/∂a_L knowing a_L and a*
  for k = L to 1 do
    if k < L then g_{a_k} ← g_{a_k^b} ∘ 1_{|a_k|≤1}
    (g_{s_k}, g_{θ_k}) ← BackBatchNorm(g_{a_k}, s_k, θ_k)
    g_{a_{k−1}^b} ← g_{s_k} W_k^b,  g_{W_k^b} ← g_{s_k}^T a_{k−1}^b
  {2. Accumulating the gradients:}
  for k = 1 to L do
    θ_k^{t+1} ← Update(θ_k, η^t, g_{θ_k}),  η^{t+1} ← λ η^t
    W_k^{t+1} ← Clip(Update(W_k, γ_k η^t, g_{W_k^b}), −1, 1)

Algorithm 2: Shift-based AdaMax learning rule (Kingma & Ba, 2014). g_t^2 indicates the element-wise square g_t ∘ g_t, and <<>> stands for both left and right bit-shift. Good default settings are α = 2^{−10}, 1 − β_1 = 2^{−3}, 1 − β_2 = 2^{−10}. All operations on vectors are element-wise. With β_1^t and β_2^t we denote β_1 and β_2 to the power t.
Require: previous parameters θ_{t−1} and their gradient g_t, and learning rate α.
Ensure: updated parameters θ_t.
  {Biased 1st and 2nd moment estimates:}
  m_t ← β_1 · m_{t−1} + (1 − β_1) · g_t
  v_t ← max(β_2 · v_{t−1}, |g_t|)
  {Updated parameters:}
  θ_t ← θ_{t−1} − (α <<>> (1 − β_1)) · m_t <<>> v_t^{−1}

Algorithm 3: Shift-based Batch Normalizing Transform, applied to activation x over a mini-batch. The approximate power-of-2 is AP2(x) = sign(x) · 2^{round(log2|x|)} (see footnote 3), and <<>> stands for both left and right binary shift.
Require: values of x over a mini-batch: B = {x_{1...m}}; parameters to learn: γ, β.
Ensure: {y_i = BN(x_i, γ, β)}
  {1. Mini-batch mean:}  μ_B ← (1/m) Σ_{i=1}^m x_i
  {2. Centered input:}  C(x_i) ← x_i − μ_B
  {3. Approximate variance:}  σ_B^2 ← (1/m) Σ_{i=1}^m (C(x_i) <<>> AP2(C(x_i)))
  {4. Normalize:}  x̂_i ← C(x_i) <<>> AP2((√(σ_B^2 + ε))^{−1})
  {5. Scale and shift:}  y_i ← AP2(γ) <<>> x̂_i
Then, our straight-through estimator of ∂C/∂r is simply
    g_r = g_q 1_{|r|≤1}.   (4)
Note that this preserves the gradient's information and cancels the gradient when r is too large. Not cancelling the gradient when r is too large significantly worsens the performance. The use of this straight-through estimator is illustrated in Algorithm 1. The derivative 1_{|r|≤1} can also be seen as propagating the gradient through hard tanh, which is the following piece-wise linear activation function:
    Htanh(x) = Clip(x, −1, 1).   (5)
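As a sketch (ours), the straight-through backward pass of Eqs. (4)-(5) is a single masked multiplication:

import numpy as np

def ste_backward(grad_q, r):
    # Eq. (4): g_r = g_q * 1_{|r| <= 1}. Equivalently, backpropagate through
    # Htanh(r) = Clip(r, -1, 1), Eq. (5): pass the gradient where |r| <= 1 and
    # cancel it where the pre-binarization value has saturated.
    return grad_q * (np.abs(r) <= 1.0)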
For hidden units, we use the sign function non-linearity to obtain binary activations, and for weights we combine two ingredients:
• Constrain each real-valued weight between −1 and 1, by projecting w^r to −1 or 1 when the weight update brings w^r outside of [−1, 1], i.e., clipping the weights during training, as per Algorithm 1. The real-valued weights would otherwise grow very large without any impact on the binary weights.
• When using a weight w^r, quantize it using w^b = Sign(w^r). This is consistent with the gradient canceling when |w^r| > 1, according to Eq. 4.

Algorithm 4: Running a BNN. L = layers.
Require: a vector of 8-bit inputs a_0, the binary weights W^b, and the BatchNorm parameters θ.
Ensure: the MLP output a_L.
  {1. First layer:}
  a_1 ← 0
  for n = 1 to 8 do
    a_1 ← a_1 + 2^{n−1} · XnorDotProduct(a_0^n, W_1^b)
  a_1^b ← Sign(BatchNorm(a_1, θ_1))
  {2. Remaining hidden layers:}
  for k = 2 to L − 1 do
    a_k ← XnorDotProduct(a_{k−1}^b, W_k^b)
    a_k^b ← Sign(BatchNorm(a_k, θ_k))
  {3. Output layer:}
  a_L ← XnorDotProduct(a_{L−1}^b, W_L^b)
  a_L ← BatchNorm(a_L, θ_L)
Shift-based Batch Normalization   Batch Normalization (BN) (Ioffe & Szegedy, 2015) accelerates the training and also seems to reduce the overall impact of the weight scale. The normalization noise may also help to regularize the model. However, at train-time, BN requires many multiplications (calculating the standard deviation and dividing by it), namely, dividing by the running variance (the weighted mean of the training set activation variance). Although the number of scaling calculations is the same as the number of neurons, in the case of ConvNets this number is quite large. For example, in the CIFAR-10 dataset (using our architecture), the first convolution layer, consisting of only 128 × 3 × 3 filter masks, converts an image of size 3 × 32 × 32 to size 3 × 128 × 28 × 28, which is two orders of magnitude larger than the number of weights. To achieve the results that BN would obtain, we use a shift-based batch normalization (SBN) technique, detailed in Algorithm 3. SBN approximates BN almost without multiplications. In the experiment we conducted we did not observe accuracy loss when using the shift-based BN algorithm instead of the vanilla BN algorithm.
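To make the shift-based arithmetic concrete, here is a NumPy sketch of AP2 and Algorithm 3 (our simplification: products with AP2(·) stand in for the hardware bit-shifts, and we apply the learned offset β explicitly):

import numpy as np

def ap2(x):
    # Approximate power-of-2: AP2(x) = sign(x) * 2**round(log2|x|); zeros map to 0.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = np.sign(x[nz]) * 2.0 ** np.round(np.log2(np.abs(x[nz])))
    return out

def shift_batch_norm(x, gamma, beta, eps=1e-4):
    # Shift-based BN over a mini-batch (rows of x), following Algorithm 3: every
    # multiplication involves a power of two, i.e., a bit-shift in hardware.
    mu = x.mean(axis=0)                        # 1. mini-batch mean
    c = x - mu                                 # 2. centered input
    var = (c * ap2(c)).mean(axis=0)            # 3. approximate variance
    x_hat = c * ap2(1.0 / np.sqrt(var + eps))  # 4. normalize
    return ap2(gamma) * x_hat + beta           # 5. scale and shift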
Shift-based AdaMax   The ADAM learning rule (Kingma & Ba, 2014) also seems to reduce the
impact of the weight scale. Since ADAM requires many multiplications, we suggest using instead the
shift-based AdaMax we detail in Algorithm 2. In the experiment we conducted we did not observe
accuracy loss when using the shift-based AdaMax algorithm instead of the vanilla ADAM algorithm.
First Layer In a BNN, only the binarized values of the weights and activations are used in all
calculations. As the output of one layer is the input of the next, all the layers inputs are binary,
with the exception of the first layer. However, we do not believe this to be a major issue. First, in
computer vision, the input representation typically has far fewer channels (e.g, red, green and blue)
than internal representations (e.g, 512). As a result, the first layer of a ConvNet is often the smallest
convolution layer, both in terms of parameters and computations (Szegedy et al., 2014). Second, it is
relatively easy to handle continuous-valued inputs as fixed point numbers, with m bits of precision. For example, in the common case of 8-bit fixed point inputs:
    s = x · w^b ;   s = Σ_{n=1}^{8} 2^{n−1} (x^n · w^b),   (6)
where x is a vector of 1024 8-bit inputs, x_1^8 is the most significant bit of the first input, w^b is a vector of 1024 1-bit weights, and s is the resulting weighted sum. This trick is used in Algorithm 4.
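A minimal NumPy sketch of this bit-plane decomposition (our names; in Algorithm 4 each bit-plane product is computed with XnorDotProduct):

import numpy as np

def first_layer_fixed_point(x_uint8, W1_bin):
    # Eq. (6): s = x . w^b = sum_{n=1}^{8} 2^{n-1} (x^n . w^b), where x^n is
    # the n-th bit-plane of the 8-bit inputs and W1_bin has entries in {-1, +1}.
    s = np.zeros(W1_bin.shape[1])
    for n in range(8):                      # bit 0 here corresponds to n = 1 in Eq. (6)
        bit_plane = (x_uint8 >> n) & 1      # n-th bit of every input, in {0, 1}
        s += (2.0 ** n) * (bit_plane @ W1_bin)
    return s

# Sanity check: equals the plain fixed-point dot product.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=64, dtype=np.uint8)
W = rng.choice([-1.0, 1.0], size=(64, 16))
assert np.allclose(first_layer_fixed_point(x, W), x.astype(float) @ W)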
2 Benchmark Results
We conduct two sets of experiments, each based on a different framework, namely Torch7 and Theano.
Implementation details are reported in Appendix A and code for both frameworks is available online.
Results are reported in Table 1.
(3) Hardware implementation of AP2 is as simple as extracting the index of the most significant bit from the number's binary representation.
Table 1: Classification test error rates of DNNs trained on MNIST (fully connected architecture), CIFAR-10 and SVHN (convnet). No unsupervised pre-training or data augmentation was used.

Data set                                               MNIST          SVHN    CIFAR-10
Binarized activations+weights, during training and test
  BNN (Torch7)                                         1.40%          2.53%   10.15%
  BNN (Theano)                                         0.96%          2.80%   11.40%
  Committee Machines' Array (Baldassi et al., 2015)    1.35%          -       -
Binarized weights, during training and test
  BinaryConnect (Courbariaux et al., 2015)             1.29 ± 0.08%   2.30%   9.90%
Binarized activations+weights, during test
  EBP (Cheng et al., 2015)                             2.2 ± 0.1%     -       -
  Bitwise DNNs (Kim & Smaragdis, 2016)                 1.33%          -       -
Ternary weights, binary activations, during test
  (Hwang & Sung, 2014)                                 1.45%          -       -
No binarization (standard results)
  No regularization                                    1.3 ± 0.2%     2.44%   10.94%
  Gated pooling (Lee et al., 2015)                     -              1.69%   7.62%
[Figure 1: Training curves for different methods on the CIFAR-10 dataset. The dotted lines represent the training costs (square hinge losses) and the continuous lines the corresponding validation error rates. Although BNNs are slower to train, they are nearly as accurate as 32-bit float DNNs.]

Preliminary Results on ImageNet   To test the strength of our method, we applied it to the challenging ImageNet classification task. Considerable research has been concerned with compressing ImageNet architectures while preserving high accuracy performance (e.g., Han et al. (2015a)). Previous approaches that have been tried include pruning near-zero weights using matrix factorization techniques, quantizing the weights and applying Huffman codes, among others. To the best of our knowledge, so far there are no reports on successfully quantizing the network's activations. Moreover, a recent work (Han et al., 2015a) showed that accuracy significantly deteriorates when trying to quantize convolutional layers' weights below 4 bits (FC layers are more robust to quantization and can operate quite well with only 2 bits). In the present work we attempted to tackle the difficult task of binarizing both weights and activations. Employing
the well known AlexNet and GoogleNet architectures, we applied our techniques and achieved
36.1% top-1 and 60.1% top-5 accuracies using AlexNet and 47.1% top-1 and 69.1% top-5 accuracies
using GoogleNet. While this performance leaves room for improvement (relative to full precision nets), it is by far better than all previous attempts to compress ImageNet architectures using less
than 4 bits precision for the weights. Moreover, this advantage is achieved while also binarizing
neuron activations. Detailed descriptions of these results as well as full implementation details
of our experiments are reported in the supplementary material (Appendix B). In our latest work
(Hubara et al., 2016) we relaxed the binary constrains and allowed more than 1-bit per weight and
activations. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts.
For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves
51% top-1 accuracy and GoogleNet with 4-bits weighs and activation achived 66.6%. Moreover, we
quantize the parameter gradients to 6-bits as well which enables gradients computation using only
bit-wise operation. Full details can be found in (Hubara et al., 2016)
Table 2: Energy consumption of multiply-accumulations in pico-joules (Horowitz, 2014)

Operation                MUL     ADD
8-bit Integer            0.2pJ   0.03pJ
32-bit Integer           3.1pJ   0.1pJ
16-bit Floating Point    1.1pJ   0.4pJ
32-bit Floating Point    3.7pJ   0.9pJ

Table 3: Energy consumption of memory accesses in pico-joules (Horowitz, 2014)

Memory size    64-bit memory access
8K             10pJ
32K            20pJ
1M             100pJ
DRAM           1.3-2.6nJ

3 High Power Efficiency during the Forward Pass
Computer hardware, be it general-purpose or specialized, is composed of memories, arithmetic
operators and control logic. During the forward pass (both at run-time and train-time), BNNs
drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise
operations, which might lead to a great increase in power-efficiency. Moreover, a binarized CNN can
lead to binary convolution kernel repetitions, and we argue that dedicated hardware could reduce the
time complexity by 60% .
Memory Size and Accesses Improving computing performance has always been and remains a
challenge. Over the last decade, power has been the main constraint on performance (Horowitz, 2014).
This is why much research effort has been devoted to reducing the energy consumption of neural
networks. Horowitz (2014) provides rough numbers for the energy consumed by the computation (the
given numbers are for 45nm technology), as summarized in Tables 2 and 3. Importantly, we can see
that memory accesses typically consume more energy than arithmetic operations, and memory access
cost augments with memory size. In comparison with 32-bit DNNs, BNNs require 32 times smaller
memory size and 32 times fewer memory accesses. This is expected to reduce energy consumption
drastically (i.e., more than 32 times).
XNOR-Count Applying a DNN mainly consists of convolutions and matrix multiplications. The
key arithmetic operation of deep learning is thus the multiply-accumulate operation. Artificial neurons
are basically multiply-accumulators computing weighted sums of their inputs. In BNNs, both the activations and the weights are constrained to either −1 or +1. As a result, most of the 32-bit floating point multiply-accumulations are replaced by 1-bit XNOR-count operations. This could have a big
impact on dedicated deep learning hardware. For instance, a 32-bit floating point multiplier costs
about 200 Xilinx FPGA slices (Govindu et al., 2004; Beauchamp et al., 2006), whereas a 1-bit XNOR
gate only costs a single slice.
Exploiting Filter Repetitions   When using a ConvNet architecture with binary weights, the number of unique filters is bounded by the filter size. For example, in our implementation we use filters of size 3 × 3, so the maximum number of unique 2D filters is 2^9 = 512. Since we now have binary filters, many 2D filters of size k × k repeat themselves. By using dedicated hardware/software, we can apply only the unique 2D filters on each feature map and sum the results to receive each 3D filter's convolutional result. For example, in our ConvNet architecture trained on the CIFAR-10 benchmark, there are only 42% unique filters per layer on average. Hence we can reduce the number of the XNOR-popcount operations by 3.
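The repetition statistic is straightforward to measure; the sketch below (ours) counts the distinct 2D binary filters in a layer:

import numpy as np

def unique_filter_ratio(W_bin):
    # W_bin: (num_filters, k, k) array with entries in {-1, +1}. With 3x3
    # filters at most 2**9 = 512 patterns exist, so repetitions are unavoidable
    # once num_filters is large, and repeated filters can share one convolution.
    flat = W_bin.reshape(W_bin.shape[0], -1)
    n_unique = np.unique(flat, axis=0).shape[0]
    return n_unique / W_bin.shape[0]

# Example: 512 random 3x3 binary filters typically contain many repeats.
rng = np.random.default_rng(0)
W = rng.choice([-1.0, 1.0], size=(512, 3, 3))
print(unique_filter_ratio(W))  # < 1.0 with high probability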
4 Seven Times Faster on GPU at Run-Time
It is possible to speed up GPU implementations of BNNs, by using a method sometimes called
SIMD (single instruction, multiple data) within a register (SWAR). The basic idea of SWAR is to
concatenate groups of 32 binary variables into 32-bit registers, and thus obtain a 32-times speed-up
on bitwise operations (e.g., XNOR). Using SWAR, it is possible to evaluate 32 connections with only
3 instructions:
    a_1 += popcount(xnor(a_0^{32b}, w_1^{32b})),   (7)
where a_1 is the resulting weighted sum, and a_0^{32b} and w_1^{32b} are the concatenated inputs and weights.
Those 3 instructions (accumulation, popcount, xnor) take 1 + 4 + 1 = 6 clock cycles on recent Nvidia GPUs (and if they were to become a fused instruction, it would only take a single clock cycle). Consequently, we obtain a theoretical Nvidia GPU speed-up of a factor of 32/6 ≈ 5.3. In practice, this speed-up is quite easy to obtain as the memory bandwidth to computation ratio is also increased by 6 times.
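The identity behind the SWAR kernel can be checked on the CPU. The sketch below (ours; NumPy rather than a CUDA kernel) packs ±1 vectors into 32-bit words and evaluates the dot product with one bit-wise operation and one popcount per word:

import numpy as np

def pack_pm1(v):
    # Map +/-1 entries to bits (+1 -> 1, -1 -> 0), 32 per uint32 word;
    # len(v) must be a multiple of 32.
    return np.packbits((v > 0).astype(np.uint8)).view(np.uint32)

def binary_dot(a_words, w_words, n_bits):
    # For +/-1 vectors: a . w = n - 2 * popcount(a XOR w). Equivalently, with
    # XNOR as in Eq. (7): a . w = 2 * popcount(a XNOR w) - n. One bit-wise op
    # plus a popcount per word replaces 32 multiply-accumulates.
    mismatches = sum(bin(int(x)).count("1") for x in np.bitwise_xor(a_words, w_words))
    return n_bits - 2 * mismatches

rng = np.random.default_rng(1)
a = rng.choice([-1, 1], size=1024)
w = rng.choice([-1, 1], size=1024)
assert binary_dot(pack_pm1(a), pack_pm1(w), 1024) == a @ w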
In order to validate those theoretical results, we programmed two GPU kernels:
• The first kernel (baseline) is an unoptimized matrix multiplication kernel.
• The second kernel (XNOR) is nearly identical to the baseline kernel, except that it uses the SWAR method, as in Equation (7).
The two GPU kernels return identical outputs when their inputs are constrained to −1 or +1 (but not otherwise). The XNOR kernel is about 23 times faster than the baseline kernel and 3.4 times faster than cuBLAS, as shown in Figure 2. Last but not least, the MLP from Section 2 runs 7 times faster with the XNOR kernel than with the baseline kernel, without suffering any loss in classification accuracy (see Figure 2).

[Figure 2: The first three columns represent the time it takes to perform a 8192 × 8192 × 8192 (binary) matrix multiplication on a GTX750 Nvidia GPU, depending on which kernel is used. We can see that our XNOR kernel is 23 times faster than our baseline kernel and 3.4 times faster than cuBLAS. The next three columns represent the time it takes to run the MLP from Section 2 on the full MNIST test set. As MNIST's images are not binary, the first layer's computations are always performed by the baseline kernel. The last three columns show that the MLP accuracy does not depend on which kernel is used.]
5 Discussion and Related Work
Until recently, the use of extremely low-precision networks (binary in the extreme case) was believed to be highly destructive to the network performance (Courbariaux et al., 2014). Soudry et al. (2014) and Cheng et al. (2015) proved the contrary by showing that good performance could be achieved even if all neurons and weights are binarized to ±1. This was done
using Expectation BackPropagation (EBP), a
variational Bayesian approach, which infers networks with binary weights and neurons by updating the posterior distributions over the weights.
These distributions are updated by differentiating their parameters (e.g., mean values) via the back
propagation (BP) algorithm. Esser et al. (2015) implemented a fully binary network at run time using
a very similar approach to EBP, showing significant improvement in energy efficiency. The drawback
of EBP is that the binarized parameters are only used during inference.
The probabilistic idea behind EBP was extended in the BinaryConnect algorithm of Courbariaux et al.
(2015). In BinaryConnect, the real-valued version of the weights is saved and used as a key reference
for the binarization process. The binarization noise is independent between different weights, either
by construction (by using stochastic quantization) or by assumption (a common simplification; see Spang (1962)). The noise would have little effect on the next neuron's input because the input is
a summation over many weighted neurons. Thus, the real-valued version could be updated by the
back propagated error by simply ignoring the binarization noise in the update. Using this method,
Courbariaux et al. (2015) were the first to binarize weights in CNNs and achieved near state-of-the-art
performance on several datasets. They also argued that noisy weights provide a form of regularization,
which could help to improve generalization, as previously shown in (Wan et al., 2013). This method
binarized weights while still maintaining full precision neurons.
Lin et al. (2015) carried over the work of Courbariaux et al. (2015) to the back-propagation process
by quantizing the representations at each layer of the network, to convert some of the remaining
multiplications into bit-shifts by restricting the neurons values to be power-of-two integers. Lin et al.
(2015)'s work and ours seem to share similar characteristics. However, their approach continues to
use full precision weights during the test phase. Moreover, Lin et al. (2015) quantize the neurons
only during the back propagation process, and not during forward propagation.
Other research   Baldassi et al. (2015) showed that full binary training and testing is possible in an array of committee machines with randomized input, where only one weight layer is being adjusted. Gong et al. (2014) aimed to compress a fully trained high precision network by using quantization or matrix factorization methods. These methods required training the network with full precision weights and neurons, thus requiring numerous MAC operations that the proposed BNN algorithm avoids.
Hwang & Sung (2014) focused on a fixed-point neural network design and achieved performance
almost identical to that of the floating-point architecture. Kim & Smaragdis (2016) retrained neural
networks with binary weights and activations.
So far, to the best of our knowledge, no work has succeeded in binarizing weights and neurons, at the
inference phase and the entire training phase of a deep network. This was achieved in the present
work. We relied on the idea that binarization can be done stochastically, or be approximated as
random noise. This was previously done for the weights by Courbariaux et al. (2015), but our BNNs
extend this to the activations. Note that the binary activations are especially important for ConvNets,
where there are typically many more neurons than free weights. This allows highly efficient operation
of the binarized DNN at run time, and at the forward-propagation phase during training. Moreover,
our training method has almost no multiplications, and therefore might be implemented efficiently
in dedicated hardware. However, we have to save the value of the full precision weights. This is a
remaining computational bottleneck during training, since it is an energy-consuming operation.
Conclusion
We have introduced BNNs, which binarize deep neural networks and can lead to dramatic improvements in both power consumption and computation speed. During the forward pass (both at run-time
and train-time), BNNs drastically reduce memory size and accesses, and replace most arithmetic
operations with bit-wise operations. Our estimates indicate that power efficiency can be improved by
more than one order of magnitude (see Section 3). In terms of speed, we programmed a binary matrix multiplication GPU kernel that enabled running an MLP over the MNIST dataset 7 times faster (than with an unoptimized GPU kernel) without suffering any accuracy degradation (see Section 4).
We have shown that BNNs can handle MNIST, CIFAR-10 and SVHN while achieving nearly state-of-the-art accuracy performance. While our preliminary results for the challenging ImageNet are
not on par with the best results achievable with full precision networks, they significantly improve
all previous attempts to compress ImageNet-capable architectures (see Section 2 and supplementary
material - Appendix B). Moreover by relaxing the binary constrains and allowed more than 1-bit per
weight and activations we have been able to achieve prediction accuracy comparable to their 32-bit
counterparts. Full details can be found in our latest work (Hubara et al., 2016) A major open question
would be to further improve our results on ImageNet. A substantial progress in this direction might
lead to a huge impact on DNN usability in low-power instruments such as mobile phones.
Acknowledgments
We would like to express our appreciation to Elad Hoffer, for his technical assistance and constructive
comments. We thank our fellow MILA lab members who took the time to read the article and give us
some feedback. We thank the developers of Torch (Collobert et al., 2011), a Lua-based environment,
and Theano (Bergstra et al., 2010; Bastien et al., 2012), a Python library which allowed us to easily
develop a fast and optimized code for GPU. We also thank the developers of Pylearn2 (Goodfellow
et al., 2013) and Lasagne (Dieleman et al., 2015), two Deep Learning libraries built on top of
Theano. We thank Yuxin Wu for helping us compare our GPU kernels with cuBLAS. We are also
grateful for funding from NSERC, the Canada Research Chairs, Compute Canada, and CIFAR. We
are also grateful for funding from CIFAR, NSERC, IBM, Samsung. This research was also supported
by The Israel Science Foundation (grant No. 1890/14).
References
Baldassi, C., Ingrosso, A., Lucibello, C., Saglietti, L., and Zecchina, R. Subdominant Dense Clusters Allow for
Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses. Physical
Review Letters, 115(12):1–5, 2015.
Bastien, F., Lamblin, P., Pascanu, R., et al. Theano: new features and speed improvements. Deep Learning and
Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
Beauchamp, M. J., Hauck, S., Underwood, K. D., and Hemmert, K. S. Embedded floating-point units in FPGAs.
In Proceedings of the 2006 ACM/SIGDA 14th international symposium on Field programmable gate arrays,
pp. 12–20. ACM, 2006.
Bengio, Y. Estimating or propagating gradients through stochastic neurons. Technical Report arXiv:1305.2982,
Universite de Montreal, 2013.
Bergstra, J., Breuleux, O., Bastien, F., et al. Theano: a CPU and GPU math expression compiler. In Proceedings
of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
Chen, T., Du, Z., Sun, N., et al. Diannao: A small-footprint high-throughput accelerator for ubiquitous machinelearning. In Proceedings of the 19th international conference on Architectural support for programming
languages and operating systems, pp. 269–284. ACM, 2014.
Cheng, Z., Soudry, D., Mao, Z., and Lan, Z. Training binary multilayer neural networks for image classification
using expectation backpropgation. arXiv preprint arXiv:1503.03562, 2015.
Coates, A., Huval, B., Wang, T., et al. Deep learning with COTS HPC systems. In Proceedings of the 30th
international conference on machine learning, pp. 1337–1345, 2013.
Collobert, R., Kavukcuoglu, K., and Farabet, C. Torch7: A matlab-like environment for machine learning. In
BigLearn, NIPS Workshop, 2011.
Courbariaux, M., Bengio, Y., and David, J.-P. Training deep neural networks with low precision multiplications.
ArXiv e-prints, abs/1412.7024, December 2014.
Courbariaux, M., Bengio, Y., and David, J.-P. Binaryconnect: Training deep neural networks with binary weights
during propagations. ArXiv e-prints, abs/1511.00363, November 2015.
Dieleman, S., Schlüter, J., Raffel, C., et al. Lasagne: First release, August 2015.
Esser, S. K., Appuswamy, R., Merolla, P., Arthur, J. V., and Modha, D. S. Backpropagation for energy-efficient
neuromorphic computing. In Advances in Neural Information Processing Systems, pp. 1117–1125, 2015.
Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In
AISTATS'2010, 2010.
Gong, Y., Liu, L., Yang, M., and Bourdev, L. Compressing deep convolutional networks using vector quantization.
arXiv preprint arXiv:1412.6115, 2014.
Goodfellow, I. J., Warde-Farley, D., Lamblin, P., et al. Pylearn2: a machine learning research library. arXiv
preprint arXiv:1308.4214, 2013.
Govindu, G., Zhuo, L., Choi, S., and Prasanna, V. Analysis of high-performance floating-point arithmetic on
FPGAs. In Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International, pp. 149.
IEEE, 2004.
Graves, A. Practical variational inference for neural networks. In Advances in Neural Information Processing
Systems, pp. 2348–2356, 2011.
Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained
quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. In
Advances in Neural Information Processing Systems, pp. 1135–1143, 2015b.
Hinton, G. Neural networks for machine learning. Coursera, video lectures, 2012.
Horowitz, M. Computing's Energy Problem (and what we can do about it). IEEE International Solid State Circuits Conference, pp. 10–14, 2014.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Quantized neural networks: Training
neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
Hwang, K. and Sung, W. Fixed-point feedforward deep neural network design using weights+ 1, 0, and- 1. In
Signal Processing Systems (SiPS), 2014 IEEE Workshop on, pp. 1–6. IEEE, 2014.
Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate
shift. 2015.
Kim, M. and Smaragdis, P. Bitwise Neural Networks. ArXiv e-prints, January 2016.
Kingma, D. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436–444, 2015.
Lee, C.-Y., Gallagher, P. W., and Tu, Z. Generalizing pooling functions in convolutional neural networks: Mixed,
gated, and tree. arXiv preprint arXiv:1509.08985, 2015.
Lin, Z., Courbariaux, M., Memisevic, R., and Bengio, Y. Neural networks with few multiplications. ArXiv
e-prints, abs/1510.03009, October 2015.
Soudry, D., Hubara, I., and Meir, R. Expectation backpropagation: Parameter-free training of multilayer neural
networks with continuous or discrete weights. In NIPS'2014, 2014.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to
prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.
Szegedy, C., Liu, W., Jia, Y., et al. Going deeper with convolutions. Technical report, arXiv:1409.4842, 2014.
Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using dropconnect.
In ICML'2013, 2013.
6,162 | 6,574 | Exploiting Tradeoffs for Exact Recovery in
Heterogeneous Stochastic Block Models
Qiyang Han
Department of Statistics
University of Washington
Seattle, WA 98195
[email protected]
Amin Jalali
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
[email protected]
Ioana Dumitriu
Department of Mathematics
University of Washington
Seattle, WA 98195
[email protected]
Maryam Fazel
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
[email protected]
Abstract
The Stochastic Block Model (SBM) is a widely used random graph model for
networks with communities. Despite the recent burst of interest in community
detection under the SBM from statistical and computational points of view, there
are still gaps in understanding the fundamental limits of recovery. In this paper,
we consider the SBM in its full generality, where there is no restriction on the
number and sizes of communities or how they grow with the number of nodes, as
well as on the connectivity probabilities inside or across communities. For such
stochastic block models, we provide guarantees for exact recovery via a semidefinite program as well as upper and lower bounds on SBM parameters for exact
recoverability. Our results exploit the tradeoffs among the various parameters
of heterogenous SBM and provide recovery guarantees for many new interesting
SBM configurations.
1 Introduction
A fundamental problem in network science and machine learning is to discover structures in large,
complex networks (e.g., biological, social, or information networks). Community or cluster detection underlies many decision tasks, as a basic step that uses pairwise relations between data points
in order to understand more global structures in the data. Applications include recommendation
systems [27], image segmentation [24, 20], learning gene network structures in bioinformatics, e.g.,
in protein detection [9] and population genetics [17].
In spite of a long history of heuristic algorithms (see, e.g., [18] for an empirical overview), as well as
strong research interest in recent years on the theoretical side as briefly reviewed in the sequel, there
are still gaps in understanding the fundamental information theoretic limits of recoverability (i.e., if
there is enough information to reveal the communities) and computational tractability (if there are
efficient algorithms to recover them). This is particularly true in the case of sparse graphs (that test
the limits of recoverability), graphs with heterogeneous communities (communities varying greatly
in size and connectivity), graphs with a number of communities that grows with the number of nodes,
and partially observed graphs (with various observation models).
1.1 Exact Recovery for Heterogenous Stochastic Block Model
The stochastic block model (SBM), first introduced and studied in mathematical sociology by Holland, Laskey and Leinhardt in 1983 [16], can be described as follows. Consider n vertices partitioned
into r communities V_1, V_2, ..., V_r, of sizes n_1, n_2, ..., n_r. We endow the kth community with an Erdős–Rényi random graph model G(n_k, p_k) and draw an edge between pairs of nodes in different communities independently with probability q; i.e., for any pair of nodes i and j, if i, j ∈ V_k for some k ∈ {1, ..., r} we draw an edge with probability p_k, and draw an edge with probability q if they are in different communities. We assume q < min_k p_k in order for the idea of communities to make sense. This defines a distribution over random graphs known as the stochastic block model. In
this paper, we assume the above model while allowing the number of communities to grow with the
number of nodes (similar to [13, 15, 23]). We refer to this model as the heterogeneous stochastic
block model to contrast our study of this general setting with previous works on special cases of SBM such as 1) homogenous SBM where the communities are equivalent (they are of the same size and the connectivity probabilities are equal), e.g., [12], or 2) SBM with linear-sized communities, where the number of communities is fixed and all community sizes are O(n); e.g., [1].
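To fix ideas, a single draw from this heterogeneous SBM can be sampled in a few lines (a minimal NumPy sketch in the notation above; function names are ours):

import numpy as np

def sample_hetero_sbm(sizes, p, q, rng=None):
    # sizes = (n_1, ..., n_r), p = (p_1, ..., p_r) within-community
    # probabilities, q the cross-community probability (q < min_k p_k).
    rng = np.random.default_rng() if rng is None else rng
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]
    P = np.where(same, np.asarray(p)[labels][:, None], q)  # edge probabilities
    U = rng.random((labels.size, labels.size))
    A = np.triu((U < P).astype(int), k=1)  # independent edges above the diagonal
    return A + A.T, labels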
1.2 Statistical and Computational Regimes
What we can infer about the community structure from a single draw of the random graph varies
based on the regime of model parameters. Often, the following scenarios are considered.
1. Recovery, where the proportion of misclassified nodes is negligible; either 0 (corresponding to
exact recovery with strong consistency, and considered in [12, 1]) or asymptotically 0 (corresponding to exact recovery with weak consistency as considered in [23, 22, 28]) as the number of
nodes grows.
2. Approximation, where a finite fraction (bounded away from 1) of the vertices is recovered. This
regime was first introduced in [13, 14], and has been considered in many other works since then;
e.g., see [15] and references therein.
Both recovery and approximation can be studied from statistical and computational points of view.
Statistically, one can ask about the parameter regimes for which the model can be recovered or approximated. Such characterizations are specially important when an information-theoretical lower
bound (below which recovery is not possible with high probability) is shown to be achievable with
an algorithm (with high probability), hence characterizing a phase transition in model parameters.
Recently, there has been significant interest in identifying such sharp thresholds for various parameter regimes.
Computationally, one might be interested to study algorithms for recovery or approximation. In
the older approach, algorithms were studied to provide upper bounds on the parameter regimes for
recovery or approximation. See [10] or [1, Section 5] for a summary of such results. More recently,
the paradigm has shifted towards understanding the limitations and strengths of tractable methods
(e.g. see [21] on semidefinite programming based methods) and assessing whether successful retrieval can be achieved by tractable algorithms at the sharp statistical thresholds or there is a gap.
So far, it is understood that there is no such gap in the case of exact recovery (weak and strong)
and approximation of binary SBM as well as the exact recovery of linear-sized communities [1].
However, this is still an open question for more general cases; e.g., see [2] and the list of unresolved
conjectures therein.
The statistical-computational picture for SBM with only two equivalent communities has been fully
characterized in a series of recent papers. Apart from the binary SBM, the best understood cases are
where there is a finite number r of equivalent or linear-sized communities. Outside of the settings
described above, the full picture has not yet emerged and many questions are unresolved.
1.3 This paper
The community detection problem studied in this paper is stated as: given the adjacency matrix of
a graph generated by the heterogenous stochastic block model, for what SBM parameters we can
recover the labels of all vertices, with high probability, using an algorithm that has been proved to
do so. We consider a convex program in (2.4) and an estimator similar to the maximum likelihood
estimator in (2.5) and characterize parts of the model space for which exact recovery is possible via
these algorithms. Theorems 1 and 2 provide sufficient conditions for the convex recovery program
and Theorem 3 provides sufficient conditions for the modified maximum likelihood estimator to
exactly recover the underlying model. In Section 2.3, we extend the above bounds to the case of
partial observations, i.e., when each entry of the matrix is observed uniformly at random with some
probability ? and the results are recorded. We also provide an information-theoretic lower bound,
describing an impossibility regime for exact recovery in heterogenous SBM in Theorem 4. All of
our results only hold with high probability, as this is the best one can hope for; with tiny probability
the model can generate graphs like the complete graph where the partition is unrecoverable.
The results of this paper provide a clear improvement in the understanding of stochastic block models by exploiting tradeoffs among SBM parameters. We identify a key parameter (or summary
statistic), defined in (2.1) and referred to as relative density, which shows up in our results and provides improvements in the statistical assessment and efficient computational approaches for certain
configurations of heterogenous SBM; examples are given in in Section 3 to illustrate a number of
such beneficial tradeoffs such as
• semidefinite programming can successfully recover communities of size O(√log n) under mild conditions on other communities (see Example 3 for details) while log n has long been believed to be the threshold for the smallest community size.
• The sizes of the communities can be very spread, or the inter- and intra-community probabilities can be very close, and the model still be efficiently recoverable, while existing methods (e.g., the peeling strategy [3]) provide false negatives.
While these results are a step towards understanding the information-computational picture about
the heterogenous SBM with a growing number of communities, we cannot comment on phase transitions or a possible information-computational gap (see Section 1.2) in this setup based on the
results of this paper.
2 Main Results
Consider the heterogenous stochastic block model described above. In the proofs, we can allow
for isolated nodes (communities of size 1) which are omitted from the model here to simplify the
presentation. Denote by 𝒴 the set of admissible adjacency matrices according to a community assignment as above, i.e.,
    𝒴 := {Y ∈ {0, 1}^{n×n} : Y is a valid community matrix w.r.t. V_1, ..., V_r where |V_k| = n_k}.
Define the relative density of community k as
    ρ_k = (p_k − q) n_k,   (2.1)
which can be seen as the increase in the average degree of a node in community k in the SBM, relative to its average degree in an Erdős–Rényi model. Define n_min and n_max as the minimum and maximum of n_1, ..., n_r respectively. The total variance over the kth community is defined as σ_k^2 = n_k p_k (1 − p_k), and we let σ_0^2 = nq(1 − q). Moreover, consider
    σ_max^2 = max_{k=1,...,r} n_k p_k (1 − p_k).   (2.2)
A Bernoulli random variable with parameter p is denoted by Ber(p), and a Binomial random variable with parameters n and p is denoted by Bin(n, p). The Neyman Chi-square divergence between the two discrete random variables Ber(p) and Ber(q) is given by
    D̃(p, q) := (p − q)^2 / (q(1 − q)),   (2.3)
and we have D̃(p, q) ≥ D_KL(p, q) := D_KL(Ber(p), Ber(q)). Chi-square divergence is an instance of a more general family of divergence functions called f-divergences or Ali-Silvey distances. This family also has KL-divergence, total variation distance, Hellinger distance and Chernoff distance as special cases. Moreover, the divergence used in [1] is an f-divergence.
Lastly, log denotes the natural logarithm (base e), and the notation x ≳ 1 is equivalent to x ≥ O(1).
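For a given parameter configuration, these summary statistics are cheap to evaluate; a small sketch in our notation:

import numpy as np

def sbm_statistics(sizes, p, q):
    # rho_k = (p_k - q) n_k (relative density, Eq. (2.1));
    # sigma_k^2 = n_k p_k (1 - p_k) and sigma_max^2 (Eq. (2.2));
    # chi-square divergence D(p_min, q) = (p_min - q)^2 / (q (1 - q)) (Eq. (2.3)).
    sizes, p = np.asarray(sizes, float), np.asarray(p, float)
    rho = (p - q) * sizes
    sigma2 = sizes * p * (1 - p)
    chi2_min = (p.min() - q) ** 2 / (q * (1 - q))
    return rho, sigma2, sigma2.max(), chi2_min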
2.1 Convex Recovery
Inspired by the success of semidefinite programs in community detection (e.g., see [15, 21]) we
consider a natural convex relaxation of the maximum likelihood estimator, similar to the one used
in [12], for exact recovery of the heterogeneous SBM with a growing number of communities. Assuming that λ = Σ_{k=1}^r n_k^2 is known, we solve
    Ŷ = arg max_Y  Σ_{i,j} A_{ij} Y_{ij}
    subject to  ‖Y‖_* ≤ n,  Σ_{i,j} Y_{ij} = λ,  0 ≤ Y_{ij} ≤ 1,   (2.4)
where ‖·‖_* denotes the nuclear norm (the sum of singular values of the matrix).
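Program (2.4) transcribes almost verbatim into CVXPY; the sketch below (ours, and practical only for modest n, since the nuclear-norm constraint makes this a semidefinite program) makes the constraint set explicit:

import cvxpy as cp

def convex_recovery(A, lam):
    # (2.4): maximize sum_ij A_ij Y_ij subject to ||Y||_* <= n,
    # sum_ij Y_ij = lam, and 0 <= Y_ij <= 1, with lam = sum_k n_k^2.
    n = A.shape[0]
    Y = cp.Variable((n, n))
    prob = cp.Problem(
        cp.Maximize(cp.sum(cp.multiply(A, Y))),
        [cp.normNuc(Y) <= n, cp.sum(Y) == lam, Y >= 0, Y <= 1],
    )
    prob.solve()
    return Y.value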
We prove two theorems giving conditions under which the above convex program outputs the true
community matrix with high probability. In establishing these performance guarantees, we follow
the standard dual certificate argument in convex analysis while utilizing strong matrix concentration results from random matrix theory [8, 25, 26, 5]. These results allow us to bound the spectral
radius of the matrix A ? E[A] where A is an instance of adjacency matrix generated under heterogenous SBM. The proofs for both theorems along with the matrix concentration bounds are given in
Appendix A.
Theorem 1 Under the heterogenous stochastic block model, the output of the semidefinite program in (2.4) coincides with Y^* with high probability, provided that
    ρ_k^2 ≳ σ_k^2 log n_k,   D̃(p_min, q) ≳ (log n_min)/n_min,   ρ_min^2 ≳ max{σ_max^2, nq(1 − q), log n},
and
    Σ_{k=1}^r n_k^{−δ} = o(1)  for some δ > 0.
Proof Sketch. For Y* to be the unique solution of (2.4), we need to show that for any feasible
Y ≠ Y*, the following quantity

    ⟨A, Y* − Y⟩ = ⟨E[A], Y* − Y⟩ + ⟨A − E[A], Y* − Y⟩

is strictly positive. In bounding the second term above, we make use of the constraint ‖Y‖_* ≤ n =
‖Y*‖_* by constructing a dual certificate from A − E[A] . This is where the bounds on the spectral
norm (the dual norm of the nuclear norm) of A − E[A] enter, and we use matrix concentration bounds
(see Lemma 7 in Appendix A).
The first condition of Theorem 1 is equivalent to each community being connected, the second condition
ensures that each community is identifiable (p_min − q is large enough), and the third condition
requires the minimal density to dominate the global variability. The assumption Σ_{k=1}^r n_k^{−β} = o(1) is
tantamount to saying that the number of tiny communities cannot be too large (e.g., the number
of polylogarithmic-size communities cannot be a power of n). In other words, one needs to have
mostly large communities (growing like n^ε , for some ε > 0) for this assumption to be satisfied.
Note, however, that the condition does not restrict the number of communities of size n^ε for any
fixed ε > 0 . In fact, Theorem 1 allows us to describe a regime in which tiny communities of
size O(√(log n)) are recoverable provided that they are very dense and that only a few tiny or small
communities exist; see Example 3. The second theorem imposes more stringent conditions on the
relative density, hence only allowing for communities of size down to log n , but relaxes the condition
that only a small number of nodes can be in small communities.
Theorem 2 Under the heterogenous stochastic block model, the output of the semidefinite program
in (2.4) coincides with Y* , with high probability, provided that

    ρ_k² ≳ σ_k² log n ,   D̃(p_min, q) ≳ (log n)/n_min ,   ρ_min² ≳ max{σ_max², nq(1 − q)} .
The proof of Theorem 2 is similar to the proof of Theorem 1 except that we use a different matrix
concentration bound (see Lemma 10 in Appendix A).
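To make the conditions of Theorems 1 and 2 easier to work with, the sketch below (ours) computes the summary statistics (2.1)-(2.2) for a given configuration. Since the theorems leave constants unspecified, this can only compare orders of magnitude, not exact thresholds:

```python
import math

def summary_stats(sizes, probs, q):
    # sizes[k], probs[k]: community sizes n_k and probabilities p_k; q: ambient probability
    n = sum(sizes)
    rho = [(p - q) * m for m, p in zip(sizes, probs)]         # relative densities (2.1)
    sigma2 = [m * p * (1 - p) for m, p in zip(sizes, probs)]  # per-community variances
    return {
        "rho_min^2": min(rho) ** 2,
        "sigma_max^2": max(sigma2),                           # (2.2)
        "nq(1-q)": n * q * (1 - q),
        "D(p_min, q)": (min(probs) - q) ** 2 / (q * (1 - q)),
        "log(n)/n_min": math.log(n) / min(sizes),
    }

# Example: the two-community configuration of Example 1 with n = 10^6.
n = 10 ** 6
root = int(math.sqrt(n))
print(summary_stats([n - root, root],
                    [n ** (-2 / 3), 1 / math.log(n)],
                    n ** (-2 / 3 - 0.01)))
```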
2.2 Recoverability Lower and Upper Bounds
Next, we consider an estimator, inspired by maximum likelihood estimation, and identify a subset of
the model space which is exactly recoverable via this estimator. The proposed estimation approach
is not computationally tractable and is only used to examine the conditions under which exact recovery
is possible. For a fixed Y ∈ 𝒴 and an observed matrix A , the likelihood function is given by

    P_Y(A) = Π_{i<j} p_{τ(i,j)}^{A_ij Y_ij} (1 − p_{τ(i,j)})^{(1−A_ij) Y_ij} q^{A_ij (1−Y_ij)} (1 − q)^{(1−A_ij)(1−Y_ij)} ,

where τ : {1, . . . , n}² → {1, . . . , r} and τ(i, j) = k if and only if i, j ∈ V_k , and arbitrary in
{1, . . . , r} otherwise. The log-likelihood function is given by
    log P_Y(A) = Σ_{i<j} log( p_{τ(i,j)} (1 − q) / (q (1 − p_{τ(i,j)})) ) A_ij Y_ij
                 + Σ_{i<j} log( (1 − p_{τ(i,j)}) / (1 − q) ) Y_ij + terms not involving {Y_ij}.
Maximizing the log-likelihood involves maximizing a weighted sum of the {Y_ij}'s, where the weights
depend on the (usually unknown) values of q, p_1 , . . . , p_r . To be able to work with less information,
we will use the following modification of maximum likelihood estimation, which only uses the
knowledge of n_1 , . . . , n_r :
    Ŷ = arg max_{Y ∈ 𝒴} Σ_{i,j=1}^n A_ij Y_ij .                     (2.5)
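Although (2.5) is intractable at scale, it is simple to state as code. The brute-force sketch below (ours; exponential time, for tiny graphs only) enumerates ordered partitions of the vertices into blocks of the given sizes and maximizes ⟨A, Y⟩:

```python
import itertools
import numpy as np

def community_matrix(blocks, n):
    # Build the 0/1 community matrix Y for a list of vertex blocks.
    Y = np.zeros((n, n))
    for block in blocks:
        for i in block:
            for j in block:
                Y[i, j] = 1.0
    return Y

def ml_style_recovery(A, sizes):
    # Exhaustively search over assignments consistent with the known sizes.
    n = A.shape[0]
    best_val, best_Y = -np.inf, None
    for perm in itertools.permutations(range(n)):
        blocks, start = [], 0
        for m in sizes:
            blocks.append(perm[start:start + m])
            start += m
        Y = community_matrix(blocks, n)
        val = float((A * Y).sum())
        if val > best_val:
            best_val, best_Y = val, Y
    return best_Y
```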
Theorem 3 Suppose n_min ≥ 2 and n ≥ 8 . Under the heterogenous stochastic block model, if

    ρ_min ≥ 4(17 + β) ( (p_min(1 − p_min) + q(1 − q)) / (p_min − q) + 1/3 ) log n ,

for some choice of β > 0 , then the optimal solution Ŷ of the non-convex recovery program in (2.5)
coincides with Y* , with a probability not less than 1 − 7 ((p_max − q)/(p_min − q)) n^{−2β} .
Notice that ρ_min = min_{k=1,...,r} n_k (p_k − q) and p_min = min_{k=1,...,r} p_k do not necessarily correspond to the same community. Similar to the proof of Theorem 1, we establish ⟨A, Y* − Y⟩ > 0
for any Y ∈ 𝒴, while this time we use a counting argument (see Lemma 11 in Appendix B) similar
to the one in [12]. The proofs for this theorem and the next one are given in Appendix B.
Finally, to provide a better picture of community detection for the heterogenous SBM, we provide the
following necessary conditions for exact recovery. Notice that Theorems 1 and 2 require D̃(q, p_k)
(in their first condition) and D̃(p_k, q) (in their second condition) to be bounded from below for
recoverability by the SDP. Similarly, the conditions of Theorem 4 can be seen as average-case and
worst-case upper bounds on these divergences.
Theorem 4 If any of the following conditions holds,

    (1) 2 ≤ n_k ≤ n/e for all k , and 4 Σ_{k=1}^r n_k² D̃(p_k, q) ≤ ½ Σ_k n_k log(n/n_k) − r ,
    (2) n ≥ 128 , r ≥ 2 , and max_k { n_k D̃(p_k, q) + n_k D̃(q, p_k) } ≤ (1/12) log(n − n_min) ,

then inf_Ŷ sup_{Y*∈𝒴} P[Ŷ ≠ Y*] ≥ 1/2 , where the infimum is taken over all measurable estimators Ŷ
based on the realization A generated according to the heterogenous stochastic block model.
2.3 Partial Observations
In the general stochastic block model, we assume that the entries of a symmetric adjacency matrix
A ∈ {0, 1}^{n×n} have been generated according to a combination of Erdős–Rényi models with parameters that depend on the true community matrix. In the case of partial observations, we assume
that each entry of A has been observed independently with probability θ . In fact, every entry of
the input matrix falls into one of three categories: observed as one, denoted by Ω_1 ; observed as zero,
denoted by Ω_0 ; and unobserved, which corresponds to Ω^c where Ω = Ω_0 ∪ Ω_1 . If an estimator only
takes the observed part of the matrix as the input, one can revise the underlying probabilistic model
to incorporate both the stochastic block model and the observation model; i.e., a revised distribution
for the entries of A,

    A_ij ~ Ber(θ p_k)  if i, j ∈ V_k for some k ,
    A_ij ~ Ber(θ q)    if i ∈ V_k and j ∈ V_l for k ≠ l ,

yields the same output from an estimator that only takes in the observed values. Therefore, the
estimators in (2.4) and (2.5), as well as the results of Theorems 1, 2, 3, can be easily adapted to the
case of partially observed graphs. It is worth mentioning that the above model for partially observed
SBM (which is another SBM) is different from another random model known as Censored Block
Model (CBM) [4]. In SBM, absence of an edge provides information, whereas in CBM it does not.
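A small sampler makes the (partially observed) model concrete. The sketch below is ours, not the authors' code: within-community entries are drawn as Ber(θ p_k) and cross entries as Ber(θ q), so θ = 1 recovers the fully observed SBM:

```python
import numpy as np

def sample_hsbm(sizes, probs, q, theta=1.0, seed=0):
    # Draw a symmetric adjacency matrix from the heterogenous SBM where each
    # entry is additionally observed independently with probability theta.
    rng = np.random.default_rng(seed)
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    own = np.asarray(probs)[labels]                  # p_k for each vertex
    P = np.where(labels[:, None] == labels[None, :], own[:, None], q) * theta
    upper = np.triu(rng.random((n, n)) < P, k=1)     # sample strict upper triangle
    A = (upper + upper.T).astype(int)                # symmetrize
    return A, labels
```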
3 Tradeoffs in Heterogenous SBM
As can be seen from the results presented in this paper, and the main summary statistics they utilize (the relative densities ρ_1 , . . . , ρ_r ), the parameters of the SBM can vary significantly and still satisfy
the same recoverability conditions. In the following, we examine a number of such tradeoffs which
lead to recovery guarantees for interesting SBM configurations. Here, a configuration is a list of
community sizes nk , their connectivity probabilities pk , and the inter-community connectivity probability q . A triple (m, p, k) represents k communities of size m each, with connectivity parameter p .
We do not worry about whether m and k are always integers; if they are not, one can always round
up or down as needed so that the total number of vertices is n, without changing the asymptotics.
Moreover, when the O(·) notation is used, we mean that appropriate constants can be determined.
A detailed list of computations for the examples in this section are given in Appendix D.
Table 1: A summary of examples in Section 3. Each row gives the important aspect of the
corresponding example as well as whether, under appropriate regimes of parameters, it would
satisfy the conditions of the theorems proved in this paper.

         convex recovery   convex recovery   recoverability
         by Thm. 1         by Thm. 2         by Thm. 3       importance
 Ex. 1   ✗                 ✗                 ✓               {ρ_k} instead of (p_min, n_min)
 Ex. 2   ✓                 ✓                 ✓               stronger guarantees for convex recovery
 Ex. 3   ✓                 ✗                 ✗               n_min = √(log n)
 Ex. 4   ✓                 ✓                 ✓               many small communities, n_max = O(n)
 Ex. 5   ✗                 ✓                 ✓               n_min = O(log n), spread in sizes
 Ex. 6   ✓                 ✓                 ✓               small p_min − q
Better Summary Statistics. It is intuitive that using summary statistics such as (p_min , n_min ) for
a heterogenous SBM, where the n_k's and p_k's are allowed to take very different values, can be very
limiting. Examples 1 and 2 are intended to give configurations that are guaranteed to be recoverable
by our results but fail the existing recoverability conditions in the literature.
Example 1 Suppose we have two communities of sizes n_1 = n − √n, n_2 = √n, with p_1 = n^{−2/3}
and p_2 = 1/log n while q = n^{−2/3−0.01} . The bound we obtain here in Theorem 3 makes it clear
that this case is theoretically solvable (the modified maximum likelihood estimator successfully
recovers it). By contrast, Theorem 3.1 in [7] (specialized for the case of no outliers), requiring

    n_min² (p_min − q)² ≳ (√(p_min n_min) + √(nq))² log n ,          (3.1)

would fail and provide no guarantee for recoverability.
Example 2 Consider a configuration as

    (n − n^{2/3}, n^{−1/3+ε}, 1) , (√n, O(1/log n), n^{1/6}) , q = n^{−2/3+3ε}

where ε is a small quantity, e.g., ε = 0.1 . Either of Theorems 1 and 2 certifies this case as recoverable
via the semidefinite program (2.4) with high probability. By contrast, using the p_min = n^{−1/3+ε} and
n_min = √n heuristic, neither the condition of Theorem 3.1 in [7] (given in (3.1)) nor the condition
of Theorem 2.5 in [12] is fulfilled, hence providing no recovery guarantee for this configuration.
3.1 Small communities can be efficiently recovered
Most algorithms for clustering the SBM run into the problem of small communities [11, 6, 19],
often because the models employed do not allow for enough parameter variation to identify the key
quantities involved. The next three examples attempt to provide an idea of how small the community
sizes can be, how many small communities are allowed, and how wide the spread of community sizes
can be, as characterized by our results.
Example 3 (smallest community size for convex recovery) Consider a configuration as

    (√(log n), O(1), m) , (n_2, O(√((log n)/n)), 1) , q = O((log n)/n)

where n_2 = n − m√(log n) to ensure a total of n vertices. Here, we assume m ≤ n/(2√(log n)),
which implies n_2 ≥ n/2 . It is straightforward to verify the conditions of Theorem 1.
To our knowledge, this is the first example in the literature for which semidefinite programming
based recovery works and allows the recovery of (a few) communities of size smaller than log n.
Previously, log n was considered to be the standard bound on the community size for exact recovery,
as illustrated by Theorem 2.5 of [12] in the case of equivalent communities. We have thus shown
that it is possible, in the right circumstances (when sizes are spread out and the smaller the community
the denser it is), to recover very small communities (up to √(log n) in size), if there are just a few of
them (at most polylogarithmic in n). The significant improvement we made in the bound on the
size of the smallest community is due to the fact that we were able to perform a closer analysis of
the semidefinite program by utilizing stronger matrix concentration bounds, mainly borrowed from
[8, 25, 26, 5]. For more details, see Appendix A.2.
Notice that the condition of Theorem 3 is not satisfied. This is not an inconsistency (as Theorem 3
gives only an upper bound for the threshold), but indicates the limitation of this theorem in characterizing all recoverable cases.
Spreading the sizes. As mentioned before, while Theorem 1 allows for going lower than the standard log n bound on the community size for exact recovery, it requires the number of very small
communities to be relatively small. On the other hand, Theorem 2 provides us with the option of
having many small communities but requires the smallest community to be of size O(log n) . We
explore two cases with many small communities in the following.
Example 4 Consider a configuration where small communities are dense and there is one big
community,

    (½ n^α, O(1), n^{1−α}) , (½ n, n^{−γ} log n, 1) , q = O(n^{−β} log n)

with 0 < α < 1 and 0 < γ < β < 1. We are interested to see how large the number of small
communities can be. Then the conditions of Theorems 1 and 2 both require that

    ½ (1 − γ) < α < 2(1 − γ) ,   β > 2γ − α ,                        (3.2)
and are depicted in Figure 1. Since we have not specified the constants in our results, we only
consider strict inequalities.
[Figure 1: The space of parameters (α, β, γ) in Equation (3.2), bounded by the planes γ + 2α = 1,
α + 2γ = 2, and 2γ = α + β. The face defined by β = γ is shown with dotted edges, and the three
gray faces in the back correspond to the boundary planes of the parameter ranges. The green plane
(corresponding to the last condition in (3.2)) comes from controlling the intra-community interactions
uniformly (the interested reader is referred to Equations (A.8) and (A.9) in the supplementary
material), which might be only an artifact of our proof and can possibly be improved.]
Notice that the small communities are as dense as can be, but the large one is not necessarily very
dense. By picking α to be just over 1/4, we can make γ just shy of 1/2, and β very close to 1. As
far as we can tell, there are no results in the literature surveyed that cover such a case, although
the clever "peeling" strategy introduced in [3] would recover the largest community. The strongest
result in [3] that seems applicable here is Corollary 4 (which works for non-constant probabilities).
The algorithm in [3] works to recover a large community (larger than O(√n log² n)), subject to the
existence of a gap in the community sizes (roughly, there should be no community sizes between
O(√n) and O(√n log² n)). Therefore, in this example, after a single iteration, the algorithm will
stop, despite the continued existence of a gap, as there is no community with size above the gap.
Hence the "peeling" strategy on this example would fail to recover all the communities.
Example 5 Consider a configuration with many small dense communities of size log n . We are interested to see how large the spread of community sizes can be for the semidefinite program to work.
As required by Theorems 1 and 2, and to control σ_max (defined in (2.2)), the larger a community
the smaller its connectivity probability should be; therefore we choose the largest community at the
threshold of connectivity (required for recovery). Consider the community sizes and probabilities:

    (log n, O(1), n/log n − m√(n/log n)) , (√(n log n), O(√((log n)/n)), m) , q = O((log n)/n)

where m is a constant. Again, we round up or down where necessary to make sure the sizes are
integers and the total number of vertices is n. All the conditions of Theorem 2 are satisfied and
exact convex recovery is possible via the semidefinite program. Note that the last condition of
Theorem 1 is not satisfied since there are too many small communities. Also note that alternative
methods proposed in the literature surveyed would not be applicable; in particular, the gap condition
in [3] is not satisfied for this case from the start.
3.2 Weak communities are efficiently recoverable
The following examples illustrate how small p_min − q can be in order for the recovery, respectively
the convex recovery, algorithms to still be guaranteed to work. When some p_k is very close to q ,
the Erdős–Rényi model G(n_k, p_k) looks very similar to the ambient edges from G(n, q) . Again, we
are going to exploit the possible tradeoffs in the parameters of the SBM to guarantee recovery. Note
that the difference in p_min − q for the two types of recovery is noticeable, indicating that there is a
significant difference between what we know to be recoverable and what we can recover efficiently
by our convex method. We consider both dense graphs (where p_min is O(1)) and sparse ones.
Example 6 Consider a configuration where all of the probabilities are O(1) and

    (n_1, p_min, 1) , (n_min, p_2, 1) , (n_3, p_3, (n − n_1 − n_min)/n_3) , q = O(1)

where p_2 − q and p_3 − q are O(1) . On the other hand, we assume p_min − q = f(n) is small. For
recoverability by Theorem 3, we need f(n) ≳ (log n)/n_min and f²(n) ≳ (log n)/n_1 . Notice that,
since n ≳ n_1 ≳ n_min , we should have f(n) ≳ √((log n)/n) . For the convex program to recover this
configuration (by Theorem 1 or 2), we need n_min ≳ √n and f²(n) ≳ max{n/n_1², log n/n_min} ,
while all the probabilities are O(1) .
Note that if all the probabilities, as well as p_min − q , are O(1), then by Theorem 3 all communities
down to a logarithmic size should be recoverable. However, the success of convex recovery is
guaranteed by Theorems 1 and 2 when n_min ≳ √n .
For a similar configuration to Example 6, where the probabilities are not O(1) , recoverability by
Theorem 3 requires f(n) ≳ max{√(p_min (log n)/n) , n^{−c}} for some appropriate c > 0 .
4 Discussion
We have provided a series of extensions to prior works (especially [12, 1]) by considering exact
recovery for the stochastic block model in its full generality with a growing number of communities. By
capturing the tradeoffs among the various parameters of the SBM, we have identified interesting SBM
configurations that are efficiently recoverable via semidefinite programs. However, there are still
interesting problems that remain open. Sharp thresholds for recovery or approximation of the heterogenous SBM, models for partial observation (non-uniform, based on prior information, or adaptive as
in [28]), as well as overlapping communities (e.g., [1]) are important future directions. Moreover,
other estimators similar to the ones considered in this paper can be analyzed, e.g., when the unknown parameters in the maximum likelihood estimator, or λ in (2.4), are estimated from the given
observations.
References
[1] E. Abbe and C. Sandon. Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms. arXiv preprint arXiv:1503.00609, 2015.
[2] E. Abbe and C. Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic bp, and the information-computation gap. arXiv preprint arXiv:1512.09080, 2015.
[3] N. Ailon, Y. Chen, and H. Xu. Breaking the small cluster barrier of graph clustering. In ICML, pages 995–1003, 2013.
[4] A. S. Bandeira. An efficient algorithm for exact recovery of vertex variables from edge measurements, 2015.
[5] A. S. Bandeira and R. van Handel. Sharp nonasymptotic bounds on the norm of random matrices with independent entries. arXiv preprint arXiv:1408.6185, 2014.
[6] R. B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Foundations of Computer Science, 1987, 28th Annual Symposium on, pages 280–285. IEEE, 1987.
[7] T. T. Cai and X. Li. Robust and computationally feasible community detection in the presence of arbitrary outlier nodes. Ann. Statist., 43(3):1027–1059, 2015.
[8] S. Chatterjee. Matrix estimation by universal singular value thresholding. Ann. Statist., 43(1):177–214, 2015.
[9] J. Chen and B. Yuan. Detecting functional modules in the yeast protein–protein interaction network. Bioinformatics, 22(18):2283–2290, 2006.
[10] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. J. Mach. Learn. Res., 15:2213–2238, 2014.
[11] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems, pages 2204–2212, 2012.
[12] Y. Chen and J. Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. J. Mach. Learn. Res., 17(27):1–57, 2016.
[13] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combin. Probab. Comput., 19(2):227–284, 2010.
[14] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E, 84(6):066106, 2011.
[15] O. Guédon and R. Vershynin. Community detection in sparse networks via Grothendieck's inequality. Probability Theory and Related Fields, pages 1–25, 2015.
[16] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[17] D. Jiang, C. Tang, and A. Zhang. Cluster analysis for gene expression data: A survey. Knowledge and Data Engineering, IEEE Transactions on, 16(11):1370–1386, 2004.
[18] J. Leskovec, K. J. Lang, and M. Mahoney. Empirical comparison of algorithms for network community detection. In Proceedings of the 19th International Conference on World Wide Web, pages 631–640. ACM, 2010.
[19] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529–537. IEEE, 2001.
[20] M. Meila and J. Shi. A random walks view of spectral segmentation. 2001.
[21] A. Montanari and S. Sen. Semidefinite programs on sparse random graphs. arXiv preprint arXiv:1504.05910, 2015.
[22] E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 69–75. ACM, 2015.
[23] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.
[24] J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(8):888–905, 2000.
[25] D.-C. Tomozei and L. Massoulié. Distributed user profiling via spectral methods. Stoch. Syst., 4(1):1–43, 2014.
[26] V. Vu. A simple SVD algorithm for finding hidden partitions. arXiv preprint arXiv:1404.3918, 2014.
[27] J. Xu, R. Wu, K. Zhu, B. Hajek, R. Srikant, and L. Ying. Jointly clustering rows and columns of binary matrices: Algorithms and trade-offs. In The 2014 ACM International Conference on Measurement and Modeling of Computer Systems, pages 29–41. ACM, 2014.
[28] S.-Y. Yun and A. Proutiere. Community detection via random and adaptive sampling. In Proceedings of The 27th Conference on Learning Theory, pages 138–175, 2014.
PAC Reinforcement Learning with Rich Observations
Akshay Krishnamurthy
University of Massachusetts, Amherst
Amherst, MA, 01003
[email protected]
Alekh Agarwal
Microsoft Research
New York, NY 10011
[email protected]
John Langford
Microsoft Research
New York, NY 10011
[email protected]
Abstract
We propose and study a new model for reinforcement learning with rich observations, generalizing contextual bandits to sequential decision making. These
models require an agent to take actions based on observations (features) with the
goal of achieving long-term performance competitive with a large set of policies.
To avoid barriers to sample-efficient learning associated with large observation
spaces and general POMDPs, we focus on problems that can be summarized by a
small number of hidden states and have long-term rewards that are predictable by a
reactive function class. In this setting, we design and analyze a new reinforcement
learning algorithm, Least Squares Value Elimination by Exploration. We prove
that the algorithm learns near optimal behavior after a number of episodes that is
polynomial in all relevant parameters, logarithmic in the number of policies, and
independent of the size of the observation space. Our result provides theoretical
justification for reinforcement learning with function approximation.
1 Introduction
The Atari Reinforcement Learning research program [21] has highlighted a critical deficiency of
practical reinforcement learning algorithms in settings with rich observation spaces: they cannot effectively solve problems that require sophisticated exploration. How can we construct Reinforcement
Learning (RL) algorithms which effectively plan and plan to explore?
In RL theory, this is a solved problem for Markov Decision Processes (MDPs) [6, 13, 26]. Why do
these results not apply?
An easy response is, "because the hard games are not MDPs." This may be true for some of the hard
games, but it is misleading: popular algorithms like Q-learning with ε-greedy exploration do not
even engage in minimal planning and global exploration1 as is required to solve MDPs efficiently.
MDP-optimized global exploration has also been avoided because of a polynomial dependence on
the number of unique observations which is intractably large with observations from a visual sensor.
In contrast, supervised and contextual bandit learning algorithms have no dependence on the number
of observations and at most a logarithmic dependence on the size of the underlying policy set.
Approaches to RL with a weak dependence on these quantities exist [15] but suffer from an exponential
dependence on the time horizon: with K actions and a horizon of H, they require Ω(K^H) samples.
Examples show that this dependence is necessary, although they typically require a large number of
states. Can we find an RL algorithm with no dependence on the number of unique observations and a
polynomial dependence on the number of actions K, the number of necessary states M , the horizon
H, and the policy complexity log(|Π|)?
¹ We use "global exploration" to distinguish the sophisticated exploration strategies required to solve an MDP
efficiently from exponentially less efficient alternatives such as ε-greedy.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
To begin answering this question we consider a simplified setting with episodes of bounded length H
and deterministic state transitions. We further assume that we have a function class that contains the
optimal observation-action value function Q* . These simplifications make the problem significantly
more tractable without trivializing the core goal of designing a Poly(K, M, H, log(|Π|)) algorithm.
To this end, our contributions are:
1. A new class of models for studying reinforcement learning with rich observations. These
models generalize both contextual bandits and small-state MDPs, but do not exhibit the partial
observability issues of more complex models like POMDPs. We show exponential lower bounds
on sample complexity in the absence of the assumptions to justify our model.
2. A new reinforcement learning algorithm Least Squares Value Elimination by Exploration
(LSVEE), and a PAC guarantee that it finds a policy that is at most ε sub-optimal (with the
above assumptions) using Õ((M K² H⁶ / ε³) log(|F|)) samples, with no dependence on the number of
unique observations. This is done by combining ideas from contextual bandits with a novel state
equality test and a global exploration technique. Like initial contextual bandit approaches [1],
the algorithm is computationally inefficient since it requires enumeration of the policy class, an
aspect we hope to address in future work.
LSVEE uses a function class to approximate future rewards, and thus lends theoretical backing for
reinforcement learning with function approximation, which is the empirical state-of-the-art.
2 The Model
Our model is a Contextual Decision Process, a term we use broadly to refer to any sequential
decision making task where an agent must make decision on the basis of rich features (context) to
optimize long-term reward. In this section, we introduce the model, starting with basic notation. Let
H ∈ ℕ denote an episode length, X ⊆ ℝ^d an observation space, A a finite set of actions, and S a
finite set of latent states. Let K ≜ |A|. We partition S into H disjoint groups S_1 , . . . , S_H , each of
size at most M . For a set P , Δ(P) denotes the set of distributions over P .
2.1 Basic Definitions
Our model is defined by the tuple (Γ_1, Γ, D) where Γ_1 ∈ Δ(S_1) denotes a starting state distribution,
Γ : (S × A) → Δ(S) denotes the transition dynamics, and D_s ∈ Δ(X × [0, 1]^K) associates a
distribution over observation-reward pairs with each state s ∈ S. We also use D_s to denote the
marginal distribution over observations (usage will be clear from context) and use D_{s|x} for the
conditional distribution over reward given the observation x in state s. The marginal and conditional
probabilities are referred to as D_s(x) and D_{s|x}(r).
We assume that the process is layered (also known as loop-free or acyclic) so that for any s_h ∈ S_h
and action a ∈ A, Γ(s_h, a) ∈ Δ(S_{h+1}). Thus, the environment transitions from state space S_1 up to
S_H via a sequence of actions. Layered structure allows us to avoid indexing policies and Q-functions
with time, which enables concise notation.
Each episode produces a full record of interaction (s_1, x_1, a_1, r_1, . . . , s_H, x_H, a_H, r_H) where s_1 ~
Γ_1, s_h ~ Γ(s_{h−1}, a_{h−1}), (x_h, r_h) ~ D_{s_h}, and all actions a_h are chosen by the learning agent. The
record of interaction observed by the learner is (x_1, a_1, r_1(a_1), . . . , x_H, a_H, r_H(a_H)), and at time
point h, the learner may use all observable information up to and including x_h to select a_h. Notice
that all state information and rewards for alternative actions are unobserved by the learning agent.
The learner's reward for an episode is Σ_{h=1}^H r_h(a_h), and the goal is to maximize the expected
cumulative reward, R ≜ E[Σ_{h=1}^H r_h(a_h)], where the expectation accounts for all the randomness
in the model and the learner. We assume that almost surely Σ_{h=1}^H r_h(a_h) ∈ [0, 1] for any action
sequence.
In this model, the optimal expected reward achievable can be computed recursively as

    V* ≜ E_{s~Γ_1}[V*(s)]  with  V*(s) ≜ E_{x~D_s} max_a [ E_{r~D_{s|x}} r(a) + E_{s'~Γ(s,a)} V*(s') ] .   (1)
As the base case, we assume that for states s ∈ S_H, all actions transition to a terminal state s_{H+1}
with V*(s_{H+1}) ≜ 0. For each (s, x) pair such that D_s(x) > 0 we also define a Q* function as

    Q*_s(x, a) ≜ E_{r~D_{s|x}} r(a) + E_{s'~Γ(s,a)} V*(s') .         (2)
This function captures the optimal choice of action given this (state, observation) pair and therefore
encodes optimal behavior in the model.
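The recursion in Eqs. (1) and (2) is backward induction over the layers. As a concrete illustration, here is a minimal Python sketch for the tabular special case X = S (discussed in Section 2.3); the names `layers`, `trans`, and `reward` are hypothetical stand-ins, not the paper's notation:

```python
def optimal_values(layers, actions, trans, reward):
    # layers: list of lists of states, ordered S_1, ..., S_H
    # trans(s, a): dict mapping next-layer states to probabilities
    # reward(s, a): expected instantaneous reward E[r(a)] in state s
    V, Q = {}, {}
    for layer in reversed(layers):
        for s in layer:
            for a in actions:
                # states beyond S_H have value 0, matching V*(s_{H+1}) = 0
                future = sum(p * V.get(s2, 0.0) for s2, p in trans(s, a).items())
                Q[(s, a)] = reward(s, a) + future
            V[s] = max(Q[(s, a)] for a in actions)
    return V, Q
```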
With no further assumptions, the above model is a layered episodic Partially Observable Markov
Decision Process (LE-POMDP). Both learning and planning are notoriously challenging in POMDPs,
because the optimal policy depends on the entire trajectory and the complexity of learning such a
policy grows exponentially with H (see e.g. Kearns et al. [15] as well as Propositions 1 and 2 below).
Our model avoids this statistical barrier with two assumptions: (a) we consider only reactive policies,
and (b) we assume access to a class of functions that can realize the Q* function. Both assumptions
are implicit in the empirical state of the art RL results. They also eliminate issues related to partial
observability, allowing us to focus on our core goal of systematic exploration. We describe both
assumptions in detail before formally defining the model.
Reactive Policies: One approach taken by some prior theoretical work is to consider reactive (or
memoryless) policies that use only the current observation to select an action [4, 20]. Memorylessness
is slightly generalized in the recent empirical advances in RL, which typically employ policies that
depend only on the few most recent observations [21].
A reactive policy π : X → A is a strategy for navigating the search space by taking actions π(x)
given observation x. The expected reward for a policy is defined recursively through

    V(π) ≜ E_{s~Γ_1}[V(s, π)]  and  V(s, π) ≜ E_{(x,r)~D_s} [ r(π(x)) + E_{s'~Γ(s,π(x))} V(s', π) ] .
A natural learning goal is to identify a policy with maximal value V(π) from a given collection of
reactive policies Π. Unfortunately, even when restricting to reactive policies, learning in POMDPs
requires exponentially many samples, as we show in the next lower bound.
Proposition 1. Fix H, K ∈ ℕ with K ≥ 2 and ε ∈ (0, √(1/8)). For any algorithm, there exists a
LE-POMDP with horizon H, K actions, and 2H total states; a class Π of reactive policies with
|Π| = K^H; and a constant c > 0 such that the probability that the algorithm outputs a policy π̂ with
V(π̂) > max_{π∈Π} V(π) − ε after collecting T trajectories is at most 2/3 for all T ≤ cK^H/ε².
This lower bound precludes a Poly(K, M, H, log(|Π|)) sample complexity bound for learning reactive policies in general POMDPs, as log(|Π|) = H log(K) in the construction, but the number of
samples required is exponential in H. The lower bound instance provides essentially no instantaneous
feedback and therefore forces the agent to reason over K^H paths independently.
Predictability of Q*: The assumption underlying the empirical successes in RL is that the Q*
function can be well-approximated by some large set of functions F. To formalize this assumption, note that for some POMDPs, we may be able to write Q* as a function of the observed
history (x_1, a_1, r_1(a_1), . . . , x_h) at time h. For example, this is always true in deterministic-transition
POMDPs, since the sequence of previous actions encodes the state and Q* as in Eq. (2) depends only
on the state, the current observation, and the proposed action. In the realizable setting, we have access
to a collection of functions F mapping the observed history to [0, 1], and we assume that Q* ∈ F.
Proposition 2. Fix H, K ∈ ℕ with K ≥ 2 and ε ∈ (0, √(1/8)). For any algorithm, there exists
a LE-POMDP with time horizon H, K actions, and 2H total states; a class of predictors F with
|F| = K^H and Q* ∈ F; and a constant c > 0 such that the probability that the algorithm outputs a
policy π̂ with V(π̂) > V* − ε after collecting T trajectories is at most 2/3 for all T ≤ cK^H/ε².
As with Proposition 1, this lower bound precludes a Poly(K, M, H, log(|F|)) sample complexity
bound for learning POMDPs with realizability. The lower bound shows that even with realizability,
the agent may have to reason over K^H paths independently, since the functions can depend on the
entire history. Proofs of both lower bounds here are deferred to Appendix A.
Both lower bounds use POMDPs with deterministic transitions and an extremely small observation
space. Consequently, even learning in deterministic-transition POMDPs requires further assumptions.
2.2 Main Assumptions
As we have seen, neither restricting to reactive policies nor imposing realizability enables tractable
learning in POMDPs on its own. When combined, however, we will see that sample-efficient learning is
possible, and the combination of these two assumptions is precisely how we characterize our model.
Specifically, we study POMDPs for which Q* can be realized by a predictor that uses only the current
observation and proposed action.
Assumption 1 (Reactive Value Functions). We assume that for all x ∈ X, a ∈ A and any two states
s, s' such that D_s(x), D_{s'}(x) > 0, we have Q*_s(x, a) = Q*_{s'}(x, a).
The restriction on Q* implies that the optimal policy is reactive and also that the optimal predictor of
long-term reward depends only on the current observation. In the following section, we describe how
this condition relates to other RL models in the literature. We first present a natural example.
Example 1 (Disjoint observations). The simplest example is one where each state s can be identified
with a subset X_s ⊆ X with D_s(x) > 0 only for x ∈ X_s and where X_s ∩ X_{s'} = ∅ when s ≠ s'. A realized
observation then uniquely identifies the underlying state s, so that Assumption 1 trivially holds, but
this mapping from s to X_s is unknown to the agent. Thus, the problem cannot be easily reduced
to a small-state MDP. This setting is quite natural in several robotics and navigation tasks, where
the visual signals are rich enough to uniquely identify the agent's position (and hence state). It also
applies to video game playing, where the raw pixel intensities suffice to decode the game's memory
state, but learning this mapping is challenging.
Thinking of x as the state, the above example is an MDP with an infinite state space but with a structured
transition operator. While our model is more general, we are primarily motivated by these infinite-state MDPs, for which the reactivity assumptions are completely non-restrictive. For infinite-state
MDPs, our model describes a particular structure on the transition operator that we show enables
efficient learning. We emphasize that our focus is not on partial observability issues.
As we are interested in understanding function approximation, we make a realizability assumption.
Assumption 2 (Realizability). We are given access to a class of predictors F ⊆ (X × A → [0, 1])
of size |F| = N and assume that Q* = f* ∈ F. We identify each predictor f with a policy
π_f(x) ≜ argmax_a f(x, a). Observe that the optimal policy is π_{f*}, which satisfies V(π_{f*}) = V*.
Assumptions 1 and 2 exclude the lower bounds from Propositions 1 and 2. Our algorithm requires
one further assumption.
Assumption 3 (Deterministic Transitions). We assume that the transition model is deterministic.
This means that the starting distribution Γ_1 is a point-mass on some state s_1 and Γ : (S × A) → S.
Even with deterministic transitions, learning requires systematic global exploration that is unaddressed
in previous work. Recall that the lower bound constructions for Propositions 1 and 2 actually use
deterministic transition POMDPs. Therefore, deterministic transitions combined with either the
reactive or the realizability assumption by itself still precludes tractable learning. Nevertheless, we
hope to relax this final assumption in future work.
More broadly, this model provides a framework to reason about reinforcement learning with function
approximation. This is highly desirable as such approaches are the empirical state-of-the-art, but the
limited supporting theory provides little advice on systematic global exploration.
2.3 Connections to Other Models and Techniques
The above model is closely related to several well-studied models in the literature, namely:
Contextual Bandits: If H = 1, then our model reduces to stochastic contextual bandits [8, 16], a
well-studied simplification of the general reinforcement learning problem. The main difference is
that the choice of action does not influence the future observations (there is only one state), and
algorithms do not need to perform long-term planning to obtain low sample complexity.
Markov Decision Processes: If X = S and, for each state s, D_s is concentrated on s, then our
model reduces to small-state MDPs, which can be efficiently solved by tabular approaches [6, 13, 26].
The key differences in our setting are that the observation space X is extremely large or infinite
4
and the underlying state is unobserved, so tabular methods are not viable and algorithms need to
generalize across observations.
When the number of states is large, existing methods typically require exponentially many samples,
such as the O(K^H) result of Kearns et al. [15]. Others depend poorly on the complexity of the policy
set or scale linearly in the size of a covering over the state space [10, 12, 23]. Lastly, policy gradient
methods avoid dependence on size of the state space, but do not achieve global optimality [11, 27] in
theory and in practice, unlike our algorithm which is guaranteed to find the globally optimal policy.
POMDPs: By definition our model is a POMDP where the Q* function is consistent across states.
This restriction implies that the agent does not have to reason over belief states as is required in
POMDPs. There are some sample complexity guarantees for learning in arbitrarily complex POMDPs,
but the bounds we are aware of are quite weak as they scale linearly with |Π| [14, 19], or require
discrete observations from a small set [4].
State Abstraction: State abstraction (see [18] for a survey) focuses on understanding what optimality
properties are preserved in an MDP after the state space is compressed. While our model does have a
small number of underlying states, they do not necessarily admit non-trivial state abstractions that are
easy to discover (i.e. that do not amount to learning the optimal behavior) as the optimal behavior
can depend on the observation in an arbitrary manner. Furthermore, most sample complexity results
cannot search over large abstraction sets (see e.g. Jiang et al. [9]), limiting their scope.
Function Approximation: Our approach uses function approximation to address the generalization
problem implicit in our model. Function approximation is the empirical state-of-the-art in reinforcement learning [21], but theoretical analysis has been quite limited. Several authors have studied linear
or more general function approximation (See [5, 24, 28]), but none of these results give finite sample
bounds, as they do not address the exploration question. Li and Littman [17] do give finite sample
bounds, but they assume access to a ?Knows-what-it-knows" (KWIK) oracle, which cannot exist
even for simple problems. Other theoretical results either make stronger realizability assumptions
(c.f., [2]) or scale poorly with problem parameters (e.g., polynomial in the number of functions [22]
or the size of the observation space [23]).
3 The Result
We consider the task of Probably Approximately Correct (PAC) learning the models defined in
Section 2. Given F (Assumption 2), we say that an algorithm PAC learns our model if for any
ε, δ ∈ (0, 1), the algorithm outputs a policy π̂ satisfying V(π̂) ≥ V* − ε with probability at least
1 − δ. The sample complexity is a function n : (0, 1)² → ℕ such that for any ε, δ ∈ (0, 1), the
algorithm returns an ε-suboptimal policy with probability at least 1 − δ using at most n(ε, δ) episodes.
We refer to a Poly(M, K, H, 1/ε, log N, log(1/δ)) sample complexity bound as polynomial in all
relevant parameters. Notably, there should be no dependence on |X|, which may be infinite.
3.1 The Algorithm
Before turning to the algorithm, it is worth clarifying some additional notation. Since we are focused
on the deterministic-transition setting, it is natural to think about the environment as an exponentially
large search tree with fan-out K and depth H. Each node in the search tree is labeled with an
(unobserved) state s ∈ S, and each edge is labeled with an action a ∈ A, consistent with the
transition model. A path p ∈ A* is a sequence of actions from the root of the search tree, and we also
use p to denote the state reached after executing the path p from the root. Thus, D_p is the observation
distribution of the state at the end of the path p. We use p ∘ a to denote the path formed by executing
all actions in p and then executing action a, and we use |p| to denote the length of the path. Let ∅
denote the empty path, which corresponds to the root of the search tree.
The pseudocode for the algorithm, which we call Least Squares Value Elimination by Exploration
(LSVEE), is displayed in Algorithm 1 (See also Appendix B). LSVEE has two main components: a
depth-first-search routine with a learning step (step 6 in Algorithm 2) and an on-demand exploration
technique (steps 5-8 in Algorithm 1). The high-level idea of the algorithm is to eliminate regression
functions that do not meet Bellman-like consistency properties of the Q? function. We now describe
both components and their properties in detail.
Algorithm 1 Least Squares Value Elimination by Exploration: LSVEE(F, ε, δ)
1: F ← DFS-Learn(∅, F, ε, δ/2).
2: Choose any f ∈ F. Let V̂* be a Monte-Carlo estimate of V̂^f(∅, π_f). (See Eq. (3).)
3: Set ε_demand = ε/2, n_1 = (32/ε²) log(12MH/δ), and n_2 = (8/ε) log(6MH/δ).
4: while true do
5:    Fix a regressor f ∈ F.
6:    Collect n_1 trajectories according to π_f and estimate V(π_f) via the Monte-Carlo estimate V̂(π_f).
7:    If |V̂(π_f) − V̂*| ≤ ε_demand, return π_f.
8:    Otherwise update F by calling DFS-Learn(p, F, ε, δ/(6MH²n_2)) on each of the H − 1 prefixes p of each of the first n_2 paths collected in step 6.
9: end while
Algorithm 2 DFS-Learn(p, F, ε, δ)
1: Set φ = ε/(320 H² √K) and ε_test = 20 (H − |p| − 5/4) φ √K.
2: for a ∈ A, if not Consensus(p ∘ a, F, ε_test, φ, δ/(2MKH)) do
3:    F ← DFS-Learn(p ∘ a, F, ε, δ).
4: end for
5: Collect n_train = (24/φ²) log(8MHN/δ) observations (x_i, a_i, r_i) where (x_i, r'_i) ~ D_p, a_i is chosen uniformly at random, and r_i = r'_i(a_i).
6: Return {f ∈ F : R̂(f) ≤ min_{f'∈F} R̂(f') + 2φ² + (22/n_train) log(4MHN/δ)}, with R̂(f) defined in Eq. (4).
The DFS routine: When the DFS routine, displayed in Algorithm 2, is run at some path p, we first
decide whether to recursively expand the descendants p ∘ a by performing a consensus test. Given a
path p', this test, displayed in Algorithm 3, computes estimates of the value predictions,

    V^f(p', π_f) ≜ E_{x~D_{p'}} f(x, π_f(x)),                        (3)

for all the surviving regressors. These value predictions are easily estimated by collecting many
observations after rolling in to p' and using empirical averages (see line 2 in Algorithm 3). If all the
functions agree on this value for p', the DFS need not visit this path.
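For concreteness, here is a minimal sketch (ours) of this Monte-Carlo estimate; `sample_obs` is a hypothetical stand-in for the environment interaction that rolls in to p and returns an observation drawn from D_p:

```python
import numpy as np

def estimate_value(f, policy, sample_obs, p, n_test):
    # Estimate V^f(p, pi_f) = E_{x ~ D_p} f(x, pi_f(x)) by an empirical average.
    vals = []
    for _ in range(n_test):
        x = sample_obs(p)              # execute path p, observe x ~ D_p
        vals.append(f(x, policy(x)))   # f's prediction at its own greedy action
    return float(np.mean(vals))
```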
After the recursive calls, the DFS routine performs the elimination step (line 6). When this step is
invoked at path p, the algorithm collects n_train observations (x_i, a_i, r_i), where (x_i, r'_i) ~ D_p, a_i is
chosen uniformly at random, and r_i = r'_i(a_i), and eliminates regressors that have high empirical risk,

    R̂(f) ≜ (1/n_train) Σ_{i=1}^{n_train} ( f(x_i, a_i) − r_i − V̂^f(p ∘ a_i, π_f) )² .   (4)
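The elimination step can be sketched as follows. This is an illustrative implementation, not the authors' code; `future_value[f]` stands in for the cached Monte-Carlo estimates V̂^f(p ∘ a, π_f), and `data` holds the (x_i, a_i, r_i) triples collected at p:

```python
def eliminate(F, data, future_value, tol):
    # Keep the regressors whose empirical risk (4) is within tol of the minimum.
    def risk(f):
        return sum((f(x, a) - r - future_value[f](a)) ** 2
                   for (x, a, r) in data) / len(data)
    risks = {f: risk(f) for f in F}
    best = min(risks.values())
    return [f for f in F if risks[f] <= best + tol]
```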
Intuition for DFS: This regression problem is motivated by the realizability assumption and the
definition of Q* in Eq. (2), which imply that at path p and for all actions a,

    f*(x, a) = E_{r~D_{p|x}} r(a) + V(p ∘ a, π_{f*}) = E_{r~D_{p|x}} r(a) + E_{x'~D_{p∘a}} f*(x', π_{f*}(x')).   (5)

Thus f* is consistent between its estimate at the current state s and the future state s' = Γ(s, a).
The regression problem (4) is essentially a finite-sample version of this identity. However, some
care must be taken, as the target for the regression function f includes V^f(p ∘ a, π_f), which is f's
value prediction for the future. The fact that the target differs across functions can cause instability in
the regression problem, as some targets may have substantially lower variance than f*'s. To ensure
correct behavior, we must obtain high-quality future value prediction estimates, and so, we re-use
the Monte-Carlo estimates V̂^f(p ∘ a, π_f) in Eq. (3) from the consensus tests. Each time we perform
elimination, the regression targets are close for all considered f in Equation (4) owing to consensus
being satisfied at the successor nodes in Step 2 of Algorithm 2.
Given consensus at all the descendants, each elimination step inductively propagates learning towards
the start state by ensuring the following desirable properties hold: (i) f* is not eliminated, (ii)
Algorithm 3 Consensus(p, F, ε_test, φ, δ)
1: Set n_test = (2/φ²) log(2N/δ). Collect n_test observations x_i ~ D_p.
2: Compute, for each function, V̂^f(p, π_f) = (1/n_test) Σ_{i=1}^{n_test} f(x_i, π_f(x_i)).
3: Return 1[ |V̂^f(p, π_f) − V̂^g(p, π_g)| ≤ ε_test for all f, g ∈ F ].
consensus is reached at p, and (iii) surviving policies choose good actions at p. Property (ii) controls
the sample complexity, since consensus tests at state s return true once elimination has been invoked
on s, so the DFS avoids exploring the entire search space. Property (iii) leads to the PAC bound; if we
have run the elimination step on all states visited by a policy, that policy must be near-optimal.
To bound the sample complexity of the DFS routine: since there are M states per level and the
consensus test returns true once elimination has been performed, we know that the DFS does not visit
a large fraction of the search tree. Specifically, this means DFS is invoked on at most MH nodes
in total, so we run elimination at most MH times, and we perform at most MKH consensus tests.
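A minimal sketch (ours) of the consensus test follows; as in Algorithm 3, the same n_test observations are shared by all surviving regressors, and `sample_obs` is the same hypothetical roll-in interface as above:

```python
import numpy as np

def consensus(F, policies, sample_obs, p, n_test, eps_test):
    # Returns True when all value predictions at p agree up to eps_test.
    xs = [sample_obs(p) for _ in range(n_test)]               # x_i ~ D_p
    V_hat = {f: float(np.mean([f(x, policies[f](x)) for x in xs])) for f in F}
    vals = list(V_hat.values())
    return (max(vals) - min(vals)) <= eps_test, V_hat
```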
Each of these operations requires polynomially many samples.
The elimination step is inspired by the RegressorElimination algorithm of Agarwal et al. [1] for
contextual bandit learning in the realizable setting. In addition to forming a different regression
problem, RegressorElimination carefully chooses actions to balance exploration and exploitation,
which leads to an optimal regret bound. In contrast, we are pursuing a PAC guarantee here, for which
it suffices to focus exclusively on exploration.
On-demand Exploration: While DFS is guaranteed to estimate the optimal value V*, it unfortunately does not identify the optimal policy. For example, if consensus is satisfied at a state s
without invoking the elimination step, then each function accurately predicts the value V*(s), but
the associated policies are not guaranteed to achieve this value. To overcome this issue, we use an
on-demand exploration technique in the second phase of the algorithm (Algorithm 1, steps 5-8).
At each iteration of this phase, we select a policy π_f and estimate its value via Monte-Carlo sampling.
If the policy has sub-optimal value, we invoke the DFS procedure on many of the paths visited. If the
policy has near-optimal value, we have found a good policy, so we are done. This procedure requires
an accurate estimate of the optimal value, which we already obtained by invoking the DFS routine at
the root, since it guarantees that all surviving regressors agree with f*'s value on the starting state
distribution. f*'s value is precisely the optimal value.
Intuition for On-demand Exploration: Running the elimination step at some path p ensures that
all surviving regressors take good actions at p, in the sense that taking one action according to any
surviving policy and then behaving optimally thereafter achieves near-optimal reward for path p.
This does not ensure that all surviving policies achieve near-optimal reward, because they may take
highly sub-optimal actions after the first one. On the other hand, if a surviving policy π_f visits only
states for which the elimination step has been invoked, then it must have near-optimal reward. More
precisely, letting L denote the set of states for which the elimination step has been invoked (the
"learned" states), we prove that any surviving π_f satisfies

    V* − V(π_f) ≤ ε/8 + P[π_f visits a state s ∉ L].

Thus, if π_f is highly sub-optimal, it must visit some unlearned state with substantial probability. By
calling DFS-Learn on the paths visited by π_f, we ensure that the elimination step is run on at least
one unlearned state. Since there are only MH distinct states and each non-terminal iteration ensures
training on an unlearned state, the algorithm must terminate and output a near-optimal policy.
Computationally, the running time of the algorithm may be O(N ), since eliminating regression
functions according to Eq. (4) may require enumerating over the class and the consensus function
requires computing the maximum and minimum of N numbers, one for each function. This may
be intractably slow for rich function classes, but our focus is on statistical efficiency, so we ignore
computational issues here.
3.2
The PAC Guarantee
Our main result certifies that LSVEE PAC-learns our models with polynomial sample complexity.
7
Theorem 1 (PAC bound). For any (?, ) 2 (0, 1) and under Assumptions 1, 2, and 3, with probability
at least 1
, the policy ? returned by LSVEE is at most ?-suboptimal. Moreover, the number of
episodes required is at most
?
?
M H 6K 2
?
O
log(N/ ) log(1/ ) .
?3
? notation to suppress logarithmic dependence in all parameters except for N
This result uses the O
and . The precise dependence on all parameters can be recovered by examination of our proof and is
shortened here simply for clarity. See Appendix C for the full proof of the result.
This theorem states that LSVEE produces a policy that is at most ?-suboptimal using a number of
episodes that is polynomial in all relevant parameters. To our knowledge, this is the first polynomial
sample complexity bound for reinforcement learning with infinite observation spaces, without
prohibitively strong assumptions (e.g., [2, 22, 23]). We also believe this is the first finite-sample
guarantee for reinforcement learning with general function approximation without prohibitively
strong assumptions (e.g., [2]).
Since our model generalizes both contextual bandits and MDPs, it is worth comparing the sample
complexity bounds.
1. In contextual bandits, we have M = H = 1 so that the sample complexity of LSVEE is
? K32 log(N/ ) log(1/ )), in contrast with known O(
? K2 log(N/ )) results.
O(
?
?
2. Prior results establish the sample complexity for learning layered episodic MDPs with determinis? M Kpoly(H)
tic transitions is O(
log(1/ )) [7, 25].
?2
Both comparisons show our sample complexity bound may be suboptimal in its dependence on K
and ?. Looking into our proof, the additional factor of K comes from collecting observations to
estimate the value of future states, while the additional 1/? factor arises from trying to identify a
previously unexplored state. In contextual bandits, these issues do not arise since there is only one
state, while, in tabular MDPs, they can be trivially resolved as the states are observed. Thus, with
minor modifications, LSVEE can avoid these dependencies for both special cases. In addition, our
bound disagrees with the MDP results in the dependence on the policy complexity log(N ); which we
believe is unavoidable when working with rich observation spaces.
Finally, our bound depends on the number of states M in the worst case, but the algorithm actually
uses a more refined notion. Since the states are unobserved, the algorithm considers two states distinct
only if they have reasonably different value functions, meaning learning on one does not lead to
consensus on the other. Thus, a more distribution-dependent analysis defining states through the
function class is a promising avenue for future work.
4
Discussion
This paper introduces a new model in which it is possible to design and analyze principled reinforcement learning algorithms engaging in global exploration. As a first step, we develop a new
algorithm and show that it learns near-optimal behavior under a deterministic-transition assumption
with polynomial sample complexity. This represents a significant advance in our understanding of
reinforcement learning with rich observations. However, there are major open questions:
1. Do polynomial sample bounds for this model with stochastic transitions exist?
2. Can we design an algorithm for learning this model that is both computationally and statistically
efficient? The sample complexity of our algorithm is logarithmic in the size of the function class
F but uses an intractably slow enumeration of these functions.
Good answers to both of these questions may yield new practical reinforcement learning algorithms.
Acknowledgements
We thank Akshay Balsubramani and Hal Daum? III for formative discussions, and we thank Tzu-Kuo
Huang and Nan Jiang for carefully reading an early draft of this paper. This work was carried out
while AK was at Microsoft Research.
8
References
[1] A. Agarwal, M. Dud?k, S. Kale, J. Langford, and R. E. Schapire. Contextual bandit learning with predictable
rewards. In AISTATS, 2012.
[2] A. Antos, C. Szepesv?ri, and R. Munos. Learning near-optimal policies with Bellman-residual minimization
based fitted policy iteration and a single sample path. MLJ, 2008.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem.
SICOMP, 2002.
[4] K. Azizzadenesheli, A. Lazaric, and A. Anandkumar. Reinforcement learning of POMDPs using spectral
methods. In COLT, 2016.
[5] L. Baird. Residual algorithms: Reinforcement learning with function approximation. In ICML, 1995.
[6] R. I. Brafman and M. Tennenholtz. R-max ? a general polynomial time algorithm for near-optimal
reinforcement learning. JMLR, 2003.
[7] C. Dann and E. Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In NIPS,
2015.
[8] M. Dudik, D. Hsu, S. Kale, N. Karampatziakis, J. Langford, L. Reyzin, and T. Zhang. Efficient optimal
learning for contextual bandits. In UAI, 2011.
[9] N. Jiang, A. Kulesza, and S. Singh. Abstraction selection in model-based reinforcement learning. In ICML,
2015.
[10] N. K. Jong and P. Stone. Model-based exploration in continuous state spaces. In Abstraction, Reformulation,
and Approximation, 2007.
[11] S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. In ICML, 2002.
[12] S. Kakade, M. J. Kearns, and J. Langford. Exploration in metric state spaces. In ICML, 2003.
[13] M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. MLJ, 2002.
[14] M. J. Kearns, Y. Mansour, and A. Y. Ng. Approximate planning in large POMDPs via reusable trajectories.
In NIPS, 1999.
[15] M. J. Kearns, Y. Mansour, and A. Y. Ng. A sparse sampling algorithm for near-optimal planning in large
markov decision processes. MLJ, 2002.
[16] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In
NIPS, 2008.
[17] L. Li and M. L. Littman. Reducing reinforcement learning to KWIK online regression. Ann. Math AI,
2010.
[18] L. Li, T. J. Walsh, and M. L. Littman. Towards a unified theory of state abstraction for MDPs. In ISAIM,
2006.
[19] Y. Mansour. Reinforcement learning and mistake bounded algorithms. In COLT, 1999.
[20] N. Meuleau, L. Peshkin, K.-E. Kim, and L. P. Kaelbling. Learning finite-state controllers for partially
observable environments. In UAI, 1999.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller,
A. K. Fidjeland, G. Ostrovski, S. Petersen, B. Charles, S. Amir, I. Antonoglou, H. King, D. Kumaran,
D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature,
2015.
[22] P. Nguyen, O.-A. Maillard, D. Ryabko, and R. Ortner. Competing with an infinite set of models in
reinforcement learning. In AISTATS, 2013.
[23] J. Pazis and R. Parr. Efficient PAC-optimal exploration in concurrent, continuous state MDPs with delayed
updates. In AAAI, 2016.
[24] T. J. Perkins and D. Precup. A convergent form of approximate policy iteration. In NIPS, 2002.
[25] S. Reveliotis and T. Bountourelis. Efficient PAC learning for episodic tasks with acyclic state spaces.
DEDS, 2007.
[26] A. L. Strehl, L. Li, E. Wiewiora, J. Langford, and M. L. Littman. PAC model-free reinforcement learning.
In ICML, 2006.
[27] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement
learning with function approximation. In NIPS, 1999.
[28] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation.
IEEE TAC, 1997.
9
| 6575 |@word exploitation:1 version:1 eliminating:1 achievable:1 polynomial:12 stronger:1 open:1 p0:4 invoking:2 concise:1 recursively:3 initial:1 contains:1 uma:1 exclusively:1 prefix:1 existing:1 current:5 com:2 contextual:14 recovered:1 comparing:1 must:7 john:1 realize:1 partition:1 wiewiora:1 enables:2 update:2 greedy:3 ntrain:4 amir:1 core:2 meuleau:1 record:2 provides:4 draft:1 node:3 math:1 zhang:2 wierstra:1 viable:1 descendant:2 prove:2 manner:1 introduce:1 x0:2 notably:1 expected:3 behavior:6 planning:5 nor:1 multi:1 terminal:2 bellman:2 inspired:1 globally:1 little:1 enumeration:2 es0:3 armed:1 spain:1 begin:1 underlying:5 bounded:2 notation:4 suffice:1 mass:1 moreover:1 what:2 tic:1 atari:1 substantially:1 unified:1 unobserved:4 guarantee:6 temporal:1 unexplored:1 collecting:4 prohibitively:2 k2:1 control:2 before:2 mistake:1 sutton:1 shortened:1 ak:1 jiang:3 meet:1 path:19 approximately:2 studied:3 collect:4 challenging:2 limited:2 walsh:1 statistically:1 practical:2 unique:3 practice:1 recursive:1 regret:1 differs:1 procedure:2 episodic:4 riedmiller:1 empirical:8 significantly:1 argmaxa:1 petersen:1 cannot:4 close:1 layered:4 operator:2 selection:1 context:2 influence:1 risk:1 instability:1 bellemare:1 optimize:1 restriction:2 deterministic:11 kale:2 starting:4 independently:2 sicomp:1 pomdp:4 survey:1 focused:1 notion:1 krishnamurthy:1 justification:1 limiting:1 construction:2 target:4 engage:1 decode:1 us:6 designing:1 engaging:1 associate:1 roy:1 approximated:1 satisfying:1 predicts:1 labeled:2 observed:4 solved:2 capture:1 worst:1 ensures:2 alekha:1 ryabko:1 episode:8 substantial:1 intuition:2 environment:3 predictable:2 complexity:23 unlearned:3 reward:15 littman:4 inductively:1 principled:1 dynamic:1 depend:4 singh:3 efficiency:1 learner:4 basis:1 completely:1 easily:2 resolved:1 train:1 distinct:2 describe:3 monte:4 refined:1 quite:3 solve:3 say:1 relax:1 otherwise:1 precludes:3 compressed:1 think:1 highlighted:1 reactivity:1 itself:1 final:1 online:1 sequence:4 propose:1 interaction:2 maximal:1 relevant:3 loop:1 combining:1 reyzin:1 poorly:2 achieve:3 kh:2 empty:1 r1:3 dsh:1 produce:2 silver:1 executing:3 develop:1 minor:1 eq:6 strong:2 c:1 implies:2 come:1 closely:1 correct:2 dfs:17 owing:1 stochastic:2 exploration:24 human:1 enable:1 successor:1 mcallester:1 elimination:17 require:8 fix:3 generalization:1 suffices:1 proposition:6 exploring:1 hold:2 considered:1 mapping:3 scope:1 parr:1 major:1 achieves:1 early:1 visited:3 ex0:1 concurrent:1 hope:2 minimization:1 sensor:1 always:1 ck:2 avoid:4 rusu:1 focus:6 legg:1 karampatziakis:1 contrast:3 kim:1 realizable:2 sense:1 abstraction:7 dependent:1 typically:3 entire:3 eliminate:2 hidden:1 bandit:15 expand:1 interested:1 backing:1 pixel:1 issue:6 colt:2 plan:2 art:4 special:1 marginal:2 construct:1 aware:1 once:2 ng:2 eliminated:1 sampling:2 veness:1 represents:1 icml:5 minf:1 thinking:1 future:9 tabular:3 others:1 employ:1 few:1 primarily:1 ortner:1 delayed:1 phase:2 microsoft:5 n1:2 ostrovski:1 highly:3 mnih:1 deferred:1 introduces:1 navigation:1 sh:12 antos:1 accurate:1 tuple:1 edge:1 partial:3 necessary:2 tree:5 kpoly:1 re:1 theoretical:5 minimal:1 fitted:1 instance:1 kaelbling:1 subset:1 rolling:1 predictor:5 characterize:1 optimally:1 dependency:1 answer:1 dp0:1 combined:2 chooses:1 amherst:2 systematic:3 invoke:1 regressor:1 precup:1 earn:5 aaai:1 tzu:1 satisfied:2 unavoidable:1 choose:2 hn:2 huang:1 cesa:1 isaim:1 admit:1 inefficient:1 return:6 li:4 account:1 exclude:1 summarized:1 includes:1 baird:1 dann:1 
depends:4 performed:1 root:4 analyze:2 reached:2 competitive:1 start:1 contribution:1 square:4 formed:1 variance:1 efficiently:3 yield:1 identify:5 generalize:2 weak:2 raw:1 kavukcuoglu:1 accurately:1 none:1 carlo:4 trajectory:5 pomdps:19 notoriously:1 worth:2 randomness:1 ah:9 history:3 definition:3 associated:2 proof:4 hsu:1 massachusetts:1 popular:1 recall:1 knowledge:1 maillard:1 formalize:1 routine:6 sophisticated:2 actually:2 carefully:2 mlj:3 auer:1 supervised:1 response:1 done:2 furthermore:1 implicit:2 lastly:1 langford:7 d:13 hand:1 working:1 bountourelis:1 quality:1 mdp:6 grows:1 believe:2 usage:1 hal:1 xs0:1 true:5 equality:1 hence:1 dud:1 memoryless:1 game:4 uniquely:2 covering:1 pazis:1 generalized:1 trying:1 stone:1 performs:1 meaning:1 instantaneous:1 novel:1 invoked:5 fi:1 charles:1 pseudocode:1 rl:8 exponentially:6 refer:2 significant:1 multiarmed:1 imposing:1 ai:9 tac:1 rd:1 trivially:2 consistency:1 access:4 alekh:1 behaving:1 base:1 kwik:2 own:1 recent:2 success:1 arbitrarily:1 seen:1 minimum:1 additional:3 care:1 dudik:1 surely:1 maximize:1 signal:1 ii:2 relates:1 full:2 desirable:2 reduces:2 long:5 visit:5 a1:5 ensuring:1 prediction:4 basic:2 regression:9 controller:1 essentially:2 expectation:1 metric:1 iteration:4 agarwal:3 robotics:1 preserved:1 addition:2 szepesv:1 jcl:1 eliminates:1 unlike:1 probably:1 unaddressed:1 call:2 surviving:8 anandkumar:1 near:12 iii:3 easy:2 enough:1 nonstochastic:1 identified:1 suboptimal:4 competing:1 observability:3 idea:2 avenue:1 enumerating:1 whether:1 motivated:2 peshkin:1 suffer:1 returned:1 york:2 cause:1 action:30 deep:1 clear:1 amount:1 ph:3 concentrated:1 simplest:1 reduced:1 schapire:2 exist:3 notice:1 estimated:1 disjoint:2 per:1 lazaric:1 broadly:2 write:1 discrete:1 group:1 key:1 thereafter:1 reformulation:1 nevertheless:1 reusable:1 achieving:1 clarity:1 neither:1 fraction:1 run:4 almost:1 decide:1 pursuing:1 discover:1 decision:8 appendix:3 bound:26 guaranteed:3 distinguish:1 simplification:2 nan:1 fan:1 convergent:1 oracle:1 precisely:3 deficiency:1 perkins:1 ri:7 encodes:2 calling:2 aspect:1 extremely:2 optimality:2 performing:1 structured:1 ri0:3 according:3 combination:1 describes:1 slightly:1 across:3 kakade:2 making:2 s1:6 modification:1 indexing:1 taken:2 computationally:3 equation:1 agree:2 previously:1 know:3 letting:1 tractable:3 antonoglou:1 end:4 certifies:1 studying:1 generalizes:1 operation:1 apply:1 observe:1 balsubramani:1 spectral:1 alternative:2 hassabis:1 k32:1 denotes:3 running:2 ensure:3 daum:1 restrictive:1 establish:1 question:4 quantity:1 realized:2 infinitestate:1 strategy:2 already:1 dependence:15 exhibit:1 navigating:1 lends:1 gradient:2 dp:7 thank:2 fidjeland:1 clarifying:1 collected:1 consensus:11 trivial:1 reason:4 considers:1 length:3 balance:1 unfortunately:3 suppress:1 design:3 policy:51 unknown:1 perform:3 allowing:1 bianchi:1 observation:42 kumaran:1 markov:4 finite:7 displayed:3 supporting:1 defining:2 looking:1 precise:1 mansour:4 arbitrary:1 intensity:1 pair:3 required:5 namely:1 optimized:1 connection:1 ds0:1 learned:1 barcelona:1 nip:6 address:3 able:1 tennenholtz:1 below:1 kulesza:1 reading:1 program:1 including:1 max:3 video:1 memory:1 belief:1 critical:1 natural:4 force:1 examination:1 turning:1 residual:2 misleading:1 mdps:12 imply:1 identifies:1 realizability:9 carried:1 prior:2 literature:2 understanding:3 disagrees:1 acknowledgement:1 epoch:1 graf:1 freund:1 acyclic:2 agent:9 consistent:3 s0:7 propagates:1 playing:1 strehl:1 brafman:1 free:2 intractably:3 tsitsiklis:1 
side:1 taking:2 barrier:2 akshay:3 munos:1 sparse:1 van:1 feedback:1 depth:2 overcome:1 transition:19 cumulative:1 rich:9 avoids:2 computes:1 author:1 collection:2 reinforcement:31 regressors:4 avoided:1 simplified:1 nguyen:1 polynomially:1 approximate:4 observable:3 emphasize:1 ignore:1 global:8 uai:2 xi:8 search:9 latent:1 continuous:2 why:1 promising:1 terminate:1 reasonably:1 nature:1 poly:4 complex:2 necessarily:1 aistats:2 main:4 linearly:2 rh:6 arise:1 ded:1 n2:3 x1:3 advice:1 referred:1 ny:2 predictability:1 slow:2 sub:4 position:1 brunskill:1 xh:5 exponential:3 answering:1 jmlr:1 learns:4 theorem:2 pac:12 er:4 x:4 exists:2 restricting:2 sequential:2 effectively:2 horizon:6 demand:6 generalizing:1 logarithmic:4 simply:1 explore:1 forming:1 visual:2 reveliotis:1 partially:2 applies:1 corresponds:1 satisfies:2 ma:1 conditional:2 goal:5 identity:1 king:1 consequently:1 ann:1 towards:2 absence:1 formative:1 hard:2 specifically:2 infinite:6 uniformly:2 except:1 justify:1 reducing:1 kearns:6 total:3 kuo:1 ntest:2 e:2 jong:1 select:3 formally:1 arises:1 reactive:13 ex:2 |
6,164 | 6,576 | Algorithms and matching lower bounds for
approximately-convex optimization
Yuanzhi Li
Department of Computer Science
Princeton University
Princeton, NJ, 08450
[email protected]
Andrej Risteski
Department of Computer Science
Princeton University
Princeton, NJ, 08450
[email protected]
Abstract
In recent years, a rapidly increasing number of applications in practice requires
optimizing non-convex objectives, like training neural networks, learning graphical
models, maximum likelihood estimation. Though simple heuristics such as gradient
descent with very few modifications tend to work well, theoretical understanding
is very weak.
We consider possibly the most natural class of non-convex functions where one
could hope to obtain provable guarantees: functions that are ?approximately convex?, i.e. functions f? : Rd ? R for which there exists a convex function f such
that for all x, |f?(x) ? f (x)| ? ? for a fixed value ?. We then want to minimize
f?, i.e. output a point x
? such that f?(?
x) ? minx f?(x) + .
It is quite natural to conjecture that for fixed , the problem gets harder for larger
?, however, the exact dependency of and ? is not known. In this paper, we
significantly improve the known lower bound on ? as a function of and an
algorithm matching this lower bound for a natural class of convex bodies. More
precisely, we identify a function T : R+ ? R+ such that when ? = O(T ()),
we can give an algorithm
? such that f?(?
x) ? minx f?(x) +
that outputs a point x
1
within time poly d, . On the other hand, when ? = ?(T ()), we also prove an
information theoretic lower bound that any algorithm that outputs such a x
? must
use super polynomial number of evaluations of f?.
1
Introduction
Optimization of convex functions over a convex domain is a well studied problem in machine
learning, where a variety of algorithms exist to solve the problem efficiently. However, in recent years,
practitioners face ever more often non-convex objectives ? e.g. training neural networks, learning
graphical models, clustering data, maximum likelihood estimation etc. Albeit simple heuristics such
as gradient descent with few modifications usually work very well, theoretical understanding in these
settings are still largely open.
The most natural class of non-convex functions where one could hope to obtain provable guarantees
is functions that are ?approximately convex?: functions f? : Rd ? R for which there exists a convex
function f such that for all x, |f?(x) ? f (x)| ? ? for a fixed value ?. In this paper, we focus on zero
order optimization of f?: an algorithm that outputs a point x
? such that f?(?
x) ? minx f?(x) + , where
the algorithm in the course of its execution is allowed to pick points x ? Rd and query the value of
f?(x).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Trivially, one can solve the problem by constructing a -net and search through all the net points.
d
However, such an algorithm requires ? 1 evaluations of f?, which is highly inefficient in high
dimension. In this paper, we are interested in efficient algorithms: algorithms that run in time
poly d, 1 (in particular, this implies the algorithm makes poly d, 1 evaluations of f?).
One extreme case of the problem is ? = 0, which is just standard convex optimization, where
algorithms exist to solve it in polynomial time for every > 0. However, even when ? is any
quantity > 0, none of these algorithms extend without modification. (Indeed, we are not imposing
any structure on f? ? f like stochasticity.) Of course, when ? = +?, the problem includes any
non-convex optimization, where we cannot hope for an efficient solution for any finite . Therefore,
the crucial quantity to study is the optimal tradeoff of and ?: For which , ? the problem can be
solved in polynomial time, and for which it can not.
In this paper, we study the rate of ? as a function of : We identify a function T : R+ ? R+ such that
when ? = O(T ()), we
? such that f?(?
x) ? minx f?(x) +
can give an algorithm that outputs a point x
1
within time poly d, over a natural class of well-conditioned convex bodies. On the other hand,
? ())1 , we also prove an information theoretic lower bound that any algorithm outputs
when ? = ?(T
such x
? must use super polynomial number of evaluations of f?. Our result can be summarized as the
following two theorems:
Theorem (Algorithmic upper bound, informal). There exists an algorithm A that for any function f?
over a well-conditioned convex set in Rd of diameter 1 which is ? close to an 1-Lipschitz convex
function 2 f , and
2
? ,
? = O max
d d
A finds a point x
? such that f?(?
x) ? minx f?(x) + within time poly d, 1
The notion of well-conditioning will formally be defined in section 3, but intuitively captures the
notion that the convex body ?curves? in all directions to a good extent.
Theorem (Information theoretic lower bound, informal). For every algorithm A, every d, ?, with
2
? max ?
?=?
,
d d
there exists a function f? on a convex set in Rd of diameter 1, and f? is ? close to an 1-Lipschitz
convex function f , such that A can not find a point x
? with f?(?
x) ? minx f?(x) + in poly d, 1
evaluations of f?.
2
Prior work
To the best of our knowledge, there are three works on the problem of approximately convex
optimization, which we summarize briefly below.
On the algorithmic side, the classical paper by [DKS14] considered optimizing smooth convex
functions over convex bodies with smooth boundaries. More precisely, they assume a bound on both
the gradient and the Hessian of F . Furthermore, they assume that for every small ball centered at a
point in the body, a large proportion of the volume of the ball lies in the body. Their algorithm is local
search: they show that for a sufficiently small r, in a ball of radius r there is with high probability a
point which has a smaller value than the current one, as long as the current value is sufficiently larger
than the optimum. For constant-smooth functions only, their algorithm applies when ? = O( ?d ).
Also on the algorithmic side, the work by [BLNR15] considers 1-Lipschitz functions, but their
algorithm only applies to the case where ? = O( d ) (so not optimal unless = O( ?1d )). Their
methods rely on sampling log-concave distribution via hit and run walks. The crucial idea is to show
that for approximately convex functions, one needs to sample from ?approximately log-concave?
? notation hides polylog(d/) factors.
The ?
The assumptions on the diameter of K and the Lipschitz condition are for convenience of stating the results.
(See Section ?? to extend to arbitrary diameter and Lipschitz constant)
1
2
2
distributions, which they show can be done by a form of rejection sampling together with classical
methods for sampling log-concave distributions.
Finally, [SV15] consider information theoretic lower bounds. They show that when ? = 1/d1/2??
no algorithm can, in polynomial time, achieve achieve = 21 ? ?, when optimizing a convex function
over the hypercube. This translates to a super polynomial information theoretic lower bound when
? = ?( ?d ). They additionally give lower bounds when the approximately convex function is
multiplicatively, rather than additively, close to a convex function. 3
We also note a related problem is zero-order optimization, where the goal is to minimize a function
we only have value oracle access to. The algorithmic motivations here come from various applications
where we only have black-box access to the function we are optimizing, and there is a classical line of
work on characterizing the oracle complexity of convex optimization.[NY83, NS, DJWW15]. In all
of these settings however, the oracles are either noiseless, or the noise is stochastic, usually because
the target application is in bandit optimization. [AD10, AFH+ 11, Sha12]
3
Overview of results
Formally, we will consider the following scenario.
Definition 3.1. A function f? : K ? Rd will be called ?-approximately convex if there exists a
1-Lipschitz convex function f : K ? Rd , s.t. ?x ? K, |f?(x) ? f (x)| ? ?.
For ease of exposition, we also assume that K has diameter 14 . We consider the problem of optimizing
f?, more precisely, we are interesting in finding a point x
? ? K, such that
f?(?
x) ? min f?(x) +
x?K
We give the following results:
Theorem 3.1 (Information theoretic lower bound). For very constant c ? 1, there exists a constant
dc such that for every algorithm A, every d ? dc , there exists a convex set K ? Rd with diameter 1,
an ?-approximate convex function f? : K ? R and ? [0, 1/64) 5 such that
2
2
d
? 13c log
? ? max ? ,
d d
Such that A fails to output, with probability ? 1/2, a point x
? ? K with f?(?
x) ? minx?K {f?(x)} +
d c
in o(( ) ) time.
In order to state the upper bounds, we will need the definition of a well-conditioned body:
Definition 3.2 (?-well-conditioned). A convex body K is said to be ?-well-conditioned for ? ? 1,
if there exists a function F : Rd ? R such that K = {x|F (x) ? 0} and for every x ? ?K:
k?2 F (x)k2
k?F (x)k2 ? ?.
This notion of well-conditioning of a convex body to the best of our knowledge has not been defined
before, but it intuitively captures the notion that the convex body should ?curve? in all directions to a
certain extent. In particular, the unit ball has ? = 1.
Theorem 3.2 (Algorithmic upper bound). Let d be a positive integer, ? > 0 be a positive real number,
, ? be two positive real number such that
2
1
? ,
? ? max
?
d
16348
? d
Then there exists an algorithm A such that on given any ?-approximate convex function f? over a
d
?-rounded convex
? ? K with probability 1 ? ? in time
set K ? R of diameter 1, A returns a point x
1
1
poly d, , log ? such that
f?(?
x) ? min f?(x) +
x?K
3
Though these are not too difficult to derive from the additive ones, considering the convex body has diameter
bounded by 1.
4
Generalizing to arbitrary Lipschitz constants and diameters is discussed in Section 6.
5
Since we normalize f to be 1-Lipschitz and K to have diameter 1, the problem is only interesting for ? 1
3
For the reader wishing to digest a condition-free version of the above result, the following weaker
result is also true (and much easier to prove):
Theorem 3.3 (Algorithmic upper bound (condition-free)). Let d be a positive integer, ? > 0 be a
positive real number, , ? be two positive real number such that
2
1
? ? max ? ,
?
d
16348
d
Then there exists an algorithm A such that on given any ?-approximate convex function f? over a
d
?-rounded convex
? ? K with probability 1 ? ? in time
set K ? R of diameter 1, A returns a point x
1
1
poly d, , log ? such that
f?(?
x) ? min f?(x) +
x?S(K,?)
Where S(K, ?) = {x ? K|B (x) ? K}
The result merely states that we can output a value that competes with points ?well-inside? the convex
body ? around which a ball of radius of still lies inside the body.
The assumptions on the diameter of K and the Lipschitz condition are for convenience of stating
the results. It?s quite easy to extend both the lower and upper bounds to an arbitrary diameter and
Lipschitz constant, as we discuss in Section 6.
3.1
Proof techniques
We briefly outline the proof techniques we use. We proceed with the information theoretic lower
bound first. The idea behind the proof is the following. We will construct a function G(x) and a family
of convex functions {fw (x)} depending on a direction w ? S d (S d is the unit sphere in Rd ). On one
hand, the minimal value of G and fw are quite different: minx G(x) ? 0, and minx fw (x) ? ?2.
On the other hand, the approximately convex function f?w (x) for fw (x) we consider will be such
that f?w (x) = G(x) except in a very small cone around w. Picking w at random, no algorithm with
small number of queries will, with high probability, every query a point in this cone. Therefore, the
algorithm will proceed as if the function is G(x) and fail to optimize f?w .
Proceeding to the algorithmic result, since [BLNR15] already shows the existence of an efficient
algorithm when ? = O( d ), we only need to give an algorithm that solves the problem when
2
? = ?( d ) and ? = O( ? d ) (i.e. when , ? are large). There are two main ideas for the algorithm.
First, we show that the gradient of a smoothed version of f?w (in the spirit of [FKM05]) at any point
x will be correlated with x? ? x, where x? = argminx?K f?w (x). The above strategy will however
require averaging the value of f?w along a ball of radius , which in many cases will not be contained
in K (especially when is large). Therefore, we come up with a way to extend f?w outside of K in a
manner that maintains the correlation with x? ? x.
4
Information-theoretic lower bound
In this section, we present the proof of Theorem 3.1.
The idea is to construct a function G(x), a family of convex functions {fw (x)} depending on a
direction w ? S d , such that minx G(x) ? 0, minx fw (x) ? ?2, and an approximately convex
f?w (x) for fw (x) such that f?w (x) = G(x) except in a very small ?critical? region depending on w.
Picking w at random, we want to argue that the algorithm will with high probability not query the
critical region. The convex body K used in the lower bound will be arguably the simplest convex
body imaginable: the unit ball B1 (0).
We might hope to prove a lower bound for even a linear function fw for a start, similarly as in [SV15].
A reasonable candidate construction is the following: we set fw (x) = ?hw, xi for some random
log d
chosen unit vector w and define f?(x) = 0 when |hx, wi| ? ? kxk2 and f?(x) = fw (x) otherwise.6
d
6
For the proof sketch only, to maintain ease of reading all of the inequalities we state will be only correct up
to constants. In the actual proofs we will be completely formal.
4
Observe, this translates to ? =
log d
?
d
. It?s a standard concentration of measure fact that for ?most?
log
d
of the points x in the unit ball, |hx, wi| ? ?d kxk2 . This implies that any algorithm that makes a
polynomial number of queries to f? will with high probability see 0 in all of the queries, but clearly
min f?(x) = ?. However, this idea fails to generalize to optimal range as ? = ?1 is tight for
d
linear, even smooth functions.7
In order to obtain the optimal bound, we need to modify the construction to a non-linear, non-smooth
function. We will, in a certain sense, ?hide? a random linear function inside a non-linear function.
For a random unit vector w, we consider two regions inside the unit ball: a core C = Br (0) for
r = max{, ?1d }, and a ?critical angle? A = {x | |hx, wi| ?
kxk1+?
2
log d
?
d
kxk2 }. The convex function f
will look like
for some ? > 0 outside C ? A and ?hw, xi for x ? C ? A. We construct
?
?
f? as f? = f when f (x) is sufficiently large (e.g. |f (x)| > ?
2 ) and 2 otherwise. Clearly, such f
1+?
obtain its minimal at point w, with f?(w) = ?. However, since f? = kxk2 outside C or A, the
algorithm needs either query A or query C ? Ac to detect w. The former happens with exponentially
log d
small probability in high dimensions, and for any x ? C ? Ac , |f (x)| = |hw, xi| ? ?d kxk2 ?
log d
?
d
2
r ? max{ ? d , d } ? log
fail with high probability.
d
?
?
2,
which implies that f?(x) =
?
2.
Therefore, the algorithm will
Now, we move on to the detailed of the constructions. We will consider K = B 21 (0): the ball of
radius 12 in Rd centered at 0. 8
The family {fw (x)}
4.1
Before delving into the construction we need the following definition:
Definition 4.1 (Lower Convex Envelope (LCE)). Given a set S ? Rd , a function F : S ? R,
define the lower convex envelope FLCE = LCE(F ) as a function FLCE : Rd ? R such that for every
x ? Rd ,
FLCE (x) = max{hx ? y, ?F (y)i + F (y)}
y?S
Proposition 4.1. LCE(F ) is convex.
Proof. LCE(F) is the pointwise maximum of linear functions, so the claim follows.
Remark : The LCE of a function F is a function defined over the entire Rd , while the input function
F is only defined in a set S (not necessarily convex set). When the input function F is convex,
LCE(F ) can be considered as an extension of F to the entire Rd .
To define the family fw (x), we will need four parameters: a power factor ? > 0, a shrinking factor ?,
and a radius factor ? > 0, and a vector w ? Rd such that kwk2 = 12 , which we specify in a short bit.
Construction 4.1. Given w, ?, ?, ?, define the core C = B? (0), the critical angle A = {x |
? : H ? R be defined as
|hx, wi| ? ?kxk2 } and let H = K ? C ? A. Let h
1
1+?
?
h(x)
= kxk2
2
and define lw (x) = ?8hx, wi. Finally let fw : K ? Rd as
n
o
? LCE (x), lw (x)
fw (x) = max h
? LCE = LCE(h)
? as in Definition 4.1.
Where h
We then construct the ?hard? function f?w as the following:
Construction 4.2. Consider the function f?w : K ? R:
fw (x)
if x ? K ? C ? A ;
?
fw (x) =
max{fw (x), 21 ?} otherwise.
7
8
This follows from the results in [DKS14]
We pick B 1 (0) instead of the unit ball in order to ensure the diameter is 1.
2
5
Consider the following settings of the parameters ?, ?, ? (depending on the magnitude of ):
?
c log d
1
? Case 1, ?1d ? ? (log1d)2 : ? = ?d , ? = 10c(log d )1.5 , ? = log(1/?)
.
?
c log d/
1
?
? (log d/)3/2 , ? =
? Case 2, ? ?1d : ? =
, ? = 10c
log(1/?) .
d
d
? Case 3,
1
64
??
1
(log d)2 :
?
?=
c?log d
,
d
? = 21 , ? = 1.
Then, the we formalize the proof intuition from the previous section with the following claims.
Following the the proof outline, we first show the minimum of fw is small, in particular we will show
fw (w) ? ?2.
Lemma 4.1. fw (w) = ?2
Finally, we show that f?w is indeed a ?-approximately convex, by showing ?x ? K, |fw ? f?w | ? ?
and fw is 1-Lipschitz and convex.
Proposition 4.2. f?w is a ?-approximately convex.
Next, we construct G(x), which does not depend on w, we want to show that for an algorithm with
small number of queries of f?w , it can not distinguish fw from this function.
Construction 4.3. Let G : K ? R be defined as:
max 1+?
kxk2 ? ?4 ?, 12 ?
4
G(x) =
1+?
1
2 kxk2
if x ? K ? C ;
otherwise.
The following is true:
Lemma 4.2. G(x) ? 0 and {x ? K | G(x) 6= f?w (x)} ? A
We show how Theorem 3.1 is implied given these statements:
Proof of Theorem 3.1. With everything prior to this set up, the final claim is somewhat standard.
We want to show that no algorithm can, with probability ? 12 , output a point x, s.t. f?w (x) ?
minx f?w (x) + . Since we know that f?w (x) agrees with G(x) everywhere except in K ? A, and
G(x) satisfies minx G(x) ? minx f?w (x) + , we only need to show that with high probability, any
polynomial time algorithm will not query any point in K ? A.
Consider a (potentially) randomized algorithm A, making random choices R1 , R2 , . . . , Rm . Conditioned on a particular choice of randomness r1 , r2 , . . . , rm , for a random choice of w, each ri lies in
A with probability at most exp(?c log(d/)), by a standard Gaussian tail bound. Union bounding,
since m = o(( d )c ) for an algorithm that runs in time o(( d )c ), the probability that at least of the
queries of A lies in A is at most 12 .
But the claim is true for any choice r1 , r2 , . . . , rm of the randomness, by averaging, the claim holds
for r1 , r2 , . . . , rm being sampled according to the randomness of the algorithm.
The proofs of all of the lemmas above have been ommited due to space constraints, and are included
in the appendix in full.
5
Algorithmic upper bound
As mentioned before, the algorithm in [BLNR15] covers the case when ? = O( d ), so we only
2
need to give an algorithm when ? = ?( d ) and ? = O( d ). Our approach will not be making use
of simulated annealing, but a more robust version of gradient descent. The intuition comes from
[FKM05] who use estimates of the gradient of a convex function derived from Stokes? formula:
Z
d
Ew?S d
f (x + rw)w =
?f (x)dx
r
B
6
where w ? S d denotes w being a uniform sample from the sphere S d . Our observation is the gradient
estimation is robust to noise if we instead use f? in the left hand side. Crucially, robust is not in the
sense that it approximates the gradient of f , but it preserves the crucial property of the gradient of
f we need: h??f (x), x? ? xi ? f (x) ? f (x? ). In words, this means if we move x at direction
??f (x) for a small step, then x will be closer to x? , and we will show the property is preserved by
2
f? when ? ? ? d . Indeed, we have that:
d?
?
?Ew?S d
f (x + rw)w , x ? x
r
d
d?
?
?
f (x + rw)w, x ? x
E
? ?Ew?S d
?
d [|hw, x ? xi|]
r
r w?S
The usual [FKM05] calculation
shows that
d
?
f (x + rw)w, x ? x
= ? (f (x) ? f (x? ) ? 2r)
Ew?S d )
r
?
and dr ?Ew?U (S d ) [|hw, x? ? xi|] is bounded by O( ?r d ), since Ew?U (S d ) [|hw, x? ? xi|] = O( ?1d ).
?
? d
?
Therefore, we want f (x) ? f (x ) ? 2r ?
whenever f (x) ? f (x? ) ? . Choosing the optimal
r
2
parameter leads to r = 4 and ? ? ? d .
This intuitive calculation basically proves the simple upper bound guarantee (Theorem 3.3). On
the other hand, the argument requires sampling from a ball of radius ?() around point x. This is
problematic when > ?1d : many convex bodies (e.g. the simplex, L1 ball after rescaling to diameter
one) will not contain a ball of radius even ?1d . The idea is then to make the sampling possible by
?extending? f? outside of K. Namely, we define a new function g : Rd ? R such that (?K (x) is the
projection of x to K)
g(x) = f?(?K (x)) + d(x, K)
g(x) will not be in general convex, but we instead directly bound hEw? 1r g(x + rw)w , x ? x? i
for x ? K and show that it behaves like h??f (x), x? ? xi ? f (x) ? f (x? ).
Algorithm 1 Noisy Convex Optimization
1: Input: A convex set K ? Rd with diam(K) = 1 and 0 ? K. A ?-approximate convex function
f?
2: Define: g : R ? R as:
3:
4:
5:
6:
g?(x) = f?(?K (x)) + d(x, K)
where ?K is the projection to K and d(x, K) is the Euclidean distance from x to K.
3
8388608d2
Initial: x1 = 0, r = 128?
, ? = 4194304d
.
2,T =
4
for t = 1, 2, ...., T do
Let vt = f?(xt ).
Estimate up to accuracy 4194304
in l2 norm (by uniformly randomly sample w):
d
gt = Ew?S d
g?(xt + rw)w
r
where w ? S d means w is uniform sample from the unit sphere.
7:
Update xt+1 = ?K (xt ? ?gt )
8: end for
9: Output mint?[T ] {vt }
The rest of this section will be dedicated to showing the following main lemma for Algorithm 1.
Lemma 5.1 (Main, algorithm). Suppose ? <
x? ? K such that f?(x? ) < f?(xt ) ? 2, then
2 ?
,
16348 d
h?gt , x? ? xt i ?
7
we have: For every t ? [T ], if there exists
64
Assuming this Lemma, we can prove Theorem 3.2.
Proof of Theorem 3.2. We first focus on the number of iterations:
For every t ? 1, suppose f?(x? ) < f?(xt ) ? 2, then we have: (since kgt k ? 2d/r ?
kx? ? xt+1 k22 ? kx? ? (xt ? ?gt )k22
256d
)
kx? ? xt k22 ? 2?hx? ? xt , gt i + ? 2 kgt k22
?
65536d2
? kx? ? xt k22 ?
+ ?2
64
2
4
4
? kx? ? xt k22 ?
+
8388608d2
4194304d2
4
= kx? ? xt k22 ?
8388608d2
?
Since originally kx ? x1 k ? 1, the algorithm ends in poly(d, 1 ) iterations.
=
Now we consider the sample complexity.
Since we know
that
d
g?(xt + rw)w
? 64d
r
2
By standard concentration bound we know that we need poly(d, 1 ) samples to estimate the expecta
tion up to error 2097152
per iteration.
Due to space constraints, we forward the proof of Lemma 5.1 to the appendix.
6
6.1
Discussion and open problems
Arbitrary Lipschitz constants and diameter
We assumed throughout the paper that the convex function f is 1-Lipschitz and the convex set K
has diameter 1. Our results can be easily extended to arbitrary functions and convex sets through a
simple linear transformation. For f with Lipschitz constant kf kLip and K with diameter D, and the
K
K
corresponding approximately convex f?, define g? : D
? R as g?(x) = Dkf1kLip f?(rx). (Where D
is the
rescaling of K by a factor of
1
D .)
This translates to k?
g (x) ? g(x)k2 ?
?
Rkf kLip .
But g(x) =
f (Rx)
Rkf kLip
is
K
R
1-Lipschitz over a set of diameter 1. Therefore, for general functions over a general convex sets,
our result trivially implies the rate for being
approximately-convex
functions is
( ableto optimize
)
2
?
1
1
= max ?
,
Rkf kLip
Rkf
k
d
Rkf
kLip
d
Lip
o
n
2
which simplifies to ? = max ?dRkf
, d .
k
Lip
6.2
Body specific bounds
Our algorithmic result matches the lower bound on well-conditioned bodies. The natural open
problem is to resolve the problem for arbitrary bodies. 9
Also note the lower bound can not hold for any convex body K in Rd : for example, if K is just a one
dimensional line in Rd , then the threshold should not depend on d at all. But even when the ?inherent
dimension? of K is d, the result is still body specific: one can show that for f? over the simplex in Rd ,
when ? ?1d , it is possible to optimize f? in polynomial time even when ? is as large as . 10
Finally, while our algorithm made use of the well-conditioning ? what is the correct property/parameter of the convex body that governs the rate of T () is a tantalizing question to explore in
future work.
9
We do not show it here, but one can prove the upp/lower bound still holds over the hypercube and when one
can find a ball of radius that has most of the mass in the convex body K.
10
Again, we do not show that here, but essentially one can search through the d + 1 lines from the center to
the d + 1 corners.
8
References
[AD10] Alekh Agarwal and Ofer Dekel. Optimal algorithms for online convex optimization
with multi-point bandit feedback. In COLT, pages 28?40. Citeseer, 2010.
[AFH+ 11] Alekh Agarwal, Dean P Foster, Daniel J Hsu, Sham M Kakade, and Alexander Rakhlin.
Stochastic convex optimization with bandit feedback. In Advances in Neural Information
Processing Systems, pages 1035?1043, 2011.
[BLNR15] Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, and Alexander Rakhlin.
Escaping the local minima via simulated annealing: Optimization of approximately
convex functions. In Proceedings of The 28th Conference on Learning Theory, pages
240?265, 2015.
[DJWW15] John C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations.
Information Theory, IEEE Transactions on, 61(5):2788?2806, 2015.
[DKS14] Martin Dyer, Ravi Kannan, and Leen Stougie. A simple randomised algorithm for
convex optimisation. Mathematical Programming, 147(1-2):207?229, 2014.
[FKM05] Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. Online convex
optimization in the bandit setting: gradient descent without a gradient. In Proceedings
of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, pages 385?394.
Society for Industrial and Applied Mathematics, 2005.
[NS] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex
functions. Foundations of Computational Mathematics, pages 1?40.
[NY83] Arkadii Nemirovskii and David Borisovich Yudin. Problem complexity and method
efficiency in optimization. Wiley-Interscience series in discrete mathematics. Wiley,
Chichester, New York, 1983. A Wiley-Interscience publication.
[Sha12] Ohad Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. arXiv preprint arXiv:1209.2388, 2012.
[SV15] Yaron Singer and Jan Vondr?k. Information-theoretic lower bounds for convex optimization with erroneous oracles. In Advances in Neural Information Processing Systems,
pages 3186?3194, 2015.
9
| 6576 |@word version:3 briefly:2 polynomial:9 proportion:1 norm:1 dekel:1 open:3 d2:5 additively:1 crucially:1 citeseer:1 pick:2 harder:1 initial:1 series:1 daniel:1 current:2 dx:1 must:2 john:1 additive:1 update:1 core:2 short:1 lce:9 mathematical:1 along:1 symposium:1 prove:6 interscience:2 inside:4 manner:1 indeed:3 multi:1 resolve:1 actual:1 considering:1 increasing:1 spain:1 notation:1 bounded:2 competes:1 mass:1 what:1 finding:1 transformation:1 nj:2 guarantee:3 every:11 concave:3 k2:3 hit:1 rm:4 unit:9 arguably:1 before:3 positive:6 local:2 modify:1 approximately:15 black:1 might:1 studied:1 ease:2 range:1 practice:1 union:1 jan:1 significantly:1 matching:2 projection:2 word:1 get:1 cannot:1 close:3 convenience:2 andrej:1 optimize:3 dean:1 center:1 convex:80 notion:4 target:1 construction:7 suppose:2 shamir:1 exact:1 programming:1 kxk1:1 preprint:1 solved:1 capture:2 region:3 mentioned:1 intuition:2 complexity:4 nesterov:1 depend:2 tight:1 efficiency:1 completely:1 easily:1 various:1 query:11 outside:4 choosing:1 quite:3 heuristic:2 larger:2 solve:3 otherwise:4 noisy:1 final:1 online:2 net:2 rapidly:1 achieve:2 sixteenth:1 intuitive:1 normalize:1 optimum:1 r1:4 extending:1 adam:1 polylog:1 derive:1 stating:2 depending:4 ac:2 solves:1 c:2 implies:4 come:3 direction:5 radius:8 imaginable:1 correct:2 kgt:2 stochastic:3 centered:2 everything:1 require:1 hx:7 proposition:2 extension:1 hold:3 sufficiently:3 considered:2 around:3 exp:1 algorithmic:9 claim:5 estimation:3 agrees:1 hope:4 minimization:1 clearly:2 gaussian:1 super:3 rkf:5 rather:1 kalai:1 publication:1 derived:1 focus:2 likelihood:2 industrial:1 brendan:1 wishing:1 sense:2 detect:1 entire:2 bandit:5 interested:1 colt:1 yuanzhil:1 construct:5 sampling:5 look:1 sv15:3 future:1 simplex:2 inherent:1 few:2 randomly:1 preserve:1 argminx:1 maintain:1 highly:1 evaluation:6 chichester:1 extreme:1 behind:1 closer:1 ohad:1 unless:1 euclidean:1 walk:1 theoretical:2 minimal:2 cover:1 uniform:2 too:1 dependency:1 ad10:2 randomized:1 siam:1 rounded:2 picking:2 together:1 michael:1 again:1 possibly:1 dr:1 corner:1 inefficient:1 derivative:1 return:2 rescaling:2 li:1 summarized:1 includes:1 spokoiny:1 tion:1 start:1 maintains:1 yaron:1 minimize:2 hariharan:1 accuracy:1 largely:1 efficiently:1 who:1 identify:2 generalize:1 weak:1 basically:1 none:1 rx:2 randomness:3 whenever:1 andre:1 definition:6 proof:13 sampled:1 hsu:1 knowledge:2 formalize:1 alexandre:1 originally:1 specify:1 leen:1 done:1 though:2 box:1 furthermore:1 just:2 correlation:1 klip:5 hand:6 sketch:1 fkm05:4 k22:7 contain:1 true:3 former:1 arkadii:1 upp:1 outline:2 theoretic:9 duchi:1 l1:1 dedicated:1 behaves:1 overview:1 conditioning:3 exponentially:1 volume:1 extend:4 discussed:1 tail:1 approximates:1 kwk2:1 imposing:1 rd:23 trivially:2 mathematics:3 similarly:1 stochasticity:1 access:2 risteski:2 alekh:2 etc:1 gt:5 recent:2 hide:2 optimizing:5 mint:1 scenario:1 certain:2 inequality:1 vt:2 minimum:2 expecta:1 somewhat:1 borisovich:1 full:1 sham:1 smooth:5 match:1 calculation:2 long:1 sphere:3 optimisation:1 essentially:1 noiseless:1 arxiv:2 iteration:3 agarwal:2 preserved:1 want:5 annealing:2 crucial:3 envelope:2 rest:1 tend:1 tengyuan:1 spirit:1 jordan:1 practitioner:1 integer:2 easy:1 variety:1 escaping:1 idea:6 simplifies:1 tradeoff:1 translates:3 br:1 hessian:1 proceed:2 york:1 remark:1 detailed:1 governs:1 narayanan:1 diameter:19 simplest:1 rw:7 exist:2 problematic:1 per:1 discrete:2 afh:2 four:1 threshold:1 ravi:1 merely:1 year:2 cone:2 run:3 angle:2 everywhere:1 
family:4 reader:1 reasonable:1 throughout:1 appendix:2 bit:1 bound:31 distinguish:1 oracle:4 annual:1 precisely:3 constraint:2 belloni:1 ri:1 hew:1 argument:1 min:4 martin:2 conjecture:1 department:2 according:1 ball:15 smaller:1 wi:5 kakade:1 modification:3 happens:1 making:2 intuitively:2 randomised:1 discus:1 fail:2 singer:1 know:3 dyer:1 end:2 yurii:1 informal:2 ofer:1 yuanzhi:1 observe:1 existence:1 denotes:1 clustering:1 ensure:1 graphical:2 especially:1 prof:1 classical:3 hypercube:2 society:1 implied:1 objective:2 move:2 already:1 quantity:2 digest:1 question:1 strategy:1 concentration:2 usual:1 said:1 gradient:12 minx:14 distance:1 simulated:2 argue:1 extent:2 considers:1 provable:2 kannan:1 assuming:1 pointwise:1 multiplicatively:1 vladimir:1 liang:1 difficult:1 statement:1 potentially:1 upper:7 observation:1 finite:1 descent:4 extended:1 ever:1 stokes:1 nemirovskii:1 dc:2 smoothed:1 arbitrary:6 david:1 namely:1 barcelona:1 nip:1 able:1 usually:2 below:1 reading:1 summarize:1 max:13 wainwright:1 power:2 critical:4 natural:6 rely:1 improve:1 flaxman:1 prior:2 understanding:2 l2:1 kf:1 interesting:2 foundation:1 foster:1 course:2 free:4 side:3 weaker:1 formal:1 face:1 characterizing:1 tauman:1 curve:2 dimension:3 boundary:1 feedback:2 yudin:1 forward:1 made:1 transaction:1 approximate:4 vondr:1 b1:1 assumed:1 xi:8 search:3 additionally:1 lip:2 delving:1 robust:3 poly:10 necessarily:1 constructing:1 domain:1 main:3 abraham:1 motivation:1 noise:2 bounding:1 allowed:1 body:23 x1:2 wiley:3 n:2 shrinking:1 fails:2 lie:4 candidate:1 kxk2:9 mcmahan:1 lw:2 hw:6 theorem:12 formula:1 erroneous:1 xt:15 specific:2 showing:2 r2:4 rakhlin:2 exists:11 albeit:1 magnitude:1 execution:1 conditioned:7 kx:7 easier:1 rejection:1 generalizing:1 tantalizing:1 explore:1 contained:1 applies:2 satisfies:1 acm:1 goal:1 diam:1 exposition:1 lipschitz:15 fw:23 hard:1 included:1 except:3 uniformly:1 averaging:2 lemma:7 called:1 ew:7 formally:2 alexander:2 wibisono:1 princeton:6 d1:1 correlated:1 |
6,165 | 6,577 | Improving PAC Exploration
Using the Median of Means
Jason Pazis
Laboratory for Information and Decision Systems
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
[email protected]
Ronald Parr
Department of Computer Science
Duke University
Durham, NC 27708
[email protected]
Jonathan P. How
Aerospace Controls Laboratory
Department of Aeronautics and Astronautics
Massachusetts Institute of Technology
Cambridge, MA 02139, USA
[email protected]
Abstract
We present the first application of the median of means in a PAC exploration
algorithm for MDPs. Using the median of means allows us to significantly reduce
the dependence of our bounds on the range of values that the value function can
take, while introducing a dependence on the (potentially much smaller) variance of
the Bellman operator. Additionally, our algorithm is the first algorithm with PAC
bounds that can be applied to MDPs with unbounded rewards.
1
Introduction
As the reinforcement learning community has shifted its focus from heuristic methods to methods
that have performance guarantees, PAC exploration algorithms have received significant attention.
Thus far, even the best published PAC exploration bounds are too pessimistic to be useful in practical applications. Even worse, lower bound results [14, 7] indicate that there is little room for
improvement.
While these lower bounds prove that there exist pathological examples for which PAC exploration
can be prohibitively expensive, they leave the door open for the existence of ?well-behaved? classes
of problems in which exploration can be performed at a significantly lower cost. The challenge of
course is to identify classes of problems that are general enough to include problems of real-world
interest, while at the same time restricted enough to have a meaningfully lower cost of exploration
than pathological instances.
The approach presented in this paper exploits the fact that while the square of the maximum value
that the value function can take (Q2max ) is typically quite large, the variance of the Bellman operator
is rather small in many domains of practical interest. For example, this is true in many control tasks:
It is not very often that an action takes the system to the best possible state with 50% probability and
to the worst possible state with 50% probability.
Most PAC exploration algorithms take an average over samples. By contrast, the algorithm presented
in this paper splits samples into sets, takes the average over each set, and returns the median of the
averages. This seemingly simple trick (known as the median trick [1]), allows us to derive sample
complexity bounds that depend on the variance of the Bellman operator rather than Q2max . Addi30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
tionally, our algorithm (Median-PAC) is the first reinforcement learning algorithm with theoretical
guarantees that allows for unbounded rewards.1
Not only does Median-PAC offer significant sample complexity savings in the case when the variance
of the Bellman operator is low, but even in the worst case (the variance of the Bellman operator is
Q2
bounded above by max
4 ) our bounds match the best, published PAC bounds. Note that Median-PAC
does not require the variance of the Bellman operator to be known in advance. Our bounds show that
there is an inverse relationship between the (possibly unknown) variance of the Bellman operator
and Median-PAC?s performance. This is to the best of our knowledge not only the first application
of the median of means in PAC exploration, but also the first application of the median of means in
reinforcement learning in general.
Contrary to recent work which has exploited variance in Markov decision processes to improve PAC
bounds [7, 3], Median-PAC makes no assumptions about the number of possible next-states from
every state-action (it does not even require the number of possible next states to be finite), and as a
result it is easily extensible to the continuous state, concurrent MDP, and delayed update settings [12].
2
Background, notation, and definitions
In the following, important symbols and terms will appear in bold when first introduced. Let X be
the domain of x. Throughout this paper, 8 x will serve as a shorthand for 8x 2 X . In the following
s, s?, s?, s0 are used to denote various states, and a, a
?, a
?, a0 are used to denote actions.
A Markov Decision Process (MDP) [13] is a 5-tuple (S, A, P, R, ), where S is the state space
of the process, A is the action space2 , P is a Markovian transition model p(s0 |s, a) denotes the
probability of a transition to state s0 when taking action a in state s , R is a reward function
R(s, a, s0 ) is the reward for taking action a in state s and transitioning to state s0 , and 2 [0, 1)
is a discount factor for future rewards. A deterministic policy ? is a mapping ? : S 7! A from
states to actions; ?(s) denotes the action choice in state s. The value V ? (s) of state s under
policy ? is defined as the expected, accumulated, discounted reward when the process begins
in state s and all decisions are made according to policy ?. There exists an optimal policy ??
for choosing actions which yields the optimal P
value function V ? (s), defined recursively via the
?
Bellman optimality equation V (s) = maxa { s0 p(s0 |s, a) (R(s, a, s0 ) + V ? (s0 ))}. Similarly,
the value Q? (s, a) of a state-action (s, a) under policy ? is defined as the expected, accumulated,
discounted reward when the process begins in state s by taking action a and all decisions thereafter
are
according to policy ?. The Bellman optimality equation for Q becomes Q? (s, a) =
P made
0
0
? 0 0
0
(s , a )}). For a fixed policy ?
s0 p(s |s, a) (R(s, a, s ) + maxa {Q ?
? the Bellman operator for Q
P
?
0
0
0
0
is defined as B Q(s, a) = s0 p(s |s, a) R(s, a, s ) + Q(s , ?(s )) . In reinforcement learning
(RL) [15], a learner interacts with a stochastic process modeled as an MDP and typically observes the
state and immediate reward at every step; however, the transition model P and reward function R are
not known. The goal is to learn a near optimal policy using experience collected through interaction
with the process. At each step of interaction, the learner observes the current state s, chooses an
action a, and observes the reward received r, and resulting next state s0 , essentially sampling the
transition model and reward function of the process. Thus experience comes in the form of (s, a, r, s0 )
samples.
We assume that all value functions Q live in a complete metric space.
Definition 2.1. Qmax denotes an upper bound on the expected, accumulated, discounted reward from any state-action under any policy.
We require that Qmin, the minimum expected, accumulated, discounted reward from any state-action under any policy, is bounded, and in order to simplify notation we also assume without loss of generality that it is bounded below by 0. If Qmin < 0, this assumption is easy to satisfy in all MDPs for which Qmin is bounded, by simply shifting the reward space by (γ − 1)Qmin.
¹ Even though domains with truly unbounded rewards are not common, many domains exist for which infrequent events with extremely high (winning the lottery) or extremely low (nuclear power-plant meltdown) rewards exist. Algorithms whose sample complexity scales with the highest magnitude event are not well suited to such domains.
² For simplicity of exposition we assume that the same set of actions is available at every state. Our results readily extend to the case where the action set can differ from state to state.
There have been many definitions of sample complexity in RL. In this paper we will be using the following [12]:
Definition 2.2. Let (s1, s2, s3, ...) be the random path generated on some execution of π, where π is an arbitrarily complex, possibly non-stationary, possibly history dependent policy (such as the policy followed by an exploration algorithm). Let ε be a positive constant, T the (possibly infinite) set of time steps for which V^π(st) < V*(st) − ε, and define³
Δe(t) = V*(st) − V^π(st) − ε, ∀t ∈ T,
Δe(t) = 0, ∀t ∉ T.
The Total Cost of Exploration (TCE) is defined as the undiscounted infinite sum Σ_{t=0}^∞ Δe(t).
"Number of suboptimal steps" bounds follow as a simple corollary of TCE bounds.
³ Note that V^π(st) denotes the expected, discounted, accumulated reward of the arbitrarily complex policy π from state st at time t, rather than the expectation of some stationary snapshot of π.
We will be using the following definition of efficient PAC exploration [14]:
Definition 2.3. An algorithm is said to be efficient PAC-MDP (Probably Approximately Correct in Markov Decision Processes) if, for any ε > 0 and 0 < δ < 1, its sample complexity, its per-timestep computational complexity, and its space complexity are less than some polynomial in the relevant quantities (S, A, 1/ε, 1/δ, 1/(1 − γ)), with probability at least 1 − δ.
3 The median of means
Before we present Median-PAC we will demonstrate the usefulness of the median of means with a
simple example. Suppose we are given n independent samples from a random variable X and we
want to estimate its mean. The types of guarantees that we can provide about how close that estimate will be to the expectation will depend on what knowledge we have about the variable, and on the method we use to compute the estimate. The main question of interest in our work is how many samples are needed until our estimate is ε-close to the expectation with probability at least 1 − δ.
Let the expectation of X be E[X] = μ and its variance var[X] = σ². Cantelli's inequality tells us that P(X ≥ μ + ε) ≤ σ²/(σ² + ε²) and P(X ≤ μ − ε) ≤ σ²/(σ² + ε²). Let Xi be a random variable describing the value of the i-th sample, and define X′ = (X1 + X2 + ··· + Xn)/n. We have that E[X′] = μ and var[X′] = σ²/n. From Cantelli's inequality we have that P(X′ ≥ μ + ε) ≤ σ²/(σ² + nε²) and P(X′ ≤ μ − ε) ≤ σ²/(σ² + nε²). Solving for n we have that we need at most n = σ²(1 − δ)/(δε²) = O(σ²/(δε²)) samples until our estimate is ε-close to the expectation with probability at least 1 − δ. In RL, it is common to apply a union bound over the entire state-action space in order to prove uniformly good approximation. This means that δ has to be small enough that even when multiplied with the number of state-actions, it yields an acceptably low probability of failure. The most significant drawback of the bound above is that it grows very quickly as δ becomes smaller. Without further assumptions one can show that the bound above is tight for the average estimator.
If we know that X can only take values in a bounded range a ≤ X ≤ b, Hoeffding's inequality tells us that P(X′ ≥ μ + ε) ≤ e^{−2nε²/(b−a)²} and P(X′ ≤ μ − ε) ≤ e^{−2nε²/(b−a)²}. Solving for n we have that n = (b − a)² ln(1/δ)/(2ε²) samples suffice to guarantee that our estimate is ε-close to the expectation with probability at least 1 − δ. Hoeffding's inequality yields a much better bound with respect to δ, but introduces a quadratic dependence on the range of values that the variable can take. For long planning horizons (discount factor γ close to 1) and/or large reward magnitudes, the range of possible Q-values can be very large, much larger than the variance of individual state-actions.
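For intuition, the three sample-size requirements just derived can be compared numerically. The sketch below uses arbitrary illustrative values for σ, ε, δ and the range b − a; it is not from the paper.

```python
import numpy as np

sigma, eps, delta, b_minus_a = 1.0, 0.1, 1e-6, 100.0

n_cantelli = sigma**2 * (1 - delta) / (delta * eps**2)          # average + Cantelli
n_hoeffding = b_minus_a**2 * np.log(1 / delta) / (2 * eps**2)   # average + Hoeffding
n_mom = (200 / 9) * sigma**2 * np.log(1 / delta) / eps**2       # median of means

print(f"Cantelli: {n_cantelli:.3g}  Hoeffding: {n_hoeffding:.3g}  "
      f"median of means: {n_mom:.3g}")
```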
We can get the best of both worlds by using a more sophisticated estimator. Instead of taking the average over n samples, we will split them into km = nε²/(4σ²) sets of 4σ²/ε² samples each,⁴ compute the average over each set, and then take the median of the averages. From Cantelli's inequality we have that with probability at least 4/5, each one of the sets will not underestimate or overestimate the mean μ by more than ε. Let f− be the function that counts the number of sets that underestimate the mean by more than ε, and f+ the function that counts the number of sets that overestimate the mean by more than ε. From McDiarmid's inequality [9] we have that P(f− ≥ km/2) ≤ e^{−2(3km/10)²/km} and P(f+ ≥ km/2) ≤ e^{−2(3km/10)²/km}. Solving for n we have that n = (200/9)·σ² ln(1/δ)/ε² ≈ 22.22·σ² ln(1/δ)/ε² samples suffice to guarantee that our estimate is ε-close to the expectation with probability at least 1 − δ.
⁴ The number of samples per set was chosen so as to minimize the constants in the final bound.
The median of means offers logarithmic dependence on 1/δ, independence from the range of values that the variables in question can take (even allowing for them to be infinite), and can be computed efficiently. The median of means estimator only requires a finite variance and the existence of a mean. No assumptions (including boundedness) are made on higher moments.
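A minimal sketch of the estimator just described, in plain Python/NumPy (the function name and the heavy-tailed test distribution are illustrative choices, not from the paper):

```python
import numpy as np

def median_of_means(samples, num_sets):
    """Split samples into num_sets equal groups, average each group, return the median."""
    samples = np.asarray(samples)
    assert len(samples) >= num_sets
    n = (len(samples) // num_sets) * num_sets   # drop the remainder so groups are equal
    groups = samples[:n].reshape(num_sets, -1)
    return np.median(groups.mean(axis=1))

# Heavy-tailed data: the plain average is fragile, the median of means much less so.
rng = np.random.default_rng(1)
x = rng.standard_t(df=3, size=2000)             # finite variance, heavy tails
print(np.mean(x), median_of_means(x, num_sets=20))
```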
4 Median PAC exploration

Algorithm 1 Median-PAC
1: Inputs: start state s, discount factor γ, max number of samples k, number of sets km, and acceptable error εa.
2: Initialize sample sets unew(s, a) = ∅, u(s, a) = ∅ ∀(s, a). (|u(s, a)| denotes the number of samples in u(s, a).)
3: Set εb = εa√k, and initialize value function Q̃(s, a) = Qmax ∀(s, a).
4: loop
5:   Perform action a = argmax_ā Q̃(s, ā).
6:   Receive reward r, and transition to state s′.
7:   if |u(s, a)| < k then
8:     Add (s, a, r, s′) to unew(s, a).
9:     if |unew(s, a)| > |u(s, a)| and |unew(s, a)| = 2^i·km, where i ≥ 0 is an integer, then
10:      u(s, a) = unew(s, a)
11:      unew(s, a) = ∅
12:    end if
13:    while max_(s,a)(B̃Q̃(s, a) − Q̃(s, a)) > εa or max_(s,a)(Q̃(s, a) − B̃Q̃(s, a)) > εa do
14:      Set Q̃(s, a) = B̃Q̃(s, a) ∀(s, a).
15:    end while
16:  end if
17: end loop
18: function B̃Q̃(s, a)
19:   if |u(s, a)| ≥ km then
20:     Let (s, a, ri, s′i) be the i-th sample in u(s, a).
21:     for j = 1 to km do
22:       g(j) = (km/|u(s, a)|) · Σ_{i = 1 + (j−1)|u(s,a)|/km}^{j·|u(s,a)|/km} (ri + γ max_ā Q̃(s′i, ā))
23:     end for
24:     return min{ Qmax, εb/√|u(s, a)| + median{g(1), ..., g(km)} }
25:   else
26:     return Qmax.
27:   end if
28: end function
Algorithm 1 has three parameters that can be set by the user:
• k is the maximum number of samples per state-action. As we will show, higher values for k lead to increased sample complexity but better approximation.
• εa is an "acceptable error" term. Since Median-PAC is based on value iteration (lines 13 through 15) we specify a threshold after which value iteration should terminate. Value iteration is suspended when the max-norm of the difference between Bellman backups is no larger than εa.
• Due to the stochasticity of Markov decision processes, Median-PAC is only guaranteed to achieve a particular approximation quality with some probability. km offers a trade-off between approximation quality and the probability that this approximation quality is achieved. For a fixed k, smaller values of km offer potentially improved approximation quality, while larger values offer a higher probability of success. For simplicity of exposition our analysis requires that k = 2^i·km for some integer i. If km ≥ ⌈(50/9) ln(4 log2(4Q²max/ε²a)|SA|²/δ)⌉, the probability of failure is bounded above by δ.
Like most modern PAC exploration algorithms, Median-PAC is based on the principle of optimism in the face of uncertainty. At every step, the algorithm selects an action greedily based on the current estimate of the Q-value function Q̃. The value function is optimistically initialized to Qmax, the highest value that any state-action can take. If k is set appropriately (see Theorem 5.4), the value function is guaranteed to remain approximately optimistic (approximately represent the most optimistic world consistent with the algorithm's observations) with high probability.
We would like to draw the reader's attention to two aspects of Median-PAC, both in the way Bellman backups are computed: 1) Instead of taking a simple average over sample values, Median-PAC divides them into km sets, computes the mean over each set, and takes the median of the means. 2) Instead of using all the samples available for every state-action, Median-PAC uses samples in batches of a power of 2 times km (line 9). The reasoning behind the first choice follows from the discussion above: using the median of means will allow us to show that Median-PAC's complexity scales with the variance of the Bellman operator (see Definition 5.1) rather than Q²max. The reasoning behind using samples in
batches of increasing powers of 2 is more subtle. A key requirement in the analysis of our algorithm
is that samples belonging to the same state-action are independent. While the outcome of sample i
does not provide information about the outcome of sample j if i < j (from the Markov property), the
fact that j samples exist can reveal information about the outcome of i. If the first i samples led to a
severe underestimation of the value of the state-action in question, it is likely that j samples would
never have been collected. The fact that they did gives us some information about the outcome of the
first i samples. Using samples in batches, and discarding the old batch when a new batch becomes
available, ensures that the outcomes of samples within each batch are independent from one another.
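A sketch of this batching rule (lines 8–12 of Algorithm 1) in Python; the function name and list-based storage are illustrative, but the logic — only ever swapping in a complete fresh batch whose size is km, 2km, 4km, ... — is the one described above:

```python
def add_sample(u, u_new, sample, k, km):
    """Stage sample in u_new; replace u wholesale when u_new reaches 2**i * km samples."""
    if len(u) >= k:
        return u, u_new                       # this state-action is already full
    u_new = u_new + [sample]
    n = len(u_new)
    is_pow2_times_km = n % km == 0 and ((n // km) & (n // km - 1)) == 0
    if n > len(u) and is_pow2_times_km:
        return u_new, []                      # discard the old batch, restart staging
    return u, u_new
```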
5 Analysis
Definition 5.1. σ is the minimal constant satisfying, ∀(s, a, π_Q̃, Q̃),
Σ_{s′} p(s′|s, a) ( R(s, a, s′) + γQ̃(s′, π_Q̃(s′)) − B^{π_Q̃}Q̃(s, a) )² ≤ σ²,
where Q̃ refers to any value function produced by Median-PAC, rather than any conceivable value function (similarly π_Q̃ refers to any greedy policy over Q̃ followed during the execution of Median-PAC, rather than any conceivable policy).
In the following we will call σ² the variance of the Bellman operator. Note that the variance of the Bellman operator is not the same as the variance, or stochasticity, in the transition model of an MDP. A state-action can be highly stochastic (lead to many possible next states), yet if all the states it transitions to have similar values, the variance of its Bellman operator will be small.
From Lemmas 5.2, 5.3, and Theorem 5.4 below, we have that Median-PAC is efficient PAC-MDP.
Lemma 5.2. The space complexity of algorithm 1 is O(k|S||A|).
Proof. Follows directly from the fact that at most k samples are stored per state-action.
Lemma 5.3. The per step computational complexity of algorithm 1 is bounded above by O( k|S||A|² ln(Qmax/εa)/(1 − γ) ).
Proof. The proof of this lemma is deferred to the appendix.
Theorem 5.4 below is the main theorem of this paper. It decomposes errors into the following three sources:
1. εa is the error caused by the fact that we are only finding an εa-approximation, rather than the true fixed point of the approximate Bellman operator B̃, and the fact that we are using only a finite set of samples (at most k) to compute the median of the means, thus we only have an estimate.
2. εu is the error caused by underestimating the variance of the MDP. When k is too small and Median-PAC fails to be optimistic, εu will be non-zero. εu is a measure of how far Median-PAC is from being optimistic (following the greedy policy over the value function of the most optimistic world consistent with its observations).
3. Finally, Δe(t) is the error caused by the fact that at time t there may exist state-actions that do not yet have k samples.
Theorem 5.4. Let (s1, s2, s3, ...) be the random path generated on some execution of Median-PAC, and let π̃ be the (non-stationary) policy followed by Median-PAC. Let εu = max{0, 2σ√(km/k) − εa}, and let εa be defined as in algorithm 1. If km = ⌈(50/9) ln(4 log2(4Q²max/ε²a)|SA|²/δ)⌉, δ < 1, and k = 2^i·km for some integer i, then with probability at least 1 − δ, for all t,

V*(st) − V^π̃(st) ≤ (2εu + 5εa)/(1 − γ) + Δe(t),    (1)

where Σ_{t=0}^∞ Δe(t) < c0, with

c0 = (|SA| + 1) ⌈(1 + log2(1/(1 − γ))) ln((1 − γ)Qmax/εa)⌉ (2km + log2(2k/km)) (8Qmax + εa√k(8 + √2)) / (1 − γ),

so that⁵

Σ_{t=0}^∞ Δe(t) ≤ Õ( (km|SA| + 1) ⌈(1/(1 − γ)) ln((1 − γ)Qmax/εa)⌉ ⌈log2(2k/km)⌉ (Qmax + εa√k) ).    (2)

If k = 2^i·km, where i is the smallest integer such that 2^i ≥ 4σ²/ε²a, and ε0 = 5εa/(1 − γ), then with probability at least 1 − δ, for all t,

V*(st) − V^π̃(st) ≤ ε0 + Δe(t),    (3)

where

Σ_{t=0}^∞ Δe(t) ≤ Õ( ( σ²/(ε0(1 − γ)²) + Qmax/(1 − γ) ) |SA| ).    (4)

Note that the probability of success holds for all timesteps simultaneously, and Σ_{t=0}^∞ Δe(t) is an undiscounted infinite sum.
Proof. The detailed proof of this theorem is deferred to the appendix. Here we provide a proof sketch.
The non-stationary policy of the algorithm can be broken up into fixed policy (and fixed approximate value function) segments. The first step in proving Theorem 5.4 is to show that, with high probability, the Bellman error of each state-action at a particular fixed approximate value function segment is acceptable with respect to the number of samples currently available for that state-action. We use Cantelli's and McDiarmid's inequalities to prove this point. This is where the median of means becomes useful, and it is the main difference between our work and earlier work. We then combine the result from the median of means, the fact that there are only a small number of possible policy and approximate value function changes that can happen during the lifetime of the algorithm, and the union bound, to prove that the Bellman error of all state-actions during all timesteps is acceptable with high probability. We subsequently prove that, due to the optimistic nature of Median-PAC, at every time-step it will either perform well, or learn something new about the environment with high probability. Since there is only a finite number of things it can learn, the total cost of exploration for Median-PAC will be small with high probability.
A typical "number of suboptimal steps" sample complexity bound follows as a simple corollary of Theorem 5.4. If the total cost of exploration is Σ_{t=0}^∞ Δe(t) for an ε0-optimal policy, there can be no more than (Σ_{t=0}^∞ Δe(t))/ε1 steps that are more than (ε0 + ε1)-suboptimal.
⁵ f(n) = Õ(g(n)) is a shorthand for f(n) = O(g(n) log^c g(n)) for some constant c.
Note that the sample complexity of Median-PAC depends log-linearly on Qmax, which can be finite even if Rmax is infinite. Consider for example an MDP for which the reward at every state-action follows a Gaussian distribution (for discrete MDPs this example requires rewards to be stochastic, while for continuous MDPs rewards can be a deterministic function of state-action-nextstate, since there can be an infinite number of possible nextstates for every state-action). If the mean of the reward for every state-action is bounded above by c, Qmax is bounded above by c/(1 − γ), even though Rmax is infinite.
As we can see from Theorem 5.4, apart from being the first PAC exploration algorithm that can be applied to MDPs with unbounded rewards, Median-PAC offers significant advantages over the current state of the art for MDPs with bounded rewards. Until recently, the algorithm with the best known sample complexity for the discrete state-action setting was MORMAX, an algorithm by Szita and Szepesvári [16]. Theorem 5.4 offers an improvement of 1/(1 − γ)² even in the worst case, and trades a factor of Q²max for a (potentially much smaller) factor of σ². A recent algorithm by Pazis and Parr [12] currently offers the best known bounds for PAC exploration without additional assumptions on the number of states that each action can transition to. Compared to that work we trade a factor of Q²max for a factor of σ².
5.1 Using Median-PAC when σ is not known
In many practical situations σ will not be known. Instead the user will have a fixed exploration cost budget, a desired maximum probability of failure δ, and a desired maximum error εa. Given δ we can solve for the number of sets as km = ⌈(50/9) ln(4 log2(4Q²max/ε²a)|SA|²/δ)⌉, at which point all variables in equation 2 except for k are known, and we can solve for k. When the sampling budget is large enough such that k ≥ 4σ²km/ε²a, then εu in equation 1 will be zero. Otherwise εu = 2σ√(km/k) − εa.
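A sketch of this recipe (illustrative Python; `budget` stands for the per-state-action sampling budget, which is assumed to be at least km, and the km expression is the one reconstructed above):

```python
import math

def median_pac_params(q_max, eps_a, num_sa, delta, budget):
    """Pick km from delta, then the largest admissible k = 2**i * km within the budget."""
    km = math.ceil((50 / 9) * math.log(4 * math.log2(4 * q_max**2 / eps_a**2)
                                       * num_sa**2 / delta))
    i = max(0, math.floor(math.log2(budget / km)))   # largest i with 2**i * km <= budget
    return km, (2 ** i) * km

km, k = median_pac_params(q_max=10.0, eps_a=0.1, num_sa=100, delta=0.05, budget=100000)
```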
5.2 Beyond the discrete state-action setting
Recent work has extended PAC exploration to the continuous state [11] concurrent exploration [4] and
delayed update [12] settings. The goal in the concurrent exploration setting is to explore in multiple
identical or similar MDPs and incur low aggregate exploration cost over all MDPs. For a concurrent
algorithm to offer an improvement over non-concurrent exploration, the aggregate cost must be lower
than the cost of non-concurrent exploration times the number of tasks. The delayed update setting
takes into account the fact that in real world domains, reaching a fixed point after collecting a new
sample can take longer that the time between actions. Contrary to other work that has exploited
the variance of MDPs to improve bounds on PAC exploration [7, 3] our analysis does not make
assumptions about the number of possible next states from a given action. As such, Median-PAC
and its bounds are easily extensible to the continuous state, concurrent exploration, delayed update
setting. Replacing the average over samples in an approximation unit with the median of means over samples in an approximation unit in the algorithm of Pazis and Parr [12] improves their bounds (which are the best published bounds for PAC exploration in these settings) by a factor of (Rmax + Qmax)² while introducing a factor of σ².
6 Experimental evaluation
We compared Median-PAC against the algorithm of Pazis and Parr [12] on a simple 5 by 5 gridworld (see appendix for more details). The agent has four actions: move one square up, down, left, or right. All actions have a 1% probability of self-transition with a reward of 100. Otherwise the agent moves in the chosen direction and receives a reward of 0, unless its action causes it to land on the top-right corner, in which case it receives a reward of 1. The world wraps around and the agent always starts at the center. The optimal policy for this domain is to take the shortest path to the top-right corner if at a state other than the top-right corner, and take any action while at the top-right corner.
While the probability of any individual sample being a self-transition is small, unless the number of samples per state-action is very large, the probability that there will exist at least one state-action with significantly more than 1/100 sampled self-transitions is high. As a result, the naive average algorithm frequently produced a policy that maximized the probability of encountering state-actions with more than 1/100 sampled self-transitions. By contrast, it is far less likely that there will exist a state-action for which at least half of the sets used by the median of means have more than 1/100 sampled self-transitions. Median-PAC was able to consistently find the optimal policy.
7 Related Work
Maillard, Mann, and Mannor [8] present the distribution norm, a measure of hardness of an MDP.
Similarly to our definition of the variance of the Bellman operator, the distribution norm does not
directly depend on the stochasticity of the underlying transition model. It would be interesting to see
if the distribution norm (or a similar concept) can be used to improve PAC exploration bounds for
"easy" MDPs.
While to the best our knowledge our work is the first in PAC exploration for MDPs that introduces a
measure of hardness for MDPs (the variance of the Bellman operator), measures of hardness have
been previously used in regret analysis [6]. Such measures include the diameter of an MDP [6], the
one way diameter [2], as well as the span [2]. These measures express how hard it is to reach any
state of an MDP from any other state. A major advantage of sample complexity over regret is that
finite diameter is not required to prove PAC bounds. Nevertheless, if introducing a requirement for a
finite diameter could offer drastically improved PAC bounds, it may be worth the trade-off for certain
classes of problems. Note that variance and diameter of an MDP appear to be orthogonal. One can
construct examples of arbitrary diameter and then manipulate the variance by changing the reward
function and/or discount factor.
Another measure of hardness which was recently introduced in regret analysis is the Eluder dimension.
Osband and Van Roy [10] show that if an MDP can be parameterized within some known function
class, regret bounds that scale with the dimensionality, rather than cardinality of the underlying MDP
can be obtained. Like the diameter, the Eluder dimension appears to be orthogonal to the variance of
the Bellman operator, potentially allowing for the two concepts to be combined.
Lattimore and Hutter [7] have presented an algorithm that can match the best known lower bounds
for PAC exploration up to logarithmic factors for the case of discrete MDPs where every state-action
can transition to at most two next states.
To the best of our knowledge there has been no work in learning with unbounded rewards. Harrison [5]
has examined the feasibility of planning with unbounded rewards.
Acknowledgments
We would like to thank Emma Brunskill, Tor Lattimore, and Christoph Dann for spotting an error
in an earlier version of this paper, as well as the anonymous reviewers for helpful comments and
suggestions. This material is based upon work supported in part by The Boeing Company, by
ONR MURI Grant N000141110688, and by the National Science Foundation under Grant No. IIS-1218931. Opinions, findings, conclusions or recommendations herein are those of the authors and not
necessarily those of the NSF.
References
[1] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences (special issue of selected papers from STOC'96), 58:137–147, 1999.
[2] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), pages 35–42, June 2009.
[3] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, 2015.
[4] Zhaohan Guo and Emma Brunskill. Concurrent PAC RL. In AAAI Conference on Artificial Intelligence, pages 2624–2630, 2015.
[5] J. Michael Harrison. Discrete dynamic programming with unbounded rewards. The Annals of Mathematical Statistics, 43(2):636–644, 1972.
[6] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, August 2010.
[7] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory, volume 7568 of Lecture Notes in Computer Science, pages 320–334. Springer Berlin / Heidelberg, 2012.
[8] Odalric-Ambrym Maillard, Timothy A. Mann, and Shie Mannor. "How hard is my MDP?" The distribution-norm to the rescue. In Advances in Neural Information Processing Systems 27, pages 1835–1843, 2014.
[9] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, number 141 in London Mathematical Society Lecture Note Series, pages 148–188. Cambridge University Press, August 1989.
[10] Ian Osband and Benjamin Van Roy. Model-based reinforcement learning and the eluder dimension. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1466–1474, 2014.
[11] Jason Pazis and Ronald Parr. PAC optimal exploration in continuous space Markov decision processes. In AAAI Conference on Artificial Intelligence, pages 774–781, July 2013.
[12] Jason Pazis and Ronald Parr. Efficient PAC-optimal exploration in concurrent, continuous state MDPs with delayed updates. In AAAI Conference on Artificial Intelligence, February 2016.
[13] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, April 1994.
[14] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement learning in finite MDPs: PAC analysis. Journal of Machine Learning Research, 10:2413–2444, December 2009.
[15] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, Massachusetts, 1998.
[16] Istvan Szita and Csaba Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In International Conference on Machine Learning, pages 1031–1038, 2010.
Dynamic Filter Networks
Bert De Brabandere¹*, Xu Jia¹*, Tinne Tuytelaars¹, Luc Van Gool¹,²
¹ ESAT-PSI, KU Leuven, iMinds  ² D-ITET, ETH Zurich
¹ [email protected]  ² [email protected]
Abstract
In a traditional convolutional layer, the learned filters stay fixed after training. In
contrast, we introduce a new framework, the Dynamic Filter Network, where filters
are generated dynamically conditioned on an input. We show that this architecture
is a powerful one, with increased flexibility thanks to its adaptive nature, yet without
an excessive increase in the number of model parameters. A wide variety of filtering
operations can be learned this way, including local spatial transformations, but also
others like selective (de)blurring or adaptive feature extraction. Moreover, multiple
such layers can be combined, e.g. in a recurrent architecture.
We demonstrate the effectiveness of the dynamic filter network on the tasks of
video and stereo prediction, and reach state-of-the-art performance on the moving
MNIST dataset with a much smaller model. By visualizing the learned filters,
we illustrate that the network has picked up flow information by only looking at
unlabelled training data. This suggests that the network can be used to pretrain
networks for various supervised tasks in an unsupervised way, like optical flow and
depth estimation.
1 Introduction
Humans are good at predicting another view from related views. For example, humans can use
their everyday experience to predict how the next frame in a video will differ; or after seeing a
person?s profile face have an idea of her frontal view. This capability is extremely useful to get early
warnings about impinging dangers, to be prepared for necessary actions, etc. The vision community
has realized that endowing machines with similar capabilities would be rewarding.
Several papers have already addressed the generation of an image conditioned on given image(s).
Yim et al. [24] and Yang et al. [23] learn to rotate a given face to another pose. The authors
of [16, 19, 18, 15, 12] train a deep neural network to predict subsequent video frames. Flynn et al. [3]
use a deep network to interpolate between views separated by a wide baseline. Yet all these methods
apply the exact same set of filtering operations on each and every input image. This seems suboptimal
for the tasks at hand. For example, for video prediction, there are different motion patterns within
different video clips. The main idea behind our work is to generate the future frames with parameters
adapted to the motion pattern within a particular video. Therefore, we propose a learnable parameter layer that provides custom parameters for different samples.
* X. Jia and B. De Brabandere contributed equally to this work and are listed in alphabetical order.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Our dynamic filter module consists of two parts: a filter-generating network and a dynamic filtering
layer (see Figure 1). The filter-generating network dynamically generates sample-specific filter
parameters conditioned on the network's input. Note that these are not fixed after training, like
regular model parameters. The dynamic filtering layer then applies those sample-specific filters to the
input. Both components of the dynamic filter module are differentiable with respect to the model
parameters such that gradients can be backpropagated throughout the network. The filters can be
convolutional, but other options are possible. In particular, we propose a special kind of dynamic
filtering layer which we coin dynamic local filtering layer, which is not only sample-specific but also
position-specific. The filters in that case vary from position to position and from sample to sample,
allowing for more sophisticated operations on the input. Our framework can learn both spatial and
photometric changes, as pixels are not simply displaced, but the filters possibly operate on entire
neighbourhoods.
We demonstrate the effectiveness of the proposed dynamic filter module on several tasks, including
video prediction and stereo prediction. We also show that, because the dynamic filters are explicitly calculated, they can be visualised as an image similar to an optical flow or stereo map.
Moreover, they are learned in a totally unsupervised way, i.e. without groundtruth maps.
The rest of the paper is organised as follows. In section 2 we discuss related work. Section 3 describes
the proposed method. We show the evaluation in section 4 and conclude the paper in section 5.
2 Related Work
Deep learning architectures Several recent works explore the idea of introducing more flexibility
into the network architecture. Jaderberg et al. [9] propose a module called Spatial Transformer,
which allows the network to actively spatially transform feature maps conditioned on themselves
without explicit supervision. They show this module is able to perform translation, scaling, rotation
and other general warping transformations. They apply this module to a standard CNN network
for classification, making it invariant to a set of spatial transformations. This seminal method only
works with parametric transformations however, and applies a single transformation to the entire
feature map(s). Patraucean et al. [15] extend the Spatial Transformer by modifying the grid generator
such that it has one transformation for each position, instead of a single transformation for the entire
image. They exploit this idea for the task of video frame prediction, applying the learned dense
transformation map to the current frame to generate the next frame. Similarly, our method also applies
a position specific transformation to the image or feature maps and takes video frame prediction as
one testbed. In contrast to their work, our method generates the new image by applying dynamic local
filters to the input image or feature maps instead of using a grid generator and sampler. Our method
is not only able to learn how to displace a pixel, but how to construct it from an entire neighborhood,
including its intensity (e.g. by learning a blur kernel).
In the context of visual question answering, Noh et al. [13] introduce a dynamic parameter layer
whose output is used as the parameters of a fully connected layer. In that work, the dynamic parameter
layer takes the information from another domain, i.e. question representation, as input. They further
apply hashing to address the issue of predicting the large amount of weights needed for a fully
connected layer. Different from their work, we propose to apply the dynamically generated filters
to perform a filtering operation on an image, hence we do not have the same problem of predicting
large amounts of parameters. Our work also shares similar ideas with early work on fast-weight
networks [4], that is, having a network learn to generate context dependent weights for another
network. However, we instantiate this idea as a convolution/local filtering operation with spatial
information under consideration while they use a fully connected layer, and use it as an alternative
for RNN. Most similar to our work, a dynamic convolution layer is proposed by Klein et al. [10] in
the context of short range weather prediction, and by Riegler et al. [17] for non-blind single image super resolution. Our work differs from theirs in that it is more general: dynamic filter
networks are not limited to translation-invariant convolutions, but also allow position-specific filtering
using a dynamic locally connected layer. Lastly, Finn et al. [2] recently independently proposed a
mechanism called (convolutional) dynamic neural advection that is very similar to ours.
2
New view synthesis Our work is also related to works on new view synthesis, that is, generating a
new view conditioned on the given views of a scene. One popular task in this category is to predict
future video frames. Ranzato et al. [16] use an encoder-decoder framework in a way similar to
language modeling. Srivastava et al. [19] propose a multilayer LSTM based autoencoder for both
past frames reconstruction and future frames prediction. This work has been extended by Shi et
al. [18] who propose to use convolutional LSTM to replace the fully connected LSTM in the network.
The use of convolutional LSTM reduces the amount of model parameters and also exploits the local
correlation in the image. Oh et al. [14] address the problem of predicting future frames conditioned
on both previous frames and actions. They propose the encoding-transformation-decoding framework
with either feedforward encoding or recurrent encoding to address this task. Mathieu et al. [12]
manage to generate reasonably sharp frames by means of a multi-scale architecture, an adversarial
training method, and an image gradient difference loss function. In a similar vein, Flynn et al. [3]
apply a deep network to produce unseen views given neighboring views of a scene. Their network
comes with a selection tower and a color tower, and is trained in an end-to-end fashion. This idea
is further refined by Xie et al. [22] for 2D-to-3D conversion. None of these works adapt the filter
operations of the network to the specific input sample, as we do, with the exception of [3, 22]. We'll
discuss the relation between their selection tower and our dynamic filter layer in section 3.3.
Shortcut connections Our work also shares some similarity, through the use of shortcut connections, with the highway network [20] and the residual network [7, 8]. For a module in the highway
network, the transform gate and the carry gate are defined to control the information flow across
layers. Similarly, He et al. [7, 8] propose to reformulate layers as learning residual functions instead
of learning unreferenced functions. Compared to the highway network, residual networks remove the
gates in the highway network module and the path for input is always open throughout the network.
In our network architecture, we also learn a referenced function. Yet, instead of applying addition to
the input, we apply filtering to the input - see section 3.3 for more details.
3 Dynamic Filter Networks
In this section we describe our dynamic filter framework. A dynamic filter module consists of a filter-generating network that produces filters conditioned on an input, and a dynamic filtering layer that applies the generated filters to another input. Both components of the dynamic filter module are differentiable. The two inputs of the module can be either identical or different, depending on the task.
Figure 1: The general architecture of a Dynamic Filter Network.
The general architecture of this module is shown schematically
in Figure 1. We explicitly model the transformation: invariance to change should not imply one
becomes totally blind to it. Moreover, such explicit modeling allows unsupervised learning of
transformation fields like optical flow or depth.
For clarity, we make a distinction between model parameters and dynamically generated parameters.
Model parameters denote the layer parameters that are initialized in advance and only updated during
training. They are the same for all samples. Dynamically generated parameters are sample-specific,
and are produced on-the-fly without a need for initialization. The filter-generating network outputs
dynamically generated parameters, while its own parameters are part of the model parameters.
3.1 Filter-Generating Network
The filter-generating network takes an input IA ∈ R^{h×w×cA}, where h, w and cA are the height, width and number of channels of input A respectively. It outputs filters Fθ parameterized by parameters θ ∈ R^{s×s×cB×n×d}, where s is the filter size, cB the number of channels in input B, and n the number of filters. d is equal to 1 for dynamic convolution and h·w for dynamic local filtering, which we discuss below. The filters are applied to input IB ∈ R^{h×w×cB} to generate an output G = Fθ(IB), with G ∈ R^{h×w×n}. The filter size s determines the receptive field and is chosen depending on the
application. The size of the receptive field can also be increased by stacking multiple dynamic filter
modules. This is for example useful in applications that may involve large local displacements.
The filter-generating network can be implemented with any differentiable architecture, such as a
multilayer perceptron or a convolutional network. A convolutional network is particularly suitable
when using images as input to the filter-generating network.
3.2 Dynamic Filtering Layer
The dynamic filtering layer takes images or feature maps IB as input and outputs the filtered result
G ∈ R^{h×w×n}. For simplicity, in the experiments we only consider a single feature map (cB = 1)
filtered with a single generated filter (n = 1), but this is not required in a general setting. The dynamic
filtering layer can be instantiated as a dynamic convolutional layer or a dynamic local filtering layer.
Dynamic convolutional layer. A dynamic convolutional layer is similar to a traditional convolutional layer in that the same filter is applied at every position of the input IB . But different from the
traditional convolutional layer where filter weights are model parameters, in a dynamic convolutional
layer the filter parameters θ are dynamically generated by a filter-generating network:
G(i, j) = Fθ(IB(i, j))    (1)
The filters are sample-specific and conditioned on the input of the filter-generating network. The
dynamic convolutional layer is shown schematically in Figure 2(a). Given some prior knowledge
about the application at hand, it is sometimes possible to facilitate training by constraining the
generated convolutional filters in a certain way. For example, if the task is to produce a translated
version of the input image IB where the translation is conditioned on another input IA , the generated
filter can be sent through a softmax layer to encourage elements to only have a few high magnitude
elements. We can also make the filter separable: instead of a single square filter, generate separate
horizontal and vertical filters that are applied to the image consecutively similar to what is done
in [10].
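A minimal sketch of a dynamic convolution in PyTorch, assuming batch size 1, a single channel, and a filter-generating network `fgn` (defined elsewhere) that emits s·s filter coefficients; this is illustrative code, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def dynamic_conv(input_a, input_b, fgn, s=9):
    """Filter input_b (1, 1, H, W) with a single filter generated from input_a."""
    theta = fgn(input_a)                           # assumed shape: (1, s * s)
    filt = theta.softmax(dim=-1).view(1, 1, s, s)  # softmax favours few strong taps
    return F.conv2d(input_b, filt, padding=s // 2)
```

Because the filter is produced by a differentiable network, gradients flow back into `fgn` through the convolution.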
Dynamic local filtering layer. An extension of the dynamic convolution layer that proves interesting,
as we show in the experiments, is the dynamic local filtering layer. In this layer the filtering operation
is not translation invariant anymore. Instead, different filters are applied to different positions of the
input IB similarly to the traditional locally connected layer: for each position (i, j) of the input IB , a
specific local filter Fθ(i,j) is applied to the region centered around IB(i, j):
G(i, j) = Fθ(i,j)(IB(i, j))    (2)
The filters used in this layer are not only sample specific but also position specific. Note that dynamic
convolution as discussed in the previous section is a special case of local dynamic filtering where
the local filters are shared over the image?s spatial dimensions. The dynamic local filtering layer
is shown schematically in Figure 2b. If the generated filters are again constrained with a softmax
function so that each filter only contains one non-zero element, then the dynamic local filtering layer
replaces each element of the input IB by an element selected from a local neighbourhood around
it. This offers a natural way to model local spatial deformations conditioned on another input IA .
The dynamic local filtering layer can perform not only a single transformation like the dynamic
convolutional layer, but also position-specific transformations like local deformation. Before or after
applying the dynamic local filtering operation we can add a dynamic pixel-wise bias to each element
of the input IB to address situations like photometric changes. This dynamic bias can be produced by
the same filter-generating network that generates the filters for the local filtering.
When inputs IA and IB are both images, a natural way to implement the filter-generating network is
with a convolutional network. This way, the generated position-specific filters are conditioned on the
local image region around their corresponding position in IA . The receptive field of the convolutional
network that generates the filters can be increased by using an encoder-decoder architecture. We can
also apply a smoothness penalty to the output of the filter-generating network, so that neighboring
filters are encouraged to apply the same transformation.
Another advantage of the dynamic local filtering layer over the traditional locally connected layer is
that we do not need so many model parameters. The learned model is smaller and this is desirable in
embedded system applications.
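Dynamic local filtering can be written compactly with an unfold (im2col) operation. A sketch under the same simplifying assumptions as before (batch size 1, one channel, per-pixel filter coefficients already produced by the filter-generating network):

```python
import torch
import torch.nn.functional as F

def dynamic_local_filter(input_b, filters, s=9):
    """input_b: (1, 1, H, W); filters: (1, s * s, H * W), one s x s filter per position."""
    patches = F.unfold(input_b, kernel_size=s, padding=s // 2)  # (1, s*s, H*W)
    weights = filters.softmax(dim=1)                            # normalise each local filter
    out = (patches * weights).sum(dim=1)                        # weighted sum per position
    return out.view(1, 1, *input_b.shape[2:])
```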
Figure 2: Left: dynamic convolution: the filter-generating network produces a single filter that is applied convolutionally on IB. Right: dynamic local filtering: each location is filtered with a location-specific dynamically generated filter.
3.3 Relationship with other networks
The generic formulation of our framework allows us to draw parallels with other networks in the
literature. Here we discuss the relation with the spatial transformer networks [9], the deep stereo
network [3, 22], and the residual networks [7, 8].
Spatial Transformer Networks The proposed dynamic filter network shares the same philosophy
as the spatial transformer network proposed by [9], in that it applies a transformation conditioned on
an input to a feature map. The spatial transformer network includes a localization network which
takes a feature map as input, and it outputs the parameters of the desired spatial transformation. A
grid generator and sampler are needed to apply the desired transformation to the feature map. This
idea is similar to our dynamic filter network, which uses a filter-generating network to compute the
parameters of the desired filters. The filters are applied on the feature map with a simple filtering
operation that only consists of multiplication and summation operations.
A spatial transformer network is naturally suited for global transformations, even sophisticated ones
such as a thin plate spline. The dynamic filter network is more suitable for local transformations,
because of the limited receptive field of the generated filters, although this problem can be alleviated
with larger filters, stacking multiple dynamic filter modules, and using multi-resolution extensions. A
more fundamental difference is that the spatial transformer is only suited for spatial transformations,
whereas the dynamic filter network can apply more general ones (e.g. photometric, filtering), as long
as the transformation is implementable as a series of filtering operations. This is illustrated in the first
experiment in the next section.
Deep Stereo The deep stereo network of [3] can be seen as a specific instantiation of a dynamic
filter network with a local filtering layer where inputs IA and IB denote the same image, only a
horizontal filter is generated and softmax is applied to each dynamic filter. The effect of the selection
tower used in their network is equivalent to the proposed dynamic local filtering layer. For the specific
task of stereo prediction, they use a more complicated architecture for the filter-generating network.
Residual Networks The core idea of ResNets [7, 8] is to learn a residual function with respect to the identity mapping, which is implemented as an additive shortcut connection. In the dynamic filter network, we also have two branches where one branch acts as a shortcut connection. This becomes clear when we redraw the diagram (Figure 3). Instead of merging the branches with addition, we merge them with a dynamic filtering layer which is multiplicative in nature. Multiplicative interactions in neural networks have also been investigated by [21].
Figure 3: Relation with residual networks.
Table 1: Left: quantitative results on Moving MNIST: number of model parameters and average binary cross-entropy (bce). Right: the dynamic filter network for video prediction (frames t−2, t−1, t are encoded; softmax-normalized filters are generated and applied to frame t to predict frame t+1).

Moving MNIST
Model                | # params    | bce
FC-LSTM [19]         | 142,667,776 | 341.2
Conv-LSTM [18]       | 7,585,296   | 367.1
Spatio-temporal [15] | 1,035,067   | 179.8
Baseline (ours)      | 637,443     | 432.5
DFN (ours)           | 637,361     | 285.2
4 Experiments
The Dynamic Filter Network can be used in different ways in a wide variety of applications. In this section we show its application in learning steerable filters, video prediction and stereo prediction. All code to reproduce the experiments is available at https://github.com/dbbert/dfn.
4.1 Learning steerable filters
Figure 4: The dynamic filter network for learning steerable filters (e.g. θ = 45°) and several examples of learned filters (at 0°, 90°, 139.2°, 180°, 242.9°).
We first set up a simple experiment to illustrate the basics of the dynamic filter module with a dynamic convolution layer. The task is to filter an input image with a steerable filter of a given orientation θ. The network must learn this
transformation from looking at input-output pairs, consisting of randomly chosen input images and
angles together with their corresponding output.
The task of the filter-generating network here is to transform an angle into a filter, which is then
applied to the input image to generate the final output. We implement the filter-generating network as
a few fully-connected layers with the last layer containing 81 neurons, corresponding to the elements
of a 9x9 convolution filter. Figure 4 shows an example of the trained network. It has indeed learned
the expected filters and applies the correct transformation to the image.
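For reference, the classic closed form of a first-order steerable filter — the target the network has to reproduce — is Gθ = cos(θ)·Gx + sin(θ)·Gy, where Gx and Gy are the partial derivatives of a Gaussian. A sketch (the 9×9 size matches the experiment; the Gaussian width is an arbitrary choice):

```python
import numpy as np

def steerable_filter(theta, size=9, sigma=2.0):
    """First derivative of a Gaussian steered to orientation theta (radians)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    gx, gy = -x * g / sigma**2, -y * g / sigma**2  # partial derivatives of the Gaussian
    return np.cos(theta) * gx + np.sin(theta) * gy

f45 = steerable_filter(np.deg2rad(45.0))
```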
4.2 Video prediction
In video prediction, the task is to predict the sequence of future frames that follows the given sequence
of input frames. To address this task we use a convolutional encoder-decoder as the filter-generating
network where the encoder consists of several strided convolutional layers and the decoder consists
of several unpooling layers and convolutional layers. The convolutional encoder-decoder is able to
exploit the spatial correlation within a frame and generates feature maps that are of the same size
as the frame. To exploit the temporal correlation between frames we add a recurrent connection
inside the filter-generating network: we pass the previous hidden state through two convolutional
layers and sum it with the output of the encoder to produce the new hidden state. During prediction,
we propagate the prediction from the previous time step. Table 1 (right) shows a diagram of our
architecture. Note that we use a very simple recurrent architecture rather than the more advanced
LSTM as in [19, 18]. A softmax layer is applied to each generated filter such that each filter is
encouraged to only have a few high magnitude elements. This helps the dynamic filtering layer to
generate sharper images because each pixel in the output image comes from only a few pixels in the
previous frame. To produce the prediction of the next frame, the generated filters are applied on the
previous frame to transform it with the dynamic local filtering mechanism explained in Section 3.
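A sketch of the recurrent filter-generating network described above (PyTorch; the channel count, kernel sizes, and single-convolution encoder/decoder are illustrative stand-ins for the paper's exact architecture):

```python
import torch
import torch.nn as nn

class RecurrentFGN(nn.Module):
    """Recurrence h_t = enc(x_t) + conv(conv(h_{t-1})); emits s*s filter coefficients per pixel."""
    def __init__(self, ch=32, s=9):
        super().__init__()
        self.encoder = nn.Conv2d(1, ch, 3, padding=1)      # stand-in for the conv encoder
        self.rec1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.rec2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.decoder = nn.Conv2d(ch, s * s, 3, padding=1)  # stand-in for the decoder

    def forward(self, x, h_prev):
        h = self.encoder(x) + self.rec2(torch.relu(self.rec1(h_prev)))
        return self.decoder(h), h                          # per-pixel filters and new state
```

The (1, s·s, H, W) output can be flattened to (1, s·s, H·W) and fed to the dynamic local filtering sketch from Section 3.2.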
Moving MNIST We first evaluate the method on the synthetic moving MNIST dataset [19]. Given
a sequence of 10 frames with two moving digits as input, the goal is to predict the following 10 frames.
We use the code provided by [19] to generate training samples on-the-fly, and use the provided test
Figure 5: Qualitative results on moving MNIST (input sequence; ground truth and prediction). Note that the network has learned the bouncing dynamics and separation of overlapping digits. More examples and out-of-domain results are in the supplementary material.
Figure 6: Qualitative results of video prediction on the Highway Driving dataset (input sequence; ground truth and prediction). Note the good prediction of the lanes (red), bridge (green) and a car moving in the opposite direction (purple).
set for comparison. Only simple pre-processing is done to convert pixel values into the range [0,1].
As the loss function we use average binary cross-entropy over the 10 frames. The size of the dynamic
filters is set to 9x9. This allows the network to translate pixels over a distance of at most 4 pixels,
which is sufficient for this dataset. Details on the hyper-parameters can be found in the available code.
We also compare our results with a baseline consisting of only the filter-generating network, followed
by a 1 ? 1 convolution layer. This way, the baseline network has approximately the same structure and
number of parameters as the proposed dynamic filter network. The quantitative results are shown in
Table 1 (left). Our method outperforms the baseline and [19, 18] with a much smaller model. Figure 5
shows some qualitative results. Our method is able to correctly learn the individual motions of digits.
We observe that the predictions deteriorate over time, i.e. the digits become blurry. This is partly
because of the model error: our model is not able to perfectly separate digits after an overlap, and
these errors accumulate over time. Another cause of blurring comes from an artifact of the dataset:
because of imperfect cropping, it is uncertain when exactly the digit will bounce and change its
direction. The behavior is not perfectly deterministic. This uncertainty combined with the pixel-wise
loss function encourages the model to "hedge its bets" when a digit reaches the boundary, causing a
blurry result. This issue could be alleviated with the methods proposed in [5, 6, 11].
Highway Driving We also evaluate our method on real-world data of a car driving on the highway.
Compared to natural video like UCF101 used in [16, 12], the highway driving data is highly structured
and much more predictable, making it a good testbed for video prediction. We add a small extension
to the architecture: a dynamic per-pixel bias is added to the image before the filtering operation. This
allows the network to handle illumination changes such as when the car drives through a tunnel.
Because the Highway Driving sequence is less deterministic than moving MNIST, we only predict
the next 3 frames given an input sequence of 3 frames. We split the approximately 20,000 frames
of the 30-minute video into a training set of 16,000 frames and a test set of 4,000 frames. We train
with a Euclidean loss function and obtain a loss of 13.54 on the test set with a model consisting of
368,122 parameters, beating the baseline which gets a loss of 15.97 with 368,245 parameters.
Figure 6 shows some qualitative results. Similar to the experiments on moving MNIST, the predictions
get blurry over time. This can partly be attributed to the increasing uncertainty combined with an
Figure 7: Some samples for video (left) and stereo (right) prediction and visualization of the dynamically
generated filters. More examples and a video can be found in the supplementary material.
element-wise loss function, which encourages averaging out the possible predictions. Moreover, the
errors accumulate over time and make the network operate in an out-of-domain regime.
We can visualize the dynamically generated filters of the trained model in a flow-like manner. The
result is shown in Figure 7 and the visualization process is explained in the supplementary material.
Note that the network seems to generate "valid" flow only insofar as it helps with minimizing its
video prediction objective. This is sometimes noticeable in uniform, textureless regions of the image,
where a valid optical flow is no prerequisite for correctly predicting the next frame. Although the
flow map is not perfectly smooth, it is learned in a self-supervised way by only training on unlabeled
video data. This is different from supervised methods like [1].
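As an illustration of how such filters can be read as flow, the sketch below converts a softmax-normalized filter into an expected displacement vector (its center of mass). This is our own assumption about one reasonable visualization, since the paper defers the exact procedure to the supplementary material.

```python
import numpy as np

def filter_to_flow(f, k=9):
    """Interpret a softmaxed k x k filter as a 2D displacement.

    f: (k*k,) nonnegative weights summing to 1 for one output pixel.
    Returns (dy, dx): the filter's center of mass relative to the
    window center, i.e. where the output pixel is copied from.
    """
    w = f.reshape(k, k)
    ys, xs = np.mgrid[0:k, 0:k]
    c = k // 2
    dy = (w * (ys - c)).sum()
    dx = (w * (xs - c)).sum()
    return dy, dx
```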
4.3 Stereo prediction
We define stereo prediction as predicting the right view given the left view of a stereo camera. This
task is a variant of video prediction, where the goal is to predict a new view in space rather than in
time, and from a single image rather than multiple ones. Flynn et al. [3] developed a network for new
view synthesis from multiple views in unconstrained settings like museums, parks and streets. We limit
ourselves to the more structured Highway Driving dataset and a classical two-view stereo setup.
We recycle the architecture from the previous section, and replace the square 9x9 filters with horizontal
13x1 filters. The network is trained and evaluated on the same train and test split as in the previous
section, with the left view as input and the right one as target. It reaches a loss of 0.52 on the test set
with a model consisting of 464,494 parameters. The baseline obtains a loss of 1.68 with 464,509
parameters. The network has learned to shift objects to the left depending on their distance to the
camera, as shown in Figure 7 (right). The results suggest that it is possible to use the proposed
dynamic filter network architecture to pre-train networks for optical flow and disparity map estimation
in a self-supervised manner using only unlabeled data.
5 Conclusion
In this paper we introduced Dynamic Filter Networks, a class of networks that applies dynamically
generated filters to an image in a sample-specific way. We discussed two versions: dynamic convolution and dynamic local filtering. We validated our framework in the context of steerable filters, video
prediction and stereo prediction. As future work, we plan to explore the potential of dynamic filter
networks on other tasks, such as finegrained image classification, where filters could learn to adapt to
the object pose, or image deblurring, where filters can be tuned to adapt to the image structure.
6 Acknowledgements
This work was supported by FWO through the project G.0696.12N "Representations and algorithms
for captation, visualization and manipulation of moving 3D objects, subjects and scenes", the EU
FP7 project Europa2, the iMinds ICON project Footwork and a bilateral Toyota project.
References
[1] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Häusser, Caner Hazirbas, Vladimir Golkov, Patrick
van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional
networks. In ICCV, 2015.
[2] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through
video prediction. In NIPS, 2016.
[3] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. Deepstereo: Learning to predict new views
from the world's imagery. In CVPR, 2015.
[4] Faustino J. Gomez and Jürgen Schmidhuber. Evolving modular fast-weight networks for control. In
ICANN, 2005.
[5] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[6] Ross Goroshin, Michaël Mathieu, and Yann LeCun. Learning to linearize under uncertainty. In NIPS,
2015.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
CoRR, abs/1512.03385, 2015.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks.
CoRR, abs/1603.05027, 2016.
[9] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer
networks. In NIPS, 2015.
[10] Benjamin Klein, Lior Wolf, and Yehuda Afek. A dynamic convolutional layer for short range weather
prediction. In CVPR, 2015.
[11] Anders B. L. Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond
pixels using a learned similarity metric. In ICML, 2016.
[12] Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean
square error. In ICLR, 2016.
[13] Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. Image question answering using convolutional
neural network with dynamic parameter prediction. In CVPR, 2016.
[14] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video
prediction using deep networks in atari games. In NIPS, 2015.
[15] Viorica Patraucean, Ankur Handa, and Roberto Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015.
[16] Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michaël Mathieu, and R. Collobert. Video (language)
modeling: a baseline for generative models of natural videos. CoRR, abs/1412.6604, 2014.
[17] Gernot Riegler, Samuel Schulter, Matthias Rüther, and Horst Bischof. Conditioned regression models for
non-blind single image super-resolution. In ICCV, 2015.
[18] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.
[19] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[20] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS,
2015.
[21] G. Taylor and G. Hinton. Factored conditional restricted boltzmann machines for modeling motion style.
In ICML, 2009.
[22] Junyuan Xie, Ross Girshick, and Ali Farhadi. Deep3d: Fully automatic 2D-to-3D video conversion with
deep convolutional neural networks. In ECCV, 2016.
[23] Jimei Yang, Scott Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with
recurrent transformations for 3d view synthesis. In NIPS, 2015.
[24] Junho Yim, Heechul Jung, ByungIn Yoo, Changkyu Choi, Du-Sik Park, and Junmo Kim. Rotating your
face using multi-task deep neural network. In CVPR, 2015.
Gradient-based Sampling: An Adaptive Importance
Sampling for Least-squares
Rong Zhu
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China.
[email protected]
Abstract
In modern data analysis, random sampling is an efficient and widely-used strategy
to overcome the computational difficulties brought by large sample size. In previous
studies, researchers conducted random sampling according to the input
data but independent of the response variable; however, the response variable may
also be informative for sampling. In this paper we propose an adaptive sampling
called the gradient-based sampling which is dependent on both the input data
and the output for fast solving of least-square (LS) problems. We draw the data
points by random sampling from the full data according to their gradient values.
This sampling is computationally saving, since the running time of computing
the sampling probabilities is reduced to O(nd) where n is the full sample size
and d is the dimension of the input. Theoretically, we establish an error bound
analysis of the general importance sampling with respect to LS solution from full
data. The result establishes an improved performance guarantee for the use of our gradient-based sampling. Synthetic and real data sets are used to empirically argue that the
gradient-based sampling has an obvious advantage over existing sampling methods
from two aspects of statistical efficiency and computational saving.
1 Introduction
Modern data analysis always addresses enormous data sets in recent years. Facing the increasing large
sample data, computational savings play a major role in the data analysis. One simple way to reduce
the computational cost is to perform random sampling, that is, one uses a small proportion of the data
as a surrogate of the full sample for model fitting and statistical inference. Among random sampling
strategies, uniform sampling is a simple but trivial way since it fails to exploit the unequal importance
of the data points. As an alternative, leverage-based sampling is to perform random sampling with
respect to nonuniform sampling probabilities that depend on the empirical statistical leverage scores
of the input matrix X. It has been intensively studied in the machine learning community and has
been proved to achieve much better results for worst-case input than uniform sampling [1, 2, 3, 4].
However it is known that leverage-based sampling relies on input data but is independent of the
output variable, so does not make use of the information of the output. Another shortcoming is that it
needs to cost much time to get the leverage scores, although approximating leverage scores has been
proposed to further reduce the computational cost [5, 6, 7].
In this paper, we propose an adaptive importance sampling, the gradient-based sampling, for solving
least-square (LS) problem. This sampling attempts to sufficiently make use of the data information
including the input data and the output variable. This adaptive process can be summarized as follows:
given a pilot estimate (good "guess") for the LS solution, determine the importance of each data
point by calculating the gradient value, then sample from the full data by importance sampling
according to the gradient value. One key contribution of this sampling is to save more computational
time than leverage-based sampling, and the running time of getting the probabilities is reduced to
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
O(nd) where n is the sample size and d is the input dimension. It is worth noting that, although we
apply gradient-based sampling into the LS problem, we believe that it may be extended to fast solve
other large-scale optimization problems as long as the gradient of optimization function is obtained.
However this is out of the scope so we do not extend it in this paper.
Theoretically, we give the risk analysis, i.e., an error bound of the LS solution from random sampling. [8]
and [9] gave the risk analysis of approximating LS by Hadamard-based projection and covariance-thresholded regression, respectively. However, no such analysis has been studied for importance sampling.
The error bound analysis is a general result on any importance sampling as long as the conditions hold.
By this result, we establish an improved performance guarantee on the use of our gradient-based
sampling. It is improved in the sense that our gradient-based sampling can make the bound approximately attain its minimum, while previous sampling methods cannot achieve this aim. Additionally, the
non-asymptotic result also provides a way of balancing the tradeoff between the subsample size and
the statistical accuracy.
Empirically, we conduct detailed experiments on datasets generated from the mixture Gaussian and
real datasets. We argue by these empirical studies that the gradient-based sampling is not only more
statistically efficient than leverage-based sampling but also much computationally cheaper from the
computational viewpoint. Another important aim of detailed experiments on synthetic datasets is to
guide the use of the sampling in different situations that users may encounter in practice.
The remainder of the paper is organized as follows: In Section 2, we formally describe the random
sampling algorithm to solve LS, then establish the gradient-based sampling in Section 3. The non-asymptotic analysis is provided in Section 4. We study the empirical performance on synthetic and
real world datasets in Section 5.
Notation: For a symmetric matrix $M \in \mathbb{R}^{d \times d}$, we define $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ as its smallest and
largest eigenvalues. For a vector $v \in \mathbb{R}^d$, we define $\|v\|$ as its L2 norm.
2 Problem Set-up
For the LS problem, suppose that there are an $n \times d$ matrix $X = (x_1, \cdots, x_n)^T$ and an $n \times 1$ response
vector $y = (y_1, \cdots, y_n)^T$. We focus on the setting $n \gg d$. The LS problem is to minimize the
sample risk function of parameters $\beta$ as follows:
$$\sum_{i=1}^{n} (y_i - x_i^T \beta)^2 / 2 =: \sum_{i=1}^{n} l_i. \qquad (1)$$
The solution of equation (1) takes the form of
$$\hat{\beta}_n = (n^{-1} X^T X)^{-1} (n^{-1} X^T y) =: \Sigma_n^{-1} b_n, \qquad (2)$$
where $\Sigma_n = n^{-1} X^T X$ and $b_n = n^{-1} X^T y$. However, the challenge of large sample size also exists
in this simple problem, i.e., the sample size $n$ is so large that the computational cost of calculating
the LS solution (2) is very expensive or even not affordable.
We perform the random sampling algorithm as follows:
(a) Assign sampling probabilities $\{\pi_i\}_{i=1}^n$ for all data points such that $\sum_{i=1}^{n} \pi_i = 1$;
(b) Get a subsample $S = \{(x_i, y_i) : i \text{ is drawn}\}$ by random sampling according to the probabilities;
(c) Minimize a weighted loss function to get an estimate $\hat{\beta}$:
$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^d} \sum_{i \in S} \frac{1}{2\pi_i} \|y_i - x_i^T \beta\|^2 = \Sigma_s^{-1} b_s, \qquad (3)$$
where $\Sigma_s = \frac{1}{n} X_s^T \Omega_s^{-1} X_s$, $b_s = \frac{1}{n} X_s^T \Omega_s^{-1} y_s$, and $X_s$, $y_s$ and $\Omega_s$ are the partitions of $X$, $y$ and
$\Omega = \mathrm{diag}\{r\pi_i\}_{i=1}^n$, with the subsample size $r$, corresponding to the subsample $S$. Note
that the last equality in (3) holds under the assumption that $\Sigma_s$ is invertible. Throughout this paper,
we assume that $\Sigma_s$ is invertible for convenience, since $d \ll n$ in our setting, and it can be replaced
with its regularized version if it is not invertible.
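To make step (c) concrete, here is a minimal NumPy sketch (ours, with our own function and variable names) of the weighted subsample estimator in equation (3), assuming the subsample indices and their probabilities are given.

```python
import numpy as np

def weighted_subsample_ls(X, y, idx, pi, r):
    """Weighted LS on a subsample, cf. equation (3).

    X:   (n, d) design matrix;  y: (n,) responses.
    idx: indices of the sampled points (the set S).
    pi:  (n,) sampling probabilities; r: expected subsample size.
    """
    n = len(y)
    Xs, ys = X[idx], y[idx]
    w = 1.0 / (r * pi[idx])                      # inverse-probability weights
    Sigma_s = (Xs * w[:, None]).T @ Xs / n
    b_s = (Xs * w[:, None]).T @ ys / n
    return np.linalg.solve(Sigma_s, b_s)         # Sigma_s^{-1} b_s
```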
How to construct $\{\pi_i\}_{i=1}^n$ is a key component of a random sampling algorithm. One simple method
is uniform sampling, i.e., $\pi_i = n^{-1}$; another is leverage-based sampling, i.e.,
$\pi_i \propto x_i^T (X^T X)^{-1} x_i$. In the next section, we introduce a new efficient method, gradient-based sampling,
which draws data points according to the gradient value of each data point.
Related Work. [10, 11, 4] developed leverage-based sampling in matrix decomposition. [10, 12]
applied the sampling method to approximate the LS solution. [13] derived the bias and variance
formulas for the leverage-based sampling algorithm in linear regression using the Taylor series
expansion. [14] further provided upper bounds for the mean-squared error and the worst-case error of
randomized sketching for the LS problem. [15] proposed a sampling-dependent error bound then
implied a better sampling distribution by this bound. Fast algorithms for approximating leverage
scores $\{x_i^T (X^T X)^{-1} x_i\}_{i=1}^n$ were proposed to further reduce the computational cost [5, 6, 7].
3 Gradient-based Sampling Algorithm
The gradient-based sampling uses a pilot solution of the LS problem to compute the gradient of the
objective function, and then samples a subsample data set according to the calculated gradient values.
It differs from leverage-based sampling in that the sampling probability $\pi_i$ is allowed to depend on
the input data $X$ as well as $y$. Given a pilot estimate (good guess) $\beta_0$ for parameters $\beta$, we calculate the
gradient for the $i$th data point:
$$g_i = \frac{\partial l_i(\beta_0)}{\partial \beta_0} = x_i (y_i - x_i^T \beta_0). \qquad (4)$$
The gradient represents the slope of the tangent of the loss function, so logically if the gradients of data
points are large in some sense, these data points are important for finding the optimum. Our sampling strategy
makes use of the gradient upon observing $y_i$ given $x_i$, and specifically,
$$\pi_i^0 = \|g_i\| \Big/ \sum_{i=1}^{n} \|g_i\|. \qquad (5)$$
Equations (4) and (5) mean that $\|g_i\|$ includes two parts of information: one is $\|x_i\|$, which is the
information provided by the input data, and the other is $|y_i - x_i^T \beta_0|$, which is considered to provide a
justification from the pilot estimate $\beta_0$ to a better estimate. Figure 1 illustrates the efficiency benefit
of the gradient-based sampling by constructing the following simple example. The figure shows that
the data points with larger $|y_i - x_i \beta_0|$ are probably considered to be more important in approximating
the solution. On the other side, given $|y_i - x_i \beta_0|$, we hope to choose the data points with larger $\|x_i\|$
values, since larger $\|x_i\|$ values probably make the approximate solution more efficient. From the
computation view, calculating $\{\pi_i^0\}_{i=1}^n$ costs $O(nd)$, so the gradient-based sampling saves much
computational cost.
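A minimal sketch of equations (4)-(5), written by us: the probabilities are obtained in O(nd) time from the residuals at the pilot estimate, using $\|g_i\| = |y_i - x_i^T \beta_0|\,\|x_i\|$.

```python
import numpy as np

def gradient_probabilities(X, y, beta0):
    """Gradient-based sampling probabilities, cf. equations (4)-(5)."""
    resid = y - X @ beta0                                # y_i - x_i^T beta_0
    g_norm = np.abs(resid) * np.linalg.norm(X, axis=1)   # ||g_i||
    return g_norm / g_norm.sum()                         # pi_i^0
```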
Figure 1: An illustrative example. 12 data points are generated from $y_i = x_i + e_i$ where
$x_i = (\pm 3, \pm 2.5, \pm 2, \pm 1.5, \pm 1, \pm 0.5)$ and $e_i \sim N(0, 0.5)$. The LS solution, denoted by the red line,
is $\hat{\beta} = \sum_{i=1}^{12} x_i y_i / \sum_{i=1}^{12} x_i^2$. The pilot estimate, denoted by the dashed line, is $\beta_0 = 0.5$.
Choosing the pilot estimate $\beta_0$. In many applications, there may be a natural choice of pilot estimate
$\beta_0$; for instance, the fit from last time is a natural choice for this time. Another simple way is to use
a pilot estimate $\beta_0$ from an initial subsample of size $r_0$ obtained by uniform sampling. The extra
computational cost is $O(r_0 d^2)$, which is assumed to be small since a choice $r_0 \leq r$ will be good
enough. We empirically show the effect of small $r_0$ ($r_0 \leq r$) on the performance of the gradient-based sampling by simulations, and argue that one does not need to be careful when choosing $r_0$ to
get a pilot estimate (see Supplementary Material, Section S1).
Poisson sampling vs. sampling with replacement. In this study, we do not choose sampling with
replacement as was done in previous studies, but instead apply Poisson sampling in this algorithm. Poisson
sampling is executed in the following way: proceed down the list of elements and carry out one
randomized experiment for each element, which results either in the election or in the nonselection of
the element [16]. Thus, Poisson sampling can improve the efficiency in some context compared to
sampling with replacement since it can avoid repeatedly drawing the same data points, especially
when the sampling ratio increases. We empirically illustrate this advantage of Poisson sampling
compared to sampling with replacement (see Supplementary Material, Section S2).
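For concreteness, a small sketch (our own) contrasting the two schemes: Poisson sampling runs one Bernoulli trial per point, so no point is drawn twice, while sampling with replacement may draw the same point repeatedly.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_sample(pi, r):
    """One Bernoulli(r * pi_i) trial per point; no duplicates."""
    p = np.minimum(r * pi, 1.0)
    return np.where(rng.random(len(pi)) < p)[0]

def with_replacement_sample(pi, r):
    """r i.i.d. draws; the same point may appear several times."""
    return rng.choice(len(pi), size=r, replace=True, p=pi)
```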
Independence on model assumption. LS solution is well known to be statistically efficient under
the linear regression model with homogeneous errors, but model misspecification is ubiquitous in real
applications. On the other hand, LS solution is also an optimization problem without any linear model
assumption from the algorithmic view. To numerically show the independence of the gradient-based
sampling on model assumption, we do simulation studies and find that it is an efficient sampling
method from the algorithmic perspective. (see Supplementary Material, Section S3)
Now as a summary we present the gradient-based sampling in Algorithm 1.

Algorithm 1 Gradient-based sampling algorithm
• Pilot estimate $\beta_0$:
  (1) Have a good guess as the pilot estimate $\beta_0$, or use the initial estimate $\beta_0$ from an initial
      subsample of size $r_0$ by uniform sampling as the pilot estimate.
• Gradient-based sampling:
  (2) Assign sampling probabilities $\{\pi_i \propto \|g_i\|\}_{i=1}^n$ for all data points such that $\sum_{i=1}^{n} \pi_i = 1$.
  (3) Generate independent $s_i \sim \mathrm{Bernoulli}(1, p_i)$, where $p_i = r\pi_i$ and $r$ is the expected
      subsample size.
  (4) Get a subsample by selecting the elements corresponding to $\{s_i = 1\}$, that is, if $s_i = 1$,
      the $i$th data point is chosen, otherwise not.
• Estimation:
  (5) Solve the LS problem on the subsample using equation (3), which yields the subsample
      estimator $\hat{\beta}$.

Remarks on Algorithm 1. (a) The subsample size $r^*$ from Poisson sampling is random in Algorithm
1. Since $r^*$ is multinomially distributed with expectation $E(r^*) = \sum_{i=1}^{n} p_i = r$ and variance
$Var(r^*) = \sum_{i=1}^{n} p_i (1 - p_i)$, the range of probable values of $r^*$ can be assessed by an interval. In
practice we just need to set the expected subsample size $r$. (b) If the $\pi_i$'s are so large that $p_i = r\pi_i > 1$
for some data points, we should take $p_i = 1$, i.e., $\pi_i = 1/r$, for them.
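Putting the pieces together, below is a compact end-to-end sketch of Algorithm 1 in NumPy. It is our illustrative rendering (function names are ours): a uniform pilot subsample for $\beta_0$, gradient-based probabilities, Poisson sampling, and the weighted LS of equation (3).

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_based_sampling_ls(X, y, r, r0=None):
    """Algorithm 1: gradient-based sampling for least squares."""
    n, d = X.shape
    r0 = r if r0 is None else r0

    # Step (1): pilot estimate from a uniform subsample of size r0.
    pilot = rng.choice(n, size=r0, replace=False)
    beta0, *_ = np.linalg.lstsq(X[pilot], y[pilot], rcond=None)

    # Step (2): probabilities proportional to the gradient norms.
    g = np.abs(y - X @ beta0) * np.linalg.norm(X, axis=1)
    pi = g / g.sum()

    # Steps (3)-(4): Poisson sampling with p_i = min(r * pi_i, 1).
    p = np.minimum(r * pi, 1.0)
    S = np.where(rng.random(n) < p)[0]

    # Step (5): weighted LS on the subsample, cf. equation (3).
    w = np.sqrt(1.0 / p[S])        # square roots of inverse probabilities
    beta, *_ = np.linalg.lstsq(X[S] * w[:, None], y[S] * w, rcond=None)
    return beta
```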
4 Error Bound Analysis of Sampling Algorithms
Our main theoretical result establishes the excess risk, i.e., an upper error bound of the subsample
estimator $\hat{\beta}$ with respect to $\hat{\beta}_n$, for a random sampling method. Given sampling probabilities
$\{\pi_i\}_{i=1}^n$, the excess risk of the subsample estimator $\hat{\beta}$ with respect to $\hat{\beta}_n$ is given in Theorem 1
(see Section S4 in the Supplementary Material for the proof). By this general result, we provide an
explanation of why the gradient-based sampling algorithm is statistically efficient.

Theorem 1 Define $\sigma_\pi^2 = \frac{1}{n^2} \sum_{i=1}^{n} \pi_i^{-1} \|x_i\|^4$ and $\sigma_b^2 = \frac{1}{n^2} \sum_{i=1}^{n} \frac{1}{\pi_i} \|x_i\|^2 e_i^2$,
where $e_i = y_i - x_i^T \hat{\beta}_n$, and $R = \max\{\|x_i\|^2\}_{i=1}^{n}$. If
$$r > \frac{\sigma_\pi^2 \log d}{\lambda_{\min}^2(\Sigma_n)\, \eta^2 \left(2^{-1} - (3n\eta)^{-1} R \log d\right)^2}$$
holds, the excess risk of $\hat{\beta}$ for approximating $\hat{\beta}_n$ is bounded, in probability $1 - \eta$ for
$\eta > \frac{R \log d}{3n \lambda_{\min}(\Sigma_n)}$, as
$$\|\hat{\beta} - \hat{\beta}_n\| \leq C r^{-1/2}, \qquad (6)$$
where $C = 2 \lambda_{\min}^{-1}(\Sigma_n)\, \eta^{-1} \sigma_b$.
Theorem 1 indicates that $\|\hat{\beta} - \hat{\beta}_n\|$ can be bounded by $Cr^{-1/2}$. From (6), the choice of sampling
method has no effect on the decreasing rate of the bound, $r^{-1/2}$, but influences the constant $C$. Thus,
a theoretical measure of efficiency for a sampling method is whether it can make the constant
$C$ attain its minimum. In Corollary 1 (see Section S5 in the Supplementary Material for the proof), we
show that Algorithm 1 can approximately achieve this aim.

Remarks on Theorem 1. (a) Theorem 1 can be used to guide the choice of $r$ in practice so as to
guarantee the desired accuracy of the solution with high probability. (b) The constants $\sigma_b$, $\lambda_{\min}(\Sigma_n)$
and $\sigma_\pi$ can be estimated based on the subsample. (c) The risk of $X\hat{\beta}$ to predict $X\hat{\beta}_n$ follows
from equation (6): $\|X\hat{\beta} - X\hat{\beta}_n\|/n \leq C r^{-1/2} \lambda_{\max}^{1/2}(\Sigma_n)$. (d) Although Theorem 1
is established under Poisson sampling, we can easily extend the error bound to sampling with
replacement by following the technical proofs in the Supplementary Material, since each drawing in
sampling with replacement is considered to be independent.
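Following remarks (a) and (b), here is a small sketch (ours) of how one might estimate the bound constant from a subsample and invert (6) to pick $r$ for a target accuracy. The plug-in estimators and the handling of the probability parameter $\eta$ are our own assumptions, based on our reconstruction of (6), not a formula given in the paper.

```python
import numpy as np

def required_subsample_size(X_s, resid_s, pi_s, r_cur, n, target, eta=0.1):
    """Plug-in estimate of C in (6), then solve C * r**(-0.5) <= target.

    X_s:    sampled rows;  resid_s: their residuals y_i - x_i^T beta_hat.
    pi_s:   sampling probabilities of the sampled rows.
    r_cur:  expected size of the subsample these rows came from.
    eta:    probability parameter in our reconstruction of Theorem 1.
    """
    w = 1.0 / (r_cur * pi_s)                   # Horvitz-Thompson weights
    sigma_b2 = np.sum(w * np.sum(X_s**2, axis=1) * resid_s**2 / pi_s) / n**2
    Sigma_n = (X_s * (w / n)[:, None]).T @ X_s  # reweighted estimate of Sigma_n
    lam_min = np.linalg.eigvalsh(Sigma_n).min()
    C = 2.0 * np.sqrt(sigma_b2) / (lam_min * eta)
    return int(np.ceil((C / target) ** 2))
```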
Corollary 1 If $\beta_0 - \hat{\beta}_n = o_p(1)$, then $C$ is approximately minimized by Algorithm 1, that is,
$$C(\pi_i^0) - \min_{\pi} C = o_p(1), \qquad (7)$$
where $C(\pi_i^0)$ denotes the value of $C$ corresponding to our gradient-based sampling.
The significance of Corollary 1 is to give an explanation of why the gradient-based sampling is
statistically efficient. The corollary establishes an improved performance guarantee on the use of
the gradient-based sampling. It is improved in the sense that our gradient-based sampling can
make the bound approximately attain its minimum as long as the condition is satisfied, while neither
uniform sampling nor leverage-based sampling can achieve this aim. The condition that $\beta_0 - \hat{\beta}_n = o_p(1)$
provides a benchmark for whether the pilot estimate $\beta_0$ is a good guess of $\hat{\beta}_n$. Note the condition is
satisfied by the initial estimate $\beta_0$ from an initial subsample of size $r_0$ by uniform sampling, since
$\beta_0 - \hat{\beta}_n = O_p(r_0^{-1/2})$.
5 Numerical Experiments

Detailed numerical experiments are conducted to compare the excess risk of $\hat{\beta}$ based on L2 loss
against the expected subsample size $r$ for different synthetic datasets and real data examples. In this
section, we report several representative studies.
5.1 Performance of gradient-based sampling
The $n \times d$ design matrix $X$ is generated with elements drawn independently from the mixture
Gaussian distributions $\frac{1}{2} N(-\mu, \sigma_x^2) + \frac{1}{2} N(\mu, \sigma_{mg}^2 \sigma_x^2)$ below: (1) $\mu = 0$ and $\sigma_{mg} = 1$, i.e., a Gaussian
distribution (referred to as GA data); (2) $\mu = 0$ and $\sigma_{mg} = 2$, i.e., a mixture of small and
relatively large variances (referred to as MG1 data); (3) $\mu = 0$ and $\sigma_{mg} = 5$, i.e., a mixture of
small and highly large variances (referred to as MG2 data); (4) $\mu = 5$ and $\sigma_{mg} = 1$, i.e., a mixture
of two symmetric peaks (referred to as MG3 data). We also ran simulations on $X$ generated
from multivariate mixture Gaussian distributions with an AR(1) covariance matrix, but obtained performance
similar to the setting above, so we do not report them here. Given $X$, we generate $y$ from the
model $y = X\beta + \epsilon$, where each element of $\beta$ is drawn from the normal distribution $N(0, 1)$ and then
fixed, and $\epsilon \sim N(0, \sigma^2 I_n)$, where $\sigma = 10$. Note that we also considered a heteroscedasticity setting
in which the errors come from a mixture Gaussian, and obtained results similar to the homoscedasticity setting, so we
do not report them here. We set $d$ to 100, and $n$ among 20K, 50K, 100K, 200K, 500K.
We calculate the full sample LS solution $\hat{\beta}_n$ for each dataset, and repeatedly apply various sampling
methods $B = 1000$ times to get subsample estimates $\hat{\beta}_b$ for $b = 1, \ldots, B$. We calculate the
empirical risk based on L2 loss (MSE) as follows:
$$\mathrm{MSE} = B^{-1} \sum_{b=1}^{B} \|\hat{\beta}_b - \hat{\beta}_n\|^2.$$
b=1
Two sampling ratio r/n values are considered: 0.01 and 0.05. We compare uniform sampling (UNIF),
the leverage-based sampling (LEV) and the gradient-based sampling (GRAD) to these data sets. For
GRAD, we set the r0 = r to getting the pilot estimate ? 0 .
?8
MG3
?8
MG2
?8
MG1
?8
GA
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?12
?16
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?20
?16
?16
?
?
?
?
?
?
?
?20
?20
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?20
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?12
?12
?
?
?
?
?
?16
?12
?
?
?
?
?
?
?
?
LEV
GRAD
LEV
GRAD
LEV
GRAD
LEV
GRAD
?
Figure 2: Boxplots of the logarithm of different sampling probabilities of X matrices with n = 50K.
From left to right: GA, MG1, MG2 and MG3 data sets.
Figure 2 gives boxplots of the logarithm of sampling probabilities of LEV and GRAD, where taking
the logarithm is to clearly show their distributions. We have some observations from the figure. (1)
For all four datasets, GRAD has heavier tails than LEV, that is, GRAD lets sampling probabilities
be more dispersed than LEV. (2) MG2 tends to have the most heterogeneous sampling probabilities, MG1
has less heterogeneous than MG2, whereas MG3 and GA have the most homogeneous sampling
probabilities. This indicates that the mixture of large and small variances has effect on the distributions
of sampling probabilities while the mixture of different peak locations has no effect.
We plot the logarithm of MSE values for GA, MG1, and MG2 in Figure 3, where taking the logarithm
is to clearly show the relative values. We do not report the results for MG3, as there is little difference
between MG3 and GA. There are several interesting results shown in Figure 3. (1) GRAD has
better performance than others, and the advantage of GRAD becomes obvious as r/n increases. (2)
For GA, LEV is shown to have similar performance to UNIF, however GRAD has obviously better
performance than UNIF. (3) When r/n increases, a smaller n is needed to ensure that GRAD
outperforms the others.
From the computation view, we compare the computational cost for UNIF, approximate LEV (ALEV)
[5, 6] and GRAD in Table 1, since ALEV is shown to be computationally efficient to approximate
LEV. From the table, UNIF is the most saving, and the time cost of GRAD is much less than that
of ALEV. It indicates that GRAD is also an efficient method from the computational view, since its
running time is O(nd). Additionally, Table 2 summaries the computational complexity of several
sampling methods for fast solving LS problems.
5.2 Real Data Examples
In this section, we compare the performance of various sampling algorithms on two UCI datasets:
CASP (n = 45730, d = 9) and OnlineNewsPopularity (NEWS) (n = 39644, d = 59). At first, we
plot boxplots of the logarithm of sampling probabilities of LEV and GRAD in Figure 4. From it,
similar to synthetic datasets, we know that the sampling probabilities of GRAD looks more dispersed
compared to those of LEV.
The MSE values are reported in Table 3. From it, we have two observations below. First, GRAD
has smaller MSE values than others when r is large. Second, as r increases, the outperformance
of Poisson sampling than sampling with replacement gets obvious for various methods. Similar
observation is gotten in simulations (see Supplementary Material, Section S2).
Figure 3: Empirical mean-squared error of $\hat{\beta}$ for approximating $\hat{\beta}_n$. From top to bottom: upper
panels are $r/n = 0.01$, and lower panels $r/n = 0.05$. From left to right: GA, MG1, and MG2 data,
respectively.
Table 1: The time cost of obtaining $\hat{\beta}$ for various subsample sizes $r$ by UNIF, ALEV and GRAD for
$n$ = 500K, 5M, where () denotes the time of calculating the full sample LS solution $\hat{\beta}_n$. We perform the
computation in R on a PC with a 3 GHz Intel i7 processor, 8 GB memory and the OS X operating
system.
                 n = 500K
         System Time (0.406)        User Time (7.982)
r        200     500     2000       200     500     2000
UNIF     0.000   0.002   0.003      0.010   0.018   0.050
ALEV     0.494   0.642   0.797      2.213   2.592   4.353
GRAD     0.099   0.105   0.114      0.338   0.390   0.412

                 n = 5M
         System Time (121.4)        User Time (129.88)
r        500     2000    10000      500     2000    10000
UNIF     0.057   0.115   0.159      2.81    5.94    14.28
ALEV     50.86   53.64   81.85      86.12   88.36   120.15
GRAD     5.836   6.107   6.479      28.85   30.06   37.51

6 Conclusion
In this paper we have proposed gradient-based sampling algorithm for approximating LS solution.
This algorithm is not only statistically efficient but also computationally saving. Theoretically, we
provide the error bound analysis, which supplies a justification for the algorithm and gives a tradeoff
between the subsample size and approximation efficiency. We also argue from empirical studies
that: (1) since the gradient-based sampling algorithm is justified without linear model assumption,
it works better than the leverage-based sampling under different model specifications; (2) Poisson
sampling is much better than sampling with replacement when sampling ratio r/n increases.
Table 2: The running time of obtaining $\hat{\beta}$ by various sampling strategies. Stage D1 is computing the
weights, D2 is computing the LS based on the subsample, and "overall" is the total running time.
Stage    D1             D2                   overall
Full     --             O(max{nd^2, d^3})    O(max{nd^2, d^3})
UNIF     --             O(max{rd^2, d^3})    O(max{rd^2, d^3})
LEV      O(nd^2)        O(max{rd^2, d^3})    O(max{nd^2, rd^2, d^3})
ALEV     O(nd log n)    O(max{rd^2, d^3})    O(max{nd log n, rd^2, d^3})
GRAD     O(nd)          O(max{rd^2, d^3})    O(max{nd, rd^2, d^3})
Figure 4: Boxplots of the logarithm of sampling probabilities for LEV and GRAD among datasets
CASP and NEWS
Table 3: The MSE comparison among various methods for the real datasets, where "SR" denotes sampling
with replacement, and "PS" denotes Poisson sampling.
                          CASP (n = 45730, d = 9)
r          45          180         450         1800        4500
UNIF-SR    2.998e-05   9.285e-06   4.411e-06   1.330e-06   4.574e-07
UNIF-PS    2.702e-05   9.669e-06   4.243e-06   1.369e-06   4.824e-07
LEV-SR     1.962e-05   4.379e-06   1.950e-06   4.594e-07   2.050e-07
LEV-PS     2.118e-05   5.240e-06   1.689e-06   4.685e-07   1.694e-07
GRAD-SR    2.069e-05   5.711e-06   1.861e-06   4.322e-07   1.567e-07
GRAD-PS    2.411e-05   5.138e-06   1.678e-06   3.687e-07   1.179e-07

                          NEWS (n = 39644, d = 59)
r          300      600      1200     2400     4800
UNIF-SR    22.050   14.832   10.790   7.110    4.722
UNIF-PS    27.215   19.607   15.258   9.504    4.378
LEV-SR     22.487   11.047   5.519    2.641    1.392
LEV-PS     21.971   9.419    4.072    2.101    0.882
GRAD-SR    10.997   5.508    3.074    1.505    0.752
GRAD-PS    9.729    5.252    2.403    1.029    0.399
There is an interesting problem to address in further study. Although the gradient-based sampling
is proposed to approximate the LS solution in this paper, we believe that this sampling method can be applied
to other optimization problems for large-scale data analysis, since the gradient is considered to be the
steepest way to attain the (local) optimum. Thus, applying this idea to other optimization problems is
an interesting direction.
Acknowledgments
This research was supported by National Natural Science Foundation of China grants 11301514 and
71532013. We thank Xiuyuan Cheng for comments in a preliminary version.
References
[1] P. Drineas, R. Kannan, and M.W. Mahoney. Fast monte carlo algorithms for matrices i:
approximating matrix multiplication. SIAM Journal on Scientific Computing, 36:132–157,
2006.
[2] P. Drineas, R. Kannan, and M.W. Mahoney. Fast monte carlo algorithms for matrices ii:
computing a low-rank approximation to a matrix. SIAM Journal on Scientific Computing,
36:158–183, 2006.
[3] P. Drineas, R. Kannan, and M.W. Mahoney. Fast monte carlo algorithms for matrices iii:
computing a compressed approximate matrix decomposition. SIAM Journal on Scientific
Computing, 36:184–206, 2006.
[4] M.W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis.
Proceedings of the National Academy of Sciences, 106:697–702, 2009.
[5] P. Drineas, M. Magdon-Ismail, M.W. Mahoney, and D.P. Woodruff. Fast approximation of
matrix coherence and statistical leverage. Journal of Machine Learning Research, 13:3475–3506, 2012.
[6] K.L. Clarkson and D.P. Woodruff. Low rank approximation and regression in input sparsity
time. STOC, 2013.
[7] M.B. Cohen, Y.T. Lee, C. Musco, C. Musco, R. Peng, and A. Sidford. Uniform sampling for
matrix approximation. arXiv:1408.5099, 2014.
[8] P. Dhillon, Y. Lu, D.P. Foster, and L. Ungar. New subsampling algorithms for fast least squares
regression. In Advances in Neural Information Processing Systems, volume 26, pages 360–368,
2013.
[9] D. Shender and J. Lafferty. Computation-risk tradeoffs for covariance-thresholded regression.
In Proceedings of the 30th International Conference on Machine Learning, 2013.
[10] P. Drineas, M.W. Mahoney, and S. Muthukrishnan. Sampling algorithms for l2 regression and
applications. In Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms,
pages 1127–1136, 2006.
[11] P. Drineas, M.W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decomposition.
SIAM Journal on Matrix Analysis and Applications, 30:844–881, 2008.
[12] P. Drineas, M.W. Mahoney, S. Muthukrishnan, and T. Sarlos. Faster least squares approximation.
Numerische Mathematik, 117:219–249, 2011.
[13] P. Ma, M.W. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. In
Proceedings of the 31th International Conference on Machine Learning, 2014.
[14] G. Raskutti and M.W. Mahoney. A statistical perspective on randomized sketching for ordinary
least-squares. In Proc. of the 32nd ICML Conference, 2015.
[15] T. Yang, L. Zhang, R. Jin, and S. Zhu. An explicit sampling dependent spectral error bound for
column subset selection. In Proc. of the 32nd ICML Conference, 2015.
[16] C.E. Särndal, B. Swensson, and J.H. Wretman. Model Assisted Survey Sampling. Springer,
New York, 2003.
Perceiving Complex Visual Scenes:
An Oscillator Neural Network Model that
Integrates Selective Attention, Perceptual
Organisation, and Invariant Recognition
Rainer Goebel
Department of Psychology
University of Braunschweig
Spielmannstr. 19
W-3300 Braunschweig, Germany
Abstract
Which processes underlie our ability to quickly recognize familiar
objects within a complex visual input scene? In this paper an implemented neural network model is described that attempts to specify
how selective visual attention, perceptual organisation, and invariance transformations might work together in order to segment, select,
and recognize objects out of complex input scenes containing multiple, possibly overlapping objects. Retinotopically organized feature
maps serve as input for two main processing routes: the 'where-pathway' dealing with location information and the 'what-pathway'
computing the shape and attributes of objects. A location-based attention mechanism operates on an early stage of visual processing
selecting a contiguous region of the visual field for preferential processing. Additionally, location-based attention plays an important role
for invariant object recognition, controlling appropriate normalization
processes within the what-pathway. Object recognition is supported
through the segmentation of the visual field into distinct entities. In
order to represent different segmented entities at the same time, the
model uses an oscillatory binding mechanism. Connections between
the where-pathway and the what-pathway lead to a flexible cooperation between different functional subsystems producing an overall
behavior which is consistent with a variety of psychophysical data.
1 INTRODUCTION
We are able to recognize a familiar object from many different viewpoints. Additionally, an object normally does not appear in isolation but in combination with
other objects. These varying viewing conditions produce very different retinal neural
representations. The task of the visual system can be considered as a transformation
process forming high-level object representations which are invariant with respect to
different viewing conditions. Selective attention and perceptual organisation seem to
play an important role in this transformation process.
1.1 LOCATION-BASED VS OBJECT-BASED ATTENTION
Neisser (1967) assumed that visual processing is done in two stages: an early stage
that operates in parallel across the entire visual field, and a later stage that can
only process information from a limited part of the field at any one time. Neisser
(1967) proposed an object-based approach to selective attention: the first, 'preattentive', stage segments the whole field into separate objects on the basis of Gestalt
principles; the second stage, focal attention, selects one of these objects for detailed
analysis.
Other theories stress the location-based nature of visual attention: a limited contiguous
region is filtered for detailed analysis (e.g., Posner et al., 1980). There exist a number of models of location-based attention (e.g., Hinton & Lang, 1985; Mozer, 1991;
Sandon, 1990) and a few models of object-based attention using whole object knowledge (e.g., Fukushima, 1986). Our model attempts to integrate both approaches:
location-based attention - implemented as a 'spotlight' - operates on an early stage
of visual processing, selecting a contiguous region for detailed processing. However, the
position and the size of the attentional window is determined to a large extent from
the results of a segmentation process operating at different levels within the system.
1.2 DYNAMIC BINDING
The question of how groupings can be represented in a neural network is known as
the binding problem. It occurs in many variations, e.g., as the problem of how to
represent multiple objects simultaneously but sufficiently distinct that confusions ('illusory conjunctions') at later processing stages are avoided.
An interesting solution of the binding problem is based on ideas proposed by Milner (1974) and von der Malsburg (1981). In contrast to most connectionist models
assuming that only the average output activity of neurons encodes important information, they suggest that the exact timing of neuronal activity (the firing of individual
neurons or the 'bursting' of cell groups) plays an important role for information processing in the brain. The central idea is that stimulated units do not respond with a
constant output but with oscillatory behavior which can be exploited to represent feature linkings. A possible solution for representing multiple objects might be that the
parts of one object are bound together through synchronized (phase-locked) oscillations and separated from other objects through an uncorrelated phase relation. Recent
empirical findings (Eckhorn et al., 1988; Gray & Singer, 1989) provide some evidence
that the brain may indeed use phase-locked oscillations as a means for representing
global object properties.
2 THE MODEL
2.1 SYSTEM DYNAMICS
In order to establish dynamic binding via phase-locked oscillations the units of the
model must be able to exhibit oscillatory behavior. Stimulated by the empirical
findings mentioned earlier, a rapidly growing body of work has studied populations
of oscillating units (e.g., Eckhorn et al., 1990; Sompolinsky et al., 1990). There exists
also a number of models using phase-locked oscillations in order to simulate various
aspects of perceptual organisation (e.g., Schillen & Konig, 1991; Mozer, Zemel, Behrmann & Williams, 1992). We defined computationally simple model neurons which
allow us to represent independently an activation value and a period value. Such a model neuron possesses two types of input areas: the activation gate (a-gate) and the
period-gate (p-gate) which allow the model neurons to communicate via two types of
connections (cf. Eckhorn et al., 1990; they distinguish between 'feeding' and 'linking'
connections). We make the following definitions:
• w^a_{ij}: weight from model neuron j to the a-gate of model neuron i.
• w^p_{ij}: weight from model neuron j to the p-gate of model neuron i.
• ξ_i(t): internal time-keeper of unit i.
• T: globally defined period length.
• τ_i(N): period length of unit i (Nth oscillation).
Each model neuron possesses an internal time-keeper ξ_i(t) counting the number of
bins elapsed since the last firing point. A model neuron is refractory until the time-keeper reaches the value τ_i (e.g., τ_i = T = 8). Then it may emit an activation value
and resets the time-keeper. Depending on the stimulation received at the p-gate
(see below) a model neuron fires either if ξ = T − 1 or ξ = T. This variation of
the individual period length τ_i is the only possibility for a unit to change its phase
relation to other units. The value of the globally defined period length T determines
directly how many objects may be represented 'simultaneously'.
The activation value a_i at the internal time ξ is determined as follows:

    net_i(ξ) = Σ_{ζ=1}^{τ_i} Σ_{j=1}^{n} w^a_{ij} a_j(ζ)                       (1)

    a_i = σ(net_i(ξ))  if ξ = τ_i,   and   a_i = 0  otherwise                  (2)

where σ(x) is the logistic (sigmoidal) function. If we consider an extreme case with
T = 1 we obtain the following equations:

    net_i(t) = Σ_{j=1}^{n} w^a_{ij} a_j(t)                                     (3)
    a_i(t) = σ(net_i(t))                                                       (4)
This derivation allows us to study the same network as a conventional connectionist
network (T = 1) with a 'non-oscillatory' activation function, to which we can add a
dynamic binding mechanism by simply setting T > 1. In the latter case the input
at the p-gate determines the length of the current period as either τ_i = T − 1 or
τ_i = T. The decision to shift the phase relation to other neurons should be made in
such a way that the 'belongingness constraints' imposed by the connectivity pattern
of the p-weights w^p_{ij} are maximally satisfied, e.g., if two units are positively p-coupled they
should oscillate in phase; if they are negatively p-coupled they should oscillate out
of phase. The decision whether a unit fires at T − 1 or T depends on two values,
the stimulation received during the refractory period 1 < ξ < τ_i(N − 1) and the
stimulation received at the last firing point ξ = τ_i(N − 1). These values behave as
two opposing forces g_i determining the probability p_i^< of shortening the next period:

    g_i^= : p-gate stimulation received if ξ = τ_i                             (5)
    g_i^< : p-gate stimulation received if 1 < ξ < τ_i                         (6)
    p_i^< = r + (1 − 2r) / (1 + e^{(g_i^= − g_i^<)})                           (7)
If the value of g_i^= − g_i^< is large (e.g., there are many positively p-coupled units firing at
the same time) it is unlikely that the unit shortens its next period length. If instead
the value of g_i^< − g_i^= is large (e.g., there are many positively coupled neurons firing just
before the considered unit) it is likely that the unit will shorten its next period. There
is also a small overall noise level r = 0.01 which allows for symmetry breaking
(e.g., if two strongly negatively coupled neurons are accidentally phase-locked).
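To make these dynamics concrete, here is a minimal NumPy sketch of one simulation step for a population of such units. It is our own illustrative reading, not the original HOTSPOT code: the accumulation of the two forces is simplified, and all names (step, Wa, Wp, g_eq, g_lt) are ours.

    import numpy as np

    def step(xi, tau, a, g_eq, g_lt, Wa, Wp, T=8, r=0.01, rng=None):
        # xi: time-keepers xi_i(t); tau: current period lengths (T-1 or T); a: activations
        # g_eq / g_lt: p-gate input at the last firing point / during the refractory period
        rng = np.random.default_rng() if rng is None else rng
        xi = xi + 1
        firing = xi >= tau                               # units whose time-keeper reached tau_i
        a = np.where(firing, 1.0 / (1.0 + np.exp(-(Wa @ a))), 0.0)   # cf. eqs. (1)-(2)
        p_in = Wp @ a
        g_lt = np.where(firing, 0.0, g_lt + p_in)        # force built up before firing
        g_eq = np.where(firing, p_in, g_eq)              # force sampled at the firing point
        p_shorten = r + (1.0 - 2.0 * r) / (1.0 + np.exp(g_eq - g_lt))  # cf. eq. (7)
        shorten = rng.random(len(xi)) < p_shorten
        tau = np.where(firing, np.where(shorten, T - 1, T), tau)
        xi = np.where(firing, 0, xi)                     # reset the time-keeper after firing
        return xi, tau, a, g_eq, g_lt

Iterating this step with positively p-coupled units drives them toward a common phase, while negatively coupled groups drift apart, which is the binding behavior described above.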
2.2 THE INPUT MODULE
Figure 1 shows an overview of the architecture of the model, called HOTSPOT. An
input is presented to the model by clamping on units at the model-retina consisting
of two layers with 15x25 units. Each layer is meant to correspond to a different
color-sensitive ganglion cell type. The retinal representation is then analyzed within
different retinotopically organized feature maps (4 oriented line segments and 2 unoriented color blobs) as a simplified representation of an early visual processing stage
(corresponding roughly to V1). A lateral connectivity pattern of p-weights within
and between these feature maps computes initial feature linkings consistent with the
findings of Eckhorn et al., (1988) and Gray and Singer (1989). Each feature map also
projects to a second feature-specific layer. The weights between those layers compute
the saliency at each position of a particular feature type. These saliency values are
finally integrated within the saliency map. The retinotopic feature maps project to
both the what pathway, corresponding roughly to the occipito-temporal processing
stream, and the where-pathway, corresponding to the occipito-parietal stream (e.g.,
Ungerleider & Mishkin, 1982).
Perceiving Complex Visual Scenes: An Oscillator Neural Network Model
2.3 THE SPOTLIGHT-LAYER
The spotlight-layer receives bottom-up input from the feature maps via the saliency
map and top-down input from the spotlight-control module. Based on these sources
of stimulation, the spotlight layer computes a circular region of activity representing
the current focus of spatial attention. The spotlight-layer corresponds roughly to the
pulvinar nucleus of the thalamus. The spotlight-layer gates the flow of information
within the what-pathway.
2.4 THE WHAT-PATHWAY: FROM FEATURES TO OBJECTS
Processing within the what-pathway includes spatial selection, invariance transformation, complex grouping, object-based selection and object recognition.
2.4.1 The Invariance Module
The task of the Invariance module is to retain the spatial arrangement of the features
falling within the attentional spotlight while abstracting at the same time the absolute
retinal position of the attended information. This goal is achieved in several stages
along the what-pathway for each feature type. The basic idea is that each neuron
connects to several neurons at the next layer. If a certain position is not attended
its 'standard' way may be 'open'. If, however, a position is attended, the decision
which way is currently gated for a neuron depends on the position and width of
the attentional spotlight. Special control layers compute explicitly whether a certain
absolute position falls within one of 5 horizontal and 5 vertical regions of the spotlight
(e.g., the horizontal regions are 'far left', 'near left', 'center', 'near right', 'far right').
These layers gate the feedforward-synapses within the what-pathway. Finally, the
selected information reaches the invariance-output layers which have a 7x7 resolution
for each feature type. Recently Olshausen, Anderson and Van Essen (1992) proposed
a strikingly similar approach for forming invariant representations.
Despite the invariance transformations, the representation of an object at the invariance-output layers may not be exactly the same as in previous experiences. Therefore the
model uses additional processes contributing to invariant object recognition, most
importantly the extraction of global features and the exploitation of population codes
for the length, position and orientation of features. This also establishes a limited
kind of rotation invariance. The selection of information within the what-pathway is
consistent with findings from Moran & Desimone (1985): unattended information is
excluded from further processing only, if it would stimulate the same population of
neurons at the next stage as the selected information.
2.4.2 The Object-Recognition Module
The output of the Invariance Module, the perceptual-code stage, feeds to the object-recognition layer and receives recurrent connections from that layer terminating both
on the a-gate and the p-gate of its units. These connections are trained using the
back-propagation learning rule (T = τ = 1). The recurrent loop establishes an
interactive recognition process allowing the network to recognize distorted patterns through the
completion of missing information and the suppression of noise.
At the perceptual-code stage perceptual organisation continues based on the initial
feature linkings computed within the elementary feature maps. The p-weight pattern
[Figure 1: The architecture of HOTSPOT. The original diagram shows the 'retina' feeding orientation and color maps, a saliency map, the attentive stage, grouping and normalisation layers, and the what-pathway; only these panel labels survive extraction.]
within the perceptual-code stage implements a set of Gestalt principles such as spatial
proximity, similarity and continuity of contour. In addition, acquired shape knowledge
is another force acting on the perceptual-code stage in order to bind or separate global
features. Object-based attention may select one of multiple oscillating objects. For
determining a specific object it may use whole-object knowledge (e.g., 'select the
letter H'), spatial cues (e.g., 'select the right object') or color cues (e.g., 'select the
green object') as well as a combined cue. If the selected object does not use the whole
resolution of the perceptual-code stage, commands are sent to the where-pathway in
order to adjust the spotlight accordingly.
2.5 THE WHERE-PATHWAY
The where-pathway consists of the saliency map, the spotlight-control module, the disengagement layer and the spatial-representation layer. The spotlight-control module
performs relative movements and size changes of the attentional spotlight which are
demanded by the saliency map, object-based selection or commands from a short-term
store holding task instructions. If the current position of the spotlight is not changed
for some time, the disengagement layer inhibits the corresponding position at the saliency map. The spatial-representation layer contains a coarsely tuned representation
of all active retinal positions. If no position within the visual field is particularly
salient, this layer determines possible target positions for spatial attention.
If the model knows "where what is" this knowledge is transferred to the visual shortterm memory where a sequence of 'location-object couplings' can be stored.
3 CONCLUSION
In this paper an oscillator neural network model was presented that integrates
location-based attention, perceptual organisation, and invariance transformations.
It was outlined how the cooperation between these mechanisms allows the model to
segment, select and recognize objects within a complex input scene. The model was
successfully applied to simulate a wide variety of psychophysical data including texture segregation, visual search, hierarchical segmentation and recognition. A typical
'processing cycle' of the model consists of an initial segmentation of the visual field
with a broadly tuned spotlight. Then a segmented, but not necessarily recognizable, entity may be selected due to its saliency or by object-based attention. This
selection in turn induces movements of the location-based attention mechanism until
the selected entity is surrounded by the spotlight. Since in this case appropriate
invariance transformations are computed, the selected object is optimally recognized.
Some predictions of the model concerning the object-based nature of selective attention are currently being tested experimentally. HOTSPOT indicates a promising way
towards a deeper understanding of complex visual processing by bringing together
both neurobiological and psychophysical findings in a fruitful way.
Acknowledgements
I am grateful to Reinhard Eckhorn, Peter Konig, Michael Mozer, Werner X. Schneider,
Wolf Singer and Dirk Vorberg for valuable discussions.
References
Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., & Reitboeck, H. J. (1988). Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics, 60, 121-130.
Eckhorn, R., Reitboeck, H. J., Arndt, M., & Dicke, P. (1990). Feature linking via synchronization among distributed assemblies: The simulation of results from cat visual cortex. Neural Computation, 2, 293-307.
Fukushima, K. (1986). A neural network model for selective attention in visual pattern recognition. Biological Cybernetics, 55, 5-15.
Gray, C. M., & Singer, W. (1989). Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. PNAS USA, 86, 1698-1702.
Hinton, G. E., & Lang, K. J. (1985). Shape recognition and illusory conjunctions. Proceedings of the 9th IJCAI, Los Angeles, 1, 252-259.
Milner, P. M. (1974). A model for visual shape recognition. Psychological Review, 81, 521-535.
Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229, 782-784.
Mozer, M. C. (1991). The Perception of Multiple Objects: A Connectionist Approach. MIT Press / Bradford Books.
Mozer, M. C., Zemel, R. S., Behrmann, M., & Williams, C. K. I. (1992). Learning to segment images using dynamic feature binding. Neural Computation, 4, 650-665.
Neisser, U. (1967). Cognitive Psychology. New York: Appleton-Century-Crofts.
Olshausen, B., Anderson, Ch., & Van Essen, D. (1992). A neural model of visual attention and invariant pattern recognition. CNS Memo 18, CalTech.
Posner, M. I., Snyder, C. R. R., & Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology: General, 109, 160-174.
Sandon, P. (1990). Simulating visual attention. Journal of Cognitive Neuroscience, 2, 213-231.
Schillen, Th. B., & Konig, P. (1991). Stimulus-dependent assembly formation of oscillatory responses: II. Desynchronization. Neural Computation, 3, 167-178.
Sompolinsky, H., Golomb, D., & Kleinfeld, D. (1990). Global processing of visual stimuli in a neural network of coupled oscillators. Proc. Natl. Acad. Sci. USA, 87, 7200-7204.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of Visual Behavior. Cambridge, MA: MIT Press.
Van Essen, D. (1985). Functional organization of primate visual cortex. In A. Peters & E. G. Jones (Eds.), Cerebral Cortex, vol. 3. New York: Plenum Press.
Von der Malsburg, C. (1981). The correlation theory of brain function. Internal Report 81-2, Dept. of Neurobiology, MPI for Biophysical Chemistry.
PART XII: COMPUTATIONAL AND THEORETICAL NEUROBIOLOGY
Noise-Tolerant Life-Long Matrix Completion via
Adaptive Sampling
Maria-Florina Balcan
Machine Learning Department
Carnegie Mellon University, USA
[email protected]
Hongyang Zhang
Machine Learning Department
Carnegie Mellon University, USA
[email protected]
Abstract
We study the problem of recovering an incomplete m × n matrix of rank r with
columns arriving online over time. This is known as the problem of life-long
matrix completion, and is widely applied to recommendation systems, computer
vision, system identification, etc. The challenge is to design provable algorithms
tolerant to a large amount of noise, with small sample complexity. In this work,
we give algorithms achieving strong guarantees under two realistic noise models. In
bounded deterministic noise, an adversary can add any bounded yet unstructured
noise to each column. For this problem, we present an algorithm that returns a
matrix of small error, with sample complexity almost as small as the best prior
results in the noiseless case. For sparse random noise, where the corrupted columns
are sparse and drawn randomly, we give an algorithm that exactly recovers a
μ0-incoherent matrix with probability at least 1 − δ, with sample complexity as small
as O(μ0 rn log(r/δ)). This result advances the state-of-the-art work and matches
the lower bound in a worst case. We also study the scenario where the hidden
matrix lies on a mixture of subspaces and show that the sample complexity can
be even smaller. Our proposed algorithms perform well experimentally in both
synthetic and real-world datasets.
1 Introduction
Life-long learning is an emerging object of study in machine learning, statistics, and many other
domains [2, 11]. In machine learning, study of such a framework has led to significant advances
in learning systems that continually learn many tasks over time and improve their ability to learn
as they do so, like humans [15]. A natural approach to achieve this goal is to exploit information
from previously-learned tasks under the belief that some commonalities exist across the tasks [2,
24]. The focus of this work is to apply this idea of life-long learning to the matrix completion
problem. That is, given columns of a matrix that arrive online over time with missing entries, how to
approximately/exactly recover the underlying matrix by exploiting the low-rank commonality across
each column.
Our study is motivated by several promising applications where life-long matrix completion is
applicable. In recommendation systems, the column of the hidden matrix consists of ratings by
multiple users to a specific movie/news; The news or movies are updated online over time but usually
only a few ratings are submitted by those users. In computer vision, inferring camera motion from a
sequence of online arriving images with missing pixels has received significant attention in recent
years, known as the structure-from-motion problem; Recovering those missing pixels from those
partial measurements is an important preprocessing step. Other examples where our technique is
applicable include system identification, multi-class learning, global positioning of sensors, etc.
Despite the many applications of life-long matrix completion, many fundamental questions remain unresolved. One of the long-standing challenges is designing noise-tolerant, life-long
algorithms that can recover the unknown target matrix with small error. In the absence of noise,
this problem is not easy because the overall structure of the low rankness is unavailable in each
round. This problem is even more challenging in the context of noise, where an adversary can add
any bounded yet unstructured noise to those observations and the error propagates as the algorithm
proceeds. This is known as bounded deterministic noise. Another type of noise model that receives
great attention is sparse random noise, where the noise is sparse compared to the number of columns
and is drawn i.i.d. from a non-degenerate distribution.
Our Contributions: This paper tackles the problem of noise-tolerant, life-long matrix completion
and advances the state-of-the-art results under the two realistic noise models.
? Under bounded deterministic noise, we design and analyze an algorithm that is robust to
noise, with only a small output error (See Figure 3). The sample complexity is almost as
small as the best prior results in the noiseless case, provided that the noise level is small.
? Under sparse random noise, we give sample complexity that guarantees an exact recovery of
the hidden matrix with high probability. The sample complexity advances the state-of-the-art
results (See Figure 3) and matches the lower bound in the worst case of this scenario.
? We extend our result of sparse random noise to the setting where the columns of the hidden
matrix lie on a mixture of subspaces, and show that smaller sample complexity suffices to
exactly recover the hidden matrix in this more benign setting.
? We also show that our proposed algorithms perform well experimentally in both synthetic
and real-world datasets.
2 Preliminaries
Before proceeding, we define some notation and clarify the problem setup in this section.
Notation: We use bold capital letters to represent matrices, bold lower-case letters to represent
vectors, and lower-case letters to represent scalars. Specifically, we denote by M ∈ R^{m×n} the noisy
observation matrix in hindsight. We denote by L the underlying clean matrix, and by E the noise. We
will frequently use M_{:t} ∈ R^{m×1} to indicate the t-th column of matrix M, and similarly M_{t:} ∈ R^{1×n}
the t-th row. For any set of indices Ω, M_{Ω:} ∈ R^{|Ω|×n} represents subsampling the rows of M at
coordinates Ω. Without confusion, denote by U the column space spanned by the matrix L. Denote by
Ũ the noisy version of U, i.e., the subspace corrupted by the noise, and by Û our estimated subspace.
The superscript k of Ũ^k means that Ũ^k has k columns in the current round. P_U is frequently used to
represent the orthogonal projection operator onto subspace U. We use θ(a, b) to denote the angle
between vectors a and b. For a vector u and a subspace V, define θ(u, V) = min_{v∈V} θ(u, v). We
define the angle between two subspaces U and V as θ(U, V) = max_{u∈U} θ(u, V). For norms, denote
by ||v||_2 the vector ℓ2 norm of v. For matrices, ||M||_F² = Σ_{ij} M_{ij}² and ||M||_{∞,2} = max_i ||M_{i:}||_2, i.e.,
the maximum vector ℓ2 norm across rows. The operator norm is induced by the matrix Frobenius
norm, which is defined as ||P|| = max_{||M||_F ≤ 1} ||PM||_F. If P can be represented as a matrix, ||P||
also denotes the maximum singular value.
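For concreteness, the projection operator and the vector-to-subspace angle defined above can be computed with a small NumPy helper; this is our own illustration, assuming U stores an orthonormal basis of V:

    import numpy as np

    def angle_to_subspace(u, U):
        # theta(u, V) for V = range(U), where U has orthonormal columns
        proj = U @ (U.T @ u)                              # P_V u
        cos_theta = np.linalg.norm(proj) / np.linalg.norm(u)
        return np.arccos(np.clip(cos_theta, 0.0, 1.0))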
2.1 Problem Setup
In the setting of life-long matrix completion, we assume that each column of the underlying matrix
L is normalized¹ and arrives online over time. We are not allowed to get access to the next column
until we perform the completion for the current one. This is in sharp contrast to the offline setting
where all columns come at one time and so we are able to immediately exploit the low-rank structure
to do the completion. In hindsight, we assume the underlying matrix is of rank r. This assumption
enables us to represent L as L = US, where U is the dictionary (a.k.a. basis matrix) of size m × r
with each column representing a latent metafeature, and S is a matrix of size r × n containing the
weights of linear combination for each column L:t . The overall subspace structure is captured by U
and the finer grouping structure, e.g., the mixture of multiple subspaces, is captured by the sparsity
of S. Our goal is to approximately/exactly recover the subspace U and the matrix L from a small
fraction of the entries, possibly corrupted by noise, although these entries can be selected sequentially
in a feedback-driven way.
Noise Models: We study two types of realistic noise models, one of which is the deterministic noise.
In this setting, we assume that the ℓ2 norm of the noise on each column is bounded by ε_noise. Beyond
¹Without loss of generality, we assume ||L_{:t}||_2 = 1 for all t, although our result can be easily extended to the general case.
that, no other assumptions are made on the nature of noise. The challenge under this noise model is to
design an online algorithm limiting the possible error propagation during the completion procedure.
Another noise model we study is the sparse random noise, where we assume that the noise vectors
are drawn i.i.d. from any non-degenerate distribution. Additionally, we assume the noise is sparse,
i.e., only a few columns of L are corrupted by noise. Our goal is to exactly recover the underlying
matrix L with sample complexity as small as possible.
Incoherence: Apart from the sample budget and noise level, another quantity governing the difficulty
of the completion problem is the coherence parameter on the row/column space. Intuitively, the
completion should perform better when the information spreads evenly throughout the matrix. To
quantify this term, for a subspace U of dimension r in R^m, we define

    μ(U) = (m/r) max_{i∈[m]} ||P_U e_i||_2²,                                   (1)

where e_i is the i-th column of the identity matrix. Indeed, without (1) there is an identifiability issue
in the matrix completion problem [7, 8, 27]. As an extreme example, let L be a matrix with only one
non-zero entry. Such a matrix cannot be exactly recovered unless we see the non-zero element. As in
[19], to mitigate the issue, in this paper we assume incoherence μ0 = μ(U) on the column space of
the underlying matrix. This is in contrast to the classical results of Candès et al. [7, 8], in which one
requires incoherence μ0 = max{μ(U), μ(V)} on both the column and the row subspaces.
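As an illustration, the coherence in (1) is cheap to evaluate once an orthonormal basis is available; a minimal sketch of our own:

    import numpy as np

    def coherence(U):
        # mu(U) = (m/r) * max_i ||P_U e_i||_2^2 for an orthonormal basis U of shape (m, r)
        m, r = U.shape
        # with orthonormal columns, ||P_U e_i||_2^2 equals the squared i-th row norm of U
        return (m / r) * np.max(np.sum(U * U, axis=1))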
Sampling Model: Instead of sampling the entries passively from a uniform distribution, our sampling
oracle allows us to adaptively measure entries in each round. Specifically, for any arriving column we
are allowed to have two types of sampling phases: we can either uniformly sample the entries,
entries, as the passive sampling oracle, or choose to request all entries of the column in an adaptive
manner. This is a natural extension of the classical passive sampling scheme with wide applications.
For example, in network tomography, a network operator is interested in inferring latencies between
hosts while injecting few packets into the network. The operator is in control of the network, thus
can adaptively sample the matrix of pair-wise latencies. In particular, the operator can request full
columns of the matrix by measuring one host to all others. In gene expression analysis, we are
interested in recovering a matrix of expression levels for various genes across a number of conditions.
The high-throughput microarrays provide expression levels of all genes of interest across operating
conditions, corresponding to revealing entire columns of the matrix.
3 Main Results
In this section, we formalize our life-long matrix completion algorithm, develop our main theoretical
contributions, and compare our results with the prior work.
3.1 Bounded Deterministic Noise
To proceed, our algorithm streams the columns of noisy M into memory and iteratively updates the
estimate for the column space of L. In particular, the algorithm maintains an estimate Û of subspace
U, and when processing an arriving column M_{:t}, requests only a few entries of M_{:t} and a few rows
of Û to estimate the distance between L_{:t} and U. If the value of the estimator is greater than a given
threshold η_k, the algorithm requests the remaining entries of M_{:t} and adds the new direction M_{:t}
to the subspace estimate; otherwise, it finds a best approximation of M_{:t} by a linear combination of
the columns of Û. The pseudocode of the procedure is displayed in Algorithm 1. We note that our
algorithm is similar to the algorithm of [19] for the problem of offline matrix completion without
noise. However, our setting, with the presence of noise (which might conceivably propagate through
the course of the algorithm), makes our analysis significantly more subtle.
The key ingredient of the algorithm is to estimate the distance between the noiseless column L_{:t}
and the clean subspace U^k with only a few noisy measurements. To estimate this quantity,
we downsample both M_{:t} and Û^k to M_{Ωt} and Û^k_{Ω:}, respectively. We then project M_{Ωt} onto
subspace Û^k_{Ω:} and use the projection residual ||M_{Ωt} − P_{Û^k_{Ω:}} M_{Ωt}||_2 as our estimator. A subtle and
critical aspect of the algorithm is the choice of the threshold η_k for this estimator. In the noiseless
setting, we can simply set η_k = 0 if the sampling number |Ω| is large enough, in the order of
O(μ0 r log² r), because O(μ0 r log² r) noiseless measurements already contain enough information
for testing whether a specific column lies in a given subspace [19]. In the noisy setting, however, the
Algorithm 1 Noise-Tolerant Life-Long Matrix Completion under Bounded Deterministic Noise
Input: Columns of matrices arriving over time.
Initialize: Let the basis matrix Û^0 = ∅. Randomly draw entries Ω ⊆ [m] of size d uniformly with replacement.
1: For t from 1 to n, do
2:    (a) If ||M_{Ωt} − P_{Û^k_{Ω:}} M_{Ωt}||_2 > η_k
3:        i. Fully measure M_{:t} and add it to the basis matrix Û^k. Orthogonalize Û^k.
4:        ii. Randomly draw entries Ω ⊆ [m] of size d uniformly with replacement.
5:        iii. k := k + 1.
6:    (b) Otherwise M̂_{:t} := Û^k (Û^k_{Ω:})^† M_{Ωt}.
7: End For
Output: Estimated range space Û^K and the underlying matrix M̂ with columns M̂_{:t}.
challenge is that both M_{:t} and Û^k are corrupted by noise, and the error propagates as the algorithm
proceeds. Thus instead of setting the threshold to 0 always, our theory suggests setting η_k proportional
to √ε_noise. Indeed, the threshold η_k balances the trade-off between the estimation
error and the sample complexity: (a) if η_k is too large, most of the columns are represented by the
noisy dictionary and therefore the error propagates too quickly; (b) in contrast, if η_k is too small, we
observe too many columns in full and so the sample complexity increases. Our goal in this paper
is to capture this trade-off, providing a global upper bound on the estimation error of the life-long
arriving columns while keeping the sample complexity as small as possible.
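The following NumPy sketch mirrors Algorithm 1 under our own simplifications: the projection onto the subsampled basis is computed by least squares, and eta is a user-supplied threshold function of k (the paper's η_k). It is an illustration, not the authors' implementation.

    import numpy as np

    def lifelong_completion(columns, m, d, eta, rng=None):
        # Stream noisy columns y of length m; return (basis estimate, completed matrix).
        rng = np.random.default_rng() if rng is None else rng
        U_hat = np.zeros((m, 0))              # current orthonormal basis estimate
        omega = rng.choice(m, size=d)         # uniform sampling with replacement
        completed = []
        for y in columns:
            k = U_hat.shape[1]
            if k == 0:
                coef, resid = np.zeros(0), np.linalg.norm(y[omega])
            else:
                U_om = U_hat[omega, :]
                coef, *_ = np.linalg.lstsq(U_om, y[omega], rcond=None)
                resid = np.linalg.norm(y[omega] - U_om @ coef)
            if resid > eta(k):                # step 2(a): new direction detected
                U_hat = np.linalg.qr(np.column_stack([U_hat, y]))[0]
                omega = rng.choice(m, size=d) # fresh sampling set for the new basis
                completed.append(y.copy())
            else:                             # step 2(b): complete from partial data
                completed.append(U_hat @ coef)
        return U_hat, np.column_stack(completed)

In the noiseless case one can take eta = lambda k: 0; in the noisy case the theory below suggests a threshold of the form C·sqrt(d·k·ε_noise/m).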
3.1.1 Recovery Guarantee
Our analysis leads to the following guarantee on the performance of Algorithm 1.
Theorem 1 (Robust Recovery under Deterministic Noise). Let r be the rank of the underlying
matrix L with μ0-incoherent column space. Suppose that the ℓ2 norm of the noise in each column
is upper bounded by ε_noise. Set the parameters d ≥ c(μ0 r + m k ε_noise) log²(2n/δ) and η_k =
C√(d k ε_noise/m) for global constants c and C. Then with probability at least 1 − δ, Algorithm 1
outputs Û^K with K ≤ r and outputs M̂ with ℓ2 error ||M̂_{:t} − L_{:t}||_2 ≤ O(√(m k ε_noise²/d)) uniformly
for all t, where k ≤ r is the number of base vectors when processing the t-th column.
Proof Sketch. We first show that our estimated subspace in each round is accurate. The key
ingredient of our proof is a result pertaining to the angle between the underlying subspace and the noisy
one. Ideally, the column space spanned by the noisy dictionary cannot be too far from the underlying
subspace if the noise level is small. This is true only if the angle between the newly added vector and
the column space of the current dictionary is large, as shown by the following lemma.
Lemma 2. Let U^k = span{u_1, u_2, ..., u_k} and Ũ^k = span{ũ_1, ũ_2, ..., ũ_k} be two subspaces
such that θ(u_i, ũ_i) ≤ ε_noise for all i ∈ [k]. Let θ_k = 20√(k ε_noise) and θ(ũ_i, Ũ^{i−1}) ≥ θ_i for
i = 2, ..., k. Then θ(U^k, Ũ^k) ≤ θ_k/2.
We then prove the correctness of our test in Step 2. Lemma 2 guarantees that the underlying subspace
U^k and our estimated one Ũ^k cannot be too distinct. So, by the algorithm, projecting any vector on
the subspace spanned by Ũ^k does not make too many mistakes, i.e., θ(M_{:t}, Ũ^k) ≈ θ(M_{:t}, U^k).²
On the other hand, by a standard concentration argument the squared test statistic ||M_{Ωt} − P_{Ũ^k_{Ω:}} M_{Ωt}||_2² is
close to (d/m)||M_{:t} − P_{Ũ^k} M_{:t}||_2². Note that the latter term is determined by the angle θ(M_{:t}, Ũ^k).
Therefore, our test statistic in Step 2 is indeed an effective measure of θ(M_{:t}, Ũ^k), or θ(L_{:t}, Ũ^k)
since L_{:t} ≈ M_{:t}, as proven by the following novel result.
Lemma 3. Let ε_k = 2θ_k, θ_k = 20√(k ε_noise), and k ≤ r. Suppose that we observe a set of coordinates
Ω ⊆ [m] of size d uniformly at random with replacement, where d ≥ c0(μ0 r + m k ε_noise) log²(2/δ).
If θ(L_{:t}, Ũ^k) ≥ ε_k, then with probability at least 1 − 4δ, we have ||M_{Ωt} − P_{Ũ^k_{Ω:}} M_{Ωt}||_2 ≥
C√(d k ε_noise/m). Inversely, if θ(L_{:t}, Ũ^k) ≤ c ε_k, then with probability at least 1 − 4δ, we have
||M_{Ωt} − P_{Ũ^k_{Ω:}} M_{Ωt}||_2 ≤ C√(d k ε_noise/m), where c0, c and C are absolute constants.
²By our proof, the constant factor is 9.
Finally, as both our dictionary and our statistic are accurate, the output error cannot be too large. A
simple deduction with a union bound over all columns leads to Theorem 1.
Theorem 1 implies a result in the noiseless setting when ε_noise goes to zero. Indeed, with the sample
size growing in the order of O(μ0 n r log² n), Algorithm 1 outputs a solution that is exact with
probability at least 1 − 1/n^{10}. To the best of our knowledge, this is the best sample complexity in the
existing literature for noiseless matrix completion without additional side information [19, 22]. For
the noisy setting, Algorithm 1 enjoys the same sample complexity O(μ0 n r log² n) as the noiseless
case, if ε_noise = O(μ0 r/(mk)). In addition, Algorithm 1 inherits the benefits of the adaptive sampling
scheme. The vast majority of results in the passive sampling scenario require both row and column
incoherence for exact/robust recovery [22]. In contrast, via adaptive sampling we can relax the
incoherence assumption on the row space of the underlying matrix and are therefore more applicable.
We compare our result with several related lines of research in the prior work. While many online
matrix completion algorithms have been proposed recently, they either lack solid theoretical
guarantees [17], or require strong assumptions on the streaming data [19, 21, 13, 18]. Specifically,
Krishnamurthy et al. [18] proposed an algorithm that requires column subset selection in the noisy
case, which might be impractical in the online setting as we cannot measure columns that do not
arrive. Focusing on a similar online matrix completion problem, Lois et al. [21] assumed that a)
there is a good initial estimate for the column space; b) the column space changes slowly; c) the
base vectors of the column space are dense; d) the support of the measurements changes by at least a
certain amount. In contrast, our assumptions are much simpler and more realistic.
We mention another related line of research ? matched subspace detection. The goal of matched
subspace detection is to decide whether an incomplete signal/vector lies within a given subspace [5, 4].
It is highly related to the procedure of our algorithm in each round, where we aim at determining
whether an arriving vector belongs to a given subspace based on partial and noisy observations. Prior
work targeting on this problem formalizes the task as a hypothesis testing problem. So they assume
a specific random distribution on the noise, e.g., Gaussian, and choose ?k by fixing the probability
of false alarm in the hypothesis testing [5, 23]. Compared with this, our result does not have any
assumption on the noise structure/distribution.
3.2 Sparse Random Noise
In this section, we discuss life-long matrix completion on a simpler noise model but with a stronger
recovery guarantee. We assume that noise is sparse, meaning that the total number of noisy columns
is small compared to the total number of columns n. The noisy columns may arrive at any time, and
each noisy column is assumed to be drawn i.i.d. from a non-degenerate distribution. Our goal is to
exactly recover the underlying matrix and identify the noise with high probability.
We use an algorithm similar to Algorithm 1 to attack the problem, with η_k = 0. The challenge is that
here we frequently add noise vectors to the dictionary, and so we need to distinguish the noise from the
clean columns and remove it from the dictionary at the end of the algorithm. To resolve the issue,
we additionally record the support of the representation coefficients in each round when we represent
the arriving vector by the linear combinations of the columns in the dictionary matrix. On one hand,
the noise vectors in the dictionary fail to represent any column, because they are random. So if the
representation coefficient corresponding to a column in the dictionary is always 0, it is convincing to
identify the column as noise. On the other hand, to avoid recognizing a true base vector as noise,
we make a mild assumption that the underlying column space is identifiable. Typically, that means
for each direction in the underlying subspace, there are at least two clean data points having non-zero
projection on that direction. We argue that the assumption is indispensable, since without it there
is an identifiability issue between the clean data and the noise. As an extreme example, we cannot
identify the black point in Figure 1 as clean data or as noise if we make no assumption on the
underlying subspace. To mitigate the problem, we assume that for each i ∈ [r] and a subspace U^r
with orthonormal basis, there are at least two columns L_{:a_i} and L_{:b_i} of L such that [U^r]^T_{:i} L_{:a_i} ≠ 0
and [U^r]^T_{:i} L_{:b_i} ≠ 0. The detailed algorithm can be found in the supplementary material.
3.2.1 Upper Bound
We now provide upper and lower bounds on the sample complexity of the above algorithm for exact
recovery of the underlying matrix. Our upper bound matches the lower bound up to a constant factor. We
then analyze a more benign setting, namely, the data lie on a mixture of low-rank subspaces with
Table 1: Comparisons of our sample complexity with the best prior results in the noise-free setting.

                        Complexity                                               Lower bound
  Passive Sampling      O(μ0 n r log²(n/δ)) [22]                                 O(μ0 n r log(n/δ)) [10]
  Adaptive Sampling     O(μ0 n r log²(r/δ)) [19];  O(μ0 n r log(r/δ)) (Ours)     O(μ0 n r log(r/δ)) (Ours)
dimensionality at most r. Our analysis leads to the following guarantee on the performance of the
above algorithm. The proof is in the supplementary material.
Theorem 4 (Exact Recovery under Random Noise). Let r be the rank of the underlying matrix L
with μ0-incoherent column space. Suppose that the noise E_{s0} of size m × s0 is drawn from any
non-degenerate distribution, and that the underlying subspace U^r is identifiable. Then our algorithm
exactly recovers the underlying matrix L, the column space U^r, and the outlier E_{s0} with probability
at least 1 − δ, provided that d ≥ c μ0 r log(r/δ) and s0 ≤ d − r − 1. The total sample complexity is
thus c μ0 r n log(r/δ), where c is a universal constant.
Theorem 4 implies an immediate result in the noise-free setting as ε_noise goes to zero. In particular,
O(μ0 n r log(r/δ)) measurements are sufficient so that our algorithm outputs a solution that is exact
with probability at least 1 − δ. This sample complexity improves over the existing results of
O(μ0 n r log²(n/δ)) [22] and O(μ0 n r^{3/2} log(r/δ)) [18], and over the O(μ0 n r log²(r/δ)) of
Theorem 1 when ε_noise = 0. Indeed, our sample complexity O(μ0 n r log(r/δ)) matches the lower
bound, as shown by Theorem 5 (see Table 1 for comparisons of sample complexity). We note
another paper, by Gittens [14], which showed that the Nyström method recovers a positive-semidefinite
matrix of rank r from uniformly sampling O(μ0 r log(r/δ)) columns. While this result matches our
sample complexity, the assumptions of positive semidefiniteness and of subsampling whole columns
are impractical in the online setting.
[Figure 1: Identifiability. Panel (a): an identifiable subspace; panel (b): an unidentifiable subspace; the underlying subspace is drawn in each panel.]
We compare Theorem 4 with prior methods on decomposing an incomplete matrix as the sum of a
low-rank term and a column-sparse term. Probably the best known such algorithm is Robust PCA via
Outlier Pursuit [25, 28, 27, 26]. Outlier Pursuit converts this problem
to a convex program:
    min_{L,E} ||L||_* + λ||E||_{2,1},   s.t.   P_Ω M = P_Ω(L + E),             (2)

where ||·||_* captures the low-rankness of the underlying subspace and ||·||_{2,1} captures the column
sparsity of the noise. Recent papers on Outlier Pursuit [26] prove that the solution to (2) exactly
recovers the underlying subspace, provided that d ≥ c1 μ0² r² log³ n and s0 ≤ c2 d⁴ n/(μ0⁵ r⁵ m³ log⁶ n)
for constants c1 and c2. Our result clearly outperforms the existing result in terms of the sample
complexity d, while our dependence on s0 is not always better (although in some cases better) when
n is large. Note that while Outlier Pursuit loads all columns simultaneously and so can exploit the
global low-rank structure, our algorithm is online and therefore cannot tolerate too much noise.
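For comparison, program (2) can be prototyped directly with an off-the-shelf convex solver. A minimal sketch assuming the cvxpy package, with the weight lam and the boolean observation mask supplied by the user (both names are ours):

    import cvxpy as cp

    def outlier_pursuit(M, mask, lam):
        # min ||L||_* + lam * ||E||_{2,1}  s.t.  P_Omega(L + E) = P_Omega(M)
        L = cp.Variable(M.shape)
        E = cp.Variable(M.shape)
        col_sparsity = cp.sum(cp.norm(E, 2, axis=0))   # sum of column l2 norms: ||E||_{2,1}
        constraints = [cp.multiply(mask, L + E) == cp.multiply(mask, M)]
        cp.Problem(cp.Minimize(cp.normNuc(L) + lam * col_sparsity), constraints).solve()
        return L.value, E.value

Unlike Algorithm 1, this formulation needs all columns in memory at once, which is exactly the offline advantage discussed above.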
3.2.2 Lower Bound
We now establish a lower bound on the sample complexity. Our lower bound shows that in our
adaptive sampling setting, one needs at least Ω(μ0 r n log(r/δ)) samples in order to uniquely
identify a certain matrix in the worst case. This lower bound matches our analysis of the upper bound in
Section 3.2.1.
Theorem 5 (Lower Bound on Sample Complexity). Let 0 < δ < 1/2, and let Ω ∼ Uniform(d) be the
index set of the row sampling, Ω ⊆ [m]. Suppose that U^r is μ0-incoherent. If the total sampling number
dn < c μ0 r n log(r/δ) for a constant c, then with probability at least 1 − δ, there is an example of M
such that under the sampling model of Section 2.1 (i.e., when a column arrives the choices are either
(a) randomly sample or (b) view the entire column), there exist infinitely many matrices L′ of rank r
obeying the μ0-incoherence condition on the column space such that L′_{Ω:} = L_{Ω:}.
The proof can be found in the supplementary material. We mention several lower bounds on the
sample complexity for passive matrix completion. The first is the paper of Candès and Tao [10], which
gives a lower bound of Ω(μ0 n r log(n/δ)) if the matrix has both incoherent rows and columns. Taking
a weaker assumption, Krishnamurthy and Singh [18, 19] showed that if the row space is coherent,
any passive sampling scheme followed by any recovery algorithm must take Ω(mn) measurements.
In contrast, Theorem 5 demonstrates that in the absence of row-space incoherence, exact recovery of
the matrix is possible with only Θ(μ0 n r log(r/δ)) samples, if the sampling scheme is adaptive.
3.2.3 Extension to Mixture of Subspaces
Theorem 5 gives a lower bound on the sample complexity in the worst case. In this section, we
explore the possibility of further reducing the sample complexity with more complex common
structure. We assume that the underlying subspace is a mixture of h independent subspaces³ [20],
each of which is of dimension at most τ ≤ r. Such an assumption naturally models settings in
which there are really h different categories of movies/news while they share a certain commonality
across categories. We can view this setting as a network with two layers: the first layer captures the
overall subspace with r metafeatures; the second layer is an output layer, consisting of metafeatures
each of which is a linear combination of only τ metafeatures in the first layer. See Figure 2 for a
visualization. Our argument shows that the sparse connections between the two layers significantly
improve the sample complexity.
[Figure 2: Subspace structure. Panel (a): a single subspace (underlying space, hidden layer, output layer); panel (b): a mixture of subspaces (Subspace 1, Subspace 2, hidden layer, output layer).]
Algorithmically, given a new column, we uniformly sample O(μ_τ τ² log r)
entries as our observations. We try to represent those elements by
a sparse linear combination of only τ columns in the basis matrix,
whose rows are truncated to those sampled indices; if we fail, we measure the column in full, add that
column into the dictionary, and repeat the procedure for the next arriving column. See supplementary
material for the detailed algorithm.
Regarding computational considerations, learning a τ-sparse representation of a given vector w.r.t.
a known dictionary can be done in polynomial time if the dictionary matrix satisfies the restricted
isometry property [9], or trivially if τ is a constant [2]. This can be done by applying ℓ1 minimization
or a brute-force algorithm, respectively. Indeed, many real datasets match the constant-τ assumption,
e.g., face images [6] (each person lies on a subspace of dimension τ = 9), 3D motion trajectories [12]
(each object lies on a subspace of dimension τ = 4), and handwritten digits [16] (each script lies on a
subspace of dimension τ = 12). So our algorithm is applicable in all these settings; a brute-force sketch is given below.
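The brute-force version of this sparse-representation test, feasible for constant τ as noted above, can be sketched as follows (our code; D_omega holds the dictionary rows restricted to the sampled indices, and the tolerance tol is an assumption of ours):

    import itertools
    import numpy as np

    def tau_sparse_fit(D_omega, y_omega, tau, tol=1e-9):
        # Try to write y_omega as a combination of at most tau columns of D_omega.
        # Returns (support, coefficients) on success, or None on failure, in which
        # case the algorithm would measure the column in full.
        k = D_omega.shape[1]
        for size in range(1, min(tau, k) + 1):
            for S in itertools.combinations(range(k), size):
                A = D_omega[:, list(S)]
                c, *_ = np.linalg.lstsq(A, y_omega, rcond=None)
                if np.linalg.norm(A @ c - y_omega) <= tol:
                    return list(S), c
        return None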
Theoretically, the following theorem provides a strong guarantee for our algorithm. The proof can be
found in the supplementary material.
Theorem 6 (Mixture of Subspaces). Let r be the rank of the underlying matrix L. Suppose that the
columns of L lie on a mixture of identifiable and independent subspaces, each of which is of dimension
at most τ. Denote by μ_τ the maximal incoherence over all τ-combinations of L. Let the noise model
be that of Theorem 4. Then our algorithm exactly recovers the underlying matrix L, the column space
U^r, and the outlier E_{s0} with probability at least 1 − δ, provided that d ≥ c μ_τ τ² log(r/δ) for some
global constant c and s0 ≤ d − τ − 1. The total sample complexity is thus c μ_τ τ² n log(r/δ).
As a concrete example, if the incoherence parameter μ_τ is a global constant and the dimension τ of
each subspace is far less than r, the sample complexity of O(μ_τ n τ² log(r/δ)) is significantly better
than the complexity of O(μ0 n r log(r/δ)) for the structure of a single subspace in Theorem 4. This
argument shows that the sparse connections between the two layers improve the sample complexity.
4 Experimental Results
Bounded Deterministic Noise: We verify the estimated error of our algorithm in Theorem 1 under
bounded deterministic noise. Our synthetic data are generated as follows. We construct 5 base
vectors {u_i}_{i=1}^5 by sampling their entries from N(0, 1). The underlying matrix L is then generated
by L = [u_1 1_{200}^T, Σ_{i=1}^2 u_i 1_{200}^T, Σ_{i=1}^3 u_i 1_{200}^T, Σ_{i=1}^4 u_i 1_{200}^T, Σ_{i=1}^5 u_i 1_{1,200}^T] ∈ R^{100×2,000}, each
column of which is normalized to unit ℓ2 norm. Finally, we add bounded yet unstructured noise
to each column, with noise level ε_noise = 0.6. We randomly pick 20% of the entries to be unobserved
(a sketch of this data generation is given below, after Figure 3). The
left figure in Figure 3 shows the comparison between our estimated error⁴ and the true error of our
³h linear subspaces are independent if the dimensionality of their sum is equal to the sum of their dimensions.
⁴The estimated error is up to a constant factor.
[Figure 3 plots omitted in extraction: the left panel shows the estimated and true errors versus column index; the right two panels show success fractions over rank/m and observations/m for matrices of size 50×500 and 100×1,000.]
Figure 3: Left Figure: Approximate recovery under bounded deterministic noise with estimated error.
Right Two Figures: Exact recovery under sparse random noise with varying rank and sample size.
White Region: Nuclear norm minimization (passive sampling) succeeds. White and Gray Regions:
Our algorithm (adaptive sampling) succeeds. Black Region: Our algorithm fails. It shows that the
success region of our algorithm strictly contains that of the passive sampling method.
algorithm. The result demonstrates that empirically, our estimated error successfully predicts the
trend of the true algorithmic error.
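For reproducibility, one way to generate this synthetic instance (our sketch; the seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.standard_normal((100, 5))                       # five N(0,1) base vectors
    blocks = [u[:, :k].sum(axis=1) for k in range(1, 6)]    # u1, u1+u2, ..., u1+...+u5
    widths = [200, 200, 200, 200, 1200]                     # 4*200 + 1200 = 2000 columns
    L = np.hstack([np.outer(b, np.ones(w)) for b, w in zip(blocks, widths)])
    L = L / np.linalg.norm(L, axis=0)                       # unit l2 norm per column
    E = rng.standard_normal(L.shape)
    E = 0.6 * E / np.linalg.norm(E, axis=0)                 # noise level eps_noise = 0.6
    M = L + E                                               # then hide 20% of the entries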
Sparse Random Noise: We then verify the exact recoverability of our algorithm under sparse random
noise. The synthetic data are generated as follows. We construct the underlying matrix L = XY
as a product of m × r and r × n i.i.d. N(0, 1) matrices. The sparse random noise is drawn from a
standard Gaussian distribution such that s0 ≤ d − r − 1. For each problem size (50 × 500 and
100 × 1,000), we test with different rank ratios r/m and measurement ratios d/m. The experiment is
run 10 times. We define that the algorithm succeeds if ||L̂ − L||_F ≤ 10⁻⁶, rank(L̂) = r, and the
recovered support of the noise is exact for at least one experiment. The right two figures in Figure 3
plot the fraction of correct recoveries: white denotes perfect recovery by the nuclear norm minimization
approach (2); white+gray represents perfect recovery by our algorithm; black indicates failure for
both methods. It shows that the success region of our algorithm strictly contains that of the prior
approach. Moreover, the phase transition of our algorithm is nearly a linear function w.r.t. r and d.
This is consistent with our prediction d = Θ(μ0 r log(r/δ)) when δ is small, e.g., poly(1/n).
Mixture of Subspaces: To test the performance of our algorithm on a mixture of subspaces, we
conduct an experiment on the Hopkins 155 dataset. The Hopkins 155 database is composed of 155
matrices/tasks, each of which consists of multiple data points drawn from two or three motion objects.
The trajectory of each object lies in a subspace. We input the data matrix to our algorithm with varying
sample sizes. Table 2 records the average relative error ||L̂ − L||_F / ||L||_F over 10 trials for the first five
tasks in the dataset. It shows that our algorithm is able to recover the target matrix with high accuracy.
Another experiment comparing the sample complexity of a single subspace vs. a mixture of subspaces
can be found in the supplementary material.
Table 2: Life-long matrix completion on the first 5 tasks in the Hopkins 155 database.

  #Task   Motion Number   d = 0.8m      d = 0.85m     d = 0.9m      d = 0.95m
  #1      2               9.4 × 10⁻³    6.0 × 10⁻³    3.4 × 10⁻³    2.6 × 10⁻³
  #2      3               5.9 × 10⁻³    4.4 × 10⁻³    2.4 × 10⁻³    1.9 × 10⁻³
  #3      2               6.3 × 10⁻³    4.8 × 10⁻³    2.8 × 10⁻³    7.2 × 10⁻⁴
  #4      2               7.1 × 10⁻³    6.8 × 10⁻³    6.1 × 10⁻³    1.5 × 10⁻³
  #5      2               8.7 × 10⁻³    5.8 × 10⁻³    3.1 × 10⁻³    1.2 × 10⁻³
5 Conclusions
In this paper, we study life-long matrix completion, which aims at online recovering an m × n matrix of
rank r under two realistic noise models: bounded deterministic noise and sparse random noise. Our
result advances the state-of-the-art work and matches the lower bound under sparse random noise. In
a more benign setting where the columns of the underlying matrix lie on a mixture of subspaces, we
show that a smaller sample complexity suffices to exactly recover the target matrix. It would be
interesting to extend our results to other realistic noise models, including random classification noise
or malicious noise, previously studied in the context of supervised classification [1, 3].
Acknowledgements. This work was supported in part by NSF grants CCF-1535967, CCF-1422910, and CCF-1451177, a Sloan Fellowship, and a Microsoft Research Fellowship.
References
[1] P. Awasthi, M. F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. In ACM Symposium on Theory of Computing, pages 449-458. ACM, 2014.
[2] M.-F. Balcan, A. Blum, and S. Vempala. Efficient representations for life-long learning and autoencoding. In Annual Conference on Learning Theory, 2015.
[3] M.-F. F. Balcan and V. Feldman. Statistical active learning algorithms. In Advances in Neural Information Processing Systems, pages 1295-1303, 2013.
[4] L. Balzano, R. Nowak, and B. Recht. Online identification and tracking of subspaces from highly incomplete information. In Annual Allerton Conference on Communication, Control, and Computing, pages 704-711, 2010.
[5] L. Balzano, B. Recht, and R. Nowak. High-dimensional matched subspace detection when data are missing. In IEEE International Symposium on Information Theory, pages 1638-1642, 2010.
[6] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218-233, 2003.
[7] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925-936, 2010.
[8] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[9] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489-509, 2006.
[10] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053-2080, 2010.
[11] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. Hruschka Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI Conference on Artificial Intelligence, 2010.
[12] J. Costeira and T. Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29(3):159-179, 1998.
[13] C. Dhanjal, R. Gaudel, and S. Clémençon. Online matrix completion through nuclear norm regularisation. In SIAM International Conference on Data Mining, pages 623-631, 2014.
[14] A. Gittens. The spectral norm error of the naïve Nyström extension. arXiv preprint arXiv:1110.5305, 2011.
[15] A. Gopnik, A. N. Meltzoff, and P. K. Kuhl. How Babies Think: The Science of Childhood. Phoenix, 2001.
[16] T. Hastie and P. Y. Simard. Metrics and models for handwritten character recognition. Statistical Science, pages 54-65, 1998.
[17] R. Kennedy, C. J. Taylor, and L. Balzano. Online completion of ill-conditioned low-rank matrices. In IEEE Global Conference on Signal and Information Processing, pages 507-511, 2014.
[18] A. Krishnamurthy and A. Singh. Low-rank matrix and tensor completion via adaptive sampling. In Advances in Neural Information Processing Systems, pages 836-844, 2013.
[19] A. Krishnamurthy and A. Singh. On the power of adaptivity in matrix completion and approximation. arXiv preprint arXiv:1407.3619, 2014.
[20] G. Lerman and T. Zhang. ℓp-recovery of the most significant subspace among multiple subspaces with outliers. Constructive Approximation, 40(3):329-385, 2014.
[21] B. Lois and N. Vaswani. Online matrix completion and online robust PCA. In IEEE International Symposium on Information Theory, pages 1826-1830, 2015.
[22] B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413-3430, 2011.
[23] L. L. Scharf and B. Friedlander. Matched subspace detectors. IEEE Transactions on Signal Processing, 42(8):2146-2157, 1994.
[24] M. K. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(10):2287-2320, 2008.
[25] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. IEEE Transactions on Information Theory, 58(5):3047-3064, 2012.
[26] H. Zhang, Z. Lin, and C. Zhang. Completing low-rank matrices with corrupted samples from few coefficients in general basis. IEEE Transactions on Information Theory, 62(8):4748-4768, 2016.
[27] H. Zhang, Z. Lin, C. Zhang, and E. Chang. Exact recoverability of robust PCA via outlier pursuit with tight recovery bounds. In AAAI Conference on Artificial Intelligence, pages 3143-3149, 2015.
[28] H. Zhang, Z. Lin, C. Zhang, and J. Gao. Relations among some low rank subspace recovery models. Neural Computation, 27:1915-1950, 2015.
6,170 | 6,581 | Improved Variational Inference
with Inverse Autoregressive Flow
Diederik P. Kingma
[email protected]
Tim Salimans
[email protected]
Rafal Jozefowicz
[email protected]
Ilya Sutskever
[email protected]
Xi Chen
[email protected]
Max Welling*
[email protected]
Abstract
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables. We propose a new type of
normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier
published flows, scales well to high-dimensional latent spaces. The proposed flow
consists of a chain of invertible transformations, where each transformation is
based on an autoregressive neural network. In experiments, we show that IAF
significantly improves upon diagonal Gaussian approximate posteriors. In addition,
we demonstrate that a novel type of variational autoencoder, coupled with IAF, is
competitive with neural autoregressive models in terms of attained log-likelihood
on natural images, while allowing significantly faster synthesis.
1 Introduction
Stochastic variational inference (Blei et al., 2012; Hoffman et al., 2013) is a method for scalable
posterior inference with large datasets using stochastic gradient ascent. It can be made especially
efficient for continuous latent variables through latent-variable reparameterization and inference
networks, amortizing the cost, resulting in a highly scalable learning procedure (Kingma and Welling,
2013; Rezende et al., 2014; Salimans et al., 2014). When using neural networks for both the
inference network and generative model, this results in class of models called variational autoencoders (Kingma and Welling, 2013) (VAEs). A general strategy for building flexible inference
networks, is the framework of normalizing flows (Rezende and Mohamed, 2015). In this paper we
propose a new type of flow, inverse autoregressive flow (IAF), which scales well to high-dimensional
latent space.
At the core of our proposed method lie Gaussian autoregressive functions that are normally used
for density estimation: functions that take as input a variable with some specified ordering such
as multidimensional tensors, and output a mean and standard deviation for each element of the
input variable conditioned on the previous elements. Examples of such functions are autoregressive
neural density estimators such as RNNs, MADE (Germain et al., 2015), PixelCNN (van den Oord
et al., 2016b) or WaveNet (van den Oord et al., 2016a) models. We show that such functions
can often be turned into invertible nonlinear transformations of the input, with a simple Jacobian
determinant. Since the transformation is flexible and the determinant known, it can be used as a
normalizing flow, transforming a tensor with relatively simple known density, into a new tensor with
more complicated density that is still cheaply computable. In contrast with most previous work on
* University of Amsterdam, University of California Irvine, and the Canadian Institute for Advanced Research (CIFAR).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
(a) Prior distribution
(b) Posteriors in standard VAE (c) Posteriors in VAE with IAF
Figure 1: Best viewed in color. We fitted a variational auto-encoder (VAE) with a spherical Gaussian
prior, and with factorized Gaussian posteriors (b) or inverse autoregressive flow (IAF) posteriors (c)
to a toy dataset with four datapoints. Each colored cluster corresponds to the posterior distribution of
one datapoint. IAF greatly improves the flexibility of the posterior distributions, and allows for a
much better fit between the posteriors and the prior.
improving inference models including previously used normalizing flows, this transformation is well
suited to high-dimensional tensor variables, such as spatio-temporally organized variables.
We demonstrate this method by improving inference networks of deep variational auto-encoders.
In particular, we train deep variational auto-encoders with latent variables at multiple levels of the
hierarchy, where each stochastic variable is a three-dimensional tensor (a stack of featuremaps), and
demonstrate improved performance.
2 Variational Inference and Learning
Let x be a (set of) observed variable(s), z a (set of) latent variable(s) and let p(x, z) be the parametric
model of their joint distribution, called the generative model defined over the variables. Given a
dataset X = {x1 , ..., xN } we typically wish to perform maximum marginal likelihood learning of its
parameters, i.e. to maximize
log p(X) = Σ_{i=1}^{N} log p(x^(i)),   (1)
but in general this marginal likelihood is intractable to compute or differentiate directly for flexible
generative models, e.g. when components of the generative model are parameterized by neural
networks. A solution is to introduce q(z|x), a parametric inference model defined over the latent
variables, and optimize the variational lower bound on the marginal log-likelihood of each observation
x:
log p(x) ≥ E_{q(z|x)}[ log p(x, z) − log q(z|x) ] = L(x; θ)   (2)
where θ indicates the parameters of the p and q models. Keeping in mind that Kullback-Leibler divergences D_KL(·) are non-negative, it's clear that L(x; θ) is a lower bound on log p(x), since it can be written as follows:

L(x; θ) = log p(x) − D_KL( q(z|x) ‖ p(z|x) )   (3)
There are various ways to optimize the lower bound L(x; θ); for continuous z it can be done efficiently
through a re-parameterization of q(z|x), see e.g. (Kingma and Welling, 2013; Rezende et al., 2014).
As can be seen from equation (3), maximizing L(x; θ) w.r.t. θ will concurrently maximize log p(x)
and minimize D_KL(q(z|x) ‖ p(z|x)). The closer D_KL(q(z|x) ‖ p(z|x)) is to 0, the closer L(x; θ) will
be to log p(x), and the better an approximation our optimization objective L(x; θ) is to our true objective log p(x). Also, minimization of D_KL(q(z|x) ‖ p(z|x)) can be a goal in itself, if we're interested
in using q(z|x) for inference after optimization. In any case, the divergence D_KL(q(z|x) ‖ p(z|x))
is a function of our parameters through both the inference model and the generative model, and
increasing the flexibility of either is generally helpful towards our objective.
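As a concrete illustration of this training objective, the following is a minimal NumPy sketch of a single-sample Monte Carlo estimate of L(x; θ) for a factorized Gaussian posterior, using the re-parameterization z = μ + σ ⊙ ε. The callables `encoder` and `log_p_xz` are hypothetical placeholders, not part of the paper or its released code.

```python
import numpy as np

def gaussian_elbo_estimate(x, encoder, log_p_xz, rng):
    """One-sample Monte Carlo estimate of L(x) = E_q[log p(x,z) - log q(z|x)].

    encoder(x) -> (mu, sigma): parameters of a diagonal Gaussian q(z|x).
    log_p_xz(x, z) -> float: log joint density of the generative model.
    """
    mu, sigma = encoder(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + sigma * eps                      # re-parameterized sample from q(z|x)
    # log density of the diagonal Gaussian q(z|x), evaluated at the sample z
    log_q = -np.sum(np.log(sigma) + 0.5 * eps**2 + 0.5 * np.log(2 * np.pi))
    return log_p_xz(x, z) - log_q             # unbiased estimate of the lower bound
```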
Note that in models with multiple latent variables, the inference model is typically factorized into
partial inference models with some ordering; e.g. q(z_a, z_b|x) = q(z_a|x) q(z_b|z_a, x). We'll write
q(z|x, c) to denote such partial inference models, conditioned on both the data x and a further context
c which includes the previous latent variables according to the ordering.
2.1 Requirements for Computational Tractability
Requirements for the inference model, in order to be able to efficiently optimize the bound, are that it
is (1) computationally efficient to compute and differentiate its probability density q(z|x), and (2)
computationally efficient to sample from, since both these operations need to be performed for each
datapoint in a minibatch at every iteration of optimization. If z is high-dimensional and we want
to make efficient use of parallel computational resources like GPUs, then parallelizability of these
operations across dimensions of z is a large factor towards efficiency. This requirement restricts the
class of approximate posteriors q(z|x) that are practical to use. In practice this often leads to the use
of diagonal posteriors, e.g. q(z|x) ∼ N(μ(x), σ²(x)), where μ(x) and σ(x) are often nonlinear
functions parameterized by neural networks. However, as explained above, we also need the density
q(z|x) to be sufficiently flexible to match the true posterior p(z|x).
2.2 Normalizing Flow
Normalizing Flow (NF), introduced by (Rezende and Mohamed, 2015) in the context of stochastic
gradient variational inference, is a powerful framework for building flexible posterior distributions
through an iterative procedure. The general idea is to start off with an initial random variable with a
relatively simple distribution with known (and computationally cheap) probability density function,
and then apply a chain of invertible parameterized transformations ft , such that the last iterate zT has
a more flexible distribution2 :
z₀ ∼ q(z₀|x),   z_t = f_t(z_{t−1}, x)   ∀ t = 1 . . . T   (4)

As long as the Jacobian determinant of each of the transformations f_t can be computed, we can still
compute the probability density function of the last iterate:

log q(z_T|x) = log q(z₀|x) − Σ_{t=1}^{T} log |det( dz_t / dz_{t−1} )|   (5)
However, (Rezende and Mohamed, 2015) experiment with only a very limited family of such
invertible transformations with known Jacobian determinant, namely:

f_t(z_{t−1}) = z_{t−1} + u h(wᵀ z_{t−1} + b)   (6)

where u and w are vectors, wᵀ is w transposed, b is a scalar and h(·) is a nonlinearity, such that
u h(wᵀ z_{t−1} + b) can be interpreted as an MLP with a bottleneck hidden layer with a single unit. Since
information goes through the single bottleneck, a long chain of transformations is required to capture
high-dimensional dependencies.
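To make eqs. (4)–(6) concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) that applies a chain of planar-flow steps of the form of eq. (6) and accumulates the log-density correction of eq. (5); for such a rank-one update the Jacobian determinant is 1 + h′(wᵀz + b) uᵀw, which is cheap to evaluate.

```python
import numpy as np

def planar_flow_chain(z0, log_q0, params):
    """Apply T planar-flow steps (eq. 6) and track log q(z_T|x) via eq. (5).

    params: list of (u, w, b) tuples, one per step; u, w are vectors, b a scalar.
    Returns the final sample z_T and its log-density under the flow.
    """
    z, log_q = z0, log_q0
    for u, w, b in params:
        a = w @ z + b
        h, h_prime = np.tanh(a), 1.0 - np.tanh(a) ** 2
        # |det df/dz| = |1 + h'(w^T z + b) * u^T w| for this rank-one update
        log_det = np.log(np.abs(1.0 + h_prime * (u @ w)))
        z = z + u * h
        log_q = log_q - log_det
    return z, log_q
```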
3 Inverse Autoregressive Transformations
In order to find a type of normalizing flow that scales well to high-dimensional space, we consider
Gaussian versions of autoregressive autoencoders such as MADE (Germain et al., 2015) and the
PixelCNN (van den Oord et al., 2016b). Let y be a variable modeled by such a model, with some
chosen ordering on its elements y = {y_i}_{i=1}^{D}. We will use [μ(y), σ(y)] to denote the function mapping the
vector y to the vectors μ and σ. Due to the autoregressive structure, the Jacobian is lower triangular
with zeros on the diagonal: ∂[μ_i, σ_i]/∂y_j = [0, 0] for j ≥ i. The elements [μ_i(y_{1:i−1}), σ_i(y_{1:i−1})]
are the predicted mean and standard deviation of the i-th element of y, which are functions of only
the previous elements in y.
Sampling from such a model is a sequential transformation from a noise vector ε ∼ N(0, I) to the
corresponding vector y: y₀ = μ₀ + σ₀ · ε₀, and for i > 0, y_i = μ_i(y_{1:i−1}) + σ_i(y_{1:i−1}) · ε_i.
² Here x is the context, such as the value of the datapoint. In case of models with multiple levels of latent
variables, the context also includes the value of the previously sampled latent variables.
Algorithm 1: Pseudo-code of an approximate posterior with Inverse Autoregressive Flow (IAF)
Data:
    x: a datapoint, and optionally other conditioning information
    θ: neural network parameters
    EncoderNN(x; θ): encoder neural network, with additional output h
    AutoregressiveNN[t](z, h; θ): autoregressive neural networks, with additional input h
    sum(·): sum over vector elements
    sigmoid(·): element-wise sigmoid function
Result:
    z: a random sample from q(z|x), the approximate posterior distribution
    l: the scalar value of log q(z|x), evaluated at sample z

[μ, σ, h] ← EncoderNN(x; θ)
ε ∼ N(0, I)
z ← σ ⊙ ε + μ
l ← −sum(log σ + ½ ε² + ½ log(2π))
for t ← 1 to T do
    [m, s] ← AutoregressiveNN[t](z, h; θ)
    σ ← sigmoid(s)
    z ← σ ⊙ z + (1 − σ) ⊙ m
    l ← l − sum(log σ)
end
The computation involved in this transformation is clearly proportional to the dimensionality D. Since
variational inference requires sampling from the posterior, such models are not interesting for direct
use in such applications. However, the inverse transformation is interesting for normalizing flows, as
we will show. As long as we have σ_i > 0 for all i, the sampling transformation above is a one-to-one
transformation, and can be inverted: ε_i = (y_i − μ_i(y_{1:i−1})) / σ_i(y_{1:i−1}).
We make two key observations, important for normalizing flows. The first is that this inverse
transformation can be parallelized, since (in case of autoregressive autoencoders) computations of
the individual elements ε_i do not depend on each other. The vectorized transformation is:

ε = (y − μ(y)) / σ(y)   (7)

where the subtraction and division are elementwise.
The second key observation is that this inverse autoregressive operation has a simple Jacobian
determinant. Note that due to the autoregressive structure, ∂[μ_i, σ_i]/∂y_j = [0, 0] for j ≥ i. As a
result, the transformation has a lower triangular Jacobian (∂ε_i/∂y_j = 0 for j > i), with a simple
diagonal: ∂ε_i/∂y_i = σ_i⁻¹. The determinant of a lower triangular matrix equals the product of the
diagonal terms. As a result, the log-determinant of the Jacobian of the transformation is remarkably
simple and straightforward to compute:

log |det( dε / dy )| = − Σ_{i=1}^{D} log σ_i(y)   (8)
The combination of model flexibility, parallelizability across dimensions, and simple log-determinant,
make this transformation interesting for use as a normalizing flow over high-dimensional latent space.
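The contrast between the sequential forward sampling and the parallel inverse of eq. (7), with the log-determinant of eq. (8), can be sketched as follows in NumPy; `ar_net` stands in for any Gaussian autoregressive network (e.g. a MADE) and is an assumed interface, not code from the paper.

```python
import numpy as np

def sample_forward(eps, ar_net):
    """Sequential sampling: y_i depends on y_{1:i-1}, so this loop cannot be parallelized."""
    y = np.zeros_like(eps)
    for i in range(len(eps)):
        mu, sigma = ar_net(y)            # only mu[i], sigma[i] are valid given y[:i]
        y[i] = mu[i] + sigma[i] * eps[i]
    return y

def inverse_and_logdet(y, ar_net):
    """Parallel inverse (eq. 7) and its log-Jacobian-determinant (eq. 8)."""
    mu, sigma = ar_net(y)                # one pass; all elements computed at once
    eps = (y - mu) / sigma
    log_det = -np.sum(np.log(sigma))
    return eps, log_det
```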
4 Inverse Autoregressive Flow (IAF)
We propose a new type of normalizing flow (eq. (5)), based on transformations that are equivalent to
the inverse autoregressive transformation of eq. (7) up to reparameterization. See algorithm 1 for
pseudo-code of an approximate posterior with the proposed flow. We let an initial encoder neural
network output μ₀ and σ₀, in addition to an extra output h, which serves as an additional input to
each subsequent step in the flow. We draw a random sample ε ∼ N(0, I), and initialize the chain
with:

z₀ = μ₀ + σ₀ ⊙ ε   (9)
[Figure 2 diagram: x → Encoder NN → (μ, σ, h); the initial sample z = σ ⊙ ε + μ is passed through a chain of IAF steps, each driven by an Autoregressive NN.]
Figure 2: Like other normalizing flows, drawing samples from an approximate posterior with Inverse
Autoregressive Flow (IAF) consists of an initial sample z drawn from a simple distribution, such as a
Gaussian with diagonal covariance, followed by a chain of nonlinear invertible transformations of z,
each with a simple Jacobian determinant.
The flow consists of a chain of T of the following transformations:

z_t = μ_t + σ_t ⊙ z_{t−1}   (10)
where at the t-th step of the flow, we use a different autoregressive neural network with inputs z_{t−1}
and h, and outputs μ_t and σ_t. The neural network is structured to be autoregressive w.r.t. z_{t−1}, such
that for any choice of its parameters, the Jacobians dμ_t/dz_{t−1} and dσ_t/dz_{t−1} are triangular with zeros on the
diagonal. As a result, dz_t/dz_{t−1} is triangular with σ_t on the diagonal, with determinant ∏_{i=1}^{D} σ_{t,i}. (Note
that the Jacobian w.r.t. h does not have constraints.) Following eq. (5), the density under the final
iterate is:

log q(z_T|x) = − Σ_{i=1}^{D} ( ½ ε_i² + ½ log(2π) + Σ_{t=0}^{T} log σ_{t,i} )   (11)
The flexibility of the distribution of the final iterate zT , and its ability to closely fit to the true posterior,
increases with the expressivity of the autoregressive models and the depth of the chain. See figure 2
for an illustration.
A numerically stable version, inspired by the LSTM-type update, is one where we let the autoregressive
network output [m_t, s_t], two unconstrained real-valued vectors:

[m_t, s_t] ← AutoregressiveNN[t](z_{t−1}, h; θ)   (12)

and compute z_t as:

σ_t = sigmoid(s_t)   (13)
z_t = σ_t ⊙ z_{t−1} + (1 − σ_t) ⊙ m_t   (14)

This version is shown in algorithm 1. Note that this is just a particular version of the update of
eq. (10), so the simple computation of the final log-density of eq. (11) still applies.
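For concreteness, the following NumPy sketch mirrors Algorithm 1 with the numerically stable update of eqs. (12)–(14); `encoder_nn` and the list `ar_nns` are assumed callables standing in for the trained networks, not part of the paper's released implementation.

```python
import numpy as np

def iaf_posterior_sample(x, encoder_nn, ar_nns, rng):
    """Draw z ~ q(z|x) through an IAF chain and return (z, log q(z|x)).

    encoder_nn(x) -> (mu, sigma, h); each ar_nn(z, h) -> (m, s) is autoregressive in z.
    """
    mu, sigma, h = encoder_nn(x)
    eps = rng.standard_normal(mu.shape)
    z = sigma * eps + mu
    log_q = -np.sum(np.log(sigma) + 0.5 * eps**2 + 0.5 * np.log(2 * np.pi))
    for ar_nn in ar_nns:                       # T steps of IAF
        m, s = ar_nn(z, h)
        gate = 1.0 / (1.0 + np.exp(-s))        # sigmoid(s), eq. (13)
        z = gate * z + (1.0 - gate) * m        # eq. (14)
        log_q -= np.sum(np.log(gate))          # log-det correction per step
    return z, log_q
```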
We found it beneficial for results to parameterize or initialize the parameters of each
AutoregressiveNN[t] such that its outputs s_t are, before optimization, sufficiently positive, such as
close to +1 or +2. This leads to an initial behaviour that updates z only slightly with each step of IAF.
Such a parameterization is known as a "forget gate bias" in LSTMs, as investigated by Jozefowicz
et al. (2015).
Perhaps the simplest special version of IAF is one with a simple step, and a linear autoregressive
model. This transforms a Gaussian variable with diagonal covariance, to one with linear dependencies,
i.e. a Gaussian distribution with full covariance. See appendix A for an explanation.
Autoregressive neural networks form a rich family of nonlinear transformations for IAF. For nonconvolutional models, we use the family of masked autoregressive networks introduced in (Germain
et al., 2015) for the autoregressive neural networks. For CIFAR-10 experiments, which benefits more
from scaling to high dimensional latent space, we use the family of convolutional autoregressive
autoencoders introduced by (van den Oord et al., 2016b,c).
We found that results improved when reversing the ordering of the variables after each step in the IAF
chain. This is a volume-preserving transformation, so the simple form of eq. (11) remains unchanged.
5
5 Related work
Inverse autoregressive flow (IAF) is a member of the family of normalizing flows, first discussed
in (Rezende and Mohamed, 2015) in the context of stochastic variational inference. In (Rezende and
Mohamed, 2015) two specific types of flows are introduced: planar flows and radial flows. These
flows are shown to be effective for problems with relatively low-dimensional latent space (at most a few
hundred dimensions). It is not clear, however, how to scale such flows to much higher-dimensional
latent spaces, such as latent spaces of generative models of larger images, and how planar and radial
flows can leverage the topology of latent space, as is possible with IAF. Volume-conserving neural
architectures were first presented in (Deco and Brauer, 1995), as a form of nonlinear independent
component analysis.
Another type of normalizing flow, introduced by (Dinh et al., 2014) (NICE), uses similar transformations to IAF. In contrast with IAF, this type of transformation updates only half of the latent variables
z_{1:D/2} per step, adding a vector f(z_{D/2+1:D}), which is a neural-network-based function of the
remaining latent variables z_{D/2+1:D}. Such large blocks have the advantage of a computationally cheap
inverse transformation, and the disadvantage of typically requiring longer chains. In experiments,
(Rezende and Mohamed, 2015) found that this type of transformation is generally less powerful than
other types of normalizing flow, in experiments with a low-dimensional latent space. Concurrently
to our work, NICE was extended to high-dimensional spaces in (Dinh et al., 2016) (Real NVP). An
empirical comparison would be an interesting subject of future research.
A potentially powerful transformation is the Hamiltonian flow used in Hamiltonian Variational
Inference (Salimans et al., 2014). Here, a transformation is generated by simulating the flow
of a Hamiltonian system consisting of the latent variables z, and a set of auxiliary momentum
variables. This type of transformation has the additional benefit that it is guided by the exact
posterior distribution, and that it leaves this distribution invariant for small step sizes. Such as
transformation could thus take us arbitrarily close to the exact posterior distribution if we can apply
it for a sufficient number of times. In practice, however, Hamiltonian Variational Inference is very
demanding computationally. Also, it requires an auxiliary variational bound to account for the
auxiliary variables, which can impede progress if the bound is not sufficiently tight.
An alternative method for increasing the flexibility of variational inference is the introduction
of auxiliary latent variables (Salimans et al., 2014; Ranganath et al., 2015; Tran et al., 2015) and
corresponding auxiliary inference models. Latent variable models with multiple layers of stochastic
variables, such as the one used in our experiments, are often equivalent to such auxiliary-variable
methods. We combine deep latent variable models with IAF in our experiments, benefiting from both
techniques.
6 Experiments
We empirically evaluate IAF by applying the idea to improve variational autoencoders. Please see
appendix C for details on the architectures of the generative model and inference models. Code for
reproducing key empirical results is available online³.
6.1 MNIST
In this experiment we follow a similar implementation of the convolutional VAE as in (Salimans
et al., 2014), with ResNet (He et al., 2015) blocks. A single layer of Gaussian stochastic units
of dimension 32 is used. To investigate how the expressiveness of approximate posterior affects
performance, we report results of different IAF posteriors with varying degrees of expressiveness.
We use a 2-layer MADE (Germain et al., 2015) to implement one IAF transformation, and we stack
multiple IAF transformations with ordering reversed between every other transformation.
Results: Table 1 shows results on MNIST for these types of posteriors. Results indicate that as
approximate posterior becomes more expressive, generative modeling performance becomes better.
Also worth noting is that an expressive approximate posterior also tightens variational lower bounds
as expected, making the gap between variational lower bounds and marginal likelihoods smaller. By
making IAF deep and wide enough, we can achieve the best published log-likelihood on dynamically
³ https://github.com/openai/iaf
Table 1: Generative modeling results on the dynamically sampled binarized MNIST version used
in previous publications (Burda et al., 2015). Shown are averages; the numbers between brackets
are standard deviations across 5 optimization runs. The right column shows an importance sampled
estimate of the marginal likelihood for each model with 128 samples. Best previous results are reproduced in the first segment: [1]: (Salimans et al., 2014) [2]: (Burda et al., 2015) [3]: (Kaae Sønderby
et al., 2016) [4]: (Tran et al., 2015)

Model | VLB | log p(x) ≈
Convolutional VAE + HVI [1] | -83.49 | -81.94
DLGM 2hl + IWAE [2] | | -82.90
LVAE [3] | -81.74 |
DRAW + VGP [4] | -79.88 |
Diagonal covariance | -84.08 (± 0.10) | -81.08 (± 0.08)
IAF (Depth = 2, Width = 320) | -82.02 (± 0.08) | -79.77 (± 0.06)
IAF (Depth = 2, Width = 1920) | -81.17 (± 0.08) | -79.30 (± 0.08)
IAF (Depth = 4, Width = 1920) | -80.93 (± 0.09) | -79.17 (± 0.08)
IAF (Depth = 8, Width = 1920) | -80.80 (± 0.07) | -79.10 (± 0.07)
[Figure 3 diagram: bottom-up ResNet blocks and top-down ResNet blocks with layer prior z ∼ p(z_i|z_{>i}) and layer posterior z ∼ q(z_i|z_{>i}, x), ELU nonlinearities; panels show the deep generative model, the bidirectional inference model, and the combined VAE with bidirectional inference.]
Figure 3: Overview of our ResNet VAE with bidirectional inference. The posterior of each layer is
parameterized by its own IAF.
binarized MNIST: -79.10. On Hugo Larochelle's statically binarized MNIST, our VAE with deep
IAF achieves a log-likelihood of -79.88, which is slightly worse than the best reported result, -79.2,
using the PixelCNN (van den Oord et al., 2016b).
6.2 CIFAR-10
We also evaluated IAF on the CIFAR-10 dataset of natural images. Natural images contain a much
greater variety of patterns and structure than MNIST images; in order to capture this structure well,
we experiment with a novel architecture, ResNet VAE, with many layers of stochastic variables, and
based on residual convolutional networks (ResNets) (He et al., 2015, 2016). Please see our appendix
for details.
Log-likelihood. See table 2 for a comparison to previously reported results. Our architecture with
IAF achieves 3.11 bits per dimension, which is better than other published latent-variable models,
and almost on par with the best reported result using the PixelCNN. See the appendix for more
experimental results. We suspect that the results can be further improved with more steps of flow,
which we leave to future work.
Synthesis speed. Sampling took about 0.05 seconds/image with the ResNet VAE model, versus
52.0 seconds/image with the PixelCNN model, on an NVIDIA Titan X GPU. We sampled from the
PixelCNN naïvely by sequentially generating a pixel at a time, using the full generative model at each
iteration. With custom code that only evaluates the relevant part of the network, PixelCNN sampling
could be sped up significantly; however the speedup will be limited on parallel hardware due to the
Table 2: Our results with ResNet VAEs on CIFAR-10 images, compared to earlier results, in average
number of bits per data dimension on the test set. The number for convolutional DRAW is an upper
bound, while the ResNet VAE log-likelihood was estimated using importance sampling.

Method | bits/dim ≤
Results with tractable likelihood models:
Uniform distribution (van den Oord et al., 2016b) | 8.00
Multivariate Gaussian (van den Oord et al., 2016b) | 4.70
NICE (Dinh et al., 2014) | 4.48
Deep GMMs (van den Oord and Schrauwen, 2014) | 4.00
Real NVP (Dinh et al., 2016) | 3.49
PixelRNN (van den Oord et al., 2016b) | 3.00
Gated PixelCNN (van den Oord et al., 2016c) | 3.03
Results with variationally trained latent-variable models:
Deep Diffusion (Sohl-Dickstein et al., 2015) | 5.40
Convolutional DRAW (Gregor et al., 2016) | 3.58
ResNet VAE with IAF (Ours) | 3.11
sequential nature of the sampling operation. Efficient sampling from the ResNet VAE is a parallel
computation that does not require custom code.
7 Conclusion
We presented inverse autoregressive flow (IAF), a new type of normalizing flow that scales well to
high-dimensional latent space. In experiments we demonstrated that autoregressive flow leads to
significant performance gains compared to similar models with factorized Gaussian approximate
posteriors, and we report close to state-of-the-art log-likelihood results on CIFAR-10, for a model
that allows much faster sampling.
Acknowledgements
We thank Jascha Sohl-Dickstein, Karol Gregor, and many others at Google Deepmind for interesting
discussions. We thank Harri Valpola for referring us to Gustavo Deco?s relevant pioneering work on
a form of inverse autoregressive flow applied to nonlinear independent component analysis.
References
Blei, D. M., Jordan, M. I., and Paisley, J. W. (2012). Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1367–1374.
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., and Bengio, S. (2015). Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
Burda, Y., Grosse, R., and Salakhutdinov, R. (2015). Importance weighted autoencoders. arXiv preprint arXiv:1509.00519.
Clevert, D.-A., Unterthiner, T., and Hochreiter, S. (2015). Fast and accurate deep network learning by Exponential Linear Units (ELUs). arXiv preprint arXiv:1511.07289.
Deco, G. and Brauer, W. (1995). Higher order statistical decorrelation without information loss. Advances in Neural Information Processing Systems, pages 247–254.
Dinh, L., Krueger, D., and Bengio, Y. (2014). NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516.
Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2016). Density estimation using Real NVP. arXiv preprint arXiv:1605.08803.
Germain, M., Gregor, K., Murray, I., and Larochelle, H. (2015). MADE: Masked autoencoder for distribution estimation. arXiv preprint arXiv:1502.03509.
Gregor, K., Besse, F., Rezende, D. J., Danihelka, I., and Wierstra, D. (2016). Towards conceptual compression. arXiv preprint arXiv:1604.08772.
Gregor, K., Mnih, A., and Wierstra, D. (2013). Deep AutoRegressive Networks. arXiv preprint arXiv:1310.8499.
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347.
Jozefowicz, R., Zaremba, W., and Sutskever, I. (2015). An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2342–2350.
Kaae Sønderby, C., Raiko, T., Maaløe, L., Kaae Sønderby, S., and Winther, O. (2016). How to train deep variational autoencoders and probabilistic ladder networks. arXiv preprint arXiv:1602.02282.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. Proceedings of the 2nd International Conference on Learning Representations.
Ranganath, R., Tran, D., and Blei, D. M. (2015). Hierarchical variational models. arXiv preprint arXiv:1511.02386.
Rezende, D. and Mohamed, S. (2015). Variational inference with normalizing flows. In Proceedings of The 32nd International Conference on Machine Learning, pages 1530–1538.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1278–1286.
Salimans, T. (2016). A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734.
Salimans, T. and Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868.
Salimans, T., Kingma, D. P., and Welling, M. (2014). Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460.
Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585.
Tran, D., Ranganath, R., and Blei, D. M. (2015). Variational Gaussian process. arXiv preprint arXiv:1511.06499.
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. (2016a). WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.
van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. (2016b). Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759.
van den Oord, A., Kalchbrenner, N., Vinyals, O., Espeholt, L., Graves, A., and Kavukcuoglu, K. (2016c). Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328.
van den Oord, A. and Schrauwen, B. (2014). Factoring variations in natural images with deep Gaussian mixture models. In Advances in Neural Information Processing Systems, pages 3518–3526.
Zagoruyko, S. and Komodakis, N. (2016). Wide residual networks. arXiv preprint arXiv:1605.07146.
Zeiler, M. D., Krishnan, D., Taylor, G. W., and Fergus, R. (2010). Deconvolutional networks. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2528–2535. IEEE.
6,171 | 6,582 | Neurons Equipped with Intrinsic Plasticity
Learn Stimulus Intensity Statistics
Travis Monk
Cluster of Excellence Hearing4all
University of Oldenburg
26129 Oldenburg, Germany
[email protected]
Cristina Savin
IST Austria
3400 Klosterneuburg
Austria
[email protected]
Jörg Lücke
Cluster of Excellence Hearing4all
University of Oldenburg
26129 Oldenburg, Germany
[email protected]
Abstract
Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to
learning. Here, we develop a normative interpretation of intrinsic plasticity (IP)
as a key component of unsupervised learning. We introduce a novel generative
mixture model that accounts for the class-specific statistics of stimulus intensities,
and we derive a neural circuit that learns the input classes and their intensities.
We will analytically show that inference and learning for our generative model
can be achieved by a neural circuit with intensity-sensitive neurons equipped with
a specific form of IP. Numerical experiments verify our analytical derivations and
show robust behavior for artificial and natural stimuli. Our results link IP to nontrivial input statistics, in particular the statistics of stimulus intensities for classes
to which a neuron is sensitive. More generally, our work paves the way toward
new classification algorithms that are robust to intensity variations.
1 Introduction
Confronted with the continuous flow of experience, the brain takes amorphous sensory inputs and
translates them into coherent objects and scenes. This process requires neural circuits to extract key
regularities from their inputs and to use those regularities to interpret novel experiences. Such learning is enabled by a variety of plasticity mechanisms which allow neural networks to represent the
statistics of the world. The most well-studied plasticity mechanism is synaptic plasticity, where the
strength of connections between neurons changes as a function of their activity [1]. Other plasticity
mechanisms exist and operate in tandem. One example is intrinsic plasticity (IP), where a neuron?s
response to inputs changes as a function of its own past activity. It is a challenge for computational
neuroscience to understand how different plasticity rules jointly contribute to circuit computation.
While much is known about the contribution of Hebbian plasticity to different variants of unsupervised learning, including linear and non-linear sparse coding [2?5], ICA [6], PCA [7] or clustering [8?12], other aspects of unsupervised learning remain unclear. First, on the computational side,
there are many situations in which the meaning of inputs should be invariant to its overall gain. For
example, a visual scene's content does not depend on light intensity, and a word utterance should
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
be recognized irrespective of its volume. Current models do not explicitly take into account such
gain variations, and often eliminate them using an ad hoc preprocessing step that normalizes inputs [8, 9, 13]. Second, on the biological side, the roles of other plasticity mechanisms such as IP,
and their potential contributions to unsupervised learning, remain poorly understood.
IP changes the input-output function of a neuron depending on its past activity. Typically, IP is a
homeostatic negative feedback loop that preserves a neuron's activation levels despite its changing
input [14, 15]. There is no consensus on which quantities IP regulates, e.g. a neuron's firing rate, its
internal Ca concentration, its spiking threshold, etc. In modeling work, IP is usually implemented
as a simple threshold change that controls the mean firing rate, although some models propose
more sophisticated rules that also constrain higher order statistics of the neuron's output [6, 16].
Functionally, while there have been suggestions that IP can play an important role in circuit function
[6, 10, 11, 17], its role in unsupervised learning is still not fully understood.
Here we show that a neural network that combines specific forms of Hebbian plasticity and IP
can learn the statistics of inputs with variable gain. We propose a novel generative model named
Product-Poisson-Gamma (PPG) that explicitly accounts for class-specific variation in input gain.
We then derive, from first principles, a neural circuit that implements inference and learning for this
model. Our derivation yields a novel IP rule as a required component of unsupervised learning given
gain variations. Our model is unique in that it directly links IP to the gain variations of the pattern to
which a neuron is sensitive, which may be tested experimentally. Beyond neurobiology, the models
provide a new class of efficient clustering algorithms that do not require data preprocessing. The
learned representations also permit efficient classification from very little labeled data.
2 The Product-Poisson-Gamma model
Intensity can vary drastically across images although the features present in them are the same.¹ This
variability constitutes a challenge for learning and is typically eliminated through a preprocessing
stage in which the inputs are normalized [9]. While such preprocessing can make learning easier, ad
hoc normalizations may be suboptimal, or may require additional parameters to be set by hand. More
importantly, input normalization has the side-effect of losing information about intensity, which
might have helped identify the features themselves. For instance, in computer vision objects of the
same class are likely to have similar surface properties, resulting in a characteristic distribution of
light intensities. Light intensities can therefore aid classification. In the neural context, the overall
drive to neurons may vary, e.g. due to attentional gain modulation, despite the underlying encoded
features being the same.
A principled way to address intensity variations is to explicitly model them in a generative model
describing the data. Then we can use that generative model to derive optimal inference and learning
for such data and map them to a corresponding neural circuit implementation. Let us assume the
stimuli are drawn from one of C classes, and let us denote a stimulus by y⃗. Given a stimulus /
data point y⃗, we wish to infer the class c that generated it (see Figure 1). Let y⃗ depend not only on
the class c, but also on a continuous random variable z, representing the intensity of the stimulus,
that itself depends on c as well as some parameters Θ. Given these dependencies Pr(y⃗|c, z, Θ) and
Pr(z|c, Θ), Bayes' rule specifies how to infer the class c and hidden variable z given an observation
of y⃗:

Pr(c, z|y⃗, Θ) = Pr(y⃗|c, z, Θ) Pr(z|c, Θ) Pr(c|Θ) / Σ_{c′} ∫ Pr(y⃗|c′, z′, Θ) Pr(z′|c′, Θ) Pr(c′|Θ) dz′   (1)
c
We can obtain neurally-implementable expressions for the posterior if our data generative model is
a mixture model with non-negative noise, e.g. a Poisson mixture model [9]. We extend the Poisson
mixture model by including an additional statistical description of stimulus intensity. The Gamma
distribution is a natural choice due to its conjugacy with the Poisson distribution. Let each of the D
elements in the vector y⃗|z, c, Θ (e.g. pixels in an image) be independent and Poisson-distributed, let
z|c, Θ be Gamma-distributed, and let the prior of each class be uniform:

Pr(y⃗|c, z, Θ) = ∏_{d=1}^{D} Pois(y_d; z W_cd);   Pr(z|c, Θ) = Gam(z; α_c, β_c);   Pr(c|Θ) = 1/C
¹ We use images as inputs and intensity as a measure of input gain as a running example. Our arguments
apply regardless of the type of sensory input, e.g. the volume of sound or the concentration of odor.
where all W, α, and β represent the parameters of the model. To avoid ambiguity in scales, we
constrain the weights of the model to sum to one, Σ_d W_cd = 1. We call this generative model
a Product-Poisson-Gamma (PPG). While the multiplicative interaction between features and the
intensity or gain variable is reminiscent of the Gaussian Scale Mixture (GSM) generative model, note
that PPG has separate intensity distributions for each of the classes; each is a Gamma distribution
with a (possibly unique) shape parameter $\alpha_c$ and rate parameter $\beta_c$. Furthermore, the non-Gaussian
observation noise is critical for deriving the circuit dynamics.
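To make the generative process concrete, here is a minimal numpy sketch that samples from the PPG model exactly as defined above; the function name and interface are our own illustration.

```python
import numpy as np

def sample_ppg(W, alpha, beta, n_samples, rng=None):
    """Sample from the Product-Poisson-Gamma model.

    W     : (C, D) array of class templates, each row summing to one.
    alpha : (C,) Gamma shape parameters, one per class.
    beta  : (C,) Gamma rate parameters, one per class.
    """
    rng = np.random.default_rng(rng)
    C, D = W.shape
    c = rng.integers(C, size=n_samples)                 # uniform class prior Pr(c) = 1/C
    z = rng.gamma(shape=alpha[c], scale=1.0 / beta[c])  # intensity z | c ~ Gam(alpha_c, beta_c)
    y = rng.poisson(lam=z[:, None] * W[c])              # y_d | z, c ~ Pois(z * W_cd)
    return y, c, z
```

Note that numpy's Gamma sampler is parameterized by shape and scale, so the rate $\beta_c$ enters as scale $1/\beta_c$.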
The model is general and flexible, yet it is sufficiently constrained to allow for closed-form joint
posteriors. As shown in Appendix A, the joint posterior of the class and intensity is:
$$\Pr(c, z\,|\,\vec{y}, \Theta) = \frac{\mathrm{NB}\!\left(\bar{y};\, \alpha_c, \frac{1}{\beta_c + 1}\right) \exp\!\left(\sum_d y_d \ln W_{cd}\right)}{\sum_{c'} \mathrm{NB}\!\left(\bar{y};\, \alpha_{c'}, \frac{1}{\beta_{c'} + 1}\right) \exp\!\left(\sum_d y_d \ln W_{c'd}\right)}\; \mathrm{Gam}(z;\, \alpha_c + \bar{y},\, \beta_c + 1),$$
where $\bar{y} = \sum_d y_d$, and NB represents the negative binomial distribution.
We also obtain a closed-form expression of the posterior marginalized over z, which takes the form
of a softmax function weighted by negative binomials:
$$\Pr(c\,|\,\vec{y}, \Theta) = \frac{\mathrm{NB}\!\left(\bar{y};\, \alpha_c, \frac{1}{\beta_c + 1}\right) \exp\!\left(\sum_d y_d \ln W_{cd}\right)}{\sum_{c'} \mathrm{NB}\!\left(\bar{y};\, \alpha_{c'}, \frac{1}{\beta_{c'} + 1}\right) \exp\!\left(\sum_{d'} y_{d'} \ln W_{c'd'}\right)} \tag{2}$$
This is a straightforward generalization of the standard softmax, used for optimal learning in winner-take-all (WTA) networks [2, 8, 9, 11] and WTA-based microcircuits [18]. Note that Eqn. 2 represents
the optimal way to integrate evidence for class membership originating from stimulus intensity (parameterized by $\vec{\alpha}$ and $\vec{\beta}$) and pattern "shape" (parameterized by W). If one of the two is not instructive, then the corresponding terms cancel out: if the patterns have identical shape (W with identical
rows), then the softmax drops out and only negative binomial terms remain, and if all pattern classes
have the same intensity distribution, then the posterior reduces to the standard softmax function as
in previous work [2, 8–11].
To facilitate the link to neural dynamics, Eqn. 2 can be simplified by approximating the negative
binomial distribution as Poisson. In the limit that $\alpha_c \to \infty$ with the mean $\mu_c \equiv \alpha_c/\beta_c$ held constant,
the negative binomial distribution is:
$$\lim_{\alpha_c \to \infty,\; \alpha_c/\beta_c = \mathrm{const.}} \mathrm{NB}\!\left(\bar{y};\, \alpha_c, \frac{1}{\beta_c + 1}\right) = \mathrm{Pois}\!\left(\bar{y};\, \frac{\alpha_c}{\beta_c}\right) \equiv \mathrm{Pois}(\bar{y};\, \mu_c).$$
In this limit, Eqn. 2 becomes:
$$\Pr(c\,|\,\vec{y}, \Theta) \approx \frac{\exp\!\left(\sum_{d'} y_{d'} \ln(W_{cd'}\, \mu_c) - \mu_c\right)}{\sum_{c'} \exp\!\left(\sum_{d'} y_{d'} \ln(W_{c'd'}\, \mu_{c'}) - \mu_{c'}\right)} \tag{3}$$
which can be evaluated by a neural network using soft-WTA dynamics [9].
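For concreteness, a minimal sketch of evaluating Eq. (3) in numpy, computed in the log domain for numerical stability; the names are ours:

```python
import numpy as np

def ppg_posterior(y, W, mu):
    """Approximate class posterior Pr(c | y) of Eq. (3).

    y  : (D,) count vector.
    W  : (C, D) templates with rows summing to one.
    mu : (C,) mean intensities mu_c = alpha_c / beta_c.
    """
    I = y @ np.log(W * mu[:, None]).T - mu   # I_c = sum_d y_d ln(W_cd mu_c) - mu_c
    I -= I.max()                             # stabilize the softmax
    s = np.exp(I)
    return s / s.sum()
```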
3 Expectation-Maximization of PPG-generated data
As a starting point for deriving a biologically-plausible neural network for learning PPG-generated
data, let us first consider optimal learning derived from the Expectation-Maximization (EM) algorithm [19]. Given a set of N data points $\vec{y}^{(n)}$, we seek the parameters $\Theta = \{W, \mu\}$ that maximize
the data likelihood given the PPG-model defined above. We use the EM formulation introduced
in [20] and optimize the free-energy given by:
$$\mathcal{F}(\Theta_t, \Theta_{t-1}) = \sum_n \sum_{c'} \Pr(c'\,|\,\vec{y}^{(n)}, \Theta_{t-1}) \left(\ln \Pr(\vec{y}^{(n)}\,|\,c', \Theta_t) + \ln \Pr(c'\,|\,\Theta_t)\right) + H(\Theta_{t-1}).$$
Here, $H(\Theta_{t-1})$ is the Shannon entropy of the posterior as a function of the previous parameter values.
We can find the M-step update rules for the parameters of the model $\mu_c$ and $W_{cd}$ by taking the partial
derivative of $\mathcal{F}(\Theta_t, \Theta_{t-1})$ w.r.t. the desired parameter and setting it to zero. As shown in Appendix B,
the resultant update rule for $\mu_{c,t}$ is:
$$\frac{\partial \mathcal{F}(\Theta_t, \Theta_{t-1})}{\partial \mu_{c,t}} = 0 \;\Rightarrow\; \mu_{c,t} = \frac{\sum_n \Pr(c\,|\,\vec{y}^{(n)}, \Theta_{t-1})\, \bar{y}^{(n)}}{\sum_n \Pr(c\,|\,\vec{y}^{(n)}, \Theta_{t-1})} \tag{4}$$
The M-step update rules for the weights $W_{cd}$ are found by setting the corresponding partial derivative
of $\mathcal{F}(\Theta_t, \Theta_{t-1})$ to zero, under the constraint that $\sum_d W_{cd} = 1$. Using Lagrange multipliers $\lambda_c$ yields
the following update rule (see Appendix B):
$$\frac{\partial \mathcal{F}(\Theta_t, \Theta_{t-1})}{\partial W_{cd,t}} + \frac{\partial}{\partial W_{cd,t}} \sum_{c'} \lambda_{c'} \left(\sum_{d'} W_{c'd',t} - 1\right) = 0$$
$$\Rightarrow\; W_{cd,t} = \frac{\sum_n y_d^{(n)} \Pr(c\,|\,\vec{y}^{(n)}, \Theta_{t-1})}{\sum_{d'} \sum_n y_{d'}^{(n)} \Pr(c\,|\,\vec{y}^{(n)}, \Theta_{t-1})}. \tag{5}$$
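Putting Eqs. (3), (4), and (5) together, one EM pass in the Poisson limit can be sketched as follows; this is our own minimal implementation, not the authors' code:

```python
import numpy as np

def em_step(Y, W, mu):
    """One EM iteration for PPG data in the Poisson limit.

    Y : (N, D) matrix of count vectors.
    W : (C, D) current templates; mu : (C,) current mean intensities.
    """
    # E-step (Eq. 3): responsibilities R[n, c] = Pr(c | y^(n))
    I = Y @ np.log(W * mu[:, None]).T - mu
    I -= I.max(axis=1, keepdims=True)
    R = np.exp(I)
    R /= R.sum(axis=1, keepdims=True)
    # M-step for mu (Eq. 4) and W (Eq. 5)
    ybar = Y.sum(axis=1)                        # \bar{y}^(n)
    mu_new = (R * ybar[:, None]).sum(axis=0) / R.sum(axis=0)
    W_new = R.T @ Y
    W_new /= W_new.sum(axis=1, keepdims=True)   # enforces sum_d W_cd = 1
    return W_new, mu_new
```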
As numerical verification, Figure 1 illustrates the evolution of parameters $\mu_c$ and $W_{cd}$ yielded by
the EM algorithm on artificial data. Our artificial data set consists of four classes of rectangles on
a grid of 10x10 pixels. Rectangles from different classes have different sizes and positions and are
represented by a generative vector $W_c^{\mathrm{gen}}$.
We generate a data set by drawing a large number N of observations of $W_c^{\mathrm{gen}}$, with each class
equiprobable. We then draw a random variable z from a Gamma distribution with parameters $\alpha_c$
and $\beta_c$ that depend on the class of each observation. Then, given $W_c^{\mathrm{gen}}$ and z, we create a data vector
$\vec{y}^{(n)}$ by adding Poisson noise to each pixel. With a set of N data vectors $\vec{y}^{(n)}$, we then perform EM
to find the parameters $W_{cd}$ and $\mu_c$ that maximize the likelihood of the data set (at least locally). The
E-step evaluates Equation 2 for each data vector, and the M-step evaluates Equations 4 and 5. Figure
1 shows that, after about five iterations, the EM algorithm returns the values of $W_{cd}$ and $\mu_c$ that were
used to generate the data set, i.e. the parameter values that maximize the data likelihood.
Figure 1: The evolution of model parameters yielded by the EM algorithm on artificial data. A: Four classes of
rectangles represented by the vector $W_c^{\mathrm{gen}}$, with the values of $\mu_c$ for each class displayed to the left. B: Evolution
of the parameters $W_{cd}$ for successive iterations of the EM algorithm. C: Evolution of the parameters $\mu_c$, with
dashed lines indicating the values from the generative model. The EM algorithm returns the values of $W_{cd}$ and
$\mu_c$ that were used to generate the data set, i.e. the parameter values that maximize the data likelihood. For these
plots, we generated a data set of 2000 inputs. $W_c^{\mathrm{gen}} = 100$ for white pixels and 1 for black pixels. The shape
and rate parameters of the Gamma distributions, from the top class to the bottom, are $\alpha = [98, 112, 128, 144]$
and $\beta = [7, 7.5, 8, 8.5]$, giving $\mu_c = \alpha_c/\beta_c = [14, 15, 16, 17]$.
4 Optimal neural learning for varying stimulus intensities
For PPG-generated data, the posterior distribution of the class given an observation is approximately
the softmax function (or soft-WTA, Eqn. 3). Neural networks that implement the softmax function,
usually via some form of lateral inhibition, have been extensively investigated [2, 8–11, 21]. Thus,
inference in our model reduces to well-understood neural circuit dynamics.
The key remaining challenge is to analytically relate optimal learning as derived by EM to circuit
plasticity. To map abstract random variables to neural counterparts, we consider a complete bipartite neural network, with the input layer corresponding to the observables y and the hidden layer
representing the latent causes of the observables, i.e. classes.² (² The number of hidden neurons does not necessarily need to equal the number of classes; see Figure 3.) The network is feedforward; each
neuron in the input layer connects to each neuron in the hidden layer via synaptic weights Wcd ,
where $c \in [1, C]$ indexes the C hidden neurons and $d \in [1, D]$ indexes the D input neurons.
Let each of the hidden neurons have a standard activity variable, $s_c$, and additionally an intrinsic
parameter $\mu_c$ that represents its excitability. Let the activity of each hidden neuron be given by
Eqn. 2. The activity of each hidden neuron is then the posterior distribution for one particular class,
given the inputs it receives from the input layer, its synaptic weights, and its excitability:
$$s_c = \frac{\exp(I_c)}{\sum_{c'} \exp(I_{c'})}; \qquad I_c = \sum_{d'} y_{d'} \ln(W_{cd'}\, \mu_c) - \mu_c.$$
The weights of the neural network $W_{cd}$ are plastic and change according to a Hebbian learning rule
with synaptic scaling [22]:
$$\Delta W_{cd} = \epsilon_W \left(s_c y_d - s_c \mu_c \bar{W}_c W_{cd}\right), \tag{6}$$
where $\epsilon_W$ is a small and positive learning rate, and $\bar{W}_c = \sum_d W_{cd}$.
The intrinsic parameters $\mu_c$ are also plastic and change according to a similar learning rule:
$$\Delta \mu_c = \epsilon_\mu\, s_c \left(\sum_d y_d - \mu_c\right), \tag{7}$$
where $\epsilon_\mu$ is another small positive learning rate. This type of regulation of excitability is homeostatic in form, but differs from standard implementations in that the excitability changes not only
depending on the neuron output, $s$, but also on the net input to the neuron (see also [17] for a formal
link between $\sum_d y_d$ and average incoming inputs).
Appendix C shows that these online update rules enforce the desired weight normalization, with $\bar{W}_c$
converging to one. Assuming weight convergence, and assuming a small learning rate and a large
set of data points, the weights and intrinsic parameters converge to (see [9] and Appendix C):
$$W_{cd}^{\mathrm{conv}} \to \frac{\sum_n y_d^{(n)}\, s_c}{\sum_{d'} \sum_n y_{d'}^{(n)}\, s_c}; \qquad \mu_c^{\mathrm{conv}} = \frac{\sum_n s_c\, \bar{y}^{(n)}}{\sum_n s_c}.$$
Comparing these convergence expressions with the EM updates (Eqns. 5 and 4) and inserting the
definition $s_c = \Pr(c\,|\,\vec{y}, \Theta)$, we see that the neural dynamics given in Eqns. 6 and 7 have the same
fixed points as optimal EM learning. The network can therefore find the parameter values that optimize the data likelihood using compact and neurally-plausible learning rules. Eqn. 6 is a standard
form of Hebbian plasticity with synaptic scaling, while Eqn. 7 states how the excitability of hidden
neurons should be governed by the gain of the inputs and the current to the neuron.
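A minimal sketch of the online circuit updates of Eqns. 6 and 7 (our own; the default learning rates follow the values used in Figure 2):

```python
import numpy as np

def online_update(y, W, mu, eps_w=0.005, eps_mu=0.005):
    """Apply Eqns. 6 and 7 to a single input vector y of shape (D,)."""
    I = y @ np.log(W * mu[:, None]).T - mu
    s = np.exp(I - I.max())
    s /= s.sum()                                    # soft-WTA activities s_c
    Wbar = W.sum(axis=1)                            # \bar{W}_c
    W += eps_w * (np.outer(s, y) - (s * mu * Wbar)[:, None] * W)   # Eqn. 6
    mu += eps_mu * s * (y.sum() - mu)               # Eqn. 7: IP driven by net input
    return W, mu
```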
5 Numerical Experiments
To verify our analytical results, we first investigated learning in the derived neural network using
data generated according to the PPG model. Figure 2 illustrates the evolution of parameters $\mu_c$ and
$W_{cd}$ yielded by the neural network on artificial data (the same as used for Figure 1). The neural
network learns the synaptic weights and intrinsic parameters that were used to generate the data set,
i.e. the parameter values that maximize the data likelihood.
Since our artificial data was PPG-generated, one can expect the neural network to learn the classes
and intensities quickly and accurately. To test the neural network on more realistic data, we followed
a number of related studies [8?12] and used the MNIST as a standard dataset containing different
stimulus classes. The input to the network was 28x28 pixel images (converted to vectors) from
the MNIST dataset. We present our results for the digits 0-3 for visual ease and simulation speed;
our results on the full dataset are qualitatively similar. We added an offset of 1 to all pixels and
rescaled them so that no pixel was greater than 1. The $\mu_c$ were initialized to be the mean intensity
of all digit classes as calculated from our modified MNIST training set. Each $W_{cd}$ was initialized
as $W_{cd} \sim \mathrm{Pois}(W_{cd};\, \lambda_d) + 1$, where $\lambda_d$ is the mean of each pixel over all classes and is calculated
from our modified MNIST training set.
Figure 3 shows an example run using C = 16 hidden neurons. It shows the change in both neural
weights and intrinsic excitabilities $\mu_c$ during learning. We observe that the weights change to represent the digit classes and converge relatively quickly (panels A, B). We verified that they sum to 1
Figure 2: The evolution of model parameters yielded by the neural network on artificial data generated from
the same model as that used in Figure 1. A: Four classes of rectangles with the values of $\mu_c$ for each class
displayed to the left. B: Evolution of the synaptic weights $W_{cd}$ that feed each hidden unit after 0, 20, 40,
..., 120 time steps, respectively. C: Evolution of the intrinsic parameters $\mu_c$ over 4000 time steps, with dashed
lines indicating the values from the generative model. The neural network returns the values of $W_{cd}$ and $\mu_c$ that
were used to generate the data set, i.e. the parameter values that maximize the data likelihood. For these plots,
$\epsilon_W = \epsilon_\mu = .005$, D = 100 (for a 10x10 pixel grid), C = 4, initialized weights were uniformly-distributed
between .01 and .06, and initialized intrinsic parameters were uniformly-distributed between 10 and 20.
Figure 3: The neural network's performance on a reduced MNIST dataset (the digits 0 to 3). A: Representatives of the input digits. B: The network's synaptic weights during training. Each square represents the weights
feeding one hidden neuron. Each box of 16 squares represents the weights feeding each of the C = 16 hidden
neurons after initialization, and after subsequent iterations over the training set. The network learns different
writing styles for different digits. C: The network learns the average intensities, i.e. the sum of the pixels in
an image, of each class of digit in MNIST. Algorithms that impose ad hoc intensity normalization in their preprocessing cannot learn these intensities. The horizontal dashed lines are the average intensities of each digit,
with 1 having the lowest overall luminance and 0 the largest. The average $\mu_c$ for all hidden units representing
a given digit converge to those ground truth values. D: The network's learned intensity differences improve
classification performance. The percentage of correct digit classifications by a network with IP (solid lines) is
higher than that by a network without IP (dashed lines). This result is robust to the number of iterations over
the dataset and the number of labels used to calculate the Bayesian classifier used in [9].
for each class at convergence (not shown). We also observe that the network's IP dynamics allow it
to learn the average intensities of each class of digit (panel C). The thin horizontal dashed lines are
the true values for $\mu_c$ as calculated from the MNIST test set using its ground-truth label information.
IP modifies the network's excitability parameters $\mu$ to converge to their true values. Our network is
not only robust to variations in intensity, but learns their class-specific values.
A network that learns the excitability parameters $\mu$ exhibits a higher classification rate than a network
without IP (panel D). We computed the performance of the network derived in Sec. 4 on unnormalized data in comparison with a network without IP (all else being equal). As a performance measure
we used the classification error (computed using the same Bayesian classifier as used in [9]). Classification success rates were calculated with very few labels, using 0.5% (thin lines) and 5% (thick
lines) of labels in the training set (both settings for both networks). The classification performance
of the network with IP outperforms that of the network without it. This result suggests that the
differences in intensities in MNIST, albeit visually small, are sufficient to aid classification.
Finally, Figure 4 shows that the neural network can learn classes that differ only in their intensities.
The dataset used for Figure 4 comprises 40000 images of two types of sphere: dull and shiny. The
spheres were identical in shape and position, and we generated data points (i.e. images) under a
variety of lighting conditions. On average, the shiny spheres were brighter ($\mu_{\mathrm{shiny}} \approx 720$) than
the dull spheres ($\mu_{\mathrm{dull}} \approx 620$). The network represents the two classes in its learned weights and
intensities. Algorithms that utilize ad hoc normalization preprocessing schemes would have serious
difficulties learning input statistics for datasets of this kind.
Figure 4: The neural network can learn classes that differ only in their intensities. The dataset consisted of
either dull or shiny spheres. The network had C = 2 hidden neurons. A: Three pairs of squares represent
the weights feeding each hidden neuron after initialization (leftmost pair), 10 iterations (center pair), and 200
iterations (rightmost pair) over the training set. Note the rightmost pair, particularly how the right sphere
appears brighter than the left sphere. The right sphere corresponds to the shiny class and the left sphere to
the dull class. B: Learned mean intensities as a function of iterations over the training set. The dull spheres
have an average intensity of 620, and the shiny spheres 720. The network learns the classes and their average
intensities, even when data points from different classes have the same sizes and positions.
6 Discussion
Neural circuit models are powerful tools for understanding neural learning and information processing. They have attracted attention as inherently parallel information processing devices for analog
VLSI, a fast and power-efficient alternative to standard processor architectures [12, 23]. Much work
has investigated learning with winner-take-all (WTA) type networks [2, 8–12, 18, 21, 24]. A subset
of these studies [2, 8–11, 21] link synaptic plasticity in WTA networks to optimal learning, mostly
using mixture distributions to model input stimuli [8–11, 21]. Our contribution expands on these
results both computationally, by allowing for a robust treatment of variability in input gain, and
biologically, by providing a normative justification for intrinsic plasticity during learning. Our analytical results show that the PPG-generative model is tractable and neurally-implementable, while
our numerical results show that it is flexible and robust.
Our model provides a principled treatment of intensity variations, something ubiquitous in realistic
datasets. As a result, it allows for robust learning without requiring normalized input data. This addresses the criticisms (see [10]) of earlier WTA-like circuits [8,9] that required normalized data. We
found that explicitly accounting for intensity improves classification performance even for datasets
that have been size-normalized (e.g. MNIST), presumably by providing an additional dimension for
discriminating across latent features. Furthermore, we found that the learned representation of the
MNIST data allows for good classification in a semi-supervised setting, when only a small fraction
7
of the data is labeled. Thus, our model provides a starting point for constructing novel clustering
and classification algorithms following the general approach in [9].
The treatment of intensity as an explicit variable is not new. The well-investigated class of Gaussian
Scale Mixtures (GSM) is built on that idea. Nonetheless, while GSM and PPG share some conceptual similarities, they are mathematically distinct. While GSMs assume 1) Gaussian distributed
random variables and 2) a common scale variable [25], PPG assumes 1′) Poisson observation noise
and 2′) class-specific scale variables. Consequently, none of the GSM results carry over to our
work, and our PPG assumptions are critical for our derived intrinsic plasticity and Hebbian plasticity rules. It would be interesting to investigate a circuit analog of intensity parameter learning in a
GSM. Since this class of models is known to capture many features of afferent sensory neurons, we
might make more specific predictions concerning IP in V1. It would also be interesting to compare
the classification performance of a GSM with that of PPG on the same dataset. The nature of the
GSM generative model (linear combination of features with multiplicative gain modulation) makes
it an unusual choice for a classification task. However, in principle, one could use a GSM to learn a
representation of a dataset and train a classifier on it.
The optimal circuit implementation of learning in our generative model requires a particular form of
IP. The formulation of IP is a phenomenological one, reflecting the biological observation that the
excitability of a neuron changes in a negative feedback loop as a function of past activity [14, 15].
Mathematically, our model shares similarities with past IP models [6, 10, 17] with the important
difference that the controlled variable is the input current, rather than the output firing rate. Since
the two quantities are closely related, we expect it will be difficult to directly disambiguate between
IP models experimentally. Nonetheless, our model makes potentially testable predictions in terms
of the functional role of IP, by directly linking the excitability of individual neurons to nontrivial
statistics of their inputs, namely their average intensity under a Gamma distribution. Since past IP
work invariably assumes the target excitability is a fixed parameter, usually shared across neurons,
the link between neural excitability and real world statistics is very specific to our model and potentially testable experimentally. Furthermore, our work provides a computational rationale for the
dramatic variations in excitability across neurons, even within a local cortical circuit, which could
not be explained by traditional models.
The functional role for IP identified here complements previous proposals linking the regulation
of neuronal excitability to learning priors [11] or as posterior constraints [10, 26]. Ultimately, it
is likely that the role of IP is manifold. Recent theoretical work suggests that the net effect of
inputs on neural excitability may arise as a complex interaction between several forms of IP, some
homeostatic and others not [17]. Furthermore, different experimental paradigms may preferentially
expose one IP process over the others, which would explain the confusion within the literature on
the exact nature of biological IP. Taken together, these models point to a fundamental role of IP for
circuit computation in a variety of setups. Given its many possible roles, any approach based on
first principles is valuable, as it tightly connects IP to concrete stimulus properties in a way that can
translate into better-constrained experiments.
Acknowledgements. We acknowledge funding by the DFG within the Cluster of Excellence EXC
1077/1 (Hearing4all) and by grant LU 1196/5-1 (JL and TM) and the People Programme (Marie
Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under
REA grant agreement no. 291734 (CS).
References
[1] L F Abbott and S B Nelson. Synaptic plasticity: taming the beast. Nat Neurosci, 3:1178–1183, 2000.
[2] J Lücke and M Sahani. Maximal causes for non-linear component extraction. J Mach Learn Res, 9:1227–67, 2008.
[3] C J Rozell, D H Johnson, R G Baraniuk, and B A Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Comput, 20(10):2526–63, October 2008.
[4] J Lücke. Receptive field self-organization in a model of the fine-structure in V1 cortical columns. Neural Comput, 21(10):2805–45, 2009.
[5] J Zylberberg, J T Murphy, and M R Deweese. A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields. PLoS Comp Biol, 7(10):e1002250, 2011.
[6] C Savin, P Joshi, and J Triesch. Independent Component Analysis in Spiking Neurons. PLoS Comp Biol, 6(4):e1000757, April 2010.
[7] E Oja. A simplified neuron model as a principal component analyzer. J Math Biol, 15:267–273, 1982.
[8] B Nessler, M Pfeiffer, and W Maass. STDP enables spiking neurons to detect hidden causes of their inputs. In Adv Neural Inf Process Syst, pages 1357–1365, 2009.
[9] C Keck, C Savin, and J Lücke. Feedforward inhibition and synaptic scaling—two sides of the same coin? PLoS Comp Biol, 8(3):e1002432, 2012.
[10] S Habenschuss, J Bill, and B Nessler. Homeostatic plasticity in Bayesian spiking networks as expectation maximization with posterior constraints. In Adv Neural Inf Process Syst, pages 773–781, 2012.
[11] B Nessler, M Pfeiffer, L Buesing, and W Maass. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comp Biol, 9(4):e1003037, 2013.
[12] M Schmuker, T Pfeil, and M P Nawrot. A neuromorphic network for generic multivariate data classification. Proc Natl Acad Sci, 111(6):2081–2086, 2014.
[13] O Schwartz and E P Simoncelli. Natural sound statistics and divisive normalization in the auditory system. Adv Neural Inf Process Syst, pages 166–172, 2000.
[14] G Daoudal and D Debanne. Long-term plasticity of intrinsic excitability: learning rules and mechanisms. Learn Memory, 10(6):456–465, 2003.
[15] R H Cudmore and G G Turrigiano. Long-term potentiation of intrinsic excitability in LV visual cortical neurons. J Neurophysiol, 92(1):341–348, 2004.
[16] M Stemmler and C Koch. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nat Neurosci, 2(6):521–527, 1999.
[17] C Savin, P Dayan, and M Lengyel. Optimal Recall from Bounded Metaplastic Synapses: Predicting Functional Adaptations in Hippocampal Area CA3. PLoS Comp Biol, 10(2):e1003489, February 2014.
[18] Rodney J Douglas and Kevan A C Martin. Neuronal circuits of the neocortex. Annu Rev Neurosci, 27:419–451, 2004.
[19] A P Dempster, N M Laird, and D B Rubin. Maximum likelihood from incomplete data via the EM algorithm (with discussion). J R Stat Soc Series B, 39:1–38, 1977.
[20] R Neal and G Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[21] D J Rezende, D Wierstra, and W Gerstner. Variational learning for recurrent spiking networks. Adv Neural Inf Process Syst, pages 136–144, 2011.
[22] L F Abbott and S B Nelson. Synaptic plasticity: taming the beast. Nat Neurosci, 3(Supp):1178–1183, November 2000.
[23] E Neftci, J Binas, U Rutishauser, E Chicca, G Indiveri, and R J Douglas. Synthesizing cognition in neuromorphic electronic systems. Proc Natl Acad Sci, 110(37):E3468–E3476, 2013.
[24] J Lücke and C Malsburg. Rapid processing and unsupervised learning in a model of the cortical macrocolumn. Neural Comput, 16:501–33, 2004.
[25] M J Wainwright, E P Simoncelli, and A S Willsky. Random cascades on wavelet trees and their use in analyzing and modeling natural images. Appl Comput Harmon Anal, 11(1):89–123, 2001.
[26] S Deneve. Bayesian spiking neurons I: inference. Neural Comput, 20(1):91–117, 2008.
Dynamic Mode Decomposition with Reproducing
Kernels for Koopman Spectral Analysis
Yoshinobu Kawahara^{a,b}
^a The Institute of Scientific and Industrial Research, Osaka University
^b Center for Advanced Integrated Intelligence Research, RIKEN
[email protected]
Abstract
A spectral analysis of the Koopman operator, which is an infinite dimensional linear operator on an observable, gives a (modal) description of the global behavior
of a nonlinear dynamical system without any explicit prior knowledge of its governing equations. In this paper, we consider a spectral analysis of the Koopman
operator in a reproducing kernel Hilbert space (RKHS). We propose a modal decomposition algorithm to perform the analysis using finite-length data sequences
generated from a nonlinear system. The algorithm is in essence reduced to the
calculation of a set of orthogonal bases for the Krylov matrix in RKHS and the
eigendecomposition of the projection of the Koopman operator onto the subspace
spanned by the bases. The algorithm returns a decomposition of the dynamics
into a finite number of modes, and thus it can be thought of as a feature extraction
procedure for a nonlinear dynamical system. Therefore, we further consider applications in machine learning using extracted features with the presented analysis.
We illustrate the method on the applications using synthetic and real-world data.
1 Introduction
Modeling nonlinear dynamical systems using data is fundamental in a variety of engineering and
scientific fields. In machine learning, the problem of learning dynamical systems has been actively
discussed, and several Bayesian approaches have been proposed [11, 34]. In the fields of physics,
one popular approach for this purpose is the decomposition methods that factorize the dynamics into
modes based on some criterion from the data. For example, proper orthogonal decomposition (POD)
(see, for example, [12]), which generates orthogonal modes that optimally capture the vector energy
of a given dataset, has been extensively applied to complex phenomena in physics [5, 22] even
though this method is currently known to have several drawbacks. The so-called spectral method
for dynamical systems [15, 31, 17], which is often discussed in machine learning, is closely related
to this type of technique, where one aims to estimate a prediction model rather than understand the
dynamics by examining the obtained modes.
Among the decomposition techniques, dynamic mode decomposition (DMD) [25, 26] has recently
attracted attention in the field of physics, such as flow mechanics, and in engineering, and has been
applied to data obtained from complex phenomena [2, 4, 6, 10, 21, 25, 27, 32]. DMD approximates
the spectra of the Koopman operator [16], which is an infinite-dimensional linear operator that represents nonlinear and finite-dimensional dynamics without linearization. While POD just finds the
principal directions in a dataset, DMD can yield direct information concerning the dynamics such
as growth rates and the frequencies of the dynamics.
In this paper, we consider a spectral analysis of the Koopman operator in reproducing kernel Hilbert
spaces (RKHSs) for a nonlinear dynamical system
$$x_{t+1} = f(x_t), \tag{1}$$
where $x \in \mathcal{M}$ is the state vector on a finite-dimensional manifold $\mathcal{M} \subset \mathbb{R}^d$, and f is a (possibly,
nonlinear) state-transition function. We present a modal decomposition algorithm to perform this,
which is in principle reduced to the calculation of a set of orthogonal bases for the Krylov matrix
in RKHS and the eigendecomposition of the projection of the Koopman operator onto the subspace
spanned by the bases. Although existing DMD algorithms can conceptually be thought of as producing an approximation of the eigenfunctions of the Koopman operator using a set of linear monomials
of observables (or the pre-determined functional maps of observables) as basis functions, which is
analogous to a one-term Taylor expansion at each point, our algorithm gives an approximation with
a set of nonlinear basis functions due to the expressiveness of kernel functions. The proposed algorithm provides a modal decomposition of the dynamics into a finite number of modes, and thus it
could be considered as a feature extraction procedure for a nonlinear dynamical system. Therefore,
we consider applications using extracted features from our analysis such as state prediction, sequential change-point detection, and dynamics recognition. We illustrate our method on the applications
using synthetic and real-world data.
The remainder of this paper is organized as follows. In Section 2, we briefly review the spectral analysis of nonlinear dynamical systems with the Koopman operator and DMD. In Section 3, we extend
the analysis with reproducing kernels, and provide a modal decomposition algorithm to perform this
analysis based on the equivalent principle of DMD. Although this method is mathematically correct,
a practical implementation could yield an ill-conditioned algorithm. Therefore, in Section 4, we
describe a way to robustly it by projecting data onto the POD directions. In Section 5, we describe
related works. In Section 6, we show some empirical examples by the proposed algorithm and, in
Section 7, we describe several applications using extracted features with empirical results. Finally,
we conclude the paper in Section 8.
2 The Koopman Operator and Dynamic Mode Decomposition
Consider a discrete-time nonlinear dynamical system (1). The Koopman operator [16], which we
denote here by $\mathcal{K}$, is an infinite-dimensional linear operator that acts on a scalar function $g_i : \mathcal{M} \to \mathbb{C}$,
mapping $g_i$ to a new function $\mathcal{K}g_i$ given as follows:
$$(\mathcal{K}g_i)(x) = g_i \circ f(x), \tag{2}$$
where $\circ$ denotes the composition of $g_i$ with f. We see that $\mathcal{K}$ acts linearly on the function $g_i$, even
though the dynamics defined by f may be nonlinear. Since $\mathcal{K}$ is a linear operator, it has, in general,
an eigendecomposition
$$\mathcal{K}\varphi_j(x) = \lambda_j \varphi_j(x), \tag{3}$$
where $\lambda_j \in \mathbb{C}$ is the j-th eigenvalue (called the Koopman eigenvalue) and $\varphi_j$ is the corresponding
eigenfunction (called the Koopman eigenfunction). We denote the concatenation of $g_i$ as $g :=
[g_1, \ldots, g_p]^\top$. If each $g_i$ lies within the span of the eigenfunctions $\varphi_j$, we can expand the vector-valued g in terms of these eigenfunctions as
$$g(x) = \sum_{j=1}^{\infty} \varphi_j(x)\, u_j, \tag{4}$$
where uj is a set of vector coefficients called Koopman modes. Then, by the iterative applications
of Eqs. (2) and (3), we obtain
$$g \circ f^l(x) = \sum_{j=1}^{\infty} \lambda_j^l\, \varphi_j(x)\, u_j, \tag{5}$$
where $f^l$ is the l-time composition of f. Therefore, $\lambda_j$ characterizes the temporal behavior of the
corresponding Koopman mode $u_j$, i.e., the phase of $\lambda_j$ determines its frequency, and the magnitude
determines the growth rate of the dynamics. Note that, for a system evolving on an attractor, the
Koopman eigenvalues always lie on a unit circle [20].
DMD [25, 26] (and its variants) is a popular approach for estimating the approximations of $\lambda_j$ and $u_j$
from a finite-length data sequence $y_0, y_1, \ldots, y_\tau$ ($\in \mathbb{R}^p$), where we denote $y_t := g(x_t)$. DMD can
fundamentally be considered as a special use of the Arnoldi method [1]. That is, using the empirical
Ritz values $\tilde{\lambda}_j$ and vectors $v_j$ obtained by the Arnoldi method when regarding the subspace spanned
by $y_0, \ldots, y_{\tau-1}$ as the Krylov subspace for $y_0$ (and implicitly for some matrix $A \in \mathbb{R}^{p \times p}$), it is
shown that the observables are expressed as
$$y_t = \sum_{j=1}^{\tau} \tilde{\lambda}_j^t\, v_j \quad (t = 0, \ldots, \tau - 1), \;\text{and} \tag{6a}$$
$$y_\tau = \sum_{j=1}^{\tau} \tilde{\lambda}_j^\tau\, v_j + r \quad \text{where } r \perp \mathrm{span}\{y_0, \ldots, y_{\tau-1}\}. \tag{6b}$$
Comparing Eq. (6a) with Eq. (5) infers that the empirical Ritz values $\tilde{\lambda}_j$ and vectors $v_j$ behave in
precisely the same manner as the Koopman eigenvalues $\lambda_j$ and modes $u_j$ ($\varphi_j(x_0) u_j$), but for the
finite sum in Eq. (6a) instead of the infinite sum in Eq. (5). Note that, for r = 0 in Eq. (6b) (which
could happen when the data are sufficiently large), the approximate modes are indistinguishable
from the true Koopman eigenvalues and modes (as far as the data points are concerned), with the
expansion (5) comprising only a finite number of terms.
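For reference, here is a compact numpy sketch of a standard DMD computation; it follows the common SVD-based (exact DMD [33]) recipe rather than the Arnoldi formulation verbatim, and the names are ours:

```python
import numpy as np

def dmd(Y):
    """Standard (SVD-based) DMD on a data sequence Y of shape (p, tau+1)."""
    Y0, Y1 = Y[:, :-1], Y[:, 1:]                   # snapshots and one-step successors
    U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
    A_tilde = U.conj().T @ Y1 @ Vh.conj().T / s    # projected linear operator
    lam, W = np.linalg.eig(A_tilde)                # empirical Ritz values
    modes = Y1 @ Vh.conj().T / s @ W / lam         # (exact) DMD modes
    return lam, modes
```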
3 Dynamic Mode Decomposition with Reproducing Kernels
As described above, the estimation of the Koopman mode by DMD (and its variants) can capture
the nonlinear dynamics from finite-length data sequences generated from a dynamical system. Conceptually, DMD can be considered as producing an approximation of the Koopman eigenfunctions
using a set of linear monomials of observables as basis functions, which is analogous to a one-term
Taylor expansion at each point. In situations where eigenfunctions can be accurately approximated
using linear monomials (e.g., in a small neighborhood of a stable fixed point), DMD will produce
an accurate local approximation of the Koopman eigenfunctions. However, this is certainly not applicable to all systems (in particular, beyond the region of validity for local linearization). Here, we
extend the Koopman spectral analysis with reproducing kernels to approximate the Koopman eigenfunctions with richer basis functions. We provide a modal decomposition algorithm to perform this
analysis based on the equivalent principle with DMD.
Let $\mathcal{H}$ be the RKHS embedded with the dot product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ (we abbreviate $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ as $\langle \cdot, \cdot \rangle$ for simplicity) and a positive definite kernel k. Additionally, let $\phi : \mathcal{M} \to \mathcal{H}$. Then, we define the Koopman
operator on the feature map $\phi$ by
$$(\mathcal{K}_{\mathcal{H}} \phi)(x) = \phi \circ f(x). \tag{7}$$
Thus, the Koopman operator $\mathcal{K}_{\mathcal{H}}$ is a linear operator in $\mathcal{H}$. Note that almost all of the theoretical claims
in this and the next sections do not necessarily require $\phi$ to be in an RKHS (it is sufficient that $\phi$ stays in
a Hilbert space). However, this assumption is needed to perform the calculation in practice (as described
in the last parts of this and the next sections). Therefore, we proceed with this assumption in the
following parts. We denote by $\varphi_j$ the j-th eigenfunction of $\mathcal{K}_{\mathcal{H}}$ with the corresponding eigenvalue
$\lambda_j$. Also, we define $\mathcal{F} := \mathrm{span}\{\phi(x) : x \in \mathcal{M}\}$.
We first expand the notions, such as the Ritz values and vectors, that appear in DMD with reproducing kernels. Suppose we have a sequence $x_0, x_1, \ldots, x_\tau$. The Krylov subspace for $\phi(x_0)$ is defined
as the subspace spanned by $\phi(x_0), (\mathcal{K}_{\mathcal{H}}\phi)(x_0), \ldots, (\mathcal{K}_{\mathcal{H}}^{\tau-1}\phi)(x_0)$. Note that this is identical to the
one spanned by $\phi(x_0), \ldots, \phi(x_{\tau-1})$, whose corresponding Krylov matrix is given by
$$M_\tau = [\phi(x_0) \;\cdots\; \phi(x_{\tau-1})]. \tag{8}$$
Therefore, if we denote a set of $\tau$ orthogonal bases of the Krylov subspace by $q_1, \ldots, q_\tau$ ($\in \mathcal{H}$)
(obtained from the Gram-Schmidt orthogonalization described below), then the orthogonal projection of $\mathcal{K}_{\mathcal{H}}$ onto $M_\tau$ is given by $P_\tau = Q_\tau^* \mathcal{K}_{\mathcal{H}} Q_\tau$, where $Q_\tau = [q_1 \cdots q_\tau]$ and $Q_\tau^*$ indicates the
Hermitian transpose of $Q_\tau$. Consequently, the empirical Ritz values and vectors are defined as the
eigenvalues and vectors of $P_\tau$, respectively. Now, we have the following theorem:
Theorem 1. Consider a sequence $\phi(x_0), \phi(x_1), \ldots, \phi(x_\tau)$, and let $\tilde{\lambda}_j$ and $\hat{v}_j$ be the empirical Ritz
values and vectors for this sequence. Assume that the $\tilde{\lambda}_j$'s are distinct. Then, we have
$$\phi(x_t) = \sum_{j=1}^{\tau} \tilde{\lambda}_j^t\, \hat{v}_j \quad (t = 0, \ldots, \tau - 1), \;\text{and} \tag{9a}$$
$$\phi(x_\tau) = \sum_{j=1}^{\tau} \tilde{\lambda}_j^\tau\, \hat{v}_j + \xi \quad \text{where } \xi \perp \mathrm{span}\{\phi(x_0), \ldots, \phi(x_{\tau-1})\}. \tag{9b}$$
Proof. Let $M_\tau = Q_\tau R$ ($R \in \mathbb{C}^{\tau \times \tau}$) be the Gram-Schmidt QR decomposition of $M_\tau$. Then, the
companion matrix (rational canonical form) of $P_\tau$ is given as $F := R^{-1} P_\tau R$. Note that the sets of
eigenvalues of $P_\tau$ and $F$ are equivalent. Since $F$ is a companion matrix and the $\tilde{\lambda}_j$'s are distinct, $F$ can
be diagonalized in the form $F = T^{-1} \tilde{\Lambda} T$, where $\tilde{\Lambda}$ is a diagonal matrix with $\tilde{\lambda}_1, \ldots, \tilde{\lambda}_\tau$ and $T$ is a
Vandermonde matrix defined by $T_{ij} = \tilde{\lambda}_i^{j-1}$. Therefore, the empirical Ritz vectors $\hat{v}_j$ are obtained
as the columns of $V = M_\tau T^{-1}$. This proves Eq. (9a). Suppose a linear expansion of $\phi(x_\tau)$ is
represented as
$$\phi(x_\tau) = M_\tau c + \xi \quad \text{where } \xi \perp \mathrm{span}\{\phi(x_0), \ldots, \phi(x_{\tau-1})\}. \tag{10}$$
Since $F = R^{-1} P_\tau R = M_\tau^{-1} \mathcal{K}_{\mathcal{H}} M_\tau$ (therefore, $M_\tau F = \mathcal{K}_{\mathcal{H}} M_\tau$), the first term is given by the
last column of $M_\tau F = M_\tau T^{-1} \tilde{\Lambda} T = V \tilde{\Lambda} T$. This proves Eq. (9b).
This theorem gives an extension of DMD via the Gram-Schmidt QR decomposition in the feature
space. Although in Step (2), the Gram-Schmidt QR orthogonalization is performed in RKHS, this
calculation can be reduced to operations on a Gram matrix due to the reproducing property of kernel
functions.
(1) Define $M_\tau$ by Eq. (8) and $M_+ := [\phi(x_1), \ldots, \phi(x_\tau)]$.
(2) Calculate the Gram-Schmidt QR decomposition $M_\tau = Q_\tau R$ (e.g., refer to Section 5.2 of [29]).
(3) Calculate the eigendecomposition of $R^{-1} Q_\tau^* M_+ \,(= F) = T^{-1} \tilde{\Lambda} T$, where each diagonal element of $\tilde{\Lambda}$ gives $\tilde{\lambda}_j$.
(4) Define $\hat{v}_j$ to be the columns of $M_\tau T^{-1}$.
The original DMD algorithm (and its variants) produce an approximation of the eigenfunctions of
the Koopman operator in Eq. (2) using the set of linear monomials of observables as basis functions.
In contrast, because the above algorithm works with operations directly in the functional space,
the Koopman operator defined in Eq. (7) is identical to the transition operator on an observable.
Therefore, the eigenfunctions of the Koopman operator are fully recovered if the Krylov subspace
is sufficiently large, i.e., $\phi(x_\tau)$ is also in $\mathrm{span}\{\phi(x_0), \ldots, \phi(x_{\tau-1})\}$ (or $\xi = 0$).
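To make steps (1)-(4) concrete, here is a sketch using an explicit finite-dimensional feature map, so the QR decomposition in feature space can be written directly; the monomial feature map is our illustrative choice, and the Gram-matrix-only variant follows the same pattern:

```python
import numpy as np

def phi(x):
    """Illustrative explicit feature map: monomials up to degree 2 in 1D."""
    return np.array([1.0, x, x**2])

def kernel_dmd_qr(xs):
    """Steps (1)-(4): Ritz values/vectors via QR in feature space.

    xs : sequence x_0, ..., x_tau of scalar states.
    Assumes len(xs)-1 <= dim(phi(x)) so that R is square (full column rank).
    """
    M = np.column_stack([phi(x) for x in xs[:-1]])   # M_tau, Eq. (8)
    Mp = np.column_stack([phi(x) for x in xs[1:]])   # M_+
    Q, R = np.linalg.qr(M)                           # step (2)
    F = np.linalg.solve(R, Q.conj().T @ Mp)          # R^{-1} Q* M_+, step (3)
    lam, T_inv = np.linalg.eig(F)                    # columns of T^{-1} are eigenvectors
    V = M @ T_inv                                    # Ritz vectors, step (4)
    return lam, V
```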
4 Robustifying with POD Bases
Although the above decomposition based on the Gram-Schmidt orthogonalization is mathematically
correct, a practical implementation could yield an ill-conditioned algorithm that is often incapable
of extracting multiple modes. A similar issue has been well known for DMD [26], where one needs
to adopt a way to robustify DMD by projecting data onto the (truncated) POD directions [8, 33].
Here, we discuss a similar modification of our principle with the POD basis.
First, consider kernel PCA [28] on $x_0, x_1, \ldots, x_{\tau-1}$: Let $\hat{G} = B S B^*$ be the eigendecomposition
of the centered Gram matrix $\hat{G} = HGH = G - 1_\tau G - G 1_\tau + 1_\tau G 1_\tau$, where $G = M_\tau^* M_\tau$ is
the Gram matrix for the data, $H = I - 1_\tau$ and $1_\tau$ is a $\tau$-by-$\tau$ matrix for which each element takes
the value $1/\tau$. Suppose the eigenvalues and eigenvectors can be truncated accordingly based on the
magnitudes of the eigenvalues, which results in $\hat{G} \approx \tilde{B} \tilde{S} \tilde{B}^*$ where $p$ ($\leq \tau$) eigenvalues are adopted.
Denote the j-th column of $\tilde{B}$ by $\beta_j$ and let $\tilde{\phi}(x_i) = \phi(x_i) - \phi_c$, where $\phi_c = \tau^{-1} \sum_{j=0}^{\tau-1} \phi(x_j)$. A principal
orthogonal direction in the feature space is then given by $\delta_j = \sum_{i=0}^{\tau-1} \beta_{j,i}\, \tilde{\phi}(x_i) = M_\tau H \hat{\beta}_j$ ($j =
1, \ldots, p$), where $\hat{\beta}_j = \tilde{S}_{jj}^{-1/2} \beta_j$. Let $U = [\delta_1, \ldots, \delta_p]$ ($= M_\tau H \tilde{B} \tilde{S}^{-1/2}$). Since $M_+ = \mathcal{K}_{\mathcal{H}} M_\tau$,
the projection of $\mathcal{K}_{\mathcal{H}}$ onto the space spanned by $\delta_j$ is given as
$$\hat{F} := U^* \mathcal{K}_{\mathcal{H}} U = \tilde{S}^{-1/2} \tilde{B}^* H (M_\tau^* M_+) H \tilde{B} \tilde{S}^{-1/2}. \tag{11}$$
Note that the (i, j)-th element of the matrix $(M_\tau^* M_+)$ is given by $k(x_{i-1}, x_j)$. Then, if we let
$\hat{F} = \hat{T}^{-1} \hat{\Lambda} \hat{T}$ be the eigendecomposition of $\hat{F}$, then
$$\hat{v}_j = U b_j = M_\tau H \tilde{B} \tilde{S}^{-1/2} b_j,$$
where $b_j$ is the j-th column of $\hat{T}^{-1}$, can be used as an alternative to the empirical Ritz vector $\hat{v}_j$.
That is, we have the following theorem:
Theorem 2. Assume that $\varphi_j \in \mathcal{F}$, so that $\varphi_j(x) = \langle \phi(x), \psi_j \rangle$ for some $\psi_j \in \mathcal{H}$ and $\forall x \in \mathcal{M}$. If
$\psi_j$ is in the subspace spanned by the columns of $U$, so that $\psi_j = U a_j$ for some $a_j \in \mathbb{C}^p$, then $a_j$ is
a left eigenvector of $\hat{F}$ with eigenvalue $\lambda_j$, and also we have
$$\phi(x) = \sum_{j=1}^{p} \varphi_j(x)\, \hat{v}_j. \tag{12}$$
Proof. Since $\mathcal{K}_{\mathcal{H}} \varphi_j = \lambda_j \varphi_j$, we have $\langle \phi(f(x)), \psi_j \rangle = \lambda_j \langle \phi(x), \psi_j \rangle$. Thus, from the assumption,
$$\langle \phi(f(x)), U a_j \rangle = \lambda_j \langle \phi(x), U a_j \rangle.$$
By evaluating at $x_0, x_1, \ldots, x_{\tau-1}$ and then stacking into matrices, we have
$$(U a_j)^* M_+ = \lambda_j (U a_j)^* M_\tau.$$
If we multiply $H \hat{G}^{-1} H M_\tau^* U$ from the right-hand side, this gives
$$a_j^* U^* M_+ H \hat{G}^{-1} H M_\tau^* U = \lambda_j\, a_j^* U^* M_\tau H \hat{G}^{-1} H M_\tau^* U = \lambda_j\, a_j^*.$$
Since $U^* M_+ H \hat{G}^{-1} H M_\tau^* U = U^* \mathcal{K}_{\mathcal{H}} U \,(= \hat{F})$, this means $a_j$ is a left eigenvector of $\hat{F}$ with eigenvalue $\lambda_j$. Let $b_j$ be a (right) eigenvector of $\hat{F}$ with eigenvalue $\lambda_j$ and the corresponding left eigenvector $a_j$. Assuming these have been normalized so that $a_i^* b_j = \delta_{ij}$, then any vector $h \in \mathbb{C}^p$ can be
written as $h = \sum_{j=1}^{p} (a_j^* h)\, b_j$. Applying this to $U^* \phi(x)$ gives
$$U^* \phi(x) = \sum_{j=1}^{p} (a_j^* U^* \phi(x))\, b_j = \sum_{j=1}^{p} \varphi_j(x)\, b_j.$$
Since $b_j = (U^* U) b_j = U^* \hat{v}_j$, this proves Eq. (12).
This theorem clearly gives the connection between the eigenvalues/eigenvectors found by the above
procedure and the Koopman eigenvalues/eigenfunctions. The assumptions in the theorem mean
that the data are sufficiently rich and thus a set of the kernel principal components gives a good
approximation of the representation with the Koopman eigenfunctions. As in the case of Eq. (5), by
the iterative applications of Eq. (3), we obtain
$$\phi(x_t) = \sum_{j=1}^{p} \lambda_j^t\, \varphi_j(x_0)\, \hat{v}_j. \tag{13}$$
The procedure for the robustified variant of the DMD is summarized as follows.¹
(1) Define $M_\tau$ and calculate the centered Gram matrix $\hat{G} = H M_\tau^* M_\tau H$.
(2) Calculate the eigendecomposition $\hat{G} \approx \tilde{B} \tilde{S} \tilde{B}^*$, which gives the kernel principal directions $U$.
(3) Calculate $\hat{F}$ as in Eq. (11) and its eigendecomposition $\hat{F} = \hat{T}^{-1} \hat{\Lambda} \hat{T}$, where each diagonal element of $\hat{\Lambda}$ gives $\hat{\lambda}_j$.
(4) Define $\hat{v}_j$ to be the columns of $M_\tau H \tilde{B} \tilde{S}^{-1/2} \hat{T}^{-1}$.
Unlike the procedure described in Section 3, the above procedure can perform the truncation of
eigenvectors corresponding to small singular values. As well as DMD, this step becomes beneficial
in practice when the Gram matrix G, in our case, is rank-deficient or nearly so.
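A numpy sketch of steps (1)-(4) above, operating purely on kernel evaluations; the Gaussian kernel, the truncation level p, and all names are our illustrative choices:

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_dmd_pod(X0, X1, p=10, sigma=1.0):
    """Robustified kernel DMD. X0, X1: (tau, d) arrays with X1[i] = f(X0[i])."""
    tau = X0.shape[0]
    H = np.eye(tau) - np.full((tau, tau), 1.0 / tau)
    G = gauss_kernel(X0, X0, sigma)              # G = M_tau^* M_tau
    A = gauss_kernel(X0, X1, sigma)              # (M_tau^* M_+), entries k(x_{i-1}, x_j)
    Gh = H @ G @ H                               # step (1): centered Gram matrix
    S, B = np.linalg.eigh(Gh)                    # step (2): eigendecomposition
    idx = np.argsort(S)[::-1][:p]                # keep the p largest eigenvalues
    S, B = np.clip(S[idx], 1e-12, None), B[:, idx]
    Sm = np.diag(1.0 / np.sqrt(S))
    F_hat = Sm @ B.T @ H @ A @ H @ B @ Sm        # step (3): Eq. (11)
    lam, Tinv = np.linalg.eig(F_hat)             # diagonal of Lambda-hat
    coeffs = H @ B @ Sm @ Tinv                   # step (4): v_j = M_tau @ coeffs[:, j]
    return lam, coeffs
```

Since $M_\tau$ is never formed explicitly, the Ritz vectors are returned as expansion coefficients over the feature images $\phi(x_i)$ of the data, which is all that downstream computations need.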
Remark: Although we assumed that data is a consecutive sequence for demonstrating the correctness of the algorithm, as evident from the above steps, the estimation procedure itself does not necessarily require a sequence but rather a collection of pairs of consecutive observables $\{(x_1^{(i)}, x_2^{(i)})\}_{i=1}^{\tau}$,
where each pair is supposed to be $x_2^{(i)} = f(x_1^{(i)})$, with the appropriate definitions of $M_\tau$ and $M_+$.
5 Related Works
Spectral analysis (or, referred as the decomposition technique) for dynamical systems is a popular
approach aimed at extracting information concerning (low-dimensional) dynamics from data. Common techniques include global eigenmodes for linearized dynamics (see, e.g., [3]), discrete Fourier
transforms, POD for nonlinear dynamics [30, 12], and balancing modes for linear systems [24] as
well as multiple variants of these techniques, such as those using shift modes [22] in conjunction
with POD modes. In particular, POD, which is in principle equivalent to principal component analysis, has been extensively applied to the analysis of physical phenomena [5, 22] even though it suffers
from numerous known issues, including the possibility of principal directions in a set of data may
not necessarily correspond to the dynamically important ones.
DMD has recently attracted considerable attention in physics such as fluid mechanics [2, 10, 21, 25,
27] and in engineering fields [4, 6, 32]. Unlike POD (and its variants), DMD yields direct information about the dynamics such as growth rates and frequencies associated with each mode, which
can be obtained from the magnitude and phase of each corresponding eigenvalue of the Koopman
operator. However, the original DMD has several numerical disadvantages related to the accuracy of
the approximate expressions of the Koopman eigenfunctions from data. Therefore, several variants
of DMD have been proposed to rectify this point, including exact DMD [33] and optimized DMD
[8]. Jovanović et al. proposed sparsity-promoting DMD [13], which provides a framework for the
approximation of the Koopman eigenfunctions with fewer bases. Williams et al. proposed extended
DMD [35], which works on pre-determined basis functions instead of the monomials of observables.
Although in extended DMD the Koopman mode is defined as the eigenvector of the corresponding
operator of coefficients on basis functions, the resulting procedure is similar to the robust-version of
our algorithm.
¹ The Matlab code is available at http://en.44nobu.net/codes/kdmd.zip
Figure 1: Estimated eigenvalues with the data from the toy system (left) and the Hénon map (right).
Figure 2: Examples of the true versus (1-step) predicted values via the proposed method for the toy system (left) and the Hénon map (right).
In system control, subspace identification [23, 14], or called the eigensystem realization method,
has been a popular approach to modeling of dynamical systems. This method basically identifies
low-dimensional (hidden) states as canonical vectors determined by canonical correlation analysis,
and estimates parameters in the governing system using the state estimates. This type of method
is known as a spectral method for dynamical systems in the machine learning community and has
recently been applied to several types of systems such as variants of hidden Markov models [31, 19],
nonlinear dynamical systems [15], and predictive state-representation [17]. The relation between
DMD and other methods, particularly the eigensystem realization method, is an interesting open
problem. This is briefly mentioned in [33] but it would require further investigation in future studies.
6 Empirical Example
To illustrate how our algorithm works, we here consider two examples: a toy nonlinear system given
by $x_{t+1} = 0.9 x_t$, $y_{t+1} = 0.5 y_t + (0.9^2 - 0.5) x_t^2$, and one of the well-known chaotic maps, called
the Hénon map ($x_{t+1} = 1 - a x_t^2 + y_t$, $y_{t+1} = b x_t$), which was originally presented by Hénon
as a simplified model of the Poincaré section of the Lorenz attractor. As for the toy one, the two
eigenvalues are 0.5 and 0.9 with the corresponding eigenfunctions $\varphi_{0.9} = x_t$ and $\varphi_{0.5} = y_t - x_t^2$,
respectively. And as for the Hénon map, we set the parameters as a = 1.4, b = 0.3. It is known
that this map has two equilibrium points (−1.13135, −0.339406) and (0.631354, 0.189406), whose
corresponding eigenvalues are 2.25982 and −1.09203, and −2.92374 and −0.844054.
We generated samples according to these systems with several initial conditions and then applied
the presented procedure to estimate the Koopman modes. We used the polynomial kernel of degree
three for the toy system, and the Gaussian kernel with width 1 for the Hénon map, respectively.
The graphs in Fig. 1 show the estimated eigenvalues for two cases. As seen from the left graph, the
eigenvalues for the toy system were precisely estimated. Meanwhile, from the right graph, part
of the eigenvalues of the equilibrium points seems to be approximately estimated by the algorithm.
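As a usage illustration, the following generates Hénon-map data and feeds consecutive pairs to the kernel_dmd_pod sketch given earlier in this document (a hypothetical helper of ours, not the authors' released Matlab code):

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=(0.1, 0.1)):
    """Generate n+1 states of the Henon map starting from x0."""
    X = np.empty((n + 1, 2))
    X[0] = x0
    for t in range(n):
        x, y = X[t]
        X[t + 1] = (1 - a * x**2 + y, b * x)
    return X

X = henon(200)
lam, coeffs = kernel_dmd_pod(X[:-1], X[1:], p=8, sigma=1.0)
print(np.sort(np.abs(lam))[::-1])   # magnitudes of estimated Koopman eigenvalues
```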
7 Applications
The above algorithm provides a decomposition of the dynamics into a finite number of modes and, therefore, can be considered a feature extraction procedure for a nonlinear dynamical system. This is useful for directly understanding the dominant characteristics of the dynamics, as done in scientific fields with DMD [2, 10, 21, 25, 27]. However, here we consider some examples of applications using the extracted features with the proposed analysis: prediction, sequential change detection, and the recognition of dynamic patterns, with some empirical examples.
Prediction via Preimage: As is known in physics (nonlinear science), long-term predictions in a nonlinear dynamical system are, in principle, impossible if at least one of its Lyapunov exponents is positive, which would typically be the case of interest. This is true even if the dimension of the system is low, because the uncertainty involved in the evolution of the system increases exponentially over time. However, it may be possible to predict an observable in the near future (i.e., short-term prediction) if we can formulate a precise predictive model. Therefore, we here consider a prediction based on the estimated Koopman spectra as in Eq. (13). Since Eq. (13) is represented as a linear combination of φ(x_i) (i = 0, . . . , τ − 1), a prediction can be obtained by considering the pre-image of the predicted observables in the feature space. Even though any method for finding a pre-image of a vector in the feature space can be used for this purpose, here we describe an approach
Figure 3: MDS embedding with the distance matrix from kernel principal angles between subspaces of the estimated Koopman eigenfunctions for locomotion data. Each point is colored according to its assigned motion (jump, walk, run, and varied).
Figure 4: Sample sequence (top) and change scores by our method (green) and the kernel change detection method (blue).
based on a similar idea to multidimensional scaling (MDS), as described in [18], where a pre-image is recovered so as to preserve the distances between it and the other data points in the input space as well as in the feature space. The basic steps are (i) find the n-neighbors of a new point φ̂(x_{τ+l}) in the feature space, (ii) calculate the corresponding distances between the pre-image x̂_{τ+l} and each data point x_t based on the relation between the feature- and input-space distances, and (iii) calculate the pre-image so as to preserve the input distances. For step (i), we need the distance between the estimated feature and each data point in the feature space, which is calculated as
‖φ̂(x_{τ+l}) − φ(x_t)‖² = ‖φ̂(x_{τ+l})‖² + ‖φ(x_t)‖² − 2 φ̂(x_{τ+l})* φ(x_t)
= c*(M_τ* M_τ)c + k(x_t, x_t) − 2 c*(M_τ* φ(x_t)),
where c is from Eq. (10). Note that the first and third terms in the above equation can be calculated using the values in the Gram matrix for the data. Once we obtain the n-neighbors based on the feature distances, we can construct the corresponding local coordinates by calculating a set of orthogonal bases (via, for example, singular value decomposition of the data matrix for the neighbors) based on the distances in the input space, which are analytically obtained from the feature distances [18].
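As an illustration of step (i), the feature-space distances can be computed from kernel evaluations alone. The sketch below is our simplified rendering, not the paper's exact routine; it assumes the predicted element is expressed as a linear combination of the mapped data points, phi_hat = sum_s c[s] * phi(x_s) (consistent with Eq. (13)), with the coefficient vector c given:

import numpy as np

def feature_space_distances(K, c):
    # d[t] = ||phi_hat - phi(x_t)||^2, computed purely from kernel values:
    # d[t] = c^H K c + K[t, t] - 2 Re(c^H K[:, t]),
    # where K[s, t] = k(x_s, x_t) is the Gram matrix over the data.
    c = np.asarray(c)
    quad = np.real(np.conj(c) @ K @ c)    # ||phi_hat||^2
    cross = np.real(np.conj(c) @ K)       # inner products <phi_hat, phi(x_t)>
    return quad + np.diag(K) - 2.0 * cross

# The n smallest entries of feature_space_distances(K, c) give the n-neighbors
# used by the MDS-style pre-image recovery.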
The graphs in Fig. 2 show empirical examples of the true versus predicted values, obtained as described above, for the toy nonlinear system and the Hénon map. The setups for the data generation, the kernels, etc. are the same as in the previous section.
Embedding and Recognition of Dynamics: A direct but important application of the presented analysis is the embedding and recognition of dynamics with the extracted features. Like (kernel) PCA, a set of Koopman eigenfunctions estimated via the analysis can be used as the basis of a low-dimensional subspace that represents the dynamics. For example, the recognition of dynamics based on this representation can be performed as follows. Suppose we are given m collections of data sequences {x_t^i}_{t=0}^τ (i = 1, . . . , m), each of which is generated from some known dynamics c ∈ C (e.g., walks, runs, jumps, etc.). Then, a set of estimated Koopman eigenfunctions for each known dynamics, which we denote by A_c = M_τ w_c for the corresponding complex vector w_c, can be regarded as the basis of a low-dimensional embedding of the sequences. Hence, if we let A be a set of the estimated Koopman eigenfunctions for a new sequence, its category of dynamics can be estimated as
ĉ = argmin_{c∈C} dist(A, A_c),
where dist(A, A_c) is a distance between the two subspaces spanned by A and A_c. For example, such a distance can be given via the kernel principal angles between two subspaces in the feature space [36].
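A minimal sketch of one such subspace distance follows. It assumes, for brevity, that both sets of eigenfunctions are expanded over the same data sample with Gram matrix K (in general one stacks the samples and uses the joint Gram matrix), with WA and WB the coefficient matrices whose columns represent the eigenfunctions; the function names are our own:

import numpy as np

def _inv_sqrt(G, eps=1e-10):
    # Inverse square root of a Hermitian PSD matrix via eigendecomposition.
    w, V = np.linalg.eigh((G + G.conj().T) / 2)
    w = np.maximum(w, eps)
    return V @ np.diag(w ** -0.5) @ V.conj().T

def kernel_principal_angle_distance(K, WA, WB):
    # Cosines of the principal angles between span(Phi WA) and span(Phi WB),
    # computed purely from the Gram matrix K of the underlying data.
    A = _inv_sqrt(WA.conj().T @ K @ WA)
    B = _inv_sqrt(WB.conj().T @ K @ WB)
    cosines = np.linalg.svd(A @ (WA.conj().T @ K @ WB) @ B, compute_uv=False)
    cosines = np.clip(cosines, 0.0, 1.0)
    # Projection-metric distance: sqrt of the sum of squared sines.
    return float(np.sqrt(np.sum(1.0 - cosines ** 2)))

# Recognition rule:
# c_hat = min(classes, key=lambda c: kernel_principal_angle_distance(K, W_new, W[c]))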
Fig. 3 shows an empirical example of this application using the locomotion data from the CMU Graphics Lab Motion Capture Database.² We used the RBF Gaussian kernel, where the kernel width was set as the median of the distances from the data matrix. The figure shows an embedding of the sequences via MDS with the distance matrix, which was calculated with kernel principal angles [36] between subspaces spanned by the Koopman eigenfunctions. Each point is colored according to its motion (jump, walk, run, and varied).
2 Available at http://mocap.cs.cmu.edu.
Sequential Change-Point Detection: Another possible application is the sequential detection of change-points in a nonlinear dynamical system based on the prediction via the presented analysis. Here, we give a criterion for this problem based on the so-called cumulative sum (CUSUM) of likelihood ratios (see, for example, [9]). Let x_0, x_1, x_2, . . . be a sequence of random vectors distributed according to some distribution p_h (h = 0, 1). Then, change-point detection is defined as the sequential decision between the hypotheses H_0: p(x_i) = p_0(x_i) for i = 1, . . . , T, and H_1: p(x_i) = p_0(x_i) for i = 1, . . . , τ and p(x_i) = p_1(x_i) for i = τ + 1, . . . , T, where 1 ≤ τ < T (< ∞). In CUSUM, the stopping rule is given as
T* = inf{ T : max_{1≤τ<T} Σ_{t=τ+1}^T log( p_1(x_t) / p_0(x_t) ) ≥ c },
where c > 0 (T* is the stopping time). Although the Koopman operator is, in general, defined for a deterministic system, it is known to extend to a stochastic system x_{t+1} = f(x_t, v_t), where v_t is a stochastic disturbance [20]. In that case, the operator works on the expectation. Hence, let us define the distribution of x_t as a nonparametric exponential family [7], given by
p(x_t) = exp( ⟨θ(·), φ(x_t)⟩_H − g(θ) ) = exp( ⟨θ ◦ f(x_{t−1}), φ(x_t)⟩_H − g(θ ◦ f(x_{t−1})) ),
where g is the log-partition function. Then, the log-likelihood ratio score is given as
log Λ_τ(x_{1:T}) := Σ_{i=τ+1}^T log( p_1(x_i) / p_0(x_i) ) ≈ −Σ_{i=τ+1}^T Σ_{j=1}^τ α_j^{(0)} k(x_j, x_i) + Σ_{i=τ+1}^T Σ_{j=τ+1}^T α_j^{(1)} k(x_j, x_i),
where α^{(0)} and α^{(1)} are the coefficients obtained by the proposed algorithm with the data for i = 1, . . . , τ and i = τ + 1, . . . , T, respectively. Here, since the variation of the second term is much smaller than the first one (cf. [7]), the decision rule, log Λ_τ ≥ c, can be simplified by ignoring the second term. As a result, we have the following decision rule with some critical value ĉ ≥ 0:
−log Λ_τ(x_{1:T}) ≈ Σ_{i=τ+1}^T Σ_{j=1}^τ α_j^{(0)} k(x_j, x_i) ≥ ĉ.
A change-point is detected if the above rule is satisfied. Otherwise, the procedure is repeated, updating the coefficients with new samples, until a change-point is detected. Fig. 4 shows an empirical example of the (normalized) change score calculated with the proposed algorithm, compared with the score of the kernel change detection method (cf. [7]), for the shown data generated from the Lorenz map. We used the RBF Gaussian kernel in the same way as before. In the simulation, the parameter of the map changes at 800 and 1200, although the ranges of the data values also change dramatically in other areas (where the score of the comparative method changes correspondingly).
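The simplified decision statistic above reduces to sums of kernel evaluations. A minimal sketch (the estimation of the coefficients themselves, done by the proposed algorithm, is not shown here) is:

import numpy as np

def cusum_change_score(K, alpha0, tau):
    # Simplified CUSUM score from the decision rule above, 0-based indexing:
    # sum over post-window points i >= tau and pre-change points j < tau of
    # alpha0[j] * k(x_j, x_i). K is the Gram matrix over all samples; alpha0
    # holds the coefficients fitted on the pre-change window x_0..x_{tau-1}.
    return float(np.asarray(alpha0)[:tau] @ K[:tau, tau:].sum(axis=1))

# A change is declared once cusum_change_score(K, alpha0, tau) >= c_hat for
# some critical value c_hat >= 0; otherwise the coefficients are refitted as
# new samples arrive and the test is repeated.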
8 Conclusions
We presented a spectral analysis method with the Koopman operator in RKHSs and developed algorithms to perform the analysis using a finite-length data sequence from a nonlinear dynamical system, which essentially reduces to the calculation of a set of orthogonal bases of the Krylov matrix in RKHSs and the eigendecomposition of the projection of the Koopman operator onto the subspace spanned by the bases. We further considered applications using the Koopman spectra estimated with the proposed analysis, which were empirically illustrated using synthetic and real-world data.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP16H01548.
References
[1] W.E. Arnoldi. The principle of minimized iterations in the solution of the matrix eigenvalue problem. Quarterly of Applied Mathematics, 9:17–29, 1951.
[2] S. Bagheri. Koopman-mode decomposition of the cylinder wake. Journal of Fluid Mechanics, 726:596–623, 2013.
[3] S. Bagheri, P. Schlatter, P.J. Schmid, and D.S. Henningson. Global stability of a jet in cross flow. Journal of Fluid Mechanics, 624:33–44, 2009.
[4] E. Berger, M. Satsuma, D. Vogt, B. Jung, and H. Ben Amor. Dynamic mode decomposition for perturbation estimation in human robot interaction. In Proc. of the 23rd IEEE Int'l Symp. on Robot and Human Interactive Communication, pages 593–600, 2014.
[5] J.-P. Bonnet, C.R. Cole, J. Delville, M.N. Glauser, and L.S. Ukeiley. Stochastic estimation and proper orthogonal decomposition: Complementary techniques for identifying structure. Experiments in Fluids, 17:307–314, 1994.
[6] B. Brunton, L. Johnson, J. Ojemann, and J. Nathan Kutz. Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods, 258:1–15, 2016.
[7] S. Canu and A. Smola. Kernel methods and the exponential family. Neurocomputing, 69:714–720, 2006.
[8] K.K. Chen, J.H. Tu, and C.W. Rowley. Variants of dynamic mode decomposition: Boundary condition, Koopman, and Fourier analyses. Journal of Nonlinear Science, 22(6):887–915, 2012.
[9] M. Csörgő and L. Horváth. Limit Theorems in Change-Point Analysis. Wiley, 1988.
[10] D. Duke, D. Honnery, and J. Soria. Experimental investigation of nonlinear instabilities in annular liquid sheets. Journal of Fluid Mechanics, 691:594–604, 2012.
[11] Z. Ghahramani and S.T. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Proc. of the 1998 Conf. on Advances in Neural Information Processing Systems II, pages 431–437.
[12] P. Holmes, J.L. Lumley, and G. Berkooz. Turbulence, Coherent Structures, Dynamical Systems and Symmetry. Cambridge University Press, 1996.
[13] M.R. Jovanović, P.J. Schmid, and J.W. Nichols. Sparsity-promoting dynamic mode decomposition. Physics of Fluids, 26:024103, 2014.
[14] T. Katayama. Subspace Methods for System Identification. Springer, 2005.
[15] Y. Kawahara, T. Yairi, and K. Machida. A kernel subspace method by stochastic realization for learning nonlinear dynamical systems. In Adv. in Neural Infor. Processing Systems 19, pages 665–672, 2007.
[16] B.O. Koopman. Hamiltonian systems and transformation in Hilbert space. Proc. of the National Academy of Sciences of the United States of America, 17(5):315–318, 1931.
[17] A. Kulesza, N. Jiang, and S. Singh. Spectral learning of predictive state representations with insufficient statistics. In Proc. of the 29th AAAI Conf. on Artificial Intelligence (AAAI'15), pages 2715–2721.
[18] James Tin-Yau Kwok and Ivor Wai-Hung Tsang. The pre-image problem in kernel methods. IEEE Trans. on Neural Networks, 15(6):1517–1525, 2004.
[19] I. Melnyk and A. Banerjee. A spectral algorithm for inference in hidden semi-Markov models. In Proc. of the 18th Int'l Conf. on Artificial Intelligence and Statistics (AISTATS'15), pages 690–698, 2015.
[20] I. Mezić. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics, 41:309–325, 2005.
[21] T.W. Muld, G. Efraimsson, and D.S. Henningson. Flow structures around a high-speed train extracted using proper orthogonal decomposition and dynamic mode decomposition. Computers and Fluids, 57:87–97, 2012.
[22] B.R. Noack, K. Afanasiev, M. Morzynski, G. Tadmor, and F. Thiele. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. J. of Fluid Mechanics, 497:335–363, 2003.
[23] P. Van Overschee and B. De Moor. Subspace Identification for Linear Systems: Theory, Implementation, Applications. Kluwer Academic Publishers, 1996.
[24] C.W. Rowley. Model reduction for fluids using balanced proper orthogonal decomposition. International Journal of Bifurcation and Chaos, 15(3):997–1013, 2005.
[25] C.W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, and D.S. Henningson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009.
[26] P.J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
[27] P.J. Schmid and J. Sesterhenn. Dynamic mode decomposition of turbulent cavity flows for self-sustained oscillations. Int'l J. of Heat and Fluid Flow, 32(6):1098–1110, 2010.
[28] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[29] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.
[30] L. Sirovich. Turbulence and the dynamics of coherent structures. Quarterly of Applied Mathematics, 45:561–590, 1987.
[31] L. Song, B. Boots, S.M. Siddiqi, G. Gordon, and A. Smola. Hilbert space embeddings of hidden Markov models. In Proc. of the 27th Int'l Conf. on Machine Learning (ICML'10), pages 991–998.
[32] Y. Suzuki and I. Mezić. Nonlinear Koopman modes and power system stability assessment without models. IEEE Trans. on Power Systems, 29:899–907, 2013.
[33] J.H. Tu, C.W. Rowley, D.M. Luchtenburg, S.L. Brunton, and J.N. Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2):391–421, 2014.
[34] J. Wang, A. Hertzmann, and D.M. Blei. Gaussian process dynamical models. In Advances in Neural Information Processing Systems 18, pages 1441–1448, 2006.
[35] M.O. Williams, I.G. Kevrekidis, and C.W. Rowley. A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science, 25:1307–1346, 2015.
[36] L. Wolf and A. Shashua. Learning over sets using kernel principal angles. Journal of Machine Learning Research, 4:913–931, 2003.
Efficient High-Order Interaction-Aware Feature Selection Based on Conditional Mutual Information
Alexander Shishkin, Anastasia Bezzubtseva, Alexey Drutsa,
Ilia Shishkov, Ekaterina Gladkikh, Gleb Gusev, Pavel Serdyukov
Yandex; 16 Leo Tolstoy St., Moscow 119021, Russia
{sisoid,nstbezz,adrutsa,ishfb,kglad,gleb57,pavser}@yandex-team.ru
Abstract
This study introduces a novel feature selection approach, CMICOT, which is a further evolution of filter methods with sequential forward selection (SFS) whose scoring functions are based on conditional mutual information (MI). We state and study a novel saddle point (max-min) optimization problem to build a scoring function that is able to identify joint interactions between several features. This method fills the gap among MI-based SFS techniques with respect to high-order dependencies. In this high-dimensional case, the estimation of MI has prohibitively high sample complexity. We mitigate this cost using a greedy approximation and binary representatives, which makes our technique effective in practice. The superiority of our approach is demonstrated by comparison with recently proposed interaction-aware filters and several interaction-agnostic state-of-the-art ones on ten publicly available benchmark datasets.
1 Introduction
Methods of feature selection are an important topic in machine learning [8, 2, 17], since they improve the performance of learning systems while reducing their computational costs. Feature selection methods are usually grouped into three main categories: wrapper, embedded, and filter methods [8]. Filters are computationally cheap and independent of a particular learning model, which makes them popular and broadly applicable. In this paper, we focus on the most popular filters, which are based on mutual information (MI) and apply the sequential forward selection (SFS) strategy to obtain an optimal subset of features [17]. In such applications as web search, features may be highly relevant only jointly (having low relevance separately). A challenging task is to account for such interactions [17]. Existing SFS-based filters [18, 3, 24] are able to account for interactions of only up to 3 features.
In this study, we fill the gap left by the absence of effective SFS-based filters accounting for feature dependences of higher orders. The search for t-way interacting features is turned into a novel saddle point (max-min) optimization problem for the MI of the target variable and the candidate feature with its complementary team, conditioned on its opposing team of previously selected features. We show that, on the one hand, the saddle value of this conditional MI is a low-dimensional approximation of the CMI score¹ and, on the other hand, solving that problem presents two practical challenges: (a) prohibitively high computational complexity and (b) sample complexity, i.e., the larger number of instances required to accurately estimate the MI. These issues are addressed by two novel techniques: (a) a two-stage greedy search for an approximate solution of the above-mentioned problem, whose computational complexity is O(i) at each i-th SFS iteration; and (b) a binary representation of features that reduces the dimension of the space of joint distributions by a factor of (q/2)^{2t} for q-valued features. Being reasonable and intuitive, these techniques together constitute the main contribution of our study: a novel SFS method CMICOT that is able to identify joint interactions between multiple
features. We also empirically validate our approach with 3 state-of-the-art classification models on 10 publicly available benchmark datasets and compare it with known interaction-aware SFS-based filters and several state-of-the-art ones.
1 The CMI filter is believed to be a "north star" for the vast majority of the state-of-the-art filters [2].
2 Preliminaries and related work
Information-theoretic measures. The mutual information (MI) of two random variables f and g is defined as I(f; g) = H(f) + H(g) − H(f, g), where H(f) = −E[log P(f)] is Shannon's entropy [4].² The conditional mutual information of two random variables f and g given the variable h is I(f; g | h) = I(f; g, h) − I(f; h). The conditional MI measures the amount of additional information about the variable f carried by g compared to the variable h. Given sample data, the entropy (and, hence, the MI and conditional MI) of discrete variables can be estimated simply using the empirical frequencies (the point estimations) [15] or in a more sophisticated way (e.g., by means of the Bayesian framework [10]). More details on different entropy estimators can be found in [15].
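For illustration, a minimal Python sketch of the plug-in (empirical-frequency) point estimators used throughout this section follows; this is our own rendering, not a reference implementation:

import numpy as np
from collections import Counter

def entropy(*cols):
    # Plug-in estimate of the joint Shannon entropy H(v1, ..., vm) of
    # discrete variables given as equal-length sequences.
    counts = Counter(zip(*cols))
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-np.sum(p * np.log2(p)))

def mi(f, g):
    # I(f; g) = H(f) + H(g) - H(f, g).
    return entropy(f) + entropy(g) - entropy(f, g)

def cmi(f, g, *h):
    # I(f; g | h) = H(f, h) + H(g, h) - H(f, g, h) - H(h);
    # reduces to I(f; g) when no conditioning variables are given.
    if not h:
        return mi(f, g)
    return entropy(f, *h) + entropy(g, *h) - entropy(f, g, *h) - entropy(*h)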
Background of the feature selection based on MI. Let F be a set of features that could be used by a classifier to predict a variable c representing a class label. The objective of a feature selection (FS) procedure is to find a feature subset S^o ⊆ F of a given size k ∈ N that maximizes its joint MI with the class label c, i.e., S^o = argmax_{S⊆F, |S|≤k} I(c; S). In our paper, we focus on this simple but commonly studied FS objective in the context of MI-based filters [2], though there is a wide variety of other definitions of an optimal subset of features [17] (e.g., the all-relevant problem [13]).
In order to avoid an exhaustive search for an optimal subset S^o, most filters are based on sub-optimal search strategies. The most popular one is sequential forward selection (SFS) [20, 23, 17], which starts with an empty set (S_0 := ∅) and iteratively increases it by adding one currently unselected feature at each step (S_i := S_{i−1} ∪ {f_i}, i = 1, . . . , k, and S^o := S_k). The feature f_i is usually selected by maximizing a certain scoring function (also called a score) J_i(f) that is calculated with respect to the currently selected features S_{i−1}, i.e., f_i := argmax_{f∈F\S_{i−1}} J_i(f).
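The SFS loop itself is generic and independent of the particular score. A minimal sketch, with score_fn standing for any of the scores J_i discussed below, is:

def sfs(features, score_fn, k):
    # Sequential forward selection: at each step, add the unselected feature
    # maximizing the scoring function J_i(f) given the currently selected set.
    selected, candidates = [], list(features)
    while len(selected) < k and candidates:
        best = max(candidates, key=lambda f: score_fn(f, selected))
        candidates.remove(best)
        selected.append(best)
    return selected

# MIM, for instance, uses a score that ignores the selected set:
# sfs(range(X.shape[1]), lambda f, S: mi(X[:, f], y), k)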
A trivial feature selection approach is to select the top-k features in terms of their MI with the class label c [12]. This technique is referred to as MIM [2] and is a particular case of the SFS strategy based on the score J_i^{MIM}(f) := I(c; f). Note that the resulting set may contain a lot of redundant features, since the scoring function J_i^{MIM}(·) is independent of the already selected features S_{i−1}. Among methods that take into account the redundancy between features [2, 17], the most popular and widely applicable ones are MIFS [1], JMI [21, 14], CMIM [6, 19], and mRMR [16]. Brown et al. [2] unified these techniques under one framework, where they are different low-order approximations of the CMI feature selection approach. This method is based on the score equal to the MI of the label with the evaluated feature conditioned on the already selected features:
J_i^{CMI}(f) := I(c; f | S_{i−1}).   (1)
The main drawback of CMI is its sample complexity, namely, the exponential growth of the dimension of the distribution of the tuple (c, f, S_{i−1}) with respect to i. The larger the dimension is, the larger the number of instances required to accurately estimate the conditional MI in Eq. (1). Therefore, this technique is not usable in the case of small samples or when a large number of features should be selected [2]. This is also observed in our experiment in Appendix F.2, where the empirical score estimated over high dimensions results in drastically low performance of CMI.
Thus, low-dimensional approximations of Eq. (1) are more preferable in practice. For instance, the CMIM approach approximates Eq. (1) by
J_i^{CMIM}(f) := min_{g∈S_{i−1}} I(c; f | g),   (2)
i.e., one replaces the redundancy of f with respect to the whole subset S_{i−1} by the worst redundancy with respect to one feature from this subset. The other popular methods (mentioned above) are particular cases of the following approximation of I(c; f | S_{i−1}):
J_i^{α,β}(f) := I(c; f) − Σ_{g∈S_{i−1}} ( α I(g; f) − β I(g; f | c) ),   (3)
e.g., MIFS (α ∈ [0, 1], β = 0), mRMR (α = |S_{i−1}|^{−1}, β = 0), and JMI (α = β = |S_{i−1}|^{−1}).
2 From here on in the paper, variables separated by commas or a set of variables in MI expressions are treated as one random vector variable, e.g., I(f; g, h) := I(f; (g, h)) and, for F = ∪_{i=1}^n {f_i}, I(f; F) := I(f; f_1, .., f_n).
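For concreteness, the family in Eq. (3) can be written directly in terms of the mi/cmi estimators sketched earlier (again our illustration under the plug-in estimators):

def j_alpha_beta(f, S, X, y, alpha, beta):
    # Eq. (3): J(f) = I(c; f) - sum_{g in S} [alpha*I(g; f) - beta*I(g; f | c)].
    # MIFS: beta = 0; mRMR: alpha = 1/|S|, beta = 0; JMI: alpha = beta = 1/|S|.
    score = mi(X[:, f], y)
    for g in S:
        score -= alpha * mi(X[:, g], X[:, f]) - beta * cmi(X[:, g], X[:, f], y)
    return score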
An important but usually neglected aspect of FS methods is feature complementariness [8, 3] (also known as synergy [24] and interaction [11]). In general, complementary features are those that appear to have low relevance to the target class c individually, but whose combination is highly relevant [25, 24]. In the next subsection, we provide a brief overview of existing studies on filters that take feature interaction into account. A reader interested in a formalized concept of feature relevance, redundancy, and interaction is referred to [11] and [24].
Related work on interaction-aware filters. To the best of our knowledge, the existing interaction-aware filters that utilize the pure SFS strategy with a MI-based scoring function are the following. RelaxMRMR [18] is a modification of the mRMR method, whose scoring function in Eq. (3) was refined by adding the three-way feature interaction terms Σ_{h,g∈S_{i−1}, h≠g} I(f; h | g). The method RCDFS [3] is a special case of Eq. (3), where α = β are equal to a transformation of the standard deviation of the set {I(f; h)}_{h∈S_{i−1}}. The approach IWFS [24] is based on the following idea: at each step i, for each unselected feature f ∈ F \ S_i, one calculates the next-step score J_{i+1}(f) as the current score J_i(f) multiplied by a certain measure of interaction between this feature f and the feature f_i selected at the current step. Both RCDFS and IWFS can catch dependences between no more than 2 features, while RelaxMRMR is able to identify an interaction of up to 3 features, but its score's computational complexity is O(i^2), which makes it unusable in real applications. All these methods cannot be straightforwardly improved to incorporate interactions of a higher order.
In our study, we propose a general methodology that fills the gap between the ideal ("oracle") but infeasible CMI method, which takes all interactions into account, and the above-described methods that account for up to 3 interacting features. Our method can be effectively used in practice, with its score's computational complexity growing linearly, O(i) (as in most state-of-the-art SFS filters).
3 Proposed feature selection
In this section, we introduce a novel feature selection approach based on the SFS strategy whose score is built by solving a novel optimization problem; the approach comprises two novel techniques that make it efficient and effective in practice.
3.1 Score with t-way interacted complementary and opposing teams
Our FS method has a parameter t ∈ N that is responsible for the desired number of features whose mutual interaction (referred to as a t-way feature interaction) should be taken into account by the scoring function J_i(·). We build the scoring function according to the following intuitions.
First, the amount of relevant information carried by a t-way interaction of a candidate feature f has the form I(c; f, H) for some set of features H of size |H| ≤ t − 1. Second, we remove the redundant part of this information w.r.t. the already selected features S_{i−1} and obtain the non-redundant information part I(c; f, H | S_{i−1}). Following the heuristic of the CMIM method, this could be approximated by use of a small subset G ⊆ S_{i−1}, |G| ≤ s ∈ N, i.e., by the low-dimensional approximation min_{G⊆S_{i−1}, |G|≤s} I(c; f, H | G) (assuming s ≪ i). Third, since in the SFS strategy one has to select only one feature at an iteration i, this approximated additional information of the candidate f with H w.r.t. S_{i−1} will be gained with the feature f at this SFS iteration only if all complementary features H have already been selected (i.e., H ⊆ S_{i−1}). In this way, the score of the candidate f should be equal to the maximal additional information estimated using the above reasoning, i.e., we come to the score which is a solution of the following saddle point (max-min) optimization problem:
J_i^{*(t,s)}(f) := max_{H⊆S_{i−1}, |H|≤t−1}  min_{G⊆S_{i−1}, |G|≤s}  I(c; f, H | G).   (4)
We refer to the set {f} ∪ H_f^o, where H_f^o is an optimal set H in Eq. (4), as an optimal complementary team of the feature f ∈ F \ S_{i−1}, while an optimal set G in Eq. (4) is referred to as an optimal opposing team to this feature f (and, thus, to its complementary team as well) and is denoted by G_f^o.
The described approach is inspired by methods of greedy learning of ensembles of decision trees [7], where an ensemble of trees is built by sequentially adding a decision tree that maximizes the gain in learning quality. In this way, our complementary team corresponds to the features used in a candidate decision tree, while our opposing team corresponds to the features used to build the previous trees in the ensemble. Since they are already selected by SFS, they are expectedly stronger than f, and we can assume that, at the early iterations, a greedy machine learning algorithm would more likely use these features rather than the new feature f once we add it to the feature set. So, Eq. (4) tries to mimic the maximal amount of information that feature f can provide in addition to the worst-case baseline built on S_{i−1}.
Statement 1. For t, s + 1 ≥ i, the score J_i^{*(t,s)} from Eq. (4) is equal to the score J_i^{CMI} from Eq. (1).
The proof's sketch is: (a) justify the identity J_i^{*(t,s)}(f) = max_{H⊆S_{i−1}} min_{G⊆S_{i−1}\H} I(c; f | H, G) for t, s + 1 ≥ i; (b) derive a contradiction to the assumption that there are no optimal subsets H and G such that S_{i−1} = H ∪ G. A detailed proof of Statement 1 is given in Appendix A. Thus, we argue that the score J_i^{*(t,s)} from Eq. (4) is a low-dimensional approximation of the CMI score J_i^{CMI}.³
The score from Eq. (4) is of a general nature and reasonable, but, to the best of our knowledge, was never considered in existing studies. However, this score is not suitable for effective application, since it suffers from two practical issues:
(PI.a) computational complexity: an efficient search of the optimal sets H_f^o and G_f^o in Eq. (4);
(PI.b) sample complexity: an accurate estimation of the MI over features with a large dimension of their joint distribution.
We address these research problems and propose the following solutions to them: in Sec. 3.2, issue (PI.a) is overcome in a greedy fashion, while, in Sec. 3.3, issue (PI.b) is mitigated by means of binary representatives.
3.2 Greedy approximation of the score
Note that an exhaustive search for a saddle point in Eq. (4) requires C(i−1, t−1) · C(i−1, s) MI calculations (where C(n, m) denotes the binomial coefficient), which can make calculation of the scoring function J_i^{*(t,s)} infeasible at a large iteration i even for low team sizes t, s > 1. In order to overcome this issue, we propose the following greedy search for sub-optimal complementary and opposing teams.
At the first stage, we start with a greedy search of a sub-optimal set H, which cannot be done straightforwardly, since Eq. (4) comprises both max and min operators. The latter one requires a search for an optimal G, which we want to do at the second stage (after H). Hence, the double optimization problem needs to be replaced by a simpler one which does not utilize a search of G.
Proposition 1. (1) For any H ⊆ S_{i−1} such that |H| ≤ s, the following holds:
min_{G⊆S_{i−1}, |G|≤s} I(c; f, H | G) ≤ I(c; f | H).   (5)
(2) If s ≥ t − 1, then the score given by the following optimization problem,
max_{H⊆S_{i−1}, |H|≤t−1} I(c; f | H),   (6)
is an upper bound for the score J_i^{*(t,s)} from Eq. (4).
The optimization problem in Eq. (6) seems reasonable due to the following properties: (a) in fact, the search for H in Eq. (6) is a maximization of the additional information carried by the candidate f w.r.t. no more than t − 1 already selected features from S_{i−1}; (b) if a candidate f is a combination of features from H, then the right-hand side of Eq. (5) is 0 and the inequality becomes an equality.
So, we greedily search for the maximum in Eq. (6), obtaining the (greedy) complementary team {f} ∪ H_f, where H_f := {h_1, . . . , h_{t−1}} is defined by⁴
h_j := argmax_{h∈S_{i−1}} I(c; f | h_1, . . . , h_{j−1}, h),   j = 1, . . . , t − 1.   (7)
3 Moreover, the CMIM score from Eq. (2) is a special case of Eq. (4) with s = t = 1 and the restriction G ≠ ∅.
4 If several elements provide an optimum (the case of ties), then we randomly select one of them.
At the second stage, given the complementary team {f} ∪ H_f, we greedily search for the (greedy) opposing team G_f := {g_1, . . . , g_s} in the following way:
g_j := argmin_{g∈S_{i−1}} I(c; f, h_1, . . . , h_{min{j,t}−1} | g_1, . . . , g_{j−1}, g),   j = 1, . . . , s.   (8)
Finally, given the teams {f} ∪ H_f and G_f, we get the following greedy approximation of J_i^{*(t,s)}(f):
J_i^{(t,s)}(f) := I(c; f, H_f | G_f).   (9)
This score requires (t + s − 1)·i MI calculations (see Eq. (7)–(9)), which is a linear dependence on the iteration i, as in most state-of-the-art SFS-based filters [2]. Thus, we have built an efficient approximation of the score J_i^{*(t,s)} and resolved issue (PI.a).
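A compact Python sketch of this two-stage greedy search (our illustration of Eqs. (7)-(9), before the binary-representative modification of Sec. 3.3; not the released implementation) is:

def cond_mi(y, Xs, Zs):
    # I(c; Xs | Zs) for lists of discrete columns (comma-separated variables are
    # one vector variable), via the plug-in entropy() sketched in Sec. 2.
    hz = entropy(*Zs) if Zs else 0.0
    return entropy(y, *Zs) + entropy(*Xs, *Zs) - entropy(y, *Xs, *Zs) - hz

def greedy_score(f, S, X, y, t, s):
    # Two-stage greedy approximation of Eq. (4): Eqs. (7)-(9).
    cols = lambda idx: [X[:, j] for j in idx]
    H = []
    for _ in range(t - 1):      # Eq. (7): grow the complementary team
        H.append(max(S, key=lambda h: cond_mi(y, [X[:, f]], cols(H) + [X[:, h]])))
    G = []
    for j in range(1, s + 1):   # Eq. (8): grow the opposing team
        team = [X[:, f]] + cols(H[: min(j, t) - 1])
        G.append(min(S, key=lambda g: cond_mi(y, team, cols(G) + [X[:, g]])))
    return cond_mi(y, [X[:, f]] + cols(H), cols(G))    # Eq. (9)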
Note that we have two options at the minimization stage: either to search among all members of the set H_f at each step (as in Eq. (A.7) in Appendix A.3), or (what we actually do in Eq. (8)) to use only the few first members of H_f. The latter option demonstrates noticeably better MAUC performance and also results in a 0 score for a feature that is a copy of an already selected one (Proposition 2), while the former does not (Remark A.2 in Appendix A.3). That is why we chose this option.
Proposition 2. Let s ≥ t and a candidate feature f ∈ F \ S_{i−1} be such that its copy f̃ ≡ f is already selected, f̃ ∈ S_{i−1}; then, in the absence of ties in Eq. (8) for j ≤ t, the score J_i^{(t,s)}(f) = 0.
Proposition 2 shows that the FS approach based on the greedy score J_i^{(t,s)}(f) remains conservative, i.e., a copy of an already selected feature will not be selected, despite the fact that it exploits sub-optimal teams, in contrast to the FS approach based on the optimal score J_i^{*(t,s)}(f).
3.3 Binary representatives of features
As mentioned in Sec. 2, a FS method that is based on the calculation of MI over more than three features is usually not popular in practice, since a large number of features implies a large dimension of their joint distribution, which leads to a large number of instances required to accurately estimate the MI [2]. Both our optimal score J_i^{*(t,s)} and our greedy one J_i^{(t,s)} suffer from the same issue (PI.b) as well, since they exploit high-dimensional MI in Eq. (4) and Eq. (7)–(9). For instance, if we deal with binary classification and each feature in F has q unique values (e.g., continuous features are usually preprocessed into discrete variables with q ≤ 5 [18]), then the dimension of the joint distribution of the features in Eq. (9) is equal to 2·q^{t+s} (e.g., ≈ 4.9·10^8 for t = s = 6, q = 5). In our method, we cannot reduce the number of features used in MIs (since t-way interaction constitutes the key basis of our approach), but we can mitigate the effect of the sample complexity by the following novel technique, which we demonstrate on our greedy score J_i^{(t,s)}. Let F consist of discrete features.⁵
Definition 1. For each discrete feature f ∈ F, we denote by B[f] the binary transformation of f, i.e., the set of binary variables (referred to as the binary representatives (BR) of f) that all together constitute a vector containing the same information as f.⁶ For any subset F′ ⊆ F, the set of binary representatives of all features from F′ is denoted by B[F′] = ∪_{f∈F′} B[f].
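A minimal sketch of one such binary transformation (the indicator construction from footnote 6; the encoding-bits alternative would be smaller) is:

import numpy as np

def binary_representatives(col):
    # B[f] as q-1 indicator variables I{f = x_l} for a q-valued discrete
    # column; the dropped last value is determined by the others.
    values = np.unique(col)
    return [(col == v).astype(int) for v in values[:-1]]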
Then, we replace all features by their binary representatives at each stage of our score calculation. Namely, in Eq. (7) and Eq. (8), (a) the searches are performed for each binary representative b ∈ B[f] instead of f; (b) the set H_b^{bin} of the complementary team is found among B[S_{i−1}] ∪ B[f]; while (c) the opposing team G_b^{bin} is found among B[S_{i−1}] (exact formulas can be found in Algorithm 1, lines 12 and 15). Finally, the score of a feature f in this FS approach based on binary representatives is defined as the best score among the binary representatives B[f] of the candidate f:
J_i^{(t,s),bin}(f) := max_{b∈B[f]} I(c; b, H_b^{bin} | G_b^{bin}).   (10)
Note that, in the previous example with a binary target variable c and q-valued features, the dimension of the joint distribution of binary representatives used to calculate the MI in J_i^{(t,s),bin} is equal to 2^{1+t+s},
5 If there is a non-discrete feature, then we apply a discretization (e.g., by equal-width or equal-frequency binning [5], MDL [22, 3], etc.), which is the state-of-the-art preprocessing of continuous features in filters.
6 For instance, for f with values in {x_l}_{l=1}^q, one could take B[f] = {I_{f=x_l}}_{l=1}^{q−1}, where I_X is X's indicator, or take the bits of a binary encoding of {x_l}_{l=1}^q, which is a smallest set (i.e., |B[f]| = ⌈log_2 q⌉) among possible B[f].
Algorithm 1 Pseudo-code of the CMICOT feature selection method (an implementation of this algorithm is available at https://github.com/yandex/CMICOT).
1: Input: F (the set of all features); B[f], f ∈ F (the sets of binary representatives built on f);
2: c (the target variable); k ∈ N (the number of features to be selected);
3: t ∈ N, s ∈ Z+ (the team sizes, parameters of the algorithm);
4: Output: S (the set of selected features);
5: Initialize:
6: f_best := argmax_{f∈F} max_{b∈B[f]} I(c; b);   // Select the first feature
7: S := {f_best}; S^bin := B[f_best];
8: while |S| < k and |F \ S| > 0 do
9:   for f ∈ F \ S do
10:     for b ∈ B[f] do
11:       for j := 1 to t − 1 do
12:         h_j := argmax_{h∈S^bin ∪ B[f]} I(c; b | h_1, .., h_{j−1}, h);   // Search for complementary feat.
13:       end for
14:       for j := 1 to s do
15:         g_j := argmin_{g∈S^bin} I(c; b, h_1, .., h_{min{j,t}−1} | g_1, .., g_{j−1}, g);   // Search for opp. feat.
16:       end for
17:       J_i[b] := I(c; b, h_1, .., h_{t−1} | g_1, .., g_s);   // Calculate the score of the binary rep. b
18:     end for
19:     J_i[f] := max_{b∈B[f]} J_i[b];   // Calculate the score of the feature f
20:   end for
21:   f_best := argmax_{f∈F\S} J_i[f];   // Select the best candidate feature at the current step
22:   S := S ∪ {f_best}; S^bin := S^bin ∪ B[f_best];
23: end while
which is (q/2)^{t+s} times smaller (the dimension reduction rate) than for the MI in J_i^{(t,s)}. For instance, for t = s = 6, q = 5, the MI from Eq. (10) deals with ≈ 8.2·10^3 dimensions, which is ≈ 6·10^4 times lower than the ≈ 4.9·10^8 dimensions for the MI from Eq. (9). The described technique has been inspired by the intuition that, on average, two binary representatives of two different features probably interact better than two binary representatives of one feature (see App. A.5.1). Therefore, we believe that the BR modification retains the score's awareness of most interactions between features.
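The numbers above are easy to verify:

q, t, s = 5, 6, 6
dim_raw = 2 * q ** (t + s)   # 488,281,250 ~ 4.9e8 (Eq. (9), q-valued features)
dim_bin = 2 ** (1 + t + s)   # 8,192       ~ 8.2e3 (Eq. (10), binary representatives)
ratio = dim_raw / dim_bin    # (q/2)**(t+s) ~ 6.0e4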
Surely, on the one hand, the BR technique can also be applied to any state-of-the-art SFS filter [2] or any existing interaction-aware one (RelaxMRMR [18], RCDFS [3], and IWFS [24]), but the effect on them will not be a striking breakthrough, since these filters exploit no more than 3 features in one MI, and the dimension reduction rate will thus be no more than (q/2)^3 (e.g., ≈ 15.6 for q = 5). On the other hand, this technique is of a general nature and represents a self-contained contribution to the ML community, since it may be applied with noticeable profit to SFS-based filters with MIs of higher orders (possibly not yet invented).
3.4 CMICOT feature selection method
We summarize Sec. 3.1–Sec. 3.3 in our novel feature selection method, which is based on the sequential forward selection strategy with the scoring function from Eq. (10). We refer to this FS method as CMICOT (Conditional Mutual Information with Complementary and Opposing Teams) and present its pseudo-code in Algorithm 1, which has the form of a SFS strategy with a specific algorithm to calculate the score (lines 10–19). In order to benefit from Prop. 1 and 2, one has to select s ≥ t, and, for simplicity, from here on in this paper we consider only equally limited teams, i.e., t = s.
Proposition 3. Let |B[f]| ≤ β, ∀f ∈ F, |F| ≤ M, and let the entropies in the MIs be calculated over N instances; then O(i β^2 t^2 N) simple operations are needed to calculate the score J_i^{(t,t),bin}, and O(k^2 β^2 t^2 M N) simple operations are needed to select the top-k features by CMICOT from Alg. 1.
Let us recall how each of our techniques contributes to the above computational complexity of the score. First, the factor t^2 is an expected payment for the ability to be aware of t-way interactions (Sec. 3.1). Second, the two-stage greedy technique from Sec. 3.2 makes the score's computational complexity depend linearly on the SFS iteration i. Third, utilization of the BR technique from Sec. 3.3, on the one hand, seems to increase the computational complexity by the factor β^2, but, on the other hand, we know that it drastically reduces the sample complexity (i.e., the number of instances required to accurately estimate the used MIs). For simplicity, let us assume that each feature has 2^β values and is transformed to β binary ones. If we do not use the BR technique, the complexity will be lower by the factor β^2 for the same number of instances N, but estimation of the MIs will require (2^β/2)^{2t} times more instances to achieve the same level of accuracy as with the BRs. Hence, the BR technique actually reduces the computational complexity by the factor 2^{2t(β−1)}/β^2. Note that the team size t can be used to trade off between the number of instances available in the sample dataset and the maximal number of features whose joint interaction can be taken into account in a SFS manner.
Finally, for a given dataset and a given team size t, the score's computational complexity depends linearly on the i-th SFS iteration, on the one hand, as in most state-of-the-art SFS filters [2] like CMIM, MIFS, mRMR, JMI, etc. (see Eq. (2)–(3)). On the other hand, the scores of the existing interaction-aware ones have either the same (O(i) for RCDFS [3]) or higher (O(M·i) for IWFS [24] and O(i^2) for RelaxMRMR [18]) order of complexity w.r.t. i. Thus, we conclude that our FS method is not inferior in efficiency to all baseline filters, but is able to identify feature dependences of higher orders than these baselines.
4 Experimental evaluation
We compare our CMICOT approach with (a) all known interaction-aware SFS-based filters (RelaxMRMR [18], IWFS [24], and RCDFS [3]); (b) state-of-the-art filters [2] (MIFS, mRMR, CMIM, JMI, DISR, and FCBF (CBFS)); and (c) the idealistic but practically infeasible CMI method (see Sec. 2 and [2]). In our experiments, we consider t = 1, . . . , 10 to validate that CMICOT is able to detect interactions of a considerably higher order than its competitors.
Evaluation on synthetic data. First, we study the ability to detect high-order feature dependencies using synthetic datasets where the relevant and interacting features are known a priori. A synthetic dataset has a feature set F, which contains a group of jointly interacting relevant features F_int, and its target c is a deterministic function of F_int for half of the examples (|F \ F_int| = 15 and |F_int| = 2, . . . , 11 in our experiments). The smaller k_0 = min{k | F_int ⊆ S_k}, the more effective the considered FS method, since it builds the smaller set of features needed to construct the best possible classifier. We conduct an experiment where, first, we randomly sample 100 datasets from the predefined joint distribution (more details in Appendix C). Second, we calculate k_0 for each of the studied FS methods on these datasets. Finally, we average k_0 over the datasets and present the results in Figure 1(a). We see, first, that CMICOT with t ≥ |F_int| significantly outperforms all baselines except the idealistic CMI method, whose results are similar to CMICOT's. This is expected, since CMI is infeasible only for large k, and, in App. F.2, we show that CMICOT is the closest approximation of the true CMI among all baselines. Second, the team size t definitely responds to the number of interacting features, which provides experimental evidence for the ability of CMICOT to identify high-order feature interactions.
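The exact generating distribution is specified in Appendix C (not reproduced here); an illustrative XOR-style generator with the stated property (the target is a deterministic function of F_int on half of the examples, while every strict subset of F_int is individually uninformative) could look like:

import numpy as np

def make_synthetic(n, n_noise=15, n_int=5, seed=0):
    # Illustrative generator: binary features, of which n_int interact jointly.
    # On a random half of the examples the label is the XOR (parity) of the
    # interacting group; on the other half it is pure noise.
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n, n_int + n_noise))
    y = rng.integers(0, 2, size=n)
    half = rng.random(n) < 0.5
    y[half] = X[half, :n_int].sum(axis=1) % 2
    return X, y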
Evaluation on benchmark real data. Following the state-of-the-art practice [6, 22, 2, 18, 24, 3], we conduct an extensive empirical evaluation of the effectiveness of our CMICOT approach on 10 large public datasets from the UCI ML Repository (including the NIPS'2003 FS competition) and one private dataset from one of the most popular search engines.⁷ We employ three state-of-the-art classifiers: Naive Bayes Classifier (NBC), k-Nearest Neighbors (kNN), and AdaBoost [6] (see App. B). Their performance on a set of features is measured by means of AUC [2] (MAUC [9]) for a binary (multi-class) target variable. First, we apply each of the FS methods to select the top-k features S_k for each dataset and for k = 1, .., 50 [2, 24, 3]. Given k ∈ {1, .., 50}, a dataset, and a certain classifier, we measure the performance of a FS method (1) in terms of the (M)AUC of the classifier built on the selected features S_k and (2) in terms of the rank of the FS method among the other FS methods w.r.t. (M)AUC. The resulting (M)AUC and rank averaged over all datasets are shown in Fig. 1(b,c) for kNN and AdaBoost. From these figures we see that our CMICOT method for t = 6⁸ noticeably outperforms all baselines for the classification models kNN and AdaBoost⁹ starting from approximately k = 10. We explain this frontier by the size of the teams in CMICOT
7 The number of features, instances, and target classes varies from 85 to 5000, from 452 to 10^5, and from 2 to 26, respectively. More of the datasets' characteristics and the preprocessing can be found in Appendix D.
8 Our experimentation with CMICOT for different t = 1, . . . , 10 on our datasets showed that t = 5 and 6 are the most reasonable in terms of classifier performance (see Appendix E.1.1).
9 The results of CMICOT with the NBC classifier are similar to the ones of the other baselines. This is expected, since NBC does not exploit high-order feature dependences, which is the key advantage of CMICOT. Note that
Figure 1: (a) Comparison of the performance of SFS-based filters in terms of the average k_0 on synthetic datasets. (b) Average values of (M)AUC for the compared FS methods and (c) their ranks w.r.t. (M)AUC for k = 1, .., 50 and for the kNN and AdaBoost classification models over all datasets (see also App. C, E).
method, which should select different teams more likely when |S_{i−1}| > 2t (= 12 for t = 6). The curves in Fig. 1(b,c) are obtained over a test set, while a 10-fold cross-validation [2, 18] is also applied for several key points (e.g., k = 10, 20, 50) to estimate the significance of the differences in classification quality. The detailed results of this CV for k = 50 on representative datasets are given in Appendix E.2. More comprehensive details on these and other experiments are in App. E and F.
We find that our approach either significantly outperforms the baselines (most of them for kNN and AdaBoost) or differs non-significantly from the others (most of them for NBC). Note that the interaction awareness of RelaxMRMR, RCDFS, and IWFS is apparently not enough to outperform CMIM, our strongest competitor. In fact, there is no comparison of RelaxMRMR and IWFS with CMIM in [3, 24], while RCDFS is outperformed by CMIM on some datasets, including the only one utilized in both [18] and our work. One can compare CMICOT with and without the BR technique: on the one hand, we observed that CMICOT without BRs loses in performance to the one with BRs on the datasets with non-binary features, which emphasizes the importance of problem (PI.b); on the other hand, by the results on the binary datasets (poker, ranking, and semeion; see App. E), where the CMICOT variants coincide, the effectiveness of our approach separately from the BR technique is established.
5 Conclusions
We proposed a novel feature selection method, CMICOT, which is based on sequential forward selection and is able to identify high-order feature interactions. The technique, based on a two-stage greedy search and binary representatives of features, makes our approach usable in practice on datasets of different sizes for restricted team sizes t. We also empirically validated our approach for t up to 10 by means of 3 state-of-the-art classification models (NBC, kNN, and AdaBoost) on 10 publicly available benchmark datasets and compared it with the known interaction-aware SFS-based filters (RelaxMRMR, IWFS, and RCDFS) and several state-of-the-art ones (CMIM, JMI, CBFS, and others). We conclude that our FS algorithm, unlike all competitor methods, is capable of detecting interactions between up to t features. The overall performance of our algorithm is the best among the state-of-the-art competitors.
Acknowledgments
We are grateful to Mikhail Parakhin for important remarks that resulted in a significant improvement of the paper's presentation.
RelaxMRMR also showed its poorest performance on NBC in [18], while IWFS and RCDFS in [3, 24] didn't consider NBC at all.
References
[1] R. Battiti. Using mutual information for selecting features in supervised neural net learning. Neural Networks, IEEE Transactions on, 5(4):537–550, 1994.
[2] G. Brown, A. Pocock, M.-J. Zhao, and M. Luján. Conditional likelihood maximisation: a unifying framework for information theoretic feature selection. JMLR, 13(1):27–66, 2012.
[3] Z. Chen, C. Wu, Y. Zhang, Z. Huang, B. Ran, M. Zhong, and N. Lyu. Feature selection with redundancy-complementariness dispersion. arXiv preprint arXiv:1502.00231, 2015.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
[5] J. Dougherty, R. Kohavi, M. Sahami, et al. Supervised and unsupervised discretization of continuous features. In ICML, volume 12, pages 194–202, 1995.
[6] F. Fleuret. Fast binary feature selection with conditional mutual information. JMLR, 5:1531–1555, 2004.
[7] J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 2001.
[8] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 3:1157–1182, 2003.
[9] D. J. Hand and R. J. Till. A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 2001.
[10] M. Hutter. Distribution of mutual information. NIPS, 1:399–406, 2002.
[11] A. Jakulin and I. Bratko. Analyzing attribute dependencies. Springer, 2003.
[12] D. D. Lewis. Feature selection and feature extraction for text categorization. In Proceedings of the Workshop on Speech and Natural Language, pages 212–217. ACL, 1992.
[13] J. Liu, C. Zhang, C. A. McCarty, P. L. Peissig, E. S. Burnside, and D. Page. High-dimensional structured feature screening using binary Markov random fields. In AISTATS, pages 712–721, 2012.
[14] P. E. Meyer, C. Schretter, and G. Bontempi. Information-theoretic feature selection in microarray data using variable complementarity. IEEE Journal of STSP, 2(3):261–274, 2008.
[15] L. Paninski. Estimation of entropy and mutual information. Neural Comput., 15(6):1191–1253, 2003.
[16] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. PAMI, 27(8):1226–1238, 2005.
[17] J. R. Vergara and P. A. Estévez. A review of feature selection methods based on mutual information. Neural Computing and Applications, 24(1):175–186, 2014.
[18] N. X. Vinh, S. Zhou, J. Chan, and J. Bailey. Can high-order dependencies improve mutual information based feature selection? Pattern Recognition, 2015.
[19] G. Wang and F. H. Lochovsky. Feature selection with conditional mutual information maximin in text categorization. In ACM CIKM, pages 342–349. ACM, 2004.
[20] A. W. Whitney. A direct method of nonparametric measurement selection. Computers, IEEE Transactions on, 100(9):1100–1103, 1971.
[21] H. Yang and J. Moody. Feature selection based on joint mutual information. In Proceedings of International ICSC Symposium on Advances in Intelligent Data Analysis, pages 22–25. Citeseer, 1999.
[22] L. Yu and H. Liu. Efficient feature selection via analysis of relevance and redundancy. JMLR, 5:1205–1224, 2004.
[23] M. Zaffalon and M. Hutter. Robust feature selection by mutual information distributions. In UAI, pages 577–584. Morgan Kaufmann Publishers Inc., 2002.
[24] Z. Zeng, H. Zhang, R. Zhang, and C. Yin. A novel feature selection method considering feature interaction. Pattern Recognition, 48(8):2656–2666, 2015.
[25] Z. Zhao and H. Liu. Searching for interacting features in subset selection. Intelligent Data Analysis, 13(2):207–228, 2009.
6,174 | 6,585 | Distributed Flexible Nonlinear Tensor Factorization
Shandian Zhe^1, Kai Zhang^2, Pengyuan Wang^3, Kuang-chih Lee^4, Zenglin Xu^5,
Yuan Qi^6, Zoubin Ghahramani^7
^1 Dept. Computer Science, Purdue University, ^2 NEC Laboratories America, Princeton NJ,
^3 Dept. Marketing, University of Georgia at Athens, ^4 Yahoo! Research,
^5 Big Data Res. Center, School Comp. Sci. Eng., Univ. of Electr. Sci. & Tech. of China,
^6 Ant Financial Service Group, Alibaba, ^7 University of Cambridge
^1 [email protected], ^2 [email protected], ^3 [email protected],
^4 [email protected], ^5 [email protected],
^6 [email protected], ^7 [email protected]
Abstract
Tensor factorization is a powerful tool to analyse multi-way data. Recently proposed nonlinear factorization methods, although capable of capturing complex
relationships, are computationally quite expensive and may suffer a severe learning
bias in case of extreme data sparsity. Therefore, we propose a distributed, flexible
nonlinear tensor factorization model, which avoids the expensive computations and
structural restrictions of the Kronecker-product in the existing TGP formulations,
allowing an arbitrary subset of tensorial entries to be selected for training. Meanwhile, we derive a tractable and tight variational evidence lower bound (ELBO) that
enables highly decoupled, parallel computations and high-quality inference. Based
on the new bound, we develop a distributed, key-value-free inference algorithm in
the MapReduce framework, which can fully exploit the memory cache mechanism in fast MapReduce systems such as Spark. Experiments demonstrate the
advantages of our method over several state-of-the-art approaches, in terms of both
predictive performance and computational efficiency.
1 Introduction
Tensors, or multidimensional arrays, are generalizations of matrices (from binary interactions) to
high-order interactions between multiple entities. For example, we can extract a three-mode tensor
(user, advertisement, context) from online advertising logs. To analyze tensor data, people usually
turn to factorization approaches, which use a set of latent factors to represent each entity and
model how the latent factors interact with each other to generate tensor elements. Classical tensor
factorization models, including Tucker [18] and CANDECOMP/PARAFAC (CP) [5], assume multilinear interactions and hence are unable to capture more complex, nonlinear relationships. Recently,
Xu et al. [19] proposed Infinite Tucker decomposition (InfTucker), which generalizes the Tucker
model to infinite feature space using a Tensor-variate Gaussian process (TGP) and is hence more
powerful in modeling intricate nonlinear interactions. However, InfTucker and its variants [22, 23]
are computationally expensive, because the Kronecker product between the covariances of all the
modes requires the TGP to model the entire tensor structure. In addition, they may suffer from
the extreme sparsity of real-world tensor data, i.e., when the proportion of the nonzero entries is
extremely low. As is often the case, most of the zero elements in real tensors are meaningless: they
simply indicate missing or unobserved entries. Incorporating all of them in the training process may
affect the factorization quality and lead to biased predictions.
To address these issues, we propose a distributed, flexible nonlinear tensor factorization model,
which has several important advantages. First, it can capture highly nonlinear interactions in the
tensor, and is flexible enough to incorporate arbitrary subset of (meaningful) tensor entries for the
training. This is achieved by placing a Gaussian process prior over tensor entries, where the input
is constructed by concatenating the latent factors from each mode and the intricate relationships
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
are captured by using the kernel function. By using such a construction, the covariance function
is then free of the Kronecker-product structure, and as a result users can freely choose any subset
of tensor elements for the training process and incorporate prior domain knowledge. For example,
one can choose a combination of balanced zero and nonzero elements to overcome the learning bias.
Second, the tight variational evidence lower bound (ELBO) we derived using functional derivatives
and convex conjugates subsumes optimal variational posteriors, thus evades inefficient, sequential
E-M updates and enables highly efficient, parallel computations as well as improved inference quality.
Moreover, the new bound allows us to develop a distributed, gradient-based optimization algorithm.
Finally, we develop a simple yet very efficient procedure to avoid the data shuffling operation, a
major performance bottleneck in the (key-value) sorting procedure in MapReduce. That is, rather
than sending out key-value pairs, each mapper simply calculates and sends a global gradient vector
without keys. This key-value-free procedure is general and can effectively prevent massive disk I/O
and fully exploit the memory cache mechanism in fast MapReduce systems, such as Spark.
Evaluation using small real-world tensor data has fully demonstrated the superior prediction accuracy
of our model in comparison with InfTucker and other state-of-the-art; on large tensors with millions
of nonzero elements, our approach is significantly better than, or at least as good as two popular
large-scale nonlinear factorization methods based on TGP: one uses hierarchical modeling to perform
distributed infinite Tucker decomposition [22]; the other further enhances InfTucker by using Dirichlet
process mixture prior over the latent factors and employs an online learning scheme [23]. Our method
also outperforms GigaTensor [8], a typical large-scale CP factorization algorithm, by a large margin.
In addition, our method achieves a faster training speed and enjoys almost linear speedup with respect
to the number of computational nodes. We apply our model to CTR prediction for online advertising
and achieve a significant 20% improvement over the popular logistic regression and linear SVM
approaches (Section 4 of the supplementary material).
2 Background
We first introduce the background knowledge. For convenience, we will use the same notations
in [19]. Specifically, we denote a K-mode tensor by $\mathcal{M} \in \mathbb{R}^{d_1 \times \ldots \times d_K}$, where the k-th mode is
of dimension $d_k$. The tensor entry at location $\mathbf{i} = (i_1, \ldots, i_K)$ is denoted by $m_{\mathbf{i}}$. To introduce
Tucker decomposition, we need to generalize matrix-matrix products to tensor-matrix products.
Specifically, a tensor $\mathcal{W} \in \mathbb{R}^{r_1 \times \ldots \times r_K}$ can multiply with a matrix $\mathbf{U} \in \mathbb{R}^{s \times t}$ at mode k when its
dimension at mode k is consistent with the number of columns in $\mathbf{U}$, i.e., $r_k = t$. The product is
a new tensor of size $r_1 \times \ldots \times r_{k-1} \times s \times r_{k+1} \times \ldots \times r_K$, with each element calculated by
$$(\mathcal{W} \times_k \mathbf{U})_{i_1 \ldots i_{k-1}\, j\, i_{k+1} \ldots i_K} = \sum_{i_k=1}^{r_k} w_{i_1 \ldots i_K}\, u_{j i_k}.$$
The Tucker decomposition model uses a latent factor matrix $\mathbf{U}^{(k)} \in \mathbb{R}^{d_k \times r_k}$ in each mode k and a
core tensor $\mathcal{W} \in \mathbb{R}^{r_1 \times \ldots \times r_K}$, and assumes the whole tensor $\mathcal{M}$ is generated by $\mathcal{M} = \mathcal{W} \times_1 \mathbf{U}^{(1)} \times_2
\ldots \times_K \mathbf{U}^{(K)}$. Note that this is a multilinear function of $\mathcal{W}$ and $\{\mathbf{U}^{(1)}, \ldots, \mathbf{U}^{(K)}\}$. It can be further
simplified by restricting $r_1 = r_2 = \ldots = r_K$ and the off-diagonal elements of $\mathcal{W}$ to be 0. In this
case, the Tucker model becomes CANDECOMP/PARAFAC (CP).
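To make the multilinear structure concrete, here is a minimal NumPy sketch of the mode-k product and a Tucker reconstruction for a three-mode tensor; array names and sizes are illustrative choices, not settings from the paper.

import numpy as np

def mode_k_product(W, U, k):
    # Multiply tensor W by matrix U (shape s x r_k) along mode k:
    # move mode k to the front, contract it, then move the new axis back.
    Wk = np.moveaxis(W, k, 0)                   # (r_k, ...)
    out = np.tensordot(U, Wk, axes=([1], [0]))  # (s, ...)
    return np.moveaxis(out, 0, k)

# Tucker reconstruction M = W x_1 U1 x_2 U2 x_3 U3.
r, d = (2, 3, 4), (5, 6, 7)
W = np.random.randn(*r)                         # core tensor
U = [np.random.randn(d[k], r[k]) for k in range(3)]
M = W
for k in range(3):
    M = mode_k_product(M, U[k], k)
print(M.shape)  # (5, 6, 7)

CP corresponds to the special case r1 = r2 = r3 with a diagonal core, so the same loop recovers a CP reconstruction when W is diagonal.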
The infinite Tucker decomposition (InfTucker) generalizes the Tucker model to infinite feature space
via a tensor-variate Gaussian process (TGP) [19]. Specifically, in a probabilistic framework, we
assign a standard normal prior over each element of the core tensor $\mathcal{W}$, and then marginalize out $\mathcal{W}$
to obtain the probability of the tensor given the latent factors:
$$p(\mathcal{M}|\mathbf{U}^{(1)}, \ldots, \mathbf{U}^{(K)}) = \mathcal{N}\big(\mathrm{vec}(\mathcal{M});\, \mathbf{0},\, \Sigma^{(1)} \otimes \ldots \otimes \Sigma^{(K)}\big) \qquad (1)$$
where $\mathrm{vec}(\mathcal{M})$ is the vectorized whole tensor, $\Sigma^{(k)} = \mathbf{U}^{(k)}{\mathbf{U}^{(k)}}^\top$, and $\otimes$ is the Kronecker product.
Next, we apply the kernel trick to model nonlinear interactions between the latent factors: each
row $\mathbf{u}^{(k)}_t$ of the latent factors $\mathbf{U}^{(k)}$ is replaced by a nonlinear feature transformation $\phi(\mathbf{u}^{(k)}_t)$ and thus
an equivalent nonlinear covariance matrix $\Sigma^{(k)} = k(\mathbf{U}^{(k)}, \mathbf{U}^{(k)})$ is used to replace $\mathbf{U}^{(k)}{\mathbf{U}^{(k)}}^\top$,
where $k(\cdot, \cdot)$ is the covariance function. After the nonlinear feature mapping, the original Tucker
decomposition is performed in an (unknown) infinite feature space. Further, since the covariance of
$\mathrm{vec}(\mathcal{M})$ is a function of the latent factors $\mathcal{U} = \{\mathbf{U}^{(1)}, \ldots, \mathbf{U}^{(K)}\}$, Equation (1) actually defines a
Gaussian process (GP) on tensors, namely a tensor-variate GP (TGP) [19], where the inputs are based
on $\mathcal{U}$. Finally, we can use different noise models $p(\mathcal{Y}|\mathcal{M})$ to sample the observed tensor $\mathcal{Y}$. For
example, we can use Gaussian models and Probit models for continuous and binary observations,
respectively.
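For intuition about the cost this structure implies, the following toy sketch (made-up sizes) assembles the Kronecker-structured covariance of vec(M); the full matrix has prod_k d_k rows per side, so three modes of size 100 already give a 10^6 x 10^6 covariance.

import numpy as np

d, r = [3, 4, 5], 2
U = [np.random.randn(dk, r) for dk in d]
Sigma = [Uk @ Uk.T for Uk in U]   # per-mode covariances, each d_k x d_k

full = Sigma[0]
for S in Sigma[1:]:
    full = np.kron(full, S)       # covariance of vec(M)
print(full.shape)                 # (60, 60): grows as prod_k d_k per side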
3 Model
Despite being able to capture nonlinear interactions, InfTucker may suffer from the extreme sparsity
issue in real-world tensor data sets. The reason is that its full covariance is a Kronecker product
between the covariances over all the modes, $\{\Sigma^{(1)}, \ldots, \Sigma^{(K)}\}$ (see Equation (1)). Each $\Sigma^{(k)}$ is of
size $d_k \times d_k$ and the full covariance is of size $\prod_k d_k \times \prod_k d_k$. Thus TGP is projected onto the entire
tensor with respect to the latent factors U, including all zero and nonzero elements, rather than a
(meaningful) subset of them. However, the real-world tensor data are usually extremely sparse, with
a huge number of zero entries and a tiny portion of nonzero entries. On one hand, because most zero
entries are meaningless?they are either missing or unobserved, using them can adversely affect the
tensor factorization quality and lead to biased predictions; on the other hand, incorporating numerous
zero entries into GP models will result in large covariance matrices and high computational costs. Zhe
et al. [22, 23] proposed to improve the scalability by modeling subtensors instead, but the sampled
subtensors can still be very sparse. Even worse, because they are typically of small dimensions (for
efficiency considerations), it is often possible to encounter subtensors full of zeros. This may further
incur numerical instabilities in model estimation.
To address these issues, we propose a flexible Gaussian process tensor factorization model. While
inheriting the nonlinear modeling power, our model disposes of the Kronecker-product structure in
the full covariance and can therefore select an arbitrary subset of tensor entries for training.
Specifically, given a tensor $\mathcal{M} \in \mathbb{R}^{d_1 \times \ldots \times d_K}$, for each tensor entry $m_{\mathbf{i}}$ ($\mathbf{i} = (i_1, \ldots, i_K)$), we
construct an input $\mathbf{x}_{\mathbf{i}}$ by concatenating the corresponding latent factors from all the modes: $\mathbf{x}_{\mathbf{i}} =
[\mathbf{u}^{(1)}_{i_1}, \ldots, \mathbf{u}^{(K)}_{i_K}]$, where $\mathbf{u}^{(k)}_{i_k}$ is the $i_k$-th row in the latent factor matrix $\mathbf{U}^{(k)}$ for mode k. We assume
that there is an underlying function $f: \mathbb{R}^{\sum_{k=1}^{K} r_k} \rightarrow \mathbb{R}$ such that $m_{\mathbf{i}} = f(\mathbf{x}_{\mathbf{i}}) = f([\mathbf{u}^{(1)}_{i_1}, \ldots, \mathbf{u}^{(K)}_{i_K}])$.
This function is unknown and can be complex and nonlinear. To learn the function, we assign a
Gaussian process prior over f: for any set of tensor entries $S = \{\mathbf{i}_1, \ldots, \mathbf{i}_N\}$, the function values
$\mathbf{f}_S = \{f(\mathbf{x}_{\mathbf{i}_1}), \ldots, f(\mathbf{x}_{\mathbf{i}_N})\}$ are distributed according to a multivariate Gaussian distribution with
mean 0 and covariance determined by $X_S = \{\mathbf{x}_{\mathbf{i}_1}, \ldots, \mathbf{x}_{\mathbf{i}_N}\}$:
$$p(\mathbf{f}_S | \mathcal{U}) = \mathcal{N}(\mathbf{f}_S | \mathbf{0}, k(X_S, X_S))$$
where $k(\cdot, \cdot)$ is a (nonlinear) covariance function.
Because $k(\mathbf{x}_{\mathbf{i}}, \mathbf{x}_{\mathbf{j}}) = k([\mathbf{u}^{(1)}_{i_1}, \ldots, \mathbf{u}^{(K)}_{i_K}], [\mathbf{u}^{(1)}_{j_1}, \ldots, \mathbf{u}^{(K)}_{j_K}])$, there is no Kronecker-product structure
constraint and so any subset of tensor entries can be selected for training. To prevent the learning
process to be biased toward zero, we can use a set of entries with balanced zeros and nonzeros;
furthermore, useful domain knowledge can also be incorporated to select meaningful entries for
training. Note, however, that if we still use all the tensor entries and intensionally impose the
Kronecker-product structure in the full covariance, our model is reduced to InfTucker. Therefore,
from the modeling perspective, the proposed model is more general.
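To make the construction concrete, the following sketch builds the inputs x_i for an arbitrary subset of entries and evaluates an ordinary kernel on them; the RBF kernel and all sizes here are illustrative choices rather than the paper's settings.

import numpy as np

def make_inputs(U, idx):
    # U[k]: factor matrix of shape (d_k, r_k); idx: (N, K) entry indices.
    # Returns X of shape (N, sum_k r_k): one concatenated row per entry.
    return np.concatenate([U[k][idx[:, k]] for k in range(len(U))], axis=1)

def rbf_kernel(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
d, r = [20, 30, 40], [3, 3, 3]
U = [rng.standard_normal((dk, rk)) for dk, rk in zip(d, r)]
idx = np.stack([rng.integers(0, dk, size=50) for dk in d], axis=1)
X = make_inputs(U, idx)       # (50, 9): any chosen subset of entries
K = rbf_kernel(X, X)          # (50, 50): no Kronecker structure imposed

Because K is computed directly on the concatenated inputs, nothing forces the selected entries to form a full grid, so balanced sets of zeros and nonzeros are straightforward to use.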
We further assign a standard normal prior over the latent factors $\mathcal{U}$. Given the selected tensor entries
$\mathbf{m} = [m_{\mathbf{i}_1}, \ldots, m_{\mathbf{i}_N}]$, the observed entries $\mathbf{y} = [y_{\mathbf{i}_1}, \ldots, y_{\mathbf{i}_N}]$ are sampled from a noise model
$p(\mathbf{y}|\mathbf{m})$. In this paper, we deal with both continuous and binary observations. For continuous data,
we use the Gaussian model $p(\mathbf{y}|\mathbf{m}) = \mathcal{N}(\mathbf{y}|\mathbf{m}, \beta^{-1}\mathbf{I})$, and the joint probability is
$$p(\mathbf{y}, \mathbf{m}, \mathcal{U}) = \prod_{t=1}^{K} \mathcal{N}(\mathrm{vec}(\mathbf{U}^{(t)})|\mathbf{0}, \mathbf{I})\, \mathcal{N}(\mathbf{m}|\mathbf{0}, k(X_S, X_S))\, \mathcal{N}(\mathbf{y}|\mathbf{m}, \beta^{-1}\mathbf{I}) \qquad (2)$$
where $S = [\mathbf{i}_1, \ldots, \mathbf{i}_N]$. For binary data, we use the Probit model in the following manner. We
first introduce augmented variables $\mathbf{z} = [z_1, \ldots, z_N]$ and then decompose the Probit model into
$p(z_j|m_{\mathbf{i}_j}) = \mathcal{N}(z_j|m_{\mathbf{i}_j}, 1)$ and $p(y_{\mathbf{i}_j}|z_j) = \mathbb{1}(y_{\mathbf{i}_j}=0)\mathbb{1}(z_j \le 0) + \mathbb{1}(y_{\mathbf{i}_j}=1)\mathbb{1}(z_j > 0)$, where
$\mathbb{1}(\cdot)$ is the indicator function. Then the joint probability is
$$p(\mathbf{y}, \mathbf{z}, \mathbf{m}, \mathcal{U}) = \prod_{t=1}^{K} \mathcal{N}(\mathrm{vec}(\mathbf{U}^{(t)})|\mathbf{0}, \mathbf{I})\, \mathcal{N}(\mathbf{m}|\mathbf{0}, k(X_S, X_S))\, \mathcal{N}(\mathbf{z}|\mathbf{m}, \mathbf{I}) \prod_j \big[\mathbb{1}(y_{\mathbf{i}_j}=0)\mathbb{1}(z_j \le 0) + \mathbb{1}(y_{\mathbf{i}_j}=1)\mathbb{1}(z_j > 0)\big]. \qquad (3)$$
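To make the generative story in (2) concrete, here is a small self-contained sketch that draws one synthetic continuous dataset; the RBF kernel, the sizes, and the jitter term are illustrative conveniences, not part of the model specification.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 9))                 # concatenated-factor inputs
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-0.5 * d2) + 1e-6 * np.eye(50)        # RBF kernel plus jitter
beta = 10.0                                      # noise precision
m = rng.multivariate_normal(np.zeros(50), K)     # latent entries m ~ N(0, K)
y = m + rng.standard_normal(50) / np.sqrt(beta)  # observations y ~ N(m, 1/beta)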
4 Distributed Variational Inference
Real-world tensors often comprise a large number of entries, say, millions of non-zeros and billions
of zeros, making exact inference of the proposed model totally intractable. This motivates us to develop
a distributed variational inference algorithm, presented as follows.
4.1 Tractable Variational Evidence Lower Bound
Since the GP covariance term $k(X_S, X_S)$ (see Equations (2) and (3)) intertwines all the latent factors, exact inference in parallel is quite difficult. Therefore, we first derive a tractable
variational evidence lower bound (ELBO), following the sparse Gaussian process framework by
Titsias [17]. The key idea is to introduce a small set of inducing points $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_p\}$ and
latent targets $\mathbf{v} = \{v_1, \ldots, v_p\}$ ($p \ll N$). Then we augment the original model with a joint
multivariate Gaussian distribution of the latent tensor entries $\mathbf{m}$ and targets $\mathbf{v}$,
$$p(\mathbf{m}, \mathbf{v}|\mathcal{U}, \mathbf{B}) = \mathcal{N}\big([\mathbf{m}; \mathbf{v}] \,\big|\, \mathbf{0}, [\mathbf{K}_{SS}, \mathbf{K}_{SB}; \mathbf{K}_{BS}, \mathbf{K}_{BB}]\big)$$
where $\mathbf{K}_{SS} = k(X_S, X_S)$, $\mathbf{K}_{BB} = k(\mathbf{B}, \mathbf{B})$, $\mathbf{K}_{SB} = k(X_S, \mathbf{B})$ and $\mathbf{K}_{BS} = k(\mathbf{B}, X_S)$. We use Jensen's inequality and conditional Gaussian distributions to construct the ELBO. Using a very similar derivation to [17], we can obtain a
tractable ELBO for our model on continuous data, $\log p(\mathbf{y}, \mathcal{U}|\mathbf{B}) \ge L_1(\mathcal{U}, \mathbf{B}, q(\mathbf{v}))$, where
$$L_1(\mathcal{U}, \mathbf{B}, q(\mathbf{v})) = \log p(\mathcal{U}) + \int q(\mathbf{v}) \log\frac{p(\mathbf{v}|\mathbf{B})}{q(\mathbf{v})}\,\mathrm{d}\mathbf{v} + \sum_j \int q(\mathbf{v})\, F_{\mathbf{v}}(y_{\mathbf{i}_j}, \beta)\,\mathrm{d}\mathbf{v}. \qquad (4)$$
Here $p(\mathbf{v}|\mathbf{B}) = \mathcal{N}(\mathbf{v}|\mathbf{0}, \mathbf{K}_{BB})$, $q(\mathbf{v})$ is the variational posterior for the latent targets $\mathbf{v}$, and
$F_{\mathbf{v}}(a, \beta) = \int \log\big[\mathcal{N}(a \,|\, m_{\mathbf{i}_j}, \beta^{-1})\big]\, \mathcal{N}(m_{\mathbf{i}_j} \,|\, \mu_j, \sigma_j^2)\,\mathrm{d}m_{\mathbf{i}_j}$, where $\mu_j = k(\mathbf{x}_{\mathbf{i}_j}, \mathbf{B})\mathbf{K}_{BB}^{-1}\mathbf{v}$ and
$\sigma_j^2 = k(\mathbf{x}_{\mathbf{i}_j}, \mathbf{x}_{\mathbf{i}_j}) - k(\mathbf{x}_{\mathbf{i}_j}, \mathbf{B})\mathbf{K}_{BB}^{-1}k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})$. Note that $L_1$ is decomposed into a summation of
terms involving individual tensor entries $\mathbf{i}_j$ ($1 \le j \le N$). The additive form enables us to distribute
the computation across multiple computers.
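Each mu_j and sigma_j^2 above depends only on the j-th input and the inducing points, which is exactly what makes (4) additive over entries. A minimal NumPy sketch of these per-entry moments follows (illustrative sizes; an RBF kernel with k(x, x) = 1; plain solves for brevity where Cholesky factorizations would be used in practice):

import numpy as np

def rbf(A, B):
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 9))   # inputs for N selected entries
B = rng.standard_normal((100, 9))    # p = 100 inducing points
v = rng.standard_normal(100)         # latent targets (a sample of q(v))

Kbb = rbf(B, B) + 1e-6 * np.eye(100)
Ksb = rbf(X, B)                      # (N, p): rows k(x_j, B)
mu = Ksb @ np.linalg.solve(Kbb, v)   # mu_j for every entry j
Qjj = np.einsum('np,np->n', Ksb, np.linalg.solve(Kbb, Ksb.T).T)
sigma2 = 1.0 - Qjj                   # k(x_j, x_j) = 1 for this RBF kernel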
For binary data, we introduce a variational posterior $q(\mathbf{z})$ and make the mean-field assumption that
$q(\mathbf{z}) = \prod_j q(z_j)$. Following a similar derivation to the continuous case, we can obtain a tractable
ELBO for binary data, $\log p(\mathbf{y}, \mathcal{U}|\mathbf{B}) \ge L_2(\mathcal{U}, \mathbf{B}, q(\mathbf{v}), q(\mathbf{z}))$, where
$$L_2(\mathcal{U}, \mathbf{B}, q(\mathbf{v}), q(\mathbf{z})) = \log p(\mathcal{U}) + \int q(\mathbf{v}) \log\frac{p(\mathbf{v}|\mathbf{B})}{q(\mathbf{v})}\,\mathrm{d}\mathbf{v} + \sum_j \int q(z_j) \log\frac{p(y_{\mathbf{i}_j}|z_j)}{q(z_j)}\,\mathrm{d}z_j + \sum_j \iint q(\mathbf{v})\, q(z_j)\, F_{\mathbf{v}}(z_j, 1)\,\mathrm{d}z_j\,\mathrm{d}\mathbf{v}. \qquad (5)$$
One can simply use the standard expectation-maximization (EM) framework to optimize (4) and
(5) for model inference, i.e., the E step updates the variational posteriors $\{q(\mathbf{v}), q(\mathbf{z})\}$ and the M
step updates the latent factors $\mathcal{U}$, the inducing points $\mathbf{B}$ and the kernel parameters. However, the
sequential E-M updates cannot fully exploit parallel computing resources. Due to the strong
dependencies between the E step and the M step, the sequential E-M updates may take a large number
of iterations to converge. Things become worse in the binary case: in the E step, the updates of $q(\mathbf{v})$
and $q(\mathbf{z})$ are also dependent on each other, making parallel inference even less efficient.
4.2 Tight and Parallelizable Variational Evidence Lower Bound
In this section, we further derive tight(er) ELBOs that subsume the optimal variational posteriors
for q(v) and q(z). Thereby we can avoid the sequential E-M updates to perform decoupled, highly
efficient parallel inference. Moreover, the inference quality is very likely to be improved using tighter
bounds. Due to the space limit, we only present key ideas and results here; detailed discussions are
given in Section 1 and 2 of the supplementary material.
Tight ELBO for continuous tensors. We take the functional derivative of $L_1$ with respect to $q(\mathbf{v})$ in
(4). By setting the derivative to zero, we obtain the optimal $q(\mathbf{v})$ (which is a Gaussian distribution)
and then substitute it into $L_1$; manipulating the terms, we achieve the following tighter ELBO.
Theorem 4.1. For continuous data, we have
$$\log p(\mathbf{y}, \mathcal{U}|\mathbf{B}) \ge L_1^*(\mathcal{U}, \mathbf{B}) = \frac{1}{2}\log|\mathbf{K}_{BB}| - \frac{1}{2}\log|\mathbf{K}_{BB} + \beta\mathbf{A}_1| - \frac{\beta}{2}a_2 - \frac{\beta}{2}a_3 + \frac{\beta}{2}\mathrm{tr}(\mathbf{K}_{BB}^{-1}\mathbf{A}_1) - \frac{1}{2}\sum_{k=1}^{K}\|\mathbf{U}^{(k)}\|_F^2 + \frac{\beta^2}{2}\,\mathbf{a}_4^\top(\mathbf{K}_{BB} + \beta\mathbf{A}_1)^{-1}\mathbf{a}_4 + \frac{N}{2}\log\Big(\frac{\beta}{2\pi}\Big), \qquad (6)$$
where $\|\cdot\|_F$ is the Frobenius norm, and
$$\mathbf{A}_1 = \sum_j k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})\,k(\mathbf{x}_{\mathbf{i}_j}, \mathbf{B}), \quad a_2 = \sum_j y_{\mathbf{i}_j}^2, \quad a_3 = \sum_j k(\mathbf{x}_{\mathbf{i}_j}, \mathbf{x}_{\mathbf{i}_j}), \quad \mathbf{a}_4 = \sum_j k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})\,y_{\mathbf{i}_j}.$$
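Because A_1, a_2, a_3 and a_4 are plain sums over training entries, each worker can accumulate them on its own shard of entries, and a driver just adds the shard results before evaluating (6). A minimal NumPy sketch of the per-shard computation (array names are illustrative; Ksb_shard stacks the rows k(x_j, B) for the shard's entries):

import numpy as np

def local_stats(Ksb_shard, y_shard, kxx_shard):
    # Per-shard contributions to the statistics of Theorem 4.1.
    A1 = Ksb_shard.T @ Ksb_shard   # sum_j k(B, x_j) k(x_j, B), shape (p, p)
    a2 = float(y_shard @ y_shard)  # sum_j y_j^2
    a3 = float(kxx_shard.sum())    # sum_j k(x_j, x_j)
    a4 = Ksb_shard.T @ y_shard     # sum_j k(B, x_j) y_j, shape (p,)
    return A1, a2, a3, a4

# A driver then sums the shard results elementwise and plugs them into (6).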
Tight ELBO for binary tensors. The binary case is more difficult because $q(\mathbf{v})$ and $q(\mathbf{z})$ are
coupled together (see (5)). We use the following steps: we first fix $q(\mathbf{z})$ and plug in the optimal $q(\mathbf{v})$ in
the same way as the continuous case. Then we obtain an intermediate ELBO $\tilde{L}_2$ that only contains
$q(\mathbf{z})$. However, a quadratic term in $\tilde{L}_2$, namely $\frac{1}{2}(\mathbf{K}_{BS}\langle\mathbf{z}\rangle)^\top(\mathbf{K}_{BB} + \mathbf{A}_1)^{-1}(\mathbf{K}_{BS}\langle\mathbf{z}\rangle)$, intertwines all
$\{q(z_j)\}_j$ in $\tilde{L}_2$, making it infeasible to analytically derive or parallelly compute the optimal $\{q(z_j)\}_j$.
To overcome this difficulty, we use the convex conjugate of the quadratic term, and introduce a
variational parameter $\boldsymbol{\lambda}$ to decouple the dependencies between $\{q(z_j)\}_j$. After that, we are able to
derive the optimal $\{q(z_j)\}_j$ using functional derivatives and to obtain the following tight ELBO.
Theorem 4.2. For binary data, we have
$$\log p(\mathbf{y}, \mathcal{U}|\mathbf{B}) \ge L_2^*(\mathcal{U}, \mathbf{B}, \boldsymbol{\lambda}) = \frac{1}{2}\log|\mathbf{K}_{BB}| - \frac{1}{2}\log|\mathbf{K}_{BB} + \mathbf{A}_1| - \frac{1}{2}a_3 + \sum_j \log\Phi\big((2y_{\mathbf{i}_j}-1)\,\boldsymbol{\lambda}^\top k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})\big) - \frac{1}{2}\boldsymbol{\lambda}^\top\mathbf{K}_{BB}\boldsymbol{\lambda} + \frac{1}{2}\mathrm{tr}(\mathbf{K}_{BB}^{-1}\mathbf{A}_1) - \frac{1}{2}\sum_{k=1}^{K}\|\mathbf{U}^{(k)}\|_F^2 \qquad (7)$$
where $\Phi(\cdot)$ is the cumulative distribution function of the standard Gaussian.
As we can see, due to the additive forms of the terms in $L_1^*$ and $L_2^*$, such as $\mathbf{A}_1$, $a_2$, $a_3$ and $\mathbf{a}_4$, the
computation of the tight ELBOs and their gradients can be efficiently performed in parallel.
4.3 Distributed Inference on Tight Bound
4.3.1 Distributed Gradient-based Optimization
Given the tighter ELBOs in (6) and (7), we develop a distributed algorithm to optimize the latent
factors $\mathcal{U}$, the inducing points $\mathbf{B}$, the variational parameters $\boldsymbol{\lambda}$ (for binary data) and the kernel
parameters. We distribute the computations over multiple computational nodes (Map step) and then
collect the results to calculate the ELBO and its gradient (Reduce step). A standard routine, such as
gradient descent or L-BFGS, is then used to solve the optimization problem.
For binary data, we further find that $\boldsymbol{\lambda}$ can be updated with a simple fixed point iteration:
$$\boldsymbol{\lambda}^{(t+1)} = (\mathbf{K}_{BB} + \mathbf{A}_1)^{-1}(\mathbf{A}_1\boldsymbol{\lambda}^{(t)} + \mathbf{a}_5), \qquad (8)$$
where $\mathbf{a}_5 = \sum_j k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})\,(2y_{\mathbf{i}_j}-1)\, \dfrac{\mathcal{N}\big(k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})^\top\boldsymbol{\lambda}^{(t)} \,\big|\, 0, 1\big)}{\Phi\big((2y_{\mathbf{i}_j}-1)\,k(\mathbf{B}, \mathbf{x}_{\mathbf{i}_j})^\top\boldsymbol{\lambda}^{(t)}\big)}$.
Apparently, the update can be efficiently performed in parallel (due to the additive structure of $\mathbf{A}_1$
and $\mathbf{a}_5$). Moreover, the convergence is guaranteed by the following lemma. The proof is given in
Section 3 of the supplementary material.
Lemma 4.3. Given $\mathcal{U}$ and $\mathbf{B}$, we have $L_2^*(\mathcal{U}, \mathbf{B}, \boldsymbol{\lambda}^{t+1}) \ge L_2^*(\mathcal{U}, \mathbf{B}, \boldsymbol{\lambda}^{t})$, and the fixed point iteration
(8) always converges.
To use the fixed point iteration, before we calculate the gradients with respect to $\mathcal{U}$ and $\mathbf{B}$, we
first optimize $\boldsymbol{\lambda}$ via (8) in an inner loop. In the outer loop, we then employ gradient descent or
L-BFGS to optimize $\mathcal{U}$ and $\mathbf{B}$. This leads to an even tighter bound for our model: $L_2^{**}(\mathcal{U}, \mathbf{B}) =
\max_{\boldsymbol{\lambda}} L_2^*(\mathcal{U}, \mathbf{B}, \boldsymbol{\lambda}) = \max_{q(\mathbf{v}), q(\mathbf{z})} L_2(\mathcal{U}, \mathbf{B}, q(\mathbf{v}), q(\mathbf{z}))$. Empirically, this converges much faster
than feeding the optimization algorithms with $\nabla\boldsymbol{\lambda}$, $\nabla\mathcal{U}$ and $\nabla\mathbf{B}$ altogether, especially for large data.
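A sketch of the fixed point update (8) in NumPy/SciPy follows; Kbb (K_BB), Ksb (rows k(x_j, B)) and binary labels y in {0, 1} are assumed given, and the fixed iteration count stands in for the convergence check one would use in practice.

import numpy as np
from scipy.stats import norm

def update_lambda(lam, Kbb, Ksb, y, n_iter=50):
    # Fixed point iteration (8): lam <- (Kbb + A1)^{-1} (A1 lam + a5).
    s = 2.0 * y - 1.0              # maps {0,1} labels to {-1,+1}
    A1 = Ksb.T @ Ksb               # sum_j k(B,x_j) k(x_j,B)
    M = Kbb + A1
    for _ in range(n_iter):
        t = Ksb @ lam              # k(B, x_j)^T lam for every entry j
        w = s * norm.pdf(t) / norm.cdf(s * t)
        a5 = Ksb.T @ w             # sum_j k(B,x_j) (2y_j - 1) N(t_j)/Phi(s_j t_j)
        lam = np.linalg.solve(M, A1 @ lam + a5)
    return lam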
4.3.2 Key-Value-Free MapReduce
We now present the detailed design of the MapReduce procedures that fulfill our distributed inference.
Basically, we first allocate a set of tensor entries $S_t$ to each Mapper $t$ such that the corresponding
components of the ELBO and the gradients are calculated; then the Reducer aggregates the local results
from each Mapper to obtain the integrated, global ELBO and gradient.
We first consider the standard (key-value) design. For brevity, we take the gradient computation for
the latent factors as an example. For each tensor entry $\mathbf{i}$ on a Mapper, we calculate the corresponding
gradients $\{\nabla\mathbf{u}^{(1)}_{i_1}, \ldots, \nabla\mathbf{u}^{(K)}_{i_K}\}$ and then send out the key-value pairs $\{(k, i_k) \rightarrow \nabla\mathbf{u}^{(k)}_{i_k}\}_k$, where the
key indicates the mode and the index of the latent factors. The Reducer aggregates gradients with
the same key to recover the full gradient with respect to each latent factor.
Although key-value MapReduce has been successfully applied in numerous applications, it
relies on an expensive data shuffling operation: the Reduce step has to sort the Mappers' output
by the keys before aggregation. Since the sorting is usually performed on disk due to significant data
size, intensive disk I/O and network communication become serious computational overheads.
To overcome this deficiency, we devise a key-value-free MapReduce scheme that avoids on-disk data
shuffling operations. Specifically, each Mapper maintains a complete gradient vector for all
the parameters, including $\mathcal{U}$, $\mathbf{B}$ and the kernel parameters; however, only the relevant components of the
gradient, as specified by the tensor entries allocated to this Mapper, are updated. After the updates,
each Mapper sends out its full gradient vector, and the Reducer simply sums them up
to obtain a global gradient vector without having to perform any extra data sorting. Note that
a similar procedure can also be used to perform the fixed point iteration for $\boldsymbol{\lambda}$ (in binary tensors).
Efficient MapReduce systems, such as Spark [21], can fully optimize the non-shuffling Map
and Reduce, where most of the data are buffered in memory and disk I/O is circumvented to the
utmost; by contrast, the performance with data shuffling degrades severely [3]. This is verified in our
evaluations: on a small tensor of size 100 × 100 × 100, our key-value-free MapReduce gains a 30-fold
speedup over the traditional key-value process. Therefore, our algorithm can fully
exploit the memory-cache mechanism to achieve fast inference.
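A hedged PySpark-style sketch of this key-value-free pattern follows; sc (an existing SparkContext), data_rdd, params, and grad_for_entry are hypothetical stand-ins for the actual data layout and the per-entry gradient of (6)/(7). Each partition emits one dense gradient vector and the driver sums them with a tree reduce, so no key-based shuffle or sort is ever triggered.

import numpy as np

def partition_gradient(entries, params_bc):
    # Mapper: accumulate one full, dense gradient vector per partition.
    grad = np.zeros_like(params_bc.value)
    for entry in entries:
        # grad_for_entry: hypothetical per-entry gradient routine; it only
        # touches the components indexed by this entry's latent factors.
        idx, g = grad_for_entry(entry, params_bc.value)
        grad[idx] += g
    yield grad

params_bc = sc.broadcast(params)          # ship parameters once per job
total_grad = (data_rdd
              .mapPartitions(lambda it: partition_gradient(it, params_bc))
              .treeReduce(lambda a, b: a + b))  # keyless summation

The design choice is to trade a small amount of per-partition memory (one dense vector) for the elimination of on-disk shuffling, which is exactly the bottleneck the text describes.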
4.4 Algorithm Complexity
Suppose we use $N$ tensor entries for training, with $p$ inducing points and $T$ Mappers; the time
complexity for each Mapper node is then $O(\frac{1}{T}p^2 N)$. Since $p \ll N$ is a fixed constant ($p = 100$ in our
experiments), the time complexity is linear in the number of tensor entries. The space complexity
for each Mapper node is $O(\sum_{k=1}^{K} d_k r_k + p^2 + \frac{N}{T}K)$, in order to store the latent factors, their
gradients, the covariance matrix on the inducing points, and the indices of the latent factors for each
tensor entry. Again, the space complexity is linear in the number of tensor entries. In comparison,
InfTucker utilizes the Kronecker-product properties to calculate the gradients and has to perform
eigenvalue decomposition of the covariance matrices in each tensor mode. Therefore it has higher
time and space complexity (see [19] for details) and is not scalable to large dimensions.
5 Related work
Classical tensor factorization models include Tucker [18] and CP [5], based on which there are many
excellent works [2, 16, 6, 20, 14, 7, 13, 8, 1]. Despite the widespread success, their underlying
multilinear factorization structures prevent them from capturing more complex, nonlinear relationships
in real-world applications. Infinite Tucker decomposition [19], and its distributed or online extensions [22, 23] overcome this limitation by modeling tensors or subtensors via tensor-variate Gaussian
processes (TGP). However, these methods may suffer from the extreme sparsity in real-world tensors
due to the Kronecker-product structure in TGP formulations. Our model further addresses this issue by
eliminating the Kronecker-product restriction, and can model an arbitrary subset of tensor entries.
In theory, all such nonlinear factorization models belong to the family of random function prior
models [11] for exchangeable multidimensional arrays.
Our distributed variational inference algorithm is based on sparse GP [12], an efficient approximation
framework to scale up GP models. Sparse GP uses a small set of inducing points to break the
dependency between random function values. Recently, Titsias [17] proposed a variational learning
framework for sparse GP, based on which Gal et al. [4] derived a tight variational lower bound for
distributed inference of GP regression and GPLVM [10]. The derivation of the tight ELBO in our
model for continuous tensors is similar to [4]; however, the gradient calculation is substantially
different, because the input to our GP factorization model is the concatenation of the latent factors.
Many tensor entries may partly share the same latent factors, causing a large number of key-value
pairs to be sent during the distributed gradient calculation. This incurs an expensive data shuffling
procedure that takes place on disk. To improve the computational efficiency, we develop a key-value-free MapReduce scheme to avoid data shuffling and fully exploit the memory-cache mechanism
in efficient MapReduce systems. This strategy is also applicable to other MapReduce-based
learning algorithms. In addition to continuous data, we also develop a tight ELBO for binary data on
optimal variational posteriors. By introducing p extra variational parameters with convex conjugates
(p is the number of inducing points), our inference can be performed efficiently in a distributed
manner, which avoids explicit optimization on a large number of variational posteriors for the latent
tensor entries and inducing targets. Our method can also be useful for GP classification problems.
6 Experiments
6.1 Evaluation on Small Tensor Data
For evaluation, we first compared our method with various existing tensor factorization methods.
To this end, we used four small real datasets where all methods are computationally feasible: (1)
Alog, a real-valued tensor of size 200 × 100 × 200, representing a three-way interaction (user, action,
resource) in a file access log; it contains 0.33% nonzero entries. (2) AdClick, a real-valued tensor
of size 80 × 100 × 100, describing (user, publisher, advertisement) clicks for online advertising;
it contains 2.39% nonzero entries. (3) Enron, a binary tensor depicting the three-way relationship
(sender, receiver, time) in emails; it contains 203 × 203 × 200 elements, of which 0.01% are nonzero.
(4) NellSmall, a binary tensor of size 295 × 170 × 94, depicting the knowledge predicates (entity,
relationship, entity); the data set contains 0.05% nonzero elements.
We compared with CP, nonnegative CP (NN-CP) [15], high order SVD (HOSVD) [9], Tucker, infinite
Tucker (InfTucker) [19] and its extension (InfTuckerEx) which uses the Dirichlet process mixture
(DPM) prior to model latent clusters and local TGP to perform scalable, online factorization [23].
Note that InfTucker and InfTuckerEx are nonlinear factorization approaches.
For testing, we used the same setting as in [23]. All the methods were evaluated via a 5-fold cross
validation. The nonzero entries were randomly split into 5 folds; 4 folds were used for training and
the remaining non-zero entries and 0.1% zero entries were used for testing so that the number of
non-zero entries is comparable to the number of zero entries. In doing so, zero and nonzero entries are
treated equally important in testing, and the evaluation will not be dominated by large portion of zeros.
For InfTucker and InfTuckerEx, we performed extra cross-validations to select the kernel form (e.g.,
RBF, ARD and Matern kernels) and the kernel parameters. For InfTuckerEx, we randomly sampled
subtensors and tuned the learning rate following [23]. For our model, the number of inducing points
was set to 100, and we used a balanced training set generated as follows: in addition to nonzero
entries, we randomly sampled the same number of zero entries and made sure that they would not
overlap with the testing zero elements.
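A small sketch of how such a balanced training set can be assembled (an illustrative reconstruction, not the authors' exact script): draw as many zero entries as there are nonzeros, rejecting candidates that collide with observed nonzeros or with the held-out test zeros.

import numpy as np

def sample_balanced_zeros(nonzero_idx, test_zero_idx, dims, rng):
    # Draw len(nonzero_idx) zero entries, avoiding nonzeros and test zeros.
    forbidden = {tuple(i) for i in nonzero_idx} | {tuple(i) for i in test_zero_idx}
    zeros = set()
    while len(zeros) < len(nonzero_idx):
        cand = tuple(int(rng.integers(0, d)) for d in dims)
        if cand not in forbidden and cand not in zeros:
            zeros.add(cand)
    return np.array(sorted(zeros))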
Our model used the ARD kernel, and the kernel parameters were estimated jointly with the latent factors.
We implemented our distributed inference algorithm with two optimization frameworks, gradient
descent and L-BFGS (denoted by Ours-GD and Ours-LBFGS respectively). For a comprehensive
evaluation, we also examined CP on balanced training entries generated in the same way as our
model, denoted by CP-2. The mean squared error (MSE) is used to evaluate predictive performance
on Alog and Click and area-under-curve (AUC) on Enron and NellSmall. The averaged results from
the 5-fold cross validation are reported.
Our model achieves a higher prediction accuracy than InfTucker, and a better or comparable accuracy
than InfTuckerEx (see Figure 1). A t-test shows that our model outperforms InfTucker significantly
(p < 0.05) in almost all situations. Although InfTuckerEx uses the DPM prior to improve factorization, our model still obtains significantly better predictions on Alog and AdClick and comparable or
better performance on Enron and NellSmall. This might be attributed to the flexibility of our model
in using balanced training entries to prevent the learning bias (toward numerous zeros). Similar
improvements can be observed from CP to CP-2. Finally, our model outperforms all the remaining
methods, demonstrating the advantage of our nonlinear factorization approach.
6.2 Scalability Analysis
To examine the scalability of the proposed distributed inference algorithm, we used the following
large real-world datasets: (1) ACC, a real-valued tensor describing three-way interactions (user,
action, resource) in a code repository management system [23]; the tensor is of size 3K × 150 ×
30K, where 0.009% of the entries are nonzero. (2) DBLP, a binary tensor depicting a three-way bibliography
relationship (author, conference, keyword) [23]; the tensor was extracted from the DBLP database and
contains 10K × 200 × 10K elements, where 0.001% are nonzero entries. (3) NELL, a binary tensor
representing the knowledge predicates, in the form of (entity, entity, relationship) [22]; the tensor
size is 20K × 12.3K × 280 and 0.0001% of the entries are nonzero.
The scalability of our distributed inference algorithm was examined with regard to the number of
machines on the ACC dataset. The number of latent factors was set to 3. We ran our algorithm using
gradient descent. The results are shown in Figure 2(a). The Y-axis shows the reciprocal of the
running time multiplied by a constant, which corresponds to the running speed. As we can see, the
speed of our algorithm scales up linearly with the number of machines.
[Figure 1: The prediction results on small datasets, averaged over 5 runs. Panels plot the error metric against the number of factors (3, 5, 8, 10) for CP, NN-CP, HOSVD, Tucker, InfTucker, InfTuckerEx, CP-2, Ours-GD and Ours-LBFGS: (a) Alog (MSE), (b) AdClick (MSE), (c) Enron (AUC), (d) NellSmall (AUC).]
[Figure 2: Prediction accuracy (averaged on 50 test datasets) on large tensor data and the scalability: (a) Scalability (1/running-time x const vs. number of machines, 1 to 20), (b) ACC (MSE), (c) DBLP (AUC), (d) NELL (AUC), comparing GigaTensor, DinTucker, InfTuckerEx, Ours-GD and Ours-LBFGS.]
6.3 Evaluation on Large Tensor Data
We then compared our approach with three state-of-the-art large-scale tensor factorization methods:
GigaTensor [8], Distributed infinite Tucker decomposition (DinTucker) [22], and InfTuckerEx [23].
Both GigaTensor and DinTucker are developed on Hadoop, while InfTuckerEx uses online inference.
Our model was implemented on Spark. We ran GigaTensor, DinTucker and our approach on a large
YARN cluster and InfTuckerEx on a single computer.
We set the number of latent factors to 3 for ACC and DBLP data set, and 5 for NELL data set.
Following the settings in [23, 22], we randomly chose 80% of nonzero entries for training, and then
sampled 50 test data sets from the remaining entries. For ACC and DBLP, each test data set comprises
200 nonzero elements and 1,800 zero elements; for NELL, each test data set contains 200 nonzero
elements and 2,000 zero elements. GigaTensor was run with the default settings
of the software package. For DinTucker and InfTuckerEx, we randomly sampled subtensors for
distributed or online inference. The parameters, including the number and size of the subtensors and
the learning rate, were selected in the same way as [23]. The kernel form and parameters were chosen
by a cross-validation on the training tensor. For our model, we used the same setting as in the small
data. We set 50 Mappers for GigaTensor, DinTucker and our model.
Figure 2(b)-(d) shows the predictive performance of all the methods. We observe that our approach
consistently outperforms GigaTensor and DinTucker on all the three datasets; our approach outperforms InfTuckerEx on ACC and DBLP and is slightly worse than InfTuckerEx on NELL. Note again
that InfTuckerEx uses the DPM prior to enhance the factorization while our model doesn't; finally, all the
nonlinear factorization methods outperform GigaTensor, a distributed CP factorization algorithm by a
large margin, confirming the advantages of nonlinear factorizations on large data. In terms of speed,
our algorithm is much faster than GigaTensor and DinTucker. For example, on the DBLP dataset, the
average per-iteration running times were 1.45, 15.4 and 20.5 minutes for our model, GigaTensor and
DinTucker, respectively. This is not surprising, because (1) our model exploits the data sparsity and can
exclude numerous, meaningless zero elements from training; (2) our algorithm is based on Spark,
a more efficient MapReduce system than Hadoop; and (3) our algorithm gets rid of data shuffling and
can fully exploit the memory-cache mechanism of Spark.
7 Conclusion
In this paper, we have proposed a novel flexible GP tensor factorization model. In addition, we have
derived a tight ELBO for both continuous and binary problems, based on which we further developed
an efficient distributed variational inference algorithm in M AP R EDUCE framework.
Acknowledgement
Dr. Zenglin Xu was supported by a grant from NSF China under No. 61572111. We thank IBM T.J.
Watson Research Center for providing one dataset. We also thank Jiasen Yang for proofreading this
paper.
References
[1] Choi, J. H. & Vishwanathan, S. (2014). Dfacto: Distributed factorization of tensors. In NIPS.
[2] Chu, W. & Ghahramani, Z. (2009). Probabilistic models for incomplete multi-dimensional arrays. In
AISTATS.
[3] Davidson, A. & Or, A. (2013). Optimizing shuffle performance in Spark. University of California,
Berkeley-Department of Electrical Engineering and Computer Sciences, Tech. Rep.
[4] Gal, Y., van der Wilk, M., & Rasmussen, C. (2014). Distributed variational inference in sparse Gaussian
process regression and latent variable models. In NIPS.
[5] Harshman, R. A. (1970). Foundations of the PARAFAC procedure: Model and conditions for an "explanatory" multi-mode factor analysis. UCLA Working Papers in Phonetics, 16, 1–84.
[6] Hoff, P. (2011). Hierarchical multilinear models for multiway data. Computational Statistics & Data
Analysis.
[7] Hu, C., Rai, P., & Carin, L. (2015). Zero-truncated poisson tensor factorization for massive binary tensors.
In UAI.
[8] Kang, U., Papalexakis, E., Harpale, A., & Faloutsos, C. (2012). Gigatensor: scaling tensor analysis up by
100 times-algorithms and discoveries. In KDD.
[9] Lathauwer, L. D., Moor, B. D., & Vandewalle, J. (2000). A multilinear singular value decomposition. SIAM
J. Matrix Anal. Appl., 21, 1253–1278.
[10] Lawrence, N. D. (2004). Gaussian process latent variable models for visualisation of high dimensional
data. In NIPS.
[11] Lloyd, J. R., Orbanz, P., Ghahramani, Z., & Roy, D. M. (2012). Random function priors for exchangeable
arrays with applications to graphs and relational data. In NIPS.
[12] Quiñonero-Candela, J. & Rasmussen, C. E. (2005). A unifying view of sparse approximate Gaussian
process regression. The Journal of Machine Learning Research, 6, 1939–1959.
[13] Rai, P., Hu, C., Harding, M., & Carin, L. (2015). Scalable probabilistic tensor factorization for binary and
count data. In IJCAI.
[14] Rai, P., Wang, Y., Guo, S., Chen, G., Dunson, D., & Carin, L. (2014). Scalable Bayesian low-rank
decomposition of incomplete multiway tensors. In ICML.
[15] Shashua, A. & Hazan, T. (2005). Non-negative tensor factorization with applications to statistics and
computer vision. In ICML.
[16] Sutskever, I., Tenenbaum, J. B., & Salakhutdinov, R. R. (2009). Modelling relational data using Bayesian
clustered tensor factorization. In NIPS.
[17] Titsias, M. K. (2009). Variational learning of inducing variables in sparse Gaussian processes. In AISTATS.
[18] Tucker, L. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika, 31, 279–311.
[19] Xu, Z., Yan, F., & Qi, Y. (2012). Infinite Tucker decomposition: Nonparametric Bayesian models for
multiway data analysis. In ICML.
[20] Yang, Y. & Dunson, D. B. (2016). Bayesian conditional tensor factorizations for high-dimensional
classification. Journal of the American Statistical Association, 656–669.
[21] Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., McCauley, M., Franklin, M. J., Shenker, S., &
Stoica, I. (2012). Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing.
In NSDI.
[22] Zhe, S., Qi, Y., Park, Y., Xu, Z., Molloy, I., & Chari, S. (2016). Dintucker: Scaling up Gaussian process
models on large multidimensional arrays. In AAAI.
[23] Zhe, S., Xu, Z., Chu, X., Qi, Y., & Park, Y. (2015). Scalable nonparametric multiway data analysis. In
AISTATS.
6,175 | 6,586 | Edge-exchangeable graphs and sparsity
Diana Cai
Dept. of Statistics, U. Chicago
Chicago, IL 60637
[email protected]
Trevor Campbell
CSAIL, MIT
Cambridge, MA 02139
[email protected]
Tamara Broderick
CSAIL, MIT
Cambridge, MA 02139
[email protected]
Abstract
Many popular network models rely on the assumption of (vertex) exchangeability,
in which the distribution of the graph is invariant to relabelings of the vertices.
However, the Aldous-Hoover theorem guarantees that these graphs are dense or
empty with probability one, whereas many real-world graphs are sparse. We
present an alternative notion of exchangeability for random graphs, which we call
edge exchangeability, in which the distribution of a graph sequence is invariant
to the order of the edges. We demonstrate that edge-exchangeable models, unlike
models that are traditionally vertex exchangeable, can exhibit sparsity. To do
so, we outline a general framework for graph generative models; by contrast to
the pioneering work of Caron and Fox [12], models within our framework are
stationary across steps of the graph sequence. In particular, our model grows the
graph by instantiating more latent atoms of a single random measure as the dataset
size increases, rather than adding new atoms to the measure.
1 Introduction
In recent years, network data have appeared in a growing number of applications, such as online
social networks, biological networks, and networks representing communication patterns. As a result,
there is growing interest in developing models for such data and studying their properties. Crucially,
individual network data sets also continue to increase in size; we typically assume that the number of
vertices is unbounded as time progresses. We say a graph sequence is dense if the number of edges
grows quadratically in the number of vertices, and a graph sequence is sparse if the number of edges
grows sub-quadratically as a function of the number of vertices. Sparse graph sequences are more
representative of real-world graph behavior. However, many popular network models (see, e.g., Lloyd
et al. [19] for an extensive list) share the undesirable scaling property that they yield dense sequences
of graphs with probability one. The poor scaling properties of these models can be traced back to a
seemingly innocent assumption: that the vertices in the model are exchangeable, that is, any finite
permutation of the rows and columns of the graph adjacency matrix does not change the distribution
of the graph. Under this assumption, the Aldous-Hoover theorem [1, 16] implies that such models
generate dense or empty graphs with probability one [20].
This fundamental model misspecification motivates the development of new models that can achieve
sparsity. One recent focus has been on models in which an additional parameter is employed to
uniformly decrease the probabilities of edges as the network grows (e.g., Bollob?s et al. [3], Borgs
et al. [4, 5], Wolfe and Olhede [24]). While these models allow sparse graph sequences, the sequences
are no longer projective. In projective sequences, vertices and edges are added to a graph as a
graph sequence progresses, whereas in the models above, there is not generally any strict subgraph
relationship between earlier graphs and later graphs in the sequence. Projectivity is natural in
streaming modeling. For instance, we may wish to capture new users joining a social network and
new connections being made among existing users, or new employees joining a company and new
communications between existing employees.
Caron and Fox [12] have pioneered initial work on sparse, projective graph sequences. Instead of
the vertex exchangeability that yields the Aldous-Hoover theorem, they consider a notion of graph
exchangeability based on the idea of independent increments of subordinators [18], explored in depth
by Veitch and Roy [22]. However, since this Kallenberg-style exchangeability introduces a new
countable infinity of latent vertices at every step in the graph sequence, its generative mechanism
seems particularly suited to the non-stationary domain. By contrast, we are here interested in exploring
stationary models that grow in complexity with the size of the data set. Consider classic Bayesian
nonparametric models such as the Chinese restaurant process (CRP) and the Indian buffet process (IBP); these
engender growth by using a single infinite latent collection of parameters to generate a finite but
growing set of instantiated parameters. Similarly, we propose a framework that uses a single infinite
latent collection of vertices to generate a finite but growing set of vertices that participate in edges
and thereby in the network. We believe our framework will be a useful component in more complex,
non-stationary graphical models?just as the CRP and IBP are often combined with hidden Markov
models or other explicit non-stationary mechanisms. Additionally, Kallenberg exchangeability is
intimately tied to continuous-valued labels of the vertices, and here we are interested in providing a
characterization of the graph sequence based solely on its topology.
In this work, we introduce a new form of exchangeability, distinct from both vertex exchangeability
and Kallenberg exchangeability. In particular, we say that a graph sequence is edge exchangeable if
the distribution of any graph in the sequence is invariant to the order in which edges arrive, rather
than the order of the vertices. We will demonstrate that edge exchangeability admits a large family of
sparse, projective graph sequences.
In the remainder of the paper, we start by defining dense and sparse graph sequences rigorously.
We review vertex exchangeability before introducing our new notion of edge exchangeability in
Section 2, which we also contrast with Kallenberg exchangeability in more detail in Section 4. We
define a family of models, which we call graph frequency models, based on random measures in
Section 3. We use these models to show that edge-exchangeable models can yield sparse, projective
graph sequences via theoretical analysis in Section 5 and via simulations in Section 6. Along the way,
we highlight other benefits of the edge exchangeability and graph frequency model frameworks.
2 Exchangeability in graphs: old and new
Let (Gn)n := G1, G2, . . . be a sequence of graphs, where each graph Gn = (Vn, En) consists of a (finite) set of vertices Vn and a (finite) multiset of edges En. Each edge e ∈ En is a set of two vertices in Vn. We assume the sequence is projective, or growing, so that Vn ⊆ Vn+1 and En ⊆ En+1. Consider, e.g., a social network with more users joining the network and making new connections with existing users. We say that a graph sequence is dense if |En| = Ω(|Vn|²), i.e., the number of edges is asymptotically lower bounded by c · |Vn|² for some constant c. Conversely, a sequence is sparse if |En| = o(|Vn|²), i.e., the number of edges is asymptotically upper bounded by c · |Vn|² for all constants c. In what follows, we consider random graph sequences, and we focus on the case where |Vn| → ∞ almost surely.
2.1 Vertex-exchangeable graph sequences
If the number of vertices in the graph sequence grows to infinity, the graphs in the sequence can
be thought of as subgraphs of an "infinite" graph with infinitely many vertices and a correspondingly infinite adjacency matrix. Traditionally, exchangeability in random graphs is defined as the invariance of the distribution of any finite submatrix of this adjacency matrix (corresponding to any finite collection of vertices) under finite permutation. Equivalently, we can express this form of exchangeability, which we henceforth call vertex exchangeability, by considering a random sequence of graphs (Gn)n with Vn = [n], where [n] := {1, . . . , n}. In this case, only the edge sequence is random. Let π be any permutation of the integers [n]. If e = {v, w}, let π(e) := {π(v), π(w)}. If En = {e1, . . . , em}, let π(En) := {π(e1), . . . , π(em)}.
Definition 2.1. Consider the random graph sequence (Gn)n, where Gn has vertices Vn = [n] and edges En. (Gn)n is (infinitely) vertex exchangeable if for every n ∈ N and for every permutation π of the vertices [n], Gn =d G̃n, where G̃n has vertices [n] and edges π(En).
Figure 1: Upper, left four: Step-augmented graph sequence from Ex. 2.2. At each step n, the step
value is always at least the maximum vertex index. Upper, right two: Two graphs with the same
probability under vertex exchangeability. Lower, left four: Step-augmented graph sequence from
Ex. 2.3. Lower, right two: Two graphs with the same probability under edge exchangeability.
A great many popular models for graphs are vertex exchangeable; see Appendix B and Lloyd
et al. [19] for a list. However, it follows from the Aldous-Hoover theorem [1, 16] that any vertex-exchangeable graph is a mixture of sampling procedures from graphons. Further, any graph sampled
from a graphon is almost surely dense or empty [20]. Thus, vertex-exchangeable random graph
models are misspecified models for sparse network datasets, as they generate dense graphs.
2.2 Edge-exchangeable graph sequences
Vertex-exchangeable sequences have distributions invariant to the order of vertex arrival. We introduce
edge-exchangeable graph sequences, which will instead be invariant to the order of edge arrival.
As before, we let Gn = (Vn , En ) be the nth graph in the sequence. Here, though, we consider
only active vertices, that is, vertices that are connected via some edge. That lets us define Vn as a
function of En ; namely, Vn is the union of the vertices in En . Note that a graph that has sub-quadratic
growth in the number of edges as a function of the number of active vertices will necessarily have
sub-quadratic growth in the number of edges as a function of the number of all vertices, so we obtain
strictly stronger results by considering active vertices. In this case, the graph Gn is completely
defined by its edge set En .
As above, we suppose that En ⊆ En+1. We can emphasize this projectivity property by augmenting each edge with the step on which it is added to the sequence. Let E′n be a collection of tuples, in which the first element is the edge and the second element is the step (i.e., index) on which the edge is added: E′n = {(e1, s1), . . . , (em, sm)}. We can then define a step-augmented graph sequence (E′n)n = (E′1, E′2, . . .) as a sequence of step-augmented edge sets. Note that there is a bijection between the step-augmented graph sequence and the original graph sequence.
Example 2.2. In the setup for vertex exchangeability, we assumed Vn = [n] and every edge is introduced as soon as both of its vertices are introduced. In this case, the step of any edge in the step-augmented graph is the maximum vertex value. For example, in Figure 1, we have

    E′1 = ∅,  E′2 = E′3 = {({1, 2}, 2)},  E′4 = {({1, 2}, 2), ({1, 4}, 4), ({2, 4}, 4), ({3, 4}, 4)}.

In general step-augmented graphs, though, the step need not equal the max vertex, as we see next.
Example 2.3. Suppose we have a graph given by the edge sequence (see Figure 1):

    E1 = E2 = {{2, 5}, {5, 5}},  E3 = E2 ∪ {{2, 5}},  E4 = E3 ∪ {{1, 6}}.

The step-augmented graph E′4 is {({2, 5}, 1), ({5, 5}, 1), ({2, 5}, 3), ({1, 6}, 4)}.
Roughly, a random graph sequence is edge exchangeable if its distribution is invariant to finite permutations of the steps. Let π be a permutation of the integers [n]. For a step-augmented edge set E′n = {(e1, s1), . . . , (em, sm)}, let π(E′n) = {(e1, π(s1)), . . . , (em, π(sm))}.
Definition 2.4. Consider the random graph sequence (Gn)n, where Gn has step-augmented edges E′n and Vn are the active vertices of En. (Gn)n is (infinitely) edge exchangeable if for every n ∈ N and for every permutation π of the steps [n], Gn =d G̃n, where G̃n has step-augmented edges π(E′n) and associated active vertices.
See Figure 1 for visualizations of both vertex exchangeability and edge exchangeability. It remains to show that there are non-trivial models that are edge exchangeable (Section 3) and that edge-exchangeable models admit sparse graphs (Section 5).
3 Graph frequency models
We next demonstrate that a wide class of models, which we call graph frequency models, exhibit edge
exchangeability. Consider a latent infinity of vertices indexed by the positive integers N = {1, 2, . . .},
along with an infinity of edge labels (θ{i,j}), each in a set Θ, and positive edge rates (or frequencies) (w{i,j}) in R+. We allow both the (θ{i,j}) and (w{i,j}) to be random, though this is not mandatory. For instance, we might choose θ{i,j} = (i, j) for i ≤ j, and Θ = R². Alternatively, the θ{i,j} could be drawn iid from a continuous distribution such as Unif[0, 1]. For any choice of (θ{i,j}) and (w{i,j}),

    W := Σ_{{i,j} : i,j ∈ N} w{i,j} δ_{θ{i,j}}        (1)

is a measure on Θ. Moreover, it is a discrete measure since it is always atomic. If either (θ{i,j}) or (w{i,j}) (or both) are random, W is a discrete random measure on Θ since it is a random, discrete-measure-valued element. Given the edge rates (or frequencies) (w{i,j}) in W, we next show some natural ways to construct edge-exchangeable graphs.
Single edge per step. If the rates (w{i,j}) are normalized such that Σ_{{i,j}: i,j ∈ N} w{i,j} = 1, then (w{i,j}) is a distribution over all possible vertex pairs. In other words, W is a probability measure. We can form an edge-exchangeable graph sequence by first drawing values for (w{i,j}) and (θ{i,j}), and setting E0 = ∅. We recursively set En+1 = En ∪ {e}, where e is an edge {i, j} chosen from the distribution (w{i,j}). This construction introduces a single edge in the graph each step, although it may be a duplicate of an edge that already exists. Therefore, this technique generates multigraphs one edge at a time. Since the edge every step is drawn conditionally iid given W, we have an edge-exchangeable graph.
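For concreteness, this construction can be sketched in a few lines of Python; the finite list of candidate pairs and the normalized weight vector stand in for a truncation of the latent infinity of vertex pairs, and all names here are ours for illustration, not part of the model.

```python
import numpy as np

def single_edge_per_step(pairs, p, n_steps, rng):
    """Draw one edge per step, conditionally iid given the weights p.

    pairs: list of candidate vertex pairs {i, j} (a finite truncation);
    p: normalized probabilities over those pairs (the rates w_{i,j}).
    Returns the edge multiset E_n as a list, one edge per step.
    """
    idx = rng.choice(len(pairs), size=n_steps, p=p)
    return [pairs[i] for i in idx]

rng = np.random.default_rng(0)
pairs = [(1, 2), (1, 3), (2, 3)]
edges = single_edge_per_step(pairs, p=[0.5, 0.3, 0.2], n_steps=10, rng=rng)
```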
Multiple edges per step. Alternatively, the rates (w{i,j}) may not be normalized. Then W may not be a probability measure. Let f(m | w) be a distribution over non-negative integers m given some rate w ∈ R+. We again initialize our sequence by drawing (w{i,j}) and (θ{i,j}) and setting E0 = ∅. In this case, recursively, on the nth step, start by setting F = ∅. For every possible edge e = {i, j}, we draw the multiplicity of the edge e in this step as me ~ f(· | we), independently across edges, and add me copies of edge e to F. Finally, En+1 = En ∪ F. This technique potentially introduces multiple edges in each step, in which edges themselves may have multiplicity greater than one and may be duplicates of edges that already exist in the graph. Therefore, this technique generates multigraphs, multiple edges at a time. If we restrict f and W such that finitely many edges are added on every step almost surely, we have an edge-exchangeable graph, as the edges in each step are drawn conditionally iid given W.
Given a sequence of edge sets E0, E1, . . . constructed via either of the above methods, we can form a binary graph sequence Ē0, Ē1, . . . by setting Ēi to have the same edges as Ei except with multiplicity 1. Although this binary graph is not itself edge exchangeable, it inherits many of the properties (such as sparsity, as shown in Section 5) of the underlying edge-exchangeable multigraph.
The choice of the distribution on the measure W has a strong influence on the properties of the
resulting edge-exchangeable graph sampled via one of the above methods. For example, one choice is
to set w{i,j} = wi wj, where the (wi)i are a countable infinity of random values generated according to a Poisson point process (PPP). We say that (wi)i is distributed according to a Poisson point process parameterized by rate measure ν, written (wi)i ~ PPP(ν), if (a) #{i : wi ∈ A} ~ Poisson(ν(A)) for any set A with finite measure ν(A) and (b) #{i : wi ∈ Aj} are independent random variables across any finite collection of disjoint sets (Aj), j = 1, . . . , J. In Section 5 we examine a particular example of this graph frequency model, and demonstrate that sparsity is possible in edge-exchangeable graphs.
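For intuition, the definition above translates directly into a sampler on any set of finite measure. The minimal sketch below assumes a power-law rate measure ν(dw) = α w^(−1−α) dw restricted to A = [ε, 1]; both the truncation ε and this particular ν are illustrative choices of ours, not from the paper. The number of points is Poisson(ν(A)), and given the count the points are iid from the normalized measure, drawn here by inverse-CDF sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, eps = 0.5, 1e-4                  # tail index and truncation (our choices)
mass = eps ** (-alpha) - 1.0            # nu([eps, 1]) for nu(dw) = alpha w^(-1-alpha) dw
N = rng.poisson(mass)                   # (a): the count is Poisson(nu(A))
u = rng.uniform(size=N)
w = (eps ** (-alpha) - u * mass) ** (-1.0 / alpha)  # iid draws from normalized nu on [eps, 1]
```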
(a) Graph frequency model (fixed y, n steps)    (b) Caron-Fox, PPP on [0, y] × [0, y] (1 step, y grows)
Figure 2: A comparison of a graph frequency model (Section 3 and Equation (2)) and the generative model of Caron and Fox [12]. Any interval [0, y] contains a countably infinite number of atoms with a nonzero weight in the random measure; a draw from the random measure is plotted at the top (and repeated on the right side). Each atom corresponds to a latent vertex. Each point (θi, θj) corresponds to a latent edge. Darker point colors on the left occur for greater edge multiplicities. On the left, more latent edges are instantiated as more steps n are taken. On the right, the edges within [0, y]² are fixed, but more edges are instantiated as y grows.
4 Related work and connection to nonparametric Bayes
Given a unique label θi for each vertex i ∈ N, and denoting gij = gji to be the number of undirected edges between vertices i and j, the graph itself can be represented as the discrete random measure G = Σ_{i,j} gij δ_{(θi, θj)} on R²+. A different notion of exchangeability for graphs than the ones in Section 2 can be phrased for such atomic random measures: a point process G on R²+ is (jointly) exchangeable if, for all finite permutations π of N and all h > 0,

    G(Ai × Aj) =d G(A_{π(i)} × A_{π(j)}) for (i, j) ∈ N²,    where Ai := [h · (i − 1), h · i].

This form of exchangeability, which we refer to as Kallenberg exchangeability, can intuitively be viewed as invariance of the graph distribution to relabeling of the vertices, which are now embedded in R²+. As such it is analogous to vertex exchangeability, but for discrete random measures [12, Sec. 4.1].
Exchangeability for random measures was introduced by Aldous [2], and a representation theorem
was given by Kallenberg [17, 18, Ch. 9]. The use of Kallenberg exchangeability for modeling graphs
was first proposed by Caron and Fox [12], and then characterized in greater generality by Veitch and
Roy [22] and Borgs et al. [6]. Edge exchangeability is distinct from Kallenberg exchangeability, as
shown by the following example.
Example 4.1 (Edge exchangeable but not Kallenberg exchangeable). Consider the graph frequency model developed in Section 3, with w{i,j} = (ij)^(−2) and θ{i,j} = {i, j}. Since the edges at each step are drawn iid given W, the graph sequence is edge exchangeable. However, the corresponding graph measure G = Σ_{i,j} nij δ_{(i,j)} (where nij = nji ~ Binom(N, (ij)^(−2))) is not Kallenberg exchangeable, since the probability of generating edge {i, j} is directly related to the positions (i, j) and (j, i) in R²+ of the corresponding atoms in G (in particular, the probability is decreasing in ij).
Our graph frequency model is reminiscent of the Caron and Fox [12] generative model, but has a number of key differences. At a high level, this earlier model generates a weight measure W = Σ_{i,j} wij δ_{(θi, θj)} (Caron and Fox [12] used, in particular, the outer product of a completely random measure), and the graph measure G is constructed by sampling gij once given wij for each pair i, j. To create a finite graph, the graph measure G is restricted to the subset [0, y] × [0, y] ⊂ R²+ for 0 < y < ∞; to create a projective growing graph sequence, the value of y is increased. By contrast, in the analogous graph frequency model of the present work, y is fixed, and we grow the network
by repeatedly sampling the number of edges gij between vertices i and j and summing the result.
Thus, in the Caron and Fox [12] model, a latent infinity of vertices (only finitely many of which
are active) are added to the network each time y increases. In our graph frequency model, there is
a single collection of latent vertices, which are all gradually activated by increasing the number of
samples that generate edges between the vertices. See Figure 2 for an illustration.
Increasing n in the graph frequency model has the interpretation of both (a) time passing and (b) new
individuals joining a network because they have formed a connection that was not previously there. In
particular, only latent individuals that will eventually join the network are considered. This behavior
is analogous to the well-known behavior of other nonparametric Bayesian models such as, e.g., a
Chinese restaurant process (CRP). In this analogy, the Dirichlet process (DP) corresponds to our
graph frequency model, and the clusters instantiated by the CRP correspond to the vertices that are
active after n steps. In the DP, only latent clusters that will eventually appear in the data are modeled.
Since the graph frequency setting is stationary like the DP/CRP, it may be more straightforward to
develop approximate Bayesian inference algorithms, e.g., via truncation [11].
Edge exchangeability first appeared in work by Crane and Dempsey [13, 14], Williamson [23], and
Broderick and Cai [7, 8], Cai and Broderick [10]. Broderick and Cai [7, 8] established the notion of
edge exchangeability used here and provided characterizations via exchangeable partitions and feature
allocations, as in Appendix C. Broderick and Cai [7], Cai and Broderick [10] developed a frequency
model based on weights (wi )i generated from a Poisson process and studied several types of power
laws in the model. Crane and Dempsey [13] established a similar notion of edge exchangeability
in the context of a larger statistical modeling framework. Crane and Dempsey [13, 14] provided
sparsity and power law results for the case where the weights (wi )i are generated from a Pitman-Yor
process and power law degree distribution simulations. Williamson [23] described a similar notion
of edge exchangeability and developed an edge-exchangeable model where the weights (wi )i are
generated from a Dirichlet process, a mixture model extension, and an efficient Bayesian inference
procedure. In work concurrent to the present paper, Crane and Dempsey [15] re-examined edge
exchangeability, provided a representation theorem, and studied sparsity and power laws for the same
model based on Pitman-Yor weights. By contrast, we here obtain sparsity results across all Poisson
point process-based graph frequency models of the form in Equation (2) below, and use a specific
three-parameter beta process rate measure only for simulations in Section 6.
5 Sparsity in Poisson process graph frequency models
We now demonstrate that, unlike vertex exchangeability, edge exchangeability allows for sparsity in
random graph sequences. We develop a class of sparse, edge-exchangeable multigraph sequences via
the Poisson point process construction introduced in Section 3, along with their binary restrictions.
Model. Let W be a Poisson process on [0, 1] with a nonatomic, σ-finite rate measure ν satisfying ν([0, 1]) = ∞ and ∫₀¹ w ν(dw) < ∞. These two conditions on ν guarantee that W is a countably infinite collection of rates in [0, 1] and that Σ_{w ∈ W} w < ∞ almost surely. We can use W to construct the set of rates: w{i,j} = wi wj if i ≠ j, and w{i,i} = 0. The edge labels θ{i,j} are unimportant in characterizing sparsity, and so can be ignored.
To use the multiple-edges-per-step graph frequency model from Section 3, we let f(· | w) be Bernoulli with probability w. Since edge {i, j} is added in each step with probability wi wj, its multiplicity M{i,j} after n steps has a binomial distribution with parameters n, wi wj. Note that self-loops are avoided by setting w{i,i} = 0. Therefore, the graph after n steps is described by:

    W ~ PPP(ν),    M{i,j} ~ Binom(n, wi wj), independently for i < j ∈ N.        (2)

As mentioned earlier, this generative model yields an edge-exchangeable graph, with edge multiset En containing {i, j} with multiplicity M{i,j}, and active vertices Vn = {i : Σ_j M{i,j} > 0}.
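Given a finite truncation of the weights, the model in Equation (2) is straightforward to simulate. The following sketch (the function name and truncation are ours, for illustration) draws the binomial multiplicities and reports the number of active vertices together with the multigraph and binary edge counts.

```python
import numpy as np

def sample_graph(w, n, rng):
    """Simulate Eq. (2) given a finite weight vector w (a sketch).

    Draws M[i, j] ~ Binom(n, w[i] * w[j]) for i < j and returns the number
    of active vertices, multigraph edges, and binary-restriction edges.
    """
    iu = np.triu_indices(len(w), k=1)        # pairs i < j; no self-loops
    M = rng.binomial(n, np.outer(w, w)[iu])
    deg = np.zeros(len(w), dtype=int)
    np.add.at(deg, iu[0], M)                 # each edge touches both endpoints
    np.add.at(deg, iu[1], M)
    return (deg > 0).sum(), M.sum(), (M > 0).sum()
```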
Although this model generates multigraphs, it can be modified to sample a binary graph (V̄n, Ēn) by setting V̄n = Vn and Ēn to the set of edges {i, j} such that {i, j} has multiplicity ≥ 1 in En. We can express the number of vertices and edges, in the multi- and binary graphs respectively, as

    |V̄n| = |Vn| = Σ_i 1{Σ_{j≠i} M{i,j} > 0},    |En| = (1/2) Σ_{i≠j} M{i,j},    |Ēn| = (1/2) Σ_{i≠j} 1{M{i,j} > 0}.
Moments. Recall that a sequence of graphs is considered sparse if |En| = o(|Vn|²). Thus, sparsity in the present setting is an asymptotic property of a random graph sequence. Rather than consider the asymptotics of the (dependent) random sequences |En| and |Vn| in concert, Lemma 5.1 allows us to consider the asymptotics of their first moments, which are deterministic sequences and can be analyzed separately. We use ~ to denote asymptotic equivalence, i.e., an ~ bn if and only if lim_{n→∞} an/bn = 1. For details on our asymptotic notation and proofs for this section, see Appendix D.
Lemma 5.1. The number of vertices and edges for both the multi- and binary graphs satisfy

    |V̄n| = |Vn| ~ E(|Vn|) a.s.,    |En| ~ E(|En|) a.s.,    |Ēn| ~ E(|Ēn|) a.s.,    as n → ∞.

Thus, we can examine the asymptotic behavior of the random numbers of edges and vertices by examining the asymptotic behavior of their expectations, which are provided by Lemma 5.2.
Lemma 5.2. The expected numbers of vertices and edges for the multi- and binary graphs are

    E(|V̄n|) = E(|Vn|) = ∫ [ 1 − exp( −∫ (1 − (1 − wv)^n) ν(dv) ) ] ν(dw),
    E(|En|) = (n/2) ∬ wv ν(dw) ν(dv),
    E(|Ēn|) = (1/2) ∬ (1 − (1 − wv)^n) ν(dw) ν(dv).
Sparsity. We are now equipped to characterize the sparsity of this random graph sequence:
Theorem 5.3. Suppose ν has a regularly varying tail, i.e., there exist Λ ∈ (0, 1) and ℓ : R+ → R+ s.t.

    ∫_x^1 ν(dw) ~ x^(−Λ) ℓ(x^(−1)) as x → 0,    and    for all c > 0, lim_{x→∞} ℓ(cx)/ℓ(x) = 1.

Then as n → ∞,

    |Vn| = Θ(n^Λ ℓ(n)) a.s.,    |En| = Θ(n) a.s.,    |Ēn| = O( ℓ(n^(1/2)) · min{ n^((1+Λ)/2), ℓ(n) n^((3−Λ)/2) } ) a.s.

Theorem 5.3 implies that the multigraph is sparse when Λ ∈ (1/2, 1), and that the restriction to the binary graph is sparse for any Λ ∈ (0, 1). See Remark D.7 for a discussion. Thus, edge-exchangeable random graph sequences allow for a wide range of sparse and dense behavior.
6 Simulations
In this section, we explore the behavior of graphs generated by the model from Section 5 via simulation, with the primary goal of empirically demonstrating that the model produces sparse graphs. We consider the case when the Poisson process generating the weights in Equation (2) has the rate measure of a three-parameter beta process (3-BP) on (0, 1) [9, 21]:

    ν(dw) = γ · [Γ(1 + θ) / (Γ(1 − α) Γ(θ + α))] · w^(−1−α) (1 − w)^(θ+α−1) dw,        (3)
with mass γ > 0, concentration θ > 0, and discount α ∈ (0, 1). In order for the 3-BP to have finite total mass Σ_j wj < ∞, we require that θ > −α. We draw realizations of the weights from a 3-BP(γ, θ, α) according to the stick-breaking representation given by Broderick, Jordan, and Pitman [9]. That is, the wi are the atom weights of the measure W for

    W = Σ_{i=1}^∞ Σ_{j=1}^{Ci} V_{i,j}^{(i)} ∏_{ℓ=1}^{i−1} (1 − V_{i,j}^{(ℓ)}) δ_{ψ_{i,j}},
    Ci ~ Pois(γ) iid,    V_{i,j}^{(ℓ)} ~ Beta(1 − α, θ + ℓα) independently,    ψ_{i,j} ~ B0 iid,

and any continuous (i.e., non-atomic) choice of distribution B0.
Since simulating an infinite number of atoms is not possible, we truncate the outer summation in i to 2000 rounds, resulting in Σ_{i=1}^{2000} Ci weights. The parameters of the beta process were fixed to γ = 3 and θ = 1, as they do not influence the sparsity of the resulting graph frequency model, and we varied the discount parameter α. Given a single draw W (at some specific discount α), we then simulated the edges of the graph, where the number of Bernoulli draws N varied between 50 and 2000.
Figure 3: Data simulated from a graph frequency model with weights generated according to a 3-BP; (a) multigraph edges vs. active vertices, (b) binary graph edges vs. active vertices. Colors represent different random draws. The dashed line has a slope of 2.
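A minimal sketch of this truncated stick-breaking draw, assuming our own function name and NumPy's generator API (the paper provides no code), is:

```python
import numpy as np

def sample_3bp_weights(gamma, theta, alpha, rounds, rng):
    """Truncated stick-breaking draw of 3-BP(gamma, theta, alpha) atom
    weights, following Broderick, Jordan, and Pitman [9] (a sketch)."""
    weights = []
    for i in range(1, rounds + 1):
        C = rng.poisson(gamma)                        # C_i ~ Pois(gamma)
        for _ in range(C):
            # V^(l) ~ Beta(1 - alpha, theta + l * alpha) for l = 1, ..., i
            V = rng.beta(1 - alpha, theta + alpha * np.arange(1, i + 1))
            # atom weight: V^(i) * prod_{l < i} (1 - V^(l))
            weights.append(V[-1] * np.prod(1 - V[:-1]))
    return np.array(weights)

w = sample_3bp_weights(gamma=3.0, theta=1.0, alpha=0.6, rounds=2000,
                       rng=np.random.default_rng(0))
```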
Figure 3a shows how the number of edges varies versus the total number of active vertices for the multigraph, with different colors representing different random seeds. To check whether the generated graph was sparse, we determined the exponent by examining the slope of the data points (on a log scale). In all plots, the black dashed line is a line with slope 2. In the multigraph, we found that for the discount parameter settings α = 0.6, 0.7, the slopes were below 2; for α = 0, 0.3, the slopes were greater than 2. This corresponds to our theoretical results; for α < 0.5 the multigraph is dense with slope greater than 2, and for α > 0.5 the multigraph is sparse with slope less than 2. Furthermore, the sparse graphs exhibit power law relationships between the number of edges and vertices, i.e., |EN| ~ c |VN|^b a.s. as N → ∞, where b ∈ (1, 2), as suggested by the linear relationship in the plots between the quantities on a log scale. Note that there are necessarily fewer edges in the binary graph than in the multigraph, and thus this plot implies that the binary graph frequency model can also capture sparsity. Figure 3b confirms this observation; it shows how the number of edges varies with the number of active vertices for the binary graph. In this case, across α ∈ (0, 1), we observe slopes that are less than 2. This agrees with our theory from Section 5, which states that the binary graph is sparse for any α ∈ (0, 1).
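Combining the two sketches above (sample_3bp_weights and sample_graph, both hypothetical helper names of ours), a rough replication of this slope check, with a smaller truncation than the paper's for speed, is:

```python
import numpy as np

rng = np.random.default_rng(0)
w = sample_3bp_weights(gamma=3.0, theta=1.0, alpha=0.6, rounds=200, rng=rng)
V, E_multi, E_bin = map(np.array, zip(*[sample_graph(w, n, rng)
                                        for n in (50, 200, 800, 2000)]))
slope = np.polyfit(np.log(V), np.log(E_multi), 1)[0]   # slope < 2 suggests sparsity
print(slope)
```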
7 Conclusions
We have proposed an alternative form of exchangeability for random graphs, which we call edge
exchangeability, in which the distribution of a graph sequence is invariant to the order of the edges. We
have demonstrated that edge-exchangeable graph sequences, unlike traditional vertex-exchangeable
sequences, can be sparse by developing a class of edge-exchangeable graph frequency models that
provably exhibit sparsity. Simulations using edge frequencies drawn according to a three-parameter
beta process confirm our theoretical results regarding sparsity. Our results suggest that a variety of
future directions would be fruitful, including theoretically characterizing different types of power
laws within graph frequency models, characterizing the use of truncation within graph frequency
models as a means for approximate Bayesian inference in graphs, and understanding the full range of
distributions over sparse, edge-exchangeable graph sequences.
Acknowledgments
We would like to thank Bailey Fosdick and Tyler McCormick for helpful conversations.
References
[1] D. J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate
Analysis, 11(4):581-598, 1981.
[2] D. J. Aldous. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour, XIII-1983, volume 1117 of Lecture Notes in Math., pages 1-198. Springer, Berlin, 1985.
[3] B. Bollobás, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Structures Algorithms, 31(1):3-122, 2007.
[4] C. Borgs, J. T. Chayes, H. Cohn, and Y. Zhao. An Lp theory of sparse graph convergence I: limits, sparse
random graph models, and power law distributions. arXiv e-print 1401.2906, 2014.
[5] C. Borgs, J. T. Chayes, H. Cohn, and S. Ganguly. Consistent nonparametric estimation for heavy-tailed
sparse graphs. arXiv e-print 1401.1137, 2015.
[6] C. Borgs, J. T. Chayes, H. Cohn, and N. Holden. Sparse exchangeable graphs and their limits via graphon
processes. arXiv e-print 1601.07134, 2016.
[7] T. Broderick and D. Cai. Edge-exchangeable graphs, sparsity, and power laws. In NIPS 2015 Workshop on
Bayesian Nonparametrics: The Next Generation, 2015.
[8] T. Broderick and D. Cai. Edge-exchangeable graphs and sparsity. In NIPS 2015 Workshop on Networks in
the Social and Informational Sciences, 2015.
[9] T. Broderick, M. I. Jordan, and J. Pitman. Beta processes, stick-breaking and power laws. Bayesian
Analysis, 7(2):439-475, 2012.
[10] D. Cai and T. Broderick. Completely random measures for modeling power laws in sparse graphs. In NIPS
2015 Workshop on Networks in the Social and Informational Sciences, 2015.
[11] T. Campbell, J. Huggins, J. How, and T. Broderick. Truncated random measures. arXiv e-print 1603.00861,
2016.
[12] F. Caron and E. Fox. Sparse graphs using exchangeable random measures. arXiv e-print 1401.1137v3,
2015.
[13] H. Crane and W. Dempsey. A framework for statistical network modeling. arXiv e-print 1509.08185, 2015.
[14] H. Crane and W. Dempsey. Atypical scaling behavior persists in real world interaction networks. arXiv
e-print 1509.08184, 2015.
[15] H. Crane and W. Dempsey. Edge exchangeable models for network data. arXiv e-print 1603.04571, 2016.
[16] D. N. Hoover. Relations on probability spaces and arrays of random variables. Preprint, Institute for
Advanced Study, Princeton, NJ, 1979.
[17] O. Kallenberg. Exchangeable random measures in the plane. Journal of Theoretical Probability, 3(1):
81-136, 1990.
[18] O. Kallenberg. Probabilistic symmetries and invariance principles. Probability and its Applications.
Springer, New York, 2005.
[19] J. R. Lloyd, P. Orbanz, Z. Ghahramani, and D. M. Roy. Random function priors for exchangeable arrays
with applications to graphs and relational data. In NIPS 25, 2012.
[20] P. Orbanz and D. M. Roy. Bayesian models of graphs, arrays and other exchangeable random structures.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):437-461, 2015.
[21] Y. W. Teh and D. Görür. Indian buffet processes with power-law behavior. In NIPS 23, 2009.
[22] V. Veitch and D. M. Roy. The class of random graphs arising from exchangeable random measures. arXiv
e-print 1512.03099, 2015.
[23] S. Williamson. Nonparametric network models for link prediction. Journal of Machine Learning Research,
17:1-21, 2016.
[24] P. J. Wolfe and S. C. Olhede. Nonparametric graphon estimation. arXiv e-print 1309.5936, 2013.
Poisson Latent Variable Models
Kevin Winner1 and Daniel Sheldon1,2
{kwinner,sheldon}@cs.umass.edu
1
College of Information and Computer Sciences, University of Massachusetts Amherst
2
Department of Computer Science, Mount Holyoke College
Abstract
Graphical models with latent count variables arise in a number of fields. Standard
exact inference techniques such as variable elimination and belief propagation
do not apply to these models because the latent variables have countably infinite
support. As a result, approximations such as truncation or MCMC are employed.
We present the first exact inference algorithms for a class of models with latent
count variables by developing a novel representation of countably infinite factors
as probability generating functions, and then performing variable elimination with
generating functions. Our approach is exact, runs in pseudo-polynomial time, and
is much faster than existing approximate techniques. It leads to better parameter
estimates for problems in population ecology by avoiding error introduced by
approximate likelihood computations.
1
Introduction
A key reason for the success of graphical models is the existence of fast algorithms that exploit the
graph structure to perform inference, such as Pearl's belief propagation [19] and related propagation algorithms [13, 16, 23] (which we refer to collectively as "message passing" algorithms), and variable
elimination [27]. For models with a simple enough graph structure, these algorithms can compute
marginal probabilities exponentially faster than direct summation.
However, these fast exact inference methods apply only to a relatively small class of models: those
for which the basic operations of marginalization, conditioning, and multiplication of constituent
factors can be done efficiently. In most cases, this means that the user is limited to models where the
variables are either discrete (and finite) or Gaussian, or they must resort to some approximate form of
inference. Why are Gaussian and discrete models tractable while others are not? The key issue is one
of representation. If we start with factors that are all discrete or all Gaussian, then: (1) factors can be
represented exactly and compactly, (2) conditioning, marginalization, and multiplication can be done
efficiently in the compact representation, and (3) each operation produces new factors of the same
type, so they can also be represented exactly and compactly.
Many models fail the restriction of being discrete or Gaussian even though they are qualitatively "easy". The goal of this paper is to expand the class of models amenable to fast exact inference
by developing and exploiting a novel representation for factors with properties similar to the three
above. In particular, we investigate models with latent count variables, and we develop techniques to
represent and manipulate factors using probability generating functions.
Figure 1 provides a simple example to illustrate the main ideas. It shows a model that is commonly
used to interpret field surveys in ecology, where it is known as an N-mixture model [22]. The latent
variable n ~ Poisson(λ) represents the unknown number of individual animals at a given site.
Repeated surveys are conducted at the site during which the observer detects each individual with
[Figure 1 (reconstructed from residue): (a) the N-mixture model, n ~ Poisson(λ), yk | n ~ Binomial(n, ρ) for k = 1:K; (b) the prior and posterior p(n); (c) the generating function

    F(s) = Σ_{n=0}^∞ p(n, y1 = 2, y2 = 5, y3 = 3) s^n
         = (0.0061s⁵ + 0.1034s⁶ + 0.5126s⁷ + 1.0000s⁸ + 0.8023s⁹ + 0.2184s¹⁰) × exp(8.4375s − 15.4101).]

Figure 1: The N-mixture model [22] is a simple model with a Poisson latent variable for which no exact inference algorithm is known: (a) the model, (b) the prior and posterior for λ = 20, ρ = 0.25, y1 = 2, y2 = 5, y3 = 3, (c) a closed form representation of the generating function of the unnormalized posterior, which is a compact and exact description of the posterior.
probability ρ, so each observation yk is Binomial(n, ρ). From these observations (usually across many sites with shared λ), the scientist wishes to infer n and fit λ and ρ.
This model is very simple: all variables are marginally Poisson, and the unnormalized posterior has a simple form (e.g., see Figure 1b). However, until recently, there was no known algorithm to exactly compute the likelihood p(y1:K). The naive way is to sum the unnormalized posterior p(n, y1, . . . , yK) over all possible values of n. However, n has a countably infinite support, so this is not possible. In practice, users of this and related models truncate the infinite sum at a finite value [22]. A recent paper
developed an exact algorithm for the N-mixture model, but one with running time that is exponential
in K [8]. For a much broader class of models with Poisson latent variables [5, 7, 11, 15, 28], there
are no known exact inference algorithms. Current methods either truncate the support [5, 7, 11], which
is slow (e.g., see [4]) and interacts poorly with parameter estimation [6, 8], or use MCMC [15, 28],
which is slow and for which convergence is hard to assess. The key difficulty with these models
is that we lack finite and computationally tractable representations of factors over variables with a
countably infinite support, such as the posterior distribution in the N-mixture model, or intermediate
factors in exact inference algorithms.
The main contribution of this paper is to develop compact and exact representations of countably
infinite factors using probability generating functions (PGFs) and to show how to perform variable
elimination in the domain of generating functions. We provide the first exact pseudo-polynomial
time inference algorithms (i.e., polynomial in the magnitude of the observed variables) for a class of
Poisson latent variable models, including the N-mixture model and a more general class of Poisson
HMMs. For example, the generating function of the unnormalized N-mixture posterior is shown
in Figure 1c, from which we can efficiently recover the likelihood p(y1 = 2, y2 = 5, y3 = 3) =
F (1) = 0.0025. For Poisson HMMs, we first develop a PGF-based forward algorithm to compute
the likelihood, which enables efficient parameter estimation. We then develop a "tail elimination"
approach to compute posterior marginals. Experiments show that our exact algorithms are much
faster than existing approximate approaches, and lead to better parameter estimation.
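As a quick numerical check of the value F(1) = 0.0025 quoted above, one can evaluate the truncated sum directly; truncation is exactly the approximation our exact methods avoid, so this snippet is only a sanity check, with a truncation level chosen by us for illustration.

```python
import numpy as np
from scipy.stats import poisson, binom

lam, rho, ys = 20.0, 0.25, [2, 5, 3]
n = np.arange(200)                       # truncation, for this check only
w = poisson.pmf(n, lam)                  # prior p(n)
for y in ys:
    w = w * binom.pmf(y, n, rho)         # unnormalized posterior p(n, y_{1:3})
print(w.sum())                           # F(1) = p(y1=2, y2=5, y3=3) ~= 0.0025
```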
Related work. Several previous works have used factor transformations for inference. Bickson and
Guestrin [2] show how to perform inference in the space of characteristic functions (see also [17])
for a certain class of factor graphs. Xue et al. [26] perform variable elimination in discrete models
using Walsh-Hadamard transforms. Jha et al. [14] use generating functions (over finite domains) to
compute the partition function of Markov logic networks. McKenzie [18] describes the use of PGFs
in discrete time series models, which are related to our models except they are fully observed, and
thus require no inference.
2 The Poisson Hidden Markov Model
Although our PGF-based approaches will apply more broadly, the primary focus of our work is a
Poisson hidden Markov model (HMM) that captures a number of models from different disciplines.
To describe the model, we first introduce notation for an operation called binomial thinning [24]. Write z = ρ ∘ n to mean that z | n ~ Binomial(n, ρ), i.e., z is the result of "thinning" the n individuals so that each remains with probability ρ. The Poisson HMM model is given by:

    nk = Poisson(λk) + δ_{k−1} ∘ n_{k−1},    yk = ρk ∘ nk,

for k ≥ 1, with the initialization condition n0 = 0.

[Figure 2: Poisson HMM graphical model over n1, . . . , nK and y1, . . . , yK.]

The variables n1, . . . , nK describe the size of a population at sampling times t1 < t2 < . . . < tK. At time tk, the population consists of a Poisson(λk) number of new arrivals, plus δ_{k−1} ∘ n_{k−1} survivors from the previous time step (each individual survives with probability δ_{k−1}). A noisy count yk = ρk ∘ nk is made of the population at time tk, where ρk is the detection probability of each individual. This model is broadly applicable. It models situations where individuals arrive in an iid fashion, and the time they remain is "memoryless". Versions of this model are used in ecology to model surveys of "open populations" (individuals arrive and depart over time) [7] and the timing and abundance of insect populations [12, 25, 29], and it also captures models from queueing theory [9] and generic time series models for count data [1, 18].
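A forward simulation of this generative process is immediate; the sketch below uses our own function name and example parameter values, chosen purely for illustration.

```python
import numpy as np

def simulate_phmm(lam, delta, rho, rng):
    """Forward-simulate the Poisson HMM: returns latent sizes n and counts y."""
    n_prev, ns, ys = 0, [], []
    for k in range(len(lam)):
        surv = rng.binomial(n_prev, delta[k - 1]) if k > 0 else 0  # delta_{k-1} o n_{k-1}
        n_k = rng.poisson(lam[k]) + surv                           # arrivals + survivors
        ys.append(rng.binomial(n_k, rho[k]))                       # y_k = rho_k o n_k
        ns.append(n_k)
        n_prev = n_k
    return ns, ys

ns, ys = simulate_phmm(lam=[20, 5, 5], delta=[0.5, 0.5], rho=[0.25, 0.25, 0.25],
                       rng=np.random.default_rng(0))
```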
Existing approaches. Two classes of methods have been applied for inference in Poisson HMMs and related models. The first is to truncate the support of the Poisson variables at a large but finite value Nmax [5, 7, 11, 22]. Then, for example, the Poisson HMM reduces to a standard discrete HMM. This is unsatisfactory because it is slow (a smart implementation that uses the fast Fourier transform takes time O(K N²max log Nmax)), and the choice of Nmax is intertwined with the unknown Poisson parameters λk, so the approximation interacts poorly with parameter estimation [6, 8]. The second class of approximate methods that has been applied to these problems is MCMC [28]. This is undesirable because it is also slow, and because the problem has a simple structure that should admit fast algorithms.
3 Variable Elimination with Generating Functions
Our approach to inference in Poisson HMMs will be to implement the same abstract set of operations as variable elimination, but using a representation based on probability generating functions. Because variable elimination will produce intermediate factors on larger sets of variables, and to highlight the ability of our methods to generalize to a larger class of models, we first abstract from the Poisson HMM to introduce general notation for graphical models with multivariate factors, and their corresponding multivariate generating functions.
Factors. Let x = (x1, . . . , xd) be a vector of nonnegative integer-valued random variables where xi ∈ Xi ⊆ Z≥0. The set Xi may be finite (e.g., to model binary or finite discrete variables), but we assume without loss of generality that Xi = Z≥0 for all i by defining factors to take value zero for integers outside of Xi. For any set α ⊆ {1, . . . , d}, define the subvector xα := (xi, i ∈ α). We consider probability models of the form p(x) = (1/Z) ∏_{α ∈ A} ψα(xα), where Z is a normalization constant and {ψα} is a set of factors ψα : Z≥0^|α| → R+ indexed by subsets α ⊆ {1, . . . , d} in a collection A.
Generating Functions. A general factor ψα on integer-valued variables cannot be finitely represented. We instead use the formalization of probability generating functions (PGFs). Let s = (s1, . . . , sd) be a vector of indeterminates corresponding to the random variables x. The joint PGF of a factor ψα is

    Fα(sα) = Σ_{xα} ψα(xα) ∏_{i ∈ α} si^{xi} = Σ_{xα} ψα(xα) · sα^{xα}.

Here, for two vectors a and b with the same index set I, we have defined a^b = ∏_{i ∈ I} ai^{bi}. The sum is over all vectors xα of non-negative integers.
Univariate PGFs of the form F(s) = Σ_{x=0}^∞ Pr(X = x) s^x = E[s^X], where X is a nonnegative integer-valued random variable, are widely used in probability and statistics [3, 21], and have a number of nice properties. A PGF uniquely encodes the distribution of X, and there are formulas to recover moments and entries of the probability mass function from the PGF. Most common distributions have closed-form PGFs, e.g., F(s) = exp{λ(s − 1)} when X ~ Poisson(λ). Similarly, the joint PGF Fα uniquely encodes the factor ψα, and we will develop a set of useful operations on joint PGFs. Note that we abuse terminology slightly by referring to the generating function of the factor ψα as a probability generating function; however, it is consistent with the view of ψα as an unnormalized probability distribution.
3.1 Operations on Generating Functions
Our goal is to perform variable elimination using factors represented as PGFs. To do this, the basic operations we need to support are multiplication, marginalization, and "entering evidence" into factors (reducing the factor by fixing the value of one variable). In this section we state a number of results about PGFs that show how to perform such operations. For the most part, these are either well known or variations on well known facts about PGFs (e.g., see [10], Chapters 11, 12). All proofs can be found in the supplementary material.
First, we see that marginalization of factors is very easy in the PGF domain:
Proposition 1 (Marginalization). Let ψ_{α\i}(x_{α\i}) := Σ_{xi ∈ Xi} ψα(x_{α\i}, xi) be the factor obtained from marginalizing i out of ψα. The joint PGF of ψ_{α\i} is F_{α\i}(s_{α\i}) = Fα(s_{α\i}, 1). The normalization constant Σ_{xα} ψα(xα) is equal to Fα(1, . . . , 1).
Entering evidence is also straightforward:
Proposition 2 (Evidence). Let ψ_{α\i}(x_{α\i}) := ψα(x_{α\i}, a) be the factor resulting from observing the value xi = a in ψα. The joint PGF of ψ_{α\i} is F_{α\i}(s_{α\i}) = (1/a!) ∂^a Fα(sα)/∂si^a |_{si = 0}.
Multiplication in the PGF domain, i.e., computing the PGF of the product ψα(xα) ψβ(xβ) of two factors ψα and ψβ, is not straightforward in general. However, for certain types of factors, multiplication is possible. We give two cases.
Proposition 3 (Multiplication: Binomial thinning). Let ψ_{α∪j}(xα, xj) = ψα(xα) · Binomial(xj | xi, ρ) be the factor resulting from expanding ψα to introduce a thinned variable xj := ρ ∘ xi, where i ∈ α and j ∉ α. The joint PGF of ψ_{α∪j} is F_{α∪j}(sα, sj) = Fα(s_{α\i}, si(ρ sj + 1 − ρ)).
Proposition 4 (Multiplication: Addition of two variables). Let ψγ(xα, xβ, xk) := ψα(xα) ψβ(xβ) I{xk = xi + xj} be the joint factor resulting from the introduction of a new variable xk = xi + xj, where i ∈ α, j ∈ β, k ∉ α ∪ β, and γ := α ∪ β ∪ {k}. The joint PGF of ψγ is Fγ(sα, sβ, sk) = Fα(s_{α\i}, sk si) Fβ(s_{β\j}, sk sj).
The four basic operations above are enough to perform variable elimination on a large set of models. In practice, it is useful to introduce additional operations that combine two of the above operations.
Proposition 5 (Thin then observe). Let ψ′α(xα) := ψα(xα) · Binomial(a | xi, ρ) be the factor resulting from observing the thinned variable ρ ∘ xi = a for i ∈ α. The joint PGF of ψ′α is

    F′α(sα) = (1/a!) (si ρ)^a · ∂^a Fα(s_{α\i}, ti)/∂ti^a |_{ti = si(1 − ρ)}.

Proposition 6 (Thin then marginalize). Let ψ_{(α\i)∪j}(x_{α\i}, xj) := Σ_{xi} ψα(xα) · Binomial(xj | xi, ρ) be the factor resulting from introducing xj := ρ ∘ xi and then marginalizing xi, for i ∈ α, j ∉ α. The joint PGF of ψ_{(α\i)∪j} is F_{(α\i)∪j}(s_{α\i}, sj) = Fα(s_{α\i}, ρ sj + 1 − ρ).
Proposition 7 (Add then marginalize). Let ψγ(x_{α\i}, x_{β\j}, xk) := Σ_{xi, xj} ψα(xα) ψβ(xβ) I{xk = xi + xj} be the factor resulting from the deterministic addition xi + xj = xk followed by marginalization of xi and xj, where i ∈ α, j ∈ β, k ∉ α ∪ β, and γ := (α \ i) ∪ (β \ j) ∪ {k}. The joint PGF of ψγ is Fγ(s_{α\i}, s_{β\j}, sk) = Fα(s_{α\i}, sk) Fβ(s_{β\j}, sk).
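These identities are easy to verify symbolically. The following sketch, which uses SymPy (an assumption of ours; the paper provides no code), checks Propositions 5 and 6 on a Poisson factor:

```python
import sympy as sym

s, t, lam, rho = sym.symbols('s t lam rho', positive=True)
F = sym.exp(lam * (s - 1))                  # PGF of Poisson(lam)

# Proposition 6 (thin then marginalize): substitute s -> rho*s + 1 - rho.
F_thin = F.subs(s, rho * s + 1 - rho)
print(sym.simplify(F_thin))                 # should simplify to exp(lam*rho*(s-1))

# Proposition 5 (thin then observe a = 2):
# (s*rho)^2 / 2! times the 2nd derivative of F(t), evaluated at t = s*(1 - rho)
A = (s * rho) ** 2 / 2 * sym.diff(F.subs(s, t), t, 2).subs(t, s * (1 - rho))
print(sym.simplify(A.subs(s, 1)))           # (lam*rho)^2/2 * exp(-lam*rho): Poisson(lam*rho) pmf at 2
```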
3.2 The PGF-Forward Algorithm for Poisson HMMs
We now use the operations from the previous section to implement the forward algorithm for Poisson HMMs in the domain of PGFs. The forward algorithm is an instance of variable elimination, but in HMMs is more easily described using the following recurrence for the joint probability p(nk, y1:k):

    αk(nk) := p(nk, y1:k) = Σ_{n_{k−1}} α_{k−1}(n_{k−1}) p(nk | n_{k−1}) p(yk | nk).

We can compute the "forward messages" αk(nk) := p(nk, y1:k) in a sequential forward pass, assuming it is possible to enumerate all possible values of nk to store the messages and compute the recurrence. In our case, nk can take on an infinite number of values, so this is not possible.
Algorithm 1 FORWARD
1: γ1(z1) := I{z1 = 0}
2: for k = 1 to K do
3:   ψk(nk) := Σ_{zk, mk} γk(zk) p(mk) I{nk = zk + mk}
4:   αk(nk) := ψk(nk) p(yk | nk)
5:   if k < K then
6:     γ_{k+1}(z_{k+1}) := Σ_{nk} αk(nk) p(z_{k+1} | nk)
7:   end if
8: end for

Algorithm 2 PGF-FORWARD
1: Γ1(s) := 1
2: for k = 1 to K do
3:   Ψk(s) := Γk(s) · exp{λk(s − 1)}
4:   Ak(s) := (1/yk!) (sρk)^{yk} Ψk^{(yk)}(s(1 − ρk))
5:   if k < K then
6:     Γ_{k+1}(s) := Ak(δk s + 1 − δk)
7:   end if
8: end for
We proceed instead using generating functions. To apply the operations from the previous section, it is useful to instantiate explicit random variables mk and zk for the number of new arrivals in step k and survivors from step k − 1, respectively, to get the model (see Figure 3):

    mk ~ Poisson(λk),    zk = δ_{k−1} ∘ n_{k−1},    nk = mk + zk,    yk = ρk ∘ nk.

[Figure 3: Expanded model (graphical model over mk, zk, nk, yk).]

We can now expand the recurrence for αk(nk) as:

    αk(nk) = p(yk | nk) Σ_{mk=0}^∞ Σ_{zk=0}^∞ p(mk) p(nk | zk, mk) γk(zk),
    where γk(zk) := Σ_{n_{k−1}=0}^∞ α_{k−1}(n_{k−1}) p(zk | n_{k−1}).        (1)

We have introduced the intermediate factors γk(zk) and ψk(nk) := Σ_{zk, mk} γk(zk) p(mk) I{nk = zk + mk} to clarify the implementation.
FORWARD (Algorithm 1) is a dynamic programming algorithm based on this recurrence to compute the αk messages for all k. However, it cannot be implemented due to the infinite sums. PGF-FORWARD (Algorithm 2) instead performs the same operations in the domain of generating functions: Γk, Ψk, and Ak are the PGFs of γk, ψk, and αk, respectively. Each line in PGF-FORWARD implements the operation in the corresponding line of FORWARD using the operations given in Section 3.1. In Line 1, Γ1(s) = Σ_{z1} γ1(z1) s^{z1} = 1 is the PGF of γ1. Line 3 uses "Add then marginalize" (Proposition 7) combined with the fact that the Poisson PGF for mk is exp{λk(s − 1)}. Line 4 uses "Thin then observe" (Proposition 5), and Line 6 uses "Thin then marginalize" (Proposition 6).
Implementation and Complexity. The PGF - FORWARD algorithm as stated is symbolic. It remains
to see how it can be implemented efficiently. For this, we need to respresent and manipulate the PGFs
in the algorithm efficiently. We do so based on the following result:
Theorem 1. All PGFs in the PGF - FORWARD
algorithm have the form f (s) exp{as + b} where f is
P
a polynomial with degree at most Y = k yk .
Proof. We verify the invariant inductively. It is clearly satisfied in Line 1 of PGF-FORWARD (f(s) = 1, a = b = 0). We check that it is preserved for each operation within the loop. In Line 3, suppose Π_k(s) = f(s) exp{as + b}. Then Γ_k(s) = f(s) exp{(a + λ_k)s + (b − λ_k)} has the desired form.

In Line 4, assume that Γ_k(s) = f(s) exp{as + b}. Then one can verify by taking the y_k-th derivative of Γ_k(s) that A_k(s) is given by:

    A_k(s) = (aρ_k)^{y_k} · s^{y_k} · ( Σ_{ℓ=0}^{y_k} f^{(ℓ)}(s(1 − ρ_k)) / (a^ℓ ℓ! (y_k − ℓ)!) ) · exp{a(1 − ρ_k)s + b}

The scalar (aρ_k)^{y_k} can be combined with the polynomial coefficients or the scalar exp(b) in the exponential. The second term is a polynomial of degree y_k + deg(f). The third term has the form exp{a′s + b′}. Therefore, in Line 4, A_k(s) has the desired form, and the degree of the polynomial part of the representation increases by y_k.

In Line 6, suppose A_k(s) = f(s) exp{as + b}. Then Π_{k+1}(s) = g(s) exp{aδ_k s + b + a(1 − δ_k)}, where g(s) is the composition of f with the affine function δ_k s + 1 − δ_k, so g is a polynomial of the same degree as f. Therefore, Π_{k+1}(s) has the desired form.

We have shown that each PGF retains the desired form, and the degree of the polynomial part is initially zero and increases by y_k each time through the loop, so it is always bounded by Y = Σ_k y_k.
The important consequence of Theorem 1 is that we can represent and manipulate PGFs in PGF-FORWARD by storing at most Y coefficients for the polynomial f plus the scalars a and b. An efficient implementation based on this principle and the proof of the previous theorem is given in the supplementary material.

Theorem 2. The running time of PGF-FORWARD for Poisson HMMs is O(KY²).
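To make this concrete, here is a compact sketch of PGF-FORWARD under the representation of Theorem 1 (our own code, written from the proof above, not the authors' released implementation; it stores each PGF as (f, a, b) and omits the rescaling that a numerically stable implementation would add):

```python
# Each PGF is stored as (f, a, b), standing for F(s) = f(s) * exp(a*s + b),
# with f a polynomial of degree at most Y = sum(y) (Theorem 1).
from math import comb, factorial, log
import numpy as np
from numpy.polynomial import Polynomial

def pgf_forward(y, lam, delta, rho):
    """Exact log-likelihood log p(y_{1:K}) of a Poisson HMM.
    y: counts; lam: arrival rates; delta: survival probs (length K-1);
    rho: detection probs."""
    f, a, b = Polynomial([1.0]), 0.0, 0.0        # Pi_1(s) = 1
    for k in range(len(y)):
        a, b = a + lam[k], b - lam[k]            # Line 3: multiply by exp{lam(s-1)}
        # Line 4: A_k(s) = (s*rho)^y / y! * d^y/dt^y [f(t)e^{at+b}] at t = s(1-rho)
        yk, r = y[k], rho[k]
        g = Polynomial([0.0])
        for l in range(yk + 1):                  # Leibniz rule, term by term
            dl = f.deriv(l) if l > 0 else f
            dl = Polynomial(dl.coef * (1.0 - r) ** np.arange(len(dl.coef)))
            g = g + comb(yk, l) * a ** (yk - l) * dl
        coef = np.concatenate([np.zeros(yk), g.coef]) * r ** yk / factorial(yk)
        f, a = Polynomial(coef), a * (1.0 - r)
        if k < len(y) - 1:                       # Line 6: compose with delta*s + 1 - delta
            d = delta[k]
            b, a = b + a * (1.0 - d), a * d
            f = f(Polynomial([1.0 - d, d]))
    return log(f(1.0)) + a + b                   # p(y) = A_K(1) = f(1) e^{a+b}
```

For example, pgf_forward([2, 5, 3], [4.0, 4.0, 4.0], [0.5, 0.5], [0.6, 0.6, 0.6]) returns the exact log-likelihood with no truncation.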
3.3 Computing Marginals by Tail Elimination

PGF-FORWARD allows us to efficiently compute the likelihood in a Poisson HMM. We would also like to compute posterior marginals, the standard approach for which is the forward-backward algorithm [20]. A natural question is whether there is an efficient PGF implementation of the backward algorithm for Poisson HMMs. While we were able to derive this algorithm symbolically, the functional form of the PGFs is more complex and we do not know of a polynomial-time implementation. Instead, we adopt a variable elimination approach that is less efficient in terms of the number of operations performed on factors (O(K²) instead of O(K) to compute all posterior marginals) but with the significant advantage that those operations are efficient. The key principle is to always eliminate predecessors before successors in the Poisson HMM. This allows us to apply operations similar to those in PGF-FORWARD.

Algorithm 3 PGF-TAIL-ELIMINATE
Output: PGF of unnormalized marginal p(n_i, y_{1:K})
1: Φ_{i,i+1}(s, t) := A_i(s(δ_i t + 1 − δ_i))
2: for j = i + 1 to K do
3:   H_{ij}(s, t) := Φ_{ij}(s, t) · exp{λ_j(t − 1)}
4:   Λ_{ij}(s, t) := (1/y_j!) (tρ_j)^{y_j} · [∂^{y_j}/∂u^{y_j} H_{ij}(s, u)] |_{u = t(1 − ρ_j)}
5:   if j < K then
6:     Φ_{i,j+1}(s, t) := Λ_{ij}(s, δ_j t + 1 − δ_j)
7:   end if
8: end for
9: return Λ_{iK}(s, 1)
Define λ_{ij}(n_i, n_j) := p(n_i, n_j, y_{1:j}) for j > i. We can write a recurrence for λ_{ij} similar to Equation (1). For j > i + 1:

    λ_{ij}(n_i, n_j) = p(y_j | n_j) Σ_{m_j, z_j} p(m_j) p(n_j | z_j, m_j) Σ_{n_{j−1}} λ_{i,j−1}(n_i, n_{j−1}) p(z_j | n_{j−1})

where the inner sum defines φ_{ij}(n_i, z_j) and the double sum defines η_{ij}(n_i, n_j). We have again introduced intermediate factors, with probabilistic meanings φ_{ij}(n_i, z_j) = p(n_i, z_j, y_{1:j−1}) and η_{ij}(n_i, n_j) = p(n_i, n_j, y_{1:j−1}).

PGF-TAIL-ELIMINATE (Algorithm 3) is a PGF-domain dynamic programming algorithm based on this recurrence to compute the PGFs of the λ_{ij} factors for all j ∈ {i + 1, . . . , K}. The non-PGF version of the algorithm appears in the supplementary material for comparison. We use Λ_{ij}, Φ_{ij}, and H_{ij} to represent the joint PGFs of λ_{ij}, φ_{ij}, and η_{ij}, respectively. The algorithm can also be interpreted as variable elimination using the order z_{i+1}, n_{i+1}, . . . , z_K, n_K, after having already eliminated variables n_{1:i−1} and z_{1:i−1} in the forward algorithm, and therefore starting with the PGF of α_i(n_i). PGF-TAIL-ELIMINATE concludes by marginalizing n_K from Λ_{iK} to obtain the PGF of the unnormalized posterior marginal p(n_i, y_{1:K}). Each line of PGF-TAIL-ELIMINATE uses the same operations given in Section 3.1. Line 1 uses "Binomial thinning" (Proposition 3), Line 3 uses "Add then marginalize" (Proposition 7), Line 4 uses "Thin then observe" (Proposition 5) and Line 6 uses "Thin then marginalize" (Proposition 6).
Implementation and Complexity. The considerations for implementing PGF-TAIL-ELIMINATE are similar to those of PGF-FORWARD, with the details being slightly more complex due to the larger factors. We state the main results here and include proofs and implementation details in the supplementary material.

Theorem 3. All PGFs in the PGF-TAIL-ELIMINATE algorithm have the form f(s, t) exp{ast + bs + ct + d}, where f is a bivariate polynomial with maximum exponent at most Y = Σ_k y_k.
[Figure 4: Runtime of PGF-FORWARD and the truncated algorithm vs. Λρ. Left: log-log scale. Right: PGF-FORWARD only, linear scale.]

[Figure 5: Parameter estimation with PGF-FORWARD.]
Theorem 4. PGF-TAIL-ELIMINATE can be implemented to run in time O(Y³(log Y + K)), and the PGFs for all marginals can be computed in time O(KY³(log Y + K)).
3.4 Extracting Posterior Marginals and Moments

After computing the PGF of the posterior marginals, we wish to compute the actual probabilities and other quantities, such as the moments, of the posterior distribution. This can be done efficiently:

Theorem 5. The PGF of the unnormalized posterior marginal p(n_i, y_{1:K}) has the form F(s) = f(s) exp{as + b}, where f(s) = Σ_{j=0}^{m} c_j s^j is a polynomial of degree m ≤ Y. Given the parameters of the PGF, the posterior mean, the posterior variance, and an arbitrary entry of the posterior probability mass function can each be computed in O(m) = O(Y) time as follows, where Z = f(1) exp{a + b}:

(i) μ := E[n_i | y_{1:K}] = e^{a+b−log Z} Σ_{j=0}^{m} (a + j) c_j

(ii) σ² := Var(n_i | y_{1:K}) = μ − μ² + e^{a+b−log Z} Σ_{j=0}^{m} ((a + j)² − j) c_j

(iii) Pr(n_i = ℓ | y_{1:K}) = e^{b−log Z} Σ_{j=0}^{min{m,ℓ}} c_j a^{ℓ−j} / (ℓ − j)!
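Under the same (c_j, a, b) representation used in the sketch above, these formulas translate directly into code (our own illustration):

```python
# Posterior summaries from F(s) = (sum_j c[j] * s**j) * exp(a*s + b),
# the PGF of the unnormalized marginal p(n_i, y_{1:K}); Z = f(1) e^{a+b}.
import numpy as np
from math import exp, log, factorial

def posterior_mean_var(c, a, b):
    c = np.asarray(c, dtype=float)          # polynomial coefficients c_0..c_m
    j = np.arange(len(c))
    logZ = log(c.sum()) + a + b
    mean = exp(a + b - logZ) * np.sum((a + j) * c)                           # (i)
    var = mean - mean**2 + exp(a + b - logZ) * np.sum(((a + j)**2 - j) * c)  # (ii)
    return mean, var

def posterior_prob(c, a, b, ell):
    c = np.asarray(c, dtype=float)
    logZ = log(c.sum()) + a + b
    s = sum(c[j] * a**(ell - j) / factorial(ell - j)
            for j in range(min(len(c) - 1, ell) + 1))
    return exp(b - logZ) * s                                                 # (iii)
```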
4 Experiments
We conducted experiments to demonstrate that our method is faster than standard approximate
approaches for computing the likelihood in Poisson HMMs, that it leads to better parameter estimates,
and to demonstrate the computation of posterior marginals on an ecological data set.
Running time. We compared the runtimes of PGF-FORWARD and the truncated forward algorithm, a standard method for Poisson HMMs in the ecology domain [7]. The runtime of our algorithm depends on the magnitude of the observed counts. The runtime of the truncated forward is very sensitive to the setting of the truncation parameter Nmax: smaller values are faster, but may underestimate the likelihood. Selecting Nmax large enough to yield correct likelihoods but small enough to be fast is difficult [4, 6, 8]. We evaluated two strategies to select Nmax. The first is an oracle strategy, where we first searched for the smallest value of Nmax for which the error in the likelihood is at most 0.001, and then compared vs. the runtime for that value (excluding the search time). The second strategy, adapted from [8], is to set Nmax such that the maximum discarded tail probability of the Poisson prior over any n_k is less than 10⁻⁵.

To explore these issues we generated data from models with arrival rates λ = Λ · [0.0257, 0.1163, 0.2104, 0.1504, 0.0428] and survival rates δ = [0.2636, 0.2636, 0.2636, 0.2636], based on a model for insect populations [29]. We varied the overall population size parameter Λ ∈ {10, 20, . . . , 100, 125, 150, . . . , 500} and the detection probability ρ ∈ {0.05, 0.10, . . . , 1.00}. For each parameter setting, we generated 25 data sets and recorded the runtime of both methods.

Figure 4 shows that PGF-FORWARD is 2-3 orders of magnitude faster than even the oracle truncated algorithm. The runtime is plotted against Λρ ∝ E[Y], the primary parameter controlling the runtime of PGF-FORWARD. Empirically, the runtime depends linearly, instead of quadratically as predicted, on the magnitude of observed counts; this is likely due to the implementation, which is dominated by loops that execute O(Y) times, with much faster vectorized O(Y) operations within the loops.
Parameter Estimation. We now examine the impact of exact vs. truncated likelihood computations on parameter estimation in the N-mixture model [22]. A well-known feature of this and related models is that it is usually easy to estimate the product Λρ of the population size parameter Λ and detection probability ρ, which determines the mean of the observed counts, but, without enough data, it is difficult to estimate both parameters accurately, especially as ρ → 0 (e.g., see [8]). It was previously shown that truncating the likelihood can artificially suppress instances where the true maximum-likelihood estimates are infinite [8], a phenomenon that we also observed. We designed a different, simple, experiment to reveal another failure case of the truncated likelihood, which is avoided by our exact methods. In this case, the modeler is given observed counts over 50 time steps (K = 50) at 20 iid locations. She selects a heuristic fixed value of Nmax approximately 5 times the average observed count, based on her belief that the detection probability is not too small and this will capture most of the probability mass.

To evaluate the accuracy of parameter estimates obtained by numerically maximizing the truncated and exact likelihoods using this heuristic for Nmax, we generated true data from different values of Λ and ρ with Λρ = E[y] fixed to be equal to 10; the modeler does not know the true parameters, and in each case chooses Nmax = 5E[y] = 50. Figure 5 shows the results. As the true Λ increases close to and beyond Nmax, the truncated method cuts off significant portions of the probability mass and severely underestimates Λ. Estimation with the exact likelihood is noisier as Λ increases and ρ → 0, but not biased by truncation. While this result is not surprising, it reflects a realistic situation faced by the practitioner who must select this truncation parameter.
Marginals. We demonstrate the computation of posterior marginals and parameter estimation on an end-to-end case study to model the abundance of Northern Dusky Salamanders at 15 sites in the mid-Atlantic US using data from [28]. The data consists of 14 counts at each site, conducted in June and July over 7 years. We first fit a Poisson HMM by numerically maximizing the likelihood as computed by PGF-FORWARD. The model has three parameters total, which are shared across sites and time: arrival rate, survival rate, and detection probability. Arrivals are modeled as a homogeneous Poisson process, and survival is modeled by assuming individual lifetimes are exponentially distributed. The fitted parameters indicated an arrival rate of 0.32 individuals per month, a mean lifetime of 14.25 months, and a detection probability of 0.58.

[Figure 6: Posterior marginals for abundance of Northern Dusky Salamanders at 1 site. See text.]
Figure 6 shows the posterior marginals as computed by PGF-TAIL-ELIMINATE with the fitted parameters, which are useful both for model diagnostics and for population status assessments. The crosses show the posterior mean, and color intensity indicates the actual PMF. Overall, computing maximum likelihood estimates required 189 likelihood evaluations and thus 189 × 15 = 2835 calls to PGF-FORWARD, which took 24s total. Extracting posterior marginals at each site required 14 executions of the full PGF-TAIL-ELIMINATE routine (at all 14 latent variables), and took 1.6s per site. Extracting the marginal probabilities and posterior mean took 0.0012s per latent variable.
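As an illustration of this workflow (our own sketch reusing the pgf_forward function above; the parameter transforms, optimizer choice, and toy data are assumptions, not the authors' code), the three shared parameters can be fit by numerically maximizing the exact log-likelihood summed over sites:

```python
# Fit (arrival rate, survival prob, detection prob), shared across sites,
# by maximizing the sum of exact PGF-FORWARD log-likelihoods.
import numpy as np
from scipy.optimize import minimize

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_log_lik(theta, site_counts):
    lam, surv, det = np.exp(theta[0]), sigmoid(theta[1]), sigmoid(theta[2])
    total = 0.0
    for y in site_counts:                 # one vector of K counts per site
        K = len(y)
        total += pgf_forward(y, [lam] * K, [surv] * (K - 1), [det] * K)
    return -total

site_counts = [[3, 2, 4, 1, 2, 3, 5, 2, 1, 0, 2, 3, 4, 2]]   # toy data
res = minimize(neg_log_lik, x0=np.zeros(3), args=(site_counts,),
               method="Nelder-Mead")
```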
5 Conclusion
We have presented techniques for exact inference in countably infinite latent variable models using
probability generating functions. Although many aspects of the methodology are general, the current
method is limited to HMMs with Poisson latent variables, for which we can represent and manipulate
PGFs efficiently (cf. Theorems 1 and 3). Future work will focus on extending the methods to
graphical models with more complex structures and to support a larger set of distributions, for
example, including the negative binomial, geometric, and others. One path toward these goals is to
find a broader parametric representation for PGFs that can be manipulated efficiently.
Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant No. 1617533.
References
[1] M. A. Al-Osh and A. A. Alzaid. First-order integer-valued autoregressive (INAR(1)) process. Journal of Time Series Analysis, 8(3):261–275, 1987.
[2] D. Bickson and C. Guestrin. Inference with multivariate heavy-tails in linear models. In Advances in Neural Information Processing Systems (NIPS), 2010.
[3] G. Casella and R. Berger. Statistical Inference. Duxbury Advanced Series in Statistics and Decision Sciences. Thomson Learning, 2002. ISBN 9780534243128.
[4] R. Chandler. URL http://www.inside-r.org/packages/cran/unmarked/docs/pcountOpen.
[5] R. B. Chandler, J. A. Royle, and D. I. King. Inference about density and temporary emigration in unmarked populations. Ecology, 92(7):1429–1435, 2011.
[6] T. Couturier, M. Cheylan, A. Bertolero, G. Astruc, and A. Besnard. Estimating abundance and population trends when detection is low and highly variable: A comparison of three methods for the Hermann's tortoise. Journal of Wildlife Management, 77(3):454–462, 2013.
[7] D. Dail and L. Madsen. Models for estimating abundance from repeated counts of an open metapopulation. Biometrics, 67(2):577–87, 2011.
[8] E. B. Dennis, B. J. Morgan, and M. S. Ridout. Computational aspects of N-mixture models. Biometrics, 71(1):237–246, 2015.
[9] S. G. Eick, W. A. Massey, and W. Whitt. The physics of the M_t/G/∞ queue. Operations Research, 41(4):731–742, 1993.
[10] W. Feller. An Introduction to Probability Theory and Its Applications. Wiley, 1968.
[11] I. J. Fiske and R. B. Chandler. unmarked: An R package for fitting hierarchical models of wildlife occurrence and abundance. Journal of Statistical Software, 43:1–23, 2011.
[12] K. Gross, E. J. Kalendra, B. R. Hudgens, and N. M. Haddad. Robustness and uncertainty in estimates of butterfly abundance from transect counts. Population Ecology, 49(3):191–200, 2007.
[13] F. V. Jensen, S. L. Lauritzen, and K. G. Olesen. Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly, 1990.
[14] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference seen from the other side: The tractable features. In Advances in Neural Information Processing Systems (NIPS), pages 973–981, 2010.
[15] M. Kéry, R. M. Dorazio, L. Soldaat, A. Van Strien, A. Zuiderwijk, and J. A. Royle. Trend estimation in populations with imperfect detection. Journal of Applied Ecology, 46:1163–1172, 2009.
[16] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B (Methodological), pages 157–224, 1988.
[17] Y. Mao and F. R. Kschischang. On factor graphs and the Fourier transform. IEEE Transactions on Information Theory, 51(5):1635–1649, 2005.
[18] E. McKenzie. Ch. 16. Discrete variate time series. In Stochastic Processes: Modelling and Simulation, volume 21 of Handbook of Statistics, pages 573–606. Elsevier, 2003.
[19] J. Pearl. Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3):241–288, 1986.
[20] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, Feb 1989.
[21] S. I. Resnick. Adventures in Stochastic Processes. Springer Science & Business Media, 2013.
[22] J. A. Royle. N-mixture models for estimating population size from spatially replicated counts. Biometrics, 60(1):108–115, 2004.
[23] P. P. Shenoy and G. Shafer. Axioms for probability and belief-function propagation. In Uncertainty in Artificial Intelligence, 1990.
[24] C. H. Weiß. Thinning operations for modeling time series of counts: a survey. AStA Advances in Statistical Analysis, 92(3):319–341, 2008.
[25] K. Winner, G. Bernstein, and D. Sheldon. Inference in a partially observed queueing model with applications in ecology. In Proceedings of the 32nd International Conference on Machine Learning (ICML), volume 37, pages 2512–2520, 2015.
[26] Y. Xue, S. Ermon, R. Lebras, C. P. Gomes, and B. Selman. Variable elimination in Fourier domain. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pages 1–10, 2016.
[27] N. L. Zhang and D. Poole. A simple approach to Bayesian network computations. In Proc. of the Tenth Canadian Conference on Artificial Intelligence, 1994.
[28] E. F. Zipkin, J. T. Thorson, K. See, H. J. Lynch, E. H. C. Grant, Y. Kanno, R. B. Chandler, B. H. Letcher, and J. A. Royle. Modeling structured population dynamics using data from unmarked individuals. Ecology, 95(1):22–29, 2014.
[29] C. Zonneveld. Estimating death rates from transect counts. Ecological Entomology, 16(1):115–121, 1991.
| 6587 |
6,177 | 6,588 | On Graph Reconstruction via Empirical Risk
Minimization: Fast Learning Rates and Scalability
Guillaume Papa, St?phan Cl?men?on
LTCI, CNRS, T?l?com ParisTech, Universit? Paris-Saclay
75013, Paris, France
[email protected]
Aur?lien Bellet
INRIA
59650 Villeneuve d?Ascq, France
[email protected]
Abstract

The problem of predicting connections between a set of data points finds many applications, in systems biology and social network analysis among others. This paper focuses on the graph reconstruction problem, where the prediction rule is obtained by minimizing the average error over all n(n − 1)/2 possible pairs of the n nodes of a training graph. Our first contribution is to derive learning rates of order O_P(log n/n) for this problem, significantly improving upon the slow rates of order O_P(1/√n) established in the seminal work of Biau and Bleakley (2006). Strikingly, these fast rates are universal, in contrast to similar results known for other statistical learning problems (e.g., classification, density level set estimation, ranking, clustering) which require strong assumptions on the distribution of the data. Motivated by applications to large graphs, our second contribution deals with the computational complexity of graph reconstruction. Specifically, we investigate to which extent the learning rates can be preserved when replacing the empirical reconstruction risk by a computationally cheaper Monte-Carlo version, obtained by sampling with replacement B ≪ n² pairs of nodes. Finally, we illustrate our theoretical results by numerical experiments on synthetic and real graphs.
1 Introduction

Although statistical learning theory mainly focuses on establishing universal rate bounds (i.e., which hold for any distribution of the data) for the accuracy of a decision rule based on training observations, refined concentration inequalities have recently helped understanding conditions on the data distribution under which learning paradigms such as Empirical Risk Minimization (ERM) lead to faster rates. In binary classification, i.e., the problem of learning to predict a random binary label Y ∈ {−1, +1} from an input random variable X based on independent copies (X₁, Y₁), . . . , (Xₙ, Yₙ) of the pair (X, Y), rates faster than 1/√n are achieved when little mass in the vicinity of 1/2 is assigned by the distribution of the random variable η(X) = P{Y = +1 | X}. This condition and its generalizations are referred to as the Mammen-Tsybakov noise conditions (see Mammen and Tsybakov, 1999; Tsybakov, 2004; Massart and Nédélec, 2006). It has been shown that a similar phenomenon occurs for various other statistical learning problems. Indeed, specific conditions under which fast rate results hold have been exhibited for density level set estimation (Rigollet and Vert, 2009), (bipartite) ranking (Clémençon et al., 2008; Clémençon and Robbiano, 2011; Agarwal, 2014), clustering (Antos et al., 2005; Clémençon, 2014) and composite hypothesis testing (Clémençon and Vayatis, 2010).

In this paper, we consider the supervised learning problem on graphs referred to as graph reconstruction, rigorously formulated by Biau and Bleakley (2006). The objective of graph reconstruction is to predict the possible occurrence of connections between a set of objects/individuals known to form the nodes of an undirected graph. Precisely, each node is described by a random vector X which defines
a form of conditional preferential attachment: one predicts whether two nodes are connected based on their features X and X′. This statistical learning problem is motivated by a variety of applications such as systems biology (e.g., inferring protein-protein interactions or metabolic networks, see Jansen et al., 2003; Kanehisa, 2001) and social network analysis (e.g., predicting future connections between users, see Liben-Nowell and Kleinberg, 2003). It has recently been the subject of a good deal of attention in the machine learning literature (see Vert and Yamanishi, 2004; Biau and Bleakley, 2006; Shaw et al., 2011), and is also known as supervised link prediction (Lichtenwalter et al., 2010; Cukierski et al., 2011). The learning task is formulated as the minimization of a reconstruction risk, whose natural empirical version is the average prediction error over the n(n − 1)/2 pairs of nodes in a training graph of size n. Under standard complexity assumptions on the set of candidate prediction rules, excess risk bounds of the order O_P(1/√n) for the empirical risk minimizers have been established by Biau and Bleakley (2006), based on a representation of the objective functional very similar to the first Hoeffding decomposition for second-order U-statistics (see Hoeffding, 1948). However, Biau & Bleakley ignored the computational complexity of finding an empirical risk minimizer, which scales at least as O(n²) since the empirical graph reconstruction risk involves summing over n(n − 1)/2 terms. This makes the approach impractical when dealing with large graphs commonly found in many applications.

Building up on the above work, our contributions to statistical graph reconstruction are two-fold:

Universal fast rates. We prove that a fast rate of order O_P(log n/n) is always achieved by empirical reconstruction risk minimizers, in the absence of any restrictive condition imposed on the data distribution. This is much faster than the O_P(1/√n) rate established by Biau and Bleakley (2006). Our analysis is based on a different decomposition of the excess of reconstruction risk of any decision rule candidate, involving the second Hoeffding representation of a U-statistic approximating it, as well as appropriate maximal/concentration inequalities.

Scaling-up ERM. We investigate the performance of minimizers of computationally cheaper Monte-Carlo estimates of the empirical reconstruction risk, built by averaging over B ≪ n² pairs of vertices drawn with replacement. The rate bounds we obtain highlight that B plays the role of a tuning parameter to achieve an effective trade-off between statistical accuracy and computational cost. Numerical results based on simulated graphs and real-world networks are presented in order to support these theoretical findings.

The paper is organized as follows. In Section 2, we present the probabilistic setting for graph reconstruction and recall state-of-the-art results. Section 3 provides our fast rate bound analysis, while Section 4 deals with the problem of scaling-up reconstruction risk minimization to large graphs. Numerical experiments are displayed in Section 5, and a few concluding remarks are collected in Section 6. The technical proofs can be found in the Supplementary Material, along with some additional remarks and results.
2 Background and Preliminaries

We start by describing at length the probabilistic framework we consider for statistical inference on graphs, as introduced by Biau and Bleakley (2006). We then briefly recall the related theoretical results documented in the literature.

2.1 A Probabilistic Setup for Preferential Attachment

In this paper, G = (V, E) is an undirected random graph with a set V = {1, . . . , n} of n ≥ 2 vertices and a set E = {e_{i,j} : 1 ≤ i ≠ j ≤ n} ⊂ {0, 1}^{n(n−1)} describing its edges: for all i ≠ j, we have e_{i,j} = e_{j,i} = +1 if the vertices i and j are connected by an edge and e_{i,j} = e_{j,i} = 0 otherwise. We assume that G is a Bernoulli graph, i.e. the random variables e_{i,j}, 1 ≤ i < j ≤ n, are independent labels drawn from a Bernoulli distribution Ber(p) with parameter p = P{e_{i,j} = +1}, the probability that two vertices of G are connected by an edge. The degree of each vertex is thus distributed as a binomial with parameters n and p, which can be classically approximated by a Poisson distribution of parameter λ > 0 in the limit of large n, when np → λ.

Whereas the marginal distribution of the graph G is that of a Bernoulli graph (also sometimes abusively referred to as a random graph), a form of conditional preferential attachment is also specified in the framework considered here. Precisely, we assume that, for all i ∈ V, a continuous r.v. X_i, taking its values in a separable Banach space X, describes some features related to vertex i. The X_i's are i.i.d. with common distribution μ(dx) and, for any i ≠ j, the random pair (X_i, X_j) models some information useful for predicting the occurrence of an edge connecting the vertices i and j. Conditioned upon the features (X₁, . . . , Xₙ), any binary variables e_{i,j} and e_{k,l} are independent only if {i, j} ∩ {k, l} = ∅. The conditional distribution of e_{i,j}, i ≠ j, is supposed to depend on (X_i, X_j) solely, described by the posterior preferential attachment probability:

    η(X_i, X_j) = P{e_{i,j} = +1 | (X_i, X_j)}.   (1)

For instance, ∀(x₁, x₂) ∈ X², η(x₁, x₂) can be a certain function of a specific distance or similarity measure between x₁ and x₂, as in the synthetic graphs described in Section 5.

The conditional average degree of the vertex i ∈ V given X_i (respectively, given (X₁, . . . , Xₙ)) is thus (n − 1) ∫_{x∈X} η(X_i, x) μ(dx) (respectively, Σ_{j≠i} η(X_i, X_j)). Observe incidentally that, equipped with these notations, p = ∫_{(x,x′)∈X²} η(x, x′) μ(dx) μ(dx′). Hence, the 3-tuples (X_i, X_j, e_{i,j}), 1 ≤ i < j ≤ n, are non-i.i.d. copies of a generic random vector (X₁, X₂, e_{1,2}) whose distribution L is given by the tensorial product μ(dx₁) ⊗ μ(dx₂) ⊗ Ber(η(x₁, x₂)), which is fully described by the pair (μ, η). Observe also that the function η is symmetric by construction: ∀(x₁, x₂) ∈ X², η(x₁, x₂) = η(x₂, x₁).

In this framework, the learning problem introduced by Biau and Bleakley (2006), referred to as graph reconstruction, consists in building a symmetric reconstruction rule g : X² → {0, 1}, from a training graph G, with nearly minimum reconstruction risk

    R(g) = P{g(X₁, X₂) ≠ e_{1,2}},   (2)

thus achieving a comparable performance to that of the Bayes rule g*(x₁, x₂) = I{η(x₁, x₂) > 1/2}, whose risk is given by R* = E[min{η(X₁, X₂), 1 − η(X₁, X₂)}] = inf_g R(g).

Remark 1 (EXTENDED FRAMEWORK) The results established in this paper can be straightforwardly extended to a more general framework, where L = L(n) may depend on the number n of vertices. This allows to consider a general class of models, accounting for possible accelerating properties exhibited by various non scale-free real networks (Mattick and Gagen, 2005). An asymptotic study can be then carried out with the additional assumption that, as n → +∞, L(n) converges in distribution to a probability measure L(∞) on X × X × {0, 1}, see (Biau and Bleakley, 2006). For simplicity, we restrict our study to the stationary case, i.e. L(n) = L for all n ≥ 2.
2.2 Related Results on Empirical Risk Minimization

A paradigmatic approach in statistical learning, referred to as Empirical Risk Minimization (ERM), consists in replacing (2) by its empirical version based on the labeled sample D_n = {(X_i, X_j, e_{i,j}) : 1 ≤ i < j ≤ n} related to G:¹

    R̂_n(g) = (2/(n(n − 1))) Σ_{1≤i<j≤n} I{g(X_i, X_j) ≠ e_{i,j}}.   (3)

An empirical risk minimizer ĝ_n is a solution of the optimization problem min_{g∈G} R̂_n(g), where G is a class of reconstruction rules of controlled complexity, hopefully rich enough to yield a small bias inf_{g∈G} R(g) − R*. The performance of ĝ_n is measured by its excess risk R(ĝ_n) − inf_{g∈G} R(g), which can be bounded if we can derive probability inequalities for the maximal deviation

    sup_{g∈G} |R̂_n(g) − R(g)|.   (4)

In the framework of classification, the flagship problem of statistical learning theory, the empirical risk is of the form of an average of i.i.d. r.v.'s, so that results pertaining to empirical process theory can be readily used to obtain bounds for the performance of empirical error minimization. Unfortunately, the empirical risk (3) is a sum of dependent variables. Following in the footsteps of Clémençon et al.

¹ A classical Lehmann-Scheffé argument shows that (3) is the estimator of (2) with smallest variance among all unbiased estimators.
(2008), the work of Biau and Bleakley (2006) circumvents this difficulty by means of a representation of R̂_n(g) as an average of sums of i.i.d. r.v.'s, namely

    (1/n!) Σ_{σ∈S_n} (1/⌊n/2⌋) Σ_{i=1}^{⌊n/2⌋} I{g(X_{σ(i)}, X_{σ(i+⌊n/2⌋)}) ≠ e_{σ(i),σ(i+⌊n/2⌋)}},

where the sum is taken over all permutations σ of S_n, the symmetric group of order n, and ⌊u⌋ denotes the integer part of any u ∈ ℝ. Very similar to the first Hoeffding decomposition for U-statistics (see Lee, 1990), this representation reduces the first order analysis of the concentration properties of (4) to the study of a basic empirical process (see Biau and Bleakley, 2006, Lemma 3.1). Biau and Bleakley (2006) thereby establish rate bounds of the order O_P(1/√n) for the excess of reconstruction risk of ĝ_n under appropriate complexity assumptions (namely, G is of finite VC-dimension). Note incidentally that (3) is a U-statistic only when the variable η(X₁, X₂) is almost-surely constant (see Janson and Nowicki, 1991, for an asymptotic study of graph reconstruction in this restrictive context).

Remark 2 (ALTERNATIVE LOSS FUNCTIONS) For simplicity, all our results are stated for the case of the 0-1 loss I{g(X_i, X_j) ≠ e_{i,j}}, but they straightforwardly extend to more practical alternatives such as the convex surrogate and cost-sensitive variants used in our numerical experiments. See the Supplementary Material for more details.
3 Empirical Reconstruction is Always Fast!

In this section, we show that the rate bounds established by Biau and Bleakley (2006) can be largely improved without any additional assumptions. Precisely, we prove that fast learning rates of order O_P(log n/n) are always attained by the minimizers of the empirical reconstruction risk (3), as revealed by the following theorem.

Theorem 1 (FAST RATES) Let ĝ_n be any minimizer of the empirical reconstruction risk (3) over a class G of finite VC-dimension V < +∞. For all δ ∈ (0, 1), we have w.p. at least 1 − δ: ∀n ≥ 2,

    R(ĝ_n) − R* ≤ 2(inf_{g∈G} R(g) − R*) + C · V log(n/δ)/n,

where C < +∞ is a universal constant.²

² Note that, throughout the paper, the constant C is not necessarily the same at each appearance.
Remark 3 (ON THE BIAS TERM) Apart from its remarkable universality, Theorem 1 takes the same form as in the case of empirical minimization of U-statistics (Clémençon et al., 2008, Corollary 6), with the same constant 2 in front of the bias term inf_{g∈G} R(g) − R*. As can be seen from the proof, this constant has no special meaning and can be replaced by any constant strictly larger than 1 at the cost of increasing the constant C. Note that the O_P(1/√n) rate obtained by Biau and Bleakley (2006) has a factor 1 in front of the bias term. Therefore, Theorem 1 provides a significant improvement unless the bias overly dominates the second term of the bound (i.e., the complexity of G is too small).

Remark 4 (ON COMPLEXITY ASSUMPTIONS) We point out that a similar result can be established under weaker complexity assumptions involving Rademacher averages (refer to the Supplementary Material for more details). As may be seen by carefully examining the proof of Theorem 1, this would require to use the moment inequality for degenerate U-processes stated in (Clémençon et al., 2008, Theorem 11) instead of that proved by Arcones and Giné (1994).
In the rest of this section, we outline the main ideas used to obtain this result (the detailed proofs can be found in the Supplementary Material). We rely on some arguments used in the fast rate analysis for empirical minimization of U-statistics (Clémençon et al., 2008), although these results only hold true under restrictive distributional assumptions. Whereas the quantity (3) is not a U-statistic, one may decompose the difference between the excess of reconstruction risk of any candidate rule g ∈ G and its empirical counterpart as the sum of its conditional expectation given the X_i's, which is a U-statistic, plus a residual term. In order to explain the main argument underlying the present analysis, additional notation is required. Set

    H_g(x₁, x₂, e_{1,2}) = I{g(x₁, x₂) ≠ e_{1,2}}   and   q_g(x₁, x₂, e_{1,2}) = H_g(x₁, x₂, e_{1,2}) − H_{g*}(x₁, x₂, e_{1,2})

for any (x₁, x₂, e_{1,2}) ∈ X × X × {0, 1}. Denoting by Λ(g) = R(g) − R* = E[q_g(X₁, X₂, e_{1,2})] the excess reconstruction risk with respect to the Bayes rule, its empirical estimate is given by

    Λ_n(g) = R̂_n(g) − R̂_n(g*) = (2/(n(n − 1))) Σ_{1≤i<j≤n} q_g(X_i, X_j, e_{i,j}).
For all g ∈ G, one may write:

    Λ_n(g) − Λ(g) = U_n(g) + Ŵ_n(g),   (5)

where

    U_n(g) = E[Λ_n(g) − Λ(g) | X₁, . . . , Xₙ] = (2/(n(n − 1))) Σ_{1≤i<j≤n} q̃_g(X_i, X_j) − Λ(g)

is a U-statistic of degree 2 with symmetric kernel q̃_g(X₁, X₂) − Λ(g), where we denote q̃_g(X₁, X₂) = E[q_g(X₁, X₂, e_{1,2}) | X₁, X₂], and Ŵ_n(g) = (2/(n(n − 1))) Σ_{i<j} {q_g(X_i, X_j, e_{i,j}) − q̃_g(X_i, X_j)}.

Equipped with this notation, we can now sketch the main steps of the proof of the fast rate bound stated in Theorem 1. As shown in the Supplementary Material, it is based on Eq. (5) combined with two intermediary results, each providing a control of one of the terms involved in it. The second order analysis carried out by Clémençon et al. (2008) shows that the small variance property of U-statistics may yield fast learning rates for empirical risk minimizers when U-statistics are used to estimate the risk, under a certain "low-noise" condition (see Assumption 4 therein). One of our main findings is that this condition is always fulfilled for the specific U-statistic U_n(g) involved in the decomposition (5) of the excess of reconstruction risk of any rule candidate g, as shown by the following lemma.

Lemma 2 (VARIANCE CONTROL) For any distribution L and any reconstruction rule g, we have

    Var(E[q_g(X₁, X₂, e_{1,2}) | X₁]) ≤ Λ(g).

The fundamental reason for the universal character of this result lies in the fact that the empirical reconstruction risk is not an average over all pairs (i.e., a U-statistic of order 2) but an average over randomly selected pairs (random selection being ruled by the function η). The resulting smoothness is the key ingredient allowing us to establish the desired property.
Empirical reconstruction risk minimization over a class G being equivalent to minimization of Λ_n(g) − Λ(g), the result below, combined with (5), proves that it also boils down to minimizing U_n(g) under appropriate conditions on G, so that the fast rate analysis of Clémençon et al. (2008) can be extended to graph reconstruction.

Lemma 3 (UNIFORM APPROXIMATION) Under the same assumptions as in Theorem 1, for any δ ∈ (0, 1), we have with probability larger than 1 − δ: ∀n ≥ 2,

    sup_{g∈G} Ŵ_n(g) ≤ C · V log(n/δ)/n,

where C < +∞ is a universal constant.

The proof relies on classical symmetrization and randomization tricks combined with the decoupling method, in order to cope with the dependence structure of the variables and apply maximal/concentration inequalities for sums of independent random variables (see De la Peña and Giné, 1999).

Based on the above results, Theorem 1 can then be derived by relying on the second Hoeffding decomposition (see Hoeffding, 1948). This allows us to write U_n(g) as a leading term taking the form of a sum of i.i.d. r.v.'s with variance 4Var(E[q_g(X₁, X₂, e_{1,2}) | X₁]), plus a degenerate U-statistic (i.e., a U-statistic of symmetric kernel h(x₁, x₂) such that E[h(x₁, X₂)] = 0 for all x₁ ∈ X). The latter can be shown to be of order O_P(1/n) uniformly over the class G by means of concentration results for degenerate U-processes.
We conclude this section by observing that, instead of estimating the reconstruction risk by (3), one could split the training dataset into two halves and consider the unbiased estimate of (2) given by

    (1/⌊n/2⌋) Σ_{i=1}^{⌊n/2⌋} I{g(X_i, X_{i+⌊n/2⌋}) ≠ e_{i,i+⌊n/2⌋}}.   (6)

The analysis of the generalization ability of minimizers of this empirical risk functional is simpler, insofar as only independent r.v.'s are involved in the sum (6). However, this estimate does not share the reduced variance property of (3) and although one could show that rate bounds of the same order as those stated in Theorem 1 may be attained by means of results pertaining to ERM theory for binary classification (see e.g. Section 5 in Boucheron et al., 2005), this would require a very restrictive assumption on the distribution L, namely to suppose that the posterior preferential attachment probability η stays bounded away from 1/2 with probability one (cf. Massart and Nédélec, 2006). This is illustrated in the Supplementary Material.
4 Scaling-up Empirical Risk Minimization

The results of the previous section, as well as those of Biau and Bleakley (2006), characterize the excess risk achieved by minimizers of the empirical reconstruction risk R̂_n(g) but do not consider the computational complexity of finding such minimizers. For large training graphs, the complexity of merely computing R̂_n(g) is prohibitive as the number of terms involved in the summation is O(n²). In this section, we introduce a sampling-based approach to build approximations of the reconstruction risk with much fewer terms than O(n²), so as to scale-up risk minimization to large graphs.
approximation obtained by sampling pairs of vertices (and not vertices) with replacement. Formally,
we define the incomplete graph reconstruction risk based on B ? 1 pairs of vertices as
X
e B (g) = 1
R
I {g(Xi , Xj ) 6= ei,j } ,
(7)
B
(i,j)?PB
where PB is a set of cardinality B built by sampling with replacement in the set ?n = {(i, j) :
1 ? i < j ? n} of all pairs of vertices of the training graph G. For any b ? {1, . . . , B}
and all (i, j) ? ?n , denote by b (i, j) the variable indicating whether the pair (i, j) has been
picked at the b-th draw (b (i, j) = +1) or not P
(b (i, j) = +0). The (multinomial) random vectors
b = (b (i, j))(i,j)??n are i.i.d. (notice that (i,j)??n b (i, j) = +1 for 1 ? b ? B) and the
incomplete risk can be then rewritten as
B
X
e B (g) = 1
R
B
X
b (i, j) ? I {g(Xi , Xj ) 6= ei,j } .
(8)
b=1 (i,j)??n
Observe that the statistic (7) is an unbiased estimate of the true risk (2) and that, given the Xi ?s,
its conditional expectation is equal to (3). Considering (7) with B = o(n2 ) as our empirical risk
estimate significantly reduces the computational cost, at the price of a slightly increased variance:
e B (g) = Var R
b n (g) + 1 Var R
b 1 (g) ? Var R
b n (g) ,
Var R
B
for any reconstruction rule g. Note in particular that the above variance
? is in general much smaller
than that of the complete reconstruction risk based on a subsample of b Bc vertices drawn at random
(thus involving O(B) pairs as well). We refer to the Supplementary Material for more details.
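For concreteness, a small numerical sketch of the two estimators (our own code; the rule g and all names are illustrative) makes the computational contrast explicit: the complete risk touches all n(n − 1)/2 pairs, the incomplete one only B sampled indices:

```python
import numpy as np

rng = np.random.default_rng(0)

def complete_risk(g, X, E):
    i, j = np.triu_indices(len(X), k=1)           # all n(n-1)/2 pairs
    return np.mean(g(X[i], X[j]) != E[i, j])

def incomplete_risk(g, X, E, B):
    i, j = np.triu_indices(len(X), k=1)
    idx = rng.integers(0, len(i), size=B)         # B draws with replacement
    return np.mean(g(X[i[idx]], X[j[idx]]) != E[i[idx], j[idx]])

# Toy graph: connect nodes at Euclidean distance <= 0.5; the candidate
# rule uses a slightly wrong threshold so that both risks are non-trivial.
n, q = 200, 2
X = rng.random((n, q))
E = (np.linalg.norm(X[:, None] - X[None, :], axis=-1) <= 0.5).astype(int)
g = lambda x1, x2: (np.linalg.norm(x1 - x2, axis=-1) <= 0.4).astype(int)
print(complete_risk(g, X, E), incomplete_risk(g, X, E, B=2000))
```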
We are thus interested in characterizing the performance of solutions g̃_B to the computationally simpler problem min_{g∈G} R̃_B(g). The following theorem shows that, when the class G is of finite VC-dimension, the concentration properties of the incomplete reconstruction risk process {R̃_B(g)}_{g∈G} can be deduced from those of the complete version {R̂_n(g)}_{g∈G}.

Theorem 4 (UNIFORM DEVIATIONS) Suppose that the class G is of finite VC-dimension V < +∞. For all δ > 0, n ≥ 1 and B ≥ 1, we have with probability at least 1 − δ:

    sup_{g∈G} |R̃_B(g) − R̂_n(g)| ≤ √( (log 2 + V log((1 + n(n − 1)/2)/δ)) / (2B) ).

The finite VC-dimension hypothesis can be relaxed and a bound of the same order can be proved to hold true under weaker complexity assumptions involving Rademacher averages (see Remark 4).
[(a) True graph   (b) Graph with scrambled features   (c) Reconstructed graph]

[Figure 1: Illustrative experiment with n = 50, q = 2, τ = 0.27 and p = 0. Figure 1(a) shows the training graph, where the position of each node is given by its 2D feature vector. Figure 1(b) depicts the same graph after applying a random transformation R to the features. On this graph, the Euclidean distance with optimal threshold achieves a reconstruction error of 0.1311. In contrast, the reconstruction rule learned from B = 100 pairs of nodes (out of 1225 possible pairs) successfully inverts R and accurately recovers the original graph (Figure 1(c)). Its reconstruction error is 0.008 on the training graph and 0.009 on a held-out graph generated with the same parameters.]
Remarkably, with only B = O(n) pairs, the rate in Theorem 4 is of the same order (up to a log factor) as that obtained by Biau and Bleakley (2006) for the maximal deviation sup_{g∈G} |R̂_n(g) − R(g)| related to the complete reconstruction risk R̂_n(g) with O(n²) pairs. From Theorem 4, one can get a learning rate of order O_P(1/√n) for the minimizer of the incomplete risk involving only O(n) pairs.

Unfortunately, such an analysis does not exploit the relationship between conditional variance and expectation formulated in Lemma 2, and is thus not sufficient to show that reconstruction rules minimizing the incomplete risk (7) can achieve learning rates comparable to those stated in Theorem 1. In contrast, the next theorem provides sharper statistical guarantees. We refer to the Supplementary Material for the proof.
Theorem 5 Let g̃_B be any minimizer of the incomplete reconstruction risk (7) over a class G of finite VC-dimension V < +∞. Then, for all δ ∈ (0, 1), we have with probability at least 1 − δ: ∀n ≥ 2,

    R(g̃_B) − R* ≤ 2(inf_{g∈G} R(g) − R*) + C · V log(n/δ) · (1/n + 1/√B),

where C < +∞ is a universal constant.

This bound reveals that the number B ≥ 1 of pairs of vertices plays the role of a tuning parameter, ruling a trade-off between statistical accuracy (taking B(n) = O(n²) fully preserves the convergence rate) and computational complexity. This will be confirmed numerically in Section 5.

The above results can be extended to other sampling techniques, such as Bernoulli sampling and sampling without replacement. We refer to the Supplementary Material for details.
5 Numerical Experiments

In this section, we present some numerical experiments on large-scale graph reconstruction to illustrate the practical relevance of the idea of incomplete risk introduced in Section 4. Following a well-established line of work (Vert and Yamanishi, 2004; Vert et al., 2007; Shaw et al., 2011), we formulate graph reconstruction as a distance metric learning problem (Bellet et al., 2015): we learn a distance function such that we predict an edge between two nodes if the distance between their features is smaller than some threshold. Assuming X ⊆ ℝ^q, let S_q^+ be the cone of symmetric PSD q × q real-valued matrices. The reconstruction rules we consider are parameterized by M ∈ S_q^+ and have the form

    g_M(x₁, x₂) = I{D_M(x₁, x₂) ≤ 1},

where D_M(x₁, x₂) = (x₁ − x₂)ᵀ M (x₁ − x₂) is a (pseudo) distance equivalent to the Euclidean distance after a linear transformation L ∈ ℝ^{q×q}, with M = LᵀL. Note that g_M(x₁, x₂) can be seen as a linear separator operating on the pairwise representation vec((x₁ − x₂)(x₁ − x₂)ᵀ) ∈ ℝ^{q²}, hence the class of learning rules we consider has VC-dimension bounded by q² + 1.
Table 1: Results (averaged over 10 runs) on synthetic graph with n = 1,000,000, q = 100, p = 0.05.

                             B = 0.01n   B = 0.1n   B = n    B = 5n    B = 10n
    Reconstruction error     0.2272      0.1543     0.1276   0.1185    0.1159
    Relative improvement     -           32%        17%      7%        2%
    Training time (seconds)  21          398        5,705    20,815    42,574
We define the reconstruction risk as:

    Ŝ_n(g_M) = (2/(n(n − 1))) Σ_{i<j} [(2e_{i,j} − 1)(D_M(X_i, X_j) − 1)]₊,

where [·]₊ = max(0, ·) is a convex surrogate for the 0-1 loss. In earlier work, ERM has only been applied to graphs with at most a few hundred or thousand nodes due to scalability issues. Thanks to our results, we are able to scale it up to much larger networks by sampling pairs of nodes and solving the resulting simpler optimization problem.
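A minimal sketch of this sampled ERM (our own code, not the authors' released implementation): stochastic subgradient descent on the hinge surrogate over pairs drawn with replacement, parameterizing M = LᵀL so that M stays PSD by construction:

```python
import numpy as np

def learn_metric(X, E, B, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, q = X.shape
    L = np.eye(q)                        # M = L^T L is PSD throughout
    for _ in range(B):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        d = X[i] - X[j]
        y = 2 * E[i, j] - 1              # +1 for an edge, -1 otherwise
        Ld = L @ d                       # D_M(x_i, x_j) = ||L d||^2
        if y * (Ld @ Ld - 1.0) > 0.0:    # hinge term is active
            # subgradient of [y * (d^T L^T L d - 1)]_+ with respect to L
            L -= step * y * 2.0 * np.outer(Ld, d)
    return L.T @ L

# prediction: connect (i, j) iff (X[i]-X[j])^T M (X[i]-X[j]) <= 1
```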
We create a synthetic graph with n nodes as follows. Each node i has a feature vector X_i^{true} ∈ ℝ^q with coordinates sampled uniformly over [0, 1]. We then add an edge between nodes that are at Euclidean distance smaller than some threshold τ, and introduce some noise by flipping the value of e_{i,j} for each pair of nodes (i, j) independently with probability p. We then apply a random linear transformation R ∈ ℝ^{q×q} to each node to generate a "scrambled" version X_i = R X_i^{true} of the nodes' features. The learning algorithm is only allowed to observe the scrambled features and must find a rule which accurately recovers the graph by solving the ERM problem above. Note that, denoting D_{ij} = ‖R⁻¹X_i − R⁻¹X_j‖₂, the posterior preferential attachment probability is given by

    η(X_i, X_j) = (1 − p) · I{D_{ij} ≤ τ} + p · I{D_{ij} > τ}.

The process is illustrated in Figure 1.
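For reference, this generation procedure can be sketched as follows (our own code; it materializes the full adjacency matrix, so unlike the experiment below it is only meant for small n):

```python
import numpy as np

def make_scrambled_graph(n, q, tau, p, seed=0):
    rng = np.random.default_rng(seed)
    X_true = rng.random((n, q))                              # latent features
    D = np.linalg.norm(X_true[:, None] - X_true[None, :], axis=-1)
    E = (D <= tau).astype(int)                               # threshold graph
    np.fill_diagonal(E, 0)
    flip = np.triu(rng.random((n, n)) < p, 1)                # label noise
    flip = flip | flip.T                                     # keep E symmetric
    E = np.where(flip, 1 - E, E)
    R = rng.standard_normal((q, q))                          # scrambling map
    return X_true @ R.T, E                                   # observe X_i = R X_i^true
```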
Using this procedure, we generate a training graph with n = 1,000,000 and q = 100. We set the threshold τ such that there is an edge between about 20% of the node pairs, and set p = 0.05. We also generate a test graph using the same parameters. We then sample uniformly with replacement B pairs of nodes from the training graph to construct our incomplete reconstruction risk. The reconstruction error of the resulting empirical risk minimizer is estimated on 1,000,000 pairs of nodes drawn from the test graph. Table 1 shows the test error (averaged over 10 runs) as well as the training time for several values of B. Consistently with our theoretical findings, B implements a trade-off between statistical accuracy and computational cost. For this dataset, sampling B = 5,000,000 pairs (out of ~10^12 possible pairs!) is sufficient to find an accurate reconstruction rule. A larger B would result in increased training time for negligible gains in reconstruction error.

Additional results. In the Supplementary Material, we present comparisons to a node sampling scheme and to the "dataset splitting" strategy given by (6), as well as experiments on a real network.
6 Conclusion

In this paper, we proved that the learning rates for ERM in the graph reconstruction problem are always of order O_P(log n/n). We also showed how sampling schemes applied to the population of edges (not nodes) can be used to scale-up such ERM-based predictive methods to very large graphs by means of a detailed rate bound analysis, further supported by empirical results. A first possible extension of this work would naturally consist in considering more general sampling designs, such as Poisson sampling (which generalizes Bernoulli sampling) used in graph sparsification (cf. Spielman, 2005), and investigating the properties of minimizers of Horvitz-Thompson versions of the reconstruction risk (see Horvitz and Thompson, 1951). Another challenging line of future research is to extend the results of this paper to more complex unconditional graph structures in order to account for properties shared by some real-world graphs (e.g., graphs with a power law degree distribution).

Acknowledgments This work was partially supported by the chair "Machine Learning for Big Data" of Télécom ParisTech and by a grant from CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020.
References
Agarwal, S. (2014). Surrogate regret bounds for bipartite ranking via strongly proper losses. JMLR, 15:1653?
1674.
Antos, A., Gy?rfi, L., and Gy?rgy, A. (2005). Individual convergence rates in empirical vector quantizer design.
IEEE Transactions on Information Theory, 51(11):4013?4023.
Arcones, M. and Gin?, E. (1994). U-processes indexed by Vapnik-Chervonenkis classes of functions with
applications to asymptotics and bootstrap of U-statistics with estimated parameters. Stochastic Processes and
their Applications, 52:17?38.
Bellet, A., Habrard, A., and Sebban, M. (2015). Metric Learning. Morgan & Claypool Publishers.
Biau, G. and Bleakley, K. (2006). Statistical Inference on Graphs. Statistics & Decisions, 24:209?232.
Boucheron, S., Bousquet, O., Lugosi, G., and Massart, P. (2005). Moment inequalities for functions of
independent random variables. Ann. Stat., 33(2):514?560.
Cl?men?on, S. (2014). A statistical view of clustering performance through the theory of U-processes. Journal
of Multivariate Analysis, 124:42 ? 56.
Cl?men?on, S. and Robbiano, S. (2011). Minimax learning rates for bipartite ranking and plug-in rules. In
ICML.
Cl?men?on, S. and Vayatis, N. (2010). Overlaying classifiers: a practical approach to optimal scoring. Constructive Approximation, 32(3):619?648.
Cl?men?on, S., Lugosi, G., and Vayatis, N. (2008). Ranking and Empirical Minimization of U-statistics. Ann.
Stat., 36(2):844?874.
Cukierski, W., Hamner, B., and Yang, B. (2011). Graph-based features for supervised link prediction. In IJCNN.
De la Pena, V. and Gin?, E. (1999). Decoupling : from dependence to independence. Springer.
Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. The Annals of Mathematical
Statistics, 19:293?325.
Horvitz, D. and Thompson, D. (1951). A generalization of sampling without replacement from a finite universe.
Journal of the American Statistical Association, 47:663?685.
Jansen, R., Yu, H., Greenbaum, D., Kluger, Y., Krogan, N., Chung, S., Emili, A., Snyder, M., Greenblatt, J., and
Gerstein, M. (2003). A Bayesian networks approach for predicting protein-protein interactions from genomic
data. Science, 302(5644):449?453.
Janson, S. and Nowicki, K. (1991). The asymptotic distributions of generalized U-statistics with applications to
random graphs. Probability Theory and Related Fields, 90:341?375.
Kanehisa, M. (2001). Prediction of higher order functional networks from genomic data. Pharmacogenomics,
2(4):373?385.
Lee, A. J. (1990). U -statistics: Theory and practice. Marcel Dekker, Inc., New York.
Liben-Nowell, D. and Kleinberg, J. (2003). The link prediction problem for social networks. In CIKM.
Lichtenwalter, R., Lussier, J., and Chawla, N. (2010). New perspectives and methods in link prediction. In KDD.
Mammen, E. and Tsybakov, A. (1999). Smooth discrimination analysis. Ann. Stat., 27(6):1808?1829.
Massart, P. and N?d?lec, E. (2006). Risk bounds for statistical learning. Ann. Stat., 34(5).
Mattick, J. and Gagen, M. (2005). Accelerating networks. Science, 307(5711):856?858.
Rigollet, P. and Vert, R. (2009). Fast rates for plug-in estimators of density level sets. Bernoulli, 14(4):1154?1178.
Shaw, B., Huang, B., and Jebara, T. (2011). Learning a Distance Metric from a Network. In NIPS.
Spielman, D. (2005). Fast Randomized Algorithms for Partitioning, Sparsification, and Solving Linear Systems.
Lecture notes from IPCO Summer School 2005.
Tsybakov, A. (2004). Optimal aggregation of classifiers in statistical learning. Ann. Stat., 32(1):135?166.
Vert, J.-P., Qiu, J., and Noble, W. S. (2007). A new pairwise kernel for biological network inference with support
vector machines. BMC Bioinformatics, 8(10).
Vert, J.-P. and Yamanishi, Y. (2004). Supervised graph inference. In NIPS, pages 1433?1440.
Scan Order in Gibbs Sampling: Models in Which it
Matters and Bounds on How Much
Bryan He, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré
Stanford University
{bryanhe,cdesa,imit,chrismre}@stanford.edu
Abstract
Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively
samples variables from their conditional distributions. There are two common scan
orders for the variables: random scan and systematic scan. Due to the benefits
of locality in hardware, systematic scan is commonly used, even though most
statistical guarantees are only for random scan. While it has been conjectured that
the mixing times of random scan and systematic scan do not differ by more than a
logarithmic factor, we show by counterexample that this is not the case, and we
prove that that the mixing times do not differ by more than a polynomial factor
under mild conditions. To prove these relative bounds, we introduce a method of
augmenting the state space to study systematic scan using conductance.
1 Introduction
Gibbs sampling, or Glauber dynamics, is a Markov chain Monte Carlo method that draws approximate
samples from multivariate distributions that are difficult to sample directly [9; 15, p. 40]. A major use
of Gibbs sampling is marginal inference: the estimation of the marginal distributions of some variables
of interest [8]. Some applications include various computer vision tasks [9, 23, 24], information
extraction [7], and latent Dirichlet allocation for topic modeling [11]. Gibbs sampling is simple to
implement and quickly produces accurate samples for many models, so it is widely used and available
in popular libraries such as OpenBUGS [16], FACTORIE [17], JAGS [18], and MADlib [14].
Gibbs sampling (Algorithm 1) iteratively selects a single variable and resamples it from its conditional
distribution, given the other variables in the model. The method that selects the variable index to
sample (s in Algorithm 1) is called the scan order. Two scan orders are commonly used: random scan
and systematic scan (also known as deterministic or sequential scan). In random scan, the variable to
sample is selected uniformly and independently at random at each iteration. In systematic scan, a
fixed permutation is selected, and the variables are repeatedly selected in that order. The existence of
these two distinct options raises an obvious question?which scan order produces accurate samples
more quickly? This question has two components: hardware efficiency (how long does each iteration
take?) and statistical efficiency (how many iterations are needed to produce an accurate sample?).
From the hardware efficiency perspective, systematic scans are clearly superior [21, 22]. Systematic
scans have good spatial locality because they access the variables in linear order, which makes their
iterations run faster on hardware. As a result, systematic scans are commonly used in practice.
Comparing the two scan orders is much more interesting from the perspective of statistical efficiency,
which we focus on for the rest of this paper. Statistical efficiency is measured by the mixing
time, which is the number of iterations needed to obtain an accurate sample [15, p. 55]. The
mixing times of random scan and systematic scan have been studied, and there is a longstanding
conjecture [3; 15, p. 300] that systematic scan (1) never mixes more than a constant factor slower
than random scan and (2) never mixes more than a logarithmic factor faster than random scan. This
conjecture implies that the choice of scan order does not have a large effect on performance.
Algorithm 1 Gibbs sampler
input Variables x_i for 1 ≤ i ≤ n, and target distribution π
Initialize x_1, . . . , x_n
loop
Select variable index s from {1, . . . , n}
Sample x_s from the conditional distribution P_π(X_s | X_{{1,...,n}\{s}})
end loop
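A compact sketch of Algorithm 1 covering both scan orders (the conditional sampler is model-specific and passed in; the names are ours, not the paper's):

```python
import numpy as np

def gibbs(x, sample_conditional, n_steps, scan="systematic", perm=None, rng=None):
    """Run a Gibbs sampler on the variable vector x (modified in place).
    sample_conditional(x, s, rng) returns a draw of x[s] given the other variables."""
    rng = rng or np.random.default_rng()
    n = len(x)
    order = list(perm) if perm is not None else list(range(n))
    for t in range(n_steps):
        s = rng.integers(n) if scan == "random" else order[t % n]
        x[s] = sample_conditional(x, s, rng)
    return x
```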
Recently, Roberts and Rosenthal [20] described a model in which systematic scan mixes more
slowly than random scan by a polynomial factor; this disproves direction (1) of this conjecture.
Independently, we constructed other models for which the scan order has a significant effect on
mixing time. This raises the question: what are the true bounds on the difference between these
mixing times? In this paper, we address this question and make the following contributions.
? In Section 3, we study the effect of the variable permutation chosen for systematic scan on
the mixing time. In particular, in Section 3.1, we construct a model for which a systematic
scan mixes a polynomial factor faster than random scan, disproving direction (2) of the
conjecture, and in Section 3.2, we construct a model for which the systematic scan with the
worst-case permutation results in a mixing time that is slower by a polynomial factor than
both the best-case systematic scan permutation and random scan.
? In Section 4, we empirically verify the mixing times of the models we construct, and we
analyze how the mixing time changes as a function of the permutation.
? In Section 5, we prove a weaker version of the conjecture described above, providing
relative bounds on the mixing times of random and systematic scan. Specifically, under
mild conditions, different scan orders can only change the mixing time by a polynomial
factor. To obtain these bounds, we introduce a method of augmenting the state space of
Gibbs sampling so that the method of conductance can be applied to analyze its dynamics.
2 Related Work
Recent work has made progress on analyzing the mixing time of Gibbs sampling, but there are still
major limitations to our understanding. In particular, most results are only for specific models or for
random scan. For example, mixing times are known for Mallow's model [1, 4], and colorings of a
graph [5] for both random and systematic scan, but these are not applicable to general models. On the
other hand, random scan has been shown to mix in polynomial time for models that satisfy structural
conditions ? such as having close-to-modular energy functions [10] or having bounded hierarchy
width and factor weights [2] ? but corresponding results for for systematic scan are not known.
The major exception to these limitations is Dobrushin's condition, which guarantees O(n log n)
mixing for both random scan and systematic scan [6, 13]. However, many models of interest with
close-to-modular energy functions or bounded hierarchy width do not satisfy Dobrushin's condition.
A similar choice of scan order appears in stochastic gradient descent (SGD), where the standard SGD
algorithm uses random scan, and the incremental gradient method (IGM) uses systematic scan. In
contrast to Gibbs sampling, avoiding "bad permutations" in the IGM is known to be important to
ensure fast convergence [12, 19]. In this paper, we bring some intuition about the existence of bad
permutations from SGD to Gibbs sampling.
3 Models in Which Scan Order Matters
Despite a lack of theoretical results regarding the effect of scan order on mixing times, it is generally
believed that scan order only has a small effect on mixing time. In this section, we first define
relevant terms and state some common conjectures regarding scan order. Afterwards, we give several
counterexamples showing that the scan order can have asymptotic effects on the mixing time.
The total variation distance between two probability distributions μ and ν on Ω is [15, p. 47]

‖μ − ν‖_TV = max_{A ⊆ Ω} |μ(A) − ν(A)|.
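On a finite state space the maximum is attained by the event collecting all states where μ exceeds ν, so the distance reduces to half an L1 norm; a one-line check:

```python
import numpy as np

def tv_distance(mu, nu):
    """Total variation distance between two distributions on a finite state space."""
    return 0.5 * np.abs(np.asarray(mu) - np.asarray(nu)).sum()
```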
Table 1: Models and Approximate Mixing Times

Model                    | t_mix(R) | min_σ t_mix(S_σ) | max_σ t_mix(S_σ)
Sequence of Dependencies | n²       | n                | n²
Two Islands              | 2^n      | 2^n              | n·2^n
Discrete Pyramid [20]    | n        | n³               | n³
Memorize and Repeat      | n³       | n²               | n²
Soft Dependencies        | n^{3/2}  | n                | n²
The mixing time is the minimum number of steps needed to guarantee that the total variation distance
between the true and estimated distributions is below a given threshold from any starting distribution.
Formally, the mixing time of a stochastic process P with transition matrix P^(t) after t steps and
stationary distribution π is [15, p. 55]

t_mix(P, ε) = min{ t : max_μ ‖μ P^(t) − π‖_TV ≤ ε },

where the maximum is taken over the distribution μ of the initial state of the process. When comparing
the statistical efficiency of systematic scan and random scan, it would be useful to establish, for any
systematic scan process S and random scan process R on the same n-variable model, a relative bound
of the form
F_1(ε, n, t_mix(R, ε)) ≤ t_mix(S, ε) ≤ F_2(ε, n, t_mix(R, ε))    (1)
for some functions F_1 and F_2. Similarly, to bound the effect that the choice of permutation can have
on the mixing time, it would be useful to know, for any two systematic scan processes S_σ and S_τ
with different permutations on the same model, that for some function F_3,

t_mix(S_σ, ε) ≤ F_3(ε, n, t_mix(S_τ, ε)).    (2)
Diaconis [3] and Levin et al. [15, p. 300] conjecture that systematic scan is never more than a
constant factor slower or a logarithmic factor faster than random scan. This is equivalent to choosing
F_1(ε, n, t) = C_1(ε) · t · (log n)^{-1} and F_2(ε, n, t) = C_2(ε) · t in the inequality in (1), for some functions
C_1 and C_2. It is also commonly believed that all systematic scans mix at the same asymptotic rate,
which is equivalent to choosing F_3(ε, n, t) = C_3(ε) · t in (2).
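Both definitions can be checked numerically for small chains by powering the transition matrix and tracking the worst-case starting state (a brute-force sketch, assuming the chain is small enough to store P explicitly):

```python
import numpy as np

def mixing_time(P, pi, eps=0.25, t_max=10**6):
    """Smallest t with max_x || P^t(x, .) - pi ||_TV <= eps (brute force)."""
    Pt = np.eye(len(pi))
    for t in range(1, t_max + 1):
        Pt = Pt @ P                                       # now Pt = P^t
        worst = 0.5 * np.abs(Pt - pi).sum(axis=1).max()   # worst starting state
        if worst <= eps:
            return t
    raise RuntimeError("chain did not mix within t_max steps")
```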
These conjectures imply that using systematic scan instead of random scan will not result in significant
consequences, at least asymptotically, and that the particular permutation used for systematic scan is
not important. However, we show that neither conjecture is true by constructing models (listed in
Table 1) in which the scan order has substantial asymptotic effects on mixing time.
In the rest of this section, we go through two models in detail to highlight the diversity of behaviors
that different scan orders can have. First, we construct the sequence of dependencies model, for
which a single "good permutation" of systematic scan mixes faster, by a polynomial factor, than both
random scan and systematic scans using most other permutations. This serves as a counterexample
to the conjectured lower bounds (i.e. the choice of F1 and F3 ) on the mixing time of systematic
scan. Second, we construct the two islands model, for which a small set of "bad permutations" mix
very slowly in comparison to random scan and most other systematic scans. This contradicts the
conjectured upper bounds (i.e. the choice of F2 and F3 ). For completeness, we also discuss the
discrete pyramid model introduced by Roberts and Rosenthal [20], which contradicts the conjectured
choice of F2 . Table 1 lists several additional models we constructed: these models further explore the
space of asymptotic comparisons among scan orders, but for brevity we defer them to the appendix.
3.1 Sequence of Dependencies
The first model we describe is the sequence of dependencies model (Figure 1a), where we explore
how fast systematic scan can be by allowing a specific good permutation to mix rapidly. The sequence
of dependencies model achieves this by having the property that, at any time, progress towards mixing
is only made if a particular variable is sampled; this variable is always the one that is chosen by the
good permutation. As a result, while a systematic scan using the good permutation makes progress at
[Figure 1: State space of the models. (a) Sequence of Dependencies Model; (b) Two Islands Model; (c) Discrete Pyramid Model.]
every step, both random scan and other systematic scans often fail to progress, which leads to a gap
between their mixing times. Thus, this model exhibits two surprising behaviors: (1) one systematic
scan is polynomially better than random scan and (2) systematic scans using different permutations
have polynomial differences in mixing times. We now describe this model in detail.
Variables There are n binary variables x1 , . . . , xn . Independently, each variable has a very strong
prior of being true. However, variable x_i is never true unless x_{i-1} is also true. The unnormalized
probability distribution is the following, where M is a very large constant:¹

P(x) ∝ { 0                    if x_i is true and x_{i-1} is false for some i ∈ {2, . . . , n}
       { M^(Σ_{i=1}^n x_i)    otherwise
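Plugged into a Gibbs loop, the conditional this distribution induces is simple: x_i may be true only when x_{i-1} is, must be true when x_{i+1} is, and otherwise prefers true with odds M : 1 (a sketch of ours; a finite M stands in for the "very large constant"):

```python
def seq_dep_conditional(x, i, rng, M=1e6):
    """Conditional draw of x[i] for the sequence-of-dependencies model."""
    left_ok = (i == 0) or bool(x[i - 1])             # x_i may be true only then
    right_true = (i + 1 < len(x)) and bool(x[i + 1])
    if not left_ok:
        return x[i] if right_true else 0             # both values invalid: keep state
    if right_true:
        return 1                                     # a true right neighbor forces x_i
    return int(rng.random() < M / (1.0 + M))         # strong prior toward true
```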
State Space There are n + 1 states with non-zero probability: s_0, . . . , s_n, where s_i is the state
where the first i variables are true and the remaining n − i variables are false. In the stationary
distribution, s_n has almost all of the mass due to the strong priors on the variables, so reaching s_n is
essentially equivalent to mixing because the total variation distance from the stationary distribution is
equal to the mass not on s_n. Notice that sampling x_i will almost always move the state from s_{i-1} to
s_i, very rarely move it from s_i to s_{i-1}, and can have no other effect. The worst-case starting state is
s_0, where the variables must be sampled in the order x_1, . . . , x_n for this model to mix.
Random Scan The number of steps needed to transition from s_0 to s_1 is distributed as a geometric
random variable with mean n (variables are randomly selected, and specifically x_1 must be selected).
Similarly, the number of steps needed to transition from s_{i-1} to s_i is distributed as a geometric
random variable with mean n. In total, there are n transitions, so O(n²) steps are needed to mix.
Best Systematic Scan The best systematic scan uses the order x_1, x_2, . . . , x_n. For this scan, one
sweep will reach s_n no matter what the starting state is, so the mixing time is n.
Worst Systematic Scan The worst systematic scan uses the order x_n, x_{n-1}, . . . , x_1. The first
sweep only uses x_1, the second sweep only uses x_2, and in general, any sweep only makes progress
using one transition. Finally, in the n-th sweep, x_n is used in the first step. Thus, this process mixes
in n(n − 1) + 1 steps, which is O(n²).
3.2 Two Islands
With the sequence of dependencies model, we showed that a single good permutation can mix much
faster than other scan orders. Next, we describe the two islands model (Figure 1b), which has the
¹ We discuss the necessary magnitude of M in Appendix B.
opposite behavior: it has bad permutations that yield much slower mixing times. The two islands
model achieves this by having two disjoint blocks of variables such that consecutively sampling two
variables from the same block accomplishes very little. As a result, a systematic scan that uses a
permutation that frequently consecutively samples from the same block mixes a polynomial factor
slower than both random scan and most other systematic scans. We now describe this model in detail.
Variables There are 2n binary variables grouped into two blocks: x1 , . . . , xn and y1 , . . . , yn .
Conditioned on all other variables being false, each variable is equally likely to be true or false.
However, the x variables and the y variables contradict each other. As a result, if any of the x's are
true, then all of the y's must be false, and if any of the y's are true, then all of the x's must be false.
The unnormalized probability distribution for this model is the following.
P(x, y) ∝ { 0   if ∃x_i true and ∃y_j true
          { 1   otherwise                      (3)
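The Gibbs conditionals here are equally direct: a variable is forced to false whenever the opposite island is occupied, and is a fair coin otherwise (a sketch of ours):

```python
def two_islands_conditional(x, y, block, i, rng):
    """Conditional draw for variable i in block 'x' or 'y' of the two-islands model."""
    other_occupied = any(y) if block == "x" else any(x)
    if other_occupied:
        return 0                        # opposite island occupied: must be false
    return int(rng.random() < 0.5)      # otherwise equally likely true or false
```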
This model can be interpreted as a machine learning inference problem in the following way. Each
variable represents whether the reasoning in some sentence is sound. The sentences corresponding
to x1 , . . . , xn and the sentences corresponding to y1 , . . . , yn reach contradicting conclusions. If any
variable is true, its conclusion is correct, so all of the sentences that reached the opposite conclusion
must not be sound, and their corresponding variables must be false. However, this does not
guarantee that all other sentences that reached the same conclusion have sound reasoning, so it is
possible for some variables in a block to be true while others are false. Under these assumptions
alone, the natural way to model this system is with the two islands distribution in (3).
State Space The states are divided into three groups: states in island x (at least one x variable is
true), states in island y (at least one y variable is true), and a single bridge state b (all variables are
false). The islands are well-connected internally, so the islands mix rapidly, but it is impossible to
directly move from one island to the other: the only way to move from one island to the other is
through the bridge. To simplify the analysis, we assume that the bridge state has very low mass.²
This allows us to assume that the chains always move off of the bridge when a variable is sampled.
The bridge is the only way to move from one island to the other, so it acts as a bottleneck. As a result,
the efficiency of bridge usage is critical to the mixing time. We will use bridge efficiency to refer to
the probability that the chain moves to the other island when it reaches the bridge. Because mixing
within the islands is rapid in comparison to the time needed to move onto the bridge, the mixing time
is inversely proportional to the bridge efficiency of the chain.
Random Scan In random scan, the variable selected after getting on the bridge is independent of
the previous variable. As a result, with probability 1/2, the chain will move onto the other island,
and with probability 1/2, the chain will return to the same island, so the bridge efficiency is 1/2.
Best Systematic Scan Several different systematic scans achieve the fastest mixing time. One such
scan is x1 , y1 , x2 , y2 , . . . , xn , yn . Since the sampled variables alternate between the blocks, if the
chain moves onto the bridge (necessarily by sampling a variable from the island it was previously on),
it will always proceed to sample a variable from the other block, which will cause it to move onto
the other island. Thus, the bridge efficiency is 1. More generally, any systematic scan that alternates
between sampling from x variables and sampling from y variables will have a bridge efficiency of 1.
Worst Systematic Scan Several different systematic scans achieve the slowest mixing time. One
such scan is x1 , . . . , xn , y1 , . . . , yn . In this case, if the chain moves onto the bridge, it will almost
always proceed to sample a variable from the same block, and return to the same island. In fact,
the only way for this chain to move across islands is if it moves from island x to the bridge using
transition xn and then moves to island y using transition y1 , or if it moves from island y to the bridge
using transition yn and then moves to island x using transition x1 . Thus, only 2 of the 2n transitions
will cross the bridge, and the bridge efficiency is 1/n. More generally, any systematic scan that
consecutively samples all x variables and then all y variables will have a bridge efficiency of 1/n.
Comparison of Mixing Times The mixing times of the chains are inversely proportional to the
bridge efficiency. As a result, random scan takes twice as long to mix as the best systematic scan, and
mixes n/2 times faster than the worst systematic scan.
² We show that the same asymptotic result holds without this assumption in Appendix C.
3.3 Discrete Pyramid
In the discrete pyramid model (Figure 1c) introduced by Roberts and Rosenthal [20], there are n
binary variables xi , and the mass is uniformly distributed over all states where at most one xi is true.
In this model, the mixing time of random scan, O(n), is asymptotically better than that of systematic
scan for any permutation, which all have the same mixing time, O(n³).
4 Experiments
In this section, we run several experiments to illustrate the effect of scan order on mixing times. First,
in Figure 2a, we plot the mixing times of the models from Section 3 as a function of the number of
variables. These experiments validate our results about the asymptotic scaling of the mixing time,
as well as show that the scan order can have a significant effect on the mixing time for even small
models. (Due to the exponential state space of the two islands model, we modify it slightly to make
the computation of mixing times feasible: we simplify the model by only considering the states that
are adjacent to the bridge, and assume that the states on each individual island mix instantly.)
In the following experiments, we consider a modified version of the two islands model, in which the
mass of the bridge state is set to 0.1 of the mass of the other states to allow the effect of scan order to
be clear even for a small number of variables. Figure 2b illustrates the rate at which different scan
orders explore this modified model. Due to symmetry, we know that half of the mass should be on
each island in the stationary distribution, so getting half of the mass onto the other island is necessary
for mixing. This experiment illustrates that random scan and a good systematic scan move to the
other island quickly, while a bad systematic scan requires many more iterations.
Figure 2c illustrates the effect that the permutation chosen for systematic scan can have on the mixing
time. In this experiment, the mixing time for each permutation was found and plotted in sorted order.
For the sequence of dependencies model, there are a small number of good permutations which mix
very quickly compared to the other permutations and random scan. However, no permutation is bad
compared to random scan. In the two islands model, as we would expect based on the analysis in
Section 3, there are a small number of bad permutations which mix very slowly compared to the
other permutations and random scan. Some permutations are slightly better than random scan, but
none of the scan orders are substantially better. In addition, the mixing times for systematic scan are
approximately discretized due to the fact that mixing time depends so heavily on the bridge efficiency.
5 Relative Bounds on Mixing Times via Conductance
In Section 3, we described two models for which a systematic scan can mix a polynomial factor
faster or slower than random scan, thus invalidating conventional wisdom that the scan order does not
have an asymptotically significant effect on mixing times. This raises a question of how different the
mixing times of different scans can be. In this section, we derive the following weaker, but correct,
version of the conjecture stated by Diaconis [3] and Levin et al. [15].
One of the obstacles to proving this result is that the systematic scan chain is not reversible. A
standard method of handling non-reversible Markov chains is to study a lazy version of the Markov
chain instead [15, p. 9]. In the lazy version of a Markov chain, each step has a probability of 1/2 of
staying at the current state, and acts as a normal step otherwise. This is equivalent to stopping at a
random time that is distributed as a binomial random variable. Due to the fact that systematic scan is
not reversible, our bounds are on the lazy systematic scan, rather than the standard systematic scan.
Theorem 1. For any random scan Gibbs sampler R and lazy systematic scan sampler S with the
same stationary distribution π, their relative mixing times are bounded as follows:

(1/2 − ε)² · t_mix(R, ε) ≤ 2 · t_mix(S, ε)² · log(1/π_min),

(1/2 − ε)² · t_mix(S, ε) ≤ (8n² / (min_{x,i} P_i(x, x))²) · t_mix(R, ε)² · log(1/π_min),

where P_i is the transition matrix corresponding to resampling just variable i, and π_min is the
probability of the least likely state in π.
[Figure 2: Empirical analysis of the models. (a) Mixing times for ε = 1/4. (b) Marginal island mass over time. (c) Sorted mixing times of different permutations (ε = 1/4).]
Under mild conditions, namely ε being fixed and the quantities log(π_min^{-1}) and (min_{x,i} P_i(x, x))^{-1}
being at most polynomial in n, this theorem implies that the choice of scan order can only affect the
mixing time by up to polynomial factors in n and t_mix. We now outline the proof of this theorem and
include full proofs in Appendix D.
In the two islands models, the mixing time of a scan order was determined by its ability to move
through a single bridge state that restricted flow. This suggests that a technique with the ability to
model the behavior of this bridge state is needed to bound the relative mixing times of different scans.
Conductance, also known as the bottleneck ratio, is a topological property of Markov chains used to
bound mixing times by considering the flow of mass around the model [15, p. 88]. This ability to
model bottlenecks in a Markov chain makes conductance a natural technique both for studying the
two islands model and bounding mixing times in general.
More formally, consider a Markov chain on state space Ω with transition matrix P and stationary
distribution π. The conductance of a set S and of the whole chain are respectively defined as

Φ(S) = ( Σ_{x∈S, y∉S} π(x) P(x, y) ) / π(S),        Φ* = min_{S : π(S) ≤ 1/2} Φ(S).
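For tiny chains, Φ(S) and Φ* can be computed exactly by enumerating subsets, which is useful for sanity checks (a brute-force sketch of ours, exponential in the number of states):

```python
from itertools import combinations
import numpy as np

def bottleneck_ratio(P, pi):
    """Brute-force Phi* = min over S with pi(S) <= 1/2 of Phi(S)."""
    m = len(pi)
    best = np.inf
    for k in range(1, m):
        for S in combinations(range(m), k):
            S = np.array(S)
            pS = pi[S].sum()
            if pS > 0.5:
                continue
            out = np.setdiff1d(np.arange(m), S)
            flow = (pi[S][:, None] * P[np.ix_(S, out)]).sum()  # x in S, y not in S
            best = min(best, flow / pS)
    return best
```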
Conductance can be directly applied to analyze random scan. Let P_i be the transition matrix
corresponding to sampling variable i. The state space Ω is used without modification, and the
transition matrix is P = (1/n) Σ_{i=1}^n P_i. The stationary distribution is the target distribution π.
On the other hand, conductance cannot be directly applied to systematic scan. Systematic scan is not
a homogeneous Markov chain because it uses a sequence of transition matrices rather than a single
transition matrix. One standard method of converting systematic scan into a homogeneous Markov
chain is to consider each full scan as one step of a Markov chain. However, this makes it difficult
to compare with random scan because it completely changes which states are connected by single
steps of the transition matrix. To allow systematic and random scan to be compared more easily,
we introduce an alternative way of converting systematic scan to a homogeneous Markov chain by
augmenting the state space. The augmented state space is Ω̄ = Ω × [n], which represents an ordered
pair of the normal state and the index of the variable to be sampled. The transition probability is
P̄((x, i), (y, j)) = P_i(x, y) · s(i, j), where s(i, j) = I[i + 1 ≡ j (mod n)] is an indicator that shows
if the correct variable will be sampled next.
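Concretely, the augmented chain can be materialized as an (|Ω|·n) × (|Ω|·n) matrix by placing each P_i on the block that advances the variable index, indexing the augmented state (x, i) as x·n + i (a sketch of ours):

```python
import numpy as np

def augmented_systematic(P_list):
    """Transition matrix of the augmented systematic scan chain.
    P_list[i] is the |Omega| x |Omega| matrix for resampling variable i."""
    n = len(P_list)
    m = P_list[0].shape[0]
    Q = np.zeros((m * n, m * n))
    for i, Pi in enumerate(P_list):
        j = (i + 1) % n                  # s(i, j) = 1 iff j = i + 1 (mod n)
        for x in range(m):
            Q[x * n + i, np.arange(m) * n + j] = Pi[x]
    return Q
```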
Additionally, augmenting the state space for random scan allows easier comparison with systematic
scan in some cases. For augmented random scan, the state space is also Ω̄ = Ω × [n], the same
as for systematic scan. The transition probability is P̄((x, i), (y, j)) = (1/n) · P_i(x, y), which means
that the next variable to sample is selected uniformly. The stationary distributions of the augmented
random scan and systematic scan chains are both π̄((x, i)) = n^{-1} π(x). Because the state space and
stationary distribution are the same, augmented random scan and augmented systematic scan can be
compared directly, which lets us prove the following lemma.
Lemma 1. For any random scan Gibbs sampler and systematic scan sampler with the same stationary
distribution π, let Φ_RS denote the conductance of the random scan process, let Φ_RS-A denote the
conductance of the augmented random scan process, and let Φ_SS-A denote the conductance of the
augmented systematic scan process. Then,

(1/(2n)) · min_{x,i} P_i(x, x) · Φ_RS-A ≤ Φ_SS-A ≤ Φ_RS.
In Lemma 1, the upper bound states that the conductance of systematic scan is no larger than the
conductance of random scan. We use this result in the proof of Theorem 1 to show that systematic
scan cannot mix too much more quickly than random scan. To prove this upper bound, we show
that for any set S under random scan, the set S̄ containing the corresponding augmented states for
systematic scan will have the same conductance under systematic scan as S had under random scan.
The lower bound in Lemma 1 states that the conductance of systematic scan is no smaller than a
function of the conductance of augmented random scan. This function depends on the number of
variables n and min_{x,i} P_i(x, x), which is the minimum holding probability of any state. To prove
this lower bound, we show that for any set S under augmented systematic scan, we can bound its
conductance under augmented random scan.
There are well-known bounds on the mixing time of a Markov chain in terms of its conductance,
which we state in Theorem 2 [15, pp. 89, 235].
Theorem 2. For any lazy or reversible Markov chain,

(1/2 − ε) / Φ* ≤ t_mix(ε) ≤ (2 / Φ*²) · log(1/π_min).
It is straightforward to prove the result of Theorem 1 by combining the bounds from Theorem 2 with
the conductance bounds from Lemma 1.
6 Conclusion
We studied the effect of scan order on mixing times of Gibbs samplers, and found that for particular
models, the scan order can have an asymptotic effect on the mixing times. These models invalidate
conventional wisdom about scan order and show that we cannot freely change scan orders without
considering the resulting changes in mixing times. In addition, we found bounds on the mixing times
of different scan orders, which replaces a common conjecture about the mixing times of random scan
and systematic scan.
Acknowledgments
The authors acknowledge the support of: DARPA FA8750-12-2-0335; NSF IIS-1247701; NSF
CCF-1111943; DOE 108845; NSF CCF-1337375; DARPA FA8750-13-2-0039; NSF IIS-1353606;
ONR N000141210041 and N000141310129; NIH U54EB020405; NSF DGE-114747; DARPA's
SIMPLEX program; Oracle; NVIDIA; Huawei; SAP Labs; Sloan Research Fellowship; Moore
Foundation; American Family Insurance; Google; and Toshiba. The views and conclusions expressed
in this material are those of the authors and should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or implied, of DARPA, AFRL, NSF, ONR, NIH,
or the U.S. Government.
References
[1] I. Benjamini, N. Berger, C. Hoffman, and E. Mossel. Mixing times of the biased card shuffling and the asymmetric exclusion process. Transactions of the American Mathematical Society, 357(8):3013–3029, 2005.
[2] C. De Sa, C. Zhang, K. Olukotun, and C. Ré. Rapidly mixing gibbs sampling for a class of factor graphs using hierarchy width. In Advances in Neural Information Processing Systems, 2015.
[3] P. Diaconis. Some things we've learned (about markov chain monte carlo). Bernoulli, 19(4):1294–1305, 2013.
[4] P. Diaconis and A. Ram. Analysis of systematic scan metropolis algorithms using iwahori-hecke algebra techniques. The Michigan Mathematical Journal, 48(1):157–190, 2000.
[5] M. Dyer, L. A. Goldberg, and M. Jerrum. Systematic scan for sampling colorings. The Annals of Applied Probability, 16(1):185–230, 2006.
[6] M. Dyer, L. A. Goldberg, and M. Jerrum. Dobrushin conditions and systematic scan. Combinatorics, Probability and Computing, 17(06):761–779, 2008.
[7] J. R. Finkel, T. Grenager, and C. Manning. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, 2005.
[8] A. E. Gelfand and A. F. M. Smith. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410):398–409, 1990.
[9] S. Geman and D. Geman. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):721–741, 1984.
[10] A. Gotovos, H. Hassani, and A. Krause. Sampling from probabilistic submodular models. In Advances in Neural Information Processing Systems, 2015.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228–5235, 2004.
[12] M. Gürbüzbalaban, A. Ozdaglar, and P. Parrilo. Convergence rate of incremental gradient and newton methods. arXiv preprint arXiv:1510.08562, 2015.
[13] T. P. Hayes. A simple condition implying rapid mixing of single-site dynamics on spin systems. In 47th Annual IEEE Symposium on Foundations of Computer Science, 2006.
[14] J. M. Hellerstein, C. Ré, F. Schoppmann, D. Z. Wang, E. Fratkin, A. Gorajek, K. S. Ng, C. Welton, X. Feng, K. Li, et al. The madlib analytics library: or mad skills, the sql. Proceedings of the VLDB Endowment, 5(12):1700–1711, 2012.
[15] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov chains and mixing times. American Mathematical Society, 2009.
[16] D. Lunn, D. Spiegelhalter, A. Thomas, and N. Best. The bugs project: Evolution, critique and future directions. Statistics in medicine, 28(25):3049–3067, 2009.
[17] A. McCallum, K. Schultz, and S. Singh. Factorie: Probabilistic programming via imperatively defined factor graphs. In Advances in Neural Information Processing Systems, 2009.
[18] M. Plummer. Jags: A program for analysis of bayesian graphical models using gibbs sampling. In Proceedings of the 3rd international workshop on distributed statistical computing, 2003.
[19] B. Recht and C. Ré. Beneath the valley of the noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences. In Proceedings of the 25th Annual Conference on Learning Theory, 2012.
[20] G. O. Roberts and J. S. Rosenthal. Surprising convergence properties of some simple gibbs samplers under various scans. International Journal of Statistics and Probability, 5(1):51–60, 2015.
[21] A. Smola and S. Narayanamurthy. An architecture for parallel topic models. Proceedings of the VLDB Endowment, 3(1):703–710, 2010.
[22] C. Zhang and C. Ré. Towards high-throughput gibbs sampling at scale: A study across storage managers. In Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, 2013.
[23] Y. Zhang, M. Brady, and S. Smith. Segmentation of brain mr images through a hidden markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging, 20(1):45–57, 2001.
[24] S. C. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (frame): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2):107–126, 1998.
Assessing and Improving Neural Network
Predictions by the Bootstrap Algorithm
Gerhard Paass
German National Research Center for Computer Science (GMD)
D-5205 Sankt Augustin, Germany
e-mail: paass@gmd.de
Abstract
The bootstrap algorithm is a computational intensive procedure to
derive nonparametric confidence intervals of statistical estimators
in situations where an analytic solution is intractable. It is applied to neural networks to estimate the predictive distribution for
unseen inputs. The consistency of different bootstrap procedures
and their convergence speed is discussed. A small scale simulation
experiment shows the applicability of the bootstrap to practical
problems and its potential use.
1 INTRODUCTION
Bootstrapping is a strategy for estimating standard errors and confidence intervals
for parameters when the form of the underlying distribution is unknown. It is
particularly valuable when the parameter of interest is a complicated functional
of the true distribution. The key idea first promoted by Efron (1979) is that the
relationship between the true cumulative distribution function (cdf) F and the
sample of size n is similar to the relationship between the empirical cdf Fn and
a secondary sample drawn from it. So one uses the primary sample to form an
estimate Fn and calculates the sampling distribution of the parameter estimate
under Fn. This calculation is done by drawing many secondary samples and finding
the estimate, or function of the estimate, for each. If F_n is a good approximation
of F, then H_n, the sampling distribution of the estimate under F_n, is generally a
good approximation to the sampling distribution for the estimate under F. H_n is
called the bootstrap distribution of the parameter. Introductory articles are Efron
and Gong (1983) and Efron and Tibshirani (1986). For a survey of bootstrap results
see Hinkley (1988) and DiCiccio and Romano (1988).
A neural network often may be considered as a nonlinear or nonparametric regression model

z = g_β(y) + ε    (1)

which defines the relation between the vectors y and z of input and output variables.
The term ε can be interpreted as a random 'error' and the function g_β depends on
some unknown parameter β which may have infinite dimension. Usually the network
is used to determine a prediction z_0 = g_β̂(y_0) for some new input vector y_0. If the
data is a random sample, an estimate β̂ differs from the true value of β because of
the sampling error and consequently the prediction g_β̂(y_0) is different from the true
prediction. In this paper the bootstrap approach is used to approximate a sampling
distribution of the prediction (or a function thereof) and to estimate parameters
of that distribution like its mean value, variance, percentiles, etc. Bootstrapping
procedures are closely related to other resampling methods like cross validation and
the jackknife (Efron 1982). The jackknife can be considered as a linear approximation
to the bootstrap (Efron, Tibshirani 1986).
In the next section different versions of the bootstrap procedure for feedforward
neural networks are defined and their theoretical properties are reviewed. Main
points are the convergence of the bootstrap distribution to the true theoretical distribution and the speed of that convergence. In the following section the results of a
simulation experiment for a simple backprop model are reported and the application of the bootstrap to model selection is discussed. The final section gives a short
summary.
2 CONSISTENCY OF THE BOOTSTRAP FOR FEEDFORWARD NEURAL NETWORKS
Assume X (n) := (Xl, ... , xn) is the available independent, identically distributed
(iid) sample from an underlying cdf F where Xi
(Zi' Yi) and Fn is the corresponding empirical cdf. For a given Yo let T} T}(g/3(Yo)) be a parameter of interest of the
prediction, e.g. the mean value of the prediction of a component of Z for Yo.
=
=
The pairwise bootstrap algorithm is an intuitive way to apply the bootstrap notion to regression. It was proposed by Efron (1982) and involves the independent repetition of the following steps for b = 1, ..., B:

1. A sample X*_b(n) of size n is generated from F_n. Notice that this amounts to the random selection of n elements from X(n) with replacement.
2. An estimate η̂*_b is determined from X*_b(n).

The resulting empirical cdf of the η̂*_b, b = 1, ..., B is denoted by H_B and approximates the sampling distribution for the estimate η̂ under F_n. The standard deviation of H_B is an estimate of the standard error of η(F_n), and [H_B^{-1}(α), H_B^{-1}(1 − α)] is an approximate (1 − 2α) central confidence interval.
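As a concrete illustration (ours, not from the original paper), the following Python sketch implements the pairwise bootstrap for the prediction at a new input y_0; the fit and predict callables are hypothetical placeholders for the network training and prediction routines.

import numpy as np

def pairwise_bootstrap(X, Z, fit, predict, y0, B=1000, alpha=0.05, rng=None):
    """Pairwise bootstrap of the prediction at y0.
    X, Z: arrays holding the sample X(n); fit/predict are placeholders
    for the network training and prediction routines (assumed interfaces)."""
    rng = rng or np.random.default_rng()
    n = len(X)
    preds = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)      # draw n pairs with replacement
        params = fit(X[idx], Z[idx])          # refit on the bootstrap sample
        preds[b] = predict(params, y0)
    lo_q, hi_q = np.quantile(preds, [alpha, 1.0 - alpha])
    return preds, (lo_q, hi_q)                # approximate (1 - 2*alpha) interval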
In general two conditions are necessary for the bootstrap to be consistent:
• The estimator, e.g. η̂, has to be consistent.
• The functional which maps F to H_B has to be smooth.
This requirement can be formalized by a uniform weak convergence condition (DiCiccio, Romano 1988). Using these concepts Freedman (1981) proved that for the parameters of a linear regression model the pairwise bootstrap procedure is consistent, i.e. yields the desired limit distribution for n, B → ∞. Mammen (1991) showed that this also holds for the predictive distribution of a linear model (i.e. linear contrasts). These results hold even if the errors are heteroscedastic, i.e. if the distribution of ε_i depends on the value of y_i.
The performance of the bootstrap for linear regression is extensively discussed by
Wu (1986). It turns out that the small sample properties can be different from
the asymptotic relations and the bootstrap may exhibit a sizeable bias. Various
procedures of bias correction have been proposed (DiCiccio, Romano 1988). Beran
(1990) discusses a calibrated bootstrap prediction region containing the prediction g_β(y_0) + ε with prescribed probability α. It requires a sequence of nested bootstraps. Its coverage probability tends to α at a rate up to n^{-2}. Note that this procedure can be applied to nonlinear regression models (1) with homoscedastic errors (Example 3 in Beran (1990, p.718) can be extended to this case).
Biases especially arise if the errors are heteroscedastic. Hinkley (1988) discusses the
parametric modelling of the dependency of the error distribution (or its variance) on y and the application of the bootstrap algorithm using this model. The problem is
here to determine this parametric dependency from the data. As an alternative Wu
(1986) and Liu (1988) take into account heteroscedasticity in a nonparametric way.
They propose the following wild bootstrap algorithm which starts with a consistent estimate β̂ based on the sample X(n). Then the set of residuals (ε̂_1, ..., ε̂_n) with ε̂_i := z_i − g_β̂(y_i) is determined. The approach attempts to mimic the conditional distribution of z given y_i in a very crude way by defining a distribution Ĝ_i whose first three moments coincide with the observed residual ε̂_i:

∫ u dĜ_i(u) = 0,    ∫ u² dĜ_i(u) = ε̂_i²,    ∫ u³ dĜ_i(u) = ε̂_i³    (2)
Two point distributions are used which are uniquely defined by this requirement
(Mammen 1991, p.121). Then the following steps are repeated for b = 1, ... , B:
1. Independently generate residuals ε*_i according to Ĝ_i and generate observations z*_i := g_β̂(y_i) + ε*_i for i = 1, ..., n. This yields a new sample X*_b(n) of size n.
2. An estimate β̂*_b is determined from X*_b(n).

The resulting empirical cdf of the β̂*_b is then taken as the bootstrap distribution H_B which approximates the sampling distribution for the estimate β̂ under F_n.
Mammen (1991, p.123) shows that this algorithm is consistent for the prediction of
linear regression models if the least square estimator or M-estimators are used and
discusses the convergence speed of the procedure.
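The two-point distribution satisfying (2) is Mammen's multiplier distribution; below is a minimal sketch (ours) of the wild bootstrap built on it, again with hypothetical fit/predict callables.

import numpy as np

# Mammen's two-point multiplier V: E[V] = 0, E[V^2] = 1, E[V^3] = 1,
# so eps_i * V reproduces the three moment conditions in (2).
SQRT5 = np.sqrt(5.0)
V_VALUES = np.array([-(SQRT5 - 1.0) / 2.0, (SQRT5 + 1.0) / 2.0])
V_PROBS = np.array([(SQRT5 + 1.0) / (2.0 * SQRT5), (SQRT5 - 1.0) / (2.0 * SQRT5)])

def wild_bootstrap(X, Z, fit, predict, y0, B=1000, rng=None):
    rng = rng or np.random.default_rng()
    params_hat = fit(X, Z)                               # consistent initial fit
    fitted = np.array([predict(params_hat, x) for x in X])
    resid = Z - fitted                                   # residuals eps_i
    preds = np.empty(B)
    for b in range(B):
        v = rng.choice(V_VALUES, size=len(X), p=V_PROBS)
        Z_star = fitted + resid * v                      # wild resampled outputs
        params_b = fit(X, Z_star)                        # refit on the new sample
        preds[b] = predict(params_b, y0)
    return preds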
The bootstrap may also be applied to nonparametric regression models like kernel-type estimators of the form

ĝ(y) = Σ_i K((y − y_i)/h) z_i / Σ_i K((y − y_i)/h)    (3)
with kernel K and bandwidth h. These models are related to radial basis functions discussed in the neural network literature. For those models the pairwise bootstrap does not work (Härdle, Mammen 1990) as the algorithm is not forced to perform local averaging. To account for heteroscedasticity in the errors of (1) Härdle (1990, p.103) advocates the use of the wild bootstrap algorithm described above. Under some regularity conditions he shows the convergence of the bootstrap distribution of the kernel estimator to the correct limit distribution.
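For concreteness, a short sketch (ours) of the estimator in (3) for scalar inputs, with a Gaussian kernel as an assumed choice:

import numpy as np

def nadaraya_watson(y_query, Y, Z, h):
    """Kernel regression estimate of g(y_query) from pairs (Y[i], Z[i]),
    using a Gaussian kernel with bandwidth h (an assumed choice)."""
    w = np.exp(-0.5 * ((y_query - Y) / h) ** 2)   # weights K((y - y_i) / h)
    return np.sum(w * Z) / np.sum(w)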
To summarize, the bootstrap is often used simply because an analytic derivation of the desired sampling distribution is too complicated. The asymptotic investigations offer two additional reasons:
• There exist versions of the bootstrap algorithm that have a better rate of convergence than the usual asymptotic normal approximation. This effect has been extensively discussed in the literature, e.g. by Hall (1988), Beran (1988), DiCiccio and Romano (1988, p.349), Mammen (1991, p.74).
• There are cases where the bootstrap works even if the normal approximation breaks down. Bickel and Freedman (1983) for instance show that the bootstrap is valid for linear regression models in the presence of outliers and if the number of parameters changes with n. Their results are discussed and extended by Mammen (1991, p.88ff).
3 SIMULATION EXPERIMENTS
To demonstrate the performance of the bootstrap for real problems we investigated a small neural network. To get a nonlinear situation we chose a "noisy" version of the xor model with eight input units y_1, ..., y_8 and a single output unit z. The input variables may take the values 0 and 1. The output unit of the true model is stochastic. It takes the values 0.1 and 0.9 with the following probabilities:
p(z = 0.9) = 0.9   if  y_1 + y_2 + y_3 + y_4 < 3  and  y_5 + y_6 + y_7 + y_8 < 3
p(z = 0.9) = 0.1   if  y_1 + y_2 + y_3 + y_4 < 3  and  y_5 + y_6 + y_7 + y_8 > 3
p(z = 0.9) = 0.1   if  y_1 + y_2 + y_3 + y_4 > 3  and  y_5 + y_6 + y_7 + y_8 < 3
p(z = 0.9) = 0.9   if  y_1 + y_2 + y_3 + y_4 > 3  and  y_5 + y_6 + y_7 + y_8 > 3
In contrast to the simple xor model, generalization is possible in this setup. We
generated a training set X(n) of n = 100 inputs using the true model.
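A small sketch (ours) of how such a training set can be generated; the text leaves the boundary case of a half-sum equal to 3 unspecified, so the code arbitrarily groups it with the '< 3' branch.

import numpy as np

def sample_noisy_xor(n=100, rng=None):
    """Draw n samples from the noisy xor model described above.
    The boundary case (half-sum equal to 3) is not specified in the text;
    here it is grouped with the '< 3' branch as an assumption."""
    rng = rng or np.random.default_rng()
    Y = rng.integers(0, 2, size=(n, 8))            # binary inputs y_1..y_8
    left = Y[:, :4].sum(axis=1) > 3
    right = Y[:, 4:].sum(axis=1) > 3
    p_high = np.where(left == right, 0.9, 0.1)     # P(z = 0.9): xor-like pattern
    Z = np.where(rng.random(n) < p_high, 0.9, 0.1)
    return Y, Z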
We used the pairwise bootstrap procedure described above and generated B = 30 different bootstrap samples X*_b(n) by random selection from X(n) with replacement. This number of bootstrap samples is rather low and will only yield reliable information on the central tendency of the prediction. More sensitive parameters of the distribution, like low percentiles and the standard deviation, can be expected to exhibit larger fluctuations.
Figure 1: Box-Plots of the Bootstrap Predictive Distribution for a Series of Different Input Vectors. [Plot omitted: for each input vector (e.g. 10110110, 01110110, ...) the plot marks the true expected value, the value predicted by the original backprop model, and the 10/25/50/75/90 percentiles of the bootstrap distribution.]
We estimated 30 weight vectors β̂*_b from those samples
by the backpropagation method with random initial weights. Subsequently for each
of the 256 possible input vectors y_i we determined the prediction g_β̂*_b(y_i), yielding
a predictive distribution. For comparison purposes we also estimated the weights
of the original backprop model with the full data set X(n) and the corresponding predictions.
Table 1: Mean Square Deviation from the True Prediction

    INPUT TYPE             HIDDEN UNITS    BOOTSTRAP D_B    FULL DATA D_F
    training inputs        2               0.18             0.19
                           3               0.17             0.19
                           4               0.17             0.19
    non-training inputs    2               0.30             0.34
                           3               0.35             0.38
                           4               0.37             0.42
Table 2: Coverage Probabilities of the Bootstrap Confidence Interval for Prediction

    HIDDEN UNITS    FRACTION OF CASES WITH TRUE PREDICTION IN
                    [q25, q75]    [q10, q90]
    2               0.47          0.77
    3               0.44          0.70
    4               0.43          0.70
For some of those input vectors the results are shown in Figure 1. The distributions
differ greatly in size and form for the different input vectors. Usually the spread
of the predictive distribution is large if the median prediction differs substantially
from the true value. This reflects the situation that the observed data does not have
much information on the specific input vector. Simply by inspecting the predictive distribution, the reliability of a prediction may be assessed in a heuristic way. This may be a great help in practical applications.
In Table 1 the mean square difference D_B := ((1/n) Σ_i (z_i − q_{50,i})²)^{1/2} between the true prediction z_i and the median q_{50,i} of the bootstrap predictive distribution is compared to the mean square difference D_F := ((1/n) Σ_i (z_i − ẑ_{i,F})²)^{1/2} between the true prediction and the value ẑ_{i,F} estimated with the full data backprop model. For
the non-training inputs the bootstrap median has a lower mean deviation from the
true value. This effect is a real practical advantage and occurs even for this simple
bootstrap procedure. It may be caused in part by the variation of the initial weight
values (cf. Pearlmutter, Rosenfeld 1991). The utilization of bootstrap procedures
with higher order convergence has the potential to improve this effect.
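Both table entries are root-mean-square differences; a one-line sketch (ours):

import numpy as np

def rms_diff(true_pred, est):
    """Root-mean-square difference, as used for D_B and D_F in Table 1."""
    return float(np.sqrt(np.mean((np.asarray(true_pred) - np.asarray(est)) ** 2)))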
Table 2 lists the fraction of cases in the full set of all 256 possible inputs where the true value is contained in the central 50% and 80% prediction interval. Note that the intervals are based on only 30 cases. For the correct model with 2 hidden units the difference is 0.03, which corresponds to just one case. Models with more hidden units exhibit larger fluctuations. To arrive at more reliable intervals, the number of bootstrap samples has to be increased by an order of magnitude.
Table 3: Spread of the Predictive Distribution

    HIDDEN UNITS    MEAN INTERQUARTILE RANGE FOR
                    TRAINING INPUTS    NON-TRAINING INPUTS
    2               0.13               0.29
    3               0.11               0.35
    4               0.11               0.37
If we use a model with more than two hidden units the fit to the training sample cannot be improved but remains constant. For non-training inputs, however, the predictions of the model deteriorate. In Table 1 we see that the mean square deviation from the true prediction increases. This is just a manifestation of 'Occam's razor', which states that unnecessarily complex models should not be preferred to simpler ones (MacKay 1992). Table 3 shows that the spread of the predictive distribution is increased for non-training inputs in the case of models with more than two hidden units. Therefore Occam's razor is supported by the bootstrap predictive distribution without knowing the correct prediction.
This effect shows that bootstrap procedures may be utilized for model selection.
Analogous to Liu (1993) we may use a cross-validation strategy to determine the prediction error for the bootstrap estimate β̂*_b for sample elements of X(n) which are not contained in the bootstrap sample X*_b(n). In a similar way Efron (1982, p.52f) determines the error for the predictions g_β̂*_b(y) within the full sample X(n)
and uses this as an indicator of the model performance.
4 SUMMARY
The bootstrap method offers a computation-intensive alternative to estimate the predictive distribution for a neural network even if the analytic derivation is intractable. The available asymptotic results show that it is valid for a large number of linear, nonlinear and even nonparametric regression problems. It has the potential to model the distribution of estimators to a higher precision than the usual normal asymptotics. It may even be valid if the normal asymptotics fail. However, the theoretical properties of bootstrap procedures for neural networks, especially nonlinear models, have to be investigated more comprehensively. In contrast to the Bayesian approach no distributional assumptions (e.g. normal errors) have to be specified. The simulation experiments show that bootstrap methods offer
practical advantages as the performance of the model with respect to a new input
may be readily assessed.
Acknowledgements
This research was supported in part by the German Federal Department of Research and Technology, grant ITW8900A7.
Assessing and Improving Neural Network Predictions by the Bootstrap Algorithm
References
Beran, R. (1988): Prepivoting Test Statistics: A Bootstrap View of Asymptotic Refinements. Journal of the American Statistical Association, vol. 83, pp. 687-697.
Beran, R. (1990): Calibrating Prediction Regions. Journal of the American Statistical Association, vol. 85, pp. 715-723.
Bickel, P.J., Freedman, D.H. (1981): Some Asymptotic Theory for the Bootstrap. The Annals of Statistics, vol. 9, pp. 1196-1217.
Bickel, P.J., Freedman, D.H. (1983): Bootstrapping Regression Models with many Parameters. In P. Bickel, K. Doksum, J.C. Hodges (eds.) A Festschrift for Erich Lehmann. Wadsworth, Belmont, CA, pp. 28-48.
DiCiccio, T.J., Romano, J.P. (1988): A Review of Bootstrap Confidence Intervals. Journal of the Royal Statistical Society, Ser. B, vol. 50, pp. 338-354.
Efron, B. (1979): Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, vol. 7, pp. 1-26.
Efron, B. (1982): The Jackknife, the Bootstrap and Other Resampling Plans. SIAM, Philadelphia.
Efron, B., Gong, G. (1983): A leisurely look at the bootstrap, the jackknife and cross-validation. American Statistician, vol. 37, pp. 36-48.
Efron, B., Tibshirani, R. (1986): Bootstrap Methods for Standard Errors, Confidence Intervals, and other Measures of Statistical Accuracy. Statistical Science, vol. 1, pp. 54-77.
Freedman, D.H. (1981): Bootstrapping Regression Models. The Annals of Statistics, vol. 9, pp. 1218-1228.
Härdle, W. (1990): Applied Nonparametric Regression. Cambridge University Press, Cambridge.
Härdle, W., Mammen, E. (1990): Bootstrap Methods in Nonparametric Regression. Preprint Nr. 593, Sonderforschungsbereich 123, University of Heidelberg.
Hall, P. (1988): Theoretical Comparison of Bootstrap Confidence Intervals. The Annals of Statistics, vol. 16, pp. 927-985.
Hinkley, D. (1988): Bootstrap Methods. Journal of the Royal Statistical Society, Ser. B, vol. 50, pp. 321-337.
Liu, R. (1988): Bootstrap Procedures under some non-i.i.d. Models. The Annals of Statistics, vol. 16, pp. 1696-1708.
Liu, Y. (1993): Neural Network Model Selection Using Asymptotic Jackknife Estimator and Cross-Validation Method. This volume.
MacKay, D.J.C. (1992): Bayesian Model Comparison and Backprop Nets. In Moody, J.E., Hanson, S.J., Lippmann, R.P. (eds.) Advances in Neural Information Processing Systems 4. Morgan Kaufmann, San Mateo, pp. 839-846.
Mammen, E. (1991): When does Bootstrap Work: Asymptotic Results and Simulations. Preprint Nr. 623, Sonderforschungsbereich 123, University of Heidelberg.
Pearlmutter, B.A., Rosenfeld, R. (1991): Chaitin-Kolmogorov Complexity and Generalization in Neural Networks. In Lippmann et al. (eds.) Advances in Neural Information Processing Systems 3. Morgan Kaufmann, pp. 925-931.
Wu, C.F.J. (1986): Jackknife, Bootstrap and other Resampling Methods in Regression Analysis. The Annals of Statistics, vol. 14, pp. 1261-1295.
6,180 | 6,590 | Training and Evaluating Multimodal Word
Embeddings with Large-scale Web Annotated Images
Junhua Mao^1    Jiajing Xu^2    Yushi Jing^2    Alan Yuille^{1,3}
^1 University of California, Los Angeles    ^2 Pinterest Inc.    ^3 Johns Hopkins University
[email protected], {jiajing,jing}@pinterest.com, [email protected]
Abstract
In this paper, we focus on training and evaluating effective word embeddings with
both text and visual information. More specifically, we introduce a large-scale
dataset with 300 million sentences describing over 40 million images crawled and
downloaded from publicly available Pins (i.e. an image with sentence descriptions
uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than
MS COCO [22], the standard large-scale image dataset with sentence descriptions.
In addition, we construct an evaluation dataset to directly assess the effectiveness
of word embeddings in terms of finding semantically similar or related words
and phrases. The word/phrase pairs in this evaluation dataset are collected from
the click data with millions of users in an image search system, thus contain
rich semantic relationships. Based on these datasets, we propose and compare
several Recurrent Neural Networks (RNNs) based multimodal (text and image)
models. Experiments show that our model benefits from incorporating the visual
information into the word embeddings, and a weight sharing strategy is crucial for
learning such multimodal embeddings. The project page is: http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html.^1
1 Introduction
Word embeddings are dense vector representations of words with semantic and relational information.
In this vector space, semantically related or similar words should be close to each other. A large-scale
training dataset with billions of words is crucial to train effective word embedding models. The
trained word embeddings are very useful in various tasks and real-world applications that involve
searching for semantically similar or related words and phrases.
A large proportion of the state-of-the-art word embedding models are trained on pure text data only.
Since one of the most important functions of language is to describe the visual world, we argue that
the effective word embeddings should contain rich visual semantics. Previous work has shown that
visual information is important for training effective embedding models. However, due to the lack
of large training datasets of the same scale as the pure text dataset, the models are either trained on
relatively small datasets (e.g. [13]), or the visual contraints are only applied to limited number of
pre-defined visual concepts (e.g. [21]). Therefore, such work did not fully explore the potential of
visual information in learning word embeddings.
In this paper, we introduce a large-scale dataset with both text descriptions and images, crawled and
collected from Pinterest, one of the largest database of annotated web images. On Pinterest, users
save web images onto their boards (i.e. image collectors) and supply their descriptions of the images.
More descriptions are collected when the same images are saved and commented by other users.
Compared to MS COCO (i.e. the image benchmark with sentences descriptions [22]), our dataset is
much larger (40 million images with 300 million sentences compared to 0.2 million images and 1
million sentences in the current release of MS COCO) and is at the same scale as the standard pure
^1 The datasets introduced in this work will be gradually released on the project page.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
text training datasets (e.g. Wikipedia Text Corpus). Some sample images and their descriptions are
shown in Figure 1 in Section 3.1. We believe training on this large-scale dataset will lead to richer
and better generalized models. We denote this dataset as the Pinterest40M dataset.
One challenge for word embeddings learning is how to directly evaluate the quality of the model
with respect to the tasks (e.g. the task of finding related or similar words and phrases). State-of-the-art neural language models often use the negative log-likelihood of the predicted words as their
training loss, which is not always correlated with the effectiveness of the learned embedding. Current
evaluation datasets (e.g. [5, 14, 11]) for word similarity or relatedness contain fewer than a
thousand word pairs and cannot comprehensively evaluate all the embeddings of the words appearing
in the training set.
The challenge of constructing large-scale evaluation datasets is partly due to the difficulty of finding a
large number of semantically similar or related word/phrase pairs. In this paper, we utilize user click
information collected from Pinterest's image search system to generate millions of these candidate
word/phrase pairs. Because user click data are somewhat noisy, we removed inaccurate entries in
the dataset by using crowdsourcing human annotations. This led to a final gold standard evaluation
dataset consists of 10,674 entries.
Equipped with these datasets, we propose, train and evaluate several Recurrent Neural Network (RNN
[10]) based models with input of both text descriptions and images. Some of these models directly
minimize the Euclidean distance between the visual features and the word embeddings or RNN states,
similar to previous work (e.g. [13, 21]). The best performing model is inspired by recent image
captioning models [9, 24, 36], with the additional weight-sharing strategy originally proposed in [23]
to learn novel visual concepts. This strategy imposes soft constraints between the visual features and
all the related words in the sentences. Our experiments validate the effectiveness and importance of
incorporating visual information into the learned word embeddings.
We make three major contributions: Firstly, we constructed a large-scale multimodal dataset with both
text descriptions and images, which is at the same scale as the pure text training set. Secondly, we
collected and labeled a large-scale evaluation dataset for word and phrase similarity and relatedness
evaluation. Finally, we proposed and compared several RNN based models for learning multimodal
word embeddings effectively. To facilitate research in this area, we will gradually release the datasets
proposed in this paper on our project page.
2 Related Work
Image-Sentence Description Datasets The image descriptions datasets, such as Flickr8K [15],
Flickr30K [37], IAPR-TC12 [12], and MS COCO [22], greatly facilitated the development of models
for language and vision tasks such as image captioning. Because it takes lots of resources to label
images with sentence descriptions, the scale of these datasets is relatively small (MS COCO, the
largest dataset among them, only contains 1 million sentences while our Pinterest40M dataset has 300
million sentences). In addition, the language used to describe images in these datasets is relatively
simple (e.g. MS COCO only has around 10,000 unique words appearing at least 3 times while there
are 335,323 unique words appearing at least 50 times in Pinterest40M). The Im2Text dataset proposed
in [28] adopts a similar data collection process to ours by using 1 million images with 1 million user
annotated captions from Flickr. But its scale is still much smaller than our Pinterest40M dataset.
Recently, [34] proposed and released the YFCC100M dataset, which is a large-scale multimedia
dataset contains metadata of 100 million Flickr images. It provides rich information about images,
such as tags, titles, and locations where they were taken. The users' comments can be obtained
by querying the Flickr API. Because of the different functionality and user groups between Flickr
and Pinterest, the users' comments on Flickr images are quite different from those of Pinterest
(e.g. on Flickr, users tend to comment more on the photography techniques). This dataset provides
complementary information to our Pinterest40M dataset.
Word Similarity-Relatedness Evaluation The standard benchmarks, such as WordSim-353/WS-Sim [11, 3], MEN [5], and SimLex-999 [14], consist of a few hundred word pairs and their
similarity or relatedness scores. The word pairs are composed by asking human subjects to write
the first related, or similar, word that comes into their mind when presented with a concept word
(e.g. [27, 11]), or by randomly selecting frequent words in large text corpus and manually searching
for useful pairs (e.g. [5]). In this work, we are able to collect a large number of word/phrase pairs
This strawberry limeade
cake is fruity, refreshing,
and gorgeous! Those
lovely layers are
impossible to resist.
This is the place I will be
going (hopefully) on my first
date with Prince Stephen. It's
the palace gardens, and they
are gorgeous. I cannot wait to
get to know him and
exchange photography ideas!
Make two small fishtail
braids on each side,
then put them together
with a ponytail.
White and gold ornate library
with decorated ceiling, ironwork balcony, crystal
chandelier, and glass-covered
shelves. (I don't know if you're
allowed to read a beat-up
paperback in this room.)
This flopsy-wopsy who
just wants a break from
his walk. | 18 German
Shepherd Puppies Who
Need To Be Snuggled
Immediately
Figure 1: Sample images and their sample descriptions collected from Pinterest.
with good quality by mining them from the click data of Pinterest's image search system used by
millions of users. In addition, because this dataset is collected through a visual search system, it is
more suitable to evaluate multimodal embedding models. Another related evaluation is the analogy
task proposed in [25]. They ask the model questions like "man to woman is equal to king to what?" as
their evaluation. But such questions do not directly measure the word similarity or relatedness, and
cannot cover all the semantic relationships of million of words in the dictionary.
RNN for Language and Vision Our models are inspired by recent RNN-CNN based image captioning models [9, 24, 36, 16, 6, 18, 23], which can be viewed as a special case of the sequence-to-sequence learning framework [33, 7]. We adopt Gated Recurrent Units (GRUs [7]), a variation of the
simple RNN model.
Multimodal Word Embedding Models For pure text, one of the most effective approaches to learn
word embeddings is to train neural network models to predict a word given its context words in
a sentence (i.e. the continuous bag-of-word model [4]) or to predict the context words given the
current word (i.e. the skip-gram model [25]). There is a large literature on word embedding models
that utilize visual information. One type of methods takes a two-step strategy that first extracts text
and image features separately and then fuses them together using singular value decomposition [5],
stacked autoencoders [31], or even simple concatenation [17]. [13, 21, 19] learn the text and image
features jointly by fusing visual or perceptual information in a skip-gram model [25]. However,
because of the lack of large-scale multimodal datasets, they only associate visual content with a
pre-defined set of nouns (e.g. [21]) or perception domains (e.g. [14]) in the sentences, or focus on
abstract scenes (e.g. [19]). By contrast, our best performing model places a soft constraint between
visual features and all the words in the sentences by a weight sharing strategy as shown in Section 4.
3 Datasets
We constructed two datasets: one for training our multimodal word-embeddings (see Section 3.1)
and another one for the evaluation of the learned word-embeddings (see Section 3.2).
3.1 Training Dataset
Table 1: Scale comparison with other image description benchmarks.

                      Images    Sentences
    Flickr8K [15]     8K        40K
    Flickr30K [37]    30K       150K
    IAPR-TC12 [12]    20K       34K
    MS COCO [22]      200K      1M
    Im2Text [28]      1M        1M
    Pinterest40M      40M       300M

Pinterest is one of the largest repositories of Web images. Users commonly tag images with short descriptions and share the images (and descriptions) with others. Since a given image can be shared and tagged by multiple, sometimes thousands of, users, many images have a very rich set of descriptions, making this source of data ideal for training models with both text and image inputs.

The dataset is prepared in the following way: first, we crawled the publicly available data on Pinterest to construct our training dataset of more than 40 million
images. Each image is associated with an average of 12 sentences, and we removed duplicated or
short sentences with less than 4 words. The duplication detection is conducted by calculating the overlapped word unigram ratios.
Figure 2: The illustration of the positive word/phrase pair generation. We calculate a score for each annotation (i.e. a short phrase describing an item) by aggregating the click frequency of the items to
which it belongs and rank them according to the score. The final list of positive phrases are generated
from the top ranked phrases after removing phrases containing overlapping words with the user query
phrase. See text for details.
Some sample images and descriptions are shown in Figure 1. We
denote this dataset as the Pinterest40M dataset.
Our dataset contains 40 million images with 300 million sentences (around 3 billion words), which
is much larger than the previous image description datasets (see Table 1). In addition, because the
descriptions are annotated by users who expressed interest in the images, the descriptions in our
dataset are more natural and richer than the annotated image description datasets. In our dataset,
there are 335,323 unique words with a minimum occurrence count of 50, compared with 10,232
and 65,552 words appearing at least 3 times in MS COCO and IM2Text dataset respectively. To the
best of our knowledge, there is no previous paper that trains a multimodal RNN model on a dataset of
such scale.
3.2
Evaluation Datasets
This work proposes to use labeled phrase triplets; each triplet is a three-phrase tuple containing
phrase A, phrase B and phrase C, where A is considered as semantically closer to B than A is to
C. At testing time, we compute the distance in the word embedding space between A/B and A/C,
and consider a test triplet as positive if d(A, B) < d(A, C). This relative comparison approach was
commonly used to evaluate and compare different word embedding models [30].
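A minimal sketch (ours) of this relative comparison test, assuming an embed function that maps a phrase to a vector:

import numpy as np

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_accuracy(triplets, embed):
    """Fraction of (A, B, C) triplets with d(A, B) < d(A, C)."""
    hits = sum(
        cosine_dist(embed(a), embed(b)) < cosine_dist(embed(a), embed(c))
        for a, b, c in triplets
    )
    return hits / len(triplets)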
In order to generate a large number of phrase triplets, we rely on user-click data collected from Pinterest
image search system. At the end, we construct a large-scale evaluation dataset with 9.8 million
triplets (see Section 3.2.1), and its cleaned up gold standard version with 10 thousand triplets (see
Section 3.2.2).
3.2.1 The Raw Evaluation Dataset from User Clickthrough Data
It is very hard to obtain a large number of semantically similar or related word and phrase pairs.
This is one of the challenges for constructing a large-scale word/phrase similarity and relatedness
evaluation dataset. We address this challenge by utilizing the user clickthrough data from Pinterest's image search system; see Figure 2 for an illustration.
More specifically, given a query from a user (e.g. "hair styles"), the search system returns a list of
items, and each item is composed of an image and a set of annotations (i.e. short phrases or words
that describe the item). Please note that the same annotation can appear in multiple items, e.g., "hair tutorial" can describe items related to prom hair styles or ponytails. We derive a matching score
for each annotation by aggregating the click frequency of the items containing the annotation. The
annotations are then ranked according to the matching scores, and the top ranked annotations are
considered as the positive set of phrases or words with respect to the user query.
To increase the difficulty of this dataset, we remove the phrases that share common words with the
user query from the initial list of positive phrases. E.g. "hair tutorials" will be removed because the word "hair" is contained in the query phrase "hair styles". A stemmer in Python's "stemmer" package is also adopted to find words with the same root (e.g. "cake" and "cakes" are considered as the same word). This pruning step also prevents giving bias to methods which measure the similarity between
the positive phrase and the query phrase by counting the number of overlapping words between them.
In this way, we collected 9,778,508 semantically similar phrase pairs.
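The aggregation step can be sketched as follows (our illustration; the item fields clicks and annotations and the toy stemmer are hypothetical):

from collections import Counter

def stem(word):
    # toy placeholder; the paper uses Python's "stemmer" package
    return word.lower().rstrip("s")

def positive_phrases(query, items, top_k=20):
    """Rank annotations by the aggregated click counts of the items they
    belong to, then drop phrases sharing a (stemmed) word with the query."""
    scores = Counter()
    for item in items:                          # items returned for this query
        for ann in item["annotations"]:
            scores[ann] += item["clicks"]
    query_words = {stem(w) for w in query.split()}
    ranked = [p for p, _ in scores.most_common()
              if not query_words & {stem(w) for w in p.split()}]
    return ranked[:top_k]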
Table 2: Sample triplets from the Gold RP10K dataset.

    Base Phrase               Positive Phrase        Negative Phrase
    hair style                ponytail               pink nail
    summer lunch              salads sides           packaging bottle
    oil painting ideas        art tips               snickerdoodle muffins
    la multi ani birthdays    wishes                 tandoori
    teach activities          preschool              rental house ideas
    karting                   go carts               office waiting area
    looking down              the view               soft curls for medium hair
    black ceiling             home ideas             paleo potluck
    new marriage quotes       true love              winter travel packing
    sexy scientist costume    labs                   personal word wall
    framing a mirror          decorating bathroom    celebrity style inspiration
Previous word similarity/relatedness datasets (e.g. [11, 14]) manually annotated each word pair with
an absolute score reflecting how much the words in this pair are semantically related. In the testing
stage, a predicted similarity score list of the word pairs generated by the model in the dataset is
compared with the groundtruth score list. The Spearman's rank correlation between the two lists is
calculated as the score of the model. However, it is often too hard and expensive to label the absolute
related score and maintain the consistency across all the pairs in a large-scale dataset, even if we
average the scores of several annotators.
We adopt a simple strategy by composing triplets for the phrase pairs. More specifically, we randomly
sample negative phrases from a pool of 1 billion phrases. The negative phrase should not contain any
overlapping word (a stemmer is also adopted) with both of the phrases in the original phrase pair. In
this way, we construct 9,778,508 triplets with the format of (base phrase, positive phrase, negative
phrase). In the evaluation, a model should be able to distinguish the positive phrase from the negative
phrase by calculating their similarities with the base phrase in the embedding space. We denote this
dataset as Related Phrase 10M (RP10M) dataset.
3.2.2 The Cleaned-up Gold Standard Dataset
Because the raw Related Query 10M dataset is built upon user click information, it contains some
noisy triplets (e.g. the positive and base phrase are not related, or the negative phrase is strongly
related to the base phrase). To create a gold standard dataset, we conduct a clean up step using the
crowdsourcing platform CrowdFlower [1] to remove these inaccurate triplets. A sample question and
choices for the crowdsourcing annotators are shown in Figure 3. The positive and negative phrases in
a triplet are randomly given as choice "A" or "B". The annotators are required to choose which phrase
is more related to the base phrase, or if they are both related or unrelated. To help the annotators
understand the meaning of the phrases, they can click on the phrases to get Google search results.
We annotate 21,000 triplets randomly sampled from the raw Related Query 10M dataset. Three to
five annotators are assigned to each question. A triplet is accepted and added in the final cleaned
up dataset only if more than 50% of the annotators agree with the original positive and negative
label of the queries (note that they do not know which one is positive in the annotation process). In
practice, 70% of the selected phrase triplets have more than 3 annotators agreeing. This leads to a
gold standard dataset with 10,674 triplets. We denote this dataset as Gold Phrase Query 10K (Gold
RP10K) dataset.
Figure 3: The interface for the annotators. They are required to choose which phrase (positive and negative phrases will be randomly labeled as "A" or "B") is more related to the base phrase. They can
click on the phrases to see Google search results.
Figure 4: The illustration of the structures of our model A, B, and C. We use a CNN to extract visual
representations and use a RNN to model sentences. The numbers on the bottom right corner of
the layers indicate their dimensions. We use a sampled softmax layer with 1024 negative words to
accelerate the training. Model A, B, and C differ from each other by the way that we fuse the visual
representation into the RNN. See text for more details.
This dataset is very challenging and a successful model should be able to capture a variety of
semantic relationships between words or phrases. Some sample triplets are shown in Table 2.
4 The Multimodal Word Embedding Models
We propose three RNN-CNN based models to learn the multimodal word embeddings, as illustrated in
Figure 4. All of the models have two parts in common: a Convolutional Neural Network (CNN [20])
to extract visual representations and a Recurrent Neural Network (RNN [10]) to model sentences.
For the CNN part, we resize the images to 224 × 224, and adopt the 16-layer VGGNet [32] as the
visual feature extractor. The binarized activation (i.e. 4096 binary vectors) of the layer before its
SoftMax layer are used as the image features and will be mapped to the same space of the state of
RNN (Model A, B) or the word embeddings (Model C), depends on the structure of the model, by a
fully connected layer and a Rectified Linear Unit function (ReLU [26], ReLU(x) = max(0, x)).
For the RNN part, we use a Gated Recurrent Unit (GRU [7]), a recently very popular RNN structure, with a 512-dimensional state cell. The state of the GRU h_t for each word with index t in a sentence can be represented as:

r_t = σ(W_r [e_t, h_{t−1}] + b_r)    (1)
u_t = σ(W_u [e_t, h_{t−1}] + b_u)    (2)
c_t = tanh(W_c [e_t, r_t ⊙ h_{t−1}] + b_c)    (3)
h_t = u_t ⊙ h_{t−1} + (1 − u_t) ⊙ c_t    (4)

where ⊙ represents the element-wise product, σ(·) is the sigmoid function, e_t denotes the word embedding for the word w_t, and r_t and u_t are the reset gate and update gate respectively. The inputs of
the GRU are words in a sentence and it is trained to predict the next words given the previous words.
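A minimal NumPy sketch (ours) of one GRU step following Eqns. 1-4; the weight shapes are assumed, and [e_t, h_{t−1}] is realized by concatenation:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e_t, h_prev, Wr, Wu, Wc, br, bu, bc):
    """One GRU update (Eqns. 1-4). e_t: word embedding; h_prev: previous state.
    Each W* has shape (state_dim, embed_dim + state_dim)."""
    x = np.concatenate([e_t, h_prev])
    r = sigmoid(Wr @ x + br)                          # reset gate
    u = sigmoid(Wu @ x + bu)                          # update gate
    c = np.tanh(Wc @ np.concatenate([e_t, r * h_prev]) + bc)
    return u * h_prev + (1 - u) * c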
We add all the words that appear more than 50 times in the Pinterest40M dataset into the dictionary.
The final vocabulary size is 335,323. Because the vocabulary size is very huge, we adopt the sampled
SoftMax loss [8] to accelerate the training. For each training step, we sample 1024 negative words
according to their log frequency in the training data and calculate the sampled SoftMax loss for the
positive word. This sampled SoftMax loss function of the RNN part is adopted with Model A, B and
C. Minimizing this loss function can be considered as approximately maximizing the probability of
the sentences in the training set.
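A simplified sketch (ours) of the idea behind the sampled SoftMax: the normalizer runs over the positive word plus the sampled negatives instead of the full 335K-word vocabulary (practical implementations also correct for the sampling proposal, which we omit here):

import numpy as np

def sampled_softmax_loss(state, U, b, target, neg_ids):
    """Approximate -log p(target | state) using only sampled negative words.
    U: (vocab, dim) output embedding matrix; neg_ids: sampled negative indices."""
    ids = np.concatenate([[target], neg_ids])
    logits = U[ids] @ state + b[ids]          # scores for positive + negatives
    logits -= logits.max()                    # numerical stability
    log_z = np.log(np.exp(logits).sum())
    return -(logits[0] - log_z)               # negative log-likelihood of target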
As illustrated in Figure 4, Model A, B and C have different ways to fuse the visual information in the
word embeddings. Model A is inspired by the CNN-RNN based image captioning models [36, 23]. We map the visual representation into the same space as the GRU states to initialize them (i.e. set h_0 = ReLU(W_I f_I)). Since the visual information is fed after the embedding layer, it is usually hard to ensure that this information is fused in the learned embeddings. We adopt a transposed weight sharing strategy proposed in [23] that was originally used to enhance the model's ability to learn novel visual concepts. More specifically, we share the weight matrix of the SoftMax layer U_M with the matrix U_w of the word embedding layer in a transposed manner. In this way, U_w^T is learned to decode the visual information and is enforced to incorporate this information into the word embedding matrix U_w.
Table 3: Performance comparison of our Model A, B, C, their variants and a state-of-the-art skip-gram model [25] trained on the Google News dataset with 300 billion words.

                                                                   Gold RP10K    RP10M    dim
    Pure text RNN                                                  0.748         0.633    128
    Model A without weight sharing                                 0.773         0.681    128
    Model A (weight shared multimodal RNN)                         0.843         0.725    128
    Model B (direct visual supervisions on the final RNN state)    0.705         0.646    128
    Model C (direct visual supervisions on the embeddings)         0.771         0.687    128
    Word2Vec-GoogleNews [25]                                       0.716         0.596    300
    GloVe-Twitter [29]                                             0.693         0.617    200
In the experiments, we show that this strategy significantly improves the performance of the
trained embeddings. Model A is trained by maximizing the log likelihood of the next words given the
previous words conditioned on the visual representations, similar to the image captioning models.
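In code, the transposed weight sharing amounts to using a single matrix for both the embedding lookup and the output projection; a sketch (ours, with assumed shapes and the intermediate 512-to-128 projection of Figure 4):

import numpy as np

class TiedEmbedding:
    """Word embedding matrix U_w reused (transposed) as the SoftMax projection,
    so U_M = U_w^T and gradients flowing through the output layer also
    update the word embeddings."""
    def __init__(self, vocab_size, dim, rng=None):
        rng = rng or np.random.default_rng()
        self.Uw = rng.normal(0, 0.01, size=(dim, vocab_size))  # (dim, vocab)

    def embed(self, word_id):
        return self.Uw[:, word_id]            # column lookup: e_t

    def logits(self, state, W_state_to_dim, b):
        # project the 512-d GRU state to the 128-d embedding space, then
        # score all words with U_w^T (the shared, transposed matrix)
        return self.Uw.T @ (W_state_to_dim @ state) + b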
Compared to Model A, we adopt a more direct way to utilize the visual information for Model B and
Model C. We add direct supervisions of the final state of the GRU (Model B) or the word embeddings
(Model C), by adding new loss terms, in addition to the negative log-likelihood loss from the sampled
SoftMax layer:
L_state = (1/n) Σ_s ‖ h^s_{l_s} − ReLU(W_I f^s_I) ‖    (5)

L_emb = (1/n) Σ_s (1/l_s) Σ_t ‖ e^s_t − ReLU(W_I f^s_I) ‖    (6)
where l_s is the length of the sentence s in a mini-batch with n sentences; Eqn. 5 and Eqn. 6 denote the additional losses for Model B and C respectively. The added loss term is balanced against the negative log-likelihood loss from the sampled SoftMax layer by a weight hyperparameter λ.
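A sketch (ours) of the two auxiliary losses in (5) and (6); per-sentence final states, word embeddings, and image features are assumed inputs, and W_I has the appropriate output dimension for each loss:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def state_loss(h_finals, img_feats, W_I):
    """Eqn. 5: pull the final GRU state of each sentence toward its image feature."""
    return np.mean([np.linalg.norm(h - relu(W_I @ f))
                    for h, f in zip(h_finals, img_feats)])

def embedding_loss(word_embs, img_feats, W_I):
    """Eqn. 6: pull every word embedding in a sentence toward the image feature."""
    total = 0.0
    for embs, f in zip(word_embs, img_feats):     # embs: (l_s, dim) per sentence
        target = relu(W_I @ f)
        total += np.mean([np.linalg.norm(e - target) for e in embs])
    return total / len(word_embs)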
5 Experiments
5.1 Training Details
We convert the words in all sentences of the Pinterest40M dataset to lower case. All the non-alphanumeric characters are removed. A start sign ⟨bos⟩ and an end sign ⟨eos⟩ are added at the beginning and the end of all the sentences respectively.
We use the stochastic gradient descent method with a mini-batch size of 256 sentences and a learning
rate of 1.0. The gradient is clipped to 10.0. We train the models until the loss does not decrease on a small validation set with 10,000 images and their descriptions. The models will scan the dataset for roughly 5 epochs. The bias terms of the gates (i.e. b_r and b_u in Eqn. 1 and 2) in the GRU layer
are initialized to 1.0.
5.2 Evaluation Details
We use the trained embedding models to extract embeddings for all the words in a phrase and
aggregate them by average pooling to get the phrase representation. We then check whether the
cosine distance between the (base phrase, positive phrase) pair is smaller than that between the (base phrase, negative phrase) pair. The average precision over all the triplets is reported for both the raw Related Phrases 10M (RP10M) dataset and the Gold standard Related Phrases 10K (Gold RP10K) dataset.
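Concretely, the phrase representation used here is the mean of its word vectors; a sketch (ours) that plugs directly into the triplet test sketched in Section 3.2:

import numpy as np

def phrase_vector(phrase, word_vecs):
    """Average-pool the embeddings of the words in a phrase.
    word_vecs: dict mapping word -> np.ndarray embedding (assumed format)."""
    vecs = [word_vecs[w] for w in phrase.lower().split() if w in word_vecs]
    return np.mean(vecs, axis=0)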
5.3 Results on the Gold RP10K and RP10M datasets
We evaluate and compare our Model A, B, C, their variants and several strong baselines on our
RP10M and Gold RP10K datasets. The results are shown in Table 3. "Pure Text RNN" denotes the baseline model without input of the visual features trained on Pinterest40M. It has the same model structure as our Model A except that we initialize the hidden state of the GRU with a zero vector. "Model A without weight sharing" denotes a variant of Model A where the weight matrix U_w of the word embedding layer is not shared with the weight matrix U_M of the sampled SoftMax layer (see Figure 4 for details).^2 "Word2Vec-GoogleNews" denotes the state-of-the-art off-the-shelf word
^2 We also tried to adopt the weight sharing strategy in Model B and C, but the performance is very similar to the non-weight sharing version.
embedding models of Word2Vec [25] trained on the Google-News data (about 300 billion words).
"GloVe-Twitter" denotes the GloVe model [29] trained on the Twitter data (about 27 billion words).
They are pure text models, but trained on a very large dataset (our model only trains on 3 billion
words). Comparing these models, we can draw the following conclusions:
• Under our evaluation criteria, visual information significantly helps the learning of word embeddings when the model successfully fuses the visual and text information together. E.g., our Model A outperforms the Word2Vec model by 9.5% and 9.2% on the Gold RP10K and RP10M datasets respectively. Model C also outperforms the pure text RNN baselines.
• The weight sharing strategy is crucial to enhance the ability of Model A to fuse visual information into the learned embeddings. E.g., our Model A outperforms the baseline without this sharing strategy by 7.0% and 4.4% on Gold RP10K and RP10M respectively.
• Model A performs the best among all the three models. It shows that the soft supervision imposed by the weight-sharing strategy is more effective than direct supervision. This is not surprising since not all the words are semantically related to the content of the image, and a direct and hard constraint might hinder the learning of the embeddings for these words.
• Model B does not perform very well. The reason might be that most of the sentences have more than 8 words and the gradient from the final state loss term L_state cannot be easily passed to the embeddings of all the words in the sentence.
• All the models trained on the Pinterest40M dataset perform better than the skip-gram model [25] trained on a much larger dataset of 300 billion words.
6 Discussion
In this paper, we investigate the task of training and evaluating word embedding models. We introduce
Pinterest40M, the largest image dataset with sentence descriptions to the best of our knowledge,
and construct two evaluation datasets (i.e. RP10M and Gold RP10K) for word/phrase similarity and
relatedness evaluation. Based on these datasets, we propose several CNN-RNN based multimodal
models to learn effective word embeddings. Experiments show that visual information significantly
helps the training of word embeddings, and our proposed model successfully incorporates such
information into the learned embeddings.
There are lots of possible extensions of the proposed model and the dataset. E.g., we plan to separate
semantically similar or related phrase pairs from the Gold RP10K dataset to better understand the
performance of the methods, similar to [3]. We will also give relatedness or similarity scores for
the pairs (base phrase, positive phrase) to enable same evaluation strategy as previous datasets (e.g.
[5, 11]). Finally, we plan to propose better models for phrase representations.
Acknowledgement We are grateful to James Rubinstein for setting up the crowdsourcing experiments
for dataset cleanup. We thank Veronica Mapes, Pawel Garbacki, and Leon Wong for discussions
and support. We appreciate the comments and suggestions from anonymous reviewers of NIPS
2016. This work is partly supported by the Center for Brains, Minds and Machines NSF STC award
CCF-1231216 and the Army Research Office ARO 62250-CS.
Figure 5: t-SNE [35] visualization of the 500 most frequent words learned by our Model A.
References
[1] The crowdflower platform. https://www.crowdflower.com/.
[2] Pinterest. https://www.pinterest.com/.
[3] E. Agirre, E. Alfonseca, K. Hall, J. Kravalova, M. Paşca, and A. Soroa. A study on similarity and relatedness using distributional and wordnet-based approaches. In NAACL HLT, pages 19-27, 2009.
[4] Y. Bengio, H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Gauvain. Neural probabilistic language models. In Innovations in Machine Learning, pages 137-186. Springer, 2006.
[5] E. Bruni, N.-K. Tran, and M. Baroni. Multimodal distributional semantics. JAIR, 49:1-47, 2014.
[6] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. arXiv preprint arXiv:1411.5654, 2014.
[7] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[8] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. In ACL, 2015.
[9] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
[10] J. L. Elman. Finding structure in time. Cognitive Science, 14(2):179-211, 1990.
[11] L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: The concept revisited. In WWW, pages 406-414. ACM, 2001.
[12] M. Grubinger, P. Clough, H. Müller, and T. Deselaers. The iapr tc-12 benchmark: A new evaluation resource for visual information systems. In International Workshop OntoImage, pages 13-23, 2006.
[13] F. Hill and A. Korhonen. Learning abstract concept embeddings from multi-modal data: Since you probably can't see what i mean. In EMNLP, pages 255-265. Citeseer, 2014.
[14] F. Hill, R. Reichart, and A. Korhonen. Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 2015.
[15] M. Hodosh, P. Young, and J. Hockenmaier. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, pages 853-899, 2013.
[16] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, pages 3128-3137, 2015.
[17] D. Kiela and L. Bottou. Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In EMNLP, pages 36-45. Citeseer, 2014.
[18] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
[19] S. Kottur, R. Vedantam, J. M. Moura, and D. Parikh. Visual word2vec (vis-w2v): Learning visually grounded word embeddings using abstract scenes. arXiv preprint arXiv:1511.07067, 2015.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097-1105, 2012.
[21] A. Lazaridou, N. T. Pham, and M. Baroni. Combining language and vision with a multimodal skip-gram model. arXiv preprint arXiv:1501.02598, 2015.
[22] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740-755. Springer, 2014.
[23] J. Mao, X. Wei, Y. Yang, J. Wang, Z. Huang, and A. L. Yuille. Learning like a child: Fast novel visual concept learning from sentence descriptions of images. In ICCV, pages 2533-2541, 2015.
[24] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). In ICLR, 2015.
[25] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119, 2013.
[26] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807-814, 2010.
[27] D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. The university of south florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers, 36(3):402-407, 2004.
[28] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2text: Describing images using 1 million captioned photographs. In Advances in Neural Information Processing Systems, pages 1143-1151, 2011.
[29] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543, 2014.
[30] T. Schnabel, I. Labutov, D. Mimno, and T. Joachims. Evaluation methods for unsupervised word embeddings. In EMNLP, pages 298-307, 2015.
[31] C. Silberer and M. Lapata. Learning grounded meaning representations with autoencoders. In ACL, pages 721-732, 2014.
[32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[33] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112, 2014.
[34] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016.
[35] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. JMLR, 9:2579-2605, 2008.
[36] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, pages 3156-3164, 2015.
[37] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In ACL, pages 479-488, 2014.
barcelona:1 nip:5 address:1 able:3 usually:1 perception:1 hendricks:1 challenge:4 built:1 max:1 garden:1 hot:1 suitable:1 event:1 difficulty:2 ranked:3 beautiful:1 natural:1 rely:1 agirre:1 improve:2 library:1 picture:1 vggnet:1 metadata:1 extract:4 occurence:1 text:22 epoch:1 literature:1 acknowledgement:1 python:1 poland:1 relative:1 fully:2 loss:12 men:1 generation:2 suggestion:1 querying:1 analogy:1 annotator:8 validation:1 generator:1 downloaded:1 imposes:1 share:3 translation:2 eccv:1 supported:1 free:1 side:2 bias:2 understand:2 stemmer:3 comprehensively:1 absolute:2 benefit:1 mimno:1 van:2 distributed:1 vocabulary:3 calculated:1 dimension:1 evaluating:4 world:2 rich:4 tosequence:1 adopts:1 collection:1 gram:5 commonly:2 erhan:1 pruning:1 relatedness:10 global:1 kiela:1 corpus:2 belongie:1 vedantam:1 don:1 search:10 continuous:1 triplet:18 table:6 learn:6 composing:1 bottou:1 constructing:2 domain:1 stc:1 zitnick:2 did:1 refreshing:1 dense:1 junhua:2 heosi:1 allowed:1 collector:1 complementary:1 child:1 xu:1 board:1 precision:1 mao:3 wish:1 candidate:1 house:1 perceptual:1 jmlr:1 extractor:1 donahue:1 young:2 removing:1 down:1 unigram:1 list:7 veronica:1 incorporating:2 consist:1 workshop:1 socher:1 adding:1 effectively:1 importance:1 pennington:1 mirror:1 conditioned:1 chen:2 tc:1 led:1 fc:1 photograph:1 explore:1 army:1 rohrbach:1 wolfman:1 visual:43 prevents:1 expressed:1 contained:1 vinyals:2 springer:2 acm:2 nair:1 viewed:1 sorted:1 king:1 simlex:2 room:1 shared:3 man:1 content:2 hard:4 specifically:4 glove:4 except:1 semantically:10 wt:4 wordnet:1 korhonen:2 multimedia:2 partly:2 accepted:1 la:1 saenko:1 berg:1 support:1 scan:1 cleanup:1 schnabel:1 kulkarni:1 incorporate:1 evaluate:6 correlated:1 |
VIME: Variational Information Maximizing Exploration
Rein Houthooft†‡§, Xi Chen†§, Yan Duan†§, John Schulman†§, Filip De Turck‡, Pieter Abbeel†§
† UC Berkeley, Department of Electrical Engineering and Computer Sciences
‡ Ghent University - imec, Department of Information Technology
§ OpenAI
Abstract
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional
deep RL scenarios. As such, most contemporary RL relies on simple heuristics
such as ε-greedy exploration or adding Gaussian noise to the controls. This paper
introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief
of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous
state and action spaces. VIME modifies the MDP reward function, and can be
applied with several different underlying RL algorithms. We demonstrate that
VIME achieves significantly better performance compared to heuristic exploration
methods across a variety of continuous control tasks and algorithms, including
tasks with very sparse rewards.
1 Introduction
Reinforcement learning (RL) studies how an agent can maximize its cumulative reward in a previously
unknown environment, which it learns about through experience. A long-standing problem is how to
manage the trade-off between exploration and exploitation. In exploration, the agent experiments
with novel strategies that may improve returns in the long run; in exploitation, it maximizes rewards
through behavior that is known to be successful. An effective exploration strategy allows the agent
to generate trajectories that are maximally informative about the environment. For small tasks, this
trade-off can be handled effectively through Bayesian RL [1] and PAC-MDP methods [2–6], which
offer formal guarantees. However, these guarantees assume discrete state and action spaces. Hence, in
settings where state-action discretization is infeasible, many RL algorithms use heuristic exploration
strategies. Examples include acting randomly using ε-greedy or Boltzmann exploration [7], and
utilizing Gaussian noise on the controls in policy gradient methods [8]. These heuristics often rely on
random walk behavior which can be highly inefficient, for example Boltzmann exploration requires
a training time exponential in the number of states in order to solve the well-known n-chain MDP
[9]. In between formal methods and simple heuristics, several works have proposed to address the
exploration problem using less formal, but more expressive methods [10–14]. However, none of
them fully address exploration in continuous control, as discretization of the state-action space scales
exponentially in its dimensionality. For example, the Walker2D task [15] has a 26-dim state-action
space. If we assume a coarse discretization into 10 bins for each dimension, a table of state-action
visitation counts would require $10^{26}$ entries.
This paper proposes a curiosity-driven exploration strategy, making use of information gain about the
agent's internal belief of the dynamics model as a driving force. This principle can be traced back
to the concepts of curiosity and surprise [16?18]. Within this framework, agents are encouraged
to take actions that result in states they deem surprising, i.e., states that cause large updates to the
dynamics model distribution. We propose a practical implementation of measuring information gain
using variational inference. Herein, the agent's current understanding of the environment dynamics is
represented by a Bayesian neural network (BNN) [19, 20]. We also show how this can be interpreted
as measuring compression improvement, a proposed model of curiosity [21]. In contrast to previous
curiosity-based approaches [10, 22], our model scales naturally to continuous state and action spaces.
The presented approach is evaluated on a range of continuous control tasks, and multiple underlying
RL algorithms. Experimental results show that VIME achieves significantly better performance than
naïve exploration strategies.
2 Methodology
In Section 2.1, we establish notation for the subsequent equations. Next, in Section 2.2, we explain
the theoretical foundation of curiosity-driven exploration. In Section 2.3 we describe how to adapt this
idea to continuous control, and we show how to build on recent advances in variational inference for
Bayesian neural networks (BNNs) to make this formulation practical. Thereafter, we make explicit
the intuitive link between compression improvement and the variational lower bound in Section 2.4.
Finally, Section 2.5 describes how our method is practically implemented.
2.1 Preliminaries
This paper assumes a finite-horizon discounted Markov decision process (MDP), defined by
$(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \rho_0, \gamma, T)$, in which $\mathcal{S} \subseteq \mathbb{R}^n$ is a state set, $\mathcal{A} \subseteq \mathbb{R}^m$ an action set, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_{\geq 0}$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ a bounded reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_{\geq 0}$ an initial state distribution, $\gamma \in (0, 1]$ a discount factor, and $T$ the horizon. States and actions viewed as random variables are abbreviated as $S$ and $A$. The presented models are based on the optimization of a stochastic policy $\pi_\alpha : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_{\geq 0}$, parametrized by $\alpha$. Let $\eta(\pi_\alpha)$ denote its expected discounted return: $\eta(\pi_\alpha) = \mathbb{E}_\tau\big[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\big]$, where $\tau = (s_0, a_0, \ldots)$ denotes the whole trajectory, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\alpha(a_t \mid s_t)$, and $s_{t+1} \sim \mathcal{P}(s_{t+1} \mid s_t, a_t)$.
2.2 Curiosity
Our method builds on the theory of curiosity-driven exploration [16, 17, 21, 22], in which the agent
engages in systematic exploration by seeking out state-action regions that are relatively unexplored.
The agent models the environment dynamics via a model $p(s_{t+1} \mid s_t, a_t; \theta)$, parametrized by the random variable $\Theta$ with values $\theta \in \Theta$. Assuming a prior $p(\theta)$, it maintains a distribution over dynamic models through a distribution over $\theta$, which is updated in a Bayesian manner (as opposed to a point estimate). The history of the agent up until time step $t$ is denoted as $\xi_t = \{s_1, a_1, \ldots, s_t\}$.
According to curiosity-driven exploration [17], the agent should take actions that maximize the
reduction in uncertainty about the dynamics. This can be formalized as maximizing the sum of
reductions in entropy
\[ \sum_t \big( H(\Theta \mid \xi_t, a_t) - H(\Theta \mid S_{t+1}, \xi_t, a_t) \big), \tag{1} \]
through a sequence of actions $\{a_t\}$. According to information theory, the individual terms equal the mutual information between the next state distribution $S_{t+1}$ and the model parameter $\Theta$, namely $I(S_{t+1}; \Theta \mid \xi_t, a_t)$. Therefore, the agent is encouraged to take actions that lead to states that are maximally informative about the dynamics model. Furthermore, we note that
\[ I(S_{t+1}; \Theta \mid \xi_t, a_t) = \mathbb{E}_{s_{t+1} \sim \mathcal{P}(\cdot \mid \xi_t, a_t)}\, D_{\mathrm{KL}}\big[ p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t) \big], \tag{2} \]
the KL divergence from the agent's new belief over the dynamics model to the old one, taking expectation over all possible next states according to the true dynamics $\mathcal{P}$. This KL divergence can be interpreted as information gain.
If calculating the posterior dynamics distribution is tractable, it is possible to optimize Eq. (2)
directly by maintaining a belief over the dynamics model [17]. However, this is not generally the
case. Therefore, a common practice [10, 23] is to use RL to approximate planning for maximal mutual information along a trajectory, $\sum_t I(S_{t+1}; \Theta \mid \xi_t, a_t)$, by adding each term $I(S_{t+1}; \Theta \mid \xi_t, a_t)$ as an intrinsic reward, which captures the agent's surprise in the form of a reward function. This is practically realized by taking actions $a_t \sim \pi_\alpha(s_t)$ and sampling $s_{t+1} \sim \mathcal{P}(\cdot \mid s_t, a_t)$ in order to add $D_{\mathrm{KL}}[p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t)]$ to the external reward. The trade-off between exploitation and exploration can now be realized explicitly as follows:
\[ r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta\, D_{\mathrm{KL}}\big[ p(\theta \mid \xi_t, a_t, s_{t+1}) \,\|\, p(\theta \mid \xi_t) \big], \tag{3} \]
with $\eta \in \mathbb{R}_+$ a hyperparameter controlling the urge to explore. In conclusion, the biggest practical issue with maximizing information gain for exploration is that the computation of Eq. (3) requires calculating the posterior $p(\theta \mid \xi_t, a_t, s_{t+1})$, which is generally intractable.
2.3 Variational Bayes
We propose a tractable solution to maximize the information gain objective presented in the previous
section. In a purely Bayesian setting, we can derive the posterior distribution given a new state-action
pair through Bayes' rule as
\[ p(\theta \mid \xi_t, a_t, s_{t+1}) = \frac{p(\theta \mid \xi_t)\, p(s_{t+1} \mid \xi_t, a_t; \theta)}{p(s_{t+1} \mid \xi_t, a_t)}, \tag{4} \]
with $p(\theta \mid \xi_t, a_t) = p(\theta \mid \xi_t)$ as actions do not influence beliefs about the environment [17]. Herein, the denominator is computed through the integral
\[ p(s_{t+1} \mid \xi_t, a_t) = \int_\Theta p(s_{t+1} \mid \xi_t, a_t; \theta)\, p(\theta \mid \xi_t)\, d\theta. \tag{5} \]
In general, this integral tends to be intractable when using highly expressive parametrized models
(e.g., neural networks), which are often needed to accurately capture the environment model in
high-dimensional continuous control.
We propose a practical solution through variational inference [24]. Herein, we embrace the fact that
calculating the posterior $p(\theta \mid \mathcal{D})$ for a data set $\mathcal{D}$ is intractable. Instead we approximate it through an alternative distribution $q(\theta; \phi)$, parameterized by $\phi$, by minimizing $D_{\mathrm{KL}}[q(\theta; \phi) \,\|\, p(\theta \mid \mathcal{D})]$. This is done through maximization of the variational lower bound $L[q(\theta; \phi), \mathcal{D}]$:
\[ L[q(\theta; \phi), \mathcal{D}] = \mathbb{E}_{\theta \sim q(\cdot; \phi)}[\log p(\mathcal{D} \mid \theta)] - D_{\mathrm{KL}}[q(\theta; \phi) \,\|\, p(\theta)]. \tag{6} \]
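To make the bound concrete, the following is a minimal NumPy sketch, not the authors' implementation, of a Monte Carlo estimate of Eq. (6) for a single-parameter factorized Gaussian posterior under a toy Gaussian likelihood; the data model, sample count, and all names are illustrative assumptions.

```python
import numpy as np

def elbo_estimate(mu, sigma, data, prior_mu=0.0, prior_sigma=1.0, n_samples=64, seed=0):
    """Monte Carlo estimate of L[q(theta; phi), D] in Eq. (6) for a factorized
    Gaussian posterior and prior, under a toy likelihood where each datum is
    N(x_j | theta, 1) with a single scalar parameter. Illustrative only."""
    rng = np.random.default_rng(seed)
    # theta ~ q(.; phi) via the reparametrization theta = mu + sigma * eps.
    theta = mu + sigma * rng.standard_normal(n_samples)
    # E_q[log p(D | theta)], averaged over the posterior samples.
    loglik = -0.5 * ((data[None, :] - theta[:, None]) ** 2 + np.log(2 * np.pi)).sum(axis=1)
    # KL[q || p] between univariate Gaussians is available in closed form.
    kl = (np.log(prior_sigma / sigma)
          + (sigma ** 2 + (mu - prior_mu) ** 2) / (2 * prior_sigma ** 2) - 0.5)
    return loglik.mean() - kl

data = np.random.default_rng(1).normal(0.5, 1.0, size=20)
print(elbo_estimate(mu=0.0, sigma=1.0, data=data))
```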
Rather than computing information gain in Eq. (3) explicitly, we compute an approximation to it,
leading to the following total reward:
\[ r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta\, D_{\mathrm{KL}}[q(\theta; \phi_{t+1}) \,\|\, q(\theta; \phi_t)], \tag{7} \]
with $\phi_{t+1}$ the updated and $\phi_t$ the old parameters representing the agent's belief. Natural candidates
for parametrizing the agent's dynamics model are Bayesian neural networks (BNNs) [19], as they
maintain a distribution over their weights. This allows us to view the BNN as an infinite neural
network ensemble by integrating out its parameters:
\[ p(y \mid x) = \int_\Theta p(y \mid x; \theta)\, q(\theta; \phi)\, d\theta. \tag{8} \]
In particular, we utilize a BNN parametrized by a fully factorized Gaussian distribution [20]. Practical
BNN implementation details are deferred to Section 2.5, while we give some intuition into the
behavior of BNNs in the appendix.
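As an illustration of Eq. (8), the sketch below approximates the predictive distribution of a tiny one-hidden-layer network by averaging forward passes over weight samples drawn from a fully factorized Gaussian $q(\theta; \phi)$; the layer sizes and parameter values are placeholders rather than the network used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Variational parameters phi = {mu, rho} per weight matrix; the standard
# deviation is sigma = log(1 + e^rho), matching the parametrization of Section 2.5.
mu_w1, rho_w1 = rng.normal(size=(3, 8)), -3.0 * np.ones((3, 8))
mu_w2, rho_w2 = rng.normal(size=(8, 1)), -3.0 * np.ones((8, 1))

def sample_forward(x):
    """One draw theta ~ q(.; phi), then a deterministic forward pass."""
    w1 = mu_w1 + np.log1p(np.exp(rho_w1)) * rng.standard_normal(mu_w1.shape)
    w2 = mu_w2 + np.log1p(np.exp(rho_w2)) * rng.standard_normal(mu_w2.shape)
    return np.tanh(x @ w1) @ w2

# Monte Carlo version of Eq. (8): average the network output over weight samples.
x = np.ones((1, 3))
preds = np.stack([sample_forward(x) for _ in range(200)])
print("predictive mean:", preds.mean(), "| predictive std:", preds.std())
```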
2.4 Compression
It is possible to derive an interesting relationship between compression improvement, an intrinsic reward objective defined in [25], and the information gain of Eq. (2). In [25], the agent's curiosity is
equated with compression improvement, measured through $C(\xi_t; \phi_{t-1}) - C(\xi_t; \phi_t)$, where $C(\xi; \phi)$ is the description length of $\xi$ using $\phi$ as a model. Furthermore, it is known that the negative variational lower bound can be viewed as the description length [19]. Hence, we can write compression improvement as $L[q(\theta; \phi_t), \xi_t] - L[q(\theta; \phi_{t-1}), \xi_t]$. In addition, an alternative formulation of the variational lower bound in Eq. (6) is given by
\[ \log p(\mathcal{D}) = \underbrace{\int_\Theta q(\theta; \phi) \log \frac{p(\theta, \mathcal{D})}{q(\theta; \phi)}\, d\theta}_{L[q(\theta;\phi),\,\mathcal{D}]} + D_{\mathrm{KL}}[q(\theta; \phi) \,\|\, p(\theta \mid \mathcal{D})]. \tag{9} \]
Thus, compression improvement can now be written as
\[ \big( \log p(\xi_t) - D_{\mathrm{KL}}[q(\theta; \phi_t) \,\|\, p(\theta \mid \xi_t)] \big) - \big( \log p(\xi_t) - D_{\mathrm{KL}}[q(\theta; \phi_{t-1}) \,\|\, p(\theta \mid \xi_t)] \big). \tag{10} \]
If we assume that $\phi_t$ perfectly optimizes the variational lower bound for the history $\xi_t$, then $D_{\mathrm{KL}}[q(\theta; \phi_t) \,\|\, p(\theta \mid \xi_t)] = 0$, which occurs when the approximation equals the true posterior, i.e., $q(\theta; \phi_t) = p(\theta \mid \xi_t)$. Hence, compression improvement becomes $D_{\mathrm{KL}}[p(\theta \mid \xi_{t-1}) \,\|\, p(\theta \mid \xi_t)]$. Therefore, optimizing for compression improvement comes down to optimizing the KL divergence from the posterior given the past history $\xi_{t-1}$ to the posterior given the total history $\xi_t$. As such, we arrive at an alternative way to encode curiosity than information gain, namely $D_{\mathrm{KL}}[p(\theta \mid \xi_t) \,\|\, p(\theta \mid \xi_t, a_t, s_{t+1})]$, its reversed KL divergence. In experiments, we noticed no significant difference between the two KL divergence variants. This can be explained as both variants are locally equal when introducing small changes to the parameter distributions. Investigation of how to combine both information gain and compression improvement is deferred to future work.
2.5 Implementation
The complete method is summarized in Algorithm 1. We first set forth implementation and
parametrization details of the dynamics BNN. The BNN weight distribution $q(\theta; \phi)$ is given by the fully factorized Gaussian distribution [20]:
\[ q(\theta; \phi) = \prod_{i=1}^{|\Theta|} \mathcal{N}(\theta_i \mid \mu_i; \sigma_i^2). \tag{11} \]
Hence, $\phi = \{\mu, \sigma\}$, with $\mu$ the Gaussian's mean vector and $\sigma$ the covariance matrix diagonal. This is particularly convenient as it allows for a simple analytical formulation of the KL divergence. This is described later in this section. Because of the restriction $\sigma > 0$, the standard deviation of the Gaussian BNN parameter is parametrized as $\sigma = \log(1 + e^\rho)$, with $\rho \in \mathbb{R}$ [20].
Now the training of the dynamics BNN through optimization of the variational lower bound is
described. The second term in Eq. (6) is approximated through sampling: $\mathbb{E}_{\theta \sim q(\cdot;\phi)}[\log p(\mathcal{D} \mid \theta)] \approx \frac{1}{N} \sum_{i=1}^{N} \log p(\mathcal{D} \mid \theta_i)$, with $N$ samples drawn according to $\theta_i \sim q(\cdot; \phi)$ [20]. Optimizing the variational lower bound in Eq. (6) in combination with the reparametrization trick is called stochastic gradient variational Bayes (SGVB) [26] or Bayes by Backprop [20]. Furthermore, we make use of the local reparametrization trick proposed in [26], in which sampling at the weights is replaced by sampling the neuron pre-activations, which is more computationally efficient and reduces gradient variance. The optimization of the variational lower bound is done at regular intervals during the RL training process, by sampling $\mathcal{D}$ from a FIFO replay pool that stores recent samples $(s_t, a_t, s_{t+1})$. This is to break up the strong intra-trajectory sample correlation which destabilizes learning, in favor of obtaining i.i.d. data [7]. Moreover, it diminishes the effect of compounding posterior approximation errors.
The posterior distribution of the dynamics parameter, which is needed to compute the KL divergence
in the total reward function $r'$ of Eq. (7), can be computed through the following minimization
\[ \phi' = \arg\min_{\phi} \overbrace{\Big[ \underbrace{D_{\mathrm{KL}}[q(\theta; \phi) \,\|\, q(\theta; \phi_{t-1})]}_{\ell_{\mathrm{KL}}(q(\theta;\phi))} - \mathbb{E}_{\theta \sim q(\cdot;\phi)}[\log p(s_t \mid \xi_t, a_t; \theta)] \Big]}^{\ell(q(\theta;\phi),\, s_t)}, \tag{12} \]
where we replace the expectation over $\theta$ with samples $\theta \sim q(\cdot; \phi)$. Because we only update the model periodically based on samples drawn from the replay pool, this optimization can be performed in parallel for each $s_t$, keeping $\phi_{t-1}$ fixed. Once $\phi'$ has been obtained, we can use it to compute the intrinsic reward.
Algorithm 1: Variational Information Maximizing Exploration (VIME)
for each epoch n do
    for each timestep t in each trajectory generated during n do
        Generate action $a_t \sim \pi_\alpha(s_t)$ and sample state $s_{t+1} \sim \mathcal{P}(\cdot \mid \xi_t, a_t)$, get $r(s_t, a_t)$.
        Add triplet $(s_t, a_t, s_{t+1})$ to FIFO replay pool $R$.
        Compute $D_{\mathrm{KL}}[q(\theta; \phi'_{n+1}) \,\|\, q(\theta; \phi_{n+1})]$ by the approximation $\tfrac{1}{2}\lambda^2 \nabla_\phi \ell^\top H^{-1} \nabla_\phi \ell$, following Eq. (16) for diagonal BNNs, or by optimizing Eq. (12) to obtain $\phi'_{n+1}$ for general BNNs.
        Divide $D_{\mathrm{KL}}[q(\theta; \phi'_{n+1}) \,\|\, q(\theta; \phi_{n+1})]$ by the median of previous KL divergences.
        Construct $r'(s_t, a_t, s_{t+1}) \leftarrow r(s_t, a_t) + \eta\, D_{\mathrm{KL}}[q(\theta; \phi'_{n+1}) \,\|\, q(\theta; \phi_{n+1})]$, following Eq. (7).
    Minimize $D_{\mathrm{KL}}[q(\theta; \phi_n) \,\|\, p(\theta)] - \mathbb{E}_{\theta \sim q(\cdot;\phi_n)}[\log p(\mathcal{D} \mid \theta)]$ following Eq. (6), with $\mathcal{D}$ sampled randomly from $R$, leading to updated posterior $q(\theta; \phi_{n+1})$.
    Use rewards $\{r'(s_t, a_t, s_{t+1})\}$ to update policy $\pi_\alpha$ using any standard RL method.
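A self-contained toy rendering of the control flow of Algorithm 1 is sketched below; the one-dimensional environment, the policy, and the `bnn_information_gain` stub are stand-ins for rllab components and the BNN update, so only the structure (replay pool, median-normalized KL bonus, Eq. (7) shaping) mirrors the algorithm.

```python
import collections
import numpy as np

rng = np.random.default_rng(0)
replay_pool = collections.deque(maxlen=1000)     # FIFO replay pool R
kl_medians = collections.deque(maxlen=10)        # medians of previous KL values
eta = 0.1                                        # exploration weight in Eq. (7)

def policy(s):                                   # stand-in for pi_alpha
    return float(rng.normal())

def env_step(s, a):                              # stand-in for the true dynamics P
    return s + a + 0.1 * rng.normal(), -abs(s)   # (next state, external reward)

def bnn_information_gain(s, a, s_next):          # stand-in for D_KL[q(.;phi') || q(.;phi)]
    return abs(s_next - (s + a))                 # here: prediction surprise of a fixed model

for epoch in range(3):
    s, kls, shaped = 0.0, [], []
    for t in range(100):
        a = policy(s)
        s_next, r = env_step(s, a)
        replay_pool.append((s, a, s_next))
        kl = bnn_information_gain(s, a, s_next)
        kls.append(kl)
        denom = np.mean(kl_medians) if kl_medians else 1.0   # median normalization (Sec. 2.5)
        shaped.append(r + eta * kl / denom)                  # Eq. (7)
        s = s_next
    kl_medians.append(np.median(kls))
    # A real implementation would now (i) refit the BNN on replay_pool via Eq. (6)
    # and (ii) update the policy with the shaped rewards using TRPO/REINFORCE/ERWR.
    print(f"epoch {epoch}: mean shaped reward {np.mean(shaped):.3f}")
```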
To optimize Eq. (12) efficiently, we only take a single second-order step. This way, the gradient
is rescaled according to the curvature of the KL divergence at the origin. As such, we compute
$D_{\mathrm{KL}}[q(\theta; \phi + \lambda\Delta\phi) \,\|\, q(\theta; \phi)]$, with the update step $\Delta\phi$ defined as
\[ \Delta\phi = H^{-1}(\ell)\, \nabla_\phi \ell(q(\theta; \phi), s_t), \tag{13} \]
in which $H(\ell)$ is the Hessian of $\ell(q(\theta; \phi), s_t)$. Since we assume that the variational approximation is a fully factorized Gaussian, the KL divergence from posterior to prior has a particularly simple form:
\[ D_{\mathrm{KL}}[q(\theta; \phi) \,\|\, q(\theta; \phi')] = \frac{1}{2} \sum_{i=1}^{|\Theta|} \left( \Big(\frac{\sigma_i}{\sigma'_i}\Big)^2 + 2 \log \sigma'_i - 2 \log \sigma_i + \frac{(\mu'_i - \mu_i)^2}{\sigma'^2_i} \right) - \frac{|\Theta|}{2}. \tag{14} \]
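The closed form of Eq. (14) translates directly into code; the helper below assumes NumPy arrays for the mean and standard-deviation vectors of the two factorized Gaussians.

```python
import numpy as np

def kl_factorized_gaussians(mu, sigma, mu_p, sigma_p):
    """D_KL[q(theta; phi) || q(theta; phi')] of Eq. (14) for fully factorized
    Gaussians, with phi = (mu, sigma) and phi' = (mu_p, sigma_p) as arrays."""
    return 0.5 * np.sum((sigma / sigma_p) ** 2
                        + 2.0 * np.log(sigma_p) - 2.0 * np.log(sigma)
                        + (mu_p - mu) ** 2 / sigma_p ** 2) - 0.5 * mu.size

mu, sigma = np.zeros(4), np.ones(4)
print(kl_factorized_gaussians(mu, sigma, mu + 0.1, 1.05 * sigma))  # small positive value
```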
Because this KL divergence is approximately quadratic in its parameters and the log-likelihood term
can be seen as locally linear compared to this highly curved KL term, we approximate H by only
calculating it for the KL term $\ell_{\mathrm{KL}}(q(\theta; \phi))$. This can be computed very efficiently in case of a fully factorized Gaussian distribution, as this approximation becomes a diagonal matrix. Looking at Eq. (14), we can calculate the following Hessian at the origin. The $\mu$ and $\rho$ entries are defined as
\[ \frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \mu_i^2} = \frac{1}{\log^2(1 + e^{\rho_i})} \qquad \text{and} \qquad \frac{\partial^2 \ell_{\mathrm{KL}}}{\partial \rho_i^2} = \frac{2 e^{2\rho_i}}{(1 + e^{\rho_i})^2 \log^2(1 + e^{\rho_i})}, \tag{15} \]
while all other entries are zero. Furthermore, it is also possible to approximate the KL divergence
through a second-order Taylor expansion as $\frac{1}{2}\Delta\phi^\top H \Delta\phi = \frac{1}{2}(H^{-1}\nabla_\phi \ell)^\top H (H^{-1}\nabla_\phi \ell)$, since both the value and gradient of the KL divergence are zero at the origin. This gives us
\[ D_{\mathrm{KL}}[q(\theta; \phi + \lambda\Delta\phi) \,\|\, q(\theta; \phi)] \approx \tfrac{1}{2} \lambda^2\, \nabla_\phi \ell^\top H^{-1}(\ell_{\mathrm{KL}})\, \nabla_\phi \ell. \tag{16} \]
Note that $H^{-1}(\ell_{\mathrm{KL}})$ is diagonal, so this expression can be computed efficiently. Instead of using the KL divergence $D_{\mathrm{KL}}[q(\theta; \phi_{t+1}) \,\|\, q(\theta; \phi_t)]$ directly as an intrinsic reward in Eq. (7), we normalize it by division through the average of the median KL divergences taken over a fixed number of previous trajectories. Rather than focusing on its absolute value, we emphasize the relative difference in KL divergence between samples. This accomplishes the same effect since the variance of the KL divergence converges to zero once the model is fully learned.
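Putting Eqs. (13), (15) and (16) together for the diagonal case gives the following sketch; the gradient inputs stand in for $\nabla_\phi \ell$ and are illustrative.

```python
import numpy as np

def kl_step_approximation(grad_mu, grad_rho, rho, lam=1.0):
    """Approximate D_KL[q(.; phi + lam * dphi) || q(.; phi)] via Eq. (16),
    using the diagonal Hessian entries of Eq. (15). grad_mu and grad_rho are
    the components of grad_phi ell; the inputs here are illustrative."""
    sigma = np.log1p(np.exp(rho))                # sigma = log(1 + e^rho)
    h_mu = 1.0 / sigma ** 2                      # d^2 ell_KL / d mu_i^2, Eq. (15)
    h_rho = 2.0 * np.exp(2.0 * rho) / ((1.0 + np.exp(rho)) ** 2 * sigma ** 2)
    # Eq. (16): 0.5 * lam^2 * grad^T H^{-1} grad, with H diagonal.
    return 0.5 * lam ** 2 * (np.sum(grad_mu ** 2 / h_mu) + np.sum(grad_rho ** 2 / h_rho))

rho = -2.0 * np.ones(5)
print(kl_step_approximation(0.01 * np.ones(5), 0.01 * np.ones(5), rho))
```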
3 Experiments
In this section, we investigate (i) whether VIME can succeed in domains that have extremely sparse
rewards, (ii) whether VIME improves learning when the reward is shaped to guide the agent towards
its goal, and (iii) how $\eta$, as used in Eq. (3), trades off exploration and exploitation behavior. All
experiments make use of the rllab [15] benchmark code base and the complementary continuous
control tasks suite. The following tasks are part of the experimental setup: CartPole ($S \subseteq \mathbb{R}^4$, $A \subseteq \mathbb{R}^1$), CartPoleSwingup ($S \subseteq \mathbb{R}^4$, $A \subseteq \mathbb{R}^1$), DoublePendulum ($S \subseteq \mathbb{R}^6$, $A \subseteq \mathbb{R}^1$), MountainCar ($S \subseteq \mathbb{R}^3$, $A \subseteq \mathbb{R}^1$), locomotion tasks HalfCheetah ($S \subseteq \mathbb{R}^{20}$, $A \subseteq \mathbb{R}^6$), Walker2D ($S \subseteq \mathbb{R}^{20}$, $A \subseteq \mathbb{R}^6$), and the hierarchical task SwimmerGather ($S \subseteq \mathbb{R}^{33}$, $A \subseteq \mathbb{R}^2$).
Performance is measured through the average return (not including the intrinsic rewards) over the
trajectories generated (y-axis) at each iteration (x-axis). More specifically, the darker-colored lines in
each plot represent the median performance over a fixed set of 10 random seeds while the shaded
areas show the interquartile range at each iteration. Moreover, the number in each legend shows this
performance measure, averaged over all iterations. The exact setup is described in the Appendix.
Figure 1: (a,b,c) TRPO+VIME versus TRPO on tasks with sparse rewards (MountainCar, CartPoleSwingup, HalfCheetah); (d) comparison of TRPO+VIME (red) and TRPO (blue) on MountainCar: visited states until convergence.
Domains with sparse rewards are difficult to solve through naïve exploration behavior because, before
the agent obtains any reward, it lacks a feedback signal on how to improve its policy. This allows
us to test whether an exploration strategy is truly capable of systematic exploration, rather than
improving existing RL algorithms by adding more hyperparameters. Therefore, VIME is compared
with heuristic exploration strategies on the following tasks with sparse rewards. A reward of +1 is
given when the car escapes the valley on the right side in MountainCar; when the pole is pointed
upwards in CartPoleSwingup; and when the cheetah moves forward over five units in HalfCheetah.
We compare VIME with the following baselines: only using Gaussian control noise [15] and using
the $\ell_2$ BNN prediction error as an intrinsic reward, a continuous extension of [10]. TRPO [8] is
used as the RL algorithm, as it performs very well compared to other methods [15]. Figure 1 shows
the performance results. We notice that naïve exploration performs very poorly, as it is almost never able to reach the goal in any of the tasks. Similarly, using $\ell_2$ errors does not perform well. In contrast, VIME performs much better, achieving the goal in most cases. This experiment demonstrates that curiosity drives the agent to explore, even in the absence of any initial reward, where naïve exploration completely breaks down.
To further strengthen this point, we have evaluated VIME on the highly difficult hierarchical task
SwimmerGather in Figure 5 whose reward signal is naturally sparse. In this task, a two-link robot
needs to reach "apples" while avoiding "bombs" that are perceived through a laser scanner. However,
before it can make any forward progress, it has to learn complex locomotion primitives in the absence
of any reward. None of the RL methods tested previously in [15] were able to make progress with
na?ve exploration. Remarkably, VIME leads the agent to acquire coherent motion primitives without
any reward guidance, achieving promising results on this challenging task.
Next, we investigate whether VIME is widely applicable by (i) testing it on environments where the
reward is well shaped, and (ii) pairing it with different RL methods. In addition to TRPO, we choose
to equip REINFORCE [27] and ERWR [28] with VIME because these two algorithms usually suffer
from premature convergence to suboptimal policies [15, 29], which can potentially be alleviated by
better exploration. Their performance is shown in Figure 2 on several well-established continuous
control tasks. Furthermore, Figure 3 shows the same comparison for the Walker2D locomotion task.
In the majority of cases, VIME leads to a significant performance gain over heuristic exploration.
Our exploration method allows the RL algorithms to converge faster, and notably helps REINFORCE
and ERWR avoid converging to a locally optimal solution on DoublePendulum and MountainCar.
We note that in environments such as CartPole, a better exploration strategy is redundant as following
the policy gradient direction leads to the globally optimal solution. Additionally, we tested adding
Gaussian noise to the rewards as a baseline, which did not improve performance.
To give an intuitive understanding of VIME's exploration behavior, the distribution of visited states for both naïve exploration and VIME after convergence is investigated. Figure 1d shows that using
Gaussian control noise exhibits random walk behavior: the state visitation plot is more condensed
and ball-shaped around the center. In comparison, VIME leads to a more diffused visitation pattern,
exploring the states more efficiently, and hence reaching the goal more quickly.
Figure 2: Performance of TRPO (top row), ERWR (middle row), and REINFORCE (bottom row) with (+VIME) and without exploration for different continuous control tasks (CartPole, CartPoleSwingup, DoublePendulum, MountainCar).
Figure 3: Performance of TRPO with and without VIME on the high-dimensional Walker2D locomotion task.
Figure 4: VIME: performance over the first few iterations for TRPO, REINFORCE, and ERWR as a function of $\eta$ on MountainCar.
Figure 5: Performance of TRPO with and without VIME on the challenging hierarchical task SwimmerGather.
Finally, we investigate how $\eta$, as used in Eq. (3), trades off exploration and exploitation behavior. On the one hand, higher $\eta$ values should lead to a higher curiosity drive, causing more exploration. On the other hand, very low $\eta$ values should reduce VIME to traditional Gaussian control noise. Figure 4 shows the performance on MountainCar for different $\eta$ values. Setting $\eta$ too high clearly results in prioritizing exploration over getting additional external reward. Too low an $\eta$ value reduces the method to the baseline algorithm, as the intrinsic reward contribution to the total reward $r'$ becomes negligible. Most importantly, this figure highlights that there is a wide $\eta$ range for which
the task is best solved, across different algorithms.
4 Related Work
A body of theoretically oriented work demonstrates exploration strategies that are able to learn online
in a previously unknown MDP and incur a polynomial amount of regret; as a result, these algorithms find a near-optimal policy in a polynomial amount of time. Some of these algorithms are based on the principle of optimism under uncertainty: E3 [3], R-Max [4], UCRL [30]. An alternative approach is Bayesian reinforcement learning methods, which maintain a distribution over possible MDPs [1, 17, 23, 31]. The optimism-based exploration strategies have been extended to continuous state spaces, for example [6, 9]; however, these methods do not accommodate nonlinear function approximators. Practical RL algorithms often rely on simple exploration heuristics, such as ε-greedy and Boltzmann exploration [32]. However, these heuristics exhibit random walk exploratory behavior, which can lead
to exponential regret even in case of small MDPs [9]. Our proposed method of utilizing information
gain can be traced back to [22], and has been further explored in [17, 33, 34]. Other metrics for
curiosity have also been proposed, including prediction error [10, 35], prediction error improvement
[36], leverage [14], neuro-correlates [37], and predictive information [38]. These methods have not
been applied directly to high-dimensional continuous control tasks without discretization. We refer
the reader to [21, 39] for an extensive review on curiosity and intrinsic rewards.
Recently, there have been various exploration strategies proposed in the context of deep RL. [10]
proposes to use the $\ell_2$ prediction error as the intrinsic reward. [12] performs approximate visitation
counting in a learned state embedding using Gaussian kernels. [11] proposes a form of Thompson
sampling, training multiple value functions using bootstrapping. Although these approaches can scale
up to high-dimensional state spaces, they generally assume discrete action spaces. [40] make use
of mutual information for gait stabilization in continuous control, but rely on state discretization.
Finally, [41] proposes a variational method for information maximization in the context of optimizing
empowerment, which, as noted by [42], does not explicitly favor exploration.
5 Conclusions
We have proposed Variational Information Maximizing Exploration (VIME), a curiosity-driven
exploration strategy for continuous control tasks. Variational inference is used to approximate the
posterior distribution of a Bayesian neural network that represents the environment dynamics. Using
information gain in this learned dynamics model as intrinsic rewards allows the agent to optimize
for both external reward and intrinsic surprise simultaneously. Empirical results show that VIME
performs significantly better than heuristic exploration methods across various continuous control
tasks and algorithms. As future work, we would like to investigate measuring surprise in the value
function and using the learned dynamics model for planning.
Acknowledgments
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley
Artificial Intelligence Research (BAIR) laboratory, Berkeley Deep Drive (BDD), and ONR through a PECASE
award. Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).
Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also supported by a
Berkeley AI Research lab Fellowship and a Huawei Fellowship.
References
[1] M. Ghavamzadeh, S. Mannor, J. Pineau, and A. Tamar, "Bayesian reinforcement learning: A survey", Found. Trends Mach. Learn., vol. 8, no. 5-6, pp. 359–483, 2015.
[2] S. Kakade, M. Kearns, and J. Langford, "Exploration in metric state spaces", in ICML, vol. 3, 2003, pp. 306–312.
[3] M. Kearns and S. Singh, "Near-optimal reinforcement learning in polynomial time", Mach. Learn., vol. 49, no. 2-3, pp. 209–232, 2002.
[4] R. I. Brafman and M. Tennenholtz, "R-Max: a general polynomial time algorithm for near-optimal reinforcement learning", J. Mach. Learn. Res., vol. 3, pp. 213–231, 2003.
[5] P. Auer, "Using confidence bounds for exploitation-exploration trade-offs", J. Mach. Learn. Res., vol. 3, pp. 397–422, 2003.
[6] J. Pazis and R. Parr, "PAC optimal exploration in continuous space Markov decision processes", in AAAI, 2013.
[7] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., "Human-level control through deep reinforcement learning", Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[8] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel, "Trust region policy optimization", in ICML, 2015.
[9] I. Osband, B. Van Roy, and Z. Wen, "Generalization and exploration via randomized value functions", arXiv preprint arXiv:1402.0635, 2014.
[10] B. C. Stadie, S. Levine, and P. Abbeel, "Incentivizing exploration in reinforcement learning with deep predictive models", arXiv preprint arXiv:1507.00814, 2015.
[11] I. Osband, C. Blundell, A. Pritzel, and B. Van Roy, "Deep exploration via bootstrapped DQN", in ICML, 2016.
[12] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh, "Action-conditional video prediction using deep networks in Atari games", in NIPS, 2015, pp. 2845–2853.
[13] T. Hester and P. Stone, "Intrinsically motivated model learning for developing curious robots", Artificial Intelligence, 2015.
[14] K. Subramanian, C. L. Isbell Jr, and A. L. Thomaz, "Exploration from demonstration for interactive reinforcement learning", in AAMAS, 2016.
[15] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel, "Benchmarking deep reinforcement learning for continuous control", in ICML, 2016.
[16] J. Schmidhuber, "Curious model-building control systems", in IJCNN, 1991, pp. 1458–1463.
[17] Y. Sun, F. Gomez, and J. Schmidhuber, "Planning to be surprised: Optimal Bayesian exploration in dynamic environments", in Artificial General Intelligence, 2011, pp. 41–51.
[18] L. Itti and P. F. Baldi, "Bayesian surprise attracts human attention", in NIPS, 2005, pp. 547–554.
[19] A. Graves, "Practical variational inference for neural networks", in NIPS, 2011, pp. 2348–2356.
[20] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight uncertainty in neural networks", in ICML, 2015.
[21] J. Schmidhuber, "Formal theory of creativity, fun, and intrinsic motivation (1990–2010)", IEEE Trans. Auton. Mental Develop., vol. 2, no. 3, pp. 230–247, 2010.
[22] J. Storck, S. Hochreiter, and J. Schmidhuber, "Reinforcement driven information acquisition in nondeterministic environments", in ICANN, vol. 2, 1995, pp. 159–164.
[23] J. Z. Kolter and A. Y. Ng, "Near-Bayesian exploration in polynomial time", in ICML, 2009, pp. 513–520.
[24] G. E. Hinton and D. Van Camp, "Keeping the neural networks simple by minimizing the description length of the weights", in COLT, 1993, pp. 5–13.
[25] J. Schmidhuber, "Simple algorithmic principles of discovery, subjective beauty, selective attention, curiosity & creativity", in Intl. Conf. on Discovery Science, 2007, pp. 26–38.
[26] D. P. Kingma, T. Salimans, and M. Welling, "Variational dropout and the local reparameterization trick", in NIPS, 2015, pp. 2575–2583.
[27] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning", Mach. Learn., vol. 8, no. 3-4, pp. 229–256, 1992.
[28] J. Kober and J. R. Peters, "Policy search for motor primitives in robotics", in NIPS, 2009, pp. 849–856.
[29] J. Peters and S. Schaal, "Reinforcement learning by reward-weighted regression for operational space control", in ICML, 2007, pp. 745–750.
[30] P. Auer, T. Jaksch, and R. Ortner, "Near-optimal regret bounds for reinforcement learning", in NIPS, 2009, pp. 89–96.
[31] A. Guez, N. Heess, D. Silver, and P. Dayan, "Bayes-adaptive simulation-based search with value function approximation", in NIPS, 2014, pp. 451–459.
[32] R. S. Sutton, Introduction to Reinforcement Learning.
[33] S. Still and D. Precup, "An information-theoretic approach to curiosity-driven reinforcement learning", Theory Biosci., vol. 131, no. 3, pp. 139–148, 2012.
[34] D. Y. Little and F. T. Sommer, "Learning and exploration in action-perception loops", Closing the Loop Around Neural Systems, p. 295, 2014.
[35] S. B. Thrun, "Efficient exploration in reinforcement learning", Tech. Rep., 1992.
[36] M. Lopes, T. Lang, M. Toussaint, and P.-Y. Oudeyer, "Exploration in model-based reinforcement learning by empirically estimating learning progress", in NIPS, 2012, pp. 206–214.
[37] J. Schossau, C. Adami, and A. Hintze, "Information-theoretic neuro-correlates boost evolution of cognitive systems", Entropy, vol. 18, no. 1, p. 6, 2015.
[38] K. Zahedi, G. Martius, and N. Ay, "Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: A critical analysis", Front. Psychol., vol. 4, 2013.
[39] P.-Y. Oudeyer and F. Kaplan, "What is intrinsic motivation? A typology of computational approaches", Front. Neurorobot., vol. 1, p. 6, 2007.
[40] G. Montufar, K. Ghazi-Zahedi, and N. Ay, "Information theoretically aided reinforcement learning for embodied agents", arXiv preprint arXiv:1605.09735, 2016.
[41] S. Mohamed and D. J. Rezende, "Variational information maximisation for intrinsically motivated reinforcement learning", in NIPS, 2015, pp. 2116–2124.
[42] C. Salge, C. Glackin, and D. Polani, "Guided self-organization: Inception", 2014, ch. "Empowerment – An Introduction", pp. 67–114.
The Multi-fidelity Multi-armed Bandit
Kirthevasan Kandasamy♮, Gautam Dasarathy♭, Jeff Schneider♮, Barnabás Póczos♮
♮ Carnegie Mellon University, ♭ Rice University
{kandasamy, schneide, bapoczos}@cs.cmu.edu, [email protected]
Abstract
We study a variant of the classical stochastic K-armed bandit where observing
the outcome of each arm is expensive, but cheap approximations to this outcome
are available. For example, in online advertising the performance of an ad can be
approximated by displaying it for shorter time periods or to narrower audiences.
We formalise this task as a multi-fidelity bandit, where, at each time step, the
forecaster may choose to play an arm at any one of M fidelities. The highest
fidelity (desired outcome) expends cost $\lambda^{(M)}$. The $m$th fidelity (an approximation) expends $\lambda^{(m)} < \lambda^{(M)}$ and returns a biased estimate of the highest fidelity. We
develop MF-UCB, a novel upper confidence bound procedure for this setting and
prove that it naturally adapts to the sequence of available approximations and costs
thus attaining better regret than naive strategies which ignore the approximations.
For instance, in the above online advertising example, MF-UCB would use the
lower fidelities to quickly eliminate suboptimal ads and reserve the larger expensive
experiments on a small set of promising candidates. We complement this result with
a lower bound and show that MF-UCB is nearly optimal under certain conditions.
1 Introduction
Since the seminal work of Robbins [11], the multi-armed bandit has become an attractive framework
for studying exploration-exploitation trade-offs inherent to tasks arising in online advertising, finance
and other fields. In the most basic form of the K-armed bandit [9, 12], we have a set K = {1, . . . , K}
of K arms (e.g. K ads in online advertising). At each time step t = 1, 2, . . . , an arm is played and a
corresponding reward is realised. The goal is to design a strategy of plays that minimises the regret
after n plays. The regret is the comparison, in expectation, of the realised reward against an oracle
that always plays the best arm. The well known Upper Confidence Bound (UCB) algorithm [3],
achieves regret O(K log(n)) after n plays (ignoring mean rewards) and is minimax optimal [9].
In this paper, we propose a new take on this important problem. In many practical scenarios of
interest, one can associate a cost to playing each arm. Furthermore, in many of these scenarios,
one might have access to cheaper approximations to the outcome of the arms. For instance, in
online advertising the goal is to maximise the cumulative number of clicks over a given time period.
Conventionally, an arm pull maybe thought of as the display of an ad for a specific time, say one
hour. However, we may approximate its hourly performance by displaying the ad for shorter periods.
This estimate is biased (and possibly noisy), as displaying an ad for longer intervals changes user
behaviour. It can nonetheless be useful in gauging the long run click through rate. We can also
obtain biased estimates of an ad by displaying it only to certain geographic regions or age groups.
Similarly one might consider algorithm selection for machine learning problems [4], where the goal
is to be competitive with the best among a set of learning algorithms for a task. Here, one might
obtain cheaper approximate estimates of the performance of algorithm by cheaper versions using
less data or computation. In this paper, we will refer to such approximations as fidelities. Consider a
2-fidelity problem where the cost at the low fidelity is $\lambda^{(1)}$ and the cost at the high fidelity is $\lambda^{(2)}$. We will present a cost weighted notion of regret for this setting for a strategy that expends a capital of $\Lambda$ units. A classical K-armed bandit strategy such as UCB, which only uses the highest fidelity,
can obtain at best $O(\lambda^{(2)} K \log(\Lambda/\lambda^{(2)}))$ regret [9]. In contrast, this paper will present multi-fidelity strategies that achieve $O\big((\lambda^{(1)} K + \lambda^{(2)} |\mathcal{K}_g|) \log(\Lambda/\lambda^{(2)})\big)$ regret. Here $\mathcal{K}_g$ is a (typically) small subset of arms with high expected reward that can be identified using plays at the (cheaper) low fidelity. When $|\mathcal{K}_g| < K$ and $\lambda^{(1)} < \lambda^{(2)}$, such a strategy will outperform the more standard UCB
arms and reserving expensive higher fidelity plays for a small subset of the most promising arms. We
formalise the above intuitions in the sequel. Our main contributions are,
1. A novel formalism for studying bandit tasks when one has access to multiple fidelities for each
arm, with each successive fidelity providing a better approximation to the most expensive one.
2. A new algorithm that we call Multi-Fidelity Upper Confidence Bound (MF-UCB) that adapts
the classical Upper Confidence Bound (UCB) strategies to our multi-fidelity setting. Empirically,
we demonstrate that our algorithm outperforms naive UCB on simulations.
3. A theoretical characterisation of the performance of MF-UCB that shows that the algorithm
(a) uses the lower fidelities to explore all arms and eliminates arms with low expected reward, and
(b) reserves the higher fidelity plays for arms with rewards close to the optimal value. We derive
a lower bound on the regret and demonstrate that MF-UCB is near-optimal on this problem.
Related Work
The K-armed bandit has been studied extensively in the past [1, 9, 11]. There has been a flurry of
work on upper confidence bound (UCB) methods [2, 3], which adopt the optimism in the face of
uncertainty principle for bandits. For readers unfamiliar with UCB methods, we recommend Chapter
2 of Bubeck and Cesa-Bianchi [5]. Our work in this paper builds on UCB ideas, but the multi-fidelity
framework poses significantly new algorithmic and theoretical challenges.
There has been some interest in multi-fidelity methods for optimisation in many applied domains
of research [7, 10]. However, these works do not formalise or analyse notions of regret in the
multi-fidelity setting. Multi-fidelity methods are used in the robotics community for reinforcement
learning tasks by modeling each fidelity as a Markov decision process [6]. Zhang and Chaudhuri [16]
study active learning with a cheap weak labeler and an expensive strong labeler. The objective of
these papers however is not to handle the exploration-exploitation trade-off inherent to the bandit
setting. A line of work on budgeted multi-armed bandits [13, 15] study a variant of the K-armed
bandit where each arm has a random reward and cost and the goal is to play the arm with the highest
reward/cost ratio as much as possible. This is different from our setting where each arm has multiple
fidelities which serve as an approximation. Recently, in Kandasamy et al. [8] we extended ideas in
this work to analyse multi-fidelity bandits with Gaussian process payoffs.
2 The Stochastic K-armed Multi-fidelity Bandit
In the classical K-armed bandit, each arm $k \in \mathcal{K} = \{1, \ldots, K\}$ is associated with a real valued distribution $\theta_k$ with mean $\mu_k$. Let $\mathcal{K}_\star = \operatorname{argmax}_{k \in \mathcal{K}} \mu_k$ be the set of optimal arms, $k_\star \in \mathcal{K}_\star$ be an optimal arm and $\mu_\star = \mu_{k_\star}$ denote the optimal mean value. A bandit strategy would play an arm $I_t \in \mathcal{K}$ at each time step $t$ and observe a sample from $\theta_{I_t}$. Its goal is to maximise the sum of expected rewards after $n$ time steps, $\sum_{t=1}^{n} \mu_{I_t}$, or equivalently minimise the cumulative pseudo-regret $\sum_{t=1}^{n} \mu_\star - \mu_{I_t}$ for all values of $n$. In other words, the objective is to be competitive, in expectation, against an oracle that plays an optimal arm all the time.
In this work we differ from the usual bandit setting in the following aspect. For each arm k, we have
access to $M - 1$ successively approximate distributions $\theta_k^{(1)}, \theta_k^{(2)}, \ldots, \theta_k^{(M-1)}$ to the desired distribution $\theta_k^{(M)} = \theta_k$. We will refer to these approximations as fidelities. Clearly, these approximations are meaningful only if they give us some information about $\theta_k^{(M)}$. In what follows, we will assume that the $m$th fidelity mean of an arm is within $\zeta^{(m)}$, a known quantity, of its highest fidelity mean, where the $\zeta^{(m)}$, decreasing with $m$, characterise the successive approximations. That is, $|\mu_k^{(M)} - \mu_k^{(m)}| \leq \zeta^{(m)}$ for all $k \in \mathcal{K}$ and $m = 1, \ldots, M$, where $\zeta^{(1)} > \zeta^{(2)} > \cdots > \zeta^{(M)} = 0$ and the $\zeta^{(m)}$'s are known. It is possible for the lower fidelities to be misleading under this assumption: there could exist an arm $k$ with $\mu_k^{(M)} < \mu_\star = \mu_{k_\star}^{(M)}$ but with $\mu_k^{(m)} > \mu_\star$ and/or $\mu_k^{(m)} > \mu_{k_\star}^{(m)}$ for any $m < M$. In other words, we wish to explicitly account for the biases introduced by the lower fidelities, and not treat them
as just a higher variance observation of an expensive experiment. This problem of course becomes interesting only when lower fidelities are more attractive than higher fidelities in terms of some notion of cost. Towards this end, we will assign a cost $\lambda^{(m)}$ (such as advertising time, money etc.) to playing an arm at fidelity $m$, where $\lambda^{(1)} < \lambda^{(2)} < \cdots < \lambda^{(M)}$.
Notation: T_{k,t}^(m) denotes the number of plays at arm k, at fidelity m, until t time steps. T_{k,t}^(>m) is the number of plays at fidelities greater than m. Q_t^(m) = Σ_{k∈K} T_{k,t}^(m) is the number of fidelity m plays at all arms until time t. X̄_{k,s}^(m) denotes the mean of s samples drawn from ν_k^(m). Denote Δ_k^(m) = μ_⋆ − μ_k^(m) − ζ^(m). When s refers to the number of plays of an arm, we will take 1/s = ∞ if s = 0. Ā denotes the complement of a set A ⊂ K. While discussing the intuitions in our proofs and theorems we will use ≍, ≲, ≳ to denote equality and inequalities ignoring constants.
Regret in the multi-fidelity setting: A strategy for a multi-fidelity bandit problem, at time t, produces an arm-fidelity pair (I_t, m_t), where I_t ∈ K and m_t ∈ {1, . . . , M}, and observes a sample X_t drawn (independently of everything else) from the distribution ν_{I_t}^(m_t). The choice of (I_t, m_t) could depend on previous arm-observation-fidelity tuples {(I_i, X_i, m_i)}_{i=1}^{t−1}. The multi-fidelity setting calls for a new notion of regret. For any strategy A that expends Λ units of the resource, we will define the pseudo-regret R(Λ, A) as follows. Let q_t denote the instantaneous pseudo-reward at time t and r_t = μ_⋆ − q_t denote the instantaneous pseudo-regret. We will discuss choices for q_t shortly. Any notion of regret in the multi-fidelity setting needs to account for this instantaneous regret along with the cost of the fidelity at which we played at time t, i.e. λ^(m_t). Moreover, we should receive no reward (maximum regret) for any unused capital. These observations lead to the following definition,

$$R(\Lambda, \mathcal{A}) \;=\; \Lambda\mu_\star \;-\; \sum_{t=1}^{N} \lambda^{(m_t)} q_t \;=\; \underbrace{\Big(\Lambda - \sum_{t=1}^{N} \lambda^{(m_t)}\Big)\mu_\star}_{\tilde{r}(\Lambda,\mathcal{A})} \;+\; \underbrace{\sum_{t=1}^{N} \lambda^{(m_t)} r_t}_{\tilde{R}(\Lambda,\mathcal{A})}. \qquad (1)$$
Above, N is the (random) number of plays within capital Λ by A, i.e. the largest n such that Σ_{t=1}^n λ^(m_t) ≤ Λ. To motivate our choice of q_t we consider an online advertising example where λ^(m) is the advertising time at fidelity m and μ_k^(m) is the expected number of clicks per unit time. While we observe from ν_{I_t}^(m_t) at time t, we wish to reward the strategy according to its highest fidelity distribution ν_{I_t}^(M). Therefore, regardless of which fidelity we play, we set q_t = μ_{I_t}^(M). Here, we are competing against an oracle which plays an optimal arm at any fidelity all the time. Note that we might have chosen q_t to be μ_{I_t}^(m_t). However, this does not reflect the motivating applications for the multi-fidelity setting that we consider. For instance, a clickbait ad might receive a high number of clicks in the short run, but its long term performance might be poor. Furthermore, for such a choice, we may as well ignore the rich structure inherent to the multi-fidelity setting and simply play the arm argmax_{m,k} μ_k^(m) at each time. There are of course other choices for q_t that result in very different notions of regret; we discuss this briefly at the end of Section 7.
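To make the bookkeeping in (1) concrete, the following minimal Python sketch computes the pseudo-regret of a recorded play sequence under the convention q_t = μ_{I_t}^(M). The inputs (the play log, the array of highest-fidelity means and the cost vector) are hypothetical stand-ins for illustration, not objects defined in the paper.

```python
def pseudo_regret(Lambda, plays, mu_M, costs, mu_star):
    """Pseudo-regret R(Lambda, A) of equation (1).

    plays  : list of (arm, fidelity) pairs (I_t, m_t), in order of play
    mu_M   : mu_M[k] = highest-fidelity mean of arm k
    costs  : costs[m] = lambda^{(m)}, cost of one fidelity-m play
    mu_star: optimal highest-fidelity mean
    """
    spent, reward = 0.0, 0.0
    for arm, m in plays:
        if spent + costs[m] > Lambda:    # N = largest n whose total cost fits in Lambda
            break
        spent += costs[m]
        reward += costs[m] * mu_M[arm]   # q_t = mu^{(M)}_{I_t}, weighted by the play's cost
    return Lambda * mu_star - reward     # unused capital earns no reward
```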
The distributions ν_k^(m) need to be well behaved for the problem to be tractable. We will assume that they satisfy concentration inequalities of the following form. For all ε > 0,

$$\mathbb{P}\big(\bar{X}^{(m)}_{k,s} - \mu^{(m)}_k > \epsilon\big) < \nu_0\, e^{-s\phi(\epsilon)}, \qquad \mathbb{P}\big(\bar{X}^{(m)}_{k,s} - \mu^{(m)}_k < -\epsilon\big) < \nu_0\, e^{-s\phi(\epsilon)}, \qquad \forall\, m, k. \qquad (2)$$

Here ν_0 > 0 is a constant and φ is an increasing function with φ(0) = 0 that is at least increasing linearly, φ(x) ∈ Ω(x). For example, if the distributions are sub-Gaussian, then φ(x) ∈ Θ(x²).
The performance of a multi-fidelity strategy which switches from low to high fidelities can be worsened by artificially inserting fidelities. Consider a scenario where λ^(m+1) is only slightly larger than λ^(m) and ζ^(m+1) is only slightly smaller than ζ^(m). This situation is unfavourable since there isn't much that can be inferred from the (m+1)-th fidelity that cannot already be inferred from the m-th by expending the same cost. We impose the following regularity condition to avoid such situations.

Assumption 1. The ζ^(m)'s decay fast enough such that Σ_{i=1}^{m} 1/φ(ζ^(i)) ≤ 1/φ(ζ^(m+1)) for all m < M.
Assumption 1 is not necessary to analyse our algorithm; however, the performance of MF-UCB when compared to UCB is most appealing when the above holds. In cases where M is small enough and can be treated as a constant, the assumption is not necessary. For sub-Gaussian distributions, the condition is satisfied for an exponentially decaying (ζ^(1), ζ^(2), . . . ) such as (1/√2, 1/2, 1/(2√2), . . . ).
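As a quick numerical check (our own, not from the paper): with φ(x) = x² (sub-Gaussian) and ζ^(m) = 2^(−m/2), we get Σ_{i=1}^m 1/φ(ζ^(i)) = Σ_{i=1}^m 2^i = 2^(m+1) − 2 ≤ 2^(m+1) = 1/φ(ζ^(m+1)), so Assumption 1 holds. A short script confirms this:

```python
phi = lambda x: x ** 2                             # sub-Gaussian concentration rate
zeta = [2 ** (-(m + 1) / 2) for m in range(8)]     # (1/sqrt(2), 1/2, 1/(2 sqrt(2)), ...)
for m in range(len(zeta) - 1):
    assert sum(1 / phi(z) for z in zeta[: m + 1]) <= 1 / phi(zeta[m + 1])
```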
Our goal is to design a strategy A_0 that has low expected pseudo-regret E[R(Λ, A_0)] for all values of (sufficiently large) Λ, i.e. the equivalent of an anytime strategy, as opposed to a fixed time horizon strategy, in the usual bandit setting. The expectation is over the observed rewards, which also dictate the number of plays N. From now on, for simplicity, we will write R(Λ) when A is clear from context and refer to it just as regret.
3 The Multi-Fidelity Upper Confidence Bound (MF-UCB) Algorithm
As the name suggests, the MF-UCB algorithm maintains an upper confidence bound corresponding to μ_k^(m) for each m ∈ {1, . . . , M} and k ∈ K based on its previous plays. Following UCB strategies [2, 3], we define the following set of upper confidence bounds,

$$\mathcal{B}^{(m)}_{k,t}(s) \;=\; \bar{X}^{(m)}_{k,s} + \phi^{-1}\Big(\frac{\rho \log t}{s}\Big) + \zeta^{(m)} \quad \text{for all } m \in \{1,\dots,M\},\; k \in \mathcal{K},$$
$$\mathcal{B}_{k,t} \;=\; \min_{m=1,\dots,M} \mathcal{B}^{(m)}_{k,t}\big(T^{(m)}_{k,t-1}\big). \qquad (3)$$

Here ρ is a parameter in our algorithm and φ is from (2). Each B_{k,t}^(m)(T_{k,t−1}^(m)) provides a high probability upper bound on μ_k^(M), with their minimum B_{k,t} giving the tightest bound (see Appendix A). Similar to UCB, at time t we play the arm I_t with the highest upper bound, I_t = argmax_{k∈K} B_{k,t}.
Since our setup has multiple fidelities associated with each arm, the algorithm needs to determine at each time t which fidelity (m_t) to play the chosen arm (I_t). For this, consider an arbitrary fidelity m < M. The ζ^(m) conditions on μ_k^(m) imply a constraint on the value of μ_k^(M). If, at fidelity m, the uncertainty interval φ^(−1)(ρ log(t)/T_{I_t,t−1}^(m)) is large, then we have not constrained μ_{I_t}^(M) sufficiently well yet. There is more information to be gleaned about μ_{I_t}^(M) from playing the arm I_t at fidelity m. On the other hand, playing at fidelity m indefinitely will not help us much, since the ζ^(m) elongation of the confidence band caps off how much we can learn about μ_{I_t}^(M) from fidelity m; i.e. even if we knew μ_{I_t}^(m), we will have only constrained μ_{I_t}^(M) to within a ±ζ^(m) interval. Our algorithm captures this natural intuition. Having selected I_t, we begin checking at the first fidelity. If φ^(−1)(ρ log(t)/T_{I_t,t−1}^(1)) is smaller than a threshold γ^(1), we proceed to check the second fidelity, continuing in a similar fashion. If at any point φ^(−1)(ρ log(t)/T_{I_t,t−1}^(m)) ≥ γ^(m), we play I_t at fidelity m_t = m. If we go all the way to fidelity M, we play at m_t = M. The resulting procedure is summarised below in Algorithm 1.
Algorithm 1 MF-UCB
for t = 1, 2, . . .
  1. Choose I_t ∈ argmax_{k∈K} B_{k,t}.   (See equation (3).)
  2. m_t = min_m { m : φ^(−1)(ρ log t / T_{I_t,t−1}^(m)) ≥ γ^(m) or m = M }.   (See equation (4).)
  3. Play X_t ∼ ν_{I_t}^(m_t).
Choice of γ^(m): In our algorithm, we choose

$$\gamma^{(m)} \;=\; \phi^{-1}\Big(\frac{\lambda^{(m)}}{\lambda^{(m+1)}}\,\phi\big(\zeta^{(m)}\big)\Big). \qquad (4)$$

To motivate this choice, note that if Δ_k^(m) = μ_⋆ − μ_k^(m) − ζ^(m) > 0 then we can conclude that arm k is not optimal. Step 2 of the algorithm attempts to eliminate arms for which Δ_k^(m) ≳ γ^(m) from plays above the m-th fidelity. If γ^(m) is too large, then we would not eliminate a sufficient number of arms, whereas if it was too small we could end up playing a suboptimal arm k (for which μ_k^(m) > μ_⋆) too many times at fidelity m. As will be revealed by our analysis, the given choice represents an optimal tradeoff under the given assumptions.
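For concreteness, here is a minimal Python sketch of Algorithm 1 under the sub-Gaussian case, where φ(x) = x²/(2σ²) and hence φ^(−1)(y) = σ√(2y). The sampler interface, the parameter defaults, and the simple capital loop are our own illustrative assumptions rather than prescriptions from the paper.

```python
import numpy as np

def mf_ucb(sample, K, costs, zeta, Lambda, rho=6.0, sigma=1.0):
    """Sketch of MF-UCB (Algorithm 1) for sub-Gaussian rewards.

    sample(k, m): one draw from nu_k^{(m)} (0-indexed fidelity m);
    costs[m] = lambda^{(m)}; zeta[m] = zeta^{(m)} with zeta[-1] = 0.
    """
    M = len(costs)
    phi = lambda x: x ** 2 / (2 * sigma ** 2)            # concentration rate in (2)
    phi_inv = lambda y: sigma * np.sqrt(2 * y)
    gamma = [phi_inv(costs[m] / costs[m + 1] * phi(zeta[m])) for m in range(M - 1)]
    T = np.zeros((K, M))                                 # play counts T_{k,t}^{(m)}
    S = np.zeros((K, M))                                 # reward sums
    spent, t, plays = 0.0, 0, []
    while spent + costs[0] <= Lambda:
        t += 1
        with np.errstate(divide="ignore"):
            conf = phi_inv(rho * np.log(t + 1) / T)      # inf for unplayed (k, m)
        B = S / np.maximum(T, 1) + conf + np.asarray(zeta)   # B_{k,t}^{(m)}, eq (3)
        k = int(np.argmax(B.min(axis=1)))                # step 1: highest min_m bound
        m = M - 1                                        # step 2: lowest uncertain fidelity
        for j in range(M - 1):
            if conf[k, j] >= gamma[j]:
                m = j
                break
        if spent + costs[m] > Lambda:
            break
        S[k, m] += sample(k, m); T[k, m] += 1; spent += costs[m]
        plays.append((k, m))
    return plays
```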
[Figure 1: Illustration of the partition K^(m)'s for an M = 4 fidelity problem. The sets J^(m)_{ζ^(m)+2γ^(m)} are indicated next to their boundaries. K^(1), K^(2), K^(3), K^(4) are shown in yellow, green, red and purple respectively. The optimal arms K_⋆ are shown as a black circle.]

4 Analysis
We will be primarily concerned with the term R̃(Λ, A) = R(Λ) − r̃(Λ, A) from (1). r̃(Λ, A) is a residual term; it is an artefact of the fact that after the (N+1)-th play, the spent capital would have exceeded Λ. For any algorithm that operates oblivious to a fixed capital, it can be bounded by λ^(M) μ_⋆, which is negligible compared to R̃(Λ). According to the above, we have the following expression for R̃(Λ):

$$\tilde{R}(\Lambda) \;=\; \sum_{k\in\mathcal{K}} \Delta^{(M)}_k \Big( \sum_{m=1}^{M} \lambda^{(m)}\, T^{(m)}_{k,N} \Big). \qquad (5)$$
Central to our analysis will be the following partitioning of K. First denote the set of arms whose fidelity m mean is within η of μ_⋆ to be J_η^(m) = {k ∈ K : μ_⋆ − μ_k^(m) ≤ η}. Define K^(1) ≜ J̄^(1)_{ζ^(1)+2γ^(1)} = {k ∈ K : Δ_k^(1) > 2γ^(1)} to be the arms whose first fidelity mean μ_k^(1) is at least ζ^(1) + 2γ^(1) below the optimum μ_⋆. Then we recursively define,

$$\mathcal{K}^{(m)} \,\triangleq\, \bar{\mathcal{J}}^{(m)}_{\zeta^{(m)}+2\gamma^{(m)}} \cap \bigcap_{\ell=1}^{m-1} \mathcal{J}^{(\ell)}_{\zeta^{(\ell)}+2\gamma^{(\ell)}}, \;\;\forall\, m \le M-1, \qquad \mathcal{K}^{(M)} \,\triangleq\, \bar{\mathcal{K}}_\star \cap \bigcap_{\ell=1}^{M-1} \mathcal{J}^{(\ell)}_{\zeta^{(\ell)}+2\gamma^{(\ell)}}.$$

Observe that for all k ∈ K^(m), Δ_k^(m) > 2γ^(m) and Δ_k^(ℓ) ≤ 2γ^(ℓ) for all ℓ < m. For what follows, for any k ∈ K, ⟦k⟧ will denote the partition k belongs to, i.e. ⟦k⟧ = m s.t. k ∈ K^(m). We will see that K^(m) are the arms that will be played at the m-th fidelity but can be excluded from fidelities higher than m using information at fidelity m. See Fig. 1 for an illustration of these partitions.
4.1 Regret Bound for MF-UCB
Recall that N = Σ_{m=1}^M Q_N^(m) is the total (random) number of plays by a multi-fidelity strategy within capital Λ. Let n_Λ = ⌊Λ/λ^(M)⌋ be the (non-random) number of plays by any strategy that operates only on the highest fidelity. Since λ^(m) < λ^(M) for all m < M, N could be large for an arbitrary multi-fidelity method. However, our analysis reveals that for MF-UCB, N ≲ n_Λ with high probability. The following theorem bounds R̃ for MF-UCB. The proof is given in Appendix A. For clarity, we ignore the constants but they are fleshed out in the proofs.

Theorem 2 (Regret Bound for MF-UCB). Let ρ > 4. There exists Λ_0 depending on the λ^(m)'s such that for all Λ > Λ_0, MF-UCB satisfies,

$$\frac{\mathbb{E}[\tilde{R}(\Lambda)]}{\log(n_\Lambda)} \;\lesssim\; \sum_{k\notin\mathcal{K}_\star} \Delta^{(M)}_k\, \frac{\lambda^{(\llbracket k\rrbracket)}}{\phi\big(\Delta^{(\llbracket k\rrbracket)}_k\big)} \;=\; \sum_{m=1}^{M}\; \sum_{k\in\mathcal{K}^{(m)}} \Delta^{(M)}_k\, \frac{\lambda^{(m)}}{\phi\big(\Delta^{(m)}_k\big)}.$$

Let us compare the above bound to UCB, whose regret is

$$\frac{\mathbb{E}[R(\Lambda)]}{\log(n_\Lambda)} \;\asymp\; \sum_{k\notin\mathcal{K}_\star} \Delta^{(M)}_k\, \frac{\lambda^{(M)}}{\phi\big(\Delta^{(M)}_k\big)}.$$

We will first argue that MF-UCB does not do significantly worse than UCB in the worst case. Modulo the Δ_k^(M) log(n_Λ) terms, the regret for MF-UCB due to arm k is R_{k,MF-UCB} ≍ λ^(⟦k⟧)/φ(Δ_k^(⟦k⟧)). Consider any k ∈ K^(m), m < M, for which Δ_k^(m) > 2γ^(m). Since

$$\Delta^{(M)}_k \;\le\; \Delta^{(\llbracket k\rrbracket)}_k + 2\zeta^{(\llbracket k\rrbracket)} \;\lesssim\; \phi^{-1}\Big(\frac{\lambda^{(\llbracket k\rrbracket+1)}}{\lambda^{(\llbracket k\rrbracket)}}\,\phi\big(\Delta^{(\llbracket k\rrbracket)}_k\big)\Big),$$

a (loose) lower bound for UCB for the same quantity is R_{k,UCB} ≍ λ^(M)/φ(Δ_k^(M)) ≳ (λ^(M)/λ^(⟦k⟧+1)) R_{k,MF-UCB}. Therefore, for any k ∈ K^(m), m < M, MF-UCB is at most a constant times worse than UCB. However, whenever Δ_k^(⟦k⟧) is comparable to or larger than Δ_k^(M), MF-UCB outperforms UCB by a factor of λ^(⟦k⟧)/λ^(M) on arm k. As can be inferred from the theorem, most of the cost invested by MF-UCB on arm k is at the ⟦k⟧-th fidelity. For example, in Fig. 1, MF-UCB would not play the yellow arms K^(1) beyond the first fidelity (more than a constant number of times). Similarly, all green and red arms are played mostly at the second and third fidelities respectively. Only the blue arms are played at the fourth (most expensive) fidelity. On the other hand, UCB plays all arms at the fourth fidelity. Since lower fidelities are cheaper, MF-UCB achieves better regret than UCB.

It is essential to note here that Δ_k^(M) is small for arms in K^(M). These arms are close to the optimum and require more effort to distinguish than arms that are far away. MF-UCB, like UCB, invests log(n_Λ) λ^(M)/φ(Δ_k^(M)) capital in those arms. That is, the multi-fidelity setting does not help us significantly with the "hard-to-distinguish" arms. That said, in cases where K is very large and the set K^(M) is small, the bound for MF-UCB can be appreciably better than UCB.
4.2 Lower Bound
Since N ≥ n_Λ = ⌊Λ/λ^(M)⌋, any multi-fidelity strategy which plays a suboptimal arm a polynomial number of times at any fidelity after n time steps will have worse regret than MF-UCB (and UCB). Therefore, in our lower bound we will only consider strategies which satisfy the following condition.

Assumption 3. Consider the strategy after n plays at any fidelity. For any arm with Δ_k^(M) > 0, we have E[Σ_{m=1}^M T_{k,n}^(m)] ∈ o(n^a) for any a > 0.

For our lower bound we will consider a set of Bernoulli distributions ν_k^(m) for each fidelity m and each arm k with mean μ_k^(m). It is known that for Bernoulli distributions φ(ε) ∈ Θ(ε²) [14]. To state our lower bound we will further partition the set K^(m) into two sets K^(m)_✓ and K^(m)_✗ as follows,

$$\mathcal{K}^{(m)}_{\checkmark} = \{k \in \mathcal{K}^{(m)} : \Delta^{(\ell)}_k \le 0 \;\;\forall\, \ell < m\}, \qquad \mathcal{K}^{(m)}_{\times} = \{k \in \mathcal{K}^{(m)} : \exists\, \ell < m \text{ s.t. } \Delta^{(\ell)}_k > 0\}.$$
For any k ∈ K^(m) our lower bound, given below, is different depending on which set k belongs to.

Theorem 4 (Lower bound for R(Λ)). Consider any set of Bernoulli reward distributions with μ_⋆ ∈ (1/2, 1) and ζ^(1) < 1/2. Then, for any strategy satisfying Assumption 3 the following holds.

$$\liminf_{\Lambda\to\infty}\; \frac{\mathbb{E}[R(\Lambda)]}{\log(n_\Lambda)} \;\ge\; c \sum_{m=1}^{M} \Bigg[ \sum_{k\in\mathcal{K}^{(m)}_{\checkmark}} \Delta^{(M)}_k\, \frac{\lambda^{(m)}}{\big(\Delta^{(m)}_k\big)^2} \;+\; \sum_{k\in\mathcal{K}^{(m)}_{\times}} \Delta^{(M)}_k \min_{\ell\in\mathcal{L}_m(k)} \frac{\lambda^{(\ell)}}{\big(\Delta^{(\ell)}_k\big)^2} \Bigg]. \qquad (6)$$

Here c is a problem dependent constant. L_m(k) = {ℓ < m : Δ_k^(ℓ) > 0} ∪ {m} is the union of the m-th fidelity and all fidelities smaller than m for which Δ_k^(ℓ) > 0.
Comparing this with Theorem 2, we find that MF-UCB meets the lower bound on all arms k ∈ K^(m)_✓, ∀m. However, it may be loose on any k ∈ K^(m)_✗. The gap can be explained as follows. For k ∈ K^(m)_✗, there exists some ℓ < m such that 0 < Δ_k^(ℓ) < 2γ^(ℓ). As explained previously, the switching criterion of MF-UCB ensures that we do not invest too much effort trying to distinguish whether Δ_k^(ℓ) < 0, since Δ_k^(ℓ) could be very small. That is, we proceed to the next fidelity only if we cannot conclude Δ_k^(ℓ) ≲ γ^(ℓ). However, since λ^(m) > λ^(ℓ), it might be the case that λ^(ℓ)/(Δ_k^(ℓ))² < λ^(m)/(Δ_k^(m))² even though Δ_k^(m) > 2γ^(m). Consider for example a two fidelity problem where Δ ≜ Δ_k^(1) = Δ_k^(2) < 2√(λ^(1)/λ^(2)) ζ^(1). Here it makes sense to distinguish the arm as being suboptimal at the first fidelity with λ^(1) log(n_Λ)/Δ² capital instead of λ^(2) log(n_Λ)/Δ² at the second fidelity. However, MF-UCB distinguishes this arm at the higher fidelity, since Δ < 2γ^(1), and therefore does not meet the lower bound on this arm. While it might seem tempting to switch based on estimates for μ_k^(1), μ_k^(2), this idea is not desirable, as estimating μ_k^(2) for an arm requires log(n_Λ)/φ(Δ_k^(2)) samples at the second fidelity; this is exactly what we are trying to avoid for the majority of the arms via the multi-fidelity setting. We leave it as an open problem to resolve this gap.
                   K^(1)                K^(2)                ...   K^(m)                ...   K^(M)                K_⋆
E[T^(1)_{k,n}]     log(n)/φ(Δ_k^(1))    log(n)/φ(γ^(1))      ...   log(n)/φ(γ^(1))      ...   log(n)/φ(γ^(1))      log(n)/φ(γ^(1))
E[T^(2)_{k,n}]     O(1)                 log(n)/φ(Δ_k^(2))    ...   log(n)/φ(γ^(2))      ...   log(n)/φ(γ^(2))      log(n)/φ(γ^(2))
   ...              ...                  ...                        ...                       ...                   ...
E[T^(m)_{k,n}]     O(1)                 O(1)                 ...   log(n)/φ(Δ_k^(m))    ...   log(n)/φ(γ^(m))      log(n)/φ(γ^(m))
   ...              ...                  ...                        ...                       ...                   ...
E[T^(M)_{k,n}]     O(1)                 O(1)                 ...   O(1)                 ...   log(n)/φ(Δ_k^(M))    Ω(n)

Table 1: Bounds on the expected number of plays E[T_{k,n}^(m)] for each k ∈ K^(m) (columns) at each fidelity (rows) after n time steps (i.e. n plays at any fidelity) in MF-UCB.

5 Proof Sketches

5.1 Theorem 2
First we analyse MF-UCB after n plays (at any fidelity) and control the number of plays of an arm at various fidelities depending on which K^(m) it belongs to. To that end we prove the following.

Lemma 5 (Bounding E[T_{k,n}^(m)], informal). After n time steps of MF-UCB, for any k ∈ K,

$$T^{(\ell)}_{k,n} \lesssim \frac{\log(n)}{\phi(\gamma^{(\ell)})}\;\;\forall\,\ell < \llbracket k\rrbracket, \qquad \mathbb{E}\big[T^{(\llbracket k\rrbracket)}_{k,n}\big] \lesssim \frac{\log(n)}{\phi\big(\Delta^{(\llbracket k\rrbracket)}_k/2\big)}, \qquad \mathbb{E}\big[T^{(>\llbracket k\rrbracket)}_{k,n}\big] \le O(1).$$
The bounds above are illustrated in Table 1. Let R̃_k(Λ) = Σ_{m=1}^M λ^(m) Δ_k^(M) T_{k,N}^(m) be the regret incurred due to arm k, and let R̃_{k,n} = E[R̃_k(Λ) | N = n]. Using Lemma 5 we have,

$$\frac{\tilde{R}_{k,n}}{\Delta^{(M)}_k \log(n)} \;\lesssim\; \sum_{\ell=1}^{\llbracket k\rrbracket-1} \frac{\lambda^{(\ell)}}{\phi(\gamma^{(\ell)})} \;+\; \frac{\lambda^{(\llbracket k\rrbracket)}}{\phi\big(\Delta^{(\llbracket k\rrbracket)}_k/2\big)} \;+\; o(1). \qquad (7)$$
The next step will be to control the number of plays N within capital Λ, which will bound E[log(N)]. While Λ/λ^(1) is an easy bound, we will see that for MF-UCB, N will be on the order of n_Λ = Λ/λ^(M). For this we will use the following high probability bounds on T_{k,n}^(m).

Lemma 6 (Bounding P(T_{k,n}^(m) > x), informal). After n time steps of MF-UCB, for any k ∈ K,

$$\mathbb{P}\Big(T^{(\llbracket k\rrbracket)}_{k,n} \gtrsim x\cdot\frac{\log(n)}{\phi\big(\Delta^{(\llbracket k\rrbracket)}_k/2\big)}\Big) \;\lesssim\; \frac{1}{n^{x\rho-1}}, \qquad \mathbb{P}\big(T^{(>\llbracket k\rrbracket)}_{k,n} > x\big) \;\lesssim\; \frac{1}{x^{\rho-2}}.$$

We bound the number of plays at fidelities less than M via Lemma 6 and obtain n/2 > Σ_{m=1}^{M−1} Q_n^(m) with probability greater than, say, δ, for all n ≥ n_0. By setting δ = 1/log(Λ/λ^(1)), we get E[log(N)] ≲ log(n_Λ). The actual argument is somewhat delicate since δ depends on Λ.
This gives an expression for the regret due to arm k of the form (7), where n is replaced by n_Λ. Then we argue that the regret incurred by an arm k at fidelities less than ⟦k⟧ (the first term on the RHS of (7)) is dominated by λ^(⟦k⟧)/φ(Δ_k^(⟦k⟧)) (the second term). This is possible due to the design of the sets K^(m) and Assumption 1. While Lemmas 5 and 6 require only ρ > 2, we need ρ > 4 to ensure that Σ_{m=1}^{M−1} Q_n^(m) remains sublinear when we plug in the probabilities from Lemma 6. ρ > 2 is attainable with a more careful design of the sets K^(m). The Λ > Λ_0 condition is needed because initially MF-UCB is playing at lower fidelities and for small Λ, N could be much larger than n_Λ.
5.2 Theorem 4

First we show that for an arm k with Δ_k^(p) > 0 and Δ_k^(ℓ) ≤ 0 for all ℓ < p, any strategy should satisfy

$$R_k(\Lambda) \;\gtrsim\; \log(n_\Lambda)\,\Delta^{(M)}_k \min_{\ell \le p,\; \Delta^{(\ell)}_k > 0} \frac{\lambda^{(\ell)}}{\big(\Delta^{(\ell)}_k\big)^2},$$

where R_k is the regret incurred due to arm k. The proof uses a change of measure argument. The modification has Bernoulli distributions with mean μ̃_k^(ℓ), ℓ = 1, . . . , M, where μ̃_k^(ℓ) = μ_k^(ℓ) for all ℓ < m. Then we push μ̃_k^(ℓ) slightly above μ_⋆ − ζ^(ℓ) from ℓ = m all the way to M, where μ̃_k^(M) > μ_⋆. To control the probabilities after changing to μ̃_k^(ℓ) we use the conditions in Assumption 3. Then for k ∈ K^(m) we argue that λ^(ℓ)/(Δ_k^(ℓ))² ≳ λ^(m)/(Δ_k^(m))², using, once again, the design of the sets K^(m). This yields the separate results for k ∈ K^(m)_✓ and K^(m)_✗.

[Figure 2 panels: regret R(Λ) versus capital Λ for MF-UCB and UCB on problems titled K = 500, M = 3, costs = [1; 10; 100]; K = 500, M = 4, costs = [1; 5; 20; 50]; K = 200, M = 2, costs = [1; 10]; and K = 1000, M = 5, costs = [1; 3; 10; 30; 100]; plus two panels of the number of plays per arm index at fidelities m = 1, 2, 3.]

Figure 2: Simulation results on the synthetic problems. The first four panels compare UCB against MF-UCB on four synthetic problems; the title of each states K, M and the costs λ^(1), . . . , λ^(M). The first two used Gaussian rewards and the last two used Bernoulli rewards. The last two panels show the number of plays by UCB and MF-UCB on a K = 500, M = 3 problem with Gaussian observations (corresponding to the first panel).
6 Some Simulations on Synthetic Problems
We compare UCB against MF-UCB on a series of synthetic problems. The results are given in Figure 2. Due to space constraints, the details on these experiments are given in Appendix C. Note that MF-UCB outperforms UCB on all these problems. Critically, note that the gradient of the curve is also smaller than that for UCB, corroborating our theoretical insights. We have also illustrated the number of plays by MF-UCB and UCB at each fidelity for one of these problems. The arms are arranged in increasing order of μ_k^(M) values. As predicted by our analysis, most of the very suboptimal arms are only played at the lower fidelities. As lower fidelities are cheaper, MF-UCB is able to use more higher fidelity plays at arms close to the optimum than UCB.
7 Conclusion
We studied a novel framework for studying exploration-exploitation trade-offs when cheaper approximations to a desired experiment are available. We propose an algorithm for this setting, MF-UCB, based on upper confidence bound techniques. It uses the cheap lower fidelity plays to eliminate several bad arms and reserves the expensive high fidelity queries for a small set of arms with high expected reward, hence achieving better regret than strategies which ignore multi-fidelity information. We complement this result with a lower bound which demonstrates that MF-UCB is near optimal.
Other settings for bandit problems with multi-fidelity evaluations might warrant different definitions for the regret. For example, consider a gold mining robot where each high fidelity play is a real world experiment of the robot and incurs cost λ^(2), while a vastly cheaper computer simulation incurring λ^(1) approximates the robot's real world behaviour. In applications like this, λ^(1) ≪ λ^(2). However, unlike our setting, lower fidelity plays may not have any rewards (as simulations do not yield actual gold). Similarly, in clinical trials the regret due to a bad treatment at the high fidelity would be, say, a dead patient, whereas a bad treatment at a lower fidelity may not warrant a large penalty. These settings are quite challenging and we wish to work on them going forward.
References
[1] Rajeev Agrawal. Sample Mean Based Index Policies with O(log n) Regret for the Multi-Armed Bandit Problem. Advances in Applied Probability, 1995.
[2] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation Tradeoff Using Variance Estimates in Multi-armed Bandits. Theor. Comput. Sci., 2009.
[3] Peter Auer. Using Confidence Bounds for Exploitation-exploration Trade-offs. J. Mach. Learn. Res., 2003.
[4] Yoram Baram, Ran El-Yaniv, and Kobi Luz. Online choice of active learning algorithms. The Journal of Machine Learning Research, 5:255-291, 2004.
[5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 2012.
[6] Mark Cutler, Thomas J. Walsh, and Jonathan P. How. Reinforcement Learning with Multi-Fidelity Simulators. In IEEE International Conference on Robotics and Automation (ICRA), 2014.
[7] D. Huang, T.T. Allen, W.I. Notz, and R.A. Miller. Sequential kriging optimization using multiple-fidelity evaluations. Structural and Multidisciplinary Optimization, 2006.
[8] Kirthevasan Kandasamy, Gautam Dasarathy, Junier Oliva, Jeff Schneider, and Barnabás Póczos. Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations. In Advances in Neural Information Processing Systems, 2016.
[9] T. L. Lai and Herbert Robbins. Asymptotically Efficient Adaptive Allocation Rules. Advances in Applied Mathematics, 1985.
[10] Dev Rajnarayan, Alex Haas, and Ilan Kroo. A multifidelity gradient-free optimization method and application to aerodynamic design. In AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, 2008.
[11] Herbert Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 1952.
[12] W. R. Thompson. On the Likelihood that one Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika, 1933.
[13] Long Tran-Thanh, Lampros C. Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings, and Peter Key. Efficient Regret Bounds for Online Bid Optimisation in Budget-Limited Sponsored Search Auctions. In UAI, 2014.
[14] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Publishing Company, Incorporated, 2010.
[15] Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, and Tie-Yan Liu. Thompson Sampling for Budgeted Multi-Armed Bandits. In IJCAI, 2015.
[16] Chicheng Zhang and Kamalika Chaudhuri. Active Learning from Weak and Strong Labelers. In Advances in Neural Information Processing Systems, 2015.
A state-space model of cross-region dynamic connectivity in MEG/EEG
Ying Yang, Elissa M. Aminoff, Michael J. Tarr, Robert E. Kass
Carnegie Mellon University; Fordham University
[email protected], {eaminoff@fordham, michaeltarr@cmu, [email protected]}.edu
Abstract
Cross-region dynamic connectivity, which describes the spatio-temporal dependence of neural activity among multiple brain regions of interest (ROIs), can
provide important information for understanding cognition. For estimating such
connectivity, magnetoencephalography (MEG) and electroencephalography (EEG)
are well-suited tools because of their millisecond temporal resolution. However,
localizing source activity in the brain requires solving an under-determined linear
problem. In typical two-step approaches, researchers first solve the linear problem
with generic priors assuming independence across ROIs, and secondly quantify
cross-region connectivity. In this work, we propose a one-step state-space model to
improve estimation of dynamic connectivity. The model treats the mean activity in
individual ROIs as the state variable and describes non-stationary dynamic dependence across ROIs using time-varying auto-regression. Compared with a two-step
method, which first obtains the commonly used minimum-norm estimates of source
activity, and then fits the auto-regressive model, our state-space model yielded
smaller estimation errors on simulated data where the model assumptions held.
When applied on empirical MEG data from one participant in a scene-processing
experiment, our state-space model also demonstrated intriguing preliminary results,
indicating leading and lagged linear dependence between the early visual cortex
and a higher-level scene-sensitive region, which could reflect feedforward and
feedback information flow within the visual cortex during scene processing.
1 Introduction
Cortical regions in the brain are anatomically connected, and the joint neural activity in connected
regions are believed to underlie various perceptual and cognitive functions. Besides anatomical
connectivity, researchers are particularly interested in the spatio-temporal statistical dependence
across brain regions, which may vary quickly in different time stages of perceptual and cognitive
processes. Descriptions of such spatio-temporal dependence, which we call dynamic connectivity, not
only help to model the joint neural activity, but also provide insights to understand how information
flows in the brain. To estimate dynamic connectivity in human brains, we need non-invasive
techniques to record neural activity with high temporal resolution. Magnetoencephalography (MEG)
and electroencephalography (EEG) are well-suited tools for such purposes, in that they measure
changes of magnetic fields or scalp voltages, which are almost instantaneously induced by electric
activity of neurons.
However, spatially localizing the source activity in MEG/EEG is challenging. Assuming the brain
source space is covered by m discrete points, each representing an electric current dipole generated
by the activity of the local population of neurons, then the readings of n MEG/EEG sensors can
be approximated with a linear transformation of the m-dimensional source activity. The linear
transformation, known as the forward model, is computed using Maxwell equations given the relative
positions of sensors with respect to the scalp (1). Typically m ∼ 10³-10⁴ whereas n ∼ 10² ≪ m, so the source localization problem (estimating the source activity from the sensor data) is under-determined. Previous work has exploited various constraints or priors for regularization, including an L2 norm penalty (2; 3), sparsity-inducing penalties (4), and priors that encourage local spatial smoothness or temporal smoothness (5; 6; 7; 8).
When estimating dynamic connectivity from MEG/EEG recordings, especially among several predefined regions of interest (ROIs), researchers often use a two-step procedure: Step 1, estimating
source activity using one of the common source localization methods, for example, the minimum
norm estimate (MNE) that penalizes squared L2 norm (2); Step 2, extracting the mean activity of
source points within each ROI, and then quantifying the statistical dependence among the ROIs,
using various methods ranging from pairwise correlations of time series to Granger causality and
other extensions (9). However, most of the popular methods in Step 1 do not assume dependence
across ROIs. For example, MNE assumes that all source points have independent and identical priors.
Even in methods that assume auto-regressive structures of source activity (6; 8), only dependence
on the one-step-back history of a source point itself and its adjacent neighbors is considered, while
long-range dependence across ROIs is ignored. Biases due to these assumptions in Step 1 can not be
adjusted in Step 2 and thus may result in additional errors in the connectivity analysis.
Alternatively, one can combine source localization and connectivity analysis jointly in one step.
Two pioneering methods have explored this direction. The dynamical causal modeling (DCM
(10)) assumes the source activity includes only one single current dipole in each ROI, and the ROI
dipoles are modeled with a nonlinear, neurophysiology-informed dynamical system, where timeinvariant coefficients describe how the current activity in each ROI is dependent on the history of all
ROIs. Another method (11) does not use pre-defined ROIs, but builds a time-invariant multivariate
auto-regressive (AR) model of all m source points, where the AR coefficients are constrained by
structural white-matter connectivity and sparsity-inducing priors. Both methods use static parameters
to quantify connectivity, but complex perceptual or cognitive processes may involve fast changes of
neural activity, and correspondingly require time-varying models of dynamic connectivity.
Here, we propose a new one-step state-space model, designed to estimate dynamic spatio-temporal
dependence across p given ROIs directly from MEG/EEG sensor data. We define the mean activity
of the source points within each individual ROI as our p-dimensional state variable, and use a time-varying multivariate auto-regressive model to describe how much the activity in each ROI is predicted
by the one-step-back activity in the p ROIs. More specifically, we utilize the common multi-trial
structure of MEG/EEG experiments, which gives independent observations at each time point and
facilitates estimating the time-varying auto-regressive coefficients. Given the state variable at each
time point, activities of source points within each ROI are modeled as independent Gaussian variables,
with the ROI activity as the mean and a shared ROI-specific variance; activities of source points
outside of all ROIs are also modeled as independent Gaussian variables with a zero mean and a shared
variance. Finally, along with the forward model that projects source activity to the sensor space, we
build a direct relationship between the state variables (ROI activities) and the sensor observations,
yielding a tractable Kalman filter model. Compared with the previous one-step methods (10; 11),
the main novelty of our model is the time-varying description of connectivity. We note that the
previous methods and our model all utilize specific assumptions to regularize the under-determined
source localization problem. These assumptions may not always be satisfied universally. However,
we expect our model to serve as a good option in the one-step model toolbox for researchers, when
the assumptions are reasonably met. In this paper, we mainly compare our model with a two-step
procedure using the commonly applied MNE method, on simulated data and in a real-world MEG
experiment.
2 Model
Model formulation. In MEG/EEG experiments, researchers typically acquire multiple trials of the same condition and treat them as independent and identically distributed (i.i.d.) samples. Each trial includes a fixed time window of (T+1) time points, aligned to the stimulus onset. Assuming there are n sensors and q trials, we use y_t^(r) to denote the n-dimensional sensor readings at time t (t = 0, 1, 2, . . . , T) in the r-th trial (r = 1, 2, . . . , q). To be more succinct, when alluding to the sensor readings in a generic trial without ambiguity, we drop the superscript (r) and use y_t instead; the same omission works for source activity and the latent ROI activity described below. We also assume the mean of sensor data across trials is an n × (T+1) zero matrix; this assumption can be easily met by subtracting the n × (T+1) sample mean across trials from the data.
MEG and EEG are mainly sensitive to electric currents in the pyramidal cells, which are perpendicular to the folded cortical surfaces (12). Here we define the source space as a discrete mesh of m source points distributed on the cortical surfaces, where each source point represents an electric current dipole along the local normal direction. If we use an m-dimensional vector J_t to denote the source activity at time t in a trial, then the corresponding sensor data y_t has the following form

sensor model (forward model): $y_t = G J_t + e_t, \quad e_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, Q_e)$ (1)

where the n × m matrix G describes the linear projection of the source activity into the sensor space, and the sensor noise e_t is modeled as temporally independent draws from an n-dimensional Gaussian distribution N(0, Q_e). The noise covariance Q_e can be pre-measured using recordings in the empty room or in a baseline time window before experimental tasks.
Standard source localization methods aim to solve for J_t given y_t, G and Q_e. In contrast, our model aims to estimate dynamic connectivity among p pre-defined regions of interest (ROIs) in the source space (see Figure 1 for an illustration). We assume that at each time point in each trial, the current dipoles of the source points within each ROI share a common mean. Given p ROIs, we have a p-dimensional state variable u_t at time t in a trial, where each element represents the mean activity in one ROI. The state variable u_t follows a time-varying auto-regressive model of order 1,

ROI model: $u_0 \sim \mathcal{N}(0, Q_0), \qquad u_t = A_t u_{t-1} + \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, Q), \quad \text{for } t = 1, \dots, T,$ (2)

where Q_0 is a p × p covariance matrix at t = 0, and the A_t's are the time-varying auto-regressive coefficients, which describe lagged dependence across ROIs. The p-dimensional Gaussian noise term ε_t is independent of the past, with a zero mean and a covariance matrix Q.
[Figure 1: Illustration of the one-step state-space model, linking the state space (ROI means), the source space, and the sensor space through the ROI model, the source model, and the sensor model.]
Now we describe how the source activity is distributed given the state variable (i.e., the ROI means). Below, we denote the l-th element in a vector a by a[l], and the entry in the i-th row and j-th column of a matrix L by L[i, j]. Let A_i be the set of indices of source points in the i-th ROI (i = 1, 2, . . . , p); then for any l ∈ A_i, the activity of the l-th source point at time t in a trial (scalar J_t[l]) is modeled as the ROI mean plus noise,

$J_t[l] = u_t[i] + w_t[l], \quad w_t[l] \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_i^2), \quad \forall\, l \in \mathcal{A}_i,$ (3)

where w_t denotes the m-dimensional noise on the m source points given the ROI means u_t, at time t in the trial. Note that the mean u_t[i] is shared by all source points within the i-th ROI, and the noise term w_t[l] given the mean is independent and identically distributed as N(0, σ_i²) for all source points within the ROI, at any time in any trial. Additionally, we denote the indices of source points outside of any ROIs by A_0 = {l : l ∉ ∪_{i=1}^p A_i}, and similarly, for each such source point, we also assume its activity at time t in each trial has a Gaussian distribution, but with a zero mean and a variance σ_0²,

$J_t[l] = 0 + w_t[l], \quad w_t[l] \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_0^2), \quad \forall\, l \in \mathcal{A}_0.$ (4)

We can concisely re-write (3) and (4) as

source model: $J_t = L u_t + w_t, \quad w_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, Q_J),$ (5)
where L is a 0/1 m × p matrix indicating whether a source point is in an ROI (i.e., L[l, i] = 1 if l ∈ A_i and L[l, i] = 0 otherwise). The covariance Q_J is an m × m diagonal matrix, where each diagonal element is one among {σ_0², σ_1², . . . , σ_p²}, depending on which region the corresponding source point is in; that is, Q_J[l, l] = σ_0² if l ∈ A_0 (outside of any ROIs), Q_J[l, l] = σ_1² if l ∈ A_1, Q_J[l, l] = σ_2² if l ∈ A_2, and so on.

Combining the conditional distributions of (y_t | J_t) given by (1) and (J_t | u_t) given by (5), we can eliminate J_t (by integrating over all values of J_t) and obtain the following conditional distribution for (y_t | u_t):

$y_t = C u_t + \eta_t, \quad \eta_t \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, R), \quad \text{where } C = GL, \;\; R = Q_e + G Q_J G'.$ (6)
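To make the generative assumptions in (1), (2), (5) and (6) concrete, here is a minimal sketch that simulates one trial of sensor data from the model. The dimensions and the random G and ROI labels are toy stand-ins for illustration, not the empirical forward model or source space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, T = 30, 200, 2, 20                      # sensors, sources, ROIs, time steps

G = rng.normal(size=(n, m))                      # toy forward model
labels = rng.integers(0, p + 1, size=m)          # 0 = outside any ROI; 1..p = ROI id
L = np.zeros((m, p)); L[labels > 0, labels[labels > 0] - 1] = 1.0
sigma2 = rng.gamma(2.0, 1.0, size=p + 1)         # sigma_0^2, ..., sigma_p^2
Q_J = np.diag(sigma2[labels])                    # source noise covariance, eq (5)
Q_e = 0.1 * np.eye(n)                            # sensor noise covariance, eq (1)
Q0 = Q = np.eye(p)                               # state covariances, eq (2)
A = [0.5 * np.eye(p) + 0.1 * rng.normal(size=(p, p)) for _ in range(T)]

u = rng.multivariate_normal(np.zeros(p), Q0)     # u_0
Y = []
for t in range(T + 1):
    if t > 0:
        u = A[t - 1] @ u + rng.multivariate_normal(np.zeros(p), Q)   # eq (2)
    J = L @ u + rng.multivariate_normal(np.zeros(m), Q_J)            # eq (5)
    Y.append(G @ J + rng.multivariate_normal(np.zeros(n), Q_e))      # eq (1)
Y = np.array(Y)                                  # (T+1) x n sensor time series
```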
where G' is the transpose of G. Putting (2) and (6) together, we have a time-varying Kalman filter model, where the observed sensor data from q trials {y_t^(r)}_{t=0,r=1}^{T,q} and the parameters Q_e, G and L are given, and the unknown set of parameters Θ = {{A_t}_{t=1}^T, Q_0, Q, {σ_i²}_{i=0}^p} is to be estimated. Among these parameters, we are mainly interested in {A_t}_{t=1}^T, which describes the spatio-temporal dependence. Let f(·) denote probability density functions in general. We can add optional priors on Θ (denoted by f(Θ)) to regularize the parameters. For example, we can use f(Θ) = f({A_t}_{t=1}^T) ∝ exp(−(λ_0 Σ_{t=1}^T ‖A_t‖_F² + λ_1 Σ_{t=2}^T ‖A_t − A_{t−1}‖_F²)), which penalizes the squared Frobenius norm (‖·‖_F) of the A_t's and encourages temporal smoothness.
Fitting the parameters using the expectation-maximization (EM) algorithm. To estimate Θ, we maximize the objective function log f({y_t^(r)}_{t=0,r=1}^{T,q}; Θ) + log f(Θ) using the standard expectation-maximization (EM) algorithm (13). Here log f({y_t^(r)}_{t=0,r=1}^{T,q}; Θ) is the marginal log-likelihood of the sensor data, and log f(Θ) is the logarithm of the prior. We alternate between an E-step and an M-step.

In the E-step, given an estimate of the parameters (denoted by Θ̂), we use the forward and backward steps in the Kalman smoothing algorithm (13) to obtain the posterior mean of u_t, u_{t|T}^(r) ≜ E(u_t | {y_τ^(r)}_{τ=0}^T), the posterior covariance of u_t, P_{t|T}^(r) ≜ cov(u_t | {y_τ^(r)}_{τ=0}^T), and the posterior cross covariance of u_t and u_{t−1}, P_{(t,t−1)|T}^(r) ≜ cov(u_t, u_{t−1} | {y_τ^(r)}_{τ=0}^T), for each t in each trial r. Here E(·) and cov(·) denote the expectation and the covariance. More details are in the Appendix and (13).
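For reference, a compact sketch of these forward-backward recursions for one trial follows (a standard Kalman filter with a Rauch-Tung-Striebel backward pass, written for time-varying A_t; per the model above, the observation matrix C and covariance R are constant in time, and the initial state mean is zero).

```python
import numpy as np

def kalman_smoother(Y, A, C, R, Q, Q0):
    """E-step quantities for one trial: u_{t|T}, P_{t|T}, P_{(t,t-1)|T}.

    Y: (T+1) x n observations; A: list of T matrices A_1, ..., A_T;
    C: n x p; R: n x n; Q, Q0: p x p.
    """
    T1, p = Y.shape[0], Q0.shape[0]
    u_p, P_p = np.zeros((T1, p)), np.zeros((T1, p, p))   # predicted moments
    u_f, P_f = np.zeros((T1, p)), np.zeros((T1, p, p))   # filtered moments
    for t in range(T1):                                  # forward (filter) pass
        if t == 0:
            u_p[0], P_p[0] = np.zeros(p), Q0
        else:
            u_p[t] = A[t - 1] @ u_f[t - 1]
            P_p[t] = A[t - 1] @ P_f[t - 1] @ A[t - 1].T + Q
        S = C @ P_p[t] @ C.T + R
        K = P_p[t] @ C.T @ np.linalg.inv(S)              # Kalman gain
        u_f[t] = u_p[t] + K @ (Y[t] - C @ u_p[t])
        P_f[t] = P_p[t] - K @ C @ P_p[t]
    u_s, P_s = u_f.copy(), P_f.copy()                    # backward (smoother) pass
    P_cross = np.zeros((T1, p, p))                       # P_{(t,t-1)|T} stored at index t
    for t in range(T1 - 2, -1, -1):
        Jt = P_f[t] @ A[t].T @ np.linalg.inv(P_p[t + 1]) # smoother gain
        u_s[t] = u_f[t] + Jt @ (u_s[t + 1] - u_p[t + 1])
        P_s[t] = P_f[t] + Jt @ (P_s[t + 1] - P_p[t + 1]) @ Jt.T
        P_cross[t + 1] = P_s[t + 1] @ Jt.T               # cov(u_{t+1}, u_t | Y)
    return u_s, P_s, P_cross
```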
In the M-step, we maximize the expectation of log f({y_t^(r)}_{t=0,r=1}^{T,q}, {u_t^(r)}_{t=0,r=1}^{T,q}; Θ) + log f(Θ) with respect to the posterior distribution f̂ ≜ f({u_t^(r)}_{t=0,r=1}^{T,q} | {y_t^(r)}_{t=0,r=1}^{T,q}; Θ̂). Let tr(·) and det(·) denote the trace and the determinant of a matrix. Given the results of the E-step based on Θ̂, the M-step is equivalent to minimizing three objectives separately,

$$\min_{\Theta}\Big(-\mathbb{E}_{\hat f}\big[\log f(\{y_t^{(r)}\},\{u_t^{(r)}\};\Theta)\big] - \log f(\Theta)\Big) \;\Longleftrightarrow\; \min_{Q_0} L_1 \;+ \min_{Q,\{A_t\}_{t=1}^T} L_2 \;+ \min_{\{\sigma_i^2\}_{i=0}^p} L_3. \qquad (7)$$

$$L_1(Q_0) = q\,\log\det(Q_0) + \mathrm{tr}\big(Q_0^{-1}B_0\big), \quad \text{where } B_0 = \sum_{r=1}^{q}\Big(P_{0|T}^{(r)} + u_{0|T}^{(r)}\big(u_{0|T}^{(r)}\big)'\Big). \qquad (8)$$

$$L_2(Q,\{A_t\}_{t=1}^T) = qT\,\log\det(Q) + \mathrm{tr}\Big(Q^{-1}\sum_{t=1}^{T}\big(B_{1t} - A_tB_{2t}' - B_{2t}A_t' + A_tB_{3t}A_t'\big)\Big) - \log f(\{A_t\}_{t=1}^T), \qquad (9)$$

where B_{1t} = Σ_{r=1}^q (P_{t|T}^(r) + u_{t|T}^(r) (u_{t|T}^(r))'), B_{2t} = Σ_{r=1}^q (P_{(t,t−1)|T}^(r) + u_{t|T}^(r) (u_{(t−1)|T}^(r))'), and B_{3t} = Σ_{r=1}^q (P_{(t−1)|T}^(r) + u_{(t−1)|T}^(r) (u_{(t−1)|T}^(r))').

$$L_3(\{\sigma_i^2\}_{i=0}^p) = q(T+1)\,\log\det(R) + \mathrm{tr}\big(R^{-1}B_4\big), \quad \text{where } R = Q_e + GQ_JG', \qquad (10)$$

and B_4 = Σ_{r=1}^q Σ_{t=0}^T [(y_t^(r) − C u_{t|T}^(r))(y_t^(r) − C u_{t|T}^(r))' + C P_{t|T}^(r) C'].

The optimization for the three separate objectives is relatively easy.
• For L_1, the analytical solution is Q_0 ← (1/q) B_0.
• For L_2, optimization over {A_t}_{t=1}^T and Q can be done in alternation. Given {A_t}_{t=1}^T, Q has the analytical solution Q ← (1/(qT)) Σ_{t=1}^T (B_{1t} − A_t B_{2t}' − B_{2t} A_t' + A_t B_{3t} A_t'). Given Q, we use gradient descent with back-tracking line search (14) to solve for {A_t}_{t=1}^T, where the gradients are ∂L_2/∂A_t = 2Q^(−1)(−B_{2t} + A_t B_{3t}) + 2D_t, with D_t = λ_1(2A_t − A_{t+1} − A_{t−1}) + λ_0 A_t for t = 2, . . . , T−1, D_t = λ_1(A_1 − A_2) + λ_0 A_1 for t = 1, and D_t = λ_1(A_T − A_{T−1}) + λ_0 A_T for t = T (see the sketch after this list).
• For L_3, we can also use gradient descent to solve for σ_i, with the gradient ∂L_3/∂σ_i = tr((∂L_3/∂R)' ∂R/∂σ_i), where ∂L_3/∂R = q(T+1) R^(−1) − R^(−1) B_4 R^(−1) and ∂R/∂σ_i = 2σ_i G[:, l ∈ A_i] G[:, l ∈ A_i]'. Here G[:, l ∈ A_i] denotes the columns in G corresponding to source points in the i-th region.
Here G[:, l ? Ai ] denotes the columns in G corresponding to source points in the ith region.
Because the E-M algorithm only guarantees to find a local optimum, we use multiple initializations,
(r)
and select the solution that yields the best objective function log f ({y t }T,q
t=0,r=1 ) + log f (?) (see
(r)
the appendix on computing log f ({y t }T,q
t=0,r=1 ; ?)). The implementation of the model and the
E-M algorithm in Python is available at github.com/YingYang/MEEG_connectivity.
Visualizing the connectivity. We visualize the lagged linear dependence between any pair of ROIs. According to the auto-regressive model in (2), given {A_t}_{t=1}^T, we can characterize the linear dependence of the ROI means at time t+h on those at time t by

$$u_{t+h} = \tilde{A}_{t,t+h}\, u_t + \text{noise independent of } u_t,$$

where Ã_{t,t+h} = Π_{τ=t+h}^{t+1} A_τ, and in the product Π_{τ=t+h}^{t+1} A_τ the index τ decreases from t+h to t+1. For two ROIs indexed by i_1 and i_2, Ã_{t,t+h}[i_1, i_2] indicates the linear dependence of the activity in ROI i_1 at time t+h on the activity in ROI i_2 at time t, where the linear dependence on the activity at time t in other ROIs and in ROI i_1 itself is accounted for; similarly, Ã_{t,t+h}[i_2, i_1] indicates the linear dependence of the activity in ROI i_2 at time t+h on the activity in ROI i_1 at time t. Therefore, we can create a T × T matrix Φ for any pair of ROIs (i_1 and i_2) to describe their linear dependence at any time lag: Φ[t, t+h] = Ã_{t,t+h}[i_2, i_1] (i_1 leading i_2) and Φ[t+h, t] = Ã_{t,t+h}[i_1, i_2] (i_2 leading i_1), for t = 1, . . . , T and h = 1, . . . , T − t − 1.
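A short helper (our own, mirroring the definitions above) makes this construction explicit: it accumulates Ã_{t,t+h} by left-multiplying successive A_τ's, with τ running down from t+h to t+1, and fills the two triangles of Φ for a chosen pair of ROIs.

```python
import numpy as np

def lagged_dependence(A, i1, i2):
    """Phi for one ROI pair; A[t-1] holds A_t (p x p), t = 1, ..., T;
    i1, i2 are 1-based ROI indices."""
    T = len(A)
    p = A[0].shape[0]
    Phi = np.zeros((T, T))
    for t in range(1, T):                        # start time t (1-based)
        Atilde = np.eye(p)
        for h in range(1, T - t + 1):            # lag h with t + h <= T
            Atilde = A[t + h - 1] @ Atilde       # left-multiply A_{t+h}
            Phi[t - 1, t + h - 1] = Atilde[i2 - 1, i1 - 1]   # i1 leading i2
            Phi[t + h - 1, t - 1] = Atilde[i1 - 1, i2 - 1]   # i2 leading i1
    return Phi
```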
3 Results
To examine whether our state-space model can improve dynamic connectivity estimation empirically, compared with the two-step procedure, we applied both approaches on simulated and real MEG data. We implemented the following two-step method as a baseline for comparison. In Step 1, we applied the minimum-norm estimate (MNE (2)), one of the most commonly used source localization methods, to estimate J_t for each time point in each trial. This is a Bayesian estimate assuming an L2 prior on the source activity. Given G, Q_e, a prior J_t ∼ N(0, (1/λ)I) with λ > 0, and the corresponding y_t, the estimate is Ĵ_t = G'(GG' + λQ_e)^(−1) y_t. We averaged the MNE estimates for source points within each ROI, at each time point and in each trial respectively, and treated the averages as an estimate of the ROI means {u_t^(r)}_{t=0,r=1}^{T,q}. In Step 2, according to the auto-regressive model in (2), we estimated Q_0, {A_t}_{t=1}^T and Q by maximizing the sum of the log-likelihood and the logarithm of the prior (log f({u_t^(r)}_{t=0,r=1}^{T,q}) + log f({A_t}_{t=1}^T)); the maximization is very similar to the optimization for L_2 in the M-step. Details are deferred to the Appendix.
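In matrix form, Step 1 above is a one-line computation. The sketch below (our own minimal version, not the MNE-Python implementation) maps sensor data to MNE source estimates and then averages them within ROIs.

```python
import numpy as np

def two_step_roi_means(Y, G, Qe, roi_index, lam=1.0):
    """Step 1 of the two-step baseline.

    Y: trials x time x sensors; G: n x m forward model; Qe: n x n noise cov;
    roi_index: list of p index arrays, one per ROI.
    Returns U_hat: trials x time x p ROI means.
    """
    n = G.shape[0]
    W = G.T @ np.linalg.solve(G @ G.T + lam * Qe, np.eye(n))  # MNE inverse operator
    J_hat = Y @ W.T                                           # source estimates
    return np.stack([J_hat[..., idx].mean(axis=-1) for idx in roi_index], axis=-1)
```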
3.1 Simulation
We simulated MEG sensor data according to our model assumptions. The source space was defined as m ≈ 5000 source points covering the cortical surfaces of a real brain, with 6.2 mm spacing on average, and n = 306 sensors were used. The sensor noise covariance matrix Q_e was estimated from real data. Two bilaterally merged ROIs were used: the pericalcarine area (ROI 1), and the parahippocampal gyri (ROI 2) (see Figure 2a). We selected these two regions because they were of interest when we applied the models on the real MEG data (see Section 3.2). We generated the auto-regressive coefficients for T = 20 time points, where for each A_t, the diagonal entries were set to 0.5, and the off-diagonal entries were generated as a Morlet function multiplied by a random scalar drawn uniformly from the interval (−1, 1) (see Figure 2b for an example). The covariances Q_0 and Q were random positive definite matrices, whose diagonal entries were a constant a. The variances of the source space noise {σ_i²}_{i=0}^p were randomly drawn from a Gamma distribution with the shape parameter being 2 and the scale parameter being 1. We used two different values, a = 2 and a = 5, respectively, where the relative strength of the ROI means compared with the source variances {σ_i²}_{i=0}^p was different. Each simulation had q = 200 trials, and 5 independent simulations for each a value were generated. The unit of the source activity was nanoampere-meter (nAm).
When running the two-step MNE method for each simulation, a wide range of penalization values (λ) were used. When fitting the state-space model, multiple initializations were used, including one of the two-step MNE estimates. In the prior of {A_t}_{t=1}^T, we set λ_0 = 0 and λ_1 = 0.1. For the fitted parameters {A_t}_{t=1}^T and Q, we defined the relative error as the Frobenius norm of the difference between the estimate and the true parameter, divided by the Frobenius norm of the true parameter (e.g., for the true Q and the estimate Q̂, the relative error was ‖Q̂ − Q‖_F / ‖Q‖_F). For the two-step MNE estimates with different λ's, the smallest relative error was selected for comparison. Figures 2c and 2d show the relative errors and paired differences in errors between the two methods; in these simulations, the state-space model yielded smaller estimation errors than the two-step MNE method.
[Figure 2: Simulation results. (a) Illustration of the two ROIs (ROI 1 and ROI 2, left and right hemispheres). (b) The auto-regressive coefficients {A_t}_{t=1}^T of T = 20 time points in one example simulation (a = 5); here A[:, i_1, i_2] indicates the time-varying coefficient A_t[i_1, i_2], for i_1, i_2 = 1, 2 (legends: truth (blue), true values; ss (green), estimates by the state-space model; mne (red), estimates by the two-step MNE method). (c) and (d) Comparison of the state-space model (ss) with the two-step MNE method (mne) in relative errors of {A_t}_{t=1}^T (c) and Q (d). The error bars show standard errors across individual simulations.]
3.2 Real MEG data on scene processing
We also applied our state-space model and the two-step MNE method on real MEG data, to explore
the dynamic connectivity in the visual cortex during scene processing. It is hypothesized that the
ventral visual pathway, which underlies recognition of what we see, is organized in a hierarchical
manner: along the pathway, regions at each level of the hierarchy receive inputs from previous levels, and perform transformations to extract features that are more and more related to semantics (e.g., categories of objects/scenes) (15). Besides such feedforward processing, a large number of
top-down anatomical connections along the hypothesized hierarchy also suggest feedback effects
6
(16). Evidence for both directions has been reported previously (17; 18). However, details of the
dynamic information flow during scene processing, such as when and how significant the feedback
effect is, is not well understood. Here, as an exploratory step, we estimate dynamic connectivity
between two regions in the ventral pathway: the early visual cortex (EVC) at the lowest level (in the
pericalcarine areas), which is hypothesized to process low-level features such as local edges, and
the parahippocampal place area (PPA), which is a scene-sensitive region on the higher level of the
hierarchy and has been implicated in processing semantic information (19).
The 306-channel MEG data were recorded while a human participant was viewing 362 photos of
various scenes. Each image was presented for 200 ms and repeated 5 times across the session,
and data across the repetitions were averaged, resulting in q = 362 observations. The data was
down-sampled from a sampling rate of 1 kHz to 100 Hz, and cropped within ?100 ? 700 ms, where
0 ms marked the stimulus onset. Together, we had T + 1 = 80 time points (see the appendix for
more preprocessing details). Given the data, we estimated the dynamic connectivity between the
neural responses to the 362 images in the two ROIs (EVC and PPA), using our state-space model
and the two-step MNE method. We created a source space including m of approximately 5000 source points for the
participant. In the prior of {At}, t = 1, ..., T, we set the two hyperparameters to 0 and 1.0; in the
two-step MNE method, we used the default value of the regularization tuning parameter for single-trial
data in the MNE-Python software (20). After fitting Q0, {At} and Q, we computed the lagged-dependence
matrix defined in Section 2, to visualize the lagged linear dependence between the two ROIs (EVC and
PPA). We also bootstrapped the 362 observations 27 times to obtain standard deviations of its entries,
and then computed a z-score for each entry, defined as the ratio between the estimated value and the
bootstrapped standard deviation. Note that the sign of the source activity only indicates the direction
of the electric current, so negative entries of the matrix are as meaningful as positive ones. We ran
two-tailed z-tests on the z-scores (assuming a standard normal null distribution); then we plotted the
absolute values of the z-scores that passed a threshold where the p-value < 0.05/T^2, using the
Bonferroni correction for T^2 comparisons in all the entries (Figure 3). Larger absolute values indicate
more significant non-zero entries, and more significant lagged linear dependence. As illustrated in
Figure 3a, the lower right triangle of the matrix indicates the linear dependence of PPA activity on
previous EVC activity (EVC leading PPA, lower- to higher-level), whereas the upper left triangle
indicates the linear dependence of EVC activity on previous PPA activity (PPA leading EVC, higher- to
lower-level).
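As a concrete illustration of this testing procedure, the following sketch (in Python, assuming NumPy and SciPy; the fitting routine and its interface are placeholders standing in for refitting the state-space model, not the paper's code) recomputes the matrix on resampled observations and applies the Bonferroni-corrected two-tailed z-test:

import numpy as np
from scipy.stats import norm

def bootstrap_z_scores(trials, fit_matrix, n_boot=27, alpha=0.05, seed=0):
    """Bootstrap z-scores for the entries of the lagged-dependence matrix.

    trials:     array of shape (q, ...) holding the q = 362 observations.
    fit_matrix: callable mapping a set of trials to a (T, T) matrix estimate.
    """
    rng = np.random.default_rng(seed)
    q = trials.shape[0]
    estimate = fit_matrix(trials)                 # point estimate
    boots = np.stack([fit_matrix(trials[rng.integers(0, q, size=q)])
                      for _ in range(n_boot)])    # resample with replacement
    z = estimate / boots.std(axis=0, ddof=1)      # estimate / bootstrap s.d.
    T = estimate.shape[0]
    p = 2 * norm.sf(np.abs(z))                    # two-tailed z-test
    return z, p < alpha / T**2                    # Bonferroni over T^2 entries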
[Figure 3 panels: (a) illustration of the ROIs (EVC and PPA, left and right hemispheres) and of the two triangular parts of the lagged-dependence matrix (upper left: PPA leading EVC; lower right: EVC leading PPA); (b) results by the state-space model; (c) results by the two-step MNE. Axes: EVC time (ms) versus PPA time (ms), 0-700 ms; color scale 0-10.]
Figure 3: Results from real MEG data on scene processing. (a), illustration of ROIs and the triangular
parts of the lagged-dependence matrix. (b) and (c), thresholded z-scores of the matrix by the
state-space model (b) and by the two-step MNE method (c).
Figure 3b and 3c show the thresholded absolute values of the z-scores by the state-space model and
the two-step MNE method. In Figure 3b by the state-space model, we observed clusters indicating
significant non-zero lagged dependence, in the lower right triangle, spanning roughly from 60 to
280 ms in EVC and from 120 to 300 ms in PPA, which suggests earlier responses in EVC can predict
later responses in PPA in these windows. This pattern could result from feedforward information
flow, which starts when EVC first receives the visual input near 60 ms. In the upper left triangle,
we also observed clusters spanning from 100 to 220 ms in PPA and from 140 to 300 ms in EVC,
suggesting earlier responses in PPA can predict later responses in EVC, which could reflect feedback
along the top-down direction of the hierarchy. Figure 3c by the two-step MNE method also shows
clusters in similar time windows, yet the earliest cluster in the lower right triangle appeared before
0 ms in EVC, which could be a false positive as visual input is unlikely to reach EVC that early.
We also observed a small cluster in the top right corner near the diagonal by both methods. This
cluster could indicate late dependence between the two regions, but it occurred later than the typical
evoked responses, which appear before 500 ms. These preliminary results were based on only one participant, and
further analysis for more participants is needed. In addition, the apparent lagged dependence between
the two regions is not necessarily due to direct or causal interaction; instead, it could be mediated by
other intermediate or higher-level regions, as well as by the stimulus-driven effects. For example,
the disappearance of the stimuli at 200 ms could cause an image-specific offset-response starting at
260 ms in the EVC, which could make it seem that image-specific responses in PPA near 120 ms
predicted the responses at EVC after 260 ms. Therefore further analysis including more regions is
needed, and the stimulus-driven effect needs to be considered as well. Nevertheless, the interesting
patterns in Figure 3b suggest that our one-step state-space model can be a promising tool to explore
the timing of feedforward and feedback processing in a data-driven manner, and such analysis can
help to generate specific hypotheses about information flow for further experimental testing.
4
Discussion
We propose a state-space model to directly estimate the dynamic connectivity across regions of interest
from MEG/EEG data, with the source localization step embedded. In this model, the mean activities
in individual ROIs (i.e., the state variable) are modeled with time-varying auto-regression, which
can flexibly describe the spatio-temporal dependence of non-stationary neural activity. Compared
with a two-step method, which first obtains the commonly used minimum-norm estimate of source
activity and then fits the auto-regressive model, our state-space model yielded smaller estimation
errors on simulated data, where the assumptions in our model held. When
applied to empirical MEG data from one participant in a scene-processing experiment, our state-space model also demonstrated intriguing preliminary results, indicating leading and lagged linear
dependence between the early visual cortex and a higher-level scene-sensitive region, which could
reflect feedforward and feedback information flow within the visual cortex. In sum, these results
shed some light on how to better study dynamic connectivity using MEG/EEG and how to exploit the
estimated connectivity to study information flow in cognition.
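The generative structure summarized above (ROI means following a time-varying auto-regression, projected to the sensors through a forward matrix) can be sketched as follows; the dimensions, the effective ROI-to-sensor matrix G, and the isotropic noise model are illustrative placeholders, not the paper's implementation:

import numpy as np

def simulate_state_space(A, Q0, Q, G, sigma_e, n_trials, rng=None):
    """Simulate ROI mean activity u_t = A_t u_{t-1} + w_t and sensor data.

    A:  (T, p, p) time-varying auto-regressive coefficients for p ROIs.
    Q0, Q: (p, p) covariances of the initial state and the state noise w_t.
    G:  (n_sensors, p) assumed effective forward matrix from ROI means to sensors.
    """
    rng = rng or np.random.default_rng(0)
    T, p, _ = A.shape
    u = np.zeros((n_trials, T + 1, p))
    u[:, 0] = rng.multivariate_normal(np.zeros(p), Q0, size=n_trials)
    for t in range(T):
        w = rng.multivariate_normal(np.zeros(p), Q, size=n_trials)
        u[:, t + 1] = u[:, t] @ A[t].T + w
    # Sensor observations: projected ROI means plus isotropic sensor noise.
    y = u @ G.T + sigma_e * rng.standard_normal(u.shape[:2] + (G.shape[0],))
    return u, y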
One limitation of the work here is that we did not compare with other one-step models (10; 11). In
future work, we plan to do comprehensive empirical evaluations of the available one-step methods.
Another issue is that there can be violations of our model assumptions in practice. First, given the
ROI means, the noise on source points could be spatially and temporally correlated, rather than
independently distributed. Secondly, if we fail to include an important ROI, the connectivity estimates
may be inaccurate: the estimates may not even be equivalent to those obtained when this ROI is
marginalized out, due to the under-determined nature of source localization. Thirdly, the assumption
that source points within an ROI share a common mean is typically correct for small ROIs but could
be less accurate for larger ROIs, where the diverse activities of many source points might not be
well-represented by a one-dimensional mean activity. That being said, as long as the activity in
different source points within the ROI is not fully canceled, positive dependence effects of the kind
identified by our model would still be meaningful in the sense that they reflect some cross-region
dependence. To deal with the last two issues, one may divide the entire source space into sufficiently
small, non-overlapping ROIs, when applying our state-space model. In such cases, the number of
parameters can be large, and some sparsity-inducing regularization (such as the one in (11)) can
be applied. In ongoing and future work, we plan to explore this idea and also address the effect of
potential assumption violations.
Acknowledgments
This work was supported in part by the National Science Foundation Grant 1439237, the National
Institute of Mental Health Grant RO1 MH64537, as well as the Henry L. Hillman Presidential
Fellowship at Carnegie Mellon University.
References
[1] J. C. Mosher, R. M. Leahy, and P. S. Lewis. EEG and MEG: forward solutions for inverse methods. Biomedical Engineering, IEEE Transactions on, 46(3):245-259, 1999.
[2] M. Hamalainen and R. Ilmoniemi. Interpreting magnetic fields of the brain: minimum norm estimates. Med. Biol. Eng. Comput., 32:35-42, 1994.
[3] A. M. Dale, A. K. Liu, B. R. Fischl, R. L. Buckner, J. W. Belliveau, J. D. Lewine, and E. Halgren. Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron, 26(1):55-67, 2000.
[4] A. Gramfort, M. Kowalski, and M. Hamalainen. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods. Physics in Medicine and Biology, 57:1937-1961, 2012.
[5] R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann. Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. International Journal of Psychophysiology, 18(1):49-65, 1994.
[6] A. Galka, O. Y. T. Ozaki, R. Biscay, and P. Valdes-Sosa. A solution to the dynamical inverse problem of EEG generation using spatiotemporal Kalman filtering. NeuroImage, 23:435-453, 2004.
[7] J. Mattout, C. Phillips, W. D. Penny, M. D. Rugg, and K. J. Friston. MEG source localization under multiple constraints: an extended Bayesian framework. NeuroImage, 30(3):753-767, 2006.
[8] C. Lamus, M. S. Hamalainen, S. Temereanca, E. N. Brown, and P. L. Purdon. A spatiotemporal dynamic distributed solution to the MEG inverse problem. NeuroImage, 63:894-909, 2012.
[9] V. Sakkalis. Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG. Computers in Biology and Medicine, 41(12):1110-1117, 2011.
[10] O. David, S. J. Kiebel, L. M. Harrison, J. Mattout, J. M. Kilner, and K. J. Friston. Dynamic causal modeling of evoked responses in EEG and MEG. NeuroImage, 30(4):1255-1272, 2006.
[11] M. Fukushima, O. Yamashita, T. R. Knösche, and M.-a. Sato. MEG source reconstruction based on identification of directed source interactions on whole-brain anatomical networks. NeuroImage, 105:408-427, 2015.
[12] M. Hamalainen, R. Hari, R. J. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa. Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65:414-487, 1993.
[13] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, 3(4):253-264, 1982.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[15] J. J. DiCarlo and D. D. Cox. Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8):333-341, 2007.
[16] D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1):1-47, 1991.
[17] R. M. Cichy, A. Khosla, D. Pantazis, A. Torralba, and A. Oliva. Deep neural networks predict hierarchical spatio-temporal cortical dynamics of human visual object recognition. arXiv preprint arXiv:1601.02970, 2016.
[18] M. Bar, K. S. Kassam, A. S. Ghuman, J. Boshyan, A. M. Schmid, A. M. Dale, M. S. Hamalainen, K. Marinkovic, D. L. Schacter, B. R. Rosen, et al. Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences of the United States of America, 103(2):449-454, 2006.
[19] R. Epstein, A. Harris, D. Stanley, and N. Kanwisher. The parahippocampal place area: Recognition, navigation, or encoding? Neuron, 23(1):115-125, 1999.
[20] A. Gramfort, M. Luessi, E. Larson, D. A. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, et al. MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 7:267, 2013.
An Online Sequence-to-Sequence Model Using Partial Conditioning
Navdeep Jaitly
Google Brain
[email protected]
Oriol Vinyals
Google DeepMind
[email protected]
David Sussillo
Google Brain
[email protected]
Ilya Sutskever
OpenAI*
[email protected]
Quoc V. Le
Google Brain
[email protected]
Samy Bengio
Google Brain
[email protected]
Abstract
Sequence-to-sequence models have achieved impressive results on various tasks.
However, they are unsuitable for tasks that require incremental predictions to be
made as more data arrives or tasks that have long input sequences and output
sequences. This is because they generate an output sequence conditioned on an
entire input sequence. In this paper, we present a Neural Transducer that can make
incremental predictions as more input arrives, without redoing the entire computation. Unlike sequence-to-sequence models, the Neural Transducer computes the
next-step distribution conditioned on the partially observed input sequence and
the partially generated sequence. At each time step, the transducer can decide to
emit zero to many output symbols. The data can be processed using an encoder
and presented as input to the transducer. The discrete decision to emit a symbol
at every time step makes it difficult to learn with conventional backpropagation.
It is however possible to train the transducer by using a dynamic programming
algorithm to generate target discrete decisions. Our experiments show that the
Neural Transducer works well in settings where it is required to produce output
predictions as data come in. We also find that the Neural Transducer performs well
for long sequences even when attention mechanisms are not used.
1
Introduction
The recently introduced sequence-to-sequence model has shown success in many tasks that map
sequences to sequences, e.g., translation, speech recognition, image captioning and dialogue modeling [17, 4, 1, 6, 3, 20, 18, 15, 19]. However, this method is unsuitable for tasks where it is important
to produce outputs as the input sequence arrives. Speech recognition is an example of such an online
task: users prefer seeing an ongoing transcription of speech over receiving it at the "end" of an
utterance. Similarly, instant translation systems would be much more effective if audio was translated
online, rather than after entire utterances. This limitation of the sequence-to-sequence model is due
to the fact that output predictions are conditioned on the entire input sequence.
In this paper, we present a Neural Transducer, a more general class of sequence-to-sequence learning
models. Neural Transducer can produce chunks of outputs (possibly of zero length) as blocks of inputs
arrive - thus satisfying the condition of being ?online? (see Figure 1(b) for an overview). The model
generates outputs for each block by using a transducer RNN that implements a sequence-to-sequence
model. The inputs to the transducer RNN come from two sources: the encoder RNN and its own
* Work done at Google Brain.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 panels: (a) a sequence-to-sequence model, with an encoder consuming x1 . . . xL and a decoder emitting y1 . . . yL from <s> to </s>; (b) the Neural Transducer, which emits output symbols (including <e>) per input block of width W and carries hidden state across blocks.]
Figure 1: High-level comparison of our method with sequence-to-sequence models. (a) Sequence-to-sequence
model [17]. (b) The Neural Transducer (this paper), which emits output symbols as data
come in (per block) and transfers the hidden state across blocks.
recurrent state. In other words, the transducer RNN generates local extensions to the output sequence,
conditioned on the features computed for the block by an encoder RNN and the recurrent state of the
transducer RNN at the last step of the previous block.
During training, alignments of output symbols to the input sequence are unavailable. One way
of overcoming this limitation is to treat the alignment as a latent variable and to marginalize over
all possible values of this alignment variable. Another approach is to generate alignments from a
different algorithm, and train our model to maximize the probability of these alignments. Connectionist Temporal Classification (CTC) [7] follows the former strategy using a dynamic programming
algorithm, that allows for easy marginalization over the unary potentials produced by a recurrent
neural network (RNN). However, this is not possible in our model, since the neural network makes
next-step predictions that are conditioned not just on the input data, but on the alignment, and the
targets produced until the current step. In this paper, we show how a dynamic programming algorithm
can be used to compute "approximate" best alignments from this model. We show that training our
model on these alignments leads to strong results.
On the TIMIT phoneme recognition task, a Neural Transducer (with 3 layered unidirectional LSTM
encoder and 3 layered unidirectional LSTM transducer) can achieve an accuracy of 20.8% phoneme
error rate (PER) which is close to state-of-the-art for unidirectional models. We show too that if good
alignments are made available (e.g, from a GMM-HMM system), the model can achieve 19.8% PER.
2
Related Work
In the past few years, many proposals have been made to add more power or flexibility to neural
networks, especially via the concept of augmented memory [10, 16, 21] or augmented arithmetic
units [13, 14]. Our work is not concerned with memory or arithmetic components but it allows more
flexibility in the model so that it can dynamically produce outputs as data come in.
Our work is related to traditional structured prediction methods, commonplace in speech recognition.
The work bears similarity to HMM-DNN [11] and CTC [7] systems. An important aspect of these
approaches is that the model makes predictions at every input time step. A weakness of these models
is that they typically assume conditional independence between the predictions at each output step.
Sequence-to-sequence models represent a breakthrough where no such assumptions are made: the
output sequence is generated by next step prediction, conditioning on the entire input sequence and
the partial output sequence generated so far [5, 6, 3]. Figure 1(a) shows the high-level picture of this
architecture. However, as can be seen from the figure, these models have a limitation in that they have
to wait until the end of the speech utterance to start decoding. This property makes them unattractive
for real-time speech recognition and online translation. Bahdanau et al. [2] attempt to rectify this for
speech recognition by using a moving windowed attention, but they do not provide a mechanism to
address the situation that arises when no output can be produced from the windowed segment of data.
Figure 1(b) shows the difference between our method and sequence-to-sequence models.
A strongly related model is the sequence transducer [8, 9]. This model augments the CTC model
by combining the transcription model with a prediction model. The prediction model is akin to a
Figure 2: An overview of the Neural Transducer architecture for speech. The input acoustic sequence is
processed by the encoder to produce hidden state vectors $h_i$ at each time step $i$, $i = 1 \cdots L$. The transducer
receives a block of inputs at each step and produces up to $M$ output tokens using the sequence-to-sequence model
over this input. The transducer maintains its state across the blocks through the use of recurrent connections
to the previous output time steps. The figure above shows the transducer producing tokens for block $b$. The
subsequence emitted in this block is $y_m y_{m+1} y_{m+2}$.
language model and operates only on the output tokens, as a next step prediction model. This gives
the model more expressiveness compared to CTC, which makes independent predictions at every time
step. However, unlike the model presented in this paper, the two models in the sequence transducer
operate independently ? the model does not provide a mechanism by which the prediction network
features at one time step would change the transcription network features in the future, and vice versa.
Our model, in effect both generalizes this model and the sequence to sequence model.
Our formulation requires inferring alignments during training. However, our results indicate that this
can be done relatively fast, and with little loss of accuracy, even on a small dataset where no effort
was made at regularization. Further, if alignments are given, as is easily done offline for various tasks,
the model is able to train relatively fast, without this inference step.
3
Methods
In this section we describe the model in more detail. Please refer to Figure 2 for an overview.
3.1
Model
Let $x_{1 \cdots L}$ be the input data that is $L$ time steps long, where $x_i$ represents the features at input time
step $i$. Let $W$ be the block size, i.e., the periodicity with which the transducer emits output tokens,
and $N = \lceil L/W \rceil$ be the number of blocks.
Let $\bar{y}_{1 \cdots S}$ be the target sequence, corresponding to the input sequence. Further, let the transducer
produce a sequence of $k$ outputs, $\bar{y}_{i \cdots (i+k)}$, where $0 \leq k < M$, for any input block. Each such
sequence is padded with the <e> symbol, which is added to the vocabulary. It signifies that the
transducer may proceed and consume data from the next block. When no symbols are produced for a
block, this symbol is akin to the blank symbol of CTC.
The sequence $\bar{y}_{1 \cdots S}$ can be transduced from the input through various alignments. Let $\mathcal{Y}$ be the set
of all alignments of the output sequence $\bar{y}_{1 \cdots S}$ to the input blocks. Let $y_{1 \cdots (S+B)} \in \mathcal{Y}$ be any
such alignment. Note that the length of $y$ is $B$ more than the length of $\bar{y}$, since there are $B$ end-of-block
symbols, <e>, in $y$. However, the number of sequences $y$ matching $\bar{y}$ is much larger,
corresponding to all possible alignments of $\bar{y}$ to the blocks. The block that element $y_i$ is aligned
to can be inferred simply by counting the number of <e> symbols that came before index $i$. Let
$e_b$, $b \in 1 \cdots N$, be the index of the last token in $y$ emitted in the $b$-th block. Note that $e_0 = 0$ and
$e_N = S + B$. Thus $y_{e_b} = $ <e> for each block $b$.
In this section, we show how to compute $p(y_{1 \cdots (S+B)} \mid x_{1 \cdots L})$. Later, in section 3.5, we show how
to compute, and maximize, $p(\bar{y}_{1 \cdots S} \mid x_{1 \cdots L})$.
We first compute the probability of seeing the output sequence $y_{1 \cdots e_b}$ by the
end of block $b$ as follows:

$$p(y_{1 \cdots e_b} \mid x_{1 \cdots bW}) = p(y_{1 \cdots e_1} \mid x_{1 \cdots W}) \prod_{b'=2}^{b} p\big(y_{(e_{b'-1}+1) \cdots e_{b'}} \mid x_{1 \cdots b'W}, y_{1 \cdots e_{b'-1}}\big) \quad (1)$$

Each of the terms in this equation is itself computed by the chain rule decomposition, i.e., for any
block $b$,

$$p\big(y_{(e_{b-1}+1) \cdots e_b} \mid x_{1 \cdots bW}, y_{1 \cdots e_{b-1}}\big) = \prod_{m=e_{b-1}+1}^{e_b} p\big(y_m \mid x_{1 \cdots bW}, y_{1 \cdots (m-1)}\big) \quad (2)$$
The next step probability terms, $p(y_m \mid x_{1 \cdots bW}, y_{1 \cdots (m-1)})$, in Equation 2 are computed by the
transducer using the encoding of the input $x_{1 \cdots bW}$ computed by the encoder, and the label prefix
$y_{1 \cdots (m-1)}$ that was input into the transducer at previous emission steps. We describe this in more
detail in the next subsection.
3.2
Next Step Prediction
We again refer the reader to Figure 2 for this discussion. The example shows a transducer with two
hidden layers, with units $s_m$ and $h'_m$ at output step $m$. In the figure, the next step prediction is shown
for block $b$. For this block, the index of the first output symbol is $m = e_{b-1} + 1$, and the index of the
last output symbol is $m + 2$ (i.e., $e_b = m + 2$).
The transducer computes the next step prediction, using parameters $\theta$ of the neural network, through
the following sequence of steps:

$$s_m = f_{\mathrm{RNN}}\big(s_{m-1}, [c_{m-1}; y_{m-1}]; \theta\big) \quad (3)$$
$$c_m = f_{\mathrm{context}}\big(s_m, h_{((b-1)W+1) \cdots bW}; \theta\big) \quad (4)$$
$$h'_m = f_{\mathrm{RNN}}\big(h'_{m-1}, [c_m; s_m]; \theta\big) \quad (5)$$
$$p\big(y_m \mid x_{1 \cdots bW}, y_{1 \cdots (m-1)}\big) = f_{\mathrm{softmax}}\big(y_m; h'_m, \theta\big) \quad (6)$$
where $f_{\mathrm{RNN}}(a_{m-1}, b_m; \theta)$ is the recurrent neural network function (such as an LSTM or
a sigmoid or tanh RNN) that computes the state vector $a_m$ for a layer at a step using the
recurrent state vector $a_{m-1}$ at the last time step, and input $b_m$ at the current time step;2
$f_{\mathrm{softmax}}(\cdot; a_m, \theta)$ is the softmax distribution computed by a softmax layer, with input vector $a_m$;
and $f_{\mathrm{context}}(s_m, h_{((b-1)W+1) \cdots bW}; \theta)$ is the context function, which computes the input to the transducer at output step $m$ from the state $s_m$ at the current time step, and the features $h_{((b-1)W+1) \cdots bW}$
of the encoder for the current input block $b$. We experimented with different ways of computing
the context vector, with and without an attention mechanism. These are described subsequently in
section 3.3.
Note that since the encoder is an RNN, $h_{(b-1)W \cdots bW}$ is actually a function of the entire input $x_{1 \cdots bW}$
so far. Correspondingly, $s_m$ is a function of the labels emitted so far, and the entire input seen so far.3
Similarly, $h'_m$ is a function of the labels emitted so far and the entire input seen so far.
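A minimal sketch of one such step follows, using plain tanh RNN cells and the DOT-attention form of section 3.3 standing in for $f_{\mathrm{context}}$; the weight names are illustrative, and $y_{m-1}$ is assumed to enter as an embedding vector (the paper does not spell out these details):

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def rnn_step(W, bias, a_prev, b):
    """Plain tanh RNN cell: a_m = tanh(W [a_{m-1}; b_m] + bias)."""
    return np.tanh(W @ np.concatenate([a_prev, b]) + bias)

def transducer_step(params, s_prev, h2_prev, c_prev, y_prev_emb, h_block):
    """One next-step prediction, mirroring equations (3)-(6).

    h_block has shape (W, d): encoder features for the current block.
    y_prev_emb is an embedding of the previously emitted label.
    """
    s = rnn_step(params["W_s"], params["b_s"], s_prev,
                 np.concatenate([c_prev, y_prev_emb]))        # eq. (3)
    c = softmax(h_block @ s) @ h_block                        # eq. (4), DOT form
    h2 = rnn_step(params["W_h"], params["b_h"], h2_prev,
                  np.concatenate([c, s]))                     # eq. (5)
    p = softmax(params["W_out"] @ h2 + params["b_out"])       # eq. (6)
    return p, s, h2, c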
3.3 Computing fcontext
We first describe how the context vector is computed by an attention model similar to earlier
work [5, 1, 3]. We call this model the MLP-attention model.
[Footnote 2] Note that for LSTM, we would have to additionally factor in cell states from the previous states; we have ignored this in the notation for purposes of clarity. The exact details are easily worked out.
[Footnote 3] For the first output step of a block it includes only the input seen until the end of the last block.
In this model the context vector $c_m$ is computed in two steps: first a normalized attention vector
$\alpha^m$ is computed from the state $s_m$ of the transducer, and next the hidden states $h_{(b-1)W+1 \cdots bW}$
of the encoder for the current block are linearly combined using $\alpha^m$ and used as the context vector.
To compute $\alpha^m$, a multi-layer perceptron computes a scalar value, $e^m_j$, for each pair of transducer
state $s_m$ and encoder state $h_{(b-1)W+j}$. The attention vector is computed from the scalar values $e^m_j$,
$j = 1 \cdots W$. Formally:

$$e^m_j = f_{\mathrm{attention}}\big(s_m, h_{(b-1)W+j}; \theta\big) \quad (7)$$
$$\alpha^m = \mathrm{softmax}\big([e^m_1; e^m_2; \cdots; e^m_W]\big) \quad (8)$$
$$c_m = \sum_{j=1}^{W} \alpha^m_j \, h_{(b-1)W+j} \quad (9)$$
We also experimented with using a simpler model for $f_{\mathrm{attention}}$ that computed $e^m_j = s_m^T h_{(b-1)W+j}$.
We refer to this model as the DOT-attention model.
Both of these attention models have two shortcomings. Firstly, there is no explicit mechanism
that requires the attention model to move its focus forward, from one output time step to the next.
Secondly, the energies computed as inputs to the softmax function, for different input frames $j$, are
independent of each other at each time step, and thus cannot modulate (e.g., enhance or suppress)
each other, other than through the softmax function. Chorowski et al. [6] ameliorate the second
problem by using a convolutional operator that affects the attention at one time step using the attention
at the last time step.
We attempt to address these two shortcomings using a new attention mechanism. In this model,
instead of feeding $[e^m_1; e^m_2; \cdots; e^m_W]$ into a softmax, we feed them into a recurrent neural network with
one hidden layer that outputs the softmax attention vector at each time step. Thus the model should
be able to modulate the attention vector both within a time step and across time steps. This attention
model is thus more general than the convolutional operator of Chorowski et al. (2015), but it can
only be applied to the case where the context window size is constant. We refer to this model as
LSTM-attention.
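A sketch of the two simpler scoring variants follows; the one-hidden-layer form of $f_{\mathrm{attention}}$ in the MLP version is an assumption, since the text does not spell out the perceptron:

import numpy as np

def _norm(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def mlp_attention_context(s_m, h_block, V, U, w):
    """Eqs. (7)-(9) with an assumed one-hidden-layer score:
    e_j = w . tanh(V s_m + U h_j)."""
    scores = np.array([w @ np.tanh(V @ s_m + U @ h_j) for h_j in h_block])
    alpha = _norm(scores)              # eq. (8)
    return alpha @ h_block, alpha      # eq. (9)

def dot_attention_context(s_m, h_block):
    """DOT-attention: e_j = s_m^T h_{(b-1)W+j}, same softmax combination."""
    alpha = _norm(h_block @ s_m)
    return alpha @ h_block, alpha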
3.4
Addressing End of Blocks
Since the model only produces a small sequence of output tokens in each block, we have to address the
mechanism for shifting the transducer from one block to the next. We experimented with three distinct
ways of doing this. In the first approach, we introduced no explicit mechanism for end-of-blocks,
hoping that the transducer neural network would implicitly learn a model from the training data. In
the second approach we added end-of-block symbols, <e>, to the label sequence to demarcate the end
of blocks, and we added this symbol to the target dictionary. Thus the softmax function in Equation 6
implicitly learns to either emit a token, or to move the transducer forward to the next block. In the
third approach, we model moving the transducer forward, using a separate logistic function of the
attention vector. The target of the logistic function is 0 or 1 depending on whether the current step is
the last step in the block or not.
3.5
Training
In this section we show how the Neural Transducer model can be trained.
The probability of the output sequence $\bar{y}_{1 \cdots S}$, given $x_{1 \cdots L}$, is as follows:4

$$p(\bar{y}_{1 \cdots S} \mid x_{1 \cdots L}) = \sum_{y \in \mathcal{Y}} p\big(y_{1 \cdots (S+B)} \mid x_{1 \cdots L}\big) \quad (10)$$

In theory, we can train the model by maximizing the log of equation 10. The gradient for the log
likelihood can easily be expressed as follows:

$$\frac{d}{d\theta} \log p(\bar{y}_{1 \cdots S} \mid x_{1 \cdots L}) = \sum_{y \in \mathcal{Y}} p\big(y_{1 \cdots (S+B)} \mid x_{1 \cdots L}, \bar{y}_{1 \cdots S}\big) \frac{d}{d\theta} \log p\big(y_{1 \cdots (S+B)} \mid x_{1 \cdots L}\big) \quad (11)$$

[Footnote 4] Note that this equation implicitly incorporates the prior for alignments within the equation.
Each of the latter terms in the sum on the right hand side can be computed by backpropagation, using
$y$ as the target of the model. However, the marginalization is intractable because of the sum over a
combinatorial number of alignments. Alternatively, the gradient can be approximated by sampling
from the posterior distribution (i.e., $p(y_{1 \cdots (S+B)} \mid x_{1 \cdots L}, \bar{y}_{1 \cdots S})$). However, we found this had very
large noise in the learning and the gradients were often too biased, leading to models that rarely
achieved decent accuracy.
Instead, we attempted to maximize the probability in equation 10 by computing the sum over only one
term, corresponding to the $y_{1 \cdots (S+B)}$ with the highest posterior probability. Unfortunately, even doing this
exactly is computationally infeasible, because the number of possible alignments is combinatorially
large and the problem of finding the best alignment cannot be decomposed into easier subproblems. So
we find an approximate best alignment with a dynamic-programming-like algorithm that we describe
in the next paragraph.
algorithm that we describe in the next paragraph.
At each block, b, for each output position j, this algorithm keeps track of the approximate best
hypothesis h(j, b) that represents the best partial alignment of the input sequence y?1???j to the partial
input x1???bW . Each hypothesis, keeps track of the best alignment y1???(j+b) that it represents, and
the recurrent states of the decoder at the last time step, corresponding to this alignment. At block
b + 1, all hypotheses h(j, b), j <= min (b (M ? 1) , S) are extended by at most M tokens using
their recurrent states, to compute h(j, b + 1), h(j + 1, b + 1) ? ? ? h(j + M, b + 1)5 . For each position
j 0 , j 0 <= min ((b + 1) (M ? 1) , S) the highest log probability hypothesis h(j 0 , b + 1) is kept6 . The
alignment from the best hypothesis h(S, B) at the last block is used for training.
In theory, we need to compute the alignment for each sequence when it is trained, using the model
parameters at that time. In practice, we batch the alignment inference steps, using parallel tasks,
and cache these alignments. Thus alignments are computed less frequently than the model updates typically every 100-300 sequences. This procedure has the flavor of experience replay from Deep
Reinforcement learning work [12].
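A sketch of this procedure follows; the extend callable, which runs the transducer over a block to emit the next k target tokens plus <e> and returns the extended hypothesis with its log-probability increment, is assumed rather than taken from the paper, and a hypothesis is assumed to carry its partial alignment and the decoder's recurrent state:

def approximate_best_alignment(extend, init_hypothesis, N, S, M):
    """Approximate best alignment via the block-wise dynamic program above."""
    best = {0: (init_hypothesis, 0.0)}           # h(j, b = 0), with e_0 = 0
    for b in range(1, N + 1):
        new_best = {}
        for j, (h, logp) in best.items():
            for k in range(M):                   # 0 <= k < M tokens, then <e>
                if j + k > S:
                    break
                h2, delta = extend(h, k, b)
                if j + k not in new_best or logp + delta > new_best[j + k][1]:
                    new_best[j + k] = (h2, logp + delta)  # keep best per position
        best = new_best
    return best.get(S, (None, float("-inf")))[0]  # alignment of all S targets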
3.6
Inference
For inference, given the input acoustics $x_{1 \cdots L}$ and the model parameters $\theta$, we find the sequence of
labels that maximizes the probability of the labels, conditioned on the data, i.e.,

$$\hat{y}_{1 \cdots S} = \arg\max_{y_{1 \cdots S'}, \, e_{1 \cdots N}} \sum_{b=1}^{N} \log p\big(y_{(e_{b-1}+1) \cdots e_b} \mid x_{1 \cdots bW}, y_{1 \cdots e_{b-1}}\big) \quad (12)$$
Exact inference in this scheme is computationally expensive because the expression for log probability
does not permit decomposition into smaller terms that can be independently computed. Instead, each
candidate $y_{1 \cdots S'}$ would have to be tested independently, and the best sequence over an exponentially
large number of sequences would have to be discovered. Hence, we use a beam search heuristic to
find the ?best? set of candidates. To do this, at each output step m, we keep a heap of alternative n
best prefixes, and extend each one by one symbol, trying out all the possible alternative extensions,
keeping only the best n extensions. Included in the beam search is the act of moving the attention to
the next input block. The beam search ends either when the sequence is longer than a pre-specified
threshold, or when the end-of-sequence symbol is produced at the last block.
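A generic sketch of such a beam search follows; step_fn, which scores one-token extensions of a decoding state (with <e> emissions moving the attention to the next block inside the state), is a placeholder for the transducer's next-step distribution, not the paper's decoder:

import heapq

def beam_search(step_fn, init_state, n_beams=8, max_len=200, eos="</s>"):
    """Keep the n best prefixes at each step; return the best finished one."""
    beams = [(0.0, [], init_state)]              # (log_prob, tokens, state)
    completed = []
    for _ in range(max_len):
        candidates = []
        for logp, toks, state in beams:
            for delta, tok, nxt in step_fn(state):
                entry = (logp + delta, toks + [tok], nxt)
                (completed if tok == eos else candidates).append(entry)
        if not candidates:
            break
        beams = heapq.nlargest(n_beams, candidates, key=lambda c: c[0])
    pool = completed or beams
    return max(pool, key=lambda c: c[0])[1]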
4
Experiments and Results
4.1
Addition Toy Task
We experimented with the Neural Transducer on the toy task of adding two three-digit decimal
numbers. The second number is presented in the reverse order, and so is the target output. Thus the
model can produce the first output as soon as the first digit of the second number is observed. The
model is able to achieve 0% error on this task with a very small number of units (both encoder and
transducer are 1 layer unidirectional LSTM RNNs with 100 units).
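A sketch of a generator for this task; the exact tokenization used in the paper is not specified, so the symbols below simply follow the description above:

import random

def make_addition_example(rng=random):
    """Add two three-digit numbers; the second operand and the target
    are both written in reverse order."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    inputs = list(str(a)) + ["+"] + list(str(b))[::-1] + ["<s>"]
    targets = list(str(a + b))[::-1] + ["</s>"]
    return inputs, targets

# e.g. a=217, b=554 gives inputs 2 1 7 + 4 5 5 <s> and targets 1 7 7 </s>.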
[Footnote 5] Note the minutiae that each of these extensions ends with the <e> symbol.
[Footnote 6] We also experimented with sampling from the extensions in proportion to the probabilities, but this did not always improve results.
As can be seen below, the model learns to output the digits as soon as the required information is
available. Occasionally the model waits an extra step to output its target symbol. We show results
(blue) for four different examples (red). A block window size of W=1 was used, with M=8.
[Four example transductions are shown here, inputs in red and targets in blue: the model emits <e> while consuming input symbols and produces each output digit as soon as the digits needed to compute it have arrived.]
4.2
TIMIT
We used TIMIT, a standard benchmark for speech recognition, for our larger experiments. Log Mel
filterbanks were computed every 10ms as inputs to the system. The targets were the 60 phones
defined for the TIMIT dataset (h# were relabelled as pau).
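For concreteness, a sketch of such a feature pipeline using librosa; the 25 ms window length and 40 Mel bands are assumptions, since the paper states only the 10 ms step:

import numpy as np
import librosa

def log_mel_features(wav, sr=16000, n_mels=40):
    """Log Mel filterbank features every 10 ms (25 ms windows assumed)."""
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=int(0.025 * sr), hop_length=int(0.010 * sr),
        n_mels=n_mels)
    return np.log(mel + 1e-6).T          # shape (frames, n_mels)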
We used stochastic gradient descent with momentum with a batch size of one utterance per training
step. An initial learning rate of 0.05, and momentum of 0.9 was used. The learning rate was reduced
by a factor of 0.5 every time the average log prob over the validation set decreased 7 . The decrease
was applied for a maximum of 4 times. The models were trained for 50 epochs and the parameters
from the epochs with the best dev set log prob were used for decoding.
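This schedule is simple to state in code (a sketch, not the authors' implementation):

def update_learning_rate(lr, dev_log_probs, factor=0.5, max_decays=4,
                         decays_done=0):
    """Halve the learning rate whenever the average dev-set log prob drops,
    up to max_decays times, mirroring the schedule described above."""
    if (len(dev_log_probs) >= 2 and dev_log_probs[-1] < dev_log_probs[-2]
            and decays_done < max_decays):
        return lr * factor, decays_done + 1
    return lr, decays_done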
We trained a Neural Transducer with three layer LSTM RNN coupled to a three LSTM layer
unidirectional encoder RNN, and achieved a PER of 20.8% on the TIMIT test set. This model
used the LSTM attention mechanism. Alignments were generated from a model that was updated
after every 300 steps of Momentum updates. Interestingly, the alignments generated by the model
are very similar to the alignments produced by a Gaussian Mixture Model-Hidden Markov Model
(GMM-HMM) system that we trained using the Kaldi toolkit, even though the model was trained
entirely discriminatively. The small differences in alignment correspond to an occasional phoneme
emitted slightly later by our model, compared to the GMM-HMM system.
We also trained models using alignments generated from the GMM-HMM model trained on Kaldi.
The frame level alignments from Kaldi were converted into block level alignments by assigning each
phone in the sequence to the block it was last observed in. The same architecture model described
above achieved an accuracy of 19.8% with these alignments.
For further exploratory experiments, we used the GMM-HMM alignments as given to avoid computing
the best alignments. Table 1 shows a comparison of our method against a basic implementation of a
sequence-to-sequence model that produces outputs for each block independent of the other blocks,
and concatenates the produced sequences. Here, the sequence-to-sequence model produces the output
conditioned on the state of the encoder at the end of the block. Both models used an encoder with
two layers of 250 LSTM cells, without attention. The standard sequence-to-sequence model performs
significantly worse than our model; the recurrent connections of the transducer across blocks are
clearly helpful in improving the accuracy of the model.
Table 1: Impact of maintaining recurrent state of transducer across blocks on the PER (median of 3 runs). This
table shows that maintaining the state of the transducer across blocks leads to much better results.

W    BLOCK-RECURRENCE    PER
15   No                  34.3
15   Yes                 20.6
Figure 3 shows the impact of block size on the accuracy of the different transducer variants that we
used. See Section 3.3 for a description of the {DOT,MLP,LSTM}-attention models. All models used
a two LSTM layer encoder and a two LSTM layer transducer. The model is sensitive to the choice of
the block size, when no attention is used. However, it can be seen that with an appropriate choice of
window size (W=8), the Neural Transducer without attention can match the accuracy of the attention
based Neural Transducers. Further exploration of this configuration should lead to improved results.
When attention is used in the transducer, the precise value of the block size becomes less important.
The LSTM-based attention model seems to be more consistent compared to the other attention
[Footnote 7] Note that TIMIT provides a validation set, called the dev set. We use these terms interchangeably.
mechanisms we explored. Since this model performed best with W=25, we used this configuration
for subsequent experiments.
[Figure 3 plot: Phone Error Rate (PER), ranging roughly 19-26, versus window size W from 5 to 30, with one curve each for no-attention, DOT-ATTENTION, MLP-ATTENTION, and LSTM-ATTENTION.]
Figure 3: Impact of the number of frames (W) in a block and attention mechanism on PER. Each number is the
median value from three experiments.
Table 2 explores the impact of the number of layers in the transducer and the encoder on the PER.
A three layer encoder coupled to a three layer transducer performs best on average. Four layer
transducers produced results with higher spread in accuracy, possibly because of the more difficult
optimization involved. Thus, the best average PER we achieved (over 3 runs) was 19.8% on the
TIMIT test set. These results could probably be improved with other regularization techniques, as
reported by [6] but we did not pursue those avenues in this paper.
Table 2: Impact of depth of encoder and transducer on PER.

# of layers in transducer    encoder = 2    encoder = 3
2                               19.2           18.5
3                               18.9           18.2
4                               18.8           19.4
For a comparison with previously published sequence-to-sequence models on this task, we used a
three layer bidirectional LSTM encoder with 250 LSTM cells in each direction and achieved a PER
of 18.7%. By contrast, the best reported results using previous sequence-to-sequence models are
17.6% [6]. However, this requires controlling overfitting carefully.
5
Discussion
One of the important side-effects of our model using partial conditioning with a blocked transducer
is that it naturally alleviates the problem of "losing attention" suffered by sequence-to-sequence
models. Because of this, sequence-to-sequence models perform worse on longer utterances [6, 3].
This problem is automatically tackled in our model because each new block automatically shifts the
attention monotonically forward. Within a block, the model learns to move attention forward from
one step to the next, and the attention mechanism rarely suffers, because both the size of a block,
and the number of output steps for a block are relatively small. As a result, error in attention in one
block, has minimal impact on the predictions at subsequent blocks. Finally, we note that increasing
the block size, W , so that it is as large as the input utterance makes the model similar to vanilla
end-to-end models [5, 3].
6
Conclusion
We have introduced a new model that uses partial conditioning on inputs to generate output sequences.
This allows the model to produce output as input arrives. This is useful for speech recognition
systems and will also be crucial for future generations of online speech translation systems. Further it
can be useful for performing transduction over long sequences, something that is possibly difficult
for sequence-to-sequence models. We applied the model to a toy task of addition, and to a phone
recognition task and showed that is can produce results comparable to the state of the art from
sequence-to-sequence models.
References
[1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning
to Align and Translate. In International Conference on Learning Representations, 2015.
[2] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end
attention-based large vocabulary speech recognition. In http://arxiv.org/abs/1508.04395, 2015.
[3] William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint
arXiv:1508.01211, 2015.
[4] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger
Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical
Machine Translation. In Conference on Empirical Methods in Natural Language Processing, 2014.
[5] Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end Continuous Speech
Recognition using Attention-based Recurrent NN: First Results. In Neural Information Processing Systems:
Workshop Deep Learning and Representation Learning Workshop, 2014.
[6] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. AttentionBased Models for Speech Recognition. In Neural Information Processing Systems, 2015.
[7] Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural
networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on,
pages 6645-6649. IEEE, 2013.
[8] Alex Graves. Sequence Transduction with Recurrent Neural Networks. In International Conference on
Machine Learning: Representation Learning Workshop, 2012.
[9] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech Recognition with Deep Recurrent
Neural Networks. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
[10] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401,
2014.
[11] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew
Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks
for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing
Magazine, IEEE, 29(6):82?97, 2012.
[12] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra,
and Martin A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
[13] Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with
gradient descent. arXiv preprint arXiv:1511.04834, 2015.
[14] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279,
2015.
[15] Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, JianYun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of
conversational responses. arXiv preprint arXiv:1506.06714, 2015.
[16] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in
Neural Information Processing Systems, pages 2431-2439, 2015.
[17] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. In
Neural Information Processing Systems, 2014.
[18] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a
foreign language. In NIPS, 2015.
[19] Oriol Vinyals and Quoc V. Le. A neural conversational model. In ICML Deep Learning Workshop, 2015.
[20] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image
Caption Generator. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[21] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint
arXiv:1505.00521, 2015.
Combinatorial Energy Learning for Image Segmentation
Jeremy Maitin-Shepard
UC Berkeley Google
[email protected]
Peter Li
Google
[email protected]
Viren Jain
Google
[email protected]
Michal Januszewski
Google
[email protected]
Pieter Abbeel
UC Berkeley
[email protected]
Abstract
We introduce a new machine learning approach for image segmentation that uses a
neural network to model the conditional energy of a segmentation given an image.
Our approach, combinatorial energy learning for image segmentation (CELIS)
places a particular emphasis on modeling the inherent combinatorial nature of
dense image segmentation problems. We propose efficient algorithms for learning
deep neural networks to model the energy function, and for local optimization of
this energy in the space of supervoxel agglomerations. We extensively evaluate
our method on a publicly available 3-D microscopy dataset with 25 billion voxels
of ground truth data. On an 11 billion voxel test set, we find that our method
improves volumetric reconstruction accuracy by more than 20% as compared to
two state-of-the-art baseline methods: graph-based segmentation of the output
of a 3-D convolutional neural network trained to predict boundaries, as well as a
random forest classifier trained to agglomerate supervoxels that were generated by
a 3-D convolutional neural network.
1 Introduction
Mapping neuroanatomy, in the pursuit of linking hypothesized computational models consistent
with observed functions to the actual physical structures, is a long-standing fundamental problem
in neuroscience. One primary interest is in mapping the network structure of neural circuits by
identifying the morphology of each neuron and the locations of synaptic connections between
neurons, a field called connectomics. Currently, the most promising approach for obtaining such
maps of neural circuit structure is volume electron microscopy of a stained and fixed block of
tissue. [4, 16, 17, 10] This technique was first used successfully decades ago in mapping the structure
of the complete nervous system of the 302-neuron Caenorhabditis elegans; due to the need to
manually cut, image, align, and trace all neuronal processes in about 8000 50 nm serial sections, even
this small circuit required over 10 years of labor, much of it spent on image analysis. [31] At the time,
scaling this approach to larger circuits was not practical.
Recent advances in volume electron microscopy [11, 20, 15] make feasible the imaging of large
circuits, potentially containing hundreds of thousands of neurons, at sufficient resolution to discern
even the smallest neuronal processes. [4, 16, 17, 10] The high image quality and near-isotropic
resolution achievable with these methods enables the resultant data to be treated as a true 3-D volume,
which significantly aids reconstruction of processes that do not run parallel to the sectioning axis, and
is potentially more amenable to automated image processing.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Illustration of computation of the global energy for a single candidate segmentation S. The local energy $E_s(x; S; I) \in [0, 1]$, computed by a deep neural network, is summed over all shape descriptor types s and voxel positions x.
Image analysis remains a key challenge, however. The primary bottleneck is in segmenting the
full volume, which is filled almost entirely by heavily intertwined neuronal processes, into the
volumes occupied by each individual neuron. While the cell boundaries shown by the stain provide
a strong visual cue in most cases, neurons can extend for tens of centimeters in path length while
in some places becoming as narrow as 40 nm; a single mistake anywhere along the path can render
connectivity information for the neuron largely inaccurate. Existing automated and semi-automated
segmentation methods do not sufficiently reduce the amount of human labor required: a recent
reconstruction of 950 neurons in the mouse retina required over 20000 hours of human labor, even
with an efficient method of tracing just a skeleton of each neuron [18]; a recent reconstruction of
379 neurons in the Drosophila medulla column (part of the visual pathway) required 12940 hours of
manual proof-reading/correction of an automated segmentation [26].
Related work: Algorithmic approaches to image segmentation are often formulated as variations on
the following pipeline: a boundary detection step establishes local hypotheses of object boundaries, a
region formation step integrates boundary evidence into local regions (i.e. superpixels or supervoxels),
and a region agglomeration step merges adjacent regions based on image and object features. [1, 19,
30, 2] Although extensive integration of machine learning into such pipelines has begun to yield
promising segmentation results [3, 14, 22], we argue that such pipelines, as previously formulated,
fundamentally neglect two potentially important aspects of achieving accurate segmentation: (i) the
combinatorial nature of reasoning about dense image segmentation structure,1 and (ii) the fundamental
importance of shape as a criterion for segmentation quality.
Contributions: We propose a method that attempts to overcome these deficiencies. In particular,
we propose an energy-based model that scores segmentation quality using a deep neural network
that flexibly integrates shape and image information: Combinatorial Energy Learning for Image
Segmentation (CELIS). In pursuit of such a model this paper makes several specific contributions:
a novel connectivity region data structure for efficiently computing the energy of configurations of
3-D objects; a binary shape descriptor for efficient representation of 3-D shape configurations; a
neural network architecture that splices the intermediate unit output from a trained convolutional
network as input to a deep fully-connected neural network architecture that scores a segmentation
and 3-D image; a training procedure that uses pairwise object relations within a segmentation to
learn the energy-based model. an experimental evaluation of the proposed and baseline automated
reconstruction methods on a massive and (to our knowledge) unprecedented scale that reflects the
true size of connectomic datasets required for biological analysis (many billions of voxels).
2 Conditional energy modeling of segmentations given images
We define a global, translation-invariant energy model for predicting the cost of a complete segmentation S given a corresponding image I. This cost can be seen as analogous to the negative
1 While prior work [30, 14, 2] has recognized the importance of combinatorial reasoning, the previously proposed global optimization methods allow local decisions to interact only in a very limited way.
log-likelihood of the segmentation given the image, but we do not actually treat it probabilistically.
Our goal is to define a model such that the true segmentation corresponding to a given image can be
found by minimizing the cost; the energy can reflect both a prior over object configurations alone and compatibility between object configurations and the image.
As shown in Fig. 1, we define the global energy E(S; I) as the sum over local energy models (defined by a deep neural network) $E_s(x; S; I)$ at several different scales s, computed in sliding-window fashion centered at every position x within the volume:

$$E(S; I) := \sum_{s} \sum_{x} E_s(x; S; I), \qquad E_s(x; S; I) := \hat{E}_s\big(r_s(x; S);\, \phi(x; I)\big).$$
The local energy $E_s(x; S; I)$ depends on the local image context centered at position x by way of a vector representation $\phi(x; I)$ computed by a deep convolutional neural network, and on the local shape/object configuration at scale s by way of a novel local binary shape descriptor $r_s(x; S)$, defined in Section 3.
To find (locally) minimal-cost segmentations under this model, we use local search over the space of
agglomerations starting from some initial supervoxel segmentation. Using a simple greedy policy,
at each step we consider all possible agglomeration actions, i.e. merges between any two adjacent
segments, and pick the action that results in the lowest energy.
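For concreteness, here is a minimal Python sketch of how this search could look; `local_energy`, `shape_descriptor`, `image_features`, and `seg.merge` are hypothetical stand-ins for the learned models $\hat{E}_s$, the descriptors $r_s$, the CNN features $\phi$, and the supervoxel-merge operation. This is not the authors' optimized implementation, which evaluates these energies incrementally, as described next.

```python
# Minimal sketch of energy-guided greedy agglomeration (illustrative names).

def global_energy(seg, image, scales, positions,
                  local_energy, shape_descriptor, image_features):
    # E(S; I) = sum over scales s and positions x of E_hat_s(r_s(x; S), phi(x; I))
    return sum(local_energy[s](shape_descriptor(s, x, seg),
                               image_features(x, image))
               for s in scales for x in positions)

def greedy_step(seg, image, candidate_merges, energy_fn):
    # Consider every merge of two adjacent segments and apply the one
    # that yields the lowest global energy.
    best = min(candidate_merges, key=lambda e: energy_fn(seg.merge(e), image))
    return seg.merge(best)
```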
Naïvely, computing the energy for just a single segmentation requires computing shape descriptors
and then evaluating the energy model at every voxel position with the volume; a small volume may
have tens or hundreds of millions of voxels. At each stage of the agglomeration, there may be
thousands, or tens of thousands, of potential next agglomeration steps, each of which results in a
unique segmentation. In order to choose the best next step, we must know the energy of all of these
potential next segmentations. The computational cost to perform these computations directly would
be tremendous, but in the supplement, we prove a collection of theorems that allow for an efficient
implementation that computes these energy terms incrementally.
3 Representing 3-D Shape Configurations with Local Binary Descriptors
We propose a binary shape descriptor based on subsampled pairwise connectivity information: given a specification s of k pairs of position offsets $\{a_1, b_1\}, \ldots, \{a_k, b_k\}$ relative to the center of some fixed-size bounding box of size $B_s$, the corresponding k-bit binary shape descriptor $r(U)$ for a particular segmentation U of that bounding box is defined by

$$r_i(U) := \begin{cases} 1 & \text{if } a_i \text{ is connected to } b_i \text{ in } U, \\ 0 & \text{otherwise,} \end{cases} \qquad \text{for } i \in [1, k].$$
As shown in Fig. 2a, each bit of the descriptor specifies whether a particular pair of positions is part of the same segment, which can be determined in constant time by the use of a suitable data structure. In the limit case, if we use the list of all n² pairs of positions within an n-voxel bounding box, no information is lost and the Hamming distance between two descriptors is precisely equal to the Rand index. [23] In general we can sample a subset of only k pairs out of the n² possible; if we sample uniformly at random, we retain the property that the expected Hamming distance between two descriptors is equal to the Rand index. We found that picking k = 512 bits provides a reasonable trade-off between fidelity and representation size.
naturally to obtain consistent results when learning models based on these descriptors we must use
the same fixed list of positions for defining the descriptor at both training and test time. 2
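As an illustration, here is a minimal NumPy sketch of this computation; it assumes `labels` already assigns each voxel its connected-component id within the surrounding connectivity region (introduced below), and the pair list and box size are illustrative, not the paper's exact sampling code.

```python
import numpy as np

def shape_descriptor(labels, pairs):
    """k-bit binary shape descriptor of a segmented bounding box.

    labels: integer array mapping each voxel of the box to its connected
            component (within the surrounding connectivity region).
    pairs:  fixed list of k (a_i, b_i) voxel-coordinate pairs; bit i is 1
            iff a_i and b_i lie in the same component.
    """
    return np.array([labels[a] == labels[b] for a, b in pairs], dtype=np.uint8)

# The fixed pairs are sampled once, uniformly at random, and then reused
# identically at training and test time:
rng = np.random.default_rng(0)
box = (17, 17, 17)
pairs = [(tuple(rng.integers(0, box, 3)), tuple(rng.integers(0, box, 3)))
         for _ in range(512)]
```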
Note that this descriptor serves in general as a type of sketch of a full segmentation of a given
bounding box. By restricting one of the two positions of each pair to be the center position of the
bounding box, we instead obtain a sketch of just the single segment containing the center position.
We refer to the descriptor in this case as center-based, and to the general case as pairwise, as shown
in Fig. 2b. We will use these shape descriptors to represent only local sub-regions of a segmentation.
To represent shape information throughout a large volume, we compute shape descriptors densely at
all positions in a sliding window fashion, as shown in Fig. 2c.
2 The BRIEF descriptor [5] is similarly defined as a binary descriptor based on a subset of the pairs of points within a patch, but each bit is based on the intensity difference, rather than connectivity, between each pair.
Figure 2: Illustration of shape descriptors. (a) Sequence showing computation of a shape descriptor. (b) Shape descriptors are computed at multiple scales; pairwise descriptors consider arbitrary pairwise connectivity, while center-based shape descriptors restrict one position of each pair to be the center point. (c) Shape descriptors are computed densely at every position within the volume. The connected components of the bounding box U for which the descriptor is computed are shown in distinct colors. The pairwise connectivity relationships that define the descriptor are indicated by dashed lines; connected pairs are shown in white, while disconnected pairs are shown in black. Connectivity is determined based on the connected components of the underlying segmentation, not the geometry of the line itself. While this illustration is 2-D, in our experiments shape descriptors are computed fully in 3-D.
Connectivity Regions
As defined, a single shape descriptor represents the segmentation within its fixed-size bounding box;
by shifting the position of the bounding box we can obtain descriptors corresponding to different
local regions of some larger segmentation. The size of the bounding box determines the scale of the
local representation. This raises the question of how connectivity should be defined within these local
regions. Two voxels may be connected only by a long path well outside the descriptor bounding box.
As we would like the shape descriptors to be consistent with the local topology, such pairs should
be considered disconnected. Shape descriptors are, therefore, defined with respect to connectivity
within some larger connectivity region, which necessarily contains one or more descriptor bounding
boxes but may in general be significantly smaller than the full segmentation; conceptually, the shape
descriptor bounding box slides around to all possible positions contained within the connectivity
region. (This sliding necessarily results in some minor inconsistency in context between different
positions, but reduces computational and memory costs.) To obtain shape descriptors at all positions,
we simply tile the space with overlapping rectangular connectivity regions of appropriate uniform
size and stride, as shown in the supplement. The connectivity region size determines the degree
of locality of the connectivity information captured by the shape descriptor (independent of the
descriptor bounding box size). It also affects computational costs, as described in the supplement.
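As a sketch of this tiling, assuming scikit-image is available: `skimage.measure.label` connects neighboring voxels that share the same segment value, so applying it per region yields the locally consistent component labels that the descriptors are defined over. The region size and stride below are illustrative only.

```python
import numpy as np
from skimage.measure import label

def connectivity_regions(seg, size=64, stride=32):
    """Yield (origin, local component labels) for overlapping regions.

    Within each region, voxels of the same segment are split into the
    connected components they form *locally*, so that two voxels joined
    only by a path outside the region count as disconnected.
    """
    for z in range(0, seg.shape[0] - size + 1, stride):
        for y in range(0, seg.shape[1] - size + 1, stride):
            for x in range(0, seg.shape[2] - size + 1, stride):
                block = seg[z:z+size, y:y+size, x:x+size]
                # label() connects neighbors holding the same value
                yield (z, y, x), label(block, connectivity=1)
```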
4 Energy model learning
We define the local energy model $\hat{E}_s(r; v)$ for each shape descriptor type/scale s by a learned neural network model that computes a real-valued score in [0, 1] from a shape descriptor r and an image feature vector v.
To simplify the presentation, we define the following notation for the forward discrete derivative of f with respect to S: $\Delta^e_S f(S) := f(S + e) - f(S)$.

Based on this notation, we have the discrete derivative of the energy function $\Delta^e_S E(S; I) = E(S + e; I) - E(S; I)$, where $S + e$ denotes the result of merging the two supervoxels corresponding to e in the existing segmentation S. To agglomerate, our greedy policy simply chooses at step t the action e that minimizes $\Delta^e_{S^t} E(S^t; I)$, where $S^t$ denotes the current segmentation at step t.
As in prior work [22], we treat this as a classification problem, with the goal of matching the sign of $\Delta^e_{S^t} E(S^t; I)$ to $\Delta^e_{S^t} \mathrm{error}(S^t, S^*)$, the corresponding change in segmentation error with respect to a ground truth segmentation $S^*$, measured using Variation of Information [21].
4.1 Local training procedure
Because the $\Delta^e_{S^t} E(S^t; I)$ term is simply the sum of the changes in energy from each position and descriptor type s, as a heuristic we optimize the parameters of the energy model $\hat{E}_s(r; v)$ independently for each shape descriptor type/scale s. We seek to minimize the expectation

$$\mathbb{E}_i\Big[\ell\big(\Delta^{e_i}_{S_i}\mathrm{error}(S_i, S^*),\ \hat{E}_s(r_s(x_i; S_i + e_i);\, \phi(x_i; I))\big) + \ell\big(-\Delta^{e_i}_{S_i}\mathrm{error}(S_i, S^*),\ \hat{E}_s(r_s(x_i; S_i);\, \phi(x_i; I))\big)\Big],$$

where i indexes over training examples that correspond to a particular sampled position $x_i$ and a merge action $e_i$ applied to a segmentation $S_i$. $\ell(y, a)$ denotes a binary classification loss function, where $a \in [0, 1]$ is the predicted probability that the true label y is positive, weighted by |y|. Note that if $\Delta^{e_i}_{S_i}\mathrm{error}(S_i, S^*) < 0$, then action $e_i$ improved the score and therefore we want a low predicted score for the post-merge descriptor $r_s(x_i; S_i + e_i)$ and a high predicted score for the pre-merge descriptor $r_s(x_i; S_i)$; if $\Delta^{e_i}_{S_i}\mathrm{error}(S_i, S^*) > 0$ the opposite applies. We tested the standard log loss $\ell(y, a) := -|y| \cdot [\mathbb{1}_{y>0} \log(a) + \mathbb{1}_{y<0} \log(1 - a)]$, as well as the signed linear loss $\ell(y, a) := y \cdot a$, which more closely matches how the $E_s(x; S_i; I)$ terms contribute to the overall $\Delta^e_S E(S; I)$ scores.
Stochastic gradient descent (SGD) is used to perform the optimization.
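In code, the two losses amount to the following sketch (sign conventions as defined above; this is not the authors' training code):

```python
import numpy as np

def log_loss(y, a, eps=1e-7):
    # Weighted binary log loss: the example's weight is |y| and its
    # label is sign(y); a is the predicted probability of a positive label.
    a = np.clip(a, eps, 1 - eps)
    return -abs(y) * (np.log(a) if y > 0 else np.log(1 - a))

def signed_linear_loss(y, a):
    # Pushes a toward 0 when y > 0 and toward 1 when y < 0, mirroring
    # how the E_s terms enter the overall energy difference.
    return y * a
```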
We obtain training examples by agglomerating using the expert policy that greedily optimizes $\mathrm{error}(S^t, S^*)$. At each segmentation state $S^t$ during an agglomeration step (including the initial state), for each possible agglomeration action e, and each position x within the volume, we compute the shape descriptor pair $r_s(x; S^t)$ and $r_s(x; S^t + e)$, reflecting the pre-merge and post-merge states, respectively. If $r_s(x; S^t) \ne r_s(x; S^t + e)$, we emit a training example corresponding to this descriptor pair. We thereby obtain a conceptual stream of examples $\langle e,\ \Delta^e_{S^t}\mathrm{error}(S^t, S^*),\ \phi(x; I),\ r_s(x; S^t),\ r_s(x; S^t + e)\rangle$.
This stream may contain billions of examples (many of them highly correlated), far more than required to learn the parameters of $E_s$. To reduce resource requirements, we use priority sampling [12], based on $|\Delta^e_S\mathrm{error}(S, S^*)|$, to obtain a fixed number of weighted samples without replacement for each descriptor type s. We equalize the total weight of true merge examples ($\Delta^e_S\mathrm{error}(S, S^*) < 0$) and false merge examples ($\Delta^e_S\mathrm{error}(S, S^*) > 0$) in order to avoid learning degenerate models.3
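Priority sampling [12] itself admits a compact implementation; a sketch under its standard formulation, where item i receives priority $w_i/u_i$ for uniform $u_i$, the top m priorities are kept, and each kept item is reweighted by $\max(w_i, \tau)$ with $\tau$ the (m+1)-st largest priority:

```python
import numpy as np

def priority_sample(weights, m, seed=0):
    """Return (indices, new_weights) of m items sampled without replacement.

    Assumes there are more than m items; the reweighting makes weighted
    subset-sum estimates over the sample unbiased.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    priorities = w / rng.random(len(w))           # q_i = w_i / u_i
    order = np.argsort(-priorities)
    keep, threshold = order[:m], priorities[order[m]]
    return keep, np.maximum(w[keep], threshold)
```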
5 Experiments
We tested our approach on a large, publicly available electron microscopy dataset, called Janelia FIB-25, of a portion of the Drosophila melanogaster optic lobe. The dataset was collected at 8 × 8 × 8 nm
3 For example, if most of the weight is on false merge examples, as would often occur without balancing, the model can simply learn to assign a score that increases with the number of 1 bits in the shape descriptor.
Figure 3: Segmentation accuracy on the 11-gigavoxel FIB-25 test set. Left: Pareto frontiers of information-theoretic split error H(p|t) versus merge error H(t|p), as used previously to evaluate segmentation accuracy. [22] Right: comparison of Variation of Information (lower is better) and Rand F1 score (higher is better). For CELIS, 3d-CNN+GALA, and 3d-CNN+Watershed, the hyperparameters were optimized for each metric on the training set.

  Method               VI      Rand F1
  CELIS (this paper)   1.672   0.691
  3d-CNN+GALA          2.069   0.597
  3d-CNN+Watershed     2.143   0.629
  7colseg1             2.981   0.099
  Oracle               0.428   0.901
resolution using Focused Ion Beam Scanning Electron Microscopy (FIB-SEM); a labor-intensive
semi-automated approach was used to segment all of the larger neuronal processes within a ≈ 20,000
cubic micron volume (comprising about 25 billion voxels). [27] To our knowledge, this challenging
dataset is the largest publicly available electron microscopy dataset of neuropil with a corresponding
"ground truth" segmentation.
For our experiments, we split the dataset into separate training and testing portions along the z axis:
the training portion comprises z-sections 2005–5005, and the testing portion comprises z-sections 5005–8000 (about 11 billion voxels).
5.1 Boundary classification and oversegmentation
To obtain image features and an oversegmentation to use as input for agglomeration, we trained
convolutional neural networks to predict, based on a 35 × 35 × 9 voxel image context region, whether
the center voxel is part of the same neurite as the adjacent voxel in each of the x, y, and z directions, as
in prior work. [29] We optimized the parameters of the network using stochastic gradient descent with
log loss. We trained several different networks, varying as hyperparameters the amount of dilation of
boundaries in the training data (in order to increase extracellular space) from 0 to 8 voxels and whether
components smaller than 10000 voxels were excluded. See the supplementary information for a
description of the network architecture. Using these connection affinities, we applied a watershed
algorithm [33, 34] to obtain an (approximate) oversegmentation. We used parameters $T_l = 0.95$, $T_h = 0.95$, $T_e = 0.5$, and $T_s = 1000$ voxels.
5.2 Energy model architecture
We used five types of 512-dimensional shape descriptors: three pairwise descriptor types with 9³, 17³, and 33³ bounding boxes, and two center-based descriptor types with 17³ and 33³ bounding boxes, respectively. The connectivity positions within the bounding boxes for each descriptor type
were sampled uniformly at random.
We used the 512-dimensional fully-connected penultimate layer output of the low-level classification convolutional neural network as the image feature vector $\phi(x; I)$. For each shape descriptor type s, we used the following architecture for the local energy model $\hat{E}_s(r; v)$: we concatenated the shape descriptor vector and the image feature vector to obtain a 1024-dimensional input vector. We used
two 2048-dimensional fully-connected rectified linear hidden layers, followed by a logistic output
unit, and applied dropout (with p = 0.5) after the last hidden layer. While this effectively computes a
score from a raw image patch and a shape descriptor, by segregating expensive convolutional image
processing that does not depend on the shape descriptor, this architecture allows us to benefit from
pre-training and precomputation of the intermediate image feature vector $\phi(x; I)$ for each position x.
Training for both the energy models and the boundary classifier was performed using asynchronous
SGD using a distributed architecture. [9]
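The architecture described above translates into a small feed-forward network; a PyTorch sketch with the layer sizes from the text (the class and argument names are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class LocalEnergyModel(nn.Module):
    """E_hat_s: (512-bit descriptor, 512-d image features) -> score in [0, 1]."""
    def __init__(self, descriptor_bits=512, feature_dim=512, hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(descriptor_bits + feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Dropout(p=0.5),            # dropout after the last hidden layer
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, descriptor, features):
        return self.net(torch.cat([descriptor, features], dim=-1))
```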
5.3 Evaluation
We compared our method to the state-of-the-art agglomeration method GALA [22], which trains
a random forest classifier to predict merge decisions using image features derived from boundary
probabilities. 4 To obtain such probabilities from our low-level convolutional neural network classifier,
which predicts edge affinities between adjacent voxels rather than per-voxel predictions, we compute
for each voxel the minimum connection probability to any voxel in its 6-connectivity neighborhood,
and treat this as the probability/score of it being cell interior.
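A NumPy sketch of this conversion, assuming `aff` stores the three per-voxel affinities to the next voxel along each axis (boundary handling is simplified to wrap-around here):

```python
import numpy as np

def interior_probability(aff):
    """aff: float array of shape (3, Z, Y, X) holding affinities between each
    voxel and its neighbor in the +z, +y, +x direction. Returns, per voxel,
    the minimum connection probability over its 6-neighborhood."""
    p = np.ones(aff.shape[1:])
    for axis in range(3):
        a = aff[axis]
        p = np.minimum(p, a)                 # edge to the +axis neighbor
        p = np.minimum(p, np.roll(a, 1, axis=axis))  # edge from the -axis neighbor
    return p
```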
For comparison, we also evaluated a watershed procedure applied to the CNN affinity graph output,
under varying parameter choices, to measure the accuracy of the deep CNN boundary classification
without the use of an agglomeration procedure. Finally, we evaluated the accuracy of the publicly
released automated segmentation of FIB-25 (referred to as 7colseg1) [13] that was the basis of
the proofreading process used to obtain the ground truth; it was produced by applying watershed
segmentation and a variant of GALA agglomeration to the predictions made by an Ilastik [25]-trained
voxel classifier.
We tested both GALA and CELIS using the same initial oversegmentations for the training and test
regions. To compare the accuracy of the reconstructions, we computed two measures of segmentation
consistency relative to the ground truth: Variation of Information [21] and Rand F1 score, defined as
the F1 classification score over connectivity between all voxel pairs within the volumes; these are the
primary metrics used in prior work. [28, 8, 22] The former has the advantage of weighing segments
linearly in their size rather than quadratically.
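For reference, Variation of Information can be computed from the joint label distribution; a NumPy sketch, assuming label ids are small non-negative integers (relabel first for large volumes):

```python
import numpy as np

def variation_of_information(pred, truth):
    """VI(p, t) = H(p | t) + H(t | p), from flattened integer label arrays."""
    pred, truth = pred.ravel(), truth.ravel()
    joint = np.zeros((pred.max() + 1, truth.max() + 1))
    np.add.at(joint, (pred, truth), 1.0)     # joint counts over (pred, truth)
    p = joint / joint.sum()
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_joint - h_x) + (h_joint - h_y)  # H(t|p) + H(p|t)
```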
Because any agglomeration method is ultimately limited by the quality of the initial oversegmentation,
we also computed the accuracy of an oracle agglomeration policy that greedily optimizes the
error metric directly. (Computing the true globally-optimal agglomeration under either metric is
intractable.) This serves as an (approximate) upper bound that is useful for separating the error due to
agglomeration from the error due to the initial oversegmentation.
6 Results
Figure 3 shows the Pareto optimal trade-offs between test set split and merge error of each method
obtained by varying the choice of hyperparameters and agglomeration thresholds, as well as the
Variation of Information and Rand F1 scores obtained from the training set-optimal hyperparameters.
CELIS consistently outperforms all other methods by a significant margin under both metrics. The
large gap between the Oracle results and the best automated reconstruction indicates, however, that
there is still large room for improvement in agglomeration.
While the evaluations are done on a single dataset, it is a single very large dataset; to verify that
the improvement due to CELIS is broad and general (rather than localized to a very specific part of
the image volume), we also evaluated accuracy independently on 18 non-overlapping 500³-voxel
subvolumes evenly spaced within the test region. On all subvolumes CELIS outperformed the best
existing method under both metrics, with a median reduction in Variation of Information error of 19%
and in Rand F1 error of 22%. This suggests that CELIS is improving accuracy in many parts of the
volume that span significant variations in shape and image characteristics.
4 GALA also supports multi-channel image features, potentially representing predicted probabilities of additional classes, such as mitochondria, but we did not make use of this functionality as we did not have training data for additional classes.
7 Discussion
We have introduced CELIS, a framework for modeling image segmentations using a learned energy
function that specifically exploits the combinatorial nature of dense segmentation. We have described
how this approach can be used to model the conditional energy of a segmentation given an image, and
how the resulting model can be used to guide supervoxel agglomeration decisions. In our experiments
on a challenging 3d microscopy reconstruction problem, CELIS improved volumetric reconstruction
accuracy by 20% over the best existing method, and offered a strictly better trade-off between split
and merge errors, by a wide margin, compared to existing methods.
The experimental results are unique in the scale of the evaluations: the 11-gigavoxel test region is 2–4
orders of magnitude larger than used for evaluation in prior work, and we believe this large scale of
evaluation to be critically important; we have found evaluations on smaller volumes, containing only
short neurite fragments, to be unreliable at predicting accuracy on larger volumes (where propagation
of merge errors is a major challenge). While more computationally expensive than many prior
methods, CELIS is nonetheless practical: we have successfully run CELIS on volumes approaching
? 1 teravoxel in a matter of hours, albeit using many thousands of CPU cores.
In addition to advancing the state of the art in learning-based image segmentation, this work also has
significant implications for the application area we have studied, connectomic reconstruction. The
FIB-25 dataset reflects state-of-the-art techniques in sample preparation and imaging for large-scale
neuron reconstruction, and in particular is highly representative of much larger datasets actively
being collected (e.g. of a full adult fly brain). We expect, therefore, that the significant improvements
in automated reconstruction accuracy made by CELIS on this dataset will directly translate to a
corresponding decrease in human proof-reading effort required to reconstruct a given volume of tissue,
and a corresponding increase in the total size of neural circuit that may reasonably be reconstructed.
Future work in several specific areas seems particularly fruitful:
- End-to-end training of the CELIS energy modeling pipeline, including the CNN model
for computing the image feature representation and the aggregation of local energies at
each position and scale. Because the existing pipeline is fully differentiable, it is directly
amenable to end-to-end training.
- Integration of the CELIS energy model with discriminative training of a neural network-based agglomeration policy. Such a policy could depend on the distribution of local energy
changes, rather than just the sum, as well as other per-object and per-action features proposed
in prior work. [22, 3]
- Use of a CELIS energy model for fixing undersegmentation errors. While the energy
minimization procedure proposed in this paper is based on a greedy local search limited to
performing merges, the CELIS energy model is capable of evaluating arbitrary changes to
the segmentation. Evaluation of candidate splits (based on a hierarchical initial segmentation
or other heuristic criteria) would allow for the use of a potentially more robust simulated
annealing energy minimization procedure capable of both splits and merges.
Several recent works [24, 32, 7, 6] have integrated deep neural networks into pairwise-potential
conditional random field models. Similar to CELIS, these approaches combine deep learning with
structured prediction, but differ from CELIS in several key ways:
- Through a restriction to models that can be factored into pairwise potentials, these approaches are able to use mean field and pseudomarginal approximations to perform efficient
approximate inference. The CELIS energy model, in contrast, sacrifices factorization for the
richer combinatorial modeling provided by the proposed 3-D shape descriptors.
- More generally, these prior CRF methods are focused on refining predictions (e.g. improving
boundary localization/detail for semantic segmentation) made by a feed-forward neural
network that are correct at a high level. In contrast, CELIS is designed to correct fundamental
inaccuracy of the feed-forward convolutional neural network in critical cases of ambiguity,
which is reflected in the much greater complexity of the structured model.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No.
1118055.
References
[1] B. Andres, U. Köthe, M. Helmstaedter, W. Denk, and F. Hamprecht. Segmentation of SBFSEM volume data of neural tissue by hierarchical classification. Pattern Recognition, pages 142–152, 2008. 2
[2] Bjoern Andres, Thorben Kroeger, Kevin L Briggman, Winfried Denk, Natalya Korogod, Graham Knott, Ullrich Koethe, and Fred A Hamprecht. Globally optimal closed-surface segmentation for connectomics. In Computer Vision - ECCV 2012, pages 778–791. Springer, 2012. 2
[3] John A Bogovic, Gary B Huang, and Viren Jain. Learned versus hand-designed feature representations for 3d agglomeration. arXiv:1312.6159, 2013. 2, 8
[4] Kevin L Briggman and Winfried Denk. Towards neural circuit reconstruction with volume electron microscopy techniques. Current Opinion in Neurobiology, 16(5):562–570, 2006. 1
[5] Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua. BRIEF: Binary robust independent elementary features. In European Conference on Computer Vision, pages 778–792. Springer, 2010. 3
[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. CoRR, abs/1412.7062, 2014. 8
[7] Liang-Chieh Chen, Alexander G Schwing, Alan L Yuille, and Raquel Urtasun. Learning deep structured models. In Proc. ICML, 2015. 8
[8] Dan Claudiu Ciresan, Alessandro Giusti, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, pages 2852–2860, 2012. 7
[9] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223–1231. Curran Associates, Inc., 2012. 7
[10] Winfried Denk, Kevin L Briggman, and Moritz Helmstaedter. Structural neurobiology: missing link to a mechanistic understanding of neural computation. Nature Reviews Neuroscience, 13(5):351–358, 2012. 1
[11] Winfried Denk and Heinz Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol, 2(11):e329, 2004. 1
[12] Nick Duffield, Carsten Lund, and Mikkel Thorup. Priority sampling for estimation of arbitrary subset sums. Journal of the ACM (JACM), 54(6):32, 2007. 5
[13] Janelia FlyEM. https://www.janelia.org/project-team/flyem/data-and-software-release. Accessed: 2016-05-19. 7
[14] Jan Funke, Bjoern Andres, Fred A Hamprecht, Albert Cardona, and Matthew Cook. Efficient automatic 3d-reconstruction of branching neurons from EM data. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1004–1011. IEEE, 2012. 2
[15] KJ Hayworth, N Kasthuri, R Schalek, and JW Lichtman. Automating the collection of ultrathin serial sections for large volume TEM reconstructions. Microscopy and Microanalysis, 12(S02):86–87, 2006. 1
[16] Moritz Helmstaedter, Kevin L Briggman, and Winfried Denk. 3d structural imaging of the brain with photons and electrons. Current Opinion in Neurobiology, 18(6):633–641, 2008. 1
[17] Moritz Helmstaedter, Kevin L Briggman, and Winfried Denk. High-accuracy neurite reconstruction for high-throughput neuroanatomy. Nature Neuroscience, 14(8):1081–1088, 2011. 1
[18] Moritz Helmstaedter, Kevin L Briggman, Srinivas C Turaga, Viren Jain, H Sebastian Seung, and Winfried Denk. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature, 500(7461):168–174, 2013. 2
[19] Viren Jain, Srinivas C Turaga, Kevin L Briggman, Moritz N Helmstaedter, Winfried Denk, and H Sebastian Seung. Learning to agglomerate superpixel hierarchies. Advances in Neural Information Processing Systems, 2(5), 2011. 2
[20] Graham Knott, Herschel Marchman, David Wall, and Ben Lich. Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. The Journal of Neuroscience, 28(12):2959–2964, 2008. 1
[21] Marina Meilă. Comparing clusterings - an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007. 5, 7
[22] Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, and Dmitri B Chklovskii. Machine learning of hierarchical clustering to segment 2d and 3d images. PLoS ONE, 8(8):e71715, 2013. 2, 5, 6, 7, 8
[23] William M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, 1971. 3
[24] Alexander G Schwing and Raquel Urtasun. Fully connected deep structured networks. arXiv preprint arXiv:1503.02351, 2015. 8
[25] Christoph Sommer, Christoph Straehle, Ullrich Kothe, and Fred A Hamprecht. ilastik: Interactive learning and segmentation toolkit. In Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on, pages 230–233. IEEE, 2011. 7
[26] Shin-ya Takemura, Arjun Bharioke, Zhiyuan Lu, Aljoscha Nern, Shiv Vitaladevuni, Patricia K Rivlin, William T Katz, Donald J Olbris, Stephen M Plaza, Philip Winston, et al. A visual motion detection circuit suggested by Drosophila connectomics. Nature, 500(7461):175–181, 2013. 2
[27] Shin-ya Takemura, C Shan Xu, Zhiyuan Lu, Patricia K Rivlin, Toufiq Parag, Donald J Olbris, Stephen Plaza, Ting Zhao, William T Katz, Lowell Umayam, et al. Synaptic circuits and their variations within different columns in the visual system of Drosophila. Proceedings of the National Academy of Sciences, 112(44):13711–13716, 2015. 6
[28] Srinivas Turaga, Kevin Briggman, Moritz Helmstaedter, Winfried Denk, and Sebastian Seung. Maximin affinity learning of image segmentation. In Advances in Neural Information Processing Systems 22, pages 1865–1873. MIT Press, Cambridge, MA, 2009. 7
[29] Srinivas C. Turaga, Joseph F. Murray, Viren Jain, Fabian Roth, Moritz Helmstaedter, Kevin Briggman, Winfried Denk, and H. Sebastian Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010. 6
[30] Amelio Vazquez-Reina, Michael Gelbart, Daniel Huang, Jeff Lichtman, Eric Miller, and Hanspeter Pfister. Segmentation fusion for connectomics. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 177–184. IEEE, 2011. 2
[31] J. G. White, E. Southgate, J. N. Thomson, and S. Brenner. The structure of the nervous system of the nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 314(1165):1–340, 1986. 1
[32] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015. 8
[33] Aleksandar Zlateski. A design and implementation of an efficient, parallel watershed algorithm for affinity graphs. PhD thesis, Massachusetts Institute of Technology, 2011. 6
[34] Aleksandar Zlateski and H. Sebastian Seung. Image segmentation by size-dependent single linkage clustering of a watershed basin graph. CoRR, 2015. 6
Dimensionality Reduction of Massive Sparse Datasets Using Coresets
Dan Feldman
University of Haifa
Haifa, Israel
[email protected]
Mikhail Volkov
CSAIL, MIT
Cambridge, MA, USA
[email protected]
Daniela Rus
CSAIL, MIT
Cambridge, MA, USA
[email protected]
Abstract
In this paper we present a practical solution with performance guarantees to the
problem of dimensionality reduction for very large scale sparse matrices. We
show applications of our approach to computing the Principal Component Analysis (PCA) of any n × d matrix, using one pass over the stream of its rows. Our
solution uses coresets: a scaled subset of the n rows that approximates their sum
of squared distances to every k-dimensional affine subspace. An open theoretical
problem has been to compute such a coreset that is independent of both n and
d. An open practical problem has been to compute a non-trivial approximation to
the PCA of very large but sparse databases such as the Wikipedia document-term
matrix in a reasonable time. We answer both of these questions affirmatively. Our
main technical result is a new framework for deterministic coreset constructions
based on a reduction to the problem of counting items in a stream.
1 Introduction
Algorithms for dimensionality reduction usually aim to project an input set of d-dimensional vectors (database records) onto a (k ≤ d − 1)-dimensional affine subspace that minimizes the sum of squared distances to these vectors, under some constraints. Special cases include Principal Component Analysis (PCA), linear regression (k = d − 1), low-rank approximation (k-SVD), Latent Dirichlet Analysis (LDA) and non-negative matrix factorization (NNMF). Learning algorithms such as k-means clustering can then be applied to the low-dimensional data to obtain fast approximations with provable guarantees. To our knowledge, unlike SVD, there are no algorithms or coreset constructions with performance guarantees for computing the PCA of sparse n × n matrices in the streaming model, i.e. using memory that is poly-logarithmic in n. Many of the large-scale high-dimensional
the text case of Wikipedia. We can associate a matrix with Wikipedia, where the English words
define the columns (approximately 1.4 million) and the individual documents define the rows (approximately 4.4 million documents). This large scale matrix is sparse because most English words
do not appear in most documents. The size of this matrix is huge and no existing dimensionality
reduction algorithm can compute its eigenvectors. To this point, running the state of the art SVD
implementation from GenSim on the Wikipedia document-term matrix crashes the computer very
quickly after applying its step of random projection on the first few thousand documents. This is
because such dense vectors, each of length 1.4 million, use all of the computer's RAM capacity.
1 Support for this research has been provided by Hon Hai/Foxconn Technology Group and NSFSaTC-BSF
CNC 1526815, and in part by the Singapore MIT Alliance on Research and Technology through the Future of
Urban Mobility project and by Toyota Research Institute (TRI). TRI provided funds to assist the authors with
their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other
Toyota entity. We are grateful for this support.
Submitted to 30th Conference on Neural Information Processing Systems (NIPS 2016). Do not distribute.
In this paper we present a dimensionality reduction algorithm that can handle very large scale sparse
data sets such as Wikipedia and returns provably correct results. A long-open research question has
been whether we can have a coreset for PCA that is both small in size and a subset of the original
data. In this paper we answer this question affirmatively and provide an efficient construction. We
also show that this algorithm provides a practical solution to a long-standing open practical problem:
computing the PCA of large matrices such as those associated with Wikipedia.
2 Problem Formulation
Given a matrix A, a coreset C in this paper is defined as a weighted subset of rows of A such that the
sum of squared distances from any given k-dimensional subspace to the rows of A is approximately
the same as the sum of squared weighted distances to the rows in C. Formally,
For a compact set $S \subseteq \mathbb{R}^d$ and a vector x in $\mathbb{R}^d$, we denote the squared Euclidean distance between x and its closest point in S by

$$\mathrm{dist}^2(x, S) := \min_{s \in S} \|x - s\|_2^2.$$

For an n × d matrix A whose rows are $a_1, \ldots, a_n$, we define the sum of the squared distances from A to S by

$$\mathrm{dist}^2(A, S) := \sum_{i=1}^n \mathrm{dist}^2(a_i, S).$$
Definition 1 ((k, ε)-coreset). Given an n × d matrix A whose rows $a_1, \ldots, a_n$ are n points (vectors) in $\mathbb{R}^d$, an error parameter $\varepsilon \in (0, 1]$, and an integer $k \in [1, d-1] = \{1, \ldots, d-1\}$ that represents the desired dimensionality reduction, a (k, ε)-coreset for A is a weighted subset $C = \{w_i a_i \mid w_i > 0 \text{ and } i \in [n]\}$ of the rows of A, where $w = (w_1, \ldots, w_n) \in [0, \infty)^n$ is a non-negative weight vector, such that for every affine k-subspace S in $\mathbb{R}^d$ we have

$$\big|\mathrm{dist}^2(A, S) - \mathrm{dist}^2(C, S)\big| \le \varepsilon \cdot \mathrm{dist}^2(A, S). \qquad (1)$$

That is, the sum of squared distances from the n points to S approximates the sum of squared weighted distances $\sum_{i=1}^n w_i^2 (\mathrm{dist}(a_i, S))^2$ to S. The approximation is up to a multiplicative factor of $1 \pm \varepsilon$. By choosing $w = (1, \ldots, 1)$ we obtain a trivial (k, 0)-coreset. However, in a more efficient coreset most of the weights will be zero and the corresponding rows in A can be discarded. The cardinality of the coreset is thus the sparsity of w, given by $|C| = \|w\|_0 := |\{w_i \ne 0 \mid i \in [n]\}|$. If C is small, then the computation is efficient. Because C is a weighted subset of the rows of A, if A is sparse, then C is also sparse. A long-open research question has been whether we can have such a coreset that is both of size independent of the input dimensions (n and d) and a subset of the original input rows.
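To make Definition 1 concrete, the following NumPy sketch evaluates both sides of (1) for a given affine k-subspace, represented here by an orthonormal basis $V \in \mathbb{R}^{d \times k}$ and an offset vector c (the representation and names are illustrative):

```python
import numpy as np

def sum_sq_dist(A, V, c, weights=None):
    """Sum of (weighted) squared distances from the rows of A to the
    affine subspace {c + V y}; V has orthonormal columns (d x k)."""
    X = A - c                      # shift so the subspace passes the origin
    resid = X - (X @ V) @ V.T      # component orthogonal to span(V)
    d2 = np.sum(resid**2, axis=1)
    return d2.sum() if weights is None else np.sum(weights**2 * d2)

# Coreset property (1), for rows C = A[idx] with weights w:
# |sum_sq_dist(A, V, c) - sum_sq_dist(A[idx], V, c, w)| <= eps * sum_sq_dist(A, V, c)
```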
2.1 Related Work
In [24] it was recently proved that a (k, ε)-coreset of size $|C| = O(dk^3/\varepsilon^2)$ exists for every input matrix, and for distances raised to the power of $z \ge 1$ where z is constant. The proof is based on a general framework for constructing different kinds of coresets, known as sensitivity [10, 17]. This coreset is efficient for tall matrices, since its cardinality is independent of n. However, it is useless for "fat" or square matrices (such as the Wikipedia matrix above), where d is on the order of n, which is the main motivation for our paper. In [5], the Frank-Wolfe algorithm was used to construct different types of coresets than ours, and for different problems. Our approach is based on a solution that we give to an open problem in [5]; however, we can see how it can be used to compute the coresets in [5] and vice versa. For the special case z = 2 (sum of squared distances), a coreset of size $O(k/\varepsilon^2)$ was suggested in [7], with a randomized version in [8] for a stream of n points that, unlike the standard approach of using merge-and-reduce trees, returns a coreset of size independent of n with constant probability. These results minimize the $\|\cdot\|_2$ error, while our result minimizes the Frobenius norm, which is always higher, and may be higher by a factor of d. After appropriate weighting, we can apply the uniform sampling of size $O(k/\varepsilon^2)$ to get a coreset with a small Frobenius error [14], as in our paper. However, in this case the probability of success is only constant. Since in the streaming case we compute roughly n coresets (formally, O(n/m) coresets, where m is the size of the coreset), the probability that all these coreset constructions succeed is close to zero (roughly 1/n). Since the probability of failure in [14] decreases linearly with the size of the coreset, achieving a constant probability of success in the streaming model for O(n) coresets would require coresets of size no smaller than the input size.
There are many papers, especially in recent years, regarding data compression for computing the SVD of large matrices. None of these works addresses the fundamental problem of computing a sparse approximated PCA for a large matrix (in both rows and columns), such as Wikipedia. The reason is that current results use sketches, which do not preserve the sparsity of the data (e.g. because of using random projections). Hence, neither the sketch nor the PCA computed on the sketch is sparse. On the other hand, we define a coreset as a small weighted subset of rows, which is thus sparse if the input is sparse. Moreover, the low rank approximation of a coreset is sparse, since each of its right singular vectors is a sum of a small set of sparse vectors. While there are coreset constructions as defined in this paper, all of them have cardinality of at least d points, which makes them impractical for large data matrices, where d ≈ n. In what follows we describe these recent results in detail.
The recent results in [7, 8] suggest coresets that are similar to our definition of coresets (i.e., weighted
subsets), and do preserve sparsity. However, as mentioned above, they minimize the 2-norm error and
not the larger Frobenius error; maybe more importantly, they provide coresets for $k$-SVD (i.e.,
$k$-dimensional subspaces) and not for PCA ($k$-dimensional affine subspaces that might not intersect
the origin). In addition, [8] works with constant probability, while our algorithm is deterministic
(works with probability 1).
Software. Popular software for computing SVD such as GenSim [21], redsvd [12], or the MATLAB
sparse SVD function (svds) use sketches and crash for inputs of a few thousand documents and
a dimensionality reduction (approximation rank) $k < 100$ on a regular laptop, as expected from
the analysis of their algorithms. This is why existing implementations (including GenSim) extract
topics from large matrices (e.g. Wikipedia) based on a low-rank approximation of only a small subset
of a few thousand selected words (matrix columns), and not the complete Wikipedia matrix. Even
for $k = 3$, running the implementation of sparse SVD in Hadoop [23] took several days [13]. Next
we give a broad overview of the very latest state of the dimensionality reduction methods, such as
the Lanczos algorithm [16] for large matrices, that such systems employ under the hood.
Coresets. Following a decade of research, it was recently proved in [24] that a $(k, \varepsilon)$-coreset for low
rank approximation of size $|C| = O(dk^3/\varepsilon^2)$ exists for every input matrix, via the sensitivity
framework for constructing coresets [10, 17]. As discussed at the start of this section, such a coreset
is efficient for tall matrices but useless for fat or square matrices such as the Wikipedia matrix, where $d$
is in the order of $n$; our approach instead builds on a solution that we give to an open problem in [5],
where the Frank-Wolfe algorithm was used to construct different types of coresets than ours.
Sketches. A sketch in the context of matrices is a set of vectors $u_1, \cdots, u_s$ in $\mathbb{R}^d$ such that the sum of
squared distances $\sum_{i=1}^{n} (\mathrm{dist}(a_i, S))^2$ from the input $n$ points to every $k$-dimensional subspace $S$ in
$\mathbb{R}^d$ can be approximated by $\sum_{i=1}^{s} (\mathrm{dist}(u_i, S))^2$ up to a multiplicative factor of $1 \pm \varepsilon$. Note that even
if the input vectors $a_1, \cdots, a_n$ are sparse, the sketched vectors $u_1, \cdots, u_s$ in general are not sparse,
unlike the case of coresets. A sketch of cardinality $d$ can be constructed with no approximation error
($\varepsilon = 0$) by defining $u_1, \cdots, u_d$ to be the $d$ rows of the matrix $DV^T$, where $UDV^T = A$ is the SVD
of $A$. It was proved in [11] that taking the first $O(k/\varepsilon)$ rows of $DV^T$ yields such a sketch, i.e. of
size independent of $n$ and $d$.
The first sketch for sparse matrices was suggested in [6], but like more recent results, it assumes that
the complete matrix fits in memory. Other sketching methods that usually do not support streaming
include random projections [2, 1, 9] and randomly combined rows [20, 25, 22, 18].
The Lanczos Algorithm. The Lanczos method [19] and its variant [15] multiply a large matrix by a
vector for a few iterations to get its largest eigenvector $v_1$. Then the computation is done recursively
after projecting the matrix on the hyperplane that is orthogonal to $v_1$. However, $v_1$ is in general not
sparse even if $A$ is sparse. Hence, when we project $A$ on the subspace orthogonal to $v_1$, the resulting
matrix is dense for the rest of the computations ($k > 1$). Indeed, our experimental results show that
the MATLAB svds function, which uses this method, runs faster than the exact SVD, but crashes on
large input, even for small $k$.
This paper builds on this extensive body of prior work in dimensionality reduction, and our approach
uses coresets to solve the time and space challenges.
2.2 Key Contributions
Our main result is the first algorithm for computing a $(k, \varepsilon)$-coreset $C$ of size independent of
both $n$ and $d$, for any given $n \times d$ input matrix. The algorithm takes as input a finite set of $d$-dimensional vectors, a desired approximation error $\varepsilon$, and an integer $k \ge 0$. It returns a weighted
subset $S$ (coreset) of $k^2/\varepsilon^2$ such vectors. This coreset $S$ can be used to approximate the sum of
squared distances from the matrix $A \in \mathbb{R}^{n \times d}$, whose rows are the $n$ vectors seen so far, to any
$k$-dimensional affine subspace in $\mathbb{R}^d$, up to a factor of $1 \pm \varepsilon$. For a (possibly unbounded) stream of
such input vectors the coreset can be maintained at the cost of an additional factor of $\log^2 n$ (see the streaming sketch after this paragraph).
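One standard way such a streaming bound is achieved is the merge-and-reduce tree mentioned in Section 2.1; the following is a minimal Python sketch under that assumption. The helper name build_coreset is hypothetical: it stands for any batch construction (e.g., the output of our algorithm) that maps a weighted point set to a coreset of bounded size. The tree has $O(\log n)$ levels, which is where the extra polylogarithmic overhead comes from.

def stream_coresets(chunks, build_coreset):
    """Merge-and-reduce over a stream: keep at most one coreset per tree level.

    chunks: iterable of lists of (weight, point) pairs read from the stream.
    build_coreset: hypothetical batch routine returning a small coreset
                   (a list of (weight, point) pairs) for its input.
    """
    levels = []                                   # levels[i]: coreset at height i, or None
    for chunk in chunks:
        c = build_coreset(chunk)                  # leaf coreset for this chunk
        i = 0
        while i < len(levels) and levels[i] is not None:
            c = build_coreset(levels[i] + c)      # merge two coresets, then re-reduce
            levels[i] = None
            i += 1
        if i == len(levels):
            levels.append(None)
        levels[i] = c
    rest = [c for c in levels if c is not None]   # final merge of surviving levels
    return build_coreset([p for c in rest for p in c])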
The polynomial dependency on $d$ of the cardinality of previous coresets made them impractical for
fat or square input matrices, such as Wikipedia, images in a sparse feature space representation, or
the adjacency matrix of a graph. If each row of the input matrix $A$ has $O(\mathrm{nnz})$ non-zero entries, then
the update time per insertion, the overall memory that is used by our algorithm, and the low rank
approximation of the coreset $S$ is $O(\mathrm{nnz} \cdot k^2/\varepsilon^2)$, i.e. independent of $n$ and $d$.
We implemented our algorithm to obtain a low-rank approximation for the term-document matrix
of Wikipedia with provable error bounds. Since our streaming algorithm is also "embarrassingly
parallel," we ran it on the Amazon cloud, and obtained significantly better running time and accuracy
compared to existing heuristics (e.g. Hadoop/MapReduce) that yield non-sparse solutions.
The key contributions in this work are:
1. A new algorithm for dimensionality reduction of sparse data that uses a weighted subset of
the data, and is independent of both the size and dimensionality of the data.
2. An efficient algorithm for computing such a reduction, with provable bounds on size and
running time (cf. http://people.csail.mit.edu/mikhail/NIPS2016).
3. A system that implements this dimensionality reduction algorithm and an application of the system
to compute latent semantic analysis (LSA) of the entire English Wikipedia.
3 Technical Solution
Given an $n \times d$ matrix $A$, we propose a construction mechanism for a matrix $C$ of size $|C| = O(k^2/\varepsilon^2)$
and claim that it is a $(k, \varepsilon)$-coreset for $A$. We use the following corollary for Definition 1 of a coreset,
based on simple linear algebra that follows from the geometrical definitions (e.g. see [11]).
Property 1 (Coreset for sparse matrix). Let $A \in \mathbb{R}^{n \times d}$, let $k \in [1, d-1]$ be an integer, and let $\varepsilon > 0$
be an error parameter. For a diagonal matrix $W \in \mathbb{R}^{n \times n}$, the matrix $C = WA$ is a $(k, \varepsilon)$-coreset
for $A$ if for every matrix $X \in \mathbb{R}^{d \times (d-k)}$ such that $X^T X = I$, we have
$$ \text{(i)}\ \left|1 - \frac{\|WAX\|^2}{\|AX\|^2}\right| \le \varepsilon, \quad\text{and}\quad \text{(ii)}\ \left\|\overline{A} - \overline{WA}\right\| < \varepsilon \cdot \mathrm{var}(A), \qquad (2)$$
where $\overline{A}$ and $\overline{WA}$ denote the means of the rows of $A$ and $WA$ respectively, and $\mathrm{var}(A)$ is the sum of squared distances from the rows of $A$ to their mean.
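To make condition (i) concrete, the following toy Python check (a sketch with hypothetical sizes and an arbitrary stand-in $W$, not the coreset weights produced by our construction) evaluates the quantity that a true $(k, \varepsilon)$-coreset must keep below $\varepsilon$ for every orthonormal $X$:

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 30, 5
A = rng.standard_normal((n, d))
W = np.diag(rng.uniform(0.5, 1.5, size=n))            # stand-in weights (not a real coreset)
X, _ = np.linalg.qr(rng.standard_normal((d, d - k)))  # X in R^{d x (d-k)} with X^T X = I
ratio = np.linalg.norm(W @ A @ X) ** 2 / np.linalg.norm(A @ X) ** 2
print(abs(1.0 - ratio))                               # condition (i): <= eps for every such X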
The goal of this paper is to prove that such a coreset (Definition 1) exists for any matrix $A$ (Property 1) and can be computed efficiently. Formally,
Theorem 1. For every input matrix $A \in \mathbb{R}^{n \times d}$, an error $\varepsilon \in (0, 1]$ and an integer $k \in [1, d-1]$:
(a) there is a $(k, \varepsilon)$-coreset $C$ of size $|C| = O(k^2/\varepsilon^2)$;
(b) such a coreset can be constructed in $O(k^2/\varepsilon^2)$ time.
Theorem 1 is the formal statement of the main technical contribution of this paper. Sections 3–5
constitute a proof of Theorem 1.
To establish Theorem 1(a), we first state our two main results (Theorems 2 and 3) axiomatically, and
show how they combine such that Property 1 holds. Thereafter we prove these results in Sections
4 and 5, respectively. To prove Theorem 1(b) (efficient construction) we present an algorithm
(Algorithm 1 below) for computing a matrix $C$, and analyze the running time to show that $C$ can be
constructed in $O(k^2/\varepsilon^2)$ iterations.
Algorithm 1 CORESET-SUM-VECS($A, \varepsilon$)
1: Input: $A$: $n$ input points $a_1, \ldots, a_n$ in $\mathbb{R}^d$
2: Input: $\varepsilon \in (0, 1)$: the approximation error
3: Output: $w \in [0, \infty)^n$: non-negative weights
4: $A \leftarrow A - \mathrm{mean}(A)$
5: $A \leftarrow cA$ where $c$ is a constant s.t. $\mathrm{var}(A) = 1$
6: $w \leftarrow (1, 0, \ldots, 0)$
7: $j \leftarrow 1$, $p \leftarrow A_j$, $J \leftarrow \{j\}$
8: $M_j \leftarrow \{ y^2 \mid y = A \cdot A_j^T \}$
9: for $i = 1, \ldots, n$ do
10:   $j \leftarrow \arg\min \{ w_J \cdot M_J \}$
11:   $G \leftarrow W' A_J$ where $W'_{i,i} = w_i$
12:   $\|c\|^2 \leftarrow \|G^T G\|_F^2$
13:   $c \cdot p \leftarrow \sum_{i=1}^{|J|} (p\,G^T)_i$
14:   $\|c - p\|^2 \leftarrow 1 + \|c\|^2 - 2\, c \cdot p$
15:   $\mathrm{comp}_p(v) \leftarrow (1 - c \cdot p)/\|c - p\|$
16:   $\|c - c'\| \leftarrow \|c - p\| - \mathrm{comp}_p(v)$
17:   $\alpha \leftarrow \|c - c'\| / \|c - p\|$
18:   $w \leftarrow w\,(1 - |\alpha|)$
19:   $w_j \leftarrow w_j + \alpha$
20:   $w \leftarrow w / \sum_{i=1}^{n} w_i$
21:   $M_j \leftarrow \{ y^2 \mid y = A \cdot A_j^T \}$
22:   $J \leftarrow J \cup \{j\}$
23:   if $\|c\|^2 \le \varepsilon$ then
24:     break
25:   end if
26: end for
27: return $w$
(a) Coreset for sum of vectors algorithm. (b) Illustration showing the first 3 steps of the computation. [Figure: points $a_1 = c_1$, $a_2$, $a_3$, $a_4$ on the unit circle, with intermediate centers $c_2$, $c_3$ approaching the origin.]
Let $A \in \mathbb{R}^{n \times d}$ be a matrix of rank $d$, and let $U \Sigma V^T = A$ denote its full SVD. Let $W \in \mathbb{R}^{n \times n}$ be a
diagonal matrix. Let $k \in [1, d-1]$ be an integer. For every $i \in [n]$ let
$$ v_i = \left( U_{i,1}, \cdots, U_{i,k},\ \frac{U_{i,k+1:d}\, \Sigma_{k+1:d,k+1:d}}{\|\Sigma_{k+1:d,k+1:d}\|},\ 1 \right). \qquad (3)$$
Then the following two results hold:
Theorem 2 (Coreset for sum of vectors). For every set of $n$ vectors $v_1, \cdots, v_n$ in $\mathbb{R}^d$ and every
$\varepsilon \in (0, 1)$, a weight vector $w \in (0, \infty)^n$ of sparsity $\|w\|_0 \le 1/\varepsilon^2$ can be computed deterministically
in $O(nd/\varepsilon)$ time such that
$$ \left\| \sum_{i=1}^{n} v_i - \sum_{i=1}^{n} w_i v_i \right\| \le \varepsilon \sum_{i=1}^{n} \|v_i\|^2. \qquad (4)$$
Section 4 establishes a proof for Theorem 2.
Theorem 3 (Coreset for low rank approximation). For every $X \in \mathbb{R}^{d \times (d-k)}$ such that $X^T X = I$,
$$ \left| 1 - \frac{\|WAX\|^2}{\|AX\|^2} \right| \le 5 \left\| \sum_{i=1}^{n} \left( v_i v_i^T - W_{i,i}\, v_i v_i^T \right) \right\|. \qquad (5)$$
Section 5 establishes a proof for Theorem 3.
3.1 Proof of Theorem 1
Proof of Theorem 1(a). Replacing $v_i$ with $v_i v_i^T$, $\|v_i\|^2$ with $\|v_i v_i^T\|$, and $\varepsilon$ by $\varepsilon/(5k)$ in Theorem 2
yields
$$ \left\| \sum_{i=1}^{n} \left( v_i v_i^T - W_{i,i}\, v_i v_i^T \right) \right\| \le \frac{\varepsilon}{5k} \sum_{i=1}^{n} \|v_i v_i^T\| = \frac{\varepsilon}{5}. $$
Combining this inequality with (5) gives
$$ \left| 1 - \frac{\|WAX\|^2}{\|AX\|^2} \right| \le 5 \left\| \sum_{i=1}^{n} \left( v_i v_i^T - W_{i,i}\, v_i v_i^T \right) \right\| \le \varepsilon. $$
Thus the left-most term is bounded by the right-most term, which proves (2). This also means that
$C = WA$ is a coreset for $k$-SVD, i.e., (non-affine) $k$-dimensional subspaces. To support PCA
(affine subspaces) the coreset $C = WA$ needs to satisfy the expression in the last line of Property 1
regarding its mean. This holds using the last entry (one) in the definition of $v_i$ (3), which implies
that the sum of the rows is preserved as in equation (4). Therefore Property 1 holds for $C = WA$,
which proves Theorem 1(a).
The claim of Theorem 1(b) follows from a simple analysis of Algorithm 2, which implements this construction.
4 Coreset for Sum of Vectors (k = 0)
In order to prove the general result, Theorem 1(a), that is, the existence of a $(k, \varepsilon)$-coreset for any
$k \in [1, d-1]$, we first establish the special case $k = 0$. In this section, we prove Theorem 2 by
providing an algorithm for constructing a small weighted subset of points that constitutes a general
approximation for the sum of vectors.
To this end, we first introduce an intermediate result which shows that given $n$ points on the unit ball
with weight distribution $z$, there exists a small subset of points whose weighted mean is approximately the same as the weighted mean of the original points.
Let $D^n$ denote the set of all vectors $z \in [0, 1]^n$ that represent a distribution, i.e., $\sum_i z_i = 1$.
Our first technical result is that for any finite set of unit vectors $a_1, \ldots, a_n$ in $\mathbb{R}^d$, any distribution
$z \in D^n$, and every $\varepsilon \in (0, 1]$, we can compute a sparse weight vector $w \in D^n$ of sparsity (non-zero
entries) $\|w\|_0 \le 1/\varepsilon^2$.
Lemma 1. Let $z \in D^n$ be a distribution over $n$ unit vectors $a_1, \cdots, a_n$ in $\mathbb{R}^d$. For $\varepsilon \in (0, 1)$, a
sparse weight vector $w \in D^n$ of sparsity $s \le 1/\varepsilon^2$ can be computed in $O(nd/\varepsilon^2)$ time such that
$$ \left\| \sum_{i=1}^{n} z_i a_i - \sum_{i=1}^{n} w_i a_i \right\|_2 \le \varepsilon. \qquad (6)$$
Proof of Lemma 1. Please see Supplementary Material, Section A.
We prove Theorem 2 by providing a computation of such a sparse weight vector $w$. The intuition
for this computation is as follows. Given $n$ input points $a_1, \ldots, a_n$ in $\mathbb{R}^d$, with weighted mean
$\sum_i z_i a_i = 0$, we project all the points on the unit sphere. Pick an arbitrary starting point $a_1 = c_1$.
At each step find the farthest point $a_{j+1}$ from $c_j$, and compute $c_{j+1}$ by projecting the origin onto
the line segment $[c_j, a_{j+1}]$. Repeat this for $j = 1, \ldots, N$ iterations, where $N = 1/\varepsilon^2$. We prove that
$\|c_i\|^2 = 1/i$, thus if we iterate $1/\varepsilon^2$ times, this norm will be $\|c_{1/\varepsilon^2}\| = \varepsilon$. The resulting points $c_i$
are a weighted linear combination of a small subset of the input points. The output weight vector
$w \in D^n$ satisfies $c_N = \sum_{i=1}^{n} w_i a_i$, and this weighted subset forms the coreset.
Fig. 1a contains the pseudocode for Algorithm 1, and Fig. 1b illustrates the first steps of the main computation (lines 9–26); a sketch of the geometric idea in code follows below. Please see Supplementary Material, Section C for a complete line-by-line
analysis of Algorithm 1.
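The following minimal Python sketch follows the geometric intuition above (not the optimized bookkeeping of Algorithm 1). It assumes the rows of a lie on the unit sphere with weighted mean zero, and the function name is hypothetical:

import numpy as np

def coreset_sum_vecs_sketch(a, eps):
    """Geometric sketch of Theorem 2: a is (n, d) with unit-norm rows
    whose (distribution-)weighted mean is the origin."""
    n = a.shape[0]
    w = np.zeros(n)
    w[0] = 1.0
    c = a[0].copy()                                # c_1 = a_1
    for _ in range(int(np.ceil(1.0 / eps ** 2))):
        j = int(np.argmin(a @ c))                  # farthest unit point from c
        d = a[j] - c
        t = np.clip(-(c @ d) / (d @ d), 0.0, 1.0)  # project origin onto segment [c, a_j]
        c = c + t * d                              # c_{i+1}; maintains c = sum_i w_i a_i
        w *= (1.0 - t)
        w[j] += t
    return w, c                                    # ||c|| <= eps after 1/eps^2 steps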
Proof of Theorem 2. The proof of Theorem 2 follows by applying Lemma 1 after normalization of
the input points and then post-processing the output.
5 Coreset for Low Rank Approximation (k > 0)
In Section 4 we presented a new coreset construction for approximating the sum of vectors, showing
that given $n$ points on the unit ball there exists a small weighted subset of points that is a coreset
for those points. In this section we describe the reduction of Algorithm 1 for $k = 0$ to an efficient
algorithm for low rank approximation with any $k \in [1, d-1]$.
Algorithm 2 CORESET-LOWRANK($A, k, \varepsilon$)
1: Input: $A$: a sparse $n \times d$ matrix
2: Input: $k \in \mathbb{Z}_{>0}$: the approximation rank
3: Input: $\varepsilon \in (0, \tfrac{1}{2}]$: the approximation error
4: Output: $w \in [0, \infty)^n$: non-negative weights
5: Compute $U \Sigma V^T = A$, the SVD of $A$
6: $R \leftarrow \Sigma_{k+1:d,k+1:d}$
7: $P \leftarrow$ matrix whose $i$-th row, $\forall i \in [n]$, is $P_i = (U_{i,1:k},\ U_{i,k+1:d}\, R / \|R\|_F)$
8: $X \leftarrow$ matrix whose $i$-th row, $\forall i \in [n]$, is $X_i = P_i / \|P_i\|_F$
9: $w \leftarrow (1, 0, \ldots, 0)$
10: for $i = 1, \ldots, k^2/\varepsilon^2$ do
11:   $j \leftarrow \arg\min_{i=1,\ldots,n} \{ wX X_i \}$
12:   $a \leftarrow \sum_{i=1}^{n} w_i (X_i^T X_j)^2$
13:   $b \leftarrow \left( 1 - \|P X_j\|_F^2 + \sum_{i=1}^{n} w_i \|P X_i\|_F^2 \right) / \|P\|_F^2$
14:   $c \leftarrow \|wX\|_F^2$
15:   $\alpha \leftarrow (1 - a + b)/(1 + c - 2a)$
16:   $w \leftarrow (1 - \alpha) I_j + \alpha w$
17: end for
18: return $w$
[Figure: two panels illustrating Algorithm 2, (a) 1/2: Initialization and (b) 2/2: Computation.]
Conceptually, we achieve this reduction in two steps. The first step is to show that Algorithm 1 can
be reduced to an inefficient computation for low rank approximation of matrices. To this end, we
first prove Theorem 3, thus completing the existence clause, Theorem 1(a).
Proof of Theorem 3. Let $\varepsilon = \left\| \sum_{i=1}^{n} (1 - W_{i,i}^2)\, v_i v_i^T \right\|$. For every $i \in [n]$ let $t_i = 1 - W_{i,i}^2$. Set
$X \in \mathbb{R}^{d \times (d-k)}$ such that $X^T X = I$. Without loss of generality we assume $V = I$, i.e. $A = U\Sigma$;
otherwise we replace $X$ by $V^T X$. It thus suffices to prove that $\left| \sum_i t_i \|A_{i,:} X\|^2 \right| \le 5\varepsilon\, \|AX\|^2$.
Using the triangle inequality, we get
$$ \left| \sum_i t_i \|A_{i,:} X\|^2 \right| \le \left| \sum_i t_i \|A_{i,:} X\|^2 - \sum_i t_i \|(A_{i,1:k}, 0) X\|^2 \right| \qquad (7)$$
$$ \qquad + \left| \sum_i t_i \|(A_{i,1:k}, 0) X\|^2 \right|. \qquad (8)$$
We complete the proof by deriving bounds on (7) and (8), thus proving (5). For the complete proof,
please see Supplementary Material, Section B.
Together, Theorems 2 and 3 show that the error of the coreset is a $1 \pm \varepsilon$ approximation to the true
weighted mean. By Theorem 3, we can now simply apply Algorithm 1 to the right hand side of (5)
to compute the reduction. The intuition for this inefficient reduction is as follows. We first compute
the outer product of each row vector $x$ in the input matrix $A \in \mathbb{R}^{n \times d}$. Each such outer product
$x^T x$ is a matrix in $\mathbb{R}^{d \times d}$. Next, we expand every such matrix into a vector in $\mathbb{R}^{d^2}$ by concatenating
its entries. Finally, we combine each such vector back to be a row of the matrix $P \in \mathbb{R}^{n \times d^2}$. At
this point the reduction is complete; however, it is clear that this matrix expansion is inefficient.
The second step of the reduction is to transform the slow computation of running Algorithm 1 on the
expanded matrix $P \in \mathbb{R}^{n \times d^2}$ into an equivalent and provably fast computation on the original set of
points in $\mathbb{R}^d$. To this end we make use of the fact that each row of $P$ is a sparse vector, to
implicitly run the computation in the original row space $\mathbb{R}^d$. We present Algorithm 2 and prove that
it returns the weight vector $w = (w_1, \cdots, w_n)$ of a $(k, \varepsilon)$-coreset for low-rank approximation of the
input point set, and that this coreset is small, namely, only $O(k^2/\varepsilon^2)$ of the weights (entries) in $w$
are non-zero. The pseudocode for Algorithm 2 appears above. Please see Supplementary Material,
Section D for a complete line-by-line analysis of Algorithm 2.
6 Evaluation and Experimental Results
The coreset construction algorithm described in Section 5 was implemented in MATLAB. We make
use of the redsvd package [12] to improve performance, but it is not required to run the system. We
evaluate our system on two types of data: synthetic data generated with carefully controlled parameters, and real data from the English Wikipedia under the "bag of words" (BOW) model. Synthetic
data provides ground truth to evaluate the quality, efficiency, and scalability of our system, while
the Wikipedia data provides us with a grand challenge for latent semantic analysis computation.
Figure 1: Experimental results for synthetic data (Fig. 1a–1d) and Wikipedia (Fig. 1e–1f). Panels (a)–(c) plot relative error against coreset size (number of points) for $k = 10$, $k = 20$, and $k = 50$, comparing the SVD Coreset with Uniform Random Sampling and Weighted Random Sampling. Panel (d) shows synthetic data errors for $A \in \mathbb{R}^{5000 \times 1000}$ with sparsity $0.0333$, plotting $f(N)$ against the number of iterations $N$ for $f(N) \in \{\varepsilon,\ N\varepsilon,\ N \log N\, \varepsilon,\ N^2 \varepsilon,\ f^*(N) + C\}$. Panel (e) shows Wikipedia running time (x-axis log scale), and panel (f) shows Wikipedia log approximation errors ($\log_{10} \varepsilon$ against millions of points streamed) for $k = 1, 10, 100$.
For our synthetic data experiments, we used a moderate-size sparse input of $5000 \times 1000$ to evaluate
the relationship between the error $\varepsilon$ and the number of iterations $N$ of the algorithm. We then
compare our coreset against uniform sampling and weighted random sampling using the squared
norms of $U$ ($A = U\Sigma V^T$) as the weights. Finally, we evaluate the efficiency of our algorithm by
comparing the running time against the MATLAB svds function and against the most recent state-of-the-art
dimensionality reduction algorithm [8]. Figures 1a–1d show the experimental results. Please
see Supplementary Material, Section E for a complete description of the experiments.
6.1 Latent Semantic Analysis of Wikipedia
For our large-scale grand challenge experiment, we apply our algorithm to computing Latent Semantic Analysis (LSA) on the entire English Wikipedia. The size of the data is $n = 3.69$M (documents) with a dimensionality $d = 7.96$M (words). We specify a nominal error of $\varepsilon = 0.5$, which is a
theoretical upper bound for $N = 2k/\varepsilon$ iterations, and show that the coreset error remains bounded.
Figure 1f shows the log approximation error, i.e. the sum of squared distances of the coreset to the subspace, for increasing approximation rank $k = 1, 10, 100$. We see that the log error is proportional to
$k$, and as the number of streamed points increases into the millions, the coreset error remains bounded
by $k$. Figure 1e shows the running time of our algorithm compared against svds for increasing
dimensionality $d$ and a fixed input size $n = 3.69$M (number of documents).
Finally, we show that our coreset can be used to create a topic model of 100 topics for the entire
English Wikipedia. We construct a coreset of size $N = 1000$ words. Then, to generate the topics,
we compute a projection of the coreset onto a subspace of rank $k = 100$, as sketched below. Please see Supplementary
Material, Section F for more details, including an example of the topics obtained in our experiments.
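A minimal Python sketch of this last step (hypothetical variable names; per Property 1 the coreset is the row-weighted matrix $WA$, so we scale the selected rows by their weights before factoring):

import numpy as np

def coreset_topics(C, w, k):
    """C: (m, d) coreset rows; w: (m,) coreset weights; returns k topic directions."""
    WC = w[:, None] * C                            # weighted coreset, as in C = WA
    _, _, Vt = np.linalg.svd(WC, full_matrices=False)
    return Vt[:k]                                  # rows: rank-k right singular vectors

Each returned row is a direction over the vocabulary; its largest-magnitude coordinates give the top words of one topic.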
7 Conclusion
We present a new approach for dimensionality reduction using coresets. Our solution is general and
can be used to project spaces of dimension $d$ to subspaces of dimension $k < d$. The key feature of
our algorithm is that it computes coresets that are small in size and subsets of the original data. We
benchmark our algorithm for quality, efficiency, and scalability using synthetic data. We then apply
our algorithm to computing LSA on the entire Wikipedia, a computation hitherto not possible
with state-of-the-art algorithms. We see this work as a theoretical foundation and practical toolbox
for a range of dimensionality reduction problems, and we believe that our algorithms will be used to
construct many other coresets in the future. Our project codebase is open-sourced and can be found
here: http://people.csail.mit.edu/mikhail/NIPS2016.
References
[1] D. Achlioptas and F. Mcsherry. Fast computation of low-rank matrix approximations. Journal of the ACM (JACM), 54(2):9, 2007.
[2] S. Arora, E. Hazan, and S. Kale. A fast random sampling algorithm for sparsifying matrices. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 272–279. Springer, 2006.
[3] J. Batson, D. A. Spielman, and N. Srivastava. Twice-Ramanujan sparsifiers. SIAM Journal on Computing, 41(6):1704–1721, 2012.
[4] C. Carathéodory. Über den Variabilitätsbereich der Fourierschen Konstanten von positiven harmonischen Funktionen. Rendiconti del Circolo Matematico di Palermo (1884–1940), 32(1):193–217, 1911.
[5] K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on Algorithms (TALG), 6(4):63, 2010.
[6] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 81–90. ACM, 2013.
[7] M. B. Cohen, S. Elder, C. Musco, C. Musco, and M. Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 163–172. ACM, 2015.
[8] M. B. Cohen, C. Musco, and J. W. Pachocki. Online row sampling. CoRR, abs/1604.05448, 2016.
[9] P. Drineas and A. Zouzias. A note on element-wise matrix sparsification via a matrix-valued Bernstein inequality. Information Processing Letters, 111(8):385–389, 2011.
[10] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Proc. 41st Ann. ACM Symp. on Theory of Computing (STOC), 2010. Manuscript available at arXiv.org.
[11] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), 2013.
[12] Google. redsvd. https://code.google.com/archive/p/redsvd/, 2011.
[13] N. P. Halko. Randomized methods for computing low-rank approximations of matrices. PhD thesis, University of Colorado, 2012.
[14] M. Inaba, N. Katoh, and H. Imai. Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. In Proceedings of the Tenth Annual Symposium on Computational Geometry, pages 332–339. ACM, 1994.
[15] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11:517–553, 2010.
[16] C. Lanczos. An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. United States Governm. Press Office, Los Angeles, CA, 1950.
[17] M. Langberg and L. J. Schulman. Universal ε-approximators for integrals. Proceedings of ACM-SIAM Symposium on Discrete Algorithms (SODA), 2010.
[18] E. Liberty, F. Woolfe, P.-G. Martinsson, V. Rokhlin, and M. Tygert. Randomized algorithms for the low-rank approximation of matrices. Proceedings of the National Academy of Sciences, 104(51):20167–20172, 2007.
[19] C. C. Paige. Computational variants of the Lanczos method for the eigenproblem. IMA Journal of Applied Mathematics, 10(3):373–381, 1972.
[20] C. H. Papadimitriou, H. Tamaki, P. Raghavan, and S. Vempala. Latent semantic indexing: A probabilistic analysis. In Proceedings of the Seventeenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 159–168. ACM, 1998.
[21] R. Řehůřek, P. Sojka, et al. Gensim: statistical semantics in Python. 2011.
[22] T. Sarlos. Improved approximation algorithms for large matrices via random projections. In Foundations of Computer Science, 2006. FOCS '06. 47th Annual IEEE Symposium on, pages 143–152. IEEE, 2006.
6,187 | 6,597 | Optimal Binary Classifier Aggregation for General
Losses
Akshay Balsubramani
University of California, San Diego
[email protected]
Yoav Freund
University of California, San Diego
[email protected]
Abstract
We address the problem of aggregating an ensemble of predictors with known loss
bounds in a semi-supervised binary classification setting, to minimize prediction
loss incurred on the unlabeled data. We find the minimax optimal predictions for
a very general class of loss functions including all convex and many non-convex
losses, extending a recent analysis of the problem for misclassification error. The
result is a family of semi-supervised ensemble aggregation algorithms which are
as efficient as linear learning by convex optimization, but are minimax optimal
without any relaxations. Their decision rules take a form familiar in decision
theory, applying sigmoid functions to a notion of ensemble margin, without the
assumptions typically made in margin-based learning.
1 Introduction
Consider a binary classification problem, in which we are given an ensemble of individual classifiers
to aggregate into the most accurate predictor possible for data falling into two classes. Our predictions are measured on a large test set of unlabeled data, on which we know the ensemble classifiers'
predictions but not the true test labels. Without using the unlabeled data, the prototypical supervised solution is empirical risk minimization (ERM): measure the errors of the ensemble classifiers
with labeled data, and then simply predict according to the best classifier. But can we learn a better
predictor by using unlabeled data as well?
This problem is central to semi-supervised learning. The authors of this paper recently derived
the worst-case-optimal solution for it when performance is measured with classification error ([1]).
However, this zero-one loss is inappropriate for other common binary classification tasks, such as
estimating label probabilities, and handling false positives and false negatives differently. Such goals
motivate the use of different evaluation losses like log loss and cost-weighted misclassification loss.
In this paper, we generalize the setup of [1] to these loss functions and a large class of others. Like
the earlier work, the choice of loss function completely specifies the minimax optimal ensemble
aggregation algorithm in our setting, which is efficient and scalable.
The algorithm learns weights over the ensemble classifiers by minimizing a convex function. The
optimal prediction on each example in the test set is a sigmoid-like function of a linear combination
of the ensemble predictions, using the learned weighting. Due to the minimax structure, this decision
rule depends solely upon the loss function and upon the structure of the ensemble predictions on data,
with no parameter or model choices.
1.1 Preliminaries
Our setting generalizes that of [1], in which we are given an ensemble $H = \{h_1, \ldots, h_p\}$ and unlabeled (test) examples $x_1, \ldots, x_n$ on which to predict. The ensemble's predictions on the unlabeled
data are written as a matrix F:
?
h1 (x1 ) h1 (x2 ) ? ? ?
? ..
..
..
F=? .
.
.
hp (x1 ) hp (x2 ) ? ? ?
?
h1 (xn )
.. ?
. ?
hp (xn )
(1)
We use vector notation for the rows and columns of $\mathbf{F}$: $\mathbf{h}_i = (h_i(x_1), \cdots, h_i(x_n))^\top$ and $\mathbf{x}_j = (h_1(x_j), \cdots, h_p(x_j))^\top$. Each example $j \in [n]$ has a binary label $y_j \in \{-1, 1\}$, but the test labels
are allowed to be randomized, represented by values in $[-1, 1]$ instead of just the two values $\{-1, 1\}$;
e.g. $z_i = \tfrac{1}{2}$ indicates $y_i = +1$ w.p. $\tfrac{3}{4}$ and $-1$ w.p. $\tfrac{1}{4}$. So the labels on the test data can be
represented by $\mathbf{z} = (z_1; \ldots; z_n) \in [-1, 1]^n$, and are unknown to the predictor, which predicts
$\mathbf{g} = (g_1; \ldots; g_n) \in [-1, 1]^n$.
1.2 Loss Functions
We incur loss on test example $j$ according to its true label $y_j$. If $y_j = 1$, then the loss of predicting
$g_j \in [-1, 1]$ on it is some function $\ell_+(g_j)$; and if $y_j = -1$, then the loss is $\ell_-(g_j)$. To illustrate,
if the loss is the expected classification error, then $g_j \in [-1, 1]$ can be interpreted as a randomized
binary prediction in the same way as $z_j$, so that $\ell_+(g_j) = \tfrac{1}{2}(1 - g_j)$ and $\ell_-(g_j) = \tfrac{1}{2}(1 + g_j)$.
We call $\ell_\pm$ the partial losses here, following earlier work (e.g. [16]). Since the true label can only
be $\pm 1$, the partial losses fully specify the decision-theoretic problem we face, and changing them is
tantamount to altering the prediction task.
What could such partial losses conceivably look like in general? Observe that they intuitively measure discrepancy to the true label $\pm 1$. Consequently, it is natural for e.g. $\ell_+(g)$ to be decreasing,
as $g$ increases toward the notional true label $+1$. This suggests that both partial losses $\ell_+(\cdot)$ and
$\ell_-(\cdot)$ would be monotonic, which we assume hereafter in this paper (throughout we use increasing
to mean "monotonically nondecreasing" and vice versa).
Assumption 1. Over the interval $(-1, 1)$, $\ell_+(\cdot)$ is decreasing and $\ell_-(\cdot)$ is increasing, and both are
twice differentiable.
We view Assumption 1 as very mild, as motivated above. Notably, convexity or symmetry of the
partial losses are not required. In this paper, "general losses" refers to loss functions whose partial
losses satisfy Assumption 1, to contrast them with convex losses or other subclasses.
The expected loss incurred w.r.t. the randomized true label $z_j$ is a linear combination of the partial
losses:
$$ \ell(z_j, g_j) := \frac{1 + z_j}{2}\, \ell_+(g_j) + \frac{1 - z_j}{2}\, \ell_-(g_j) \qquad (2)$$
Decision theory and learning theory have thoroughly investigated the nature of the loss $\ell$ and its
partial losses, particularly how to estimate the "conditional label probability" $z_j$ using $\ell(z_j, g_j)$. A
natural operation to do this is to minimize the loss over $g_j$, and a loss $\ell$ such that $\arg\min_{g \in [-1,1]} \ell(z_j, g) = z_j$ (for all $z_j \in [-1, 1]$) is called a proper loss ([17, 16]).
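For instance, the square loss with partial losses $\ell_\pm(g) = (1 \mp g)^2$ is proper: setting the derivative of (2) to zero,
$$ \frac{\partial}{\partial g} \ell(z, g) = -(1 + z)(1 - g) + (1 - z)(1 + g) = 2(g - z) = 0, $$
so the minimizing prediction is exactly $g = z$.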
1.3 Minimax Formulation
As in [1], we formulate the ensemble aggregation problem as a two-player zero-sum game between
a predictor and an adversary. In this game, the first player is the predictor, playing predictions over
the test set $\mathbf{g} \in [-1, 1]^n$. The adversary then sets the true labels $\mathbf{z} \in [-1, 1]^n$.
The key idea is that any ensemble constituent $i \in [p]$ known to have low loss on the test data gives
us information about the unknown $\mathbf{z}$, as $\mathbf{z}$ is constrained to be "close" to the test predictions $\mathbf{h}_i$.
Each hypothesis in the ensemble represents such a constraint, and $\mathbf{z}$ is in the intersection of all these
constraint sets, which interact in ways that depend on the ensemble predictions $\mathbf{F}$.
Accordingly, for now assume the predictor knows a vector of label correlations $\mathbf{b}$ such that
$$ \forall i \in [p]: \quad \frac{1}{n} \sum_{j=1}^{n} h_i(x_j)\, z_j \ge b_i \qquad (3)$$
i.e. $\frac{1}{n}\mathbf{F}\mathbf{z} \ge \mathbf{b}$. When the ensemble is composed of binary classifiers which predict in $[-1, 1]$, these
$p$ inequalities represent upper bounds on individual classifier error rates. These can be estimated
from the training set w.h.p. when the training and test data are i.i.d., using uniform convergence,
exactly as in the prototypical supervised ERM procedure discussed in the introduction ([5]). So in
our game-theoretic formulation, the adversary plays under ensemble constraints defined by $\mathbf{b}$.
n
1X
`(z, g) :=
`(zj , gj )
n j=1
This goal can be written as the following optimization problem, a two-player zero-sum game:
V := min n max n `(z, g)
g?[?1,1]
z?[?1,1] ,
1
n Fz?b
min
max n
n
=
g?[?1,1]n z?[?1,1] ,
1
n Fz?b
1X
n j=1
1 + zj
2
`+ (gj ) +
1 ? zj
2
`? (gj )
(4)
(5)
In this paper, we solve the learning problem faced by the predictor, finding an optimal strategy
$\mathbf{g}^*$ realizing the minimum in (4) for any given "general loss" $\ell$. This strategy guarantees the best
possible worst-case performance on the unlabeled dataset, with an upper bound of $V$ on the loss.
Indeed, for all $\mathbf{z}_0$ and $\mathbf{g}_0$ obeying the constraints, Equation (4) implies the tight inequalities
$$ \min_{\mathbf{g} \in [-1,1]^n} \ell(\mathbf{z}_0, \mathbf{g}) \;\overset{(a)}{\le}\; V \;\le\; \max_{\substack{\mathbf{z} \in [-1,1]^n, \\ \frac{1}{n}\mathbf{F}\mathbf{z} \ge \mathbf{b}}} \ell(\mathbf{z}, \mathbf{g}_0) \qquad (6)$$
and $\mathbf{g}^*$ attains the equality in (a), with a worst-case loss as good as that of any aggregated predictor.
In our formulation of the problem, the constraints on the adversary take a central role. As discussed
in previous work with this formulation ([1, 2]), these constraints encode the information we have
about the true labels. Without them, the adversary would find it optimal to trivially guarantee error
(arbitrarily close to) $\tfrac{1}{2}$ by simply setting all labels uniformly at random ($\mathbf{z} = \mathbf{0}_n$). It is clear that
adding information through more constraints will never raise the error bound $V$.¹
Nothing has yet been assumed about $\ell(\mathbf{z}, \mathbf{g})$ other than Assumption 1. Our main results will require
only this, holding for general losses. This brings us to this paper's contributions:
1. We give the exact minimax $\mathbf{g}^* \in [-1, 1]^n$ for general losses (Section 2.1). The optimal
prediction on each example $j$ is a sigmoid function of a fixed linear combination of the
ensemble's $p$ predictions on it, so $\mathbf{g}^*$ is a non-convex function of the ensemble predictions.
By (6), this incurs the lowest worst-case loss of any predictor constructed with the ensemble
information $\mathbf{F}$ and $\mathbf{b}$.
2. We derive an efficient algorithm for learning $\mathbf{g}^*$, by solving a $p$-dimensional convex optimization problem. This applies to a broad class of losses (cf. Lem. 2), including any with
convex partial losses. Sec. 2 develops and discusses the results.
3. We extend the optimal $\mathbf{g}^*$ and the efficient learning algorithm for it, as above, to a large variety
of more general ensembles and prediction scenarios (Sec. 3), including constraints arising
from general loss bounds, and ensembles of "specialists" and heterogeneous features.
2 Results for Binary Classification
Based on the loss, define the function $\Gamma : [-1, 1] \mapsto \mathbb{R}$ as $\Gamma(g) := \ell_-(g) - \ell_+(g)$. (We also write
the vector $\Gamma(\mathbf{g})$ componentwise with $[\Gamma(\mathbf{g})]_j = \Gamma(g_j)$ for convenience, so that $\Gamma(\mathbf{h}_i) \in \mathbb{R}^n$ and
$\Gamma(\mathbf{x}_j) \in \mathbb{R}^p$.) Observe that by Assumption 1, $\Gamma(g)$ is increasing on its domain; so we can discuss
its inverse $\Gamma^{-1}(m)$, which is typically sigmoid-shaped, as will be illustrated.
With these we will set up the solution to the game (4), which relies on a convex function.
¹However, it may pose difficulties in estimating $\mathbf{b}$ by applying uniform convergence over a larger $H$ ([2]).
Figure 1: At left are plots of potential wells. At right are optimal prediction functions g, as a function of score.
Both are shown for various losses, as listed in Section 2.3.
Definition 1 (Potential Well). Define the potential well
$$ \Psi(m) := \begin{cases} -m + 2\ell_-(-1) & \text{if } m \le \Gamma(-1) \\ \ell_+(\Gamma^{-1}(m)) + \ell_-(\Gamma^{-1}(m)) & \text{if } m \in (\Gamma(-1), \Gamma(1)) \\ m + 2\ell_+(1) & \text{if } m \ge \Gamma(1) \end{cases} $$
Lemma 2. The potential well $\Psi(m)$ is continuous and 1-Lipschitz. It is also convex under any of
the following conditions:
(A) The partial losses $\ell_\pm(\cdot)$ are convex over $(-1, 1)$.
(B) The loss function $\ell(\cdot, \cdot)$ is a proper loss.
(C) $\ell_-'(x)\,\ell_+''(x) \ge \ell_-''(x)\,\ell_+'(x)$ for all $x \in (-1, 1)$.
Condition (C) is also necessary for convexity of $\Psi$, under Assumption 1.
So the potential wells for different losses are shaped similarly, as seen in Figure 1. Lemma 2 tells
us that the potential well is easy to optimize under any of the given conditions. Note that these
conditions encompass convex surrogate losses commonly used in ERM, including all such "margin-based" losses (convex univariate functions of $z_j g_j$), introduced primarily for their favorable computational properties.
An easily optimized potential well benefits us, because the learning problem basically consists of
optimizing it over the unlabeled data, as we will soon make explicit. The function that will actually
be optimized is in terms of the dual parameters, so we call it the slack function.
Definition 3 (Slack Function). Let $\sigma \ge \mathbf{0}_p$ be a weight vector over $H$ (not necessarily a distribution). The vector of scores is $\mathbf{F}^\top \sigma = (\mathbf{x}_1^\top \sigma, \ldots, \mathbf{x}_n^\top \sigma)$, whose elements' magnitudes are the
margins. The prediction slack function is
$$ \gamma(\sigma, \mathbf{b}) := \gamma(\sigma) := -\mathbf{b}^\top \sigma + \frac{1}{n} \sum_{j=1}^{n} \Psi(\mathbf{x}_j^\top \sigma) \qquad (7)$$
An optimal weight vector $\sigma^*$ is any minimizer of the slack function: $\sigma^* \in \arg\min_{\sigma \ge \mathbf{0}_p} [\gamma(\sigma)]$.
2.1 Solution of the Game
These are used to describe the minimax equilibrium of the game (4), in our main result.
Theorem 4. The minimax value of the game (4) is
$$ \min_{\mathbf{g} \in [-1,1]^n} \; \max_{\substack{\mathbf{z} \in [-1,1]^n, \\ \frac{1}{n}\mathbf{F}\mathbf{z} \ge \mathbf{b}}} \ell(\mathbf{z}, \mathbf{g}) = V = \frac{1}{2}\, \gamma(\sigma^*) = \frac{1}{2} \min_{\sigma \ge \mathbf{0}_p} \left[ -\mathbf{b}^\top \sigma + \frac{1}{n} \sum_{j=1}^{n} \Psi(\mathbf{x}_j^\top \sigma) \right] $$
The minimax optimal predictions are defined as follows: for all $j \in [n]$,
$$ g_j^* := g_j(\sigma^*) = \begin{cases} -1 & \text{if } \mathbf{x}_j^\top \sigma^* \le \Gamma(-1) \\ \Gamma^{-1}(\mathbf{x}_j^\top \sigma^*) & \text{if } \mathbf{x}_j^\top \sigma^* \in (\Gamma(-1), \Gamma(1)) \\ 1 & \text{if } \mathbf{x}_j^\top \sigma^* \ge \Gamma(1) \end{cases} \qquad (8)$$
$g_j^*$ is always an increasing sigmoid, as shown in Figure 1.
We can also redo the proof of Theorem 4 when $\mathbf{g} \in [-1, 1]^n$ is not left as a free variable set in the
game, but instead is preset to $\mathbf{g}(\sigma)$ as in (8) for some (possibly suboptimal) weight vector $\sigma$.
Observation 5. For any weight vector $\sigma_0 \ge \mathbf{0}_p$, the worst-case loss after playing $\mathbf{g}(\sigma_0)$ is
$$ \max_{\substack{\mathbf{z} \in [-1,1]^n, \\ \frac{1}{n}\mathbf{F}\mathbf{z} \ge \mathbf{b}}} \ell(\mathbf{z}, \mathbf{g}(\sigma_0)) \le \frac{1}{2}\, \gamma(\sigma_0) $$
The proof is a simplified version of that of Theorem 4; there is no minimum over $\mathbf{g}$ to deal with,
and the minimum over $\sigma \ge \mathbf{0}_p$ in Equation (13) is upper-bounded by using $\sigma_0$. This result is an
expression of weak duality in our setting, and generalizes Observation 4 of [1].
2.2 Ensemble Aggregation Algorithm
Theorem 4 defines a prescription for aggregating the given ensemble predictions on the test set.
Learning: Minimize the slack function $\gamma(\sigma)$, finding the minimizer $\sigma^*$ that achieves $V$.
This is a convex optimization under broad conditions (Lemma 2), and when the test examples are
i.i.d. the $\Psi$ term is a sum of $n$ i.i.d. functions. Therefore, it is readily amenable to standard first-order
optimization methods which require only $O(1)$ test examples at once. In practice, learning employs
such methods to approximately minimize $\gamma$, finding some $\sigma^A$ such that $\gamma(\sigma^A) \le \gamma(\sigma^*) + \epsilon$ for
some small $\epsilon$. Standard convex optimization methods are guaranteed to do this for binary classifier
ensembles, because the slack function is Lipschitz (Lemma 2) and $\|\mathbf{b}\|_\infty \le 1$.
Prediction: Predict $\mathbf{g}(\sigma^*)$ on any test example, as indicated in (8).
This decouples the prediction task over each test example separately, which requires $O(p)$ time and
memory, like $p$-dimensional linear prediction. After finding an $\epsilon$-approximate minimizer $\sigma^A$ in the
learning step as above, Observation 5 tells us that the prediction $\mathbf{g}(\sigma^A)$ has loss $\le V + \frac{\epsilon}{2}$.
In particular, note that there is no algorithmic dependence on $n$ in either step in a statistical learning
setting. So though our formulation is transductive, it is no less tractable than a stochastic optimization setting in which i.i.d. data arrive one at a time, and it applies to this common situation.
2.3 Examples of Different Losses
To further illuminate Theorem 4, we detail a few special cases in which $\ell_\pm$ are explicitly defined.
These losses may be found throughout the literature (see e.g. [16]). The key functions $\Psi$ and $g^*$ are
listed for these losses in Appendix A, and in many cases in Figure 1. The nonlinearities used for $g^*$
are sigmoids, arising solely from the intrinsic minimax structure of the classification game.
• 0-1 Loss: Here $g_j$ is taken to be a randomized binary prediction; this case was developed
in [1], the work we generalize in this paper.
• Log Loss, Square Loss
• Cost-Weighted Misclassification (Quantile) Loss: This is defined with a parameter $c \in [0, 1]$ representing the relative cost of false positives vs. false negatives, making the Bayes-optimal classifier the $c$-quantile of the conditional probability distribution ([19]).
• Exponential Loss, Logistic Loss
• Hellinger Loss: This is typically given for $p, y \in [0, 1]$ as $\frac{1}{2}\left( \left(\sqrt{p} - \sqrt{y}\right)^2 + \left(\sqrt{1-p} - \sqrt{1-y}\right)^2 \right)$. Our formulation is equivalent when the prediction and label are rescaled to $[-1, 1]$.
• "AdaBoost Loss": If the goal of AdaBoost ([18]) is interpreted as class probability estimation, the implied loss is proper and given in [6, 16].
• Absolute Loss and Hinge Loss: The absolute loss can be defined by $\ell_\pm^{\mathrm{abs}}(g_j) = 1 \mp g_j$, and the hinge loss also has $\ell_\pm(g_j) = 1 \mp g_j$, since the kink in the hinge loss only lies at $g_j = \pm 1$. These partial losses are the same as for 0-1 loss up to scaling, and therefore all our results for $\Psi$ and $\mathbf{g}^*$ are as well. So these losses are not shown in Appendix A.
• Sigmoid Loss: This is an example of a sigmoid-shaped margin loss, a nonconvex smooth surrogate for 0-1 loss. Similar losses have arisen in a variety of binary classification contexts, from robustness (e.g. [9]) to active learning ([10]) and structured prediction ([14]).
2.4 Related Work and Technical Discussion
There are two notable ways in which the result of Theorem 4 is particularly advantageous and general. First, the fact that $\ell(\mathbf{z}, \mathbf{g})$ can be non-convex in $\mathbf{g}$, yet solvable by convex optimization, is a
major departure from previous work. Second, the solution has a convenient dependence on $n$ (as
in [1]), simply averaging a function over the unlabeled data, which is not only mathematically convenient but also makes stochastic $O(1)$-space optimization practical. This is surprisingly powerful,
because the original minimax problem is jointly over the entire dataset, avoiding further independence or decoupling assumptions.
Both these favorable properties stem from the structure of the binary classification problem, as
we can describe by examining the optimization problem constructed within the proof of Thm. 4
(Appendix C.1). In it, the constraints which do not explicitly appear with Lagrange parameters are
all box, or $L_\infty$ norm, constraints. These decouple over the $n$ test examples, so the problem can
be reduced to the one-dimensional optimization at the heart of Eq. (14), which is solved ad hoc.
So we are able to obtain minimax results for these non-convex problems: the $g_i$ are "clipped"
sigmoid functions because of the bounding effect of the $[-1, 1]$ box constraints intrinsic to binary
classification. We introduce Lagrange parameters $\sigma$ only for the $p$ remaining constraints in the
problem, which do not decouple as above, applying globally over the $n$ test examples. However,
these constraints depend on $n$ only as an average over examples (which is how they arise in dual
form in Equation (16) of the proof), and the loss function itself is also an average (Equation (12)).
This makes the box constraint decoupling possible, and leads to the favorable dependence on $n$,
making an efficient solution possible for a potentially flagrantly non-convex problem.
To summarize, the technique of optimizing only "halfway into" the dual allows us to manipulate the minimax problem exactly, without using an approximation like weak duality, despite the
lack of convexity in $\mathbf{g}$. This technique was used implicitly for a different purpose in the "drifting
game" analysis of boosting ([18], Sec. 13.4.1). Existing boosting work is loosely related to our
approach in being a transductive game invoked to analyze ensemble aggregation, but it does not
consider unlabeled data and draws its power instead from being a repeated game ([18]).
The predecessor to this work ([1]) addresses a problem, 0-1 loss minimization, that is known to be
NP-hard when solved directly ([11]). Using the unlabeled data is essential to surmounting this. It
gives the dual problem an independently interesting interpretation, so the learning problem is on the
always-convex Lagrange dual function and is therefore tractable.
This paper's transductive formulation involves no surrogates or relaxations of the loss, in sharp contrast to most previous work. This allows us to bypass the consistency and agnostic-learning discussions ([22, 3]) common to ERM methods that use convex risk minimization. Convergence analyses
of those methods make heavy use of convexity of the losses and are generally done presupposing a
linear weighting over H ([21]), whereas here such structure emerges directly from Lagrange duality
and involves no convexity to derive the worst-case-optimal predictions.
The conditions in Assumption 1 are notably general. Differentiability of the partial losses is convenient, but not necessary, and is only used because first-order conditions are a convenient way to
establish convexity of the potential well in Lemma 2. It is never used elsewhere, including in the
minimax arguments used to prove Theorem 4. These manipulations are structured to be valid even if
$\ell_\pm$ are non-monotonic; but in this case, $g_j^*$ could turn out to be multi-valued, and hence not a genuine
function of the example's score.
We emphasize that our result on the minimax equilibrium (Theorem 4) holds for general losses;
the slack function may not be convex unless the further conditions of Lemma 2 are met, but
the interpretation of the optimum in terms of margins and sigmoid functions remains. All this
emerges from the inherent decision-theoretic structure of the problem (the proof of Appendix C.1). It
manifests in the fact that the function $g(\mathbf{x}_j^\top \sigma)$ is always increasing in $\mathbf{x}_j^\top \sigma$ for general losses, because
the function $\Gamma$ is increasing. This monotonicity typically needs to be assumed in a generalized linear
model (GLM; [15]) and related settings. $\Gamma$ appears loosely analogous to the link function used by
GLMs, with its inverse being used for prediction.
The optimal decision rules emerging from our framework are artificial neurons of the ensemble inputs. Helmbold et al. introduce the notion of a "matching loss" ([13]) for learning the parameters of
a (fully supervised) artificial neuron with an arbitrary increasing transfer function, effectively taking
the opposite tack of this paper by using a neuron's transfer function to derive a loss to minimize in order to learn the neuron's weights by convex optimization. Our assumptions on the loss, particularly
condition (C) of Lemma 2, have arisen independently in earlier online learning work by some of the
same authors ([12]); this may suggest connections between our techniques. We also note that our
family of general losses was defined independently by [19] in the ERM setting (dubbed "F-losses"),
in which condition (C) of Lemma 2 also has significance ([19], Prop. 2), but has seemingly
not been revisited thereafter. Further fleshing out the above connections would be interesting future
work.
3 Extensions
We detail a number of generalizations to the basic prediction scenario of Sec. 2. These extensions
are not mutually exclusive, and apply in conjunction with each other, but we list them separately for
clarity. They illustrate the versatility of our minimax framework, particularly Sec. 3.4.
3.1 Weighted Test Sets and Covariate Shift
Though our results here deal with binary classification of an unweighted test set, the formulation
deals with a weighted set with only a slight modification of the slack function:
Theorem 6. For any vector $\mathbf{r} \ge \mathbf{0}_n$,
$$ \min_{\mathbf{g} \in [-1,1]^n} \; \max_{\substack{\mathbf{z} \in [-1,1]^n, \\ \frac{1}{n}\mathbf{F}\mathbf{z} \ge \mathbf{b}}} \frac{1}{n} \sum_{j=1}^{n} r_j\, \ell(z_j, g_j) = \frac{1}{2} \min_{\sigma \ge \mathbf{0}_p} \left[ -\mathbf{b}^\top \sigma + \frac{1}{n} \sum_{j=1}^{n} r_j\, \Psi\!\left( \frac{\mathbf{x}_j^\top \sigma}{r_j} \right) \right] $$
Writing $\sigma_r^*$ as the minimizer of the RHS above, the optimal predictions are $\mathbf{g}^* = \mathbf{g}(\sigma_r^*)$, as in Theorem 4.
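In the Section 2.2 sketch (our hypothetical aggregate_01), this weighted game changes only the per-example potential term. For the 0-1 case, $r_j\, \Psi(\mathbf{x}_j^\top \sigma / r_j) = \max(|\mathbf{x}_j^\top \sigma|, r_j)$, so the corresponding potential computation becomes:

import numpy as np

def weighted_potential(scores, r):
    # r_j * Psi(score_j / r_j) with Psi(m) = max(|m|, 1) simplifies to max(|score_j|, r_j)
    return np.maximum(np.abs(scores), r)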
Such weighted classification can be parlayed into algorithms for general supervised learning problems via learning reductions ([4]). Allowing weights on the test set for the evaluation is tantamount
to accounting for known covariate shift in our setting; it would be interesting, though outside our
scope, to investigate scenarios with more uncertainty.
3.2 General Loss Constraints on the Ensemble
So far in the paper, we have considered the constraints on ensemble classifiers as derived from
their label correlations (i.e. 0-1 losses), as in (3). However, this view can be extended significantly
with the same analysis, because any general loss $\ell(\mathbf{z}, \mathbf{g})$ is linear in $\mathbf{z}$ (Eq. (2)), which allows our
development to go through essentially intact.
In summary, a classifier can be incorporated into our framework for aggregation if we have a generalization loss bound on it, for any "general loss." This permits an enormous variety of constraint sets, as each classifier considered can have constraints corresponding to any number of loss bounds on it, even multiple loss bounds using different losses. For instance, h_1 can yield a constraint corresponding to a zero-one loss bound, h_2 can yield one constraint corresponding to a square loss bound and another corresponding to a zero-one loss bound, and so on. Appendix B details this idea, extending Theorem 4 to general loss constraints.
3.3 Uniform Convergence Bounds for b

In our basic setup, b has been taken as a lower bound on ensemble classifier label correlations. But as mentioned earlier, the error in estimating b is in fact often quantified by two-sided uniform convergence (L_∞) bounds on b. Constraining z in this fashion amounts to L_1 regularization of the dual vector σ.
Proposition 7. For any ε ≥ 0,

    min_{g∈[−1,1]^n}  max_{z∈[−1,1]^n, ‖(1/n)Fz − b‖_∞ ≤ ε}  (1/n) Σ_{j=1}^n ℓ(z_j, g_j)
        = min_{σ∈R^p} [ −b^⊤σ + (1/n) Σ_{j=1}^n Ψ(x_j^⊤σ) + ε‖σ‖_1 ]

As in Thm. 4, the optimal g^* = g(σ_ε^*), where σ_ε^* is the minimizer of the right-hand side above.
Here we optimize over all vectors σ (not just nonnegative ones) in an L_1-regularized problem, convenient in practice because we can cross-validate over the parameter ε. To our knowledge, this particular analysis has been addressed in prior work only for the special case of the entropy loss on the probability simplex, discussed further in [8].
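As an illustration of this cross-validation-friendly form, the following minimal sketch minimizes the right-hand side of Prop. 7 by proximal gradient descent (ISTA). The potential Ψ here is a smooth convex stand-in chosen only so the code runs; in the framework, Ψ is derived from the loss ℓ. The step size and iteration count are likewise assumptions, not values from the paper.

    import numpy as np

    # F is the p x n matrix of ensemble predictions whose j-th column is x_j;
    # b is the vector of label-correlation bounds.

    def psi_prime(m):
        return m / np.sqrt(1.0 + m ** 2)   # derivative of the stand-in Psi(m) = sqrt(1 + m^2)

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def solve_l1_dual(F, b, eps, step=0.1, iters=2000):
        p, n = F.shape
        sigma = np.zeros(p)
        for _ in range(iters):
            grad = -b + F @ psi_prime(F.T @ sigma) / n            # gradient of the smooth part
            sigma = soft_threshold(sigma - step * grad, step * eps)  # prox of eps * ||.||_1
        return sigma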
Prop. 7 is a corollary of a more general result using differently scaled label correlation deviations within the ensemble, i.e. constraints of the form |(1/n)Fz − b| ≤ c coordinatewise for a general c ≥ 0^n. This turns out to be equivalent to regularizing the minimization over σ by its c-weighted L_1 norm c^⊤|σ| (Thm. 11), used to penalize the ensemble nonuniformly ([7]). This situation is motivated by uniform convergence over heterogeneous ensembles comprised of e.g. "specialist" predictors, for which a union bound ([5]) results in deviation bounds on (1/n)Fz − b with varying coordinates. Such ensembles are discussed next.
3.4 Heterogeneous Ensembles of Specialist Classifiers and Features
All the results and algorithms in this paper apply in full generality to ensembles of "specialist" classifiers that only predict on some subset of the test examples. This is done by merely calculating the constraints over only these examples, and changing F and b accordingly ([2]).

To summarize this from [2], suppose a classifier i ∈ [p] decides to abstain on an example x_j (j ∈ [n]) with probability 1 − v_i(x), and otherwise predict h_i(x). Our only assumption on {v_i(x_1), . . . , v_i(x_n)} is the reasonable one that Σ_{j=1}^n v_i(x_j) > 0, so classifier i is not a useless specialist that abstains everywhere.
The information about z contributed by classifier i is now not its overall correlation with z on the entire test set, but rather the correlation with z restricted to the test examples on which it predicts. On the test set, this is written as (1/n)Sz, where the matrix S is formed by reweighting each row of F:

    S := n [ ρ_i(x_j) h_i(x_j) ]_{i∈[p], j∈[n]} ,   where   ρ_i(x_j) := v_i(x_j) / Σ_{k=1}^n v_i(x_k)

(S = F when the entire ensemble consists of non-specialists, recovering our initial setup.) Therefore, the ensemble constraints (3) become (1/n)Sz ≥ b_S, where b_S gives the label correlations of each classifier restricted to the examples on which it predicts. Though this rescaling results in entries of S having different ranges, and magnitudes that can exceed 1, our results and proofs remain entirely intact.
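A small sketch of this reweighting, under the assumption that the abstention probabilities v_i(x_j) are available as a matrix (the helper name and calling convention here are illustrative, not from the paper):

    import numpy as np

    def specialist_matrix(H, V):
        """Compute S from the p x n prediction matrix H (entries h_i(x_j)) and
        the p x n abstention-probability matrix V (entries v_i(x_j)).
        V = all-ones recovers S = F, the non-specialist setup."""
        n = H.shape[1]
        row_mass = V.sum(axis=1, keepdims=True)   # sum_k v_i(x_k); assumed > 0
        rho = V / row_mass                        # rho_i(x_j)
        return n * rho * H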
Indeed, despite the title, this paper applies far more generally than to an ensemble of binary classifiers, because our proofs make no assumptions at all about the structure of F. Each predictor in the ensemble can be thought of as a feature; it has so far been convenient to think of it as binary, following the perspective of binary classifier aggregation, but it could as well be e.g. real-valued, and the features can have very different scales (as in S above). An unlabeled example x is simply a vector of features, so arbitrarily abstaining specialists are equivalent to "missing features," which this framework handles seamlessly due to the given unlabeled data. Our development applies generally to semi-supervised binary classification.
Acknowledgements

AB is grateful to Chris "Ceej" Tosh for feedback that made the manuscript clearer. This work was supported by the NSF (grant IIS-1162581).
Correlated-PCA: Principal Components' Analysis when Data and Noise are Correlated
Namrata Vaswani and Han Guo
Iowa State University, Ames, IA, USA
Email: {namrata,hanguo}@iastate.edu
Abstract
Given a matrix of observed data, Principal Components Analysis (PCA) computes a small number of orthogonal directions that contain most of its variability. Provably accurate solutions for PCA have been in use for a long time. However, to the best of our knowledge, all existing theoretical guarantees for it assume that the data and the corrupting noise are mutually independent, or at least uncorrelated. This is valid in practice often, but not always. In this paper, we study the PCA problem in the setting where the data and noise can be correlated. Such noise is often also referred to as "data-dependent noise." We obtain a correctness result for the standard eigenvalue decomposition (EVD) based solution to PCA under simple assumptions on the data-noise correlation. We also develop and analyze a generalization of EVD, cluster-EVD, that improves upon EVD in certain regimes.
1 Introduction
Principal Components Analysis (PCA) is among the most frequently used tools for dimension reduction. Given a matrix of data, it computes a small number of orthogonal directions that contain all (or most) of the variability of the data. The subspace spanned by these directions is the "principal subspace." To use PCA for dimension reduction, one projects the observed data onto this subspace. The standard solution to PCA is to compute the reduced singular value decomposition (SVD) of the data matrix, or, equivalently, to compute the reduced eigenvalue decomposition (EVD) of the empirical covariance matrix of the data. If all eigenvalues are nonzero, a threshold is used and all eigenvectors with eigenvalues above the threshold are retained. This solution, which we henceforth refer to as simple EVD, or just EVD, has been used for many decades and is well-studied in the literature, e.g., see [1] and references therein. However, to the best of our knowledge, all existing results for it assume that the true data and the corrupting noise in the observed data are independent, or, at least, uncorrelated. This is valid in practice often, but not always. Here, we study the PCA problem in the setting where the data and noise vectors may be correlated (correlated-PCA). Such noise is sometimes called "data-dependent" noise.

Contributions. (1) Under a boundedness assumption on the true data vectors, and some other assumptions, for a fixed desired subspace error level, we show that the sample complexity of simple-EVD for correlated-PCA scales as f² r² log n, where n is the data vector length, f is the condition number of the true data covariance matrix, and r is its rank. Here "sample complexity" refers to the number of samples needed to get a small enough subspace recovery error with high probability (whp). The dependence on f² is problematic for datasets with large condition numbers, especially in the high-dimensional setting where n is large. (2) To address this, we also develop and analyze a generalization of simple-EVD, called cluster-EVD. Under an eigenvalues' "clustering" assumption, cluster-EVD weakens the dependence on f.

To our best knowledge, the correlated-PCA problem has not been explicitly studied. We first encountered it while solving the dynamic robust PCA problem in the Recursive Projected Compressive Sensing (ReProCS) framework [2, 3, 4, 5]. The version of correlated-PCA studied here is motivated by these works. Some other somewhat related recent works include [6, 7], which study stochastic optimization based techniques for PCA, and [8, 9, 10, 11], which study online PCA.
Notation. We use the interval notation [a, b] to mean all of the integers between a and b, inclusive, and similarly for [a, b) etc. We use ‖·‖ to denote the l_2 norm of a vector or the induced l_2 norm of a matrix. For other l_p norms, we use ‖·‖_p. For a set T, I_T refers to an n × |T| matrix of columns of the identity matrix indexed by entries in T. For a matrix A, A_T := A I_T. A tall matrix with orthonormal columns is referred to as a basis matrix. For basis matrices P̂ and P, we quantify the subspace error (SE) between their range spaces using

    SE(P̂, P) := ‖(I − P̂ P̂′) P‖.    (1)
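For concreteness, the subspace error of (1) can be computed as follows (a minimal sketch; it avoids forming the n × n projector explicitly):

    import numpy as np

    def SE(P_hat, P):
        """Subspace error ||(I - P_hat P_hat') P|| of Eq. (1); P_hat and P are
        basis matrices (orthonormal columns)."""
        return np.linalg.norm(P - P_hat @ (P_hat.T @ P), 2)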
1.1 Correlated-PCA: Problem Definition

We are given a time sequence of data vectors, y_t, that satisfy

    y_t = ℓ_t + w_t,  with  w_t = M_t ℓ_t  and  ℓ_t = P a_t    (2)

where P is an n × r basis matrix with r ≪ n. Here ℓ_t is the true data vector that lies in a low-dimensional subspace of R^n, range(P); a_t is its projection into this r-dimensional subspace; and w_t is the data-dependent noise. We refer to M_t as the correlation / data-dependency matrix. The goal is to estimate range(P). We make the following assumptions on ℓ_t and M_t.

Assumption 1.1. The subspace projection coefficients, a_t, are zero mean, mutually independent and bounded random vectors (r.v.), with a diagonal covariance matrix Λ. Define λ⁻ := λ_min(Λ), λ⁺ := λ_max(Λ), and f := λ⁺/λ⁻. Since the a_t's are bounded, we can also define a finite constant η := max_{j=1,2,...,r} max_t (a_t)_j² / λ_j. Thus, (a_t)_j² ≤ η λ_j.
For most bounded distributions, η will be a small constant more than one, e.g., if the distribution of all entries of a_t is iid zero mean uniform, then η = 3. From Assumption 1.1, clearly, the ℓ_t's are also zero mean, bounded, and mutually independent r.v.'s with a rank-r covariance matrix Σ = P Λ P′. In the model, for simplicity, we assume Λ to be fixed. However, even if we replace Λ by Λ_t and define λ⁻ = min_t λ_min(Λ_t) and λ⁺ = max_t λ_max(Λ_t), all our results will still hold.
Assumption 1.2. Decompose M_t as M_t = M_{2,t} M_{1,t}. Assume that

    ‖M_{1,t} P‖ ≤ q < 1,   ‖M_{2,t}‖ ≤ 1,    (3)

and, for any sequence of positive semi-definite Hermitian matrices, A_t, the following holds

    for a β < α,   ‖ (1/α) Σ_{t=1}^α M_{2,t} A_t M_{2,t}′ ‖ ≤ (β/α) max_{t∈[1,α]} ‖A_t‖.    (4)

We will need the above to hold for all α ≥ α_0 and for all β ≤ c_0 α with a c_0 ≪ 1. We set α_0 and c_0 in Theorems 2.1 and 3.3; both will depend on q. Observe that, using (3), ‖w_t‖/‖ℓ_t‖ ≤ q, and so q is an upper bound on the noise-to-signal ratio.
To understand the assumption on M_{2,t}, notice that, if we allow β = α, then (4) always holds and is not an assumption. Let B denote the matrix on the LHS of (4). One example situation when (4) holds with a β ≪ α is if B is block-diagonal with blocks A_t. In this case, it holds with β = 1. In fact, it also holds with β = 1 if B is permutation-similar to a block diagonal matrix. The matrix B will be of this form if M_{2,t} = I_{T_t} with all the sets T_t being mutually disjoint. More generally, if B is permutation-similar to a block-diagonal matrix with blocks given by the summation of A_t's over at most β_0 < α time instants, then (4) holds with β = β_0. This will happen if M_{2,t} = I_{T_t} with T_t = T^[k] for at most β_0 time instants and if the sets T^[k] are mutually disjoint for different k. Finally, the T^[k]'s need not even be mutually disjoint. As long as they are such that B is a matrix with nonzero blocks on only the main diagonal and on a few diagonals near it, e.g., if it is block tri-diagonal, it can be shown that the above assumption holds. This example is generalized in Assumption 1.3 given below.
1.2 Examples of correlated-PCA problems

One key example of correlated-PCA is the PCA with missing data (PCA-missing) problem. Let T_t denote the set of missing entries at time t. Suppose we set the missing entries of y_t to zero. Then,

    y_t = ℓ_t − I_{T_t} I_{T_t}′ ℓ_t.    (5)
In this case M_{2,t} = I_{T_t} and M_{1,t} = −I_{T_t}′. Thus, q is an upper bound on ‖I_{T_t}′ P‖. Clearly, it will be small if the columns of P are dense vectors. For the reader familiar with low-rank matrix completion (MC), e.g., [12, 13], PCA-missing can also be solved by first solving the low-rank matrix completion problem to recover L, followed by PCA on the completed matrix. This would, of course, be much more expensive than directly solving PCA-missing and would need more assumptions.

Another example where correlated-PCA occurs is that of robust PCA (low-rank + sparse formulation) [14, 15, 16] when the sparse component's magnitude is correlated with ℓ_t. Let T_t denote the support set of w_t and let x_t be the |T_t|-length vector of its nonzero entries. If we assume linear dependency of x_t on ℓ_t, we can write out y_t as

    y_t = ℓ_t + I_{T_t} x_t = ℓ_t + I_{T_t} M_{s,t} ℓ_t.    (6)

Thus M_{2,t} = I_{T_t} and M_{1,t} = M_{s,t}, and so q is an upper bound on ‖M_{s,t} P‖. In the rest of the paper, we refer to this problem as "PCA with sparse data-dependent corruptions (PCA-SDDC)." One key application where it occurs is in foreground-background separation for videos consisting of a slowly changing background sequence (modeled as lying close to a low-dimensional subspace) and a sparse foreground image sequence consisting typically of one or more moving objects [14]. The PCA-SDDC problem is to estimate the background sequence's subspace. In this case, ℓ_t is the background image at time t, T_t is the support set of the foreground image at t, and x_t is the difference between foreground and background intensities on T_t. An alternative solution approach for PCA-SDDC is to use an RPCA solution such as principal components' pursuit (PCP) [14, 15] or Alternating-Minimization (Alt-Min-RPCA) [17] to first recover the matrix L, followed by PCA on L. However, as shown in Sec. 5, Table 1, this approach will be much slower; and it will work only if its required incoherence assumptions hold. For example, if the columns of P are sparse, it fails.

For both problems above, a solution for PCA will work only when the corrupting noise w_t is small compared to ℓ_t. A sufficient condition for this is that q is small.
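To make the PCA-SDDC model (6) concrete, here is a minimal data-generation sketch. The constants echo the Sec. 5 experiments, but the support sequence below (two alternating disjoint sets) is only a toy stand-in for supports satisfying Assumption 1.3:

    import numpy as np

    rng = np.random.default_rng(0)
    n, r, alpha, s, q = 500, 5, 300, 5, 0.01
    P = np.eye(n)[:, :r]                             # sparse basis: first r columns of I
    lam = np.array([100.0, 100.0, 100.0, 0.1, 0.1])  # diag(Lambda), so f = 1000

    Y = np.zeros((n, alpha))
    for t in range(alpha):
        a_t = rng.uniform(-1.0, 1.0, r) * np.sqrt(3.0 * lam)  # zero mean, Cov(a_t) = Lambda
        ell_t = P @ a_t
        T_t = np.arange(s) + s * (t % 2)             # toy support set T_t
        M_st = q * rng.standard_normal((s, n))       # entries of M_{s,t} iid N(0, q^2)
        Y[:, t] = ell_t
        Y[T_t, t] += M_st @ ell_t                    # w_t = I_{T_t} M_{s,t} ell_t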
A third example where correlated-PCA and its generalization, correlated-PCA with partial subspace knowledge, occurs is in the subspace update step of Recursive Projected Compressive Sensing (ReProCS) for dynamic robust PCA [3, 5].

In all three of the above applications, the assumptions on the data-noise correlation matrix given in Assumption 1.2 hold if there are enough changes of a certain type in the set of missing or corrupted entries, T_t. One example where this is true is the case of a 1D object of length s or less that remains static for at most β̃ frames at a time. When it moves, it moves by at least a certain fraction of s pixels. The following assumption is inspired by the object's support.

Assumption 1.3. Let l denote the number of times the set T_t changes in the interval [1, α] (or in any given interval of length α in case of dynamic robust PCA). So 0 ≤ l ≤ α − 1. Let t_0 := 1; let t_k, with t_k < t_{k+1}, denote the time instants in this interval at which T_t changes; and let T^[k] denote the distinct sets. In other words, T_t = T^[k] for t ∈ [t_k, t_{k+1}), for each k = 1, 2, . . . , l. Assume that the following hold with a β < α:

1. (t_{k+1} − t_k) ≤ β̃ and |T^[k]| ≤ s;
2. ρ² β̃ ≤ β, where ρ is the smallest positive integer so that, for any 0 ≤ k ≤ l, T^[k] and T^[k+ρ] are disjoint;
3. for any k_1, k_2 satisfying 0 ≤ k_1 < k_2 ≤ l, the sets (T^[k_1] \ T^[k_1+1]) and (T^[k_2] \ T^[k_2+1]) are disjoint.

An implicit assumption for condition 3 to hold is that Σ_{k=0}^l |T^[k] \ T^[k+1]| ≤ n. Observe that conditions 2 and 3 enforce an upper bound on the maximum support size s.
To connect Assumption 1.3 with the moving object example given above, condition 1 holds if the object's size is at most s and if it moves at least once every β̃ frames. Condition 2 holds if, every time it moves, it moves in the same direction and by at least s/ρ pixels. Condition 3 holds if, every time it moves, it moves in the same direction and by a bounded number of pixels, so that its total displacement over the interval stays within the frame (or, more generally, the motion is such that, if the object were to move at each frame, and if it started at the top of the frame, it does not reach the bottom of the frame in a time interval of length α).
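The moving-object description translates into code directly. The sketch below generates such a support sequence; the step size ceil(s/ρ) ensures T^[k] and T^[k+ρ] are disjoint as in condition 2, and all constants here are assumed for illustration, not prescribed by the paper:

    import numpy as np

    def moving_object_supports(n, alpha, s=5, beta_tilde=1, rho=2):
        """Supports T_t of a 1D object of size s that stays static for beta_tilde
        frames and then shifts by ceil(s / rho) pixels in the same direction."""
        step = -(-s // rho)                    # ceil(s / rho)
        supports, top = [], 0
        for t in range(alpha):
            if t > 0 and t % beta_tilde == 0:
                top += step
            assert top + s <= n, "object reached the frame bottom (condition 3 violated)"
            supports.append(np.arange(top, top + s))
        return supports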
The following lemma [4] shows that, with Assumption 1.3 on T_t, M_{2,t} = I_{T_t} satisfies the assumption on M_{2,t} given in Assumption 1.2. Its proof generalizes the discussion below Assumption 1.2.

Lemma 1.4. [[4], Lemmas 5.2 and 5.3] Assume that Assumption 1.3 holds. For any sequence of |T_t| × |T_t| symmetric positive-semi-definite matrices A_t,

    ‖ Σ_{t=1}^α I_{T_t} A_t I_{T_t}′ ‖ ≤ (ρ² β̃) max_{t∈[1,α]} ‖A_t‖ ≤ β max_{t∈[1,α]} ‖A_t‖.

Thus, if ‖I_{T_t}′ P‖ ≤ q < 1, then the PCA-missing problem satisfies Assumption 1.2. If ‖M_{s,t} P‖ ≤ q < 1, then the PCA-SDDC problem satisfies Assumption 1.2.
Assumption 1.3 is one model on T_t that ensures that, if M_{2,t} = I_{T_t}, the assumption on M_{2,t} given in Assumption 1.2 holds. For its many generalizations, see Supplementary Material, Sec. 7, or [4]. As explained in [18], data-dependent noise also often occurs in molecular biology applications when the noise affects the measurement levels through the very same process as the interesting signal.
2 Simple EVD

Simple EVD computes the top eigenvectors of the empirical covariance matrix, (1/α) Σ_{t=1}^α y_t y_t′, of the observed data. The following can be shown.

Theorem 2.1 (simple-EVD result). Let P̂ denote the matrix containing all the eigenvectors of (1/α) Σ_{t=1}^α y_t y_t′ with eigenvalues above a threshold, λ_thresh, as its columns. Pick a ζ so that rζ ≤ 0.01. Suppose that the y_t's satisfy (2) and the following hold.

1. Assumption 1.1 on ℓ_t holds. Define

    α_0 := C η² · (r² (11 log n) / (rζ)²) · max(f, qf, q²f)²,   C := 32 / 0.01².

2. Assumption 1.2 on M_t holds for any α ≥ α_0 and for any β satisfying

    β/α ≤ min( (1 − rζ)² (rζ)² / (4.1 (qf)²), (rζ) / (q² f) ).

3. Set algorithm parameters λ_thresh = 0.95 λ⁻ and α ≥ α_0.

Then, with probability at least 1 − 6n⁻¹⁰, SE(P̂, P) ≤ rζ.
Proof: The proof involves a careful application of the sin θ theorem [19] to bound the subspace error, followed by using matrix Hoeffding [20] to obtain high probability bounds on each of the terms in the sin θ bound. It is given in the Supplementary Material, Section 8.
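In code, the estimator analyzed by Theorem 2.1 is just a thresholded eigendecomposition (a sketch; Y holds y_1, . . . , y_α as columns):

    import numpy as np

    def simple_evd(Y, lam_thresh):
        """Return the eigenvectors of (1/alpha) * Y Y' whose eigenvalues exceed
        lam_thresh (Theorem 2.1 sets lam_thresh = 0.95 * lambda_minus)."""
        alpha = Y.shape[1]
        vals, vecs = np.linalg.eigh(Y @ Y.T / alpha)   # eigenvalues in ascending order
        return vecs[:, vals > lam_thresh]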
Consider the lower bound on α. We refer to this as the "sample complexity." Since q < 1, and η is a small constant (e.g., for the uniform distribution, η = 3), for a fixed error level rζ, α_0 simplifies to c f² r² log n. Notice that the dependence on n is logarithmic. It is possible to show that the sample complexity scales as log n because we assume that the ℓ_t's are bounded r.v.'s. As a result we can apply the matrix Hoeffding inequality [20] to bound the perturbation between the observed data's empirical covariance matrix and that of the true data. The bounded r.v. assumption is actually a more practical one than the usual Gaussian assumption, since most sources of data have finite power.

By replacing matrix Hoeffding by Theorem 5.39 of [21] in places where one can apply a concentration of measure result to Σ_t a_t a_t′/α (which is an r × r matrix), and by matrix Bernstein [20] elsewhere, it should be possible to further reduce the sample complexity to c max((qf)² r log n, f² (r + log n)). It should also be possible to remove the boundedness assumption and replace it by a Gaussian (or a sub-Gaussian) assumption, but that would increase the sample complexity to c (qf)² n.

Consider the upper bound on β/α. Clearly, the smaller term is the first one. This depends on 1/(qf)². Thus, when f is large and q is not small enough, the required bound may be impractically small. As will be evident from the proof (see Remark 8.3 in Supplementary Material), we get this bound because w_t is correlated with ℓ_t and this results in E[ℓ_t w_t′] ≠ 0. If w_t and ℓ_t were uncorrelated, qf would get replaced by λ_max(Cov(w_t))/λ⁻ in the upper bound on β/α as well as in the sample complexity.

Application to PCA-missing and PCA-SDDC. By Lemma 1.4, the following is immediate.
Figure 1: Eigenvalue clusters of the three low-rankified videos (plots of the log of the eigenvalues for the curtain, lake, and waving-tree videos).
Corollary 2.2. Consider the PCA-missing model, (5), and assume that max_t ‖I_{T_t}′ P‖ ≤ q < 1; or consider the PCA-SDDC model, (6), and assume that max_t ‖M_{s,t} P‖ ≤ q < 1. Assume that everything in Theorem 2.1 holds except that we replace Assumption 1.2 by Assumption 1.3. Then, with probability at least 1 − 6n⁻¹⁰, SE(P̂, P) ≤ rζ.
3 Cluster-EVD

To try to relax the strong dependence on f² of the result above, we develop a generalization of simple-EVD that we call cluster-EVD. This requires the clustering assumption.
3.1 Clustering assumption

To state the assumption, define the following partition of the index set {1, 2, . . . , r} based on the eigenvalues of Λ. Let λ_i denote its i-th largest eigenvalue.

Definition 3.1 (g-condition-number partition of {1, 2, . . . , r}). Define G_1 = {1, 2, . . . , r_1} where r_1 is the index for which λ_1/λ_{r_1} ≤ g and λ_1/λ_{r_1+1} > g. In words, to define G_1, start with the index of the first (largest) eigenvalue and keep adding indices of the smaller eigenvalues to the set until the ratio of the maximum to the minimum eigenvalue first exceeds g.

For each k > 1, define G_k = {r̃ + 1, r̃ + 2, . . . , r̃ + r_k} where r̃ = Σ_{i=1}^{k−1} r_i and r_k is the index for which λ_{r̃+1}/λ_{r̃+r_k} ≤ g and λ_{r̃+1}/λ_{r̃+r_k+1} > g. In words, to define G_k, start with the index of the (r̃ + 1)-th eigenvalue, and repeat the above procedure.

Stop when λ_{r̃+r_k+1} = 0, i.e., when there are no more nonzero eigenvalues. Define ν = k as the number of sets in the partition. Thus {G_1, G_2, . . . , G_ν} is the desired partition.
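Computationally, Definition 3.1 is a greedy scan over the sorted eigenvalues. A minimal sketch (the tolerance argument used to drop zero eigenvalues is an implementation detail we are assuming):

    import numpy as np

    def g_partition(lam, g, tol=0.0):
        """Greedy g-condition-number partition of the indices of the eigenvalues
        lam (assumed sorted in decreasing order)."""
        lam = np.asarray(lam, dtype=float)
        clusters, start = [], 0
        while start < len(lam) and lam[start] > tol:
            end = start
            while (end + 1 < len(lam) and lam[end + 1] > tol
                   and lam[start] / lam[end + 1] <= g):
                end += 1
            clusters.append(list(range(start, end + 1)))
            start = end + 1
        return clusters   # [G_1, ..., G_nu] as index lists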
Define G_0 = [.], G_k := (P)_{G_k}, λ_k⁺ := max_{i∈G_k} λ_i(Λ), λ_k⁻ := min_{i∈G_k} λ_i(Λ), and

    ϑ := max_{k=1,2,...,ν−1} λ_{k+1}⁺ / λ_k⁻.

ϑ quantifies the "distance" between consecutive sets of the above partition. Moreover, by definition, λ_k⁺/λ_k⁻ ≤ g. Clearly, g ≥ 1 and ϑ ≤ 1 always. We assume the following.

Assumption 3.2. For a 1 ≤ g⁺ < f and a 0 ≤ ϑ⁺ < 1, assume that there exists a g satisfying 1 ≤ g ≤ g⁺ for which we can define a g-condition-number partition of {1, 2, . . . , r} that satisfies ϑ ≤ ϑ⁺. The number of sets in the partition is ν. When g⁺ and ϑ⁺ are small, we say that the eigenvalues are "well-clustered" with "clusters" G_k.
This assumption can be understood as a generalization of the eigen-gap condition needed by the block power method, which is a fast algorithm for obtaining the k top eigenvectors of a matrix [22]. We expect it to hold for data that has variability across different scales. The large-scale variations would result in the first (largest eigenvalues') cluster and the smaller-scale variations would form the later clusters. This would be true, for example, for video "textures" such as moving waters or waving trees in a forest. We tested this assumption on some such videos. We describe our conclusions here for three videos: "lake" (video of moving lake waters), "waving-tree" (video consisting of waving trees), and "curtain" (video of window curtains moving due to the wind). For each video, we first made it low-rank by keeping the eigenvectors corresponding to the smallest number of eigenvalues that contain at least 90% of the total energy and projecting the video onto this subspace. For the low-rankified lake video, f = 74 and Assumption 3.2 holds with ν = 6 clusters, g⁺ = 2.6 and ϑ⁺ = 0.7. For the waving-tree video, f = 180 and Assumption 3.2 holds with ν = 6, g⁺ = 9.4 and ϑ⁺ = 0.72. For the curtain video, f = 107 and the assumption holds with ν = 3, g⁺ = 16.1 and ϑ⁺ = 0.5. We show the clusters of eigenvalues in Fig. 1.
Algorithm 1 Cluster-EVD

Parameters: α, ĝ, λ̂_thresh.
Set Ĝ_0 ← [.]. Set the flag Stop ← 0. Set k ← 1.
Repeat
  1. Let Ĝ_{det,k} := [Ĝ_0, Ĝ_1, . . . , Ĝ_{k−1}] and let Φ_k := (I − Ĝ_{det,k} Ĝ_{det,k}′). Notice that Φ_1 = I. Compute

         D̂_k = Φ_k ( (1/α) Σ_{t=(k−1)α+1}^{kα} y_t y_t′ ) Φ_k.

  2. Find the k-th cluster, Ĝ_k: let λ̂_i = λ_i(D̂_k);
     (a) find the index r̂_k for which λ̂_1/λ̂_{r̂_k} ≤ ĝ and either λ̂_1/λ̂_{r̂_k+1} > ĝ or λ̂_{r̂_k+1} < λ̂_thresh;
     (b) set Ĝ_k = {r̃ + 1, r̃ + 2, . . . , r̃ + r̂_k} where r̃ := Σ_{j=1}^{k−1} r̂_j;
     (c) if λ̂_{r̂_k+1} < λ̂_thresh, update the flag Stop ← 1.
  3. Compute Ĝ_k ← eigenvectors(D̂_k, r̂_k); increment k.
Until Stop == 1.
Set ν̂ ← k. Output P̂ ← [Ĝ_1, . . . , Ĝ_ν̂].

eigenvectors(M, r) returns a basis matrix for the span of the top r eigenvectors of M.
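A direct translation of Algorithm 1 into code is given below (a sketch; it assumes Y has at least ν̂α columns, and it resolves ties and edge cases in one simple way among several possible):

    import numpy as np

    def cluster_evd(Y, alpha, g_hat, lam_thresh):
        n = Y.shape[0]
        G_blocks, k, stop = [], 0, False
        while not stop and (k + 1) * alpha <= Y.shape[1]:
            G_det = np.hstack(G_blocks) if G_blocks else np.zeros((n, 0))
            Phi = np.eye(n) - G_det @ G_det.T          # project away detected clusters
            Yk = Y[:, k * alpha:(k + 1) * alpha]
            D = Phi @ (Yk @ Yk.T / alpha) @ Phi
            vals, vecs = np.linalg.eigh(D)
            vals, vecs = vals[::-1], vecs[:, ::-1]     # descending eigenvalues
            if vals[0] < lam_thresh:                   # nothing left above the threshold
                break
            r_k = 1                                    # grow cluster while ratio <= g_hat
            while (r_k < n and vals[r_k] >= lam_thresh
                   and vals[0] / vals[r_k] <= g_hat):
                r_k += 1
            if r_k >= n or vals[r_k] < lam_thresh:
                stop = True
            G_blocks.append(vecs[:, :r_k])
            k += 1
        return np.hstack(G_blocks)                     # P_hat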
3.2 Cluster-EVD algorithm
The cluster-EVD approach is summarized in Algorithm 1. Its main idea is as follows. We start by computing the empirical covariance matrix of the first set of α observed data points, D̂_1 := (1/α) Σ_{t=1}^α y_t y_t′. Let λ̂_i denote its i-th largest eigenvalue. To estimate the first cluster, Ĝ_1, we start with the index of the first (largest) eigenvalue and keep adding indices of the smaller eigenvalues to it until the ratio of the maximum to the minimum eigenvalue exceeds ĝ or until the minimum eigenvalue goes below a "zero threshold," λ̂_thresh. Then, we estimate the first cluster's subspace, range(G_1), by computing the top r̂_1 eigenvectors of D̂_1. To get the second cluster and its subspace, we project the next set of α y_t's orthogonal to Ĝ_1, followed by repeating the above procedure. This is repeated for each k > 1. The algorithm stops when λ̂_{r̂_k+1} < λ̂_thresh.
Algorithm 1 is related to, but significantly different from, the ones introduced in [3, 5] for the subspace deletion step of ReProCS. The one introduced in [3] assumed that the clusters were known to the algorithm (which is unrealistic). The one studied in [5] has an automatic cluster estimation approach, but one that needs a larger lower bound on α compared to what Algorithm 1 needs.
3.3 Main result

We give the performance guarantee for Algorithm 1 here. Its parameters are set as follows. We set ĝ to a value that is a little larger than g. This is needed to allow for the fact that λ̂_i is not equal to the i-th eigenvalue of Λ but is within a small margin of it. For the same reason, we need to also use a nonzero "zeroing" threshold, λ̂_thresh, that is larger than zero but smaller than λ⁻. We set α large enough to ensure that SE(P̂, P) ≤ rζ holds with a high enough probability.

Theorem 3.3 (cluster-EVD result). Consider Algorithm 1. Pick a ζ so that r²ζ ≤ 0.0001 and r²ζ f ≤ 0.01. Suppose that the y_t's satisfy (2) and the following hold.

1. Assumption 1.1 and Assumption 3.2 on ℓ_t hold with ϑ⁺ satisfying

    ϑ⁺ ≤ min( 1 − rζ − 0.08, 0.25, (g⁺ − 0.0001)/(1.01 g⁺ + 0.0001) − 0.0001 ).

Define

    α_0 := C η² · (r² (11 log n + log ν) / (rζ)²) · max(g⁺, qg⁺, q²f, q(rζ)f, (rζ)²f, q √(f g⁺), (rζ) √(f g⁺))²,   C := (32 · 16) / 0.01².

2. Assumption 1.2 on M_t holds with α ≥ α_0 and with β satisfying

    β/α ≤ min( (1 − rζ − ϑ⁺)² (r_k ζ)² / (4.1 (q g⁺)²), (r_k ζ) / (q² f) ).

3. Set algorithm parameters ĝ = 1.01 g⁺ + 0.0001, λ̂_thresh = 0.95 λ⁻ and α ≥ α_0.

Then, with probability at least 1 − 12n⁻¹⁰, SE(P̂, P) ≤ rζ.
Proof: The proof is given in Section 9 in the Supplementary Material.

We can also get corollaries for PCA-missing and PCA-SDDC for cluster-EVD. We have given one specific value for ĝ and λ̂_thresh in Theorem 3.3 for simplicity. One can, in fact, set ĝ to be anything that satisfies (12) given in the Supplementary Material, and one can set λ̂_thresh to be anything satisfying 5rζ λ⁺ ≤ λ̂_thresh ≤ 0.95 λ⁻. Also, it should be possible to reduce the sample complexity of cluster-EVD to c max(q² (g⁺)² r log n, (g⁺)² (r + log n)) using the approach explained in Sec. 2.
Discussion
Comparing simple-EVD and cluster-EVD. Consider the lower bounds
? on ?. In the cluster-EVD
(c-EVD) result, Theorem 3.3, if q is small enough (e.g., if q ? 1/ f ), and if (r2 ?)f ? 0.01,
it is clear that the maximum in the max(., ., ., .) expression is achieved by (g + )2 . Thus, in this
2
n+log ?) 2
regime, c-EVD needs ? ? C r (11 log
g and its sample complexity is ??. In the EVD result
(r?)2
2
11 log n 2
f .
(Theorem 2.1), g + gets replaced by f and ? by 1, and so, its sample complexity, ? ? C r (r?)
2
+
In situations where the condition number f is very large but g is much smaller and ? is small (the
clustering assumption holds well), the sample complexity of c-EVD will be much smaller than that
of simple-EVD. However, notice that, the lower bound on ? for simple-EVD holds for any q < 1
and for any ? with r? < ?
0.01 while the c-EVD lower bound given above holds only when q is small
enough, e.g., q = O(1/ f ), and ? is small enough, e.g., r? = O(1/f ). This tighter bound on ?
is needed because the error of the k-th step of c-EVD depends on the errors of the previous steps
times f . Secondly, the c-EVD result also needs ?+ and ? to be small (clustering assumption holds
well), whereas, for simple-EVD, by definition, ?+ = 0 and ? = 1. Another thing to note is that the
constants in both lower bounds are very large with the c-EVD one being even larger.
To compare the upper bounds on ?, assume that the same ? is used by both, i.e., ? =
max(?0 (EVD), ?0 (c-EVD)). As long as rk is large enough, ?+ is small enough, and g is small
enough, the upper bound on ? needed by the c-EVD result is significantly looser. For example, if
(r?)2
?+ = 0.2, ? = 2, rk = r/2, then c-EVD needs ? ? (0.5 ? 0.79 ? 0.5)2 4.1q
2 g 2 ? while simple-EVD
2
(r?)
needs ? ? (0.5 ? 0.99)2 4.1q
2 f 2 ?. If g = 3 but f = 100, clearly the c-EVD bound is looser.
Comparison with other results for PCA-SDDC and PCA-missing. To our knowledge, there is no other result for correlated-PCA. Hence, we provide comparisons of the corollaries given above for the PCA-missing and PCA-SDDC special cases with works that also study these or related problems. An alternative solution for either PCA-missing or PCA-SDDC is to first recover the entire matrix L and then compute its subspace via SVD on the estimated L. For the PCA-missing problem, this can be done by using any of the low-rank matrix completion techniques, e.g., nuclear norm minimization (NNM) [13] or alternating minimization (Alt-Min-MC) [23]. Similarly, for PCA-SDDC, this can be done by solving any of the recent provably correct RPCA techniques such as principal components' pursuit (PCP) [14, 15, 16] or alternating minimization (Alt-Min-RPCA) [17].

However, as explained earlier, doing the above has two main disadvantages. The first is that it is much slower (see Sec. 5). The difference in speed is most dramatic when solving the matrix-sized convex programs such as NNM or PCP, but even the Alt-Min methods are slower. If we use the time complexity from [17], then finding the span of the top k singular vectors of an n × m matrix takes O(nmk) time. Thus, if ν is a constant, both simple-EVD and c-EVD need O(nαr) time, whereas Alt-Min-RPCA needs O(nαr²) time per iteration [17]. The second disadvantage is that the above methods for MC or RPCA need more assumptions to provably correctly recover L. All the above methods need an incoherence assumption on both the left singular vectors, P, and the right singular vectors, V, of L. Of course, it is possible that, if one studies these methods with the goal of only recovering the column space of L correctly, the incoherence assumption on the right singular vectors is not needed. From simulation experiments (see Sec. 5), the incoherence of the left singular vectors is definitely needed. On the other hand, for the PCA-SDDC problem, simple-EVD and c-EVD do not even need the incoherence assumption on P.

The disadvantage of both EVD and c-EVD, or in fact of any solution for the PCA problem, is that they work only when q is small enough (the corrupting noise is small compared to ℓ_t).
Table 1: Comparison of SE(P̂, P) and execution time (in seconds). A-M-RPCA: Alt-Min-RPCA. Expt 1: simulated data, Expt 2: lake video with simulated foreground.

                 Mean Subspace Error (SE)          |        Average Execution Time
          c-EVD    EVD     PCP     A-M-RPCA        | c-EVD    EVD     PCP     A-M-RPCA
Expt 1    0.0908   0.0911  1.0000  1.0000          | 0.0549   0.0255  0.2361  0.0810
Expt 2    0.3626   0.3821  0.4970  0.4846          | 0.0613   0.0223  1.6784  5.5144
5 Numerical Experiments

We use the PCA-SDDC problem as our case study example. We compare EVD and cluster-EVD (c-EVD) with PCP [15], solved using [24], and with Alt-Min-RPCA [17] (implemented using code from the authors' webpage). For both PCP and Alt-Min-RPCA, P̂ is recovered as the top r eigenvectors of the estimated L. To show the advantage of EVD or c-EVD, we let ℓ_t = P a_t with the columns of P being sparse. These were chosen as the first r = 5 columns of the identity matrix. We generate the a_t's iid uniformly with zero mean and covariance matrix Λ = diag(100, 100, 100, 0.1, 0.1). Thus the condition number f = 1000. The clustering assumption holds with ν = 2, g⁺ = 1 and ϑ⁺ = 0.001. The noise w_t is generated as w_t = I_{T_t} M_{s,t} ℓ_t with T_t generated to satisfy Assumption 1.3 with s = 5, ρ = 2, and β̃ = 1; the entries of M_{s,t} are iid N(0, q²) with q = 0.01. We used n = 500. EVD and c-EVD (Algorithm 1) were implemented with α = 300, λ̂_thresh = 0.095, ĝ = 3. 10000-time Monte Carlo averaged values of SE(P̂, P) and execution time are shown in the first row of Table 1. Since the columns of P are sparse, both PCP and Alt-Min-RPCA fail. Both have average SE close to one, whereas the average SE of c-EVD and EVD is 0.0908 and 0.0911 respectively. Also, both EVD and c-EVD are much faster than the other two. We also did an experiment with the settings of this experiment, but with P dense. In this case, EVD and c-EVD errors were similar, but PCP and Alt-Min-RPCA errors were less than 10⁻⁵.

For our second experiment, we used images of a low-rankified real video sequence as the ℓ_t's. We chose the escalator sequence from http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html since the video changes are only in the region where the escalator moves (and hence can be modeled as being sparse). We made it exactly low-rank by retaining its top 5 eigenvectors and projecting onto their subspace. This resulted in a data matrix L of size n × 2α with n = 20800 and α = 50, of rank r = 5. We overlaid a simulated moving foreground block on it. The intensity of the moving block was controlled to ensure that q is small. We used α = 50 for EVD and c-EVD. We let P be the eigenvectors of the low-rankified video with nonzero eigenvalues and computed SE(P̂, P). The errors and execution time are displayed in the second row of Table 1. Since n is very large, the difference in speed is most apparent in this case.

Thus c-EVD outperforms PCP and Alt-Min-RPCA when the columns of P are sparse. It also outperforms EVD, but the advantage in mean error is not as much as our theorems predict. One reason is that the constant in the required lower bounds on α is very large. It is hard to pick an α that is this large and still only O(log n) unless n is very large. Secondly, both guarantees are only sufficient conditions.
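For reference, a compact end-to-end run in the style of Expt 1 can be assembled from the sketches given earlier in this document (simple_evd, cluster_evd, SE); all constants below are illustrative, and the alternating support pattern is only a toy stand-in for Assumption 1.3:

    import numpy as np

    rng = np.random.default_rng(1)
    n, r, alpha, s, q = 500, 5, 300, 5, 0.01
    P = np.eye(n)[:, :r]
    lam = np.array([100.0, 100.0, 100.0, 0.1, 0.1])
    Y = np.zeros((n, 2 * alpha))
    for t in range(2 * alpha):
        a_t = rng.uniform(-1.0, 1.0, r) * np.sqrt(3.0 * lam)
        ell_t = P @ a_t
        T_t = np.arange(s) + s * (t % 2)
        Y[:, t] = ell_t
        Y[T_t, t] += (q * rng.standard_normal((s, n))) @ ell_t

    print("EVD   SE:", SE(simple_evd(Y[:, :alpha], lam_thresh=0.095), P))
    print("c-EVD SE:", SE(cluster_evd(Y, alpha, g_hat=3.0, lam_thresh=0.095), P))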
6 Conclusions and Future Work

We studied the problem of PCA in noise that is correlated with the data (data-dependent noise). We obtained sample complexity bounds for the most commonly used PCA solution, simple EVD. We also developed and analyzed a generalization of EVD, called cluster-EVD, that has lower sample complexity under extra assumptions. We provided a detailed comparison of our results with those for other approaches to solving its example applications: PCA with missing data and PCA with sparse data-dependent corruptions.

We used matrix Hoeffding [20] to obtain our results. As explained in Sec. 2, we can significantly improve the sample complexity bounds if this is replaced by [21, Theorem 5.39] and matrix Bernstein at appropriate places. We have obtained such a result in ongoing work [25]. Moreover, as done in [5] (for ReProCS), the mutual independence of the ℓ_t's can be easily replaced by the more practical assumption of the ℓ_t's following an autoregressive model, with almost no change to our assumptions. Thirdly, by generalizing the proof techniques developed here, we can also study the problem of correlated-PCA with partial subspace knowledge. The solution to the latter problem helps to greatly simplify the proof of correctness of ReProCS for online dynamic RPCA. The boundedness assumption on the ℓ_t's can be replaced by a Gaussian or a well-behaved sub-Gaussian assumption, but this will increase the sample complexity to O(n). Finally, an open-ended question is how to relax Assumption 1.2 on M_t and still get results similar to Theorem 2.1 or Theorem 3.3.
References
[1] B. Nadler, ?Finite sample approximation results for principal component analysis: A matrix
perturbation approach,? The Annals of Statistics, vol. 36, no. 6, 2008.
[2] C. Qiu and N. Vaswani, ?Real-time robust principal components? pursuit,? in Allerton Conf.
on Communication, Control, and Computing, 2010.
[3] C. Qiu, N. Vaswani, B. Lois, and L. Hogben, ?Recursive robust pca or recursive sparse recovery
in large but structured noise,? IEEE Trans. Info. Th., pp. 5007?5039, August 2014.
[4] B. Lois and N. Vaswani, ?Online matrix completion and online robust pca,? in IEEE Intl.
Symp. Info. Th. (ISIT), 2015.
[5] J. Zhan, B. Lois, H. Guo, and N. Vaswani, ?Online (and Offline) Robust PCA: Novel Algorithms and Performance Guarantees,? in Intnl. Conf. Artif. Intell. and Stat. (AISTATS), 2016.
[6] R. Arora, A. Cotter, and N. Srebro, ?Stochastic optimization of pca with capped msg,? in Adv.
Neural Info. Proc. Sys. (NIPS), 2013, pp. 1815?1823.
[7] O. Shamir, ?A stochastic pca and svd algorithm with an exponential convergence rate,?
arXiv:1409.2848, 2014.
[8] C. Boutsidis, D. Garber, Z. Karnin, and E. Liberty, ?Online principal components analysis,? in
Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2015, pp. 887?901.
[9] A. Balsubramani, S. Dasgupta, and Y. Freund, ?The fast convergence of incremental pca,? in
Adv. Neural Info. Proc. Sys. (NIPS), 2013, pp. 3174?3182.
[10] Z. Karnin and E. Liberty, ?Online pca with spectral bounds,? in Proce. Conference on Computational Learning Theory (COLT), 2015, pp. 505?509.
[11] I. Mitliagkas, C. Caramanis, and P. Jain, ?Memory limited, streaming pca,? in Adv. Neural
Info. Proc. Sys. (NIPS), 2013, pp. 2886?2894.
[12] M. Fazel, ?Matrix rank minimization with applications,? PhD thesis, Stanford Univ, 2002.
[13] E. J. Candes and B. Recht, ?Exact matrix completion via convex optimization,? Found. of
Comput. Math, , no. 9, pp. 717?772, 2008.
[14] E. J. Cand`es, X. Li, Y. Ma, and J. Wright, ?Robust principal component analysis?,? Journal of
ACM, vol. 58, no. 3, 2011.
[15] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, ?Rank-sparsity incoherence
for matrix decomposition,? SIAM Journal on Optimization, vol. 21, 2011.
[16] D. Hsu, S.M. Kakade, and T. Zhang, ?Robust matrix decomposition with sparse corruptions,?
IEEE Trans. Info. Th., Nov. 2011.
[17] P. Netrapalli, U N Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, ?Non-convex robust
pca,? in Neural Info. Proc. Sys. (NIPS), 2014.
[18] Jussi Gillberg, Pekka Marttinen, Matti Pirinen, Antti J Kangas, Pasi Soininen, Mehreen Ali,
Aki S Havulinna, Marjo-Riitta Marjo-Riitta J?arvelin, Mika Ala-Korpela, and Samuel Kaski,
?Multiple output regression with latent noise,? Journal of Machine Learning Research, 2016.
[19] C. Davis and W. M. Kahan, ?The rotation of eigenvectors by a perturbation. iii,? SIAM J.
Numer. Anal., vol. 7, pp. 1?46, Mar. 1970.
[20] J. A. Tropp, ?User-friendly tail bounds for sums of random matrices,? Found. Comput. Math.,
vol. 12, no. 4, 2012.
[21] R. Vershynin, ?Introduction to the non-asymptotic analysis of random matrices,? Compressed
sensing, pp. 210?268, 2012.
[22] G. H. Golub and H. A. Van der Vorst, ?Eigenvalue computation in the 20th century,? Journal
of Computational and Applied Mathematics, vol. 123, no. 1, pp. 35?65, 2000.
[23] P. Netrapalli, P. Jain, and S. Sanghavi, ?Low-rank matrix completion using alternating minimization,? in Symposium on Theory of Computing (STOC), 2013.
[24] Z. Lin, M. Chen, and Y. Ma, ?Alternating direction algorithms for l1 problems in compressive
sensing,? Tech. Rep., University of Illinois at Urbana-Champaign, November 2009.
[25] N. Vaswani, ?PCA in Data-Dependent Noise (Correlated-PCA): Improved Finite Sample Guarantees,? to be posted on arXiV, 2017.
An equivalence between high dimensional Bayes optimal inference and M-estimation
Madhu Advani
Surya Ganguli
Department of Applied Physics, Stanford University
[email protected] and [email protected]
Abstract
When recovering an unknown signal from noisy measurements, the computational
difficulty of performing optimal Bayesian MMSE (minimum mean squared error)
inference often necessitates the use of maximum a posteriori (MAP) inference,
a special case of regularized M-estimation, as a surrogate. However, MAP is
suboptimal in high dimensions, when the number of unknown signal components
is similar to the number of measurements. In this work we demonstrate, when
the signal distribution and the likelihood function associated with the noise are
both log-concave, that optimal MMSE performance is asymptotically achievable
via another M-estimation procedure. This procedure involves minimizing convex
loss and regularizer functions that are nonlinearly smoothed versions of the widely
applied MAP optimization problem. Our findings provide a new heuristic derivation
and interpretation for recent optimal M-estimators found in the setting of linear
measurements and additive noise, and further extend these results to nonlinear
measurements with non-additive noise. We numerically demonstrate superior
performance of our optimal M-estimators relative to MAP. Overall, at the heart
of our work is the revelation of a remarkable equivalence between two seemingly
very different computational problems: namely that of high dimensional Bayesian
integration underlying MMSE inference, and high dimensional convex optimization
underlying M-estimation. In essence we show that the former difficult integral may
be computed by solving the latter, simpler optimization problem.
1 Introduction
Modern technological advances now enable scientists to simultaneously record hundreds or thousands
of variables in fields ranging from neuroscience and genomics to health care and economics. For
example, in neuroscience, we can simultaneously record P = O(1000) neurons in behaving animals.
However, the number of measurements N we can make of these P dimensional neural activity
patterns can be limited in any given experimental condition due to constraints on recording time.
Thus a critical parameter is the measurement density ? = N
P . Classical statistics focuses on the
limit of few variables and many measurements, so P is finite, N is large, and ? ? ?. Here, we
instead consider the modern high dimensional limit where the measurement density ? remains finite
as N, P ? ?. In this important limit, we ask what is the optimal way to recover signal from noise?
More precisely, we wish to recover an unknown signal vector s⁰ ∈ ℝ^P given N noisy measurements

y_\mu = r(x_\mu \cdot s^0, \epsilon_\mu), \quad \text{where } x_\mu \in \mathbb{R}^P \text{ and } y_\mu \in \mathbb{R}, \text{ for } \mu = 1, \dots, N.   (1)
Here, x_μ and y_μ are input-output pairs for measurement μ, r is a measurement nonlinearity, and
ε_μ is a noise realization. For example, in a brain-machine interface, x_μ could be a neural activity
pattern, y_μ a behavioral covariate, and s⁰ the unknown regression coefficients of a decoder relating
neural activity to behavior. Alternatively, in sensory neuroscience, x_μ could be an external stimulus,
y_μ a single neuron's response to that stimulus, and s⁰ the unknown receptive field relating stimulus
to neural response. We assume the noise ε_μ is independent and identically distributed (iid) across
measurements, implying the outputs y_μ are drawn iid from a noise distribution P_{y|z}(y_μ|z_μ), where
z_μ = x_μ · s⁰. Similarly, we assume the signal components s⁰_i are drawn iid from a prior signal
distribution P_s(s⁰). We denote its variance below by σ_s². Finally, we denote by X ∈ ℝ^{N×P} the
input or measurement matrix, whose μth row is x_μ, and by y ∈ ℝ^N the measurement output vector
whose μth component is y_μ. In this paper, we will focus on the case of dense iid random Gaussian
measurements, normalized so that ⟨x_μ · x_ν⟩ = α δ_{μ,ν}. In the case of systems identification in
sensory neuroscience, this choice would correspond to an oft-used white noise stimulus at contrast α.
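As a concrete illustration of the generative model (1), the following sketch (ours, not the authors'; the Laplacian prior and logistic channel mirror the choices of Fig. 1 below, and all function names are illustrative) samples dense iid Gaussian measurements with the normalization above:

import numpy as np

def sample_data(N, P, rng):
    alpha = N / P
    # i.i.d. entries of variance alpha/P give <x_mu . x_mu> = alpha, as above
    X = rng.normal(0.0, np.sqrt(alpha / P), size=(N, P))
    s0 = rng.laplace(0.0, 1.0, size=P)          # Laplacian prior P_s(s) = (1/2) e^{-|s|}
    z = X @ s0                                  # true linear measurements z_mu = x_mu . s0
    p = 1.0 / (1.0 + np.exp(-z))                # logistic channel P(y = 1 | z)
    y = (rng.random(N) < p).astype(float)
    return X, y, s0

rng = np.random.default_rng(0)
X, y, s0 = sample_data(N=500, P=250, rng=rng)   # measurement density alpha = 2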
Now given measurement data (X, y), as well as knowledge of the nonlinearity r(·) and the signal
P_s and noise P_{y|z} distributions, what is the best way to infer an estimate ŝ of the unknown signal
s⁰? We characterize the performance of an estimate ŝ by its mean squared error (MSE), ‖ŝ − s⁰‖₂²,
averaged over noise realizations and measurements. The best minimal MSE (MMSE) estimator is
given by optimal Bayesian integration to compute the posterior mean:
\hat{s}^{MMSE} = \int s\, P(s|X, y)\, ds.   (2)
Unfortunately, this integral is generally intractable in high dimensions, at large P ; both numerical
integration and Monte Carlo methods for estimating the integral require computational time growing
exponentially in P for high accuracy. Consequently, an often used surrogate for MMSE inference
is maximum a posteriori (MAP) inference, which computes the mode rather than the mean of the
posterior distribution. Thus MAP relies on optimization rather than integration:
\hat{s}^{MAP} = \arg\max_s P(s|X, y) = \arg\min_s\, [-\log P(s|X, y)].   (3)
Assuming inputs X are independent of the unknown signal s⁰, the above expression becomes

\hat{s}^{MAP} = \arg\min_s \Big[\, \sum_{\mu=1}^{N} -\log P_{y|z}(y_\mu \,|\, x_\mu \cdot s) \;+\; \sum_{i=1}^{P} -\log P_s(s_i) \Big].   (4)
A related algorithm is maximum likelihood (ML), which seeks to maximize the likelihood of the data
given a candidate signal s. ML is equivalent to MAP in (4) but without the second sum, i.e. without
prior information on the signal.
While ML is typically optimal amongst unbiased estimators in the classical statistical limit α → ∞
(see e.g. [1]), neither MAP nor ML are optimal in high dimensions, at finite α. Therefore, we consider
a broader class of estimators known as regularized M-estimators, corresponding to the optimization
problem

\hat{s} = \arg\min_s \Big[\, \sum_{\mu=1}^{N} L(y_\mu,\, x_\mu \cdot s) \;+\; \sum_{i=1}^{P} \sigma(s_i) \Big].   (5)
Here L(y, z) is a loss function and σ is a regularizer. We assume both to be convex functions in z and
s respectively. Note that MAP inference corresponds to the choice L(y, z) = −log P_{y|z}(y|z) and
σ(s) = −log P_s(s). ML inference corresponds to the same loss function but without regularization:
σ(s) = 0. Other well known M-estimators include LASSO [2], corresponding to the choice
L(y, z) = ½(y − z)² and σ(s) ∝ |s|, or the elastic net [3], which includes an additional quadratic term
on the LASSO regularizer. Such M-estimators are heuristically motivated as a convex relaxation of
However, a general theory for how to select the optimal M-estimator in (5) given the generative model
of data in (1) remains elusive. This is the central problem we address in this work.
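To make the M-estimation problem concrete, here is a minimal sketch (under our own naming and solver choices, assuming scipy is available) of solving (5) with a generic convex loss and regularizer; with the MAP choice of loss and regularizer it reduces to (4). L-BFGS-B is used purely for illustration and tolerates the kink in |s| only approximately:

import numpy as np
from scipy.optimize import minimize

def m_estimate(X, y, loss, reg, s_init=None):
    # Minimize sum_mu loss(y_mu, x_mu . s) + sum_i reg(s_i), cf. (5)
    def objective(s):
        return loss(y, X @ s).sum() + reg(s).sum()
    s_start = np.zeros(X.shape[1]) if s_init is None else s_init
    return minimize(objective, s_start, method="L-BFGS-B").x

# MAP for a logistic channel and Laplacian prior (the zero-smoothing curves of Fig. 1)
logistic_loss = lambda y, z: np.logaddexp(0.0, -z) + (1.0 - y) * z   # -log P(y|z)
laplace_reg = lambda s: np.abs(s)                                    # -log P_s(s) up to a constant
s_map = m_estimate(X, y, logistic_loss, laplace_reg)                 # X, y from the earlier sketch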
1.1
Related work and Outline
Seminal work [4] found the optimal unregularized M-estimator using variational methods in the
special case of linear measurements and additive noise, i.e., r(z, ε) = z + ε in (1). In this same
setting, [5] characterized unregularized M-estimator performance via approximate message passing
(AMP) [6]. Following this, the performance of regularized M-estimators in the linear additive setting
was characterized in [7], using non-rigorous statistical physics methods based on replica theory, and
in [8], using rigorous methods different from [4, 5]. Moreover, [7] found the optimal regularized
M-estimator and demonstrated, surprisingly, zero performance gap relative to MMSE. The goals of
this paper are to (1) interpret and extend previous work by deriving an equivalence between optimal
M-estimation and Bayesian MMSE inference via AMP and (2) to derive the optimal M-estimator in
the more general setting of nonlinear measurements and non-additive noise.
To address these goals, we begin in section 2 by describing a pair of AMP algorithms, derived
heuristically via approximations of belief propagation (BP). The first algorithm, mAMP, is designed
to solve M-estimation in (5), while the second, bAMP, is designed to solve Bayesian MMSE inference
in (2). In section 3 we derive a connection, via AMP, between M-estimation and MMSE inference:
we find, for a particular choice of optimal M-estimator, that mAMP and bAMP have the same fixed
points. To quantitatively determine the optimal M-estimator, which depends on some smoothing
parameters, we must quantitatively characterize the performance of AMP, which we do in section
4. We thereby recover optimal M-estimators found in recent works in the linear additive setting,
without using variational methods, and moreover find optimal M-estimators in the nonlinear, nonadditive setting. Our non-variational approach through AMP also provides an intuitive explanation
for the form of the optimal M-estimator in terms of Bayesian inference. Intriguingly, the optimal
M-estimator resembles a smoothed version of MAP, with lower measurement density requiring
more smoothing. In Section 4, we also demonstrate, through numerical simulations, a substantial
performance improvement in inference accuracy achieved by the optimal M-estimator over MAP
under nonlinear measurements with non-additive noise. We end with a discussion in section 5.
2
Formulations of Bayesian inference and M-estimation through AMP
Both mAMP and bAMP, heuristically derived in the supplementary material 1 (SM) sections 2.2-2.4
through approximate BP applied to (5) and (2) respectively, can be expressed as special cases of a
generalized AMP (gAMP) algorithm [9], which we first describe. gAMP is a set of iterative equations,
\hat{z}^t = X\hat{s}^t + \lambda_\sigma^t\, G_y(\lambda_\sigma^{t-1}, y, \hat{z}^{t-1}),
\lambda_h^t = \Big( \frac{\alpha}{N} \sum_{\mu=1}^{N} \frac{\partial}{\partial \hat{z}_\mu} G_y(\lambda_\sigma^t, y_\mu, \hat{z}_\mu^t) \Big)^{-1},
\hat{s}^{t+1} = G_s\Big( \lambda_h^t,\; \hat{s}^t - \lambda_h^t X^T G_y(\lambda_\sigma^t, y, \hat{z}^t) \Big),   (6)

\lambda_\sigma^{t+1} = \frac{\alpha \lambda_h^t}{P} \sum_{j=1}^{P} \frac{\partial}{\partial h}\, G_s\Big( \lambda_h^t,\; \hat{s}_j^t - \lambda_h^t X_j^T G_y(\lambda_\sigma^t, y, \hat{z}^t) \Big),   (7)
that depend on the scalar functions G_y(λ_σ, y, ẑ) and G_s(λ_h, h) which, in our notation, act componentwise on vectors, so that the μth component is G_y(λ_σ, y, ẑ)_μ = G_y(λ_σ, y_μ, ẑ_μ) and the ith component is G_s(λ_h, h)_i = G_s(λ_h, h_i). Initial conditions are given by ŝ^{t=0} ∈ ℝ^P, λ_σ^{t=0} ∈ ℝ₊, and ẑ^{t=−1} ∈ ℝ^N.
Intuitively, one can think of ẑ^t as related to the linear part of the measurement outcome predicted by
the current guess ŝ^t, and G_y is a measurement correction map that uses the actual measurement data y
to correct ẑ^t. Also, intuitively, we can think of G_s as taking an input ŝ^t − λ_h^t X^T G_y(λ_σ^t, y, ẑ^t), which
is a measurement-based correction to ŝ^t, and yielding as output a further, measurement-independent
correction ŝ^{t+1}, that could depend on either a regularizer or prior. We thus refer to the functions G_y
and G_s as the measurement and signal correctors respectively. gAMP thus alternates measurement
and signal correction, with time-dependent parameters λ_h^t and λ_σ^t. These equations were described in
[9], and special cases of them were studied in various works (see e.g. [5, 10]).
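The following schematic loop is our reading of the updates (6)-(7) for user-supplied correctors Gy, Gs and their derivatives dGy, dGs; the initialization and the reuse of a single λ_σ in place of λ_σ^{t−1} in the memory term are simplifications of ours:

import numpy as np

def gamp(X, y, Gy, dGy, Gs, dGs, T=50, lam_s=1.0):
    N, P = X.shape
    alpha = N / P
    s_hat = np.zeros(P)
    z_hat = np.zeros(N)
    for _ in range(T):
        # (6): estimated linear measurement, with the memory term
        z_hat = X @ s_hat + lam_s * Gy(lam_s, y, z_hat)
        g = Gy(lam_s, y, z_hat)
        lam_h = 1.0 / (alpha * np.mean(dGy(lam_s, y, z_hat)))   # (6): step size
        h = s_hat - lam_h * (X.T @ g)                           # measurement-based correction
        s_hat = Gs(lam_h, h)                                    # (6): signal correction
        lam_s = alpha * lam_h * np.mean(dGs(lam_h, h))          # (7)
    return s_hat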
2.1
From M-estimation to mAMP
Now, applying approximate BP to (5) when the input vectors x_μ are iid Gaussian, again with
normalization ⟨x_μ · x_μ⟩ = α, we find (SM Sec. 2.3) that the resulting mAMP equations are a special
case of the gAMP equations, where the functions G_y and G_s are related to the loss L and regularizer
σ through

G_y^M(\lambda_\sigma, y, \hat{z}) = \mathcal{M}_{\lambda_\sigma}[\, L(y, \cdot)\,]'(\hat{z}), \qquad G_s^M(\lambda_h, h) = \mathcal{P}_{\lambda_h}[\, \sigma\,](h).   (8)
Please see https://ganguli-gang.stanford.edu/pdf/16.Bayes.Mestimation.Supp.pdf for the supplementary
material.
The functional mappings M and P, the Moreau envelope and proximal map [11], are defined as

\mathcal{M}_\lambda[\, f\,](x) = \min_y \Big[ \frac{(x - y)^2}{2\lambda} + f(y) \Big], \qquad \mathcal{P}_\lambda[\, f\,](x) = \arg\min_y \Big[ \frac{(x - y)^2}{2\lambda} + f(y) \Big].   (9)
The proximal map maps a point x to another point that minimizes f while remaining close to x as
determined by a scale λ. This can be thought of as a proximal descent step on f starting from x with
step length λ. Perhaps the most ubiquitous example of a proximal map occurs for f(z) = |z|, in which
case the proximal map is known as the soft thresholding operator and takes the form P_λ[f](x) = 0
for |x| ≤ λ and P_λ[f](x) = x − sign(x)λ for |x| ≥ λ. This soft thresholding is prominent in AMP
approaches to compressed sensing (e.g. [10]). The Moreau envelope is a minimum convolution of f
with a quadratic, and as such, M_λ[f](x) is a smoothed lower bound on f with the same minima
[11]. Moreover, differentiating M with respect to x yields [11] the relation

\mathcal{P}_\lambda[\, f\,](x) = x - \lambda\, \mathcal{M}_\lambda[\, f\,]'(x).   (10)
Thus a proximal descent step on f is equivalent to a gradient descent step on the Moreau envelope of
f, with the same step length λ. This equality is also useful in proving (SM Sec. 2.1) that the fixed
points of mAMP satisfy

X^T \frac{\partial}{\partial \hat{z}} L(y, X\hat{s}) + \sigma'(\hat{s}) = 0.   (11)

Thus fixed points of mAMP are local minima of M-estimation in (5).
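As a quick numerical sanity check (ours), the relation (10) can be verified for f(z) = |z|, whose proximal map is the soft-thresholding operator quoted above and whose Moreau envelope is the Huber function:

import numpy as np

def prox_abs(x, lam):                     # soft thresholding: prox of f(z) = |z|
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def moreau_abs(x, lam):                   # Moreau envelope of |.| is the Huber function
    return np.where(np.abs(x) <= lam, x**2 / (2.0 * lam), np.abs(x) - lam / 2.0)

lam = 0.7
x = np.linspace(-3.0, 3.0, 601)
dM = np.gradient(moreau_abs(x, lam), x)   # numerical derivative M'
assert np.allclose(prox_abs(x, lam), x - lam * dM, atol=1e-2)   # relation (10)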
To develop intuition for the mAMP algorithm, we note that the ŝ update step in (6) is similar to
the more intuitive proximal gradient descent algorithm [11], which seeks to solve the M-estimation
problem in (5) by alternately performing a gradient descent step on the loss term and a proximal
descent step on the regularization term, both with the same step length. Thus one iteration of gradient
descent on L followed by proximal descent on σ in (5), with both steps using step length λ_h, yields

\hat{s}^{t+1} = \mathcal{P}_{\lambda_h}[\, \sigma\,]\Big( \hat{s}^t - \lambda_h X^T \frac{\partial}{\partial \hat{z}} L(y, X\hat{s}^t) \Big).   (12)
By inserting (8) into (6)-(7), we see that mAMP closely resembles proximal gradient descent, but
with three main differences: 1) the loss function is replaced with its Moreau envelope, 2) the loss is
evaluated at ẑ^t, which includes an additional memory term, and 3) the step size λ_h^t is time dependent.
Interestingly, this additional memory term and step-size evolution have been found to speed up
convergence relative to proximal gradient descent in certain special cases, like LASSO [10].
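For contrast, here is plain proximal gradient descent (12) without the memory term or adaptive step size; a sketch under our naming, with a fixed small step (convergence requires the step to stay below the inverse Lipschitz constant of the loss gradient):

import numpy as np

def proximal_gradient(X, y, dloss, prox_reg, lam_h=0.01, T=500):
    # dloss(y, z) is dL/dz; prox_reg(v, lam) is the proximal map of the regularizer
    s_hat = np.zeros(X.shape[1])
    for _ in range(T):
        grad = X.T @ dloss(y, X @ s_hat)                 # gradient step on the loss
        s_hat = prox_reg(s_hat - lam_h * grad, lam_h)    # proximal step on the regularizer
    return s_hat

# e.g. LASSO: squared loss with the soft-thresholding prox from the sketch above
soft = lambda v, lam: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
s_lasso = proximal_gradient(X, y, lambda y_, z: z - y_, soft)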
In summary, in mAMP the measurement corrector Gy implements a gradient descent on the Moreau
smoothed loss, while the signal corrector Gs implements a proximal descent step on the regularizer.
But because of (10), this latter step can also be thought of as a gradient descent step on the Moreau
smoothed regularizer. Thus overall, the mAMP approach to M-estimation is intimately related to
Moreau smoothing of both the loss and regularizer.
2.2
From Bayesian integration to bAMP
Now, applying approximate BP to (2) when again the input vectors x_μ are iid Gaussian, we find (SM
Sec. 2.2) that the resulting bAMP equations are a special case of the gAMP equations, where the
functions G_y and G_s are related to the noise P_{y|z} and signal P_s distributions through

G_y^B(\lambda_\sigma, y, \hat{z}) = -\frac{\partial}{\partial \hat{z}} \log\big( P_y(y|\hat{z}, \lambda_\sigma) \big), \qquad G_s^B(\lambda_h, h) = \hat{s}^{mmse}(\lambda_h, h),   (13)

where

P_y(y|\hat{z}, \lambda) \propto \int P_{y|z}(y|z)\, e^{-\frac{(\hat{z}-z)^2}{2\lambda}}\, dz, \qquad \hat{s}^{mmse}(\lambda, h) = \frac{\int s\, P_s(s)\, e^{-\frac{(s-h)^2}{2\lambda}}\, ds}{\int P_s(s)\, e^{-\frac{(s-h)^2}{2\lambda}}\, ds},   (14)
as derived in SM section 2.2. Here P_y(y|ẑ, λ) is a convolution of the likelihood with a Gaussian of
variance λ (normalized so that it is a probability density in y), and ŝ^{mmse} denotes the posterior mean
⟨s⁰|h⟩, where h = s⁰ + √λ w is a corrupted signal, w is a standard Gaussian random variable, and
s⁰ is a random variable drawn from P_s.
Inserting these equations into (6)-(7), we see that bAMP performs a measurement correction step
through Gy that corresponds to a gradient descent step on the negative log of a Gaussian-smoothed
likelihood function. The subsequent signal correction step through Gs is simply the computation of a
posterior mean, assuming the input is drawn from the prior and corrupted by additive Gaussian noise
with a time-dependent variance λ_h^t.
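Since the bAMP correctors (13)-(14) are one-dimensional Gaussian convolutions, they can be approximated with Gauss-Hermite quadrature; the following sketch (our construction, with illustrative names) works for any scalar likelihood P_{y|z} and prior P_s. Normalization constants drop out of the log-derivative and of the posterior-mean ratio:

import numpy as np

# Probabilists' Gauss-Hermite: sum_i w_i f(x_i) ~ int f(u) e^{-u^2/2} du
nodes, weights = np.polynomial.hermite_e.hermegauss(61)

def Gs_bayes(lam, h, P_s):
    # Posterior mean (14) of s given h = s + sqrt(lam) w; normalization cancels in the ratio
    s = h[:, None] + np.sqrt(lam) * nodes
    w = P_s(s) * weights
    return (s * w).sum(axis=1) / w.sum(axis=1)

def Gy_bayes(lam, y, z_hat, P_y_given_z, eps=1e-4):
    # -d/dz_hat log Py(y | z_hat, lam), cf. (13); constants drop out of the derivative
    def logPy(zh):
        vals = P_y_given_z(y[:, None], zh[:, None] + np.sqrt(lam) * nodes)
        return np.log(vals @ weights)
    return -(logPy(z_hat + eps) - logPy(z_hat - eps)) / (2.0 * eps)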
3
An AMP equivalence between Bayesian inference and M-estimation
In the previous section, we saw intriguing parallels between mAMP and bAMP, both special cases of
gAMP. While mAMP performs its measurement and signal correction through a gradient descent
step on a Moreau smoothed loss and a Moreau smoothed regularizer respectively, bAMP performs its
measurement correction through a gradient descent step on the minus log of a Gaussian smoothed
likelihood, and its signal correction through an MMSE estimation problem. These parallels suggest we
may be able to find a loss L and regularizer σ such that the corresponding mAMP becomes equivalent
to bAMP. If so, then assuming the correctness of bAMP as a solution to (2), the resulting L^{opt} and
σ^{opt} will yield the optimal mAMP dynamics, achieving MMSE inference.
By comparing (8) and (13), we see that bAMP and mAMP will have the same G_y if the Moreau-smoothed loss equals the minus log of the Gaussian-smoothed likelihood function:

\mathcal{M}_{\lambda_\sigma}[\, L^{opt}(y, \cdot)\,](\hat{z}) = -\log\big( P_y(y|\hat{z}, \lambda_\sigma) \big).   (15)

Before describing how to invert the above expression to determine L^{opt}, we would also like to find a
relation between the two signal correction functions G_s^M and G_s^B. This is a little more challenging
because the former implements a proximal descent step while the latter implements an MMSE
posterior mean computation. However, we can express the MMSE computation as gradient ascent on
the log of a Gaussian-smoothed signal distribution (see SM):

\hat{s}^{mmse}(\lambda_h, h) = h + \lambda_h \frac{\partial}{\partial h} \log\big( P_s(h, \lambda_h) \big), \qquad P_s(h, \lambda) \propto \int P_s(s)\, e^{-\frac{(s-h)^2}{2\lambda}}\, ds.   (16)
Moreover, by applying (10) to the definition of G_s^M in (8), we can write G_s^M as gradient descent on
a Moreau-smoothed regularizer. Then, comparing these modified forms of G_s^B with G_s^M, we find
a similar condition for σ^{opt}, namely that its Moreau smoothing should equal the minus log of the
Gaussian-smoothed signal distribution:

\mathcal{M}_{\lambda_h}[\, \sigma^{opt}\,](h) = -\log\big( P_s(h, \lambda_h) \big).   (17)
Our goal is now to compute the optimal loss and regularizer by inverting the Moreau envelope
relations (15, 17) to solve for L^{opt}, σ^{opt}. A sufficient condition [4] to invert these Moreau envelopes
to determine the optimal mAMP dynamics is that P_{y|z}(y|z) and P_s(s) are log-concave with respect
to z and s respectively. Under this condition the Moreau envelope will be invertible via the relation
M_q[−M_q[−f](·)](ẑ) = f(ẑ) (see SM Appendix A.3 for a derivation), which yields:

L^{opt}(y, \hat{z}) = -\mathcal{M}_{\lambda_\sigma}[\, \log( P_y(y|\cdot, \lambda_\sigma) )\,](\hat{z}), \qquad \sigma^{opt}(h) = -\mathcal{M}_{\lambda_h}[\, \log( P_s(\cdot, \lambda_h) )\,](h).   (18)
This optimal loss and regularizer form resembles smoothed MAP inference, with λ_σ and λ_h being
scalar parameters that modify MAP through both Gaussian and Moreau smoothing. An example of
such a family of smoothed loss and regularizer functions is given in Fig. 1 for the case of a logistic
output channel with Laplacian-distributed signal. Additionally, one can show that the optimal loss
and regularizer are convex when the signal and noise distributions are log-concave. Overall, this
analysis yields a dynamical equivalence between mAMP and bAMP as long as, at each iteration time
t, the optimal loss and regularizer for mAMP are chosen through the smoothing operation in (18), but
using time-dependent smoothing parameters λ_σ^t and λ_h^t whose evolution is governed by (7).
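A brute-force construction of σ^{opt} in (18) on a grid, following the two smoothing steps literally (Gaussian smoothing of the prior, then a negated Moreau envelope), might look as follows; this is our sketch, accurate only on grids wide and fine enough to cover the minimizers, and computed only up to an irrelevant additive constant:

import numpy as np

def optimal_regularizer(h_grid, P_s, lam_h, s_grid):
    # Step 1: log P_s(h, lam_h), the Gaussian-smoothed prior of (16), up to a constant
    K = np.exp(-(s_grid[None, :] - h_grid[:, None])**2 / (2.0 * lam_h))
    logPs = np.log((K * P_s(s_grid)).sum(axis=1))
    # Step 2: sigma_opt(h) = -M_{lam_h}[ log P_s(., lam_h) ](h), by grid minimization of (9)
    quad = (h_grid[:, None] - h_grid[None, :])**2 / (2.0 * lam_h)
    return -np.min(quad + logPs[None, :], axis=1)

h = np.linspace(-4.0, 4.0, 401)
s = np.linspace(-8.0, 8.0, 801)
sigma_opt = optimal_regularizer(h, lambda x: 0.5 * np.exp(-np.abs(x)), 1.0, s)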
4
Determining optimal smoothing parameters via state evolution of AMP
In the previous section, we have shown that mAMP and bAMP have the same dynamics, as long as, at
each iteration t of mAMP, we choose a time-dependent optimal loss L_t^{opt} and regularizer σ_t^{opt} through
(18), where the time dependence is inherited from the time-dependent smoothing parameters λ_σ^t and
λ_h^t. However, mAMP was motivated as an algorithmic solution to the M-estimation problem in (5)
for a fixed loss and regularizer, while bAMP was motivated as a method of performing the Bayesian
integral in (2). This then raises the question: is there a fixed, optimal choice of L^{opt} and σ^{opt} in (5)
such that the corresponding M-estimation problem yields the same answer as the Bayesian integral in (2)?
The answer is yes: simply choose a fixed L^{opt} and σ^{opt} through (18), where the smoothing parameters
λ_σ and λ_h are chosen to be those found at the fixed points of bAMP. To see this, note that fixed
points of mAMP with time-dependent choices of L_t^{opt} and σ_t^{opt} are equivalent to the minima of the
M-estimation problem in (5), with the choice of loss and regularizer that this time-dependent sequence
converges to: L_∞^{opt} and σ_∞^{opt} (this follows from an extension of the argument that led to (11)). In turn,
the fixed points of mAMP are equivalent to those of bAMP under the choice (18). These equivalences
then imply that, if the bAMP dynamics for (ŝ^t, λ_σ^t, λ_h^t) approaches the fixed point (ŝ^∞, λ_σ^∞, λ_h^∞),
then ŝ^∞ is the solution to both Bayesian inference in (2) and optimal M-estimation in (5), with
optimal loss and regularizer given by (18) with the choice of smoothing parameters λ_σ^∞ and λ_h^∞.

[Figure 1: Plots of the optimal loss (A, "Optimal loss") and regularizer (B, "Optimal regularizer") in (18), for a logistic output y ∈ {0, 1} with P_{y|z}(y = 1|z) = 1/(1 + e^{−z}), and Laplacian signal s with P_s(s) = (1/2) e^{−|s|}. In (A) the loss is plotted for the measurement y = 1: L^{opt}(y = 1, ẑ). Both sets of curves from red to black (and bottom to top) correspond to smoothing parameters λ_σ = (0, 2, 4, 6) in (A) and λ_h = (0, 1/2, 1, 2) in (B). With zero smoothing, the red curves at the bottom correspond to the MAP loss and regularizer.]
We now discuss how to determine λ_σ^∞ and λ_h^∞ analytically, thereby completing our heuristic derivation
of an optimal M-estimator that matches Bayesian MMSE inference. An essential tool is state evolution
(SE), which characterizes the gAMP dynamics [12] as follows. First, let z = Xs⁰ be related to the
true measurements. Then (6) implies that ẑ^t − z is a time-dependent residual. Remarkably, the gAMP
equations ensure that the components of the residual ẑ^t − z, as well as of h^t = −λ_h^t X^T G_y(λ_σ^t, y, ẑ^t),
are Gaussian distributed; the history term in the update of ẑ^t in (6) crucially cancels out non-Gaussian
structure that would otherwise develop as the vectors ẑ^t and h^t propagate through the nonlinear
measurement and signal correction steps induced by G_y and G_s. We denote by q_σ^t and q_h^t the variance
of the components of ẑ^t − z and h^t respectively. Additionally, we denote by q_s^t = (1/P)⟨‖ŝ^t − s⁰‖²⟩
the per-component MSE at iteration t. SE is a set of analytical evolution equations for the quantities
(q_s^t, q_σ^t, q_h^t, λ_σ^t, λ_h^t) that characterize the state of gAMP. Rigorous derivations both for dense [12]
Gaussian measurements and sparse measurements [13] reveal that the SE equations accurately track
the gAMP dynamical state in the high dimensional limit N, P → ∞ with α = N/P = O(1) that we
consider here.
We derive the specific form of the mAMP SE equations, yielding a set of 5 update equations (see SM
section 3.1 for further details). We also derive the SE equations for bAMP, which are simpler. First,
we find the relations λ_σ^t = q_σ^t and λ_h^t = q_h^t. Thus SE for bAMP reduces to a pair of update equations:

q_\sigma^{t+1} = \alpha \Big\langle \big( G_s^B(q_h^t,\, s^0 + \sqrt{q_h^t}\, w) - s^0 \big)^2 \Big\rangle_{w,\, s^0}, \qquad q_h^t = \Big( \alpha \Big\langle G_y^B(q_\sigma^t, y, \hat{z}^t)^2 \Big\rangle_{y, z, \hat{z}^t} \Big)^{-1}.   (19)
Here w is a zero-mean, unit-variance Gaussian and s⁰ is a scalar signal drawn from the signal
distribution P_s. Thus the computation of the next residual q_σ^{t+1} on the LHS of (19) involves
computing the MSE in estimating a signal s⁰ corrupted by Gaussian noise of variance q_h^t, using
MMSE inference as an estimation procedure via the function G^B defined in (13). The RHS involves
an average over the joint distribution of scalar versions of the output y, true measurement z, and
estimated measurement ẑ^t. These three scalars are the SE analogs of the gAMP variables y, z,
and ẑ^t, and they model the joint distribution of single components of these vectors. Their joint
distribution is given by P(y, z, ẑ^t) = P_{y|z}(y|z) P(z, ẑ^t). In the special case of bAMP, z and ẑ^t
are jointly zero-mean Gaussian with second moments given by ⟨(ẑ^t)²⟩ = ασ_s² − q_σ^t, ⟨z²⟩ = ασ_s²,
and ⟨z ẑ^t⟩ = ασ_s² − q_σ^t (see SM 3.2 for derivations). These moments imply the residual variance
⟨(z − ẑ^t)²⟩ = q_σ^t. Intuitively, when gAMP works well, that is reflected in the SE equations by the
reduction of the residual variance q_σ^t over time, as the time-dependent estimated measurement ẑ^t
converges to the true measurement z. The actual measurement outcome y, after the nonlinear part of
the measurement process, is always conditionally independent of the estimated measurement ẑ^t, given
the true linear part of the measurement, z. Finally, the joint distribution of a single component of ŝ^{t+1}
and s⁰ in gAMP is predicted by SE to have the same distribution as that of ŝ^{t+1} = G_s^B(q_h^t, s⁰ + √(q_h^t) w)
and s⁰, after marginalizing out w. Comparing with the LHS of (19) then yields that the MSE per component
satisfies q_s^t = q_σ^t / α.
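A Monte Carlo implementation of the SE iteration (19), tracking (q_σ^t, q_h^t) with the Gaussian joint distribution of (z, ẑ^t) quoted above, might look as follows (our sampling choices; dGy_B denotes ∂G_y^B/∂ẑ, which at the Bayes-optimal fixed point matches the squared form in (19)):

import numpy as np

def state_evolution(alpha, Gs_B, dGy_B, sample_s0, sample_y, T=30, n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    sigma_s2 = np.mean(sample_s0(rng, n)**2)
    q_sig = alpha * sigma_s2                       # start from the trivial estimate s_hat = 0
    for _ in range(T):
        # (z, z_hat^t) jointly Gaussian with the second moments quoted above
        zh = rng.normal(0.0, np.sqrt(max(alpha * sigma_s2 - q_sig, 1e-12)), n)
        z = zh + rng.normal(0.0, np.sqrt(q_sig), n)
        y = sample_y(rng, z)
        q_h = 1.0 / (alpha * np.mean(dGy_B(q_sig, y, zh)))      # lambda_h^t = q_h^t
        s0 = sample_s0(rng, n)
        w = rng.normal(0.0, 1.0, n)
        q_sig = alpha * np.mean((Gs_B(q_h, s0 + np.sqrt(q_h) * w) - s0)**2)
    return q_sig / alpha                           # per-component MSE q_s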
Now, bAMP performance, upon convergence, is characterized by the fixed point of SE, which satisfies

q_s = \mathrm{MMSE}\big(s^0 \,|\, s^0 + \sqrt{q_h}\, w\big), \qquad q_h = \frac{1}{\alpha\, J[\, P_y(y|\cdot,\, \alpha q_s)\,]}.   (20)
Here, the MMSE function denotes the minimal error in estimating the scalar signal s⁰ from a
measurement of s⁰ corrupted by additive Gaussian noise of variance q_h, via computation of the
posterior mean ⟨s⁰ | s⁰ + √q_h w⟩:

\mathrm{MMSE}\big(s^0 \,|\, s^0 + \sqrt{q_h}\, w\big) = \Big\langle \big( \langle s^0 \,|\, s^0 + \sqrt{q_h}\, w \rangle - s^0 \big)^2 \Big\rangle_{s^0, w}.   (21)
Also, the function J on the RHS of (20) denotes the average Fisher information that y retains about
an input, with some additional Gaussian input noise of variance q:

J[\, P_y(y|\cdot, q)\,] = -\Big\langle \frac{\partial^2}{\partial \hat{z}^2} \log P_y(y|\hat{z}, q) \Big\rangle_{\hat{z}, y}.   (22)
These equations characterize the performance of bAMP, through q_s. Furthermore, they yield the
optimal smoothing parameters λ_σ = α q_s and λ_h = q_h. This choice of smoothing parameters,
when used in (18), yields a fixed optimal loss L^{opt} and regularizer σ^{opt}. When this optimal loss
and regularizer are used in the M-estimation problem in (5), the resulting M-estimator should have
performance equivalent to that of MMSE inference in (2). This completes our heuristic derivation of
an equivalence between optimal M-estimation and Bayesian inference through message passing.
In Figure 2 we demonstrate numerically that the optimal M-estimator substantially outperforms MAP,
especially at low measurement density α, and has performance equivalent to MMSE inference, as
theoretically predicted by SE for bAMP.
[Figure 2 ("Optimal vs MAP inference error"): For logistic output and Laplacian signal, as in Fig. 1, we plot the per-component MSE, normalized by signal variance, against the measurement density α. Smooth curves are theoretical predictions based on SE fixed points for mAMP for MAP inference (red) and bAMP for MMSE inference (black). Error bars reflect the standard deviation in performance obtained by solving (5), via mAMP, for MAP inference (red) and optimal M-estimation (black), using simulated data generated as in (1), with dense i.i.d. Gaussian measurements. For these finite simulated data sets, we varied α = N/P, while holding √(NP) ≈ 250. These results demonstrate that optimal M-estimation both significantly outperforms MAP (black below red) and matches Bayesian MMSE inference as predicted by SE for bAMP (black error bars consistent with black curve).]

5
Discussion
Overall we have derived an optimal M-estimator, i.e., a choice of optimal loss and regularizer, such that the
M-estimation problem in (5) has equivalent performance to that of Bayes-optimal MMSE inference in
(2), in the case of log-concave signal distribution and noise likelihood. Our derivation is heuristic in
that it employs the formalism of gAMP, and as such depends on the correctness of a few statements.
First, we assume that two special cases of the gAMP dynamics in (6), namely mAMP in (8) and
bAMP in (13) correctly solve the M-estimation problem in (5) and Bayesian MMSE inference in
(2), respectively. We provide a heuristic derivation of both of these assumptions in the SM based on
approximations of BP. Second, we require that SE in (19) correctly tracks the performance of gAMP
in (13). We note that under mild conditions, the correctness of SE as a description of gAMP was
rigorously proven in [12].
While we have not presented a rigorous derivation that the bAMP dynamics correctly solves the
MMSE inference problem, we note several related rigorous results. First, it has been shown that
bAMP is equivalent to MMSE inference in the limit of large sparse measurement matrices in [13, 14].
Also, in this same large sparse limit, the corresponding mAMP algorithm was shown to be equivalent
to MAP inference with additive Gaussian noise [15]. In the setting of dense measurements, the
correctness of bAMP has not yet been rigorously proven, but the associated SE is believed to be exact
in the dense iid Gaussian measurement setting based on replica arguments from statistical physics
(see e.g. section 4.3 in [16] for further discussion). For this reason, similar arguments have been
used to determine theoretical bounds on inference algorithms in compressed sensing [16], and matrix
factorization [17].
There are further rigorous results in the setting of M-estimation: mAMP and its associated SE are
also provably correct in the large sparse measurement limit, and have additionally been rigorously
proven to converge in special cases [5],[6] for dense iid Gaussian measurements. We further expect
these results to generalize to a universality class of measurement matrices with iid elements and a
suitable condition on their moments. Indeed this generalization was demonstrated rigorously for a
subclass of M-estimators in [18]. In the setting of dense measurements, due to the current absence
of rigorous results demonstrating the correctness of bAMP in solving MMSE inference, we have
also provided numerical experiments in Fig. 2. This figure demonstrates that optimal M-estimation
can significantly outperform MAP for high dimensional inference problems, again for the case of
log-concave signal and noise.
Additionally, we note that the per-iteration time complexity of the gAMP algorithms (6, 7) scales
linearly in both the number of measurements and signal dimensions. Therefore the optimal algorithms
we describe are applicable to large-scale problems. Moreover, at lower measurement densities, the
optimal loss and regularizer are smoother. Such smoothing may accelerate convergence time. Indeed
smoother convex functions, with smaller Lipschitz constants on their derivative, can be minimized
faster via gradient descent. It would be interesting to explore whether a similar result may hold for
gAMP dynamics.
Another interesting future direction is the optimal estimation of sparse signals, which typically do not
have log-concave distributions. One potential strategy in such scenarios would be to approximate
the signal distribution with the best log-concave fit and apply optimal smoothing to determine a
good regularizer. Alternatively, for any practical problem, one could choose the precise smoothing
parameters through any model selection procedure, for example cross-validation on held-out data.
Thus the combined Moreau and Gaussian smoothing in (18) could yield a family of optimization
problems, where one member of this family could potentially yield better performance in practice on
held-out data. For example, while LASSO performs very well for sparse signals, as demonstrated by
its success in compressed sensing [19, 20], the popular elastic net [3], which sometimes outperforms
pure LASSO by combining L1 and L2 penalties, resembles a specific type of smoothing of an L1
regularizer. It would be interesting to see if combined Moreau and Gaussian smoothing underlying
our optimal M-estimators could significantly out-perform LASSO and elastic net in practice, when
our distributional assumptions about signal and noise need not precisely hold. However, finding
optimal M-estimators for known sparse signal distributions, and characterizing the gap between their
performance and that of MMSE inference, remains a fundamental open question.
Acknowledgements
The authors would like to thank Lenka Zdeborova and Stephen Boyd for useful discussions and also
Chris Stock and Ben Poole for comments on the manuscript. M.A. thanks the Stanford MBC and
SGF for support. S.G. thanks the Burroughs Wellcome, Simons, Sloan, McKnight, and McDonnell
foundations, and the Office of Naval Research for support.
References
[1] P. Huber and E. Ronchetti. Robust Statistics. Wiley, 2009.
[2] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58:267–288, 1996.
[3] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 67:301–320, 2005.
[4] D. Bean, P. J. Bickel, N. El Karoui, and B. Yu. Optimal M-estimation in high-dimensional regression. PNAS, 110(36):14563–14568, 2013.
[5] D. Donoho and A. Montanari. High dimensional robust M-estimation: asymptotic variance via approximate message passing. Probability Theory and Related Fields, pages 1–35, 2013.
[6] M. Bayati and A. Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Transactions on Information Theory, 57(2):764–785, 2011.
[7] M. Advani and S. Ganguli. Statistical mechanics of optimal convex inference in high dimensions. Physical Review X, 6:031034, 2016.
[8] C. Thrampoulidis, E. Abbasi, and B. Hassibi. Precise high-dimensional error analysis of regularized M-estimators. 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 410–417, 2015.
[9] S. Rangan. Generalized approximate message passing for estimation with random linear mixing. Information Theory Proceedings (ISIT), 2011 IEEE International Symposium, pages 2168–2172, 2011.
[10] D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, pages 18914–18919, 2009.
[11] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013.
[12] A. Javanmard and A. Montanari. State evolution for general approximate message passing algorithms, with applications to spatial coupling. Information and Inference, page iat004, 2013.
[13] S. Rangan. Estimation with random linear mixing, belief propagation and compressed sensing. Information Sciences and Systems (CISS), 2010 44th Annual Conference, 2010.
[14] D. Guo and C. C. Wang. Random sparse linear systems observed via arbitrary channels: a decoupling principle. Information Theory, 2007. ISIT 2007. IEEE International Symposium, 2007.
[15] C. C. Wang and D. Guo. Belief propagation is asymptotically equivalent to MAP estimation for sparse linear systems. Proc. Allerton Conf., pages 926–935, 2006.
[16] F. Krzakala, M. Mézard, F. Sausset, Y. Sun, and L. Zdeborová. Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices. Journal of Statistical Mechanics: Theory and Experiment, (08):P08009, 2012.
[17] Y. Kabashima, F. Krzakala, M. Mézard, A. Sakata, and L. Zdeborová. Phase transitions and sample complexity in Bayes-optimal matrix factorization. IEEE Transactions on Information Theory, 62:4228–4265, 2016.
[18] M. Bayati, M. Lelarge, and A. Montanari. Universality in polytope phase transitions and message passing algorithms. The Annals of Applied Probability, 25:753–822, 2015.
[19] E. Candes and M. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.
[20] A. M. Bruckstein, D. L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34–81, 2009.
6,190 | 66 | 41
ON PROPERTIES OF NETWORKS
OF NEURON-LIKE ELEMENTS
Pierre Baldi* and Santosh S. Venkatesh†
15 December 1987
Abstract
The complexity and computational capacity of multi-layered, feedforward
neural networks is examined. Neural networks for special purpose (structured)
functions are examined from the perspective of circuit complexity. Known results in complexity theory are applied to the special instance of neural network
circuits, and in particular, classes of functions that can be implemented in
shallow circuits characterised. Some conclusions are also drawn about learning
complexity, and some open problems raised. The dual problem of determining
the computational capacity of a class of multi-layered networks with dynamics
regulated by an algebraic Hamiltonian is considered. Formal results are presented on the storage capacities of programmed higher-order structures, and
a tradeoff between ease of programming and capacity is shown. A precise determination is made of the static fixed point structure of random higher-order
constructs, and phase-transitions (0-1 laws) are shown.
1
INTRODUCTION
In this article we consider two aspects of computation with neural networks. Firstly
we consider the problem of the complexity of the network required to compute classes
of specified (structured) functions. We give a brief overview of basic known complexity theorems for readers familiar with neural network models but less familiar
with circuit complexity theories. We argue that there is considerable computational
and physiological justification for the thesis that shallow circuits (i.e., networks with
relatively few layers) are computationally more efficient. We hence concentrate on
structured (as opposed to random) problems that can be computed in shallow (constant depth) circuits with a relatively few number (polynomial) of elements, and
demonstrate classes of structured problems that are amenable to such low cost solutions. We discuss an allied problem-the complexity of learning-and close with
some open problems and a discussion of the observed limitations of the theoretical
approach.
We next turn to a rigorous classification of how much a network of given
structure can do; i.e., the computational capacity of a given construct. (This is, in
*Department of Mathematics, University of California (San Diego), La Jolla, CA 92093
†Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA 19104
© American Institute of Physics 1988
a sense, the mirror image of the problem considered above, where we were seeking
to design a minimal structure to perform a given task.) In this article we restrict
ourselves to the analysis of higher-order neural structures obtained from polynomial
threshold rules. We demonstrate that these higher-order networks are a special class
of layered neural network, and present formal results on storage capacities for these
constructs. Specifically, for the case of programmed interactions we demonstrate
that the storage capacity is of the order of n^d, where d is the interaction order.
For the case of random interactions, a type of phase transition is observed in the
distribution of fixed points as a function of attraction depth.
2
COMPLEXITY
There exist two broad classes of constraints on computations.
1. Physical constraints: These are related to the hardware in which the computa-
tion is embedded, and include among others time constants, energy limitations,
volumes and geometrical relations in 3D space, and bandwidth capacities.
2. Logical constraints: These can be further subdivided into
? Computability constraints-for instance, there exist unsolvable problems,
i.e., functions such as the halting problem which are not computable in
an absolute sense .
? Complexity constraints-usually giving upper and/or lower bounds on
the amount of resources such as the time, or the number of gates required to compute a given function. As an instance, the assertion "There
exists an exponential time algorithm for the Traveling Salesman Problem," provides a computational upper bound.
If we view brains as computational devices, it is not unreasonable to think
that in the course of the evolutionary process, nature may have been faced several
times with problems related to physical and, perhaps to a minor degree, logical constraints on computations. If this is the case, then complexity theory in a broad
sense could contribute in the future to our understanding of parallel computations
and architectural issues both in natural and synthetic neural systems.
A simple theory of parallel processing at the macro level (where the elements
are processors) can be developed based on the ratio of the time spent on communications between processors [7] for different classes of problems and different
processor architecture and interconnections. However, this approach does not seem
to work for parallel processing at the level of circuits, especially if calculations and
communications are intricately entangled.
Recent neural or connectionist models are based on a common structure, that
of highly interconnected networks of linear (or polynomial) threshold (or with sigmoid input-output function) units with adjustable interconnection weights. We shall
therefore review the complexity theory of such circuits. In doing so, it will be sometimes helpful to contrast it with the similar theory based on Boolean (AND, OR,
NOT) gates. The presentation will be rather informal and technical complements
can easily be found in the references.
Consider a circuit as being an acyclic oriented graph connecting n Boolean
inputs to one Boolean output. The nodes of the graph correspond to the gates
(the n input units, the "hidden" units, and the output unit) of the circuit. The
size of the circuit is the total number of gates and the depth is the length of the
longest path connecting one input to the output. For a layered, feed-forward circuit,
the width is the average number of computational units in the hidden (or interior)
layers of elements. The first obvious thing when comparing Boolean and threshold
logic is that they are equivalent in the sense that any Boolean function can be
implemented using either logic. In fact, any such function can be computed in a
circuit of depth two and exponential size. Simple counting arguments show that
the fraction of functions requiring a circuit of exponential size approaches one as
n → ∞ in both cases, i.e., a random function will in general require an exponential
size circuit. (Paradoxically, it is very difficult to construct a family of functions
for which we can prove that an exponential circuit is necessary.) Yet, threshold
logic is more powerful than Boolean logic. A Boolean gate can compute only one
function whereas a threshold gate can compute on the order of 2^{an²} functions by
varying the weights, with 1/2 ≤ a ≤ 1 (see [19] for the lower bound; the upper
bound is a classical hyperplane counting argument, see for instance [20,30]). It
would hence appear plausible that there exist wide classes of problems which can be
computed by threshold logic with circuits substantially smaller than those required
by Boolean logic. An important result which separates threshold and Boolean logic
from this point of view has been demonstrated by Yao [31] (see [10,24] for an elegant
proof). The result is that in order to compute a function such as parity in a circuit
of constant depth k, at least exp(cn^{1/2k}) Boolean gates with unbounded fan-in are
required. As we shall demonstrate shortly, a circuit of depth two and linear size is
sufficient for the computation of such functions using threshold logic.
It is not unusual to hear discussions about the tradeoffs between the depth
and the width of a circuit. We believe that one of the main constributions of
complexity analysis is to show that this tradeoff is in some sense minimal and that
in fact there exists a very strong bias in favor of shallow (Le., constant depth)
circuits. There are multiple reasons for this. In general, for a fixed size, the number
of different functions computable by a circuit of small depth exceeds the number
of those computable by a deeper circuit. That is, if one had no a priori knowledge
regarding the function to be computed and was given hidden units, then the optimal
strategy would be to choose a circuit of depth two with the m units in a single
layer. In addition, if we view computations as propagating in a feedforward mode
from the inputs to the output unit, then shallow circuits compute faster. And the
deeper a circuit, the more difficult become the issues of time delays, synchronisation,
and precision on the computations. Finally, it should be noticed that given overall
responses of a few hundred milliseconds and given the known time scales for synaptic
integration, biological circuitry must be shallow, at least within a "module" and
this is corroborated by anatomical data. The relative slowness of neurons and their
shallow circuit architecture are to be taken together with the "analog factor" and
"entropy factor" [1] to understand the necessary high-connectivity requirements of
neural systems.
From the previous analysis emerges an important class of circuits in threshold
logic characterised by polynomial size and shallow depth. We have seen that, in
general, a random function cannot be computed by such circuits. However, many
interesting functions, the structured problems, are far from random, and it is then
natural to ask what is the class of functions computable by such circuits? While
a complete characterisation is probably difficult, there are several sub-classes of
structural functions which are known to be computable in shallow poly-size circuits.
The symmetric functions, i.e., functions which are invariant under any permutation of the n input variables, are an important class of structured problems
that can be implemented in shallow polynomial size circuits. In fact, any symmetric function can be computed by a threshold circuit of depth two and linear size;
(n hidden units and one output unit are always sufficient). We demonstrate the
validity of this assertion by the following instructive construction. We consider n
binary inputs, each taking on values -1 and 1 only, and threshold gates as units.
Now array the 2^n possible inputs in n + 1 rows with the elements in each row being
permuted versions of each other (i.e., n-tuples in a row all have the same number
of +1's) and with the rows going monotonically from zero +1's to n +1's. Any
given symmetric Boolean function clearly assumes the same value for all elements
(Boolean n-tuples) in a row, so that contiguous rows where the function assumes
the value +1 form bands. (There are at most n/2 bands-the worst case occuring
for the parity function.) The symmetric function can now be computed with 2B
threshold gates in a single hidden layer with the topmost "neuron" being activated
only if the number of +1's in the input exceeds the number of +1's in the lower
edge of the lowest band, and proceeding systematically, the lowest "neuron" being
activated only if the number of +1's in the input exceeds the number of +1's in the
upper edge of the highest band. An input string will be within a band if and only if
an odd number of hidden neurons are activated, starting contiguously from the top
of the hidden layer, and conversely. Hence, a single output unit can compute the
given symmetric function.
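The band construction above can be written down directly; the following sketch (ours) instantiates it for the worst case, parity, where each band is a single row and the output unit uses alternating ±1 weights:

import numpy as np

def parity_circuit(u):                    # u in {-1, +1}^n; depth two, 2B hidden gates
    n = len(u)
    k = int(np.sum(u == 1))               # the row index: number of +1's in the input
    bands = [(b, b) for b in range(1, n + 1, 2)]     # parity bands are single rows
    hidden = []
    for lo, hi in bands:                  # two threshold gates per band
        hidden += [int(k >= lo), int(k >= hi + 1)]
    # Output threshold gate with alternating weights: fires iff k lies inside a band
    w = [(-1)**i for i in range(len(hidden))]
    return int(np.dot(w, hidden) > 0)

rng = np.random.default_rng(2)
for u in rng.choice([-1, 1], size=(20, 9)):
    assert parity_circuit(u) == int(np.sum(u == 1) % 2 == 1)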
It is easy to see that arithmetic operations on binary strings can be performed
with polysize small depth circuits. Reif [23] has shown that for a fixed degree of precision, any analytic function such as polynomials, exponentials, and trigonometric
functions can be approximated with small and shallow threshold circuits. Finally,
in many situations one is interested in the value of a function only for a vanishingly
small (i.e., polynomial) fraction of the total number of possible inputs 2^n. These
functions can be implemented by polysize shallow circuits and one can relate the
size and depths of the circuit to the cardinal of the interesting inputs.
So far we only have been concerned with the complexity of threshold circuits.
We now turn to the complexity of learning, i.e., the problem of finding the weights
required to implement a given function. Consider the problem of separating m points
in ℝ^l, coloured in two colours, using k hyperplanes so that any region contains only
monochromatic points. If l and k are fixed the problem can be solved in polynomial
time. If either l or k goes to infinity, the problem becomes NP-complete [1]. As
a result, it is not difficult to see that the general learning problem is NP-complete
(see also [12] for a different proof and [21] for a proof of the fact it is already
NP-complete in the case of one single threshold gate).
Some remarks on the limitations of the complexity approach are apropos at this juncture:
1. While a variety of structured Boolean functions can be implemented at relatively low cost with networks of linear threshold gates (McCulloch-Pitts neurons), the extension to different input-output functions and the continuous
domain is not always straightforward.
2. Even restricting ourselves to networks of relatively simple Boolean devices such
as the linear threshold gate, in many instances, only relatively weak bounds
are available for computational cost and complexity.
3. Time is probably the single most important ingredient which is completely
absent from these threshold units and their interconnections [17,14]; there
are, in addition, non-biological aspects of connectionist models [8].
4. Finally, complexity results (where available) are often asymptotic in nature
and may not be meaningful in the range corresponding to a particular application.
We shall end this section with a few open questions and speculations. One
problem has to do with the time it takes to learn. Learning is often seen as a
very slow process both in artificial models (cf. back propagation, for instance) and
biological systems (cf. human acquisition of complex skills). However, if we follow
the standards of complexity theory, in order to be effective over a wide variety of
scales, a single learning algorithm should be polynomial time. We can therefore
ask what is learnable by examples in polynomial time by polynomial size shallow
threshold circuits? The status of back propagation type of algorithms with respect
to this question is not very clear.
The existence of many tasks which are easily executed by biological organisms
and for which no satisfactory computer program has been found so far leads to the
question of the specificity of learning algorithms, i.e., whether there exists a complexity class of problems or functions for which a "program" can be found only by
learning from examples as opposed to by traditional programming. There is some
circumstantial evidence against such a conjecture. As pointed out by Valiant [25],
cryptography can be seen in some sense as the opposite of learning. The conjectured
existence of one-way functions, i.e., functions which can be computed in polynomial time but cannot be inverted (from examples) in polynomial time, suggests that
learning algorithms may have strict limitations. In addition, for most of the artificial
applications seen so far, the programs obtained through learning do not outperform
the best already known software, though there may be many other reasons for that.
However, even if such a complexity class does not exist, learning algorithm may
still be very important because of their inexpensiveness and generality. The work of
Valiant [26,13] on polynomial time learning of Boolean formulas in his "distribution
free model" explores some additional limitations of what can be learned by examples
without including any additional knowledge.
Learning may therefore turn out to be a powerful, inexpensive but limited
family of algorithms that need to be incorporated as "sub-routines" of more global
programs, the structure of which may be harder to find. Should evolution be regarded as an "exponential" time learning process complemented by the "polynomial"
time type of learning occurring in the lifetime of organisms?
3
CAPACITY
In the previous section the focus of our investigation was on the structure and cost of
minimal networks that would compute specified Boolean functions. We now consider
the dual question: What is the computational capacity of a threshold network of
given structure? As with the issues on complexity, it turns out that for fairly general
networks, the capacity results favour shallow (but perhaps broad) circuits [29]. In
this discourse, however, we shall restrict ourselves to a specified class of higher-order
networks, and to problems of associative memory. We will just quote the principal
rigorous results here, and present the involved proofs elsewhere [4].
We consider systems of n densely interacting threshold units each of which
yields an instantaneous state -1 or +1. (This corresponds in the literature to a
system of n Ising spins, or alternatively, a system of n neural states.) The state
space is hence the set of vertices of the hypercube. We will in this discussion
also restrict our attention throughout to symmetric interaction systems wherein the
interconnections between threshold elements are bidirectional.
Let $I_d$ be the family of all subsets of cardinality $d+1$ of the set $\{1, 2, \ldots, n\}$. Clearly $|I_d| = \binom{n}{d+1}$. For any subset $I$ of $\{1, 2, \ldots, n\}$, and for every state $u = \{u_1, u_2, \ldots, u_n\} \in \mathbb{B}^n \overset{\text{def}}{=} \{-1, 1\}^n$, set $u_I = \prod_{i \in I} u_i$.

Definition 1 A homogeneous algebraic threshold network of degree $d$ is a network of $n$ threshold elements with interactions specified by a set of $\binom{n}{d+1}$ real coefficients $w_I$ indexed by $I$ in $I_d$, and the evolution rule

$$u_i^{+} = \mathrm{sgn}\Big(\sum_{I \in I_d :\, i \in I} w_I\, u_{I \setminus \{i\}}\Big) \qquad (1)$$
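To make the evolution rule concrete, here is a minimal Python sketch (our own illustration, not from the paper; the brute-force enumeration over subsets is exponential in $d$ and is meant only to mirror the definition):

```python
# Minimal sketch (our own illustration) of evolution rule (1) for a
# homogeneous algebraic threshold network of degree d. The weights w_I are
# indexed by (d+1)-subsets I of {0, ..., n-1}.
from itertools import combinations
import numpy as np

def async_update(u, w, i):
    """Rule (1): u_i^+ = sgn(sum over I in I_d with i in I of w_I * u_{I\{i}})."""
    field = sum(wI * np.prod([u[j] for j in I if j != i])
                for I, wI in w.items() if i in I)
    u = u.copy()
    u[i] = 1 if field >= 0 else -1
    return u

# Example: n = 6 units, degree d = 2, Gaussian interactions.
rng = np.random.default_rng(0)
n, d = 6, 2
w = {I: rng.normal() for I in combinations(range(n), d + 1)}
u = rng.choice([-1, 1], size=n)
u = async_update(u, w, i=0)
```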
These systems can be readily seen to be natural generalisations to higher order of the familiar case $d = 1$ of linear threshold networks. The added degrees of
freedom in the interaction coefficients can potentially result in enhanced flexibility
and programming capability over the linear case, as has been noted independently
by several authors recently [2,3,4,5,22,27]. Note that each $d$-wise product $u_{I \setminus \{i\}}$ is just
the parity of the corresponding d inputs, and by our earlier discussion, this can be
computed with d hidden units in one layer followed by a single threshold unit. Thus
the higher-order network can be realised by a network of depth three, where the first
hidden layer has $d\binom{n}{d}$ units, the second hidden layer has $\binom{n}{d}$ units, and there are
n output units which feed back into the n input units. Note that the weights from
the input to the first hidden layer, and the first hidden layer to the second are fixed
(computing the various d-wise products), and the weights from the second hidden
layer to the output are the coefficients $w_I$, which are free parameters.
These systems can be identified either with long-range interactions for higher-order spin glasses at zero temperature, or higher-order neural networks. Starting
from an arbitrary configuration or state, the system evolves asynchronously by a
sequence of single "spin" flips involving spins which are misaligned with the instantaneous "molecular field." The dynamics of these symmetric higher-order systems
are regulated analogously to the linear system by higher-order extensions of the classical quadratic Hamiltonian. We define the homogeneous algebraic Hamiltonian of degree $d$ by

$$H_d(u) = -\sum_{I \in I_d} w_I\, u_I. \qquad (2)$$
The algebraic Hamiltonians are functionals akin in behaviour to the classical
quadratic Hamiltonian as has been previously demonstrated [5].
Proposition 1 The functional $H_d$ is non-increasing under the evolution rule (1).
In the terminology of spin glasses, the state trajectories of these higher-order
networks can be seen to be following essentially a zero-temperature Monte Carlo
(or Glauber) dynamics. Because of the monotonicity of the algebraic Hamiltonians
given by equation (2) under the asynchronous evolution rule (1), the system always
reaches a stable state (fixed point) where the relation 1 is satisfied for each of the n
spins or neural states. The fixed points are hence the arbiters of system dynamics,
and determine the computational capacity of the system.
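The monotone descent asserted by Proposition 1 is easy to verify numerically; the sketch below (our own illustration, using the same dictionary-of-subsets weight encoding as above) runs the zero-temperature dynamics until a fixed point is reached:

```python
# Minimal sketch (our own illustration): equation (2) and zero-temperature
# asynchronous dynamics. By Proposition 1 the energy never increases, so the
# sweep below terminates at a fixed point of rule (1).
from itertools import combinations
import numpy as np

def hamiltonian(u, w):
    """H_d(u) = -sum over I in I_d of w_I * u_I, with u_I = prod_{i in I} u_i."""
    return -sum(wI * np.prod(u[list(I)]) for I, wI in w.items())

def run_to_fixed_point(u, w):
    u = np.array(u).copy()
    changed = True
    while changed:
        changed = False
        for i in range(len(u)):
            field = sum(wI * np.prod([u[j] for j in I if j != i])
                        for I, wI in w.items() if i in I)
            new_ui = 1 if field >= 0 else -1
            if new_ui != u[i]:
                e_before = hamiltonian(u, w)
                u[i] = new_ui
                assert hamiltonian(u, w) <= e_before + 1e-9  # Proposition 1
                changed = True
    return u  # a stable state (fixed point)

rng = np.random.default_rng(1)
n, d = 8, 2
w = {I: rng.normal() for I in combinations(range(n), d + 1)}
stable = run_to_fixed_point(rng.choice([-1, 1], size=n), w)
```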
System behaviour and applications are somewhat different depending on
whether the interactions are random or programmed. The case of random interactions lends itself to natural extensions of spin glass formulations, while programmed
interactions yield applications of higher-order extensions of neural network models.
We consider the two cases in turn.
3.1 PROGRAMMED INTERACTIONS
Here we query whether given sets of binary n-vectors can be stored as fixed points
by a suitable selection of interaction coefficients. If such sets of prescribed vectors
can be stored as stable states for some suitable choice of interaction coefficients,
then proposition 1 will ensure that the chosen vectors are at the bottom of "energy
wells" in the state space with each vector exercising a region of attraction around
it, all characteristics of a physical associative memory. In such a situation the
dynamical evolution of the network can be interpreted in terms of computations:
error-correction, nearest neighbour search and associative memory. Of importance
here is the maximum number of states that can be stored as fixed points for an
appropriate choice of algebraic threshold network. This represents the maximal
information storage capacity of such higher-order neural networks.
Let $d$ represent the degree of the algebraic threshold network. Let $u^{(1)}, \ldots, u^{(m)}$
be the $m$-set of vectors which we require to store as fixed points in a suitable algebraic threshold network. We will henceforth refer to these prescribed vectors as
memories. We define the storage capacity of an algebraic threshold network of degree d to be the maximal number m of arbitrarily chosen memories which can be
stored with high probability for appropriate choices of coefficients in the network.
Theorem 1 The maximal (algorithm independent) storage capacity of a homogeneous algebraic threshold network of degree $d$ is less than or equal to $2\binom{n}{d}$.
Generalised Sum of Outer-Products Rule: The classical Hebbian rule for the linear case $d = 1$ (cf. [11] and quoted references) can be naturally extended to networks of higher order. The coefficients $w_I$, $I \in I_d$, are constructed as the sum of generalised Kronecker outer-products,

$$w_I = \sum_{a=1}^{m} u_I^{(a)}.$$

Theorem 2 The storage capacity of the outer-product algorithm applied to a homogeneous algebraic threshold network of degree $d$ is less than or equal to $n^d / (2(d+1)\log n)$ (also cf. [15,27]).
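A direct implementation of this construction is only a few lines; the sketch below is our own illustration, not the paper's code:

```python
# Minimal sketch (our own illustration) of the generalised outer-product
# (Hebbian) rule: w_I = sum_{a=1}^{m} u_I^{(a)}, where u_I = prod_{i in I} u_i.
from itertools import combinations
import numpy as np

def outer_product_weights(memories, d):
    """memories: (m, n) array of +/-1 vectors to be stored as fixed points."""
    m, n = memories.shape
    return {I: sum(np.prod(memories[a, list(I)]) for a in range(m))
            for I in combinations(range(n), d + 1)}

rng = np.random.default_rng(2)
memories = rng.choice([-1, 1], size=(4, 8))   # m = 4 memories, n = 8 units
w = outer_product_weights(memories, d=2)
```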
Generalised Spectral Rule: For $d = 1$ the spectral rule amounts to iteratively projecting states orthogonally onto the linear space generated by $u^{(1)}, \ldots, u^{(m)}$, and then taking the closest point on the hypercube to this projection (cf. [27,28]). This approach can be extended to higher orders as we now describe.
Let $W$ denote the $n \times N(n,d)$ matrix of coefficients $w_I$ arranged lexicographically; i.e.,

$$W = \begin{pmatrix} w_{1;\,1,2,\ldots,d-1,d} & w_{1;\,2,3,\ldots,d,d+1} & \cdots & w_{1;\,n-d+1,n-d+2,\ldots,n-1,n} \\ w_{2;\,1,2,\ldots,d-1,d} & w_{2;\,2,3,\ldots,d,d+1} & \cdots & w_{2;\,n-d+1,n-d+2,\ldots,n-1,n} \\ \vdots & & & \vdots \\ w_{n;\,1,2,\ldots,d-1,d} & w_{n;\,2,3,\ldots,d,d+1} & \cdots & w_{n;\,n-d+1,n-d+2,\ldots,n-1,n} \end{pmatrix}$$

Note that the symmetry and the "zero-diagonal" nature of the interactions have been relaxed to increase capacity. Let $U$ be the $n \times m$ matrix of memories. Form the extended $N(n,d) \times m$ binary matrix ${}^{1}U = [\,{}^{1}u^{(1)} \cdots {}^{1}u^{(m)}\,]$, where

$${}^{1}u^{(a)} = \begin{pmatrix} u^{(a)}_{1,2,\ldots,d-1,d} \\ u^{(a)}_{1,2,\ldots,d-1,d+1} \\ \vdots \\ u^{(a)}_{n-d+1,n-d+2,\ldots,n-1,n} \end{pmatrix}$$

Let $\Lambda = \mathrm{dg}[\lambda^{(1)} \cdots \lambda^{(m)}]$ be an $m \times m$ diagonal matrix with positive diagonal terms. A generalisation of the spectral algorithm for choosing coefficients yields

$$W = U \Lambda\, {}^{1}U^{\dagger},$$

where ${}^{1}U^{\dagger}$ is the pseudo-inverse of ${}^{1}U$.
Theorem 3 The storage capacity of the generalised spectral algorithm is at best $\binom{n}{d}$.
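For completeness, here is a small numerical sketch of the generalised spectral construction (our own illustration; `lift` computes the vector of $d$-wise products written ${}^{1}u$ above, and NumPy's pseudo-inverse stands in for ${}^{1}U^{\dagger}$):

```python
# Minimal sketch (our own illustration) of the generalised spectral rule
# W = U Lambda (1U)^+, with (1U)^+ the pseudo-inverse of the lifted memories.
from itertools import combinations
import numpy as np

def lift(u, d):
    """All d-wise products u_I, I a d-subset, in lexicographic order."""
    return np.array([np.prod(u[list(I)])
                     for I in combinations(range(len(u)), d)])

def spectral_weights(memories, d, lam=None):
    m, n = memories.shape
    U = memories.T                                        # n x m
    liftU = np.stack([lift(memories[a], d) for a in range(m)], axis=1)
    Lam = np.diag(np.ones(m) if lam is None else lam)     # positive diagonal
    return U @ Lam @ np.linalg.pinv(liftU)                # n x N(n, d)

rng = np.random.default_rng(3)
W = spectral_weights(rng.choice([-1, 1], size=(3, 7)), d=2)
```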
3.2 RANDOM INTERACTIONS
We consider homogeneous algebraic threshold networks whose weights $w_I$ are i.i.d.
$N(0, 1)$ random variables. This is a natural generalisation to higher order of Ising
spin glasses with Gaussian interactions. We will show an asymptotic estimate for
the number of fixed points of the structure. Asymptotic results for the usual case
d = 1 of linear threshold networks with Gaussian interactions have been reported
in the literature [6,9,16].
For $i = 1, \ldots, n$ set

$$S_n^i = u_i \sum_{I \in I_d :\, i \in I} w_I\, u_{I \setminus \{i\}}.$$

For each $n$ the random variables $S_n^i$, $i = 1, \ldots, n$, are identically distributed, jointly Gaussian variables with zero mean, and variance $\sigma_n^2 = \binom{n-1}{d}$.

Definition 2 For any given $\beta \geq 0$, a state $u \in \mathbb{B}^n$ is $\beta$-strongly stable iff $S_n^i \geq \beta \sigma_n$ for each $i = 1, \ldots, n$.
The case $\beta = 0$ reverts to the usual case of fixed points. The parameter $\beta$ is
essentially a measure of how deep the well of attraction surrounding the fixed point
is. The following proposition asserts that a 0-1 law ("phase transition") governs
the expected number of fixed points which have wells of attraction above a certain
depth. Let $F_d(\beta)$ be the expected number of $\beta$-strongly stable states.

Theorem 4 Corresponding to each fixed interaction order $d$ there exists a positive constant $\beta_d$ such that as $n \to \infty$,

$$F_d(\beta) \longrightarrow \begin{cases} \infty & \text{if } \beta < \beta_d \\ C_d(\beta) & \text{if } \beta = \beta_d \\ 0 & \text{if } \beta > \beta_d, \end{cases}$$

where $k_d(\beta) > 0$ governs the (exponential) rate of growth in the first case, and $0 \leq C_d(\beta) < 1$; both are parameters depending solely on $\beta$ and the interaction order $d$.
4 CONCLUSION
CONCLUSION
In fine, it appears possible to design shallow, polynomial size threshold circuits
to compute a wide class of structured problems. The thesis that shallow circuits
compute more efficiently than deep circuits is borne out. For the particular case of
higher-order networks, all the garnered results appear to point in the same direction:
For neural networks of fixed degree $d$, the maximal number of programmable states is
essentially of the order of $n^d$. The total number of fixed points, however, appears to
be exponential in number (at least for the random interaction case), though almost
all of them have attraction wells of only constant depth.
References
[1] Y. S. Abu-Mostafa, "Number of synapses per neuron," in Analog VLSI and
Neural Systems, ed. C. Mead, Addison Wesley, 1987.
[2] P. Baldi, Some Contributions to the Theory of Neural Networks. Ph.D. Thesis, California Institute of Technology, June 1986.
[3] P. Baldi and S. S. Venkatesh, "Number of stable points for spin glasses and
neural networks of higher orders," Phys. Rev. Lett., vol. 58, pp. 913-916, 1987.
[4] P. Baldi and S. S. Venkatesh, "Fixed points of algebraic threshold networks,"
in preparation.
[5] H. H. Chen, et al, "Higher order correlation model of associative memory," in
Neural Networks for Computing. New York: AlP Conf. Proc., vol. 151, 1986.
[6] S. F. Edwards and F. Tanaka, "Analytical theory of the ground state properties
of a spin glass: I. ising spin glass," Jnl. Phys. F, vol. 10, pp. 2769-2778, 1980.
[7] G. C. Fox and S. W. Otto, "Concurrent Computations and the Theory of
Complex Systems," Caltech Concurrent Computation Program, March 1986.
[8] F. H. C. Crick and C. Asanuma, "Certain aspects of the anatomy and physiology
of the cerebral cortex," in Parallel Distributed Processing, vol. 2, eds. D. E.
Rumelhart and J. L. McClelland, pp. 333-371, MIT Press, 1986.
[9] D. J. Gross and M. Mezard, "The simplest spin glass," Nucl. Phys., vol. B240,
pp. 431-452, 1984.
[10] J. Håstad, "Almost optimal lower bounds for small depth circuits," Proc. 18-th
ACM STOC, pp. 6-20, 1986.
[11] J. J. Hopfield, "Neural networks and physical systems with emergent collective
computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558,
1982.
[12] J. S. Judd, "Complexity of connectionist learning with various node functions,"
Dept. of Computer and Information Science Technical Report, vol. 87-60, Univ.
of Massachusetts, Amherst, 1987.
[13] M. Kearns, M. Li, L. Pitt, and L. Valiant, "On the learnability of Boolean
formulae," Proc. 19-th ACM STOC, 1987.
[14] C. Koch, T. Poggio, and V. Torre, "Retinal ganglion cells: A functional interpretation of dendritic morphology," Phil. Trans. R. Soc. London, vol. B 288,
pp. 227-264, 1982.
[15] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, "The
capacity of the Hopfield associative memory," IEEE Trans. Inform. Theory,
vol. IT-33, pp. 461-482, 1987.
[16] R. J. McEliece and E. C. Posner, "The number of stable points of an infinite-range spin glass memory," JPL Telecomm. and Data Acquisition Progress Report, vol. 42-83, pp. 209-215, 1985.
[17] C. A. Mead (ed.), Analog VLSI and Neural Systems, Addison Wesley, 1987.
[18] N. Megiddo, "On the complexity of polyhedral separability," to appear in Jnl.
Discrete and Computational Geometry, 1987.
[19] S. Muroga, "Lower bounds on the number of threshold functions," IEEE Trans.
Elec. Comp., vol. 15, pp. 805-806, 1966.
[20] S. Muroga, Threshold Logic and its Applications, Wiley Interscience, 1971.
[21] V. N. Peled and B. Simeone, "Polynomial-time algorithms for regular setcovering and threshold synthesis," Discr. Appl. Math., vol. 12, pp. 57-69, 1985.
[22] D. Psaltis and C. H. Park, "Nonlinear discriminant functions and associative
memories," in Neural Networks for Computing. New York: AlP Conf. Proc.,
vol. 151, 1986.
[23] J. Reif, "On threshold circuits and polynomial computation," preprint.
[24] R. Smolensky, "Algebraic methods in the theory of lower bounds for Boolean
circuit complexity," Proc. 19-th ACM STOC, 1987.
[25] L. G. Valiant, "A theory of the learnable," Comm. ACM, vol. 27, pp. 1134-1142,
1984.
[26] L. G. Valiant, "Deductive learning," Phil. Trans. R. Soc. London, vol. A 312,
pp. 441-446, 1984.
[27] S. S. Venkatesh, Linear Maps with Point Rules: Applications to Pattern Classification and Associative Memory. Ph.D. Thesis, California Institute of Technology, Aug. 1986.
[28] S. S. Venkatesh and D. Psaltis, "Linear and logarithmic capacities in associative
neural networks," to appear IEEE Trans. Inform. Theory.
[29] S. S. Venkatesh, D. Psaltis, and J. Yu, private communication.
[30] R. O. Winder, "Bounds on threshold gate realisability," IRE Trans. Elec.
Comp., vol. EC-12, pp. 561-564, 1963.
[31] A. C.-C. Yao, "Separating the poly-time hierarchy by oracles," Proc. 26-th
IEEE FOCS, pp. 1-10, 1985.
6,191 | 660 | Learning Sequential Tasks by
Incrementally Adding Higher Orders
Mark Ring
Department of Computer Sciences, Taylor 2.124
University of Texas at Austin
Austin, Texas 78712
(ring@cs.utexas.edu)
Abstract
An incremental, higher-order, non-recurrent network combines two
properties found to be useful for learning sequential tasks: higherorder connections and incremental introduction of new units. The
network adds higher orders when needed by adding new units that
dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the
previous step, temporal tasks can be learned without the use of
feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into
the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over
recurrent networks.
1 INTRODUCTION
Second-order recurrent networks have proven to be very powerful [8], especially
when trained using complete back propagation through time [1, 6, 14]. It has also
been demonstrated by Fahlman that a recurrent network that incrementally adds
nodes during training (his Recurrent Cascade-Correlation algorithm [5]) can be
superior to non-incremental, recurrent networks [2,4, 11, 12, 15].
The incremental, higher-order network presented here combines advantages of both
of these approaches in a non-recurrent network. This network (a simplified, continuous version of that introduced in [9]), adds higher orders when they are needed
by the system to solve its task. This is done by adding new units that dynamically
modify connection weights. The new units modify the weights at the next time-step
with information from the last, which allows temporal tasks to be learned without
the use of feedback.
2 GENERAL FORMULATION
Each unit (U) in the network is either an input (I), output (O), or high-level (L)
unit.
$U_i(t)$: value of the $i$th unit at time $t$.
$I_i(t)$: $U_i(t)$ where $i$ is an input unit.
$O_i(t)$: $U_i(t)$ where $i$ is an output unit.
$T_i(t)$: target value for $O_i(t)$ at time $t$.
$L^i_{xy}(t)$: $U_i(t)$ where $i$ is the higher-order unit that modifies weight $w_{xy}$ at time $t$.¹

The output and high-level units are collectively referred to as non-input ($N$) units:

$$N_i(t) = \begin{cases} O_i(t) & \text{if } U_i = O_i \\ L^i_{xy}(t) & \text{if } U_i = L^i_{xy}. \end{cases}$$
In a given time-step, the output and high-level units receive a summed input from
the input units.
$$N_i(t) = \sum_j I_j(t)\, g(i, j, t). \qquad (1)$$
$g$ is a gating function representing the weight of a particular connection at a particular time-step. If there is a higher-order unit assigned to that connection, then the input value of that unit is added to the connection's weight at that time-step.²

$$g(i, j, t) = \begin{cases} w_{ij}(t) + L_{ij}(t-1) & \text{if } L_{ij} \text{ exists} \\ w_{ij}(t) & \text{otherwise} \end{cases} \qquad (2)$$
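Put together, equations 1 and 2 amount to the following forward step; this is a minimal sketch in Python (our own names and data layout, not code from the paper):

```python
# Minimal sketch (our own illustration) of one forward step: each non-input
# unit i sums its gated inputs per equations (1)-(2); an L unit assigned to
# connection (i, j) adds its previous activation to that connection's weight.
def forward_step(inputs, weights, L_assign, L_prev):
    """inputs[j] = I_j(t); weights[(i, j)] = w_ij(t);
    L_assign[(i, j)] = id of the L unit modifying w_ij, if any;
    L_prev[l] = activation of L unit l at time t-1. Returns {i: N_i(t)}."""
    N = {}
    for i in {i for (i, _) in weights}:
        total = 0.0
        for j, x in enumerate(inputs):
            g = weights.get((i, j), 0.0)           # equation (2)
            if (i, j) in L_assign:
                g += L_prev.get(L_assign[(i, j)], 0.0)
            total += x * g                         # equation (1)
        N[i] = total
    return N
```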
At each time-step, the values of the output units are calculated from the input units
and the weights (possibly modified by the activations of the high-level units from the
previous time-step). The values of the high-level units are calculated at the same
time in the same way. The output units generate the output of the network. The
high-level units simply alter the weights at the next time-step. All unit activations
can be computed simultaneously since the activations of the L units are not required
¹A connection may be modified by at most one L unit. Therefore $L^i$, $L_{xy}$, and $L^i_{xy}$ are identical but used as appropriate for notational convenience.

²It can be seen that this is a higher-order connection in the usual sense if one substitutes the right-hand side of equation 1 for $L_{ij}$ in equation 2 and then replaces $g$ in equation 1 with the result. In fact, as the network increases in height, ever higher orders are introduced, while lower orders are preserved.
until the following time-step. The network is arranged hierarchically in that every
higher-order unit is always higher in the hierarchy than the units on either side
of the weight it affects. Since higher-order units have no outgoing connections, the
network is not recurrent. It is therefore impossible for a high-level unit to affect,
directly or indirectly, its own input.
There are no hidden units in the traditional sense, and all units have a linear activation function. (This does not imply that non-linear functions cannot be represented,
since non-linearities do result from the multiplication of higher-level and input units
in equations 1 and 2.)
Learning is done through gradient descent to reduce the sum-squared error:

$$E(t) = \frac{1}{2} \sum_i \big(T_i(t) - O_i(t)\big)^2, \qquad \Delta w_{ij}(t) = -\eta\, \frac{\partial E(t)}{\partial w_{ij}(t)}, \qquad (3)$$

where $\eta$ is the learning rate. Since it may take several time-steps for the value of a weight to affect the network's output and therefore the error, equation 3 can be rewritten as:

$$\Delta w_{ij}(t) = -\eta\, \frac{\partial E(t)}{\partial w_{ij}(t - r^i)}, \qquad \text{where } r^i = \begin{cases} 0 & \text{if } U_i = O_i \\ 1 + r^x & \text{if } U_i = L^i_{xy}. \end{cases} \qquad (4)$$

The value $r^i$ is constant for any given unit $i$ and specifies how "high" in the hierarchy unit $i$ is. It therefore also specifies how many time-steps it takes for a change in unit $i$'s activation to affect the network's output.
Due to space limitations, the derivation of the gradient is not shown, but is given
elsewhere [10]. The resulting weight change rule, however, is:
$$\Delta w_{ij}(t) = I_j(t - r^i) \cdot \begin{cases} T_i(t) - O_i(t) & \text{if } U_i = O_i \\ \Delta w_{xy}(t) & \text{if } U_i = L^i_{xy} \end{cases} \qquad (5)$$
The weights are changed after error values for the output units have been collected.
Since each high-level unit is higher in the hierarchy than the units on either side
of the weight it affects, weight changes are made bottom up, and the $\Delta w_{xy}(t)$ in
equation 5 will already have been calculated at the time $\Delta w_{ij}(t)$ is computed.
The intuition behind the learning rule is that each high-level unit learns to utilize
the context from the previous time-step for adjusting the connection it influences
at the next time-step so that it can minimize the connection's error in that context.
Therefore, if the information necessary to decide the correct value of a connection
at one time-step is available at the previous time-step, then that information is used
by the higher-order unit assigned to that connection. If the needed information is
not available at the previous time-step, then new units may be built to look for
the information at still earlier steps. This method concentrating on unexpected
events is similar to the "hierarchy of decisions" of Dawkins [3], and the "history
compression" of Schmidhuber [13].
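The bottom-up computation of equation 5 can be sketched as follows (our own illustration, not the paper's code; `child[i]`, naming the connection modified by L unit $i$, is an identifier we introduce for the example):

```python
# Minimal sketch (our own illustration) of the weight update in equation (5),
# computed bottom-up so that Delta w_xy is available before any L unit that
# modifies w_xy computes its own update.
def weight_updates(units, targets, outputs, input_hist, r, is_output, child, t):
    """units: non-input unit ids ordered bottom-up; r[i]: height of unit i;
    child[i]: connection (x, y) modified by L unit i; input_hist[s][j] = I_j(s).
    Returns dw[(i, j)] = Delta w_ij(t)."""
    dw = {}
    for i in units:
        # Output units use the prediction error; L units inherit the update
        # already computed for the connection they modify.
        err = (targets[i] - outputs[i]) if is_output[i] else dw[child[i]]
        past_inputs = input_hist[t - r[i]]
        for j, I_j in enumerate(past_inputs):
            dw[(i, j)] = I_j * err
    return dw
```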
3 WHEN TO ADD NEW UNITS
A unit is added whenever a weight is being pulled strongly in opposite directions
(i.e. when learning is forcing the weight to increase and to decrease at the same
time) . The unit is created to determine the contexts in which the weight is pulled
in each direction. This is done in the following way: Two long-term averages are
kept for each connection. The first of these records the average change made to the
weight,
~Wij(t) = O'~Wij(t) + (1 - O')~Wij(t - 1); 0 S; 0' S; l.
The second is the long-term mean absolute deviation, given by:
The parameter, 0', specifies the duration of the long-term average. A lower value
of 0' means that the average is kept for a longer period of time. When ~Wij(t) is
small, but I~Wij(t)1 is large, then the weight is being pulled strongly in conflicting
directions, and a new unit is built.
$$\text{if} \quad \frac{\overline{|\Delta w_{ij}|}(t)}{\epsilon + |\overline{\Delta w}_{ij}(t)|} > \theta \quad \text{then build unit } L^{N+1}_{ij},$$

where $\epsilon$ is a small constant that keeps the denominator from being zero, $\theta$ is a threshold value, and $N$ is the number of units in the network. A related method for adding new units in feed-forward networks was introduced by Wynne-Jones [16].
When a new unit is added, its incoming weights are initially zero. It has no output
weights but simply learns to anticipate and reduce the error at each time-step of
the weight it modifies. In order to keep the number of new units low, whenever a
unit, Lij is created, the statistics for all connections into the destination unit (U i )
are reset: I~Wij(t)1 ~ 0.0 and ~Wij(t) ~ 1.0.
4 RESULTS
The Reber grammar is a small finite-state grammar of the following form:
[Figure: state diagram of the Reber grammar; nodes are connected by arcs labeled B, T, S, X, V, P, and E.]
Transitions from one node to the next are made by way of the labeled arcs. The
task of the network is: given as input the label of the arc just traversed, predict
Elman
Network
Sequences Seen:
"Hidden" Units
Mean
Best
20,000
15
RTRL
19,000
2
Recurrent
Cascade
Correlation
25,000
2-3
Incremental
Higher-Order
Network
206
176
40
Table 1: The incremental higher-order network is compared against recurrent networks on the Reber grammar. The results for the recurrent networks are quoted
from other sources [2, 5]. The mean and/or best performance is shown when available. RTRL is the real-time recurrent learning algorithm [15].
the arc that will be traversed next. A training sequence, or string, is generated
by starting with a B transition and then randomly choosing an arc leading away
from the current state until the final state is reached. Both inputs and outputs are
encoded locally, so that there are seven output units (one each for B, T, S, X, V, P,
and E) and eight input units (the same seven plus one bias unit). The network is
considered correct if its highest activated outputs correspond to the arcs that can be
traversed from the current state. Note that the current state cannot be determined
from the current input alone.
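For reference, a generator for such training strings is only a few lines; the transition table below is one standard formulation of the Reber grammar and is our reconstruction, not copied from the paper:

```python
# Minimal sketch: generate Reber-grammar strings. The state/arc table is one
# standard formulation of the grammar (our reconstruction).
import random

REBER = {
    0: [('B', 1)],
    1: [('T', 2), ('P', 3)],
    2: [('S', 2), ('X', 4)],
    3: [('T', 3), ('V', 5)],
    4: [('X', 3), ('S', 6)],
    5: [('P', 4), ('V', 6)],
    6: [('E', None)],
}

def reber_string():
    state, out = 0, []
    while state is not None:
        symbol, state = random.choice(REBER[state])
        out.append(symbol)
    return ''.join(out)

print(reber_string())   # e.g. 'BTSSXXTVVE'
```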
An Elman-type recurrent network was able to learn this task after 20,000 string
presentations using 15 hidden units [2]. (The correctness criterion for the Elman
net was slightly more stringent than that described in the previous paragraph.)
Recurrent Cascade-Correlation (RCC) was able to learn this task using only two or
three hidden units in an average of 25,000 string presentations [5].
The incremental, higher-order network was trained on a continuous stream of input:
the network was not reset before beginning a new string. Training was considered
to be complete only after the network had correctly classified 100 strings in a row.
Using this criterion, the network completed training after an average of 206.3 string
presentations with a standard deviation of 16.7. It achieved perfect generalization
on test sets of 128 randomly generated strings in all ten runs. Because the Reber
grammar is stochastic, a ceiling of 40 higher-order units was imposed on the network
to prevent it from continually creating new units in an attempt to outguess the
random number generator.
Complete results for the network on the Reber grammar task are given in table 1.
The parameter settings were: $\eta = 0.04$, $\sigma = 0.08$, $\epsilon = 1.0$, $\theta = 0.1$ and bias $= 0.0$.
(The network seemed to perform better with no bias unit.)
The network has also been tested on the "variable gap" tasks introduced by Mozer
[7], as shown in figure 1. These tasks were intended to test performance of networks
over long time-delays. Two sequences are alternately presented to the network.
Each sequence begins with an X or a Y and is followed by a fixed string of characters
with an X or a Y inserted some number of time-steps from the beginning. In
figure 1 the number of time-steps, or "gap", is 2. The only difference between the
two sequences is that the first begins with an X and repeats the X after the gap,
while the second begins with a Y and repeats the Y after the gap. The network
must learn to predict the next item in the sequence given the current item as input
Time-step:    0   1   2   3   4   5   6   7   8   9   10   11   12
Sequence 1:   X   a   b   X   c   d   e   f   g   h   i    j    k
Sequence 2:   Y   a   b   Y   c   d   e   f   g   h   i    j    k
Figure 1: An example of a "variable gap" training sequence [7]. One item is presented to the network at each time-step. The target is the next item in the sequence.
Here the "gap" is two, because there are two items in the sequence between the first
X or Y and the second X or Y . In order to correctly predict the second X or Y, the
network must remember how the sequence began.
(where all inputs are locally encoded). In order for the network to predict the
second occurrence of the X or Y, it must remember how the sequence began. The
length of the gap can be increased in order to create tasks of greater difficulty.
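Generating such a pair of sequences is trivial; a sketch (our own illustration, with a hypothetical filler alphabet):

```python
# Minimal sketch (our own illustration) of a "variable gap" sequence pair:
# a fixed filler string with the initial X or Y re-inserted `gap` steps in.
def gap_sequences(gap, filler='abcdefghijk'):
    pair = []
    for head in ('X', 'Y'):
        body = list(filler[:gap]) + [head] + list(filler[gap:])
        pair.append([head] + body)
    return pair

seq1, seq2 = gap_sequences(gap=2)
# seq1 == ['X', 'a', 'b', 'X', 'c', ..., 'k']; the target at step t is item t+1
```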
Results of the "gap" tasks are given in table 2. The values for the standard recurrent
network and for Mozer's own variation are quoted from Mozer's paper [7]. The
incremental higher-order net had no difficulty with gaps up to 24, which was the
largest gap I tested. The same string was used for all tasks (except for the position
of the second X or Y), and had no repeated characters (again with the exception
of the X and Y). The network continued to scale linearly with every gap size both
in terms of units and epochs required for training. Because these tasks are not
stochastic, the network always stopped building units as soon as it had created
those necessary to solve each task.
The parameter settings were: $\eta = 1.5$, $\sigma = 0.2$, $\epsilon = 1.0$, $\theta = 0.1$ and bias $= 0.0$.
The network was considered to have correctly predicted an element in the sequence
if the most strongly activated output unit was the unit representing the correct
prediction. The sequence was considered correctly predicted if all elements (other
than the initial X or Y) were correctly predicted.
       Mean number of training sets required by:
Gap    Standard          Mozer      Incremental           Units
       Recurrent Net     Network    Higher-Order Net      Created
 2     468               328        4                     10
 4     7406              584        6                     15
 6     9830              992        8                     19
 8     > 10000           1312       10                    23
10     > 10000           1630       12                    27
24     --                --         26                    49
Table 2: A comparison on the "gap" tasks of a standard recurrent-network and
a network devised specifically for long time-delays (quoted from Mozer [7], who
reported results for gaps up to ten) against an incremental higher-order network.
The last column is the number of units created by the incremental higher-order net.
5 CONCLUSIONS
The incremental higher-order network performed much better than the networks
that it was compared against on these tiny tests. A few caveats are in order,
however. First, the parameters given for the tasks above were customized for those
tasks. Second, the network may add a large number of new units if it contains
many context-dependent events or if it is inherently stochastic. Third, though the
network in principle can build an ever larger hierarchy that searches further and
further back in time for a context that will predict what a connection's weight
should be, many units may be needed to bridge a long time-gap. Finally, once a
bridge across a time-delay is created, it does not generalize to other time-delays.
On the other hand, the network learns very fast due to its simple structure that
adds high-level units only when needed. Since there is no feedback (i.e. no unit ever
produces a signal that will ever feed back to itself), learning can be done without
back propagation through time. Also, since the outputs and high-level units have a
fan-in equal to the number of inputs only, the number of connections in the system
is much smaller than the number of connections in a traditional network with the
same number of hidden units.
Finally, the network can be thought of as a system of continuous-valued condition-action rules that are inserted or removed depending on another set of such rules
that are in turn inserted or removed depending on another set, etc. When new rules
(new units) are added, they are initially invisible to the system, (i.e., they have no
effect), but only gradually learn to have an effect as the opportunity to decrease
error presents itself.
Acknowledgements
This work was supported by NASA Johnson Space Center Graduate Student Researchers Program training grant, NGT 50594. I would like to thank Eric Hartman,
Kadir Liano, and my advisor Robert Simmons for useful discussions and helpful
comments on drafts of this paper. I would also like to thank Pavilion Technologies,
Inc. for their generous contribution of computer time and office space required to
complete much of this work.
References
[1] Jonathan Richard Bachrach. Connectionist Modeling and Control of Finite
State Environments. PhD thesis, Department of Computer and Information
Sciences, University of Massachusetts, February 1992.
[2] Axel Cleeremans, David Servan-Schreiber, and James L. McClelland. Finite
state automata and simple recurrent networks. Neural Computation, 1(3):372381, 1989.
[3] Richard Dawkins. Hierarchical organisation: a candidate principle for ethology.
In P. P. G. Bateson and R. A. Hinde, editors, Growing Points in Ethology, pages
7-54, Cambridge, 1976. Cambridge University Press.
[4] Jeffrey L. Elman. Finding structure in time. CRL Technical Report 8801,
University of California, San Diego, Center for Research in Language, April
1988.
[5] Scott E. Fahlman. The recurrent cascade-correlation architecture. In R. P.
Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural
Information Processing Systems 3, pages 190-196, San Mateo, California, 1991.
Morgan Kaufmann Publishers.
[6] C. L. Giles, C. B. Miller, D. Chen, G. Z. Sun, H. H. Chen, and Y. C. Lee.
Extracting and learning an unknown grammar with recurrent neural networks.
In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors, Advances in Neural
Information Processing Systems 4, pages 317-324, San Mateo, California, 1992.
Morgan Kaufmann Publishers.
[7] Michael C. Mozer. Induction of multiscale temporal structure. In John E.
Moody, Steven J. Hanson, and Richard P. Lippmann, editors, Advances in
Neural Information Processing Systems 4, pages 275-282, San Mateo, California, 1992. Morgan Kaufmann Publishers.
[8] Jordan B. Pollack. The induction of dynamical recognizers. Machine Learning,
7:227-252, 1991.
[9] Mark B. Ring. Incremental development of complex behaviors through automatic construction of sensory-motor hierarchies. In Lawrence A. Birnbaum
and Gregg C. Collins, editors, Machine Learning: Proceedings of the Eighth International Workshop (ML91), pages 343-347. Morgan Kaufmann Publishers,
June 1991.
[10] Mark B. Ring. Sequence learning with incremental higher-order neural networks. Technical Report AI 93-193, Artificial Intelligence Laboratory, University of Texas at Austin, January 1993.
[11] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation
network. Technical Report CUED/F-INFENG/TR.1, Cambridge University
Engineering Department, 1987.
[12] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland,
editors, Parallel Distributed Processing: Explorations in the Microstructure of
Cognition, Vol. 1: Foundations. MIT Press, 1986.
[13] Jürgen Schmidhuber. Learning unambiguous reduced sequence descriptions.
In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors, Advances in Neural
Information Processing Systems 4, pages 291-298, San Mateo, California, 1992.
Morgan Kaufmann Publishers.
[14] Raymond L. Watrous and Gary M. Kuhn. Induction of finite-state languages
using second-order recurrent networks. In J. E. Moody, S. J. Hanson, and
R. P. Lippman, editors, Advances in Neural Information Processing Systems
4, pages 309-316, San Mateo, California, 1992. Morgan Kaufmann Publishers.
[15] Ronald J. Williams and David Zipser. A learning algorithm for continually
running fully recurrent neural networks. Neural Computation, 1(2):270-280,
1989.
[16] Mike Wynne-Jones. Node splitting: A constructive algorithm for feed-forward
neural networks. Neural Computing and Applications, 1(1):17-22, 1993.
6,192 | 6,600 | Unsupervised Learning of 3D Structure from Images
Danilo Jimenez Rezende*
[email protected]
S. M. Ali Eslami*
[email protected]
Peter Battaglia*
[email protected]
Shakir Mohamed*
[email protected]
Max Jaderberg*
[email protected]
Nicolas Heess*
[email protected]
* Google DeepMind
Abstract
A key goal of computer vision is to recover the underlying 3D structure that gives
rise to 2D observations of the world. If endowed with 3D understanding, agents
can abstract away from the complexity of the rendering process to form stable,
disentangled representations of scene elements. In this paper we learn strong
deep generative models of 3D structures, and recover these structures from 2D
images via probabilistic inference. We demonstrate high-quality samples and
report log-likelihoods on several datasets, including ShapeNet [2], and establish
the first benchmarks in the literature. We also show how these models and their
inference networks can be trained jointly, end-to-end, and directly from 2D images
without any use of ground-truth 3D labels. This demonstrates for the first time
the feasibility of learning to infer 3D representations of the world in a purely
unsupervised manner.
1 Introduction
We live in a three-dimensional world, yet our observations of it are typically in the form of two-dimensional projections that we capture with our eyes or with cameras. A key goal of computer
vision is that of recovering the underlying 3D structure that gives rise to these 2D observations.
The 2D projection of a scene is a complex function of the attributes and positions of the camera, lights
and objects that make up the scene. If endowed with 3D understanding, agents can abstract away
from this complexity to form stable, disentangled representations, e.g., recognizing that a chair is a
chair whether seen from above or from the side, under different lighting conditions, or under partial
occlusion. Moreover, such representations would allow agents to determine downstream properties
of these elements more easily and with less training, e.g., enabling intuitive physical reasoning about
the stability of the chair, planning a path to approach it, or figuring out how best to pick it up or sit on
it. Models of 3D representations also have applications in scene completion, denoising, compression
and generative virtual reality.
There have been many attempts at performing this kind of reasoning, dating back to the earliest years
of the field. Despite this, progress has been slow for several reasons: First, the task is inherently ill-posed. Objects always appear under self-occlusion, and there are an infinite number of 3D structures
that could give rise to a particular 2D observation. The natural way to address this problem is by
learning statistical models that recognize which 3D structures are likely and which are not. Second,
even when endowed with such a statistical model, inference is intractable. This includes the sub-tasks
of mapping image pixels to 3D representations, detecting and establishing correspondences between
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
different images of the same structures, and that of handling the multi-modality of the representations
in this 3D space. Third, it is unclear how 3D structures are best represented, e.g., via dense volumes
of voxels, via a collection of vertices, edges and faces that define a polyhedral mesh, or some other
kind of representation. Finally, ground-truth 3D data is difficult and expensive to collect and therefore
datasets have so far been relatively limited in size and scope.
[Figure 1: Motivation: The 3D representation of a 2D image is ambiguous and multi-modal. We achieve such reasoning by learning a generative model of 3D structures, and recover this structure from 2D images via probabilistic inference. Panels: 2D input; 3D interpretation.]

In this paper we introduce a family of generative models of 3D structures and recover these structures from 2D images via probabilistic inference. Learning models of 3D structures directly from pixels has been a long-standing research problem and a number of approaches with different levels of underlying assumptions and feature engineering have been proposed. Traditional approaches to vision as inverse graphics [20, 17, 19] and analysis-by-synthesis [23, 27, 16, 28] rely on heavily engineered visual features with which inference of object properties such as shape and pose is substantially simplified. More recent work [16, 4, 3, 30] addresses some of these limitations by learning parts of the encoding-decoding pipeline depicted in figure 2 in separate stages. Concurrent to our work, [10] also develops a generative model of volumetric data based on adversarial methods. We discuss other related work in A.1. Unlike existing approaches, our approach is one of
the first to learn 3D representations in an unsupervised, end-to-end manner, directly from 2D images.
Our contributions are as follows. (a) We design a strong generative model of 3D structures, defined
over the space of volumes and meshes, combining ideas from state-of-the-art generative models
of images [7]. (b) We show that our models produce high-quality samples, can effectively capture
uncertainty and are amenable to probabilistic inference, allowing for applications in 3D generation
and simulation. We report log-likelihoods on a dataset of shape primitives, a 3D version of MNIST,
and on ShapeNet [2], which to the best of our knowledge, constitutes the first quantitative benchmark
for 3D density modeling. (c) We show how complex inference tasks, e.g., that of inferring plausible
3D structures given a 2D image, can be achieved using conditional training of the models. We
demonstrate that such models recover 3D representations in one forward pass of a neural network
and they accurately capture the multi-modality of the posterior. (d) We explore both volumetric
and mesh-based representations of 3D structure. The latter is achieved by flexible inclusion of
off-the-shelf renderers such as OpenGL [22]. This allows us to build in further knowledge of the
rendering process, e.g., how light bounces off surfaces and interacts with a material's attributes. (e)
We show how the aforementioned models and inference networks can be trained end-to-end directly
from 2D images without any use of ground-truth 3D labels. This demonstrates for the first time the
feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
2 Conditional Generative Models
In this section we develop our framework for learning models of 3D structure from volumetric data
or directly from images. We consider conditional latent variable models, structured as in figure 2
(left). Given an observed volume or image x and a context c, we wish to infer a corresponding
3D representation h (which can be a volume or a mesh). This is achieved by modelling the latent
manifold of object shapes and poses via the low-dimensional codes z. The context is any quantity
that is always observed at both train- and test-time, and it conditions all computations of inference
and generation (see figure 2, middle). In our experiments, context is either 1) nothing, 2) an object
class label, or 3) one or more views of the scene from different cameras.
Our models employ a generative process which consists of first generating a 3D representation h
(figure 2, middle) and then projecting to the domain of the observed data (figure 2, right). For instance,
the model will first generate a volume or mesh representation of a scene or object and then render it
down using a convolutional network or an OpenGL renderer to form a 2D image.
Generative models with latent variables describe probability densities $p(x)$ over datapoints $x$ implicitly through a marginalization of the set of latent variables $z$, $p(x) = \int p_\theta(x|z)\, p(z)\, dz$. Flexible models can be built by using multiple layers of latent variables, where each layer specifies a conditional distribution parameterized by a deep neural network. Examples of such models include
[12, 15, 24]. The marginal likelihood p(x) is intractable and we must resort to approximations.
2
Figure 2: Proposed framework: Left: Given an observed volume or image x and contextual
information c, we wish to infer a corresponding 3D representation h (which can be a volume or a
mesh). This is achieved by modeling the latent manifold of object shapes via the low-dimensional
codes z. In experiments we will consider unconditional models (i.e., no context), as well as models
where the context c is class or one or more 2D views of the scene. Right: We train a context-conditional inference network (red) and object model (green). When ground-truth volumes are
available, they can be trained directly. When only ground-truth images are available, a renderer is
required to measure the distance between an inferred 3D representation and the ground-truth image.
We opt for variational approximations [13], in which we bound the marginal likelihood $p(x)$ by $\mathcal{F} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \mathrm{KL}[q_\phi(z|x) \,\|\, p(z)]$, where the true posterior distribution is approximated by a parametric family of posteriors $q_\phi(z|x)$ with parameters $\phi$. Learning involves joint optimization of the variational parameters $\phi$ and model parameters $\theta$. In this framework, we can think of the generative model as a decoder of the latent variables, and the inference network as an encoder of the observed data into the latent representation. Gradients of $\mathcal{F}$ are estimated using path-wise derivative estimators (the "reparameterization trick") [12, 15].
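A one-sample Monte Carlo estimate of this bound, with the reparameterization trick made explicit, looks as follows (a minimal NumPy sketch, our own illustration rather than the paper's implementation):

```python
# Minimal sketch (our own illustration): single-sample estimate of
# F = E_q[log p_theta(x|z)] - KL[q_phi(z|x) || p(z)] for a Gaussian encoder
# q_phi(z|x) = N(mu, diag(exp(log_var))) and a standard-normal prior.
import numpy as np

def elbo_estimate(x, mu, log_var, decoder_log_lik,
                  rng=np.random.default_rng()):
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps          # reparameterization trick
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return decoder_log_lik(x, z) - kl             # stochastic lower bound
```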
2.1 Architectures
We build on recent work on sequential generative models [7, 11, 6] by extending them to operate on
different 3D representations. This family of models generates the observed data over the course
of T computational steps. More precisely, these models operate by sequentially transforming
independently generated Gaussian latent variables into refinements of a hidden representation h,
which we refer to as the "canvas". The final configuration of the canvas, $h_T$, is then transformed into
the target data x (e.g. an image) through a final smooth transformation. In our framework, we refer
to the hidden representation $h_T$ as the "3D representation" since it will have a special form that is
amenable to 3D transformations. This generative process is described by the following equations:
(1)
;
?
)
(2)
1 r
Hidden state st = fstate (st 1 , zt , et ; ?s ) (3)
Latents zt ? N (?|0, 1)
3D representation ht = fwrite (st , ht
1 ; ?w )
? = Proj(hT , sT ; ?p )
2D projection x
Encoding et = fread (c, st
Observation x ? p(x|?
x).
(4)
(5)
(6)
Each step generates an independent set of K-dimensional latent variables $z_t$ (equation 1). We use a fully-connected long short-term memory network (LSTM, [8]) as the transition function $f_{\mathrm{state}}(s_{t-1}, z_t, e_t; \theta_s)$.
The context encoder $f_{\mathrm{read}}(c, s_{t-1}; \theta_r)$ is task dependent; we provide further details in section 3.
When using a volumetric latent 3D representation, the representation update function
$f_{\mathrm{write}}(s_t, h_{t-1}; \theta_w)$ in equation (4) is parameterized by a volumetric spatial transformer (VST, [9]).
More precisely, we set $f_{\mathrm{write}}(s_t, h_{t-1}; \theta_w) = \mathrm{VST}(g_1(s_t), g_2(s_t))$, where $g_1$ and $g_2$ are MLPs that
take the state $s_t$ and map it to appropriate sizes. More details about the VST are provided in
appendix A.3. When using a mesh 3D representation, $f_{\mathrm{write}}$ is a fully-connected MLP.
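As a rough illustration of equations (1)-(6), the following sketch runs the sequential generative recurrence end to end with placeholder modules: a tanh state update and an additive canvas write stand in for the LSTM and VST, a zero context encoder stands in for $f_{\mathrm{read}}$, and the identity projection stands in for the renderer.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(T, K, h_shape, theta, read, proj):
    """Run the sequential generative process of equations (1)-(6).

    theta: dict of parameter matrices; read/proj: context encoder and renderer.
    Returns the final canvas h_T and the rendered output x_hat.
    """
    s = np.zeros(theta["Ws"].shape[0])           # hidden state s_0
    h = np.zeros(h_shape)                        # canvas h_0
    for t in range(T):
        z = rng.standard_normal(K)               # (1) latents z_t ~ N(0, I)
        e = read(s)                              # (2) encoding e_t = f_read(c, s_{t-1})
        s = np.tanh(theta["Ws"] @ np.concatenate([s, z, e]))   # (3) f_state
        h = h + (theta["Ww"] @ s).reshape(h_shape)             # (4) f_write (additive)
    return h, proj(h, s)                         # (5) x_hat = Proj(h_T, s_T)

# toy usage: 8-step model, 3-dim latents, flat 4x4 "volume", identity projection
S, K, E = 16, 3, 5
theta = {"Ws": rng.standard_normal((S, S + K + E)) * 0.1,
         "Ww": rng.standard_normal((16, S)) * 0.1}
read = lambda s: np.zeros(E)                     # "no context" encoder
proj = lambda h, s: h                            # 3D -> 3D identity projection
h_T, x_hat = generate(T=8, K=K, h_shape=(4, 4), theta=theta, read=read, proj=proj)
print(x_hat.shape)
```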
The function $\mathrm{Proj}(h_T, s_T)$ is a projection operator from the model's latent 3D representation $h_T$ to
the training data's domain (which in our experiments is either a volume or an image) and plays the
role of a "renderer". The conditional density $p(x|\hat{x})$ is either a diagonal Gaussian (for real-valued
data) or a product of Bernoulli distributions (for binary data). We denote the set of all parameters
of this generative model as $\theta = \{\theta_r, \theta_w, \theta_s, \theta_p\}$. Details of the inference model and the variational
bound are provided in appendix A.2.
[Figure 3 panels: volume $h_T$ (D×H×W) mapped by a VST to a volume $\hat{x}$ (D×H×W); volume $h_T$ (F×D×H×W) with camera $s_T$ mapped by a VST and 3D convolutions to an image $\hat{x}$ (1×H×W); mesh $h_T$ (3×M) with camera $s_T$ mapped to an image $\hat{x}$ (3×H×W).]
Figure 3: Projection operators: These drop-in modules relate a latent 3D representation with the
training data. The choice of representation and the type of available training data determine which
operator should be used. Left: Volume-to-volume projection (no parameters). Middle: Volume-to-image neural projection (learnable parameters). Right: Mesh-to-image OpenGL projection (no
learnable parameters).
Here we discuss the projection operators in detail. These drop-in modules relate a latent 3D representation with the training data. The choice of representation (volume or mesh) and the type of available
training data (3D or 2D) determine which operator is used.
3D → 3D projection (identity): In cases where training data is already in the form of volumes (e.g.,
in medical imagery, volumetrically rendered objects, or videos), we can directly define the likelihood
density $p(x|\hat{x})$, and the projection operator is simply the identity $\hat{x} = h_T$ (see figure 3, left).
3D → 2D neural projection (learned): In most practical applications we only have access to images
captured by a camera. Moreover, the camera pose may be unknown or partially known. For these
cases, we construct and learn a map from an F -dimensional volume hT to the observed 2D images
by combining the VST with 3D and 2D convolutions. When multiple views from different positions
are simultaneously observed, the projection operator is simply cloned as many times as there are
target views. The parameters of the projection operator are trained jointly with the rest of the model.
This operator is depicted in figure 3 (middle). For details see appendix A.4.
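A toy version of this operator can be sketched as follows: a nearest-neighbor affine resampling stands in for the VST, and a weighted sum over the depth axis stands in for the learned convolutional flattening. All shapes and the single yaw parameter are illustrative assumptions, not the operator of appendix A.4.

```python
import numpy as np

def neural_projection(volume, yaw, weights):
    """Toy volume-to-image projection: rotate the volume about the vertical axis
    (nearest-neighbor resampling, standing in for the VST's affine transform),
    then flatten depth with a learned weighted sum over the depth axis."""
    D, H, W = volume.shape
    c = (np.array([D, H, W]) - 1) / 2.0
    out = np.zeros_like(volume)
    cos, sin = np.cos(yaw), np.sin(yaw)
    for d in range(D):
        for w in range(W):
            # rotate (d, w) coordinates around the volume center
            ds, ws = d - c[0], w - c[2]
            src_d = int(round(cos * ds - sin * ws + c[0]))
            src_w = int(round(sin * ds + cos * ws + c[2]))
            if 0 <= src_d < D and 0 <= src_w < W:
                out[d, :, w] = volume[src_d, :, src_w]
    # learned flattening: weighted sum over depth -> H x W image
    return np.tensordot(weights, out, axes=([0], [0]))

rng = np.random.default_rng(0)
vol = (rng.random((8, 8, 8)) > 0.8).astype(float)
img = neural_projection(vol, yaw=np.pi / 4, weights=rng.random(8) / 8)
print(img.shape)   # (8, 8)
```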
3D → 2D OpenGL projection (fixed): When working with a mesh representation, the projection
operator in equation (5) is a complex map from the mesh description h provided by the generative
model to the rendered images $\hat{x}$. In our experiments we use an off-the-shelf OpenGL renderer and
treat it as a black-box with no parameters. This operator is depicted in figure 3 (right).
A challenge in working with black-box renderers is that of back-propagating errors from the image
to the mesh. This requires either a differentiable renderer [19], or resorting to gradient estimation
techniques such as finite differences [5] or Monte Carlo estimators [21, 1]. We opt for a scheme
based on REINFORCE [26], details of which are provided in appendix A.5.
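As a sketch of how such a score-function estimator can be wired around a non-differentiable renderer (the exact estimator is in appendix A.5), the following assumes a Gaussian distribution over the mesh parameters and a squared-error reconstruction reward; the toy "renderer" here is just an elementwise square.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_grad(mu, sigma, render, target, n_samples=64):
    """Score-function (REINFORCE) gradient of
    E_{m ~ N(mu, sigma^2 I)}[ -||render(m) - target||^2 ] with respect to mu.
    render can be an arbitrary black box (e.g., an OpenGL renderer)."""
    grads = np.zeros_like(mu)
    samples, rewards = [], []
    for _ in range(n_samples):
        m = mu + sigma * rng.standard_normal(mu.shape)
        samples.append(m)
        rewards.append(-np.sum((render(m) - target) ** 2))
    baseline = np.mean(rewards)                   # variance-reduction baseline
    for m, r in zip(samples, rewards):
        # grad of log N(m; mu, sigma^2 I) w.r.t. mu is (m - mu) / sigma^2
        grads += (r - baseline) * (m - mu) / sigma ** 2
    return grads / n_samples

# toy usage: the "renderer" squares the parameters; recover mu whose render
# matches the target
render = lambda m: m ** 2
target = np.array([1.0, 4.0])
mu = np.array([0.5, 0.5])
for _ in range(500):
    mu += 0.01 * reinforce_grad(mu, sigma=0.1, render=render, target=target)
print(mu)   # approaches [1, 2] (up to sign and Monte Carlo noise)
```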
3 Experiments
We demonstrate the ability of our model to learn and exploit 3D scene representations in five
challenging tasks. These tasks establish it as a powerful, robust and scalable model that can provide high-quality generations of 3D scenes, can robustly be used as a tool for 3D scene completion,
can be adapted to provide class-specific or view-specific generations that allow variations in scenes to
be explored, can synthesize multiple 2D scenes to form a coherent understanding of a scene, and can
operate with complex visual systems such as graphics renderers. We explore four data sets:
Necker cubes The Necker cube is a classical psychological test of the human ability for 3D and
spatial reasoning. This is the simplest dataset we use and consists of 40 × 40 × 40 volumes with a
10 × 10 × 10 wire-frame cube drawn at a random orientation at the center of the volume [25].
Primitives The volumetric primitives are of size 30 × 30 × 30. Each volume contains a simple
solid geometric primitive (e.g., cube, sphere, pyramid, cylinder, capsule or ellipsoid) that undergoes
random translations ([0, 20] pixels) and rotations ([−π, π] radians).
MNIST3D We extended the MNIST dataset [18] to create a 30 × 30 × 30 volumetric dataset by
extruding the MNIST images. The resulting dataset has the same number of images as MNIST. The
data is then augmented with random translations ([0, 20] pixels) and rotations ([−π, π] radians) that
are procedurally applied during training.
Figure 4: A generative model of volumes: For each dataset we display 9 samples from the model.
The samples are sharp and capture the multi-modality of the data. Left: Primitives (trained with
translations and rotations). Middle: MNIST3D (translations and rotations). Right: ShapeNet (trained
with rotations only). Videos of these samples can be seen at https://goo.gl/9hCkxs.
Figure 5: Probabilistic volume completion (Necker Cube, Primitives, MNIST3D): Left: Full
ground-truth volume. Middle: First few steps of the MCMC chain completing the missing left half of
the data volume. Right: 100th iteration of the MCMC chain. Best viewed on a screen. Videos of
these samples can be seen at https://goo.gl/9hCkxs.
ShapeNet The ShapeNet dataset [2] is a large dataset of 3D meshes of objects. We experiment with
a 40-class subset of the dataset, commonly referred to as ShapeNet40. We render each mesh as a
binary 30 × 30 × 30 volume.
For all experiments we used LSTMs with 300 hidden neurons and 10 latent variables per generation
step. The context encoder $f_{\mathrm{read}}(c, s_{t-1})$ was varied for each task. For image inputs we used convolutions
and standard spatial transformers, and for volumes we used volumetric convolutions and VSTs. For
the class-conditional experiments, the context c is a one-hot encoding of the class. As meshes are
much lower-dimensional than volumes, we set the number of steps to be T = 1 when working with
this representation. We used the Adam optimizer [14] for all experiments.
3.1 Generating volumes
When ground-truth volumes are available we can directly train the model using the identity projection
operator (see section 2.1). We explore the performance of our model by training on several datasets.
We show in figure 4 that it can capture rich statistics of shapes, translations and rotations across the
datasets. For simpler datasets such as Primitives and MNIST3D (figure 4 left, middle), the model
learns to produce very sharp samples. Even for the more complex ShapeNet dataset (figure 4 right)
its samples show a large diversity of shapes whilst maintaining fine details.
3.2 Probabilistic volume completion and denoising
We test the ability of the model to impute missing data in 3D volumes. This is a capability that is
often needed to remedy sensor defects that result in missing or corrupt regions (see, for instance,
[29, 4]). For volume completion, we use an unconditional volumetric model and alternate between
inference and generation, feeding the result of one into the other. This procedure simulates a Markov
chain and samples from the correct distribution, as we show in appendix A.10. We test the model by
occluding half of a volume and completing the missing half. Figure 5 demonstrates that our model
successfully completes large missing regions with high precision. More examples are shown in the
appendix A.7.
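The alternating chain can be sketched as follows; encode and decode are hypothetical stand-ins for the trained inference network and generative model, and the linear maps in the usage example are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def complete(x_obs, mask, encode, decode, n_iters=100):
    """Impute missing voxels (mask == 0) by alternating inference and generation.

    encode: x -> (mu, sigma) of q(z|x); decode: z -> completed volume.
    Each iteration samples z from the approximate posterior given the current
    volume, regenerates, and clamps the observed voxels back in.
    """
    x = x_obs.copy()
    x[mask == 0] = rng.random(int(np.sum(mask == 0)))  # initialize missing region
    for _ in range(n_iters):
        mu, sigma = encode(x)
        z = mu + sigma * rng.standard_normal(mu.shape)
        x_gen = decode(z)
        x = mask * x_obs + (1 - mask) * x_gen          # keep observed half fixed
    return x

# toy usage with linear encoder/decoder stand-ins
D, K = 8, 2
A = rng.standard_normal((K, D)) * 0.3
B = rng.standard_normal((D, K)) * 0.3
encode = lambda x: (A @ x, np.full(K, 0.1))
decode = lambda z: 1.0 / (1.0 + np.exp(-B @ z))
x_obs = (rng.random(D) > 0.5).astype(float)
mask = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)  # right half missing
print(complete(x_obs, mask, encode, decode))
```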
[Figure 6 plots: variational bound (nats) versus number of generation steps, for the unconditional model and models with 1, 2, or 3 context views, with a deterministic baseline model for comparison; panels for Primitives (12, 24 steps), MNIST3D (2, 6, 12 steps), and ShapeNet (2, 6, 12 steps).]
Figure 6: Quantitative results: Increasing the number of steps or the number of contextual views
both lead to improved log-likelihoods. Left: Primitives. Middle: MNIST3D. Right: ShapeNet.
3.3 Conditional volume generation
The models can also be trained with context representing the class of the object, allowing for class
conditional generation. We train a class-conditional model on ShapeNet and show multiple samples
for 10 of the 40 classes in figure 7. The model produces high-quality samples of all classes. We
note their sharpness, and that they accurately capture object rotations, and also provide a variety of
plausible generations. Samples for all 40 ShapeNet classes are shown in appendix A.8.
We also form conditional models using a single view of 2D contexts. Our results, shown in figure 8
indicate that the model generates plausible shapes that match the constraints provided by the context
and captures the multi-modality of the posterior. For instance, consider figure 8 (right). The model
is conditioned on a single view of an object that has a triangular shape. The model's three shown
samples have greatly varying shape (e.g., one is a cone and the other a pyramid), whilst maintaining
the same triangular projection. More examples of these inferences are shown in the appendix A.9.
3.4 Performance benchmarking
We quantify the performance of the model by computing likelihood scores, varying the number of
conditioning views and the number of inference steps in the model. Figure 6 indicates that the number
of generation steps is a very important factor for performance (note that increasing the number of
steps does not affect the total number of parameters in the model). Additional context views generally
improve the model's performance, but the effect is relatively small. With these experiments we
establish the first benchmark of likelihood-bounds on Primitives (unconditional: 500 nats; 3-views:
472 nats), MNIST3D (unconditional: 410 nats; 3-views: 393 nats) and ShapeNet (unconditional: 827
nats; 3-views: 814 nats). As a strong baseline, we have also trained a deterministic 6-layer volumetric
convolutional network with Bernoulli likelihoods to generate volumes conditioned on 3 views. The
performance of this model is indicated by the red line in figure 6. Our generative model substantially
outperforms the baseline for all 3 datasets, even when conditioned on a single view.
3.5 Multi-view training
In most practical applications, ground-truth volumes are not available for training. Instead, data is
captured as a collection of images (e.g., from a multi-camera rig or a moving robot). To accommodate
this fact, we extend the generative model with a projection operator that maps the internal volumetric
representation $h_T$ to a 2D image $\hat{x}$. This map imitates a "camera" in that it first applies an affine
transformation to the volumetric representation, and then flattens the result using a convolutional
network. The parameters of this projection operator are trained jointly with the rest of the model.
Further details are explained in the appendix A.4.
In this experiment we train the model to learn to reproduce an image of the object given one or
more views of it from fixed camera locations. It is the model's responsibility to infer the volumetric
representation as well as the camera's position relative to the volume. It is easy to see how the
model can "cheat" by generating volumes that lead to good reconstructions but do not capture the
underlying 3D structure. We overcome this by reconstructing multiple views from the same volumetric
representation and using the context information to fix a reference frame for the internal volume. This
enforces a consistent hidden representation that generalises to new views.
We train a model that conditions on 3 fixed context views to reproduce 10 simultaneous random
views of an object. After training, we can sample a 3D representation given the context, and render it
from arbitrary camera angles. We show the model's ability to perform this kind of inference in figure 9.
[Figure 7 panels: table, airplane, vase, bowl, car, person, laptop, cone]
Figure 7: Class-conditional samples: Given a one-hot encoding of class as context, the model
produces high-quality samples. Notice, for instance, the sharpness and variability of generations for
"chair", the accurate capture of rotations for "car", and even identifiable legs for the "person" class. Videos
of these samples can be seen at https://goo.gl/9hCkxs.
The resulting network is capable of producing an abstract 3D representation from 2D observations
that is amenable to, for instance, arbitrary camera rotations.
3.6 Single-view training
Finally, we consider a mesh-based 3D representation and demonstrate the feasibility of training our
models with a fully-fledged, black-box renderer in the loop. Such renderers (e.g. OpenGL) accurately
capture the relationship between a 3D representation and its 2D rendering out of the box. This image
is a complex function of the objects' colors, materials and textures, the positions of lights, and that of
other objects. By building this knowledge into the model we give hints for learning and constrain its
hidden representation.
We consider again the Primitives dataset, however now we only have access to 2D images of the
objects at training time. The primitives are textured with a color on each side (which increases
the complexity of the data, but also makes it easier to detect the object's orientation relative to the
camera), and are rendered under three lights. We train an unconditional model that, given a 2D image,
infers the parameters of a 3D mesh and its orientation relative to the camera, such that when textured
and rendered it reconstructs the image accurately. The inferred mesh is formed by a collection of 162
vertices that can move on fixed lines that spread from the object's center, and is parameterized by the
vertices' positions on these lines.
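This radial parameterization can be sketched directly; the random unit directions below are an illustrative stand-in for the fixed 162-line layout used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def radial_mesh(center, directions, radii):
    """Mesh vertices constrained to fixed lines through the object's center:
    vertex_i = center + radii[i] * directions[i], parameterized by radii."""
    return center[None, :] + radii[:, None] * directions

# toy stand-in: 162 random unit directions instead of the paper's fixed layout
M = 162
dirs = rng.standard_normal((M, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = np.full(M, 0.5)                    # the free parameters of the mesh
verts = radial_mesh(np.zeros(3), dirs, radii)
print(verts.shape)                         # (162, 3): a sphere of radius 0.5
```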
The results of these experiments are shown in figure 10. We observe that in addition to reconstructing
the images accurately (which implies correct inference of mesh and camera), the model correctly
infers the extents of the object not in view, as demonstrated by views of the inferred mesh from
unobserved camera angles.
4 Discussion
In this paper we introduced a powerful family of 3D generative models inspired by recent advances
in image modeling. When trained on ground-truth volumes, they can produce high-quality samples
that capture the multi-modality of the data. We further showed how common inference tasks, such
as that of inferring a posterior over 3D structures given a 2D image, can be performed efficiently
via conditional training. We also demonstrated end-to-end training of such models directly from 2D
images through the use of differentiable renderers. This demonstrates for the first time the feasibility
of learning to infer 3D representations in a purely unsupervised manner.
We experimented with two kinds of 3D representations: volumes and meshes. Volumes are flexible
and can capture a diverse range of structures; however, they introduce modeling and computational
challenges due to their high dimensionality. Conversely, meshes can be much lower dimensional
and therefore easier to work with, and they are the data type of choice for common rendering
engines; however, standard parameterizations can be restrictive in the range of shapes they can capture.
It will be of interest to consider other representation types, such as NURBS, or training with a
volume-to-mesh conversion algorithm (e.g., marching cubes) in the loop.
[Figure 8 layout: context c, reconstruction $\hat{x}$, and inferred representation h shown from viewpoints r1 and r2, for two examples.]
Figure 8: Recovering 3D structure from 2D images: The model is trained on volumes, conditioned
on c as context. Each row corresponds to an independent sample h from the model given c.
We display $\hat{x}$, which is h viewed from the same angle as c. Columns r1 and r2 display the
inferred 3D representation h from different viewpoints. The model generates plausible, but varying,
interpretations, capturing the inherent ambiguity of the problem. Left: MNIST3D. Right: ShapeNet.
Videos of these samples can be seen at https://goo.gl/9hCkxs.
[Figure 9 layout: context views c1-c3 and rendered viewpoints r1-r8.]
Figure 9: 3D structure from multiple 2D images: Conditioned on 3 depth images of an object,
the model is trained to generate depth images of that object from 10 different views. Left: Context
views. Right: Columns r1 through r8 display the inferred abstract 3D representation h rendered
from different viewpoints by the learned projection operator. Videos of these samples can be seen at
https://goo.gl/9hCkxs.
[Figure 10 layout: observed image x, reconstruction $\hat{x}$, and mesh renderings r1-r3, for two examples.]
Figure 10: Unsupervised learning of 3D structure: The model observes x and is trained to reconstruct it using a mesh representation and an OpenGL renderer, resulting in $\hat{x}$. We rotate the camera
around the inferred mesh to visualize the model's understanding of 3D shape. We observe that
in addition to reconstructing accurately, the model correctly infers the extents of the object not in
view, demonstrating true 3D understanding of the scene. Videos of these reconstructions have been
included in the supplementary material. Best viewed in color. Videos of these samples can also be seen at
https://goo.gl/9hCkxs.
References
[1] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint:1509.00519,
2015.
[2] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song,
H. Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint:1512.03012, 2015.
[3] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3d-r2n2: An unified approach for single and
multi-view 3d object reconstruction. arXiv preprint:1604.00449, 2016.
[4] A. Dosovitskiy, J. Tobias Springenberg, and T. Brox. Learning to generate chairs with convolutional neural
networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
1538?1546, 2015.
[5] S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast
scene understanding with generative models. arXiv preprint:1603.08575, 2016.
[6] K. Gregor, F. Besse, D. Jimenez Rezende, I. Danihelka, and D. Wierstra. Towards conceptual compression.
arXiv preprint:1604.08772, 2016.
[7] K. Gregor, I. Danihelka, A. Graves, D. Jimenez Rezende, and D. Wierstra. Draw: A recurrent neural
network for image generation. In ICML, 2015.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735?1780, 1997.
[9] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NIPS, pages 2008?2016,
2015.
[10] W. Jiajun, Z. Chengkai, X. Tianfan, F. William T., and J. Tenenbaum. Learning a probabilistic latent space
of object shapes via 3d generative-adversarial modeling. arXiv preprint: 1610.07584, 2016.
[11] D. Jimenez Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in
deep generative models. arXiv preprint:1603.05106, 2016.
[12] D. Jimenez Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference
in deep generative models. In ICML, 2014.
[13] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for
graphical models. Machine learning, 37(2):183?233, 1999.
[14] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[15] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
[16] T. Kulkarni, I. Yildirim, P. Kohli, W. Freiwald, and J. B. Tenenbaum. Deep generative vision as approximate
bayesian computation. In NIPS 2014 ABC Workshop, 2014.
[17] T. D. Kulkarni, V. K. Mansinghka, P. Kohli, and J. B. Tenenbaum. Inverse graphics with probabilistic cad
models. arXiv preprint arXiv:1407.1339, 2014.
[18] Y. Lecun and C. Cortes. The MNIST database of handwritten digits.
[19] M. M. Loper and M. J. Black. Opendr: An approximate differentiable renderer. In Computer Vision?ECCV
2014, pages 154?169. Springer, 2014.
[20] V. Mansinghka, T. D. Kulkarni, Y. N. Perov, and J. Tenenbaum. Approximate bayesian image interpretation
using generative probabilistic graphics programs. In NIPS, pages 1520?1528, 2013.
[21] A. Mnih and D. Jimenez Rezende. Variational inference for monte carlo objectives. arXiv
preprint:1602.06725, 2016.
[22] OpenGL Architecture Review Board. OpenGL Reference Manual: The Official Reference Document for
OpenGL, Release 1. 1993.
[23] L. D. Pero, J. Bowdish, D. Fried, B. Kermgard, E. Hartley, and K. Barnard. Bayesian geometric modeling
of indoor scenes. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages
2719?2726. IEEE, 2012.
[24] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of
the 25th international conference on Machine learning, pages 872?879, 2008.
[25] R. Sundareswara and P. R. Schrater. Perceptual multistability predicted by search model for bayesian
decisions. Journal of Vision, 8(5):12?12, 2008.
[26] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Machine learning, 8(3-4):229?256, 1992.
[27] D. Wingate, N. Goodman, A. Stuhlmueller, and J. M. Siskind. Nonstandard interpretations of probabilistic
programs for efficient inference. In NIPS, pages 1152?1160, 2011.
[28] J. Wu, I. Yildirim, J. J. Lim, B. Freeman, and J. Tenenbaum. Galileo: perceiving physical object properties
by integrating a physics engine with deep learning. In NIPS, pages 127?135, 2015.
[29] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for
volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 1912?1920, 2015.
[30] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. arXiv preprint,
May 2016.
Stochastic Convex Optimization
Yuancheng Zhu
Wharton Statistics Department
University of Pennsylvania
John Duchi
Department of Statistics
Department of Electrical Engineering
Stanford University
Sabyasachi Chatterjee
Department of Statistics
University of Chicago
John Lafferty
Department of Statistics
Department of Computer Science
University of Chicago
Abstract
We extend the traditional worst-case, minimax analysis of stochastic convex optimization by introducing a localized form of minimax complexity for individual
functions. Our main result gives function-specific lower and upper bounds on
the number of stochastic subgradient evaluations needed to optimize either the
function or its "hardest local alternative" to a given numerical precision. The
bounds are expressed in terms of a localized and computational analogue of the
modulus of continuity that is central to statistical minimax analysis. We show how
the computational modulus of continuity can be explicitly calculated in concrete
cases, and relates to the curvature of the function at the optimum. We also prove a
superefficiency result that demonstrates it is a meaningful benchmark, acting as
a computational analogue of the Fisher information in statistical estimation. The
nature and practical implications of the results are demonstrated in simulations.
1 Introduction
The traditional analysis of algorithms is based on a worst-case, minimax formulation. One studies
the running time, measured in terms of the smallest number of arithmetic operations required by any
algorithm to solve any instance in the family of problems under consideration. Classical worst-case
complexity theory focuses on discrete problems. In the setting of convex optimization, where the
problem instances require numerical rather than combinatorial optimization, Nemirovsky and Yudin
[12] developed an approach to minimax analysis based on a first order oracle model of computation.
In this model, an algorithm to minimize a convex function can make queries to a first-order ?oracle,?
and the complexity is defined as the smallest error achievable using some specified minimum number
of queries needed. Specifically, the oracle is queried with an input point $x \in C$ from a convex domain
C, and returns an unbiased estimate of a subgradient vector to the function f at x. After T calls to the
oracle, an algorithm A returns a value $\hat{x}_A \in C$, which is a random variable due to the stochastic nature
of the oracle, and possibly also due to randomness in the algorithm. The Nemirovski-Yudin analysis
reveals that, in the worst case, the number of calls to the oracle required to drive the expected error
$\mathbb{E}(f(\hat{x}_A) - \inf_{x \in C} f(x))$ below $\epsilon$ scales as $T = O(1/\epsilon)$ for the class of strongly convex functions,
and as $T = O(1/\epsilon^2)$ for the class of Lipschitz convex functions.
In practice, one naturally finds that some functions are easier to optimize than others. Intuitively, if
the function is ?steep? near the optimum, then the subgradient may carry a great deal of information,
and a stochastic gradient descent algorithm may converge relatively quickly. A minimax approach
to analyzing the running time cannot take this into account for a particular function, as it treats the
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
worst-case behavior of the algorithm over all functions. It would be of considerable interest to be able
to assess the complexity of solving an individual convex optimization problem. Doing so requires a
break from traditional worst-case thinking.
In this paper we revisit the traditional view of the complexity of convex optimization from the point
of view of a type of localized minimax complexity. In local minimax, our objective is to quantify the
intrinsic difficulty of optimizing a specific convex function f . With the target f fixed, we take an
alternative function g within the same function class F, and evaluate how the maximum expected
error decays with the number of calls to the oracle, for an optimal algorithm designed to optimize
either f or g. The local minimax complexity $R_T(f; \mathcal{F})$ is defined via the least favorable alternative g:
$$R_T(f; \mathcal{F}) = \sup_{g \in \mathcal{F}}\ \inf_{A \in \mathcal{A}_T}\ \max_{h \in \{f, g\}} \mathrm{error}(A, h) \quad (1)$$
where $\mathrm{error}(A, h)$ is some measure of error for the algorithm applied to function h. Note that here
the algorithm A is allowed to depend on the function f and the selected worst-case g. In contrast, the
traditional global worst-case performance of the best algorithm, as defined by the minimax complexity
$R_T(\mathcal{F})$ of Nemirovsky and Yudin, is
$$R_T(\mathcal{F}) = \inf_{A \in \mathcal{A}_T}\ \sup_{g \in \mathcal{F}} \mathrm{error}(A, g). \quad (2)$$
The local minimax complexity can be thought of as the difficulty of optimizing the hardest alternative
to the target function. Intuitively, a difficult alternative is a function g for which querying the oracle
with g gives results similar to querying with f, but for which the value of $x \in C$ that minimizes g is
far from the value that minimizes f .
Our analysis ties this function-specific notion of complexity to a localized and computational analogue
of the modulus of continuity that is central to statistical minimax analysis [5, 6]. We show that the
local minimax complexity gives a meaningful benchmark for quantifying the difficulty of optimizing
a specific function by proving a superefficiency result; in particular, outperforming this benchmark
at some function must lead to a larger error at some other function. Furthermore, we propose an
adaptive algorithm in the one-dimensional case that is based on binary search, and show that this
algorithm automatically achieves the local minimax complexity, up to a logarithmic factor. Our study
of the algorithmic complexity of convex optimization is motivated by the work of Cai and Low [2],
who propose an analogous definition in the setting of statistical estimation of a one-dimensional
convex function. The present work can thus be seen as exposing a close connection between statistical
estimation and numerical optimization of convex functions. In particular, our results imply that
the local minimax complexity can be viewed as a computational analogue of Fisher information in
classical statistical estimation.
In the following section we establish our notation, and give a technical overview of our main results,
which characterize the local minimax complexity in terms of the computational modulus of continuity.
In Section 2.2, we demonstrate the phenomenon of superefficiency of the local minimax complexity.
In Section 3 we present the algorithm that adapts to the benchmark, together with an analysis of its
theoretical properties. We also present simulations of the algorithm and comparisons to traditional
stochastic gradient descent. Finally, we conclude with a brief review of related work and a discussion
of future research directions suggested by our results.
2 Local minimax complexity
In this section, we first establish notation and define a modulus of continuity for a convex function f .
We then state our main result, which links the local minimax complexity to this modulus of continuity.
Let $\mathcal{F}$ be the collection of Lipschitz convex functions defined on a compact convex set $C \subset \mathbb{R}^d$.
Given a function $f \in \mathcal{F}$, our goal is to find a minimum point, $x^*_f \in \arg\min_{x \in C} f(x)$. However, our
knowledge about f can only be gained through a first-order oracle. The oracle, upon being queried
with $x \in C$, returns $f'(x) + \xi$, where $f'(x)$ is a subgradient of f at x and $\xi \sim N(0, \sigma^2 I_d)$. When
the oracle is queried with a non-differentiable point x of f, instead of allowing the oracle to return
an arbitrary subgradient at x, we assume that it has a deterministic mechanism for producing $f'(x)$.
That is, when we query the oracle with x twice, it should return two random vectors with the same
mean $f'(x)$. Such an oracle can be realized, for example, by taking $f'(x) = \arg\min_{z \in \partial f(x)} \|z\|$.
Here and throughout the paper, $\|\cdot\|$ denotes the Euclidean norm.
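As a concrete picture of this computational model, the sketch below implements a noisy one-dimensional first-order oracle whose mean is a deterministic subgradient selection; the class name and interface are our own illustration, not an API from the paper.

```python
import numpy as np

class FirstOrderOracle:
    """Noisy subgradient oracle: query(x) returns f'(x) + xi, xi ~ N(0, sigma^2).

    subgrad must be a deterministic selection from the subdifferential, e.g.
    the minimum-norm element argmin_{z in df(x)} |z|, so that repeated queries
    at x return random values with the same mean f'(x).
    """
    def __init__(self, subgrad, sigma, rng=None):
        self.subgrad = subgrad
        self.sigma = sigma
        self.rng = rng or np.random.default_rng(0)
        self.calls = 0                      # the budget T is tracked by the caller

    def query(self, x):
        self.calls += 1
        return self.subgrad(x) + self.sigma * self.rng.standard_normal()

# example: f(x) = |x|^k / k on [-1, 1], whose subgradient is sign(x)|x|^{k-1}
k = 1.5
oracle = FirstOrderOracle(lambda x: np.sign(x) * abs(x) ** (k - 1), sigma=0.1)
print(oracle.query(0.3), oracle.calls)
```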
[Figure 1 panels: f(x) and an alternative g(x) (left); f'(x) and g'(x) with the flat set and $\omega_f(\epsilon)$ marked (right).]
Figure 1: Illustration of the flat set and the modulus of continuity. Both the function f (left) and its
derivative f' (right) are shown (black curves), along with one of the many possible alternatives, g,
and its derivative g' (solid gray curves), that achieve the sup in the definition of $\omega_f(\epsilon)$. The flat set
contains all the points for which $|f'(x)| < \epsilon$, and $\omega_f(\epsilon)$ is the larger half-width of the flat set.
Consider optimization algorithms that make a total of T queries to this first-order oracle, and let $\mathcal{A}_T$
be the collection of all such algorithms. For $A \in \mathcal{A}_T$, denote by $\hat{x}_A$ the output of the algorithm. We
write $\mathrm{err}(x, f)$ for a measure of error for using x as the estimate of the minimum point of $f \in \mathcal{F}$. In
this notation, the usual minimax complexity is defined as
$$R_T(\mathcal{F}) = \inf_{A \in \mathcal{A}_T}\ \sup_{f \in \mathcal{F}} \mathbb{E}_f\, \mathrm{err}(\hat{x}_A, f). \quad (3)$$
Note that the algorithm A queries the oracle at up to T points $x_t \in C$ selected sequentially, and the
output $\hat{x}_A$ is thus a function of the entire sequence of random vectors $v_t \sim N(f'(x_t), \sigma^2 I_d)$ returned
by the oracle. The expectation $\mathbb{E}_f$ denotes the average with respect to this randomness (and any
additional randomness injected by the algorithm itself). The minimax risk $R_T(\mathcal{F})$ characterizes the
hardness of the entire class $\mathcal{F}$. To quantify the difficulty of optimizing an individual function f, we
consider the following local minimax complexity, comparing f to its hardest local alternative:
$$R_T(f; \mathcal{F}) = \sup_{g \in \mathcal{F}}\ \inf_{A \in \mathcal{A}_T}\ \max_{h \in \{f, g\}} \mathbb{E}_h\, \mathrm{err}(\hat{x}_A, h). \quad (4)$$
We now proceed to define a computational modulus of continuity that characterizes the local minimax
complexity. Let $X^*_f = \arg\min_{x \in C} f(x)$ be the set of minimum points of the function f. We consider
$\mathrm{err}(x, f) = \inf_{y \in X^*_f} \|x - y\|$ as our measure of error. Define $d(f, g) = \inf_{x \in X^*_f,\, y \in X^*_g} \|x - y\|$ for
$f, g \in \mathcal{F}$. It is easy to see that $\mathrm{err}(x, f)$ and $d(f, g)$ satisfy the exclusion inequality
$$\mathrm{err}(x, f) < \tfrac{1}{2} d(f, g) \quad \text{implies} \quad \mathrm{err}(x, g) \geq \tfrac{1}{2} d(f, g). \quad (5)$$
Next we define
$$\bar{d}(f, g) = \sup_{x \in C} \|f'(x) - g'(x)\| \quad (6)$$
where $f'(x)$ is the unique subgradient of f that is returned as the mean by the oracle when queried
with x. For example, if we take $f'(x) = \arg\min_{z \in \partial f(x)} \|z\|$, we have
$$\bar{d}(f, g) = \sup_{x \in C} \|\mathrm{Proj}_{\partial f(x)}(0) - \mathrm{Proj}_{\partial g(x)}(0)\| \quad (7)$$
where $\mathrm{Proj}_B(z)$ is the projection of z onto the set B. Thus, $d(f, g)$ measures the dissimilarity between
two functions in terms of the distance between their minimizers, whereas $\bar{d}(f, g)$ measures the
dissimilarity by the largest separation between their subgradients at any given point.
Given d and $\bar{d}$, we define the modulus of continuity of d with respect to $\bar{d}$ at the function f by
$$\omega_f(\epsilon) = \sup \{d(f, g) : g \in \mathcal{F},\ \bar{d}(f, g) \leq \epsilon\}. \quad (8)$$
We now show how to calculate the modulus for some specific functions.
Example 1. Suppose that f is a convex function on a one-dimensional interval $C \subset \mathbb{R}$. If we take
$f'(x) = \arg\min_{z \in \partial f(x)} \|z\|$, then
$$\omega_f(\epsilon) = \sup_{x \in X^*_f} \inf \{|x - y| : y \in C,\ |f'(y)| < \epsilon\}. \quad (9)$$
The proof of this claim is given in the appendix. This result essentially says that the modulus of
continuity measures the size (in fact, the larger half-width) of the "flat set" where the magnitude
of the subderivative is smaller than $\epsilon$. See Figure 1 for an illustration. Thus, for the class of symmetric
functions $f(x) = \frac{1}{k}|x|^k$ over $C = [-1, 1]$, with $k > 1$,
$$\omega_f(\epsilon) = \epsilon^{\frac{1}{k-1}}. \quad (10)$$
For the asymmetric case $f(x) = \frac{1}{k_l}|x|^{k_l}\, I(-1 \leq x \leq 0) + \frac{1}{k_r}|x|^{k_r}\, I(0 < x \leq 1)$ with $k_l, k_r > 1$,
$$\omega_f(\epsilon) = \epsilon^{\frac{1}{k_l \vee k_r - 1}}. \quad (11)$$
That is, the size of the flat set depends on the flatter side of the function.
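The flat-set characterization (9) can be checked numerically; this sketch measures the half-width of $\{y : |f'(y)| < \epsilon\}$ on a grid for $f(x) = \frac{1}{k}|x|^k$ and compares it with the closed form $\epsilon^{1/(k-1)}$ from (10). The grid resolution and test values are illustrative choices.

```python
import numpy as np

def flat_set_half_width(fprime, eps, grid):
    """Half-width of the flat set {y : |f'(y)| < eps} around the minimizer 0."""
    flat = grid[np.abs(fprime(grid)) < eps]
    return float(flat.max()) if flat.size else 0.0

grid = np.linspace(-1.0, 1.0, 2_000_001)   # resolution 1e-6 on [-1, 1]
for k in (1.5, 2.0, 3.0):
    fprime = lambda x, k=k: np.sign(x) * np.abs(x) ** (k - 1)
    for eps in (1e-1, 1e-2):
        numeric = flat_set_half_width(fprime, eps, grid)
        closed_form = eps ** (1.0 / (k - 1.0))
        print(f"k={k}, eps={eps}: numeric={numeric:.6f}, "
              f"eps^(1/(k-1))={closed_form:.6f}")
```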
2.1 Local minimax is characterized by the modulus
We now state our main result linking the local minimax complexity to the modulus of continuity. We
say that the modulus of continuity has polynomial growth if there exist $\alpha > 0$ and $\epsilon_0 > 0$ such that
for any $c \geq 1$ and $\epsilon \leq \epsilon_0 / c$,
$$\omega_f(c\epsilon) \leq c^{\alpha}\, \omega_f(\epsilon). \quad (12)$$
Our main result below shows that the modulus of continuity characterizes the local minimax complexity of optimization of a particular convex function, in a manner similar to how the modulus of
continuity quantifies the (local) minimax risk in a statistical estimation setting [2, 5, 6], relating the
objective to a geometric property of the function.
Theorem 1. Suppose that $f \in \mathcal{F}$ and that $\omega_f(\epsilon)$ has polynomial growth. Then there exist constants
$C_1$ and $C_2$ independent of T, and a $T_0 > 0$, such that for all $T > T_0$,
$$C_1\, \omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right) \leq R_T(f; \mathcal{F}) \leq C_2\, \omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right). \quad (13)$$
Remark 1. We use the error metric $\mathrm{err}(x, f) = \inf_{y \in X^*_f} \|x - y\|$ here. For a given pair $(\mathrm{err}, d)$
that satisfies the exclusion inequality (5), our proof technique applies to yield the corresponding
lower bound. For example, we could use $\mathrm{err}(x, f) = \inf_{y \in X^*_f} |v^T(x - y)|$ for some vector v. This
error metric would be suitable when we wish to estimate $v^T x^*_f$, for example, the first coordinate of
$x^*_f$. Another natural choice of error metric is $\mathrm{err}(x, f) = f(x) - \inf_{x \in C} f(x)$, with a corresponding
distance $d(f, g) = \inf_{x \in C} |f(x) - \inf_x f(x) + g(x) - \inf_x g(x)|$. For this case, while the proof of
the lower bound stays exactly the same, further work is required for the upper bound, which is beyond
the scope of this paper.
Remark 2. The results can be extended to oracles with more general noise models. In particular,
the lower bounds will still hold with more general noise distributions, as long as Gaussian noise is a
subclass. Indeed, in proving lower bounds assuming Gaussianity only makes solving the optimization
problem easier. Our algorithm and upper bound analysis will go through for all sub-Gaussian noise
oracles. For the ease of presentation, we will focus on Gaussian noise model for the current paper.
Remark 3. Although the theorem gives an upper bound for the local minimax complexity, this
does not guarantee the existence of an algorithm that achieves the local complexity for any function.
Therefore, it is important to design an algorithm that adapts to this benchmark for each individual
function. We solve this problem in the one-dimensional case in Section 3.
The proof of this theorem is given in the appendix. We now illustrate the result with examples that
verify the intuition that different functions should have different degrees of difficulty for stochastic
convex optimization.
Example 2. For the function $f(x) = \frac{1}{k}|x|^k$ with $x \in [-1, 1]$ and $k > 1$, we have $R_T(f; \mathcal{F}) =
O\!\left(T^{-\frac{1}{2(k-1)}}\right)$. This agrees with the minimax risk complexity for the class of Lipschitz convex
functions that satisfy $f(x) - f(x^*_f) \geq \frac{\lambda}{2}\|x - x^*_f\|^k$ [14]. In particular, when $k = 2$, we recover the
strongly convex case, where the (global) minimax complexity is $O(1/\sqrt{T})$ with respect to the error
$\mathrm{err}(x, f) = \inf_{y \in X^*_f} \|x - y\|$. We see a faster rate of convergence for $k < 2$. As $k \to \infty$, we also see
that the error fails to decrease as T gets large. This corresponds to the worst case for any Lipschitz
convex function. In the asymmetric setting with $f(x) = \frac{1}{k_l}|x|^{k_l}\, I(-1 \leq x \leq 0) + \frac{1}{k_r}|x|^{k_r}\, I(0 < x \leq 1)$ and $k_l, k_r > 1$, we have $R_T(f; \mathcal{F}) = O\!\left(T^{-\frac{1}{2(k_l \vee k_r - 1)}}\right)$.
The following example illustrates that the local minimax complexity and modulus of continuity are
consistent with known behavior of stochastic gradient descent for strongly convex functions.
Example 3. In this example we consider the error $\mathrm{err}(x, f) = \inf_{y \in X^*_f} |v^T(x - y)|$ for some vector
v, and let f be an arbitrary convex function satisfying $\nabla^2 f(x^*_f) \succ 0$ with Hessian continuous around
$x^*_f$. Thus the optimizer $x^*_f$ is unique. If we define $g_w(x) = f(x) - w^T \nabla^2 f(x^*_f)\, x$, then $g_w(x)$ is a
convex function with a unique minimizer and
$$\bar{d}(f, g_w) = \sup_x \left\| \nabla f(x) - \left(\nabla f(x) - \nabla^2 f(x^*_f)\, w\right) \right\| = \left\| \nabla^2 f(x^*_f)\, w \right\|.$$
Thus, defining $\delta(w) = x^*_f - x^*_{g_w}$,
$$\omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right) \geq \sup\left\{ |v^T \delta(w)| : \|\nabla^2 f(x^*_f)\, w\| \leq \frac{\sigma}{\sqrt{T}} \right\} \quad (14)$$
$$\geq \sup_{\|u\| \leq 1} \left| v^T \delta\!\left( \frac{\sigma}{\sqrt{T}}\, \nabla^2 f(x^*_f)^{-1} u \right) \right|. \quad (15)$$
By the convexity of $g_w$, we know that $x^*_{g_w}$ satisfies $\nabla f(x^*_{g_w}) - \nabla^2 f(x^*_f)\, w = 0$, and therefore by
the implicit function theorem, $x^*_{g_w} = x^*_f + w + o(\|w\|)$ as $w \to 0$. Thus,
$$\omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right) \geq \frac{\sigma}{\sqrt{T}} \left\| \nabla^2 f(x^*_f)^{-1} v \right\| + o\!\left(\frac{1}{\sqrt{T}}\right) \quad \text{as } T \to \infty. \quad (16)$$
In particular, we have the local minimax lower bound
$$\liminf_{T \to \infty} \sqrt{T}\, R_T(f; \mathcal{F}) \geq C_1\, \sigma \left\| \nabla^2 f(x^*_f)^{-1} v \right\|, \quad (17)$$
where C1 is the same constant appearing in Theorem 1. This shows that the local minimax complexity
captures the function-specific dependence on the constant in the strongly convex case. Stochastic
gradient descent with averaging is known to adapt to this strong convexity constant [16, 13, 10]. Note
that lower bounds of similar forms on the minimax complexity have been obtained in [11].
2.2 Superefficiency
Having characterized the local minimax complexity in terms of a computational modulus of continuity,
we would now like to show that there are consequences to outperforming it at some function. This
will strengthen the case that the local minimax complexity serves as a meaningful benchmark to
quantify the difficulty of optimizing any particular convex function.
Suppose that f is any one-dimensional function such that $X^*_f = [x_l, x_r]$, which has an asymptotic
expansion around $\{x_l, x_r\}$ of the form
$$f(x_l - \delta) = f(x_l) + \lambda_l \delta^{k_l} + o(\delta^{k_l}) \quad \text{and} \quad f(x_r + \delta) = f(x_r) + \lambda_r \delta^{k_r} + o(\delta^{k_r}) \quad (18)$$
for $\delta > 0$, some powers $k_l, k_r > 1$, and constants $\lambda_l, \lambda_r > 0$. The following result shows that if
any algorithm significantly outperforms the local modulus of continuity on such a function, then it
underperforms the modulus on a nearby function.
Proposition 1. Let f be any convex function satisfying the asymptotic expansion (18) around its
optimum. Suppose that $A \in \mathcal{A}_T$ is any algorithm that satisfies
$$\mathbb{E}_f\, \mathrm{err}(\hat{x}_A, f) \leq \sqrt{\mathbb{E}_f\, \mathrm{err}(\hat{x}_A, f)^2} \leq \gamma_T\, \omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right), \quad (19)$$
where $\gamma_T < C_1$. Define $g_{-1}(x) = f(x) - \epsilon_T x$ and $g_1(x) = f(x) + \epsilon_T x$, where $\epsilon_T$ is given by
$\epsilon_T = \frac{\sigma}{\sqrt{T}} \sqrt{2 \log(C_1 / \gamma_T)}$. Then for some $g \in \{g_{-1}, g_1\}$, there exists $T_0$ such that $T \geq T_0$ implies
$$\mathbb{E}_g\, \mathrm{err}(\hat{x}_A, g) \geq C\, \omega_g\!\left( \sigma \sqrt{\frac{2 \log(C_1 / \gamma_T)}{T}} \right) \quad (20)$$
for some constant C that only depends on $k = k_l \vee k_r$.
A proof of this result is given in the appendix, where it is derived as a consequence of a more general
statement. We remark that while condition (19) involves the squared error $\sqrt{\mathbb{E}_f\, \mathrm{err}(\hat{x}_A, f)^2}$, we
expect that the result holds with only the weaker inequality on the absolute error $\mathbb{E}_f\, \mathrm{err}(\hat{x}_A, f)$.
It follows from this proposition that if an algorithm A significantly outperforms the local minimax
complexity, in the sense that (19) holds for some sequence $\gamma_T \to 0$ with $\liminf_T e^T \gamma_T = \infty$, then
there exists a sequence of convex functions $g_T$ with $\bar{d}(f, g_T) \to 0$ such that
$$\liminf_{T \to \infty} \frac{\mathbb{E}_{g_T}\, \mathrm{err}(\hat{x}_A, g_T)}{\omega_{g_T}\!\left( \sigma \sqrt{2 \log(C_1 / \gamma_T) / T} \right)} \geq C > 0. \quad (21)$$
This is analogous to the phenomenon of superefficiency in classical parametric estimation problems,
where outperforming the asymptotically optimal rate given by the Fisher information implies worse
performance at some other point in the parameter space. In this sense, $\omega_f$ can be viewed as a
computational analogue of Fisher information in the setting of convex optimization. We note that
superefficiency has also been studied in nonparametric settings [1], and a similar result was shown by
Cai and Low [2] for local minimax estimation of convex functions.
3 An adaptive optimization algorithm
In this section, we show that a simple stochastic binary search algorithm achieves the local minimax
complexity in the one-dimensional case.
The general idea of the algorithm is as follows. Suppose that we are given a budget of T queries to
the oracle. We divide this budget into $T_0 = \lfloor T/E \rfloor$ queries over each of $E = \lfloor r \log T \rfloor$ rounds,
where $r > 0$ is a constant to be specified later. In each round, we query the oracle $T_0$ times for the
derivative at the mid-point of the current interval. Estimating the derivative by averaging over the
queries, we proceed to the left half of the interval if the estimated sign is positive, and to the right
half of the interval if the estimated sign is negative. The details are given in Algorithm 1.
Algorithm 1 Sign-testing binary search
Input: T, r.
Initialize: $(a_1, b_1) = (a_0, b_0)$, $E = \lfloor r \log T \rfloor$, $T_0 = \lfloor T/E \rfloor$.
for $e = 1, \dots, E$ do
  Query $x_e = (a_e + b_e)/2$ for $T_0$ times to get $Z^{(e)}_t$ for $t = 1, \dots, T_0$.
  Calculate the average $\bar{Z}^{(e)}_{T_0} = \frac{1}{T_0} \sum_{t=1}^{T_0} Z^{(e)}_t$.
  If $\bar{Z}^{(e)}_{T_0} > 0$, set $(a_{e+1}, b_{e+1}) = (a_e, x_e)$.
  If $\bar{Z}^{(e)}_{T_0} \leq 0$, set $(a_{e+1}, b_{e+1}) = (x_e, b_e)$.
end for
Output: $x_E$.
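A direct implementation of Algorithm 1, written against a generic oracle_query function like the oracle sketch in Section 2; the budget split into $E = \lfloor r \log T \rfloor$ rounds of $T_0 = \lfloor T/E \rfloor$ queries follows the pseudocode, and the problem constants in the usage line are illustrative.

```python
import numpy as np

def sign_testing_binary_search(oracle_query, a0, b0, T, r=1.0):
    """Sign-testing binary search (Algorithm 1).

    oracle_query(x) should return a noisy subgradient f'(x) + noise.
    Every round bisects the interval toward the sign of the averaged gradient.
    """
    E = max(1, int(r * np.log(T)))   # number of bisection rounds
    T0 = max(1, T // E)              # queries per round
    a, b = a0, b0
    x = (a + b) / 2.0
    for _ in range(E):
        x = (a + b) / 2.0
        z_bar = np.mean([oracle_query(x) for _ in range(T0)])
        if z_bar > 0:                # positive gradient: minimizer lies left
            b = x
        else:                        # nonpositive gradient: minimizer lies right
            a = x
    return x

# usage with f(x) = |x - 0.3|^{1.5} / 1.5 and Gaussian oracle noise (sigma = 0.1)
rng = np.random.default_rng(0)
k, xstar, sigma = 1.5, 0.3, 0.1
query = lambda x: (np.sign(x - xstar) * abs(x - xstar) ** (k - 1)
                   + sigma * rng.standard_normal())
print(sign_testing_binary_search(query, -2.0, 2.0, T=10_000))
```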
We will show that this algorithm adapts to the local minimax complexity up to a logarithmic factor.
First, the following result shows that the algorithm gets us close to the "flat set" of the function.
Proposition 2. For $\delta \in (0, 1)$, let $C_\delta = \sigma \sqrt{2 \log(E/\delta)}$. Define
$$I_\delta = \left\{ y \in \mathrm{dom}(f) : |f'(y)| < \frac{C_\delta}{\sqrt{T_0}} \right\}. \quad (22)$$
Suppose that $(a_0, b_0) \cap I_\delta \neq \emptyset$. Then
$$\mathrm{dist}(x_E, I_\delta) \leq 2^{-E} (b_0 - a_0) \quad (23)$$
with probability at least $1 - \delta$.
This proposition tells us that after E rounds of bisection, we are at most a distance $2^{-E}(b_0 - a_0)$
from the flat set $I_\delta$. In terms of the distance to the minimum point, we have
$$\inf_{x \in X^*_f} |x_E - x| \leq 2^{-E} (b_0 - a_0) + \sup\left\{ \inf_{x \in X^*_f} |x - y| : y \in I_\delta \right\}. \quad (24)$$
If the modulus of continuity satisfies the polynomial growth condition, we then obtain the following.
[Figure 2 plots: log-log curves of averaged risk versus number of queries T, with six panels: symmetric cases k = 1.5, k = 2, k = 3 (top) and asymmetric cases (kl = 1.5, kr = 2), (kl = 1.5, kr = 3), (kl = 2, kr = 3) (bottom); legend: binary search, SGD with $\eta(t) = 1/t$, SGD with $\eta(t) = 1/\sqrt{t}$, theoretical rate.]
Figure 2: Simulation results: Averaged risk versus number of queries T. The black curves correspond
to the risk of the stochastic binary search algorithm. The red and blue curves are for the stochastic
gradient descent methods, red for stepsize $1/t$ and blue for $1/\sqrt{t}$. The dashed gray lines indicate the
optimal convergence rate. Note that the plots are on a log-log scale. The plots on the top panels are
for the symmetric cases $f(x) = \frac{1}{k}|x - x^*|^k$; the lower plots are for the asymmetric cases.
Corollary 1. Let $\alpha_0 > 0$. Suppose $\omega_f$ satisfies the polynomial growth condition (12) with exponent
$\alpha \leq \alpha_0$. Let $r = \frac{1}{2}\alpha_0$. Then with probability at least $1 - \delta$, and for large enough T,
$$\inf_{x \in X^*_f} |x_E - x| \leq \tilde{C}\, \omega_f\!\left(\frac{\sigma}{\sqrt{T}}\right), \quad (25)$$
where the term $\tilde{C}$ hides a dependence on $\log T$ and $\log(1/\delta)$.
The proofs of these results are given in the appendix.
3.1 Simulations showing adaptation to the benchmark
We now demonstrate the performance of the stochastic binary search algorithm, making a comparison
to stochastic gradient descent. For the stochastic gradient descent algorithm, we perform T steps of the
update
$$x_{t+1} = x_t - \eta(t)\, \hat{g}(x_t) \quad (26)$$
where $\eta(t)$ is a stepsize function, chosen as either $\eta(t) = 1/t$ or $\eta(t) = 1/\sqrt{t}$. We first consider the
following setup with symmetric functions f :
1. The function to optimize is $f_k(x) = \frac{1}{k}|x - x^*|^k$ for $k = \frac{3}{2}$, 2 or 3.
2. The minimum point $x^* \sim \mathrm{Unif}(-1, 1)$ is selected uniformly at random over the interval.
3. The oracle returns the derivative at the query point with additive $N(0, \sigma^2)$ noise, with $\sigma = 0.1$.
4. The optimization algorithms know a priori that the minimum point is inside the interval $(-2, 2)$. Therefore, the binary search starts with the interval $(-2, 2)$, and the stochastic gradient descent starts at $x_0 \sim \mathrm{Unif}(-2, 2)$ and projects its query points to the interval $(-2, 2)$.
7
5. We carry out the simulation for values of T on a logarithmic grid between 100 and 10,000. For each setup, we average the error $|\hat{x} - x^*|$ over 1,000 runs.
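A compressed sketch of the SGD arm of this simulation (the binary-search arm can reuse the implementation given after Algorithm 1); the noise level, interval, and projection follow the setup list above, though the run count is reduced here for speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(query, x0, T, step):
    """Projected stochastic gradient descent, update (26), clipped to [-2, 2]."""
    x = x0
    for t in range(1, T + 1):
        x = np.clip(x - step(t) * query(x), -2.0, 2.0)
    return x

def run(k, T, n_runs=200):
    """Average |x_hat - x*| over random problem instances, as in the setup list."""
    errs = {"eta=1/t": [], "eta=1/sqrt(t)": []}
    for _ in range(n_runs):
        xstar = rng.uniform(-1, 1)
        query = lambda x: (np.sign(x - xstar) * abs(x - xstar) ** (k - 1)
                           + 0.1 * rng.standard_normal())
        x0 = rng.uniform(-2, 2)
        errs["eta=1/t"].append(abs(sgd(query, x0, T, lambda t: 1.0 / t) - xstar))
        errs["eta=1/sqrt(t)"].append(
            abs(sgd(query, x0, T, lambda t: 1.0 / np.sqrt(t)) - xstar))
    return {name: float(np.mean(v)) for name, v in errs.items()}

for k in (1.5, 2.0, 3.0):
    print(k, run(k, T=1000))
```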
The simulation results are shown in the top 3 panels of Figure 2. Several properties predicted by
our theory are apparent from the simulations. First, the risk curves for the stochastic binary search
algorithm parallel the gray curves. This indicates that the optimal rate of convergence is achieved.
Thus, the stochastic binary search adapts to the curvature of different functions and yields the optimal
local minimax complexity, as given by our benchmark. Second, the stochastic gradient descent
algorithms with stepsize $1/t$ achieve the optimal rate when $k = 2$, but not when $k = 3$; with stepsize
$1/\sqrt{t}$, SGD gets close to the optimal rate when $k = 3$, but not when $k = 2$. Neither leads to the faster
rate when $k = \frac{3}{2}$. This is as expected, since the stepsize needs to be adapted to the curvature at the
optimum in order to achieve the optimal rate.
Next, we consider a set of asymmetric functions. Using the same setup as in the symmetric case, we
consider functions of the form $f(x) = \frac{1}{k_l}|x - x^*|^{k_l}\, I(x - x^* \leq 0) + \frac{1}{k_r}|x - x^*|^{k_r}\, I(x - x^* > 0)$,
for exponent pairs $(k_l, k_r)$ chosen to be $(\frac{3}{2}, 2)$, $(\frac{3}{2}, 3)$ and $(2, 3)$. The simulation results are shown in
the bottom three panels of Figure 2. We observe that the stochastic binary search once again achieves
the optimal rate, which is determined by the flatter side of the function, that is, the larger of kl and kr .
4 Related work and future directions
In related recent work, Ramdas and Singh [14] study minimax complexity for the class of Lipschitz
convex functions that satisfy $f(x) - f(x^*_f) \geq \frac{\lambda}{2}\|x - x^*_f\|^k$. They show that the minimax complexity
under the function-value error is of the order $T^{-\frac{k}{2(k-1)}}$. Juditski and Nesterov [8] also consider
minimax complexity for the class of k-uniformly convex functions for k > 2. They give an
adaptive algorithm based on stochastic gradient descent that achieves the minimax complexity up
to a logarithmic factor. Connections with active learning are developed in [15], with related ideas
appearing in [3]. Adaptivity in this line of work corresponds to the standard notion in statistical
estimation, which seeks to adapt to a large subclass of a parameter space. In contrast, the results in
the current paper quantify the difficulty of stochastic convex optimization at a much finer scale, as
the benchmark is determined by the specific function to be optimized.
The stochastic binary search algorithm presented in Section 3, despite being adaptive, has a few
drawbacks. It requires the modulus of continuity of the function to satisfy polynomial growth, with
a parameter $\alpha$ bounded away from 0. This rules out cases such as $f(x) = |x|$, which should have
an error that decays exponentially in T ; it is of interest to handle this case as well. It would also be
of interest to construct adaptive optimization procedures tuned to a fixed numerical precision. Such
procedures should have different running times depending on the hardness of the problem. Progress
on both problems has been made, and will be reported elsewhere.
Another challenge is to remove the logarithmic factors appearing in the binary search algorithm developed in Section 3. In one dimension, stochastic convex optimization is intimately related to a noisy root finding problem for a monotone function taking values in [-a, a] for some a > 0. Karp and Kleinberg [9] study optimal algorithms for such root finding problems in a discrete setting. A binary search algorithm that allows backtracking is proposed, which saves log factors in the running time. It would be interesting to study the use of such techniques in our setting.
Other areas that warrant study involve the dependence on dimension. The scaling with dimension
of the local minimax complexity and modulus of continuity is not fully revealed by the current
analysis. Moreover, the superefficiency result and the adaptive algorithm presented here are only for
the one-dimensional case. We note that a form of adaptive stochastic gradient algorithm for the class
of uniformly convex functions in general, fixed dimension is developed in [8].
Finally, a more open-ended direction is to consider larger classes of stochastic optimization problems. For instance, minimax results are known for functions of the form f(x) := E F(x; xi), where xi is a random variable and x -> F(x; xi) is convex for any xi, when f is twice continuously differentiable around the minimum point with positive definite Hessian. However, the role of the local geometry is not well understood. It would be interesting to further develop the local complexity techniques introduced in the current paper, to gain insight into the geometric structure of more general stochastic optimization problems.
Acknowledgments
Research supported in part by ONR grant 11896509 and NSF grant DMS-1513594. The authors
thank Tony Cai, Praneeth Netrapalli, Rob Nowak, Aaron Sidford, and Steve Wright for insightful
discussions and valuable comments on this work.
References
[1] Lawrence Brown and Mark Low. A constrained risk inequality with applications to nonparametric functional estimation. Annals of Statistics, 24(6):2524-2535, 1996.
[2] Tony Cai and Mark Low. A framework for estimation of convex functions. Statistica Sinica, pages 423-456, 2015.
[3] Rui Castro and Robert Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339-2353, 2008.
[4] David Donoho. Statistical estimation and optimal recovery. The Annals of Statistics, pages 238-270, 1994.
[5] David Donoho and Richard Liu. Geometrizing rates of convergence, I. Technical Report 137, Department of Statistics, University of California, Berkeley, 1987.
[6] David Donoho and Richard Liu. Geometrizing rates of convergence, II. Annals of Statistics, 19:633-667, 1991.
[7] Jean-Baptiste Hiriart-Urruty and Claude Lemarechal. Convex Analysis and Minimization Algorithms I & II. Springer, New York, 1993.
[8] Anatoli Juditski and Yuri Nesterov. Deterministic and stochastic primal-dual subgradient methods for minimizing uniformly convex functions. Stochastic Systems, 4(1):44-80, 2014.
[9] Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 881-890. Society for Industrial and Applied Mathematics, 2007.
[10] Eric Moulines and Francis Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 451-459, 2011.
[11] Aleksandr Nazin. Informational inequalities in gradient stochastic optimization and optimal feasible algorithms. Automation and Remote Control, 50(4):531-540, 1989.
[12] Arkadi Nemirovsky and David Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley & Sons, 1983.
[13] Boris Polyak and Anatoli Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992.
[14] Aaditya Ramdas and Aarti Singh. Optimal rates for stochastic convex optimization under Tsybakov noise condition. In Proceedings of the 30th International Conference on Machine Learning, pages 365-373, 2013.
[15] Aaditya Ramdas and Aarti Singh. Algorithmic connections between active learning and stochastic convex optimization. arXiv:1505.04214, 2015.
[16] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical Report 781, Cornell University Operations Research and Industrial Engineering, 1988.
[17] Alexandre Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009.
Error Analysis of Generalized Nystrom Kernel Regression
Hong Chen
Computer Science and Engineering
University of Texas at Arlington
Arlington, TX, 76019
[email protected]
Haifeng Xia
Mathematics and Statistics
Huazhong Agricultural University
Wuhan 430070, China
[email protected]
Weidong Cai
School of Information Technologies
University of Sydney
NSW 2006, Australia
[email protected]
Heng Huang
Computer Science and Engineering
University of Texas at Arlington
Arlington, TX, 76019
[email protected]
Abstract
The Nystrom method has been successfully used to improve the computational efficiency of kernel ridge regression (KRR). Recently, theoretical analysis of Nystrom KRR, including generalization bound and convergence rate, has been established based on the reproducing kernel Hilbert space (RKHS) associated with a symmetric positive semi-definite kernel. However, in real-world applications, an RKHS is not always optimal and the kernel function is not necessarily symmetric or positive semi-definite. In this paper, we consider the generalized Nystrom kernel regression (GNKR) with l2 coefficient regularization, where the kernel only requires continuity and boundedness. Error analysis is provided to characterize its generalization performance, and a column-norm sampling strategy is introduced to construct the refined hypothesis space. In particular, a fast learning rate with polynomial decay is reached for GNKR. Experimental analysis demonstrates the satisfactory performance of GNKR with column-norm sampling.
1 Introduction
The high computational complexity makes kernel methods unfeasible for large-scale data. Recently, the Nystrom method and its alternatives (e.g., the random Fourier feature technique [15], the sketching method [25]) have been used to scale up kernel ridge regression (KRR) [4, 23, 27]. The key step of the Nystrom method is to construct a subsampled matrix, which only contains part of the columns of the original empirical kernel matrix. Therefore, the sampling criterion on the matrix columns heavily affects the learning performance. The subsampling strategies of the Nystrom method can be categorized into two types: uniform sampling and non-uniform sampling. Uniform sampling is the simplest strategy, and has shown satisfactory performance in some applications [16, 23, 24]. From different theoretical perspectives, several non-uniform sampling approaches have been proposed, such as the squared l2 column-norm sampling [3, 4], the leverage score sampling [5, 8, 12], and the adaptive sampling [11]. Besides the sampling strategies, learning bounds exist for Nystrom kernel regression from three measurements: the matrix approximation [4, 5, 11], the coefficient approximation [9, 10], and the excess generalization error [2, 16, 24].
Despite rapid progress on theory and applications, the following critical issues should be further addressed for Nystrom kernel regression.
• Nystrom regression with general kernels. The previous algorithms are mainly limited to KRR with symmetric and positive semi-definite kernels. For real-world applications, this restriction may not be necessary. Several general kernels have shown competitive performance in machine learning, e.g., indefinite kernels for regularized algorithms [14, 20, 26] and PCA [13]. Therefore, it is important to formulate a learning algorithm for Generalized Nystrom Kernel Regression (GNKR).
• Generalization analysis and sampling criterion. Previous theoretical results rely on the symmetric positive semi-definite (SPSD) matrix associated with a Mercer kernel [17]. However, this condition is not satisfied for GNKR, which induces additional difficulty in the error analysis. Can we obtain a generalization error analysis for GNKR? It is also interesting to explore sampling strategies for GNKR, e.g., the column-norm sampling in [3, 4].
To address the above issues, we propose the GNKR algorithm and investigate its theoretical properties in terms of generalization bound and learning rate. Inspired by recent studies on data dependent hypothesis spaces [7, 19], we establish the error analysis for GNKR, which implies that a learning rate with polynomial decay can be reached under proper parameter selection. Meanwhile, we extend the l2 column-norm subsampling for linear regression [16, 22] to the GNKR setting.
The main contributions of this paper can be summarized as below:
• GNKR with l2 regularization. Due to the lack of the Mercer condition for a general kernel, coefficient regularization becomes a natural choice to replace the kernel norm regularization in KRR. Note that the Nystrom approximation plays a role similar to the l1 regularization in [7, 18, 20], which addresses the sample sparsity of the hypothesis function. Hence, we formulate GNKR by combining the Nystrom method and the least squares regression with l2 regularization in [19, 21].
• Theoretical and empirical evaluations. From the view of learning with data dependent hypothesis spaces, theoretical analysis of GNKR is established to illustrate its generalization bound and learning rate. In particular, a fast learning rate arbitrarily close to O(m^{-1}) is obtained under mild conditions, where m is the size of the subsampled set. The effectiveness of GNKR is also supported by experiments on synthetic and real-world data sets.
2 Related Works
Due to their flexibility and adaptivity, least squares regression algorithms with general kernels have been proposed with various types of regularization, e.g., the l1-regularizer [18, 21], the l2-regularizer [19, 20], and the elastic net regularization [7]. For a Mercer kernel, these algorithms are closely related to KRR, which is well understood in learning theory. For the general kernel setting, theoretical foundations of regression with coefficient regularization have been studied recently via analysis techniques based on operator approximation [20] and empirical covering numbers [7, 18, 19]. Despite rich theoretical results, the previous works mainly focus on prediction accuracy without considering the computational complexity for large-scale data.
Nystrom approximation has recently been studied extensively for kernel methods. Almost all existing studies rely on the fast approximation of the SPSD matrix associated with a Mercer kernel. For the fixed design setting, the expectation of the excess generalization error is bounded for least squares regression with the regularizer in an RKHS [1, 2]. Recently, probabilistic error bounds have been estimated for Nystrom KRR in [16, 24]. In [24], a fast learning rate of O(m^{-1}) is derived for fixed design regression under conditions on the kernel matrix eigenvalues. In [16], the convergence rate is obtained under the capacity assumption and the regularity assumption. It is worth noting that the learning bound in [16] is based on estimates of the sample error, the computation error, and the approximation error. Indeed, the computation error is related to the sampling subset and can be considered as the hypothesis error in [18], which is induced by the variance of the hypothesis spaces. Differently from previous works, our theoretical analysis of GNKR is based on a general continuous kernel and l2 coefficient regularization.
3 Generalized Nystrom Kernel Regression
Let rho be a probability distribution on Z := X x Y, where X ⊂ R^d and Y ⊂ R are viewed as the input space and the output space, respectively. Let rho(·|x) be the conditional distribution of rho for given x ∈ X, and let F be a measurable function space on X. In statistical learning, the samples z := {z_i}_{i=1}^n = {(x_i, y_i)}_{i=1}^n are drawn independently and identically from an unknown distribution rho. The task of least squares regression is to find a prediction function f : X -> R such that the expected risk
  E(f) = ∫_Z (y - f(x))^2 d rho(x, y)
is as small as possible. From the viewpoint of approximation theory, this means searching for a good approximation of the regression function
  f_rho(x) = ∫_Y y d rho(y|x)
based on the empirical risk
  E_z(f) = (1/n) Σ_{i=1}^n (y_i - f(x_i))^2.
Let K : X x X -> R be a continuous and bounded kernel function. Without loss of generality, we assume that kappa := sup_{x,x' ∈ X} K(x, x') <= 1 and |y| <= 1 for all y ∈ Y throughout this paper.
Besides the given samples z, the hypothesis function space is crucial to reach good learning performance. The following data dependent hypothesis space has been used for kernel regression with coefficient regularization:
  H_n = { f(x) = Σ_{i=1}^n alpha~_i K(x_i, x) : alpha~ = (alpha~_1, ..., alpha~_n) ∈ R^n, x ∈ X }.
Given z, kernel regression with l2 regularization [19, 20] is formulated as
  f~_z = f~_{alpha~_z} = Σ_{i=1}^n alpha~_{z,i} K(x_i, ·)    (1)
with
  alpha~_z = argmin_{alpha~ ∈ R^n} { (1/n) ||K_nn alpha~ - Y||_2^2 + lambda n alpha~^T alpha~ },
where K_nn = (K(x_i, x_j))_{i,j=1}^n, Y = (y_1, ..., y_n)^T, and lambda > 0 is a regularization parameter. Even though positive semi-definiteness is not required for the kernel, (3) can also be solved by the following linear system (see Theorem 3.1 in [20]):
  (K_nn^T K_nn + lambda n^2 I_n) alpha~ = K_nn^T Y,    (2)
where I_n is the n-order unit matrix.
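For concreteness, the following is a minimal numpy sketch (ours) of solving the full-sample system (2); the function name and interface are our own assumptions.

```python
import numpy as np

def gnkr_full(K_nn, y, lam):
    """Solve (K_nn^T K_nn + lam * n^2 * I) alpha = K_nn^T y, i.e. system (2).

    K_nn need not be symmetric or positive semi-definite: the normal-equation
    form above is well posed for any bounded kernel matrix and lam > 0.
    """
    n = K_nn.shape[0]
    A = K_nn.T @ K_nn + lam * n ** 2 * np.eye(n)
    return np.linalg.solve(A, K_nn.T @ y)
```

The fitted predictor is then f~_z(x) = Σ_i alpha~_{z,i} K(x_i, x), evaluated against the kernel sections at the training points.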
From the viewpoint of learning a function in H_n, (1) can be rewritten as
  f~_z = argmin_{f ∈ H_n} { E_z(f) + lambda n ||f||^2_{l2} },    (3)
where
  ||f||^2_{l2} = inf { Σ_{i=1}^n alpha~_i^2 : f = Σ_{i=1}^n alpha~_i K(x_i, ·) }.
In a standard implementation of (2), the computational complexity is O(n^3). This computational requirement becomes the bottleneck of (3) when facing large data sets. To reduce the computational burden, we consider finding the predictor in a smaller hypothesis space
  H_m = { f(x) = Σ_{i=1}^m alpha_i K(x~_i, x) : alpha = (alpha_1, ..., alpha_m) ∈ R^m, x ∈ X },  where {x~_i}_{i=1}^m ⊂ {x_i}_{i=1}^n.
The generalized Nystrom kernel regression (GNKR) can be formulated as
  f_z = argmin_{f ∈ H_m} { E_z(f) + lambda m ||f||^2_{l2} }.    (4)
Denote (K_nm)_{ij} = K(x_i, x~_j) and (K_mm)_{jk} = K(x~_j, x~_k) for i ∈ {1, ..., n} and j, k ∈ {1, ..., m}. We can deduce that
  f_z = Σ_{i=1}^m alpha_{z,i} K(x~_i, ·)
with
  (K_nm^T K_nm + lambda m n I_m) alpha_z = K_nm^T Y.    (5)
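A corresponding sketch (ours) of the GNKR solve (5); the `kernel` callback, `idx`, and the helper names are our assumptions, not code from the paper.

```python
import numpy as np

def gnkr_nystrom(kernel, X, y, idx, lam):
    """Solve (K_nm^T K_nm + lam * m * n * I) alpha = K_nm^T y, i.e. system (5).

    kernel(A, B) returns the matrix (K(a_i, b_j)); idx indexes the m
    subsampled points.  Cost drops from O(n^3) to O(n m^2 + m^3).
    """
    n, m = X.shape[0], len(idx)
    K_nm = kernel(X, X[idx])                         # n x m
    A = K_nm.T @ K_nm + lam * m * n * np.eye(m)
    alpha = np.linalg.solve(A, K_nm.T @ y)
    return alpha, (lambda Xq: kernel(Xq, X[idx]) @ alpha)
```

The returned closure evaluates f_z at new query points using only the m landmark kernel sections.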
The key problem of (4) is how to select the subset {x~_i}_{i=1}^m such that the computational complexity can be decreased efficiently while satisfactory accuracy is guaranteed. For KRR, there are several strategies to select the subset with different motivations [5, 11, 12]. In this paper we preliminarily consider the following two strategies with low computational complexity:
• Uniform Subsampling. The subset {x~_i}_{i=1}^m is drawn uniformly at random from the input {x_i}_{i=1}^n.
• Column-norm Subsampling. The subset {x~_i}_{i=1}^m is drawn from {x_i}_{i=1}^n independently with probabilities p_i = ||K_i||_2 / Σ_{i=1}^n ||K_i||_2, where K_i = (K(x_1, x_i), ..., K(x_n, x_i))^T ∈ R^n.
Some discussion of the column-norm subsampling will be provided in Section 4.
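The two strategies can be sketched as follows (our illustration; the independent draws follow the description above, so landmarks may repeat).

```python
import numpy as np

def uniform_idx(n, m, rng):
    # uniform subsampling without replacement
    return rng.choice(n, size=m, replace=False)

def column_norm_idx(kernel, X, m, rng):
    # p_i proportional to ||K_i||_2, the l2 norm of the i-th kernel column;
    # independent draws as described above, so landmarks may repeat
    K = kernel(X, X)
    p = np.linalg.norm(K, axis=0)
    return rng.choice(len(X), size=m, replace=True, p=p / p.sum())
```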
4 Learning Theory Analysis
In this section, we introduce our theoretical results on the generalization bound and learning rate. The detailed proofs can be found in the supplementary materials.
Inspired by the analysis techniques in [7, 19], we first introduce an intermediate function for the error decomposition. Let F be the square integrable space on X with norm ||·||_{L^2_{rho_X}}. For any bounded continuous kernel K : X x X -> R, the integral operator L_K : F -> F is defined as
  L_K f(x) = ∫_X K(x, t) f(t) d rho_X(t), for all x ∈ X,
where rho_X is the marginal distribution of rho. Given F and L_K, introduce the function space
  H = { g = L_K f, f ∈ F }  with  ||g||_H = inf { ||f||_{L^2_{rho_X}} : g = L_K f }.
Since H is sample independent, the intermediate function can be constructed as g_lambda = L_K f_lambda, where
  f_lambda = argmin_{f ∈ F} { E(L_K f) - E(f_rho) + lambda ||f||^2_{L^2_{rho_X}} }.    (6)
In learning theory, g_lambda is usually called the regularized function, and
  D(lambda) = inf_{g ∈ H} { E(g) - E(f_rho) + lambda ||g||^2_H } = E(L_K f_lambda) - E(f_rho) + lambda ||f_lambda||^2_{L^2_{rho_X}}
is called the approximation error.
To further bridge the gap between g_lambda and f_z, we construct the stepping stone function
  g~_lambda = (1/m) Σ_{i=1}^m f_lambda(x~_i) K(x~_i, ·).    (7)
The following condition on K is used in this paper, which has been well studied in the learning theory literature [18, 19]. Examples include the Gaussian kernel, the sigmoid kernel [17], and the fractional power polynomials [13].
Definition 1. The kernel function K is a C^s kernel with s > 0 if there exists some constant c_s > 0 such that
  |K(t, x) - K(t, x')| <= c_s ||x - x'||_2^s, for all t, x, x' ∈ X.
The definition of f_rho tells us |f_rho(x)| <= 1, so it is natural to restrict the predictor to [-1, 1]. The projection operator
  pi(f)(x) = min{1, f(x)} I{f(x) >= 0} + max{-1, f(x)} I{f(x) < 0}
has been extensively used in learning theory analysis, e.g., [6].
We are now in a position to present our result on the generalization error bound.
Theorem 1. Suppose that X is a compact subset of R^d and K ∈ C^s(X x X) for some s > 0. For any 0 < delta < 1, with confidence 1 - delta, there holds
  E(pi(f_z)) - E(f_rho) <= c~_1 log^2(8/delta) [ (1 + m^{-1} lambda^{-1} + m^{-2} lambda^{-2} + n^{-2/(2+p)} lambda^{-2}) D(lambda) + n^{-2/(2+p)} lambda^{-2/(2+p)} ],
where the constant c~_1 is independent of m, n, lambda, and
  p = 2d/(d + 2s) if 0 < s <= 1;  p = 2d/(d + 2) if 1 < s <= 1 + d/2;  p = d/s if s > 1 + d/2.    (8)
Theorem 1 is a general result that applies to Lipschitz continuous kernels. Although the statement appears somewhat complicated at first sight, it yields fast convergence rates when specialized to particular kernels. Before doing so, let us provide a few heuristic arguments for intuition. Theorem 1 guarantees an upper bound of the form
  ||pi(f_z) - f_rho||^2_{L^2_{rho_X}} <= O( c(m, n, lambda) inf_f { E(L_K f) - E(f_rho) + lambda ||f||^2_{L^2_{rho_X}} } + n^{-2/(2+p)} lambda^{-2/(2+p)} ).    (9)
Note that a smaller value of lambda reduces the approximation error term, but increases the second term associated with the sample error. This inequality demonstrates that a proper lambda should be selected to balance the two terms. The quantitative relationship (9) can also be considered as an oracle inequality for GNKR, where the approximation error D(lambda) can only be obtained by an oracle knowing the distribution.
Theorem 1 tells us that the generalization bound of GNKR depends on the sample sizes m and n, the continuity degree s, and the approximation error D(lambda). In essence, the subsampling number m has a double impact on the generalization error: one is through the complexity of the data dependent hypothesis space H_m, and the other is through the selection of the parameter lambda.
Now we introduce the characterization of the approximation error, which has been studied in [19, 20].
Definition 2. The target function f_rho can be approximated with exponent 0 < beta <= 1 in H if there exists a constant c_beta >= 1 such that D(lambda) <= c_beta lambda^beta for any lambda > 0.
If the kernel is not symmetric or positive semi-definite, the approximation condition holds with beta = 2r/3 when f_rho ∈ L_K~^r(L^2_{rho_X}), where L_K~ is the integral operator associated with K~(u, v) = ∫_X K(u, x) K(v, x) d rho_X, (u, v) ∈ X^2 (see [7]).
We now state our main result on the convergence rate.
Theorem 2. Let X be a compact subset of R^d. Assume that f_rho can be approximated with exponent 0 < beta <= 1 in H and K ∈ C^s(X x X) for some s > 0. Choose m ≍ n^{1/(2+p)} and lambda = m^{-gamma} for some gamma > 0. For any 0 < delta < 1, with confidence 1 - delta, there holds
  E(pi(f_z)) - E(f_rho) <= c~_2 log^2(8/delta) m^{-theta},
where the constant c~_2 is independent of m and delta, and
  theta = min { 2 - p gamma/(2+p), 2 + beta gamma - 2 gamma, beta gamma, 1 + beta gamma - gamma }.
5
Theorem 2 states the polynomial convergence rate of GNKR and indicates its dependence on the
subsampling size m as n ? m2+p . Similar observation also can be found in Theorem 2 [16] for
Nystr?m KRR, where the fast learning rate also is relied on the grow of m under fixed hypothesis
space complexity. However, even we do not consider the complexity of hypothesis space, the increase
of m will add the computation complexity. Hence, a suitable size of m is a trade off between the
1
approximation performance and the computation complexity. When p ? (0, 2), m = n 2+p means
1
that m can be chosen between n 4 and 12 under the conditions in Theorem 4. In particular, the fast
convergence rate O(m?1 ) can be obtained as K ? C ? , ? ? 1, and ? ? 1.
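As an illustration (ours, using the exponent theta as reconstructed above), the claimed O(m^{-1}) rate can be worked out explicitly:

```latex
% Our illustration of the rate in Theorem 2 (using the reconstructed theta).
% For a C^\infty kernel such as the Gaussian, take s > 1 + d/2, so p = d/s
% can be made arbitrarily small.  With \beta = \gamma = 1,
\theta
  = \min\Bigl\{\, 2 - \tfrac{p\gamma}{2+p},\;
                 2 + \beta\gamma - 2\gamma,\;
                 \beta\gamma,\;
                 1 + \beta\gamma - \gamma \Bigr\}
  = \min\Bigl\{\, 2 - \tfrac{p}{2+p},\, 1,\, 1,\, 1 \Bigr\} = 1,
% so E(\pi(f_z)) - E(f_\rho) = O(m^{-1}) with m \asymp n^{1/(2+p)}.
```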
The works most closely related to Theorems 1 and 2 are presented in [16, 24], where learning bounds are established for Nystrom KRR. Compared with the previous results, the features of this paper can be summarized as below.
• Learning model. This paper considers Nystrom regression with a data dependent hypothesis space and coefficient regularization, which can employ general kernels including indefinite and nonsymmetric kernels. However, the previous analysis focuses only on positive semi-definite kernels and the regularizer in an RKHS. For fixed design KRR, the fast convergence O(m^{-1}) in [24] depends on an eigenvalue condition on the kernel matrix. Differently from [24], our result relies on the Lipschitz continuity of the kernel and the approximation condition D(lambda) in the statistical learning setting.
• Analysis technique. The previous analysis in [16, 24] utilizes theoretical techniques for operator approximation and matrix decomposition, which depend heavily on the symmetric positive semi-definite kernel. For GNKR (4), the previous analysis is not directly valid since the kernel is not necessarily positive semi-definite or symmetric. The flexibility of the kernel and the adaptivity of the hypothesis space induce additional difficulty in the error analysis. Fortunately, the error analysis is obtained by incorporating the error decomposition ideas in [7] and the concentration estimate techniques in [18, 19]. An interesting future work is to establish the optimal bound for GNKR, extending Theorem 2 in [16] to the general setting.
For the proofs of Theorems 1 and 2, the key idea is to use g~_lambda as a stepping stone function bridging f_z and g_lambda. Additionally, the connection between g_lambda = L_K f_lambda and f_rho has been well studied in learning theory. Hence, the proofs in the Appendix follow from the approximation decomposition.
In the remainder of this section, we present a simple analysis of column-norm subsampling.
Given the full samples z = {(x_i, y_i)}_{i=1}^n and sampling number m, the key of subsampling is to select a subset of z with strong inference ability. In other words, we should select the subset with small divergence from the full sample estimator. Following this idea, the optimal subsampling criterion is studied in [28, 22] for linear regression. Given z = {z_i}_{i=1}^n and K_nn, we introduce the objective function
  S(p) := S(p_1, ..., p_n) = Σ_{i=1}^n (1 - L_ii)/p_i ||K_i||_2^2,
extending (16) in [28] to the kernel-based setting. Here {p_i}_{i=1}^n are the sampling probabilities with respect to {x_i}_{i=1}^n, and L_ii = (K_nn (K_nn^T K_nn + lambda n^2 I_n)^{-1} K_nn^T)_{ii}, i ∈ {1, ..., n}, are basic leverage values obtained from (2). For the fixed design setting, assume that y_i = K_i^T beta_0 + eps_i, i = 1, ..., n, beta_0 ∈ R^n, where {eps_i}_{i=1}^n are drawn identically and independently from N(0, sigma^2). Then, for lambda = 0, min_p S(p_1, ..., p_n) can be transformed into min_p E tr((K_nn)^T (diag(p))^{-1} K_nn), which is related to the A-optimality (A-criterion) for subset selection in [22].
When L_ii -> 0 for every i ∈ {1, ..., n}, we can obtain the following sampling probabilities.
Theorem 3. When L_ii = o(1) for 1 <= i <= n, the minimizer of S(p_1, ..., p_n) can be approximated by
  p_i = ||K_i||_2 / Σ_{i=1}^n ||K_i||_2, i ∈ {1, ..., n}.
Usually, leverage values are computed by fast approximation algorithms [1, 16], since L_ii involves a matrix inverse. Different from leverage values, the sampling probabilities in Theorem 3 can be computed directly, which only involves the l2 column norms of the empirical kernel matrix.
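A sketch (ours) contrasting the two quantities: `leverage_values` forms the exact L_ii and therefore requires a linear solve, while the Theorem 3 probabilities need only column norms.

```python
import numpy as np

def leverage_values(K_nn, lam):
    """L_ii = diag(K_nn (K_nn^T K_nn + lam n^2 I)^{-1} K_nn^T); needs a solve."""
    n = K_nn.shape[0]
    A = K_nn.T @ K_nn + lam * n ** 2 * np.eye(n)
    return np.einsum('ij,ji->i', K_nn, np.linalg.solve(A, K_nn.T))

def column_norm_probs(K_nn):
    """Theorem 3 probabilities: only column norms, no inverse."""
    p = np.linalg.norm(K_nn, axis=0)
    return p / p.sum()
```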
Table 1: Average RMSE of GNKR with Gaussian (G) / Epanechnikov (E) kernel under different sampling strategies and sampling sizes. US := uniform subsampling, CS := column-norm subsampling.

Function | Algorithm | #300 | #400 | #500 | #600 | #700 | #800 | #900 | #1000
f1(x) = x sin x, x ∈ [0, 2π] | G-GNKR-US | 0.03412 | 0.03145 | 0.02986 | 0.02919 | 0.02897 | 0.02906 | 0.02896 | 0.02908
 | G-GNKR-CS | 0.03420 | 0.03086 | 0.02954 | 0.02911 | 0.02890 | 0.02878 | 0.02891 | 0.02889
 | E-GNKR-US | 0.10159 | 0.09653 | 0.09081 | 0.08718 | 0.08515 | 0.08278 | 0.08198 | 0.08024
 | E-GNKR-CS | 0.09941 | 0.09414 | 0.08908 | 0.08631 | 0.08450 | 0.08237 | 0.08118 | 0.07898
f2(x) = sin(x)/x, x ∈ [-2π, 2π] | G-GNKR-US | 0.03442 | 0.03434 | 0.03418 | 0.03409 | 0.03404 | 0.03400 | 0.03398 | 0.03395
 | G-GNKR-CS | 0.03444 | 0.03423 | 0.03419 | 0.03408 | 0.03397 | 0.03397 | 0.03396 | 0.03389
 | E-GNKR-US | 0.04786 | 0.04191 | 0.04073 | 0.03692 | 0.03582 | 0.03493 | 0.03470 | 0.03440
 | E-GNKR-CS | 0.04607 | 0.03865 | 0.03709 | 0.03573 | 0.03510 | 0.03441 | 0.03316 | 0.03383
f3(x) = sign(x), x ∈ [-3, 3] | G-GNKR-US | 0.29236 | 0.29102 | 0.29009 | 0.28908 | 0.28867 | 0.28839 | 0.28755 | 0.28742
 | G-GNKR-CS | 0.29319 | 0.29071 | 0.28983 | 0.28975 | 0.28903 | 0.28833 | 0.28797 | 0.28768
 | E-GNKR-US | 0.16170 | 0.15822 | 0.15537 | 0.15188 | 0.15086 | 0.14889 | 0.14730 | 0.14726
 | E-GNKR-CS | 0.16500 | 0.15579 | 0.15205 | 0.15201 | 0.14949 | 0.14698 | 0.14597 | 0.14566
f4(x) = cos(e^x) + sin(x)/x, x ∈ [-2, 4] | G-GNKR-US | 0.34916 | 0.35158 | 0.35155 | 0.35148 | 0.35156 | 0.35140 | 0.35136 | 0.35139
 | G-GNKR-CS | 0.34909 | 0.35171 | 0.35168 | 0.35133 | 0.35153 | 0.35145 | 0.35141 | 0.35138
 | E-GNKR-US | 0.22298 | 0.21012 | 0.20265 | 0.19977 | 0.19414 | 0.19126 | 0.18916 | 0.18560
 | E-GNKR-CS | 0.21624 | 0.20783 | 0.20024 | 0.19698 | 0.19260 | 0.18996 | 0.18702 | 0.18662

5 Experimental Analysis
Since kernel regression with different types of regularization has been well studied in [7, 20, 21], this section only presents an empirical evaluation of GNKR to illustrate the roles of the sampling strategy and the kernel function. The Gaussian kernel K_G(x, t) = exp(-||x - t||_2^2 / (2 sigma^2)) is used for simulated and real data. The Epanechnikov kernel K_E(x, t) = (1 - ||x - t||_2^2 / (2 sigma^2))_+ is used in the simulated experiments. Here, sigma denotes the scale parameter, selected from [10^{-5} : 10 : 10^4]. Following the discussion on parameter selection in [16], we select the regularization parameter of GNKR from [10^{-15} : 10 : 10^{-3}]. The best results are reported according to the root mean squared error (RMSE).
5.1 Experiments on synthetic data
Following the empirical studies in [20, 21], we design simulation experiments on f1(x) = x sin x, x ∈ [0, 2π], f2(x) = sin(x)/x, x ∈ [-2π, 2π], f3(x) = sign(x), x ∈ [-3, 3], and f4(x) = cos(e^x) + sin(x)/x, x ∈ [-2, 4]. The function f_i is taken as the true regression function for 1 <= i <= 4. Note that f1 and f2 are smooth, f3 is not continuous, and f4 contains a highly oscillatory part. First, we select 10,000 points randomly from the preset interval and generate the dependent variable y according to the corresponding function. Then we divide these data into two parts of equal size; one part is used as training samples and the other as testing samples. For the training samples, the output y is contaminated by Gaussian noise N(0, 1). For each function and each kernel, we run the experiment 20 times. The average RMSE is shown in Table 1. The results indicate that the column-norm subsampling achieves satisfactory performance. In particular, GNKR with the indefinite Epanechnikov kernel performs better than the Gaussian kernel on the non-continuous function f3 and the non-flat function f4. This observation is consistent with the empirical result in [21].
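A downscaled sketch (ours) of this synthetic protocol for f3, with 2,000 training and 2,000 testing points instead of 5,000 each, and illustrative rather than tuned values of m, lambda, sigma:

```python
import numpy as np

rng = np.random.RandomState(0)

def gauss_kernel(a, b, sigma):
    # pairwise Gaussian kernel for 1-D inputs
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

# f3 target; downscaled train/test split with noisy training outputs
xtr = rng.uniform(-3, 3, 2000)
xte = rng.uniform(-3, 3, 2000)
ytr = np.sign(xtr) + rng.randn(2000)     # training outputs with N(0,1) noise
yte = np.sign(xte)

# column-norm landmarks, then the GNKR solve (5)
m, lam, sigma = 300, 1e-6, 0.5           # illustrative values, not the tuned ones
K = gauss_kernel(xtr, xtr, sigma)
p = np.linalg.norm(K, axis=0)
p /= p.sum()
idx = rng.choice(2000, m, replace=True, p=p)
K_nm = K[:, idx]
alpha = np.linalg.solve(K_nm.T @ K_nm + lam * m * 2000 * np.eye(m), K_nm.T @ ytr)
pred = gauss_kernel(xte, xtr[idx], sigma) @ alpha
print(np.sqrt(np.mean((pred - yte) ** 2)))   # test RMSE
```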
5.2 Experiments on real data
In order to better evaluate the empirical performance, four data sets are used in our study, including the Wine Quality, CASP, and Year Prediction datasets (http://archive.ics.uci.edu/ml/) and the census-house dataset (http://www.cs.toronto.edu/~delve/data/census-house/desc.html). Detailed information about the data sets is shown in Table 2. First, each data set is standardized by subtracting its mean and dividing by its standard deviation. Then, each input vector is normalized to unit length. For CASP and Year Prediction, 20,000 samples are drawn randomly from the data sets, where half is used for training and the rest for testing. For the other datasets, we randomly select part of the samples for training and use the rest as the test set. Table 3 reports the average RMSE over ten trials.
Table 3 shows the performance of the two sampling strategies. For CASP and Year Prediction, we can see that GNKR with 100 selected samples already achieves satisfactory performance, which efficiently reduces the computational complexity of (2). Additionally, the competitive performance of GNKR with the Epanechnikov kernel is demonstrated by the experimental results on the four data sets. These empirical examples support the effectiveness of the proposed method.
Table 2: Statistics of the data sets.

Dataset | #Features | #Instances | #Train | #Test
Wine Quality | 12 | 4898 | 2000 | 2898
Year Prediction | 90 | 515345 | 10000 | 10000
CASP | 9 | 45730 | 10000 | 10000
census-house | 139 | 22784 | 12000 | 10784
Table 3: Average RMSE (x 10^{-3}) with Gaussian (G) / Epanechnikov (E) kernel under different sampling levels and strategies. US := uniform subsampling, CS := column-norm subsampling.

Dataset | Algorithm | #50 | #100 | #200 | #400 | #600 | #800 | #1000
Wine Quality | G-GNKR-US | 14.567 | 14.438 | 14.382 | 14.292 | 14.189 | 14.103 | 13.936
 | G-GNKR-CS | 14.563 | 14.432 | 14.394 | 14.225 | 14.138 | 14.014 | 13.936
 | E-GNKR-US | 13.990 | 13.928 | 13.807 | 13.636 | 13.473 | 13.381 | 13.217
 | E-GNKR-CS | 13.969 | 13.899 | 13.798 | 13.601 | 13.445 | 13.362 | 13.239
CASP | G-GNKR-US | 9.275 | 9.238 | 9.205 | 9.222 | 9.204 | 9.207 | 9.205
 | G-GNKR-CS | 9.220 | 9.196 | 9.205 | 9.193 | 9.198 | 9.199 | 9.198
 | E-GNKR-US | 4.282 | 4.196 | 4.213 | 4.153 | 4.181 | 4.174 | 4.180
 | E-GNKR-CS | 4.206 | 4.249 | 4.206 | 4.182 | 4.172 | 4.165 | 4.118
Year Prediction | G-GNKR-US | 8.806 | 8.802 | 8.798 | 8.795 | 8.792 | 8.790 | 8.782
 | G-GNKR-CS | 8.806 | 8.801 | 8.798 | 8.793 | 8.792 | 8.789 | 8.781
 | E-GNKR-US | 7.013 | 6.842 | 6.739 | 6.700 | 6.676 | 6.671 | 6.637
 | E-GNKR-CS | 7.006 | 6.861 | 6.804 | 6.705 | 6.697 | 6.663 | 6.662
census-house | G-GNKR-US | 111.084 | 111.083 | 111.082 | 111.079 | 111.077 | 111.074 | 111.071
 | G-GNKR-CS | 111.083 | 111.080 | 111.080 | 111.079 | 111.075 | 111.071 | 111.068
 | E-GNKR-US | 102.731 | 99.535 | 99.698 | 99.718 | 99.715 | 99.714 | 99.713
 | E-GNKR-CS | 102.703 | 99.528 | 99.697 | 99.716 | 99.714 | 99.714 | 99.712

6 Conclusion
This paper focuses on the learning theory analysis of Nystrom kernel regression. One key difference from previous related work is that GNKR uses a general continuous kernel function and l2 coefficient regularization. Stepping-stone functions are constructed to overcome the analysis difficulty induced by this difference. A learning bound with fast convergence is derived under mild conditions, and empirical analysis is provided to verify our theoretical analysis.
Acknowledgments
This work was partially supported by U.S. NSF-IIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628,
NSF-IIS 1619308, NSF-IIS 1633753, NIH AG049371, and by National Natural Science Foundation
of China (NSFC) 11671161. We thank the anonymous NIPS reviewers for insightful comments.
References
[1] A. Alaoui and M. W. Mahoney. Fast randomized kernel methods with statistical guarantees. In NIPS, pp. 775-783, 2015.
[2] F. Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, 2013.
[3] P. Drineas, R. Kannan, and M. W. Mahoney. Fast Monte Carlo algorithms for matrices I: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, pp. 158-183, 2006.
[4] P. Drineas and M. W. Mahoney. On the Nystrom method for approximating a Gram matrix for improved kernel-based learning. J. Mach. Learn. Res., 6: 2153-2175, 2005.
[5] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. J. Mach. Learn. Res., 13: 3475-3506, 2012.
[6] M. Eberts and I. Steinwart. Optimal learning rates for least squares SVMs using Gaussian kernels. In NIPS, pp. 1539-1547, 2011.
[7] Y. Feng, S. Lv, H. Huang, and J. Suykens. Kernelized elastic net regularization: generalization bounds and sparse recovery. Neural Computat., 28: 1-38, 2016.
[8] A. Gittens and M. W. Mahoney. Revisiting the Nystrom method for improved large-scale machine learning. In ICML, pp. 567-575, 2013.
[9] C. J. Hsieh, S. Si, and I. S. Dhillon. Fast prediction for large scale kernel machines. In NIPS, pp. 3689-3697, 2014.
[10] R. Jin, T. Yang, M. Mahdavi, Y. Li, and Z. Zhou. Improved bounds for the Nystrom method with application to kernel classification. IEEE Trans. Inf. Theory, 59(10): 6939-6949, 2013.
[11] S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nystrom method. J. Mach. Learn. Res., 13: 981-1006, 2012.
[12] W. Lim, M. Kim, H. Park, and K. Jung. Double Nystrom method: An efficient and accurate Nystrom scheme for large-scale data sets. In ICML, pp. 1367-1375, 2015.
[13] C. Liu. Gabor-based kernel PCA with fractional power polynomial models for face recognition. IEEE Trans. Pattern Anal. Mach. Intell., 26: 572-581, 2004.
[14] E. Pekalska and B. Haasdonk. Kernel discriminant analysis with positive definite and indefinite kernels. IEEE Trans. Pattern Anal. Mach. Intell., 31: 1017-1032, 2009.
[15] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pp. 1177-1184, 2007.
[16] A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nystrom computational regularization. In NIPS, pp. 1657-1665, 2015.
[17] B. Scholkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[18] L. Shi, Y. Feng, and D. X. Zhou. Concentration estimates for learning with l1-regularizer and data dependent hypothesis spaces. Appl. Comput. Harmon. Anal., 31(2): 286-302, 2011.
[19] L. Shi. Learning theory estimates for coefficient-based regularized regression. Appl. Comput. Harmon. Anal., 34(2): 252-265, 2013.
[20] H. Sun and Q. Wu. Least square regression with indefinite kernels and coefficient regularization. Appl. Comput. Harmon. Anal., 30(1): 96-109, 2011.
[21] H. Sun and Q. Wu. Sparse representation in kernel machines. IEEE Trans. Neural Netw. Learning Syst., 26(10): 2576-2582, 2015.
[22] Y. Wang and A. Singh. Minimax subsampling for estimation and prediction in low-dimensional linear regression. arXiv, 2016 (https://arxiv.org/pdf/1601.02068v2.pdf).
[23] C. Williams and M. Seeger. Using the Nystrom method to speed up kernel machines. In NIPS, pp. 682-688, 2001.
[24] T. Yang, Y. F. Li, M. Mahdavi, R. Jin, and Z. H. Zhou. Nystrom method vs random Fourier features: A theoretical and empirical comparison. In NIPS, pp. 485-493, 2012.
[25] Y. Yang, M. Pilanci, and M. J. Wainwright. Randomized sketches for kernels: Fast and optimal non-parametric regression. arXiv:1501.06195, 2015 (http://arxiv.org/abs/1501.06195).
[26] Y. Ying, C. Campbell, and M. Girolami. Analysis of SVM with indefinite kernels. In NIPS, pp. 2205-2213, 2009.
[27] K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nystrom low-rank approximation and error analysis. In ICML, pp. 1232-1239, 2008.
[28] R. Zhu, P. Ma, M. W. Mahoney, and B. Yu. Optimal subsampling approaches for large sample linear regression. arXiv:1509.05111, 2015 (http://arxiv.org/abs/1509.05111).
New Liftable Classes for First-Order Probabilistic Inference
Seyed Mehran Kazemi
The University of British Columbia
[email protected]
Angelika Kimmig
KU Leuven
[email protected]
Guy Van den Broeck
University of California, Los Angeles
[email protected]
David Poole
The University of British Columbia
[email protected]
Abstract
Statistical relational models provide compact encodings of probabilistic dependencies in relational domains, but result in highly intractable graphical models. The goal of lifted inference is to carry out probabilistic inference without needing to reason about each individual separately, by instead treating exchangeable, undistinguished objects as a whole. In this paper, we study the domain recursion inference rule, which, despite its central role in early theoretical results on domain-lifted inference, has later been believed redundant. We show that this rule is more powerful than expected, and in fact significantly extends the range of models for which lifted inference runs in time polynomial in the number of individuals in the domain. This includes an open problem called S4, the symmetric transitivity model, and a first-order logic encoding of the birthday paradox. We further identify new classes S2FO2 and S2RU of domain-liftable theories, which respectively subsume FO2 and recursively unary theories, the largest classes of domain-liftable theories known so far, and show that using domain recursion can achieve exponential speedup even in theories that cannot fully be lifted with the existing set of inference rules.
1 Introduction
Statistical relational learning (SRL) [8] aims at unifying logic and probability for reasoning and
learning in noisy domains, described in terms of individuals (or objects), and the relationships
between them. Statistical relational models [10], or template-based models [18] extend Bayesian and
Markov networks with individuals and relations, and compactly describe probabilistic dependencies
among them. These models encode exchangeability among the objects: individuals that we have the
same information about are treated similarly.
A key challenge with SRL models is the fact that they represent highly intractable, densely connected
graphical models, typically with millions of random variables. The aim of lifted inference [23] is to
carry out probabilistic inference without needing to reason about each individual separately, by instead
treating exchangeable, undistinguished objects as a whole. Over the past decade, a large number of
lifted inference rules have been proposed [5, 9, 11, 14, 20, 22, 28, 30], often providing exponential
speedups for specific SRL models. These basic exact inference techniques have applications in
(tractable) lifted learning [32], where the main task is to efficiently compute partition functions, and
in variational and over-symmetric approximations [29, 33]. Moreover, they provided the foundation
for a rich literature on approximate lifted inference and learning [1, 4, 13, 17, 19, 21, 25, 34].
The theoretical study of lifted inference began with the complexity notion of domain-lifted inference [31] (a concept similar to data complexity in databases). Inference is domain-lifted when it runs in time polynomial in the number of individuals in the domain. By identifying liftable classes of models, guaranteeing domain-lifted inference, one can characterize the theoretical power of the various inference rules. For example, the class FO2, encoding dependencies among pairs of individuals (i.e., two logical variables), is liftable [30]. Kazemi and Poole [15] introduce a liftable class called recursively unary, capturing hierarchical simplification rules. Beame et al. [3] identify liftable classes of probabilistic database queries. Such results elevate the specific inference rules and examples to a general principle, and bring lifted inference in line with complexity and database theory [3].
This paper studies the domain recursion inference rule, which applies the principle of induction on the domain size. The rule makes one individual A in the domain explicit. Afterwards, the other inference rules simplify the SRL model up to the point where it becomes identical to the original model, except that the domain size has decreased. Domain recursion was introduced by Van den Broeck [31] and was central to the proof that FO2 is liftable. However, later work showed that simpler rules suffice to capture FO2 [27], and the domain recursion rule was forgotten.
We show that domain recursion is more powerful than expected, and can lift models that are otherwise not amenable to domain-lifted inference. This includes an open problem by Beame et al. [3], asking for an inference rule for a logical sentence called S4. It also includes the symmetric transitivity model, and an encoding of the birthday paradox in first-order logic. There previously did not exist any efficient algorithm to compute the partition function of these SRL models, and we obtain exponential speedups. Next, we prove that domain recursion supports its own large classes of liftable models: S2FO2 subsuming FO2, and S2RU subsuming recursively unary theories.¹ All existing exact lifted inference algorithms (e.g., [11, 15, 28]) resort to grounding the theories in S2FO2 or S2RU that are not in FO2 or recursively unary, and require time exponential in the domain size.
These results will be established using the weighted first-order model counting (WFOMC) formulation of SRL models [28]. WFOMC is close to classical first-order logic, and it can encode many other SRL models, including Markov logic [24], parfactor graphs [23], some probabilistic programs [7], relational Bayesian networks [12], and probabilistic databases [26]. It is a basic specification language that simplifies the development of lifted inference algorithms [3, 11, 28].
2 Background and Notation
A population is a set of constants denoting individuals (or objects). A logical variable (LV) is typed with a population. We represent LVs with lower-case letters, constants with upper-case letters, the population associated with a LV x with Dx, and its cardinality with |Dx|. That is, a population Dx is a set of constants {X1, ..., Xn}, and we use x ∈ Dx as a shorthand for instantiating x with one of the Xi. A parametrized random variable (PRV) is of the form F(t1, ..., tk) where F is a predicate symbol and each ti is a LV or a constant. A unary PRV contains exactly one LV and a binary PRV contains exactly two LVs. A grounding of a PRV is obtained by replacing each of its LVs x by one of the individuals in Dx.
A literal is a PRV or its negation. A formula phi is a literal, a disjunction phi1 v phi2 of formulas, a conjunction phi1 ^ phi2 of formulas, or a quantified formula (forall x ∈ Dx : phi(x)) or (exists x ∈ Dx : phi(x)) where x appears in phi(x). A sentence is a formula with all LVs quantified. A clause is a disjunction of literals. A theory is a set of sentences. A theory is clausal if all its sentences are clauses. An interpretation is an assignment of values to all ground PRVs in a theory. An interpretation I is a model of a theory T, written I |= T, if, given its value assignments, all sentences in T evaluate to True.
Let F(T) be the set of predicate symbols in theory T, and let Phi : F(T) -> R and Theta : F(T) -> R be two functions that map each predicate F to weights. These functions associate a weight with assigning True or False to ground PRVs F(C1, ..., Ck). For an interpretation I of T, let nu_True be the set of ground PRVs assigned True, and nu_False the ones assigned False. The weight of I is given by
  w(I) = Prod_{F(C1,...,Ck) ∈ nu_True} Phi(F) · Prod_{F(C1,...,Ck) ∈ nu_False} Theta(F).
Given a theory T and two functions Phi and Theta, the weighted first-order model count (WFOMC) of the theory given Phi and Theta is:
  WFOMC(T | Phi, Theta) = Sum_{I |= T} w(I).
¹ All proofs can be found in the extended version of the paper at: https://arxiv.org/abs/1610.08445
In this paper, we assume that all theories are clausal and do not contain existential quantifiers. The
latter can be achieved using the Skolemization procedure of Van den Broeck et al. [30], which
efficiently transforms a theory T with existential quantifiers into a theory T 0 without existential
quantifiers that has the same weighted model count. That is, our theories are sets of finite-domain,
function-free first-order clauses whose LVs are all universally quantified (and typed with a population).
Furthermore, when a clause mentions two LVs x1 and x2 with the same population ?x , or a LV x
with population ?x and a constant C ? ?x , we assume they refer to different individuals.2
Example 1. Consider the theory ∀x ∈ Δx : ¬Smokes(x) ∨ Cancer(x) having only one clause and assume Δx = {A, B}. The assignment Smokes(A) = True, Smokes(B) = False, Cancer(A) = True, Cancer(B) = True is a model. Assuming Φ(Smokes) = 0.2, Φ(Cancer) = 0.8, Θ(Smokes) = 0.5 and Θ(Cancer) = 1.2, the weight of this model is 0.2 · 0.5 · 0.8 · 0.8. This theory has eight other models. The WFOMC can be calculated by summing the weights of all nine models.
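To make these semantics concrete, here is a minimal brute-force sketch (illustrative only, exponential in the number of ground PRVs; the PHI/THETA dictionaries are hypothetical stand-ins for Φ and Θ) that enumerates all interpretations of the theory in Example 1 and sums the weights of its models:

```python
from itertools import product

# Theory: for all x in {A, B}: ~Smokes(x) v Cancer(x)
PHI = {"Smokes": 0.2, "Cancer": 0.8}    # weight of a ground PRV assigned True
THETA = {"Smokes": 0.5, "Cancer": 1.2}  # weight of a ground PRV assigned False

population = ["A", "B"]
ground_prvs = [(p, c) for p in ("Smokes", "Cancer") for c in population]

def is_model(assign):
    # Every ground instance of the clause ~Smokes(x) v Cancer(x) must hold.
    return all((not assign[("Smokes", c)]) or assign[("Cancer", c)] for c in population)

wfomc = 0.0
n_models = 0
for values in product([True, False], repeat=len(ground_prvs)):
    assign = dict(zip(ground_prvs, values))
    if is_model(assign):
        n_models += 1
        weight = 1.0
        for (pred, _), val in assign.items():
            weight *= PHI[pred] if val else THETA[pred]
        wfomc += weight

print(n_models)  # 9, as stated in Example 1
print(wfomc)     # the sum of the nine model weights
```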
2.1 Converting Inference for SRL Models into WFOMC
For many SRL models, (lifted) inference can be converted into a WFOMC problem. As an example, consider a Markov logic network (MLN) [24] with weighted formulae (w1 : F1, . . . , wk : Fk). For every weighted formula wi : Fi of this MLN, let theory T have a sentence Auxi(x, . . . ) ⇔ Fi such that Auxi is a predicate having all LVs appearing in Fi. Assuming Φ(Auxi) = exp(wi), and Φ and Θ are 1 for the other predicates, the partition function of the MLN is equal to WFOMC(T).
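As a sanity check on this conversion, the following sketch (assuming a single weighted formula w : Smokes(x) ⇒ Cancer(x) over {A, B}; all names are illustrative, not the paper's implementation) compares the MLN partition function with the WFOMC of the Aux-encoded theory by brute force:

```python
import math
from itertools import product

w = 1.5  # weight of the single MLN formula Smokes(x) => Cancer(x)

def mln_partition():
    Z = 0.0
    for s_a, s_b, c_a, c_b in product([True, False], repeat=4):
        sat = sum(1 for s, c in [(s_a, c_a), (s_b, c_b)] if (not s) or c)
        Z += math.exp(w * sat)  # interpretation weight = exp(w * #satisfied groundings)
    return Z

def wfomc_encoding():
    # Theory: Aux(x) <=> (Smokes(x) => Cancer(x)), with PHI(Aux) = exp(w), others 1.
    total = 0.0
    for s_a, s_b, c_a, c_b, aux_a, aux_b in product([True, False], repeat=6):
        ok = all(aux == ((not s) or c)
                 for aux, s, c in [(aux_a, s_a, c_a), (aux_b, s_b, c_b)])
        if ok:
            total += math.exp(w) ** (aux_a + aux_b)
    return total

print(mln_partition(), wfomc_encoding())  # the two values coincide
```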
2.2 Calculating the WFOMC of a Theory

We now describe a set of rules R that can be applied to a theory to find its WFOMC efficiently; for more details, readers are directed to [28], [22] or [11]. We use the following theory T with two clauses and four PRVs (S(x, m), R(x, m), T(x) and Q(x)) as our running example:
∀x ∈ Δx, m ∈ Δm : Q(x) ∨ R(x, m) ∨ S(x, m)
∀x ∈ Δx, m ∈ Δm : S(x, m) ∨ T(x)
Lifted Decomposition  Assume we ground x in T. Then the clauses mentioning an arbitrary Xi ∈ Δx are ∀m ∈ Δm : Q(Xi) ∨ R(Xi, m) ∨ S(Xi, m) and ∀m ∈ Δm : S(Xi, m) ∨ T(Xi). These clauses are totally disconnected from clauses mentioning Xj ∈ Δx (j ≠ i), and are the same up to renaming Xi to Xj. Given the exchangeability of the individuals, we can calculate the WFOMC of only the clauses mentioning Xi and raise the result to the power of the number of connected components (|Δx|). Assuming T1 is the theory that results from substituting x with Xi, WFOMC(T) = WFOMC(T1)^{|Δx|}.
Case-Analysis  The WFOMC of T1 can be computed by a case analysis over different assignments of values to a ground PRV, e.g., Q(Xi). Let T2 and T3 represent T1 ∧ Q(Xi) and T1 ∧ ¬Q(Xi) respectively. Then, WFOMC(T1) = WFOMC(T2) + WFOMC(T3). We follow the process for T3 (the process for T2 will be similar), having clauses ¬Q(Xi), ∀m ∈ Δm : Q(Xi) ∨ R(Xi, m) ∨ S(Xi, m) and ∀m ∈ Δm : S(Xi, m) ∨ T(Xi).
Unit Propagation  When a clause in the theory has only one literal, we can propagate the effect of this clause through the theory and remove it.³ In T3, ¬Q(Xi) is a unit clause. Having this unit clause, we can simplify the second clause and get the theory T4 having clauses ∀m ∈ Δm : R(Xi, m) ∨ S(Xi, m) and ∀m ∈ Δm : S(Xi, m) ∨ T(Xi).
Lifted Case-Analysis  Case-analysis can be done for PRVs having one logical variable in a lifted way. Consider the S(Xi, m) in T4. Due to the exchangeability of the individuals, we do not have to consider all possible assignments to all ground PRVs of S(Xi, m), but only the ones where the number of individuals M ∈ Δm for which S(Xi, M) is True (or, equivalently, False) is different. This means considering |Δm| + 1 cases suffices, corresponding to S(Xi, M) being True for exactly j = 0, . . . , |Δm| individuals. Note that we must multiply by the binomial coefficient (|Δm| choose j) to account for the number of ways one can select j out of |Δm| individuals. Let T4j represent T4 with two more clauses: ∀m ∈ ΔmT : S(Xi, m) and ∀m ∈ ΔmF : ¬S(Xi, m), where ΔmT represents the j individuals in Δm for which S(Xi, M) is True, and ΔmF represents the other |Δm| − j individuals. Then WFOMC(T4) = Σ_{j=0}^{|Δm|} (|Δm| choose j) · WFOMC(T4j).

² Equivalently, we can disjoin x1 = x2 or x = C to the clause.
³ Note that unit propagation may remove clauses and random variables from the theory. To account for them, smoothing multiplies the WFOMC by 2^{#rv}, where #rv represents the number of removed variables.
Shattering  In T4j, the individuals in Δm are no longer exchangeable: we know different things about those in ΔmT and those in ΔmF. We need to shatter every clause having individuals coming from Δm to make the theory exchangeable. To do so, the clause ∀m ∈ Δm : R(Xi, m) ∨ S(Xi, m) in T4j must be shattered to ∀m ∈ ΔmT : R(Xi, m) ∨ S(Xi, m) and ∀m ∈ ΔmF : R(Xi, m) ∨ S(Xi, m) (and similarly for the other formulae). The shattered theory T5j after unit propagation will have clauses ∀m ∈ ΔmF : R(Xi, m) and ∀m ∈ ΔmF : T(Xi).
Decomposition, Caching, and Grounding  In T5j, the two clauses have different PRVs, i.e., they are disconnected. In such cases, we apply decomposition, i.e., find the WFOMC of each connected component separately and return the product. The WFOMC of the theory can be found by continuing to apply the above rules. In all the above steps, after finding the WFOMC of each (sub-)theory, we store the results in a cache so we can reuse them if the same WFOMC is required again. By following these steps, one can find the WFOMC of many theories in polynomial time. However, if we reach a point where none of the above rules are applicable, we ground one of the populations, which makes the process exponential in the number of individuals.
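The following sketch (a toy instance we introduce for illustration, not the paper's running example) applies lifted case-analysis, unit propagation, and the binomial count to the one-clause theory ∀m ∈ Δm : R(m) ∨ S(m), and checks the result against a per-individual factorization:

```python
from math import comb

# Branch on how many individuals j have S(m) = True; where S is False, unit
# propagation forces R(m) = True; where S is True, R(m) is unconstrained.
phi = {"R": 0.8, "S": 0.2}
theta = {"R": 1.2, "S": 0.5}

def wfomc_lifted(n):
    total = 0.0
    for j in range(n + 1):
        s_weight = phi["S"] ** j * theta["S"] ** (n - j)
        r_weight = (phi["R"] + theta["R"]) ** j * phi["R"] ** (n - j)
        total += comb(n, j) * s_weight * r_weight
    return total

# Sanity check: the allowed (S, R) pairs per individual are (T, T), (T, F),
# (F, T), so the count factorizes per individual.
per_individual = phi["S"] * (phi["R"] + theta["R"]) + theta["S"] * phi["R"]
print(wfomc_lifted(10), per_individual ** 10)  # equal
```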
2.3 Domain-Liftability

The following notions allow us to study the power of a set of lifted inference rules.
Definition 1. A theory is domain-liftable [31] if calculating its WFOMC is polynomial in |Δx1|, |Δx2|, . . . , |Δxk|, where x1, x2, . . . , xk represent the LVs in the theory. A class C of theories is domain-liftable if ∀T ∈ C, T is domain-liftable.
So far, two main classes of domain-liftable theories have been recognized: FO² [30, 31] and recursively unary [15, 22].
Definition 2. A theory is in FO² if all its clauses have up to two LVs.
Definition 3. A theory T is recursively unary (RU) if for every theory T′ resulting from applying rules in R except for lifted case-analysis to T, until no more rules apply, there exists some unary PRV in T′ and a generic case of lifted case-analysis on this unary PRV is itself RU.
Note that the time needed to check whether a theory is in FO² or RU is independent of the domain sizes in the theory. For FO², the membership check can be done in time linear in the size of the theory, whereas for RU, only a worst-case exponential procedure is known. Thus, FO² currently offers a faster membership check than RU, but as we show later, RU subsumes FO². This gives rise to a trade-off between fast membership checking and modeling power for, e.g., lifted learning purposes.
3 The Domain Recursion Rule

Van den Broeck [31] considered another rule called domain recursion in the set of rules for calculating the WFOMC of a theory. The intuition behind domain recursion is that it modifies a domain Δx by making one element explicit: Δx = Δx′ ∪ {A} with A ∉ Δx′. Next, clauses are rewritten in terms of Δx′ and A while removing Δx from the theory entirely. Then, by applying standard rules in R on this modified theory, the problem is reduced to a WFOMC problem on a theory identical to the original one, except that Δx is replaced by the smaller domain Δx′. This lets us compute WFOMC using dynamic programming. We refer to R extended with the domain recursion rule as RD.
Example 2. Suppose we have a theory whose only clause is ∀x, y ∈ Δp : ¬Friend(x, y) ∨ Friend(y, x), stating that if x is friends with y, y is also friends with x. One way to calculate the WFOMC of this theory is by grounding only one individual in Δp and then using R. Let A be an individual in Δp and let Δp′ = Δp − {A}. We can (using domain recursion) rewrite the theory as: ∀x ∈ Δp′ : ¬Friend(x, A) ∨ Friend(A, x), ∀y ∈ Δp′ : ¬Friend(A, y) ∨ Friend(y, A), and ∀x, y ∈ Δp′ : ¬Friend(x, y) ∨ Friend(y, x). Lifted case-analysis on Friend(p′, A) and Friend(A, p′), shattering and unit propagation give ∀x, y ∈ Δp′ : ¬Friend(x, y) ∨ Friend(y, x). This theory is equivalent to our initial theory, with the only difference being that the population of people has decreased by one. By keeping a cache of the values of each sub-theory, one can verify that this process finds the WFOMC of the above theory in polynomial time.
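A minimal sketch of the resulting dynamic program (assuming the weights phi/theta below, and assuming reflexive atoms Friend(A, A) are unconstrained by the clause, since its two LVs must denote distinct individuals): making one individual A explicit couples each F(A, m) with F(m, A), contributing (phi² + theta²) per remaining individual, while F(A, A) contributes (phi + theta).

```python
from functools import lru_cache

phi, theta = 2.0, 1.0  # illustrative weights for Friend assigned True / False

@lru_cache(maxsize=None)
def wfomc_symmetric(n):
    if n == 0:
        return 1.0
    # Domain recursion: peel off one individual, recurse on a population of n - 1.
    return (phi + theta) * (phi**2 + theta**2) ** (n - 1) * wfomc_symmetric(n - 1)

# Linear in the population size, versus 2^(n^2) interpretations to enumerate.
print(wfomc_symmetric(5))
```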
Note that the theory in Example 2 is in FO² and, as proved in [27], its WFOMC can be computed without using the domain recursion rule.⁴ This proof has caused the domain recursion rule to be forgotten in the lifted inference community. In the next section, we revive this rule and identify a class of theories that are only domain-liftable when using the domain recursion rule.
4 Domain Recursion Makes More Theories Domain-Liftable

In this section, we show three example theories that are not domain-liftable when using R, yet become domain-liftable with domain recursion.
S4 Clause: Beame et al. [3] identified a clause (S4) with four binary PRVs having the same predicate and proved that, even though the rules R in Section 2.2 cannot calculate the WFOMC of that clause, there is a polynomial-time algorithm for finding its WFOMC. They concluded that this set of rules R for finding the WFOMC of theories does not suffice, asking for new rules to compute their theory. We prove that adding domain recursion to the set achieves this goal.
Proposition 1. The theory consisting of the S4 clause ∀x1, x2 ∈ Δx, y1, y2 ∈ Δy : S(x1, y1) ∨ ¬S(x2, y1) ∨ S(x2, y2) ∨ ¬S(x1, y2) is domain-liftable using RD.
Symmetric Transitivity: Domain-liftable calculation of WFOMC for the transitivity formula is a long-standing open problem. Symmetric transitivity is easier as its model count corresponds to the Bell number, but solving it using general-purpose rules has been an open problem. Consider clauses ∀x, y, z ∈ Δp : ¬F(x, y) ∨ ¬F(y, z) ∨ F(x, z) and ∀x, y ∈ Δp : ¬F(x, y) ∨ F(y, x) defining a symmetric transitivity relation. For example, Δp may indicate the population of people and F may indicate friendship.
Proposition 2. The symmetric-transitivity theory is domain-liftable using RD.
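For reference, the Bell numbers mentioned above count the partitions of a population into groups (e.g., cliques of mutual friends); a minimal sketch of the standard Bell-triangle recurrence:

```python
def bell_numbers(n_max):
    row = [1]    # current row of the Bell triangle, starting from B(0) = 1
    bells = [1]
    for _ in range(n_max):
        new_row = [row[-1]]          # each row starts with the previous row's last entry
        for entry in row:
            new_row.append(new_row[-1] + entry)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(6))  # [1, 1, 2, 5, 15, 52, 203]
```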
Birthday Paradox: The birthday paradox problem [2] is to compute the probability that in a set of n randomly chosen people, two of them have the same birthday. A first-order encoding of this problem requires computing the WFOMC for a theory with clauses ∀p ∈ Δp, ∃d ∈ Δd : Born(p, d), ∀p ∈ Δp, d1, d2 ∈ Δd : ¬Born(p, d1) ∨ ¬Born(p, d2), and ∀p1, p2 ∈ Δp, d ∈ Δd : ¬Born(p1, d) ∨ ¬Born(p2, d), where Δp and Δd represent the population of people and days. The first two clauses impose the condition that every person is born in exactly one day, and the third clause states the "no two people are born on the same day" query.
Proposition 3. The birthday-paradox theory is domain-liftable using RD.
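The quantity this encoding computes can be checked with elementary counting (a sketch; parameter names are illustrative): the first two clauses leave d^p interpretations (each of p people has exactly one of d birthdays), and adding the query leaves d · (d − 1) · · · (d − p + 1) of them.

```python
def shared_birthday_probability(p, d=365):
    no_collision = 1.0
    for i in range(p):
        no_collision *= (d - i) / d   # ratio of collision-free counts to d^p
    return 1.0 - no_collision

print(shared_birthday_probability(23))  # ~0.507, the classic threshold
```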
5 New Domain-Liftable Classes: S²FO² and S²RU
In this section, we identify new domain-liftable classes, enabled by the domain recursion rule.
Definition 4. Let α(S) be a clausal theory that uses a single binary predicate S, such that each clause has exactly two different literals of S. Let α = α(S1) ∧ α(S2) ∧ · · · ∧ α(Sn) where the Si are different binary predicates. Let β be a theory where all clauses contain at most one Si literal, and the clauses that contain an Si literal contain no other literals with more than one LV. Then, S²FO² and S²RU are the classes of theories of the form α ∧ β where β ∈ FO² and β ∈ RU respectively.
Theorem 1. S²FO² and S²RU are domain-liftable using RD.
Proof. The case where α = ∅ is trivial. Let α = α(S1) ∧ α(S2) ∧ · · · ∧ α(Sn). Once we remove all PRVs having none or one LV by (lifted) case-analysis, the remaining clauses can be divided into n + 1 components: the i-th component in the first n components only contains Si literals, and the (n + 1)-th component contains no Si literals. These components are disconnected from each other, so we can consider each of them separately. The (n + 1)-th component comes from clauses in β and is domain-liftable by definition. The following two lemmas prove that the clauses in the other components are also domain-liftable. The proofs of both lemmas rely on domain recursion.
Lemma 1. A clausal theory α(S) with only one predicate S where all clauses have exactly two different literals of S is domain-liftable.
Lemma 2. Suppose {Δp1, Δp2, . . . , Δpn} are mutually exclusive subsets of Δx and {Δq1, Δq2, . . . , Δqm} are mutually exclusive subsets of Δy. We can add any unit clause of the form ∀pi ∈ Δpi, qj ∈ Δqj : S(pi, qj) or ∀pi ∈ Δpi, qj ∈ Δqj : ¬S(pi, qj) to the theory α(S) in Lemma 1 and the theory is still domain-liftable.
Therefore, theories in S²FO² and S²RU are domain-liftable.

⁴ This can be done by realizing that the theory is disconnected in the grounding for every pair (A, B) of individuals and applying the lifted case-analysis.
It can be easily verified that membership checking for S²FO² and S²RU is not harder than for FO² and RU, respectively.
Example 3. Suppose we have a set Δj of jobs and a set Δv of volunteers. Every volunteer must be assigned to at most one job, and every job requires no more than one person. If the job involves working with gas, the assigned volunteer must be a non-smoker. And we know that smokers are most probably friends with each other. Then we will have the following first-order theory:
∀v1, v2 ∈ Δv, j ∈ Δj : ¬Assigned(v1, j) ∨ ¬Assigned(v2, j)
∀v ∈ Δv, j1, j2 ∈ Δj : ¬Assigned(v, j1) ∨ ¬Assigned(v, j2)
∀v ∈ Δv, j ∈ Δj : ¬InvolvesGas(j) ∨ ¬Assigned(v, j) ∨ ¬Smokes(v)
∀v1, v2 ∈ Δv : Aux(v1, v2) ⇔ (Smokes(v1) ∧ Friends(v1, v2) ⇒ Smokes(v2))
Predicate Aux is added to capture the probability assigned to the last rule (as in MLNs). This theory is not in FO², not in RU, and is not domain-liftable using R. However, the first two clauses are of the form described in Lemma 1, the third and fourth are in FO² (and also in RU), and the third clause, which contains Assigned(v, j), has no other PRVs with more than one LV. Therefore, this theory is in S²FO² (and also in S²RU) and domain-liftable based on Theorem 1.
Example 4. Consider the birthday paradox introduced in Section 4. After Skolemization [30] for removing the existential quantifier, the theory contains ∀p ∈ Δp, d ∈ Δd : S(p) ∨ ¬Born(p, d), ∀p ∈ Δp, d1, d2 ∈ Δd : ¬Born(p, d1) ∨ ¬Born(p, d2), and ∀p1, p2 ∈ Δp, d ∈ Δd : ¬Born(p1, d) ∨ ¬Born(p2, d), where S is the Skolem predicate. This theory is not in FO², not in RU, and is not domain-liftable using R. However, the last two clauses belong to clauses in Lemma 1, the first one is in FO² (and also in RU) and has no PRVs with more than one LV other than Born. Therefore, this theory is in S²FO² (and also in S²RU) and domain-liftable based on Theorem 1.
Proposition 4. FO² ⊂ RU, FO² ⊂ S²FO², FO² ⊂ S²RU, RU ⊂ S²RU, S²FO² ⊂ S²RU.
Proof. Let T ∈ FO² and T′ be any of the theories resulting from exhaustively applying rules in R except lifted case-analysis on T. If T initially contains a unary PRV with predicate S, either it is still unary in T′ or lifted decomposition has replaced the LV with a constant. In the first case, we can follow a generic branch of lifted case-analysis on S, and in the second case, either T′ is empty or all binary PRVs in T have become unary in T′ due to applying the lifted decomposition, and we can follow a generic branch of lifted case-analysis for any of these PRVs. The generic branch in both cases is in FO² and the same procedure can be followed until all theories become empty. If T initially contains only binary PRVs, lifted decomposition applies as the grounding of T is disconnected for each pair of individuals, and after lifted decomposition all PRVs have no LVs. Applying case-analysis on all PRVs gives empty theories. Therefore, T ∈ RU. The theory ∀x, y, z ∈ Δp : F(x, y) ∨ F(y, z) ∨ F(x, y, z) is an example of a RU theory that is not in FO², showing RU ⊄ FO². FO² and RU are special cases of S²FO² and S²RU respectively, where α = ∅, showing FO² ⊂ S²FO² and RU ⊂ S²RU. However, Example 3 is both in S²FO² and S²RU but is not in FO² and not in RU, showing S²FO² ⊄ FO² and S²RU ⊄ RU. Since FO² ⊂ RU and the class of added α(S) clauses is the same, S²FO² ⊂ S²RU.
[Figure 1: Run-times for calculating the WFOMC of (a) the theory in Example 3, (b) the S4 clause, and (c) symmetric transitivity, using the WFOMC-v3.0 software (which only uses R) and comparing it to the case where we use the domain recursion rule, referred to as Domain Recursion in the diagrams. Each panel plots time in seconds against population size.]
6 Experiments and Results
In order to see the effect of using domain recursion in practice, we find the WFOMC of three theories with and without using the domain recursion rule: (a) the theory in Example 3, (b) the S4 clause, and (c) the symmetric-transitivity theory. We implemented the domain recursion rule in C++ and compiled the code using the g++ compiler. We compare our results with the WFOMC-v3.0 software.⁵ Since this software requires domain-liftable input theories, for the first theory we grounded the jobs, for the second we grounded Δx, and for the third we grounded Δp. For each of these three theories, assuming |Δx| = n for all LVs x in the theory, we varied n and plotted the run-time as a function of n. All experiments were done on a 2.8 GHz core with 4 GB RAM under Mac OS X. The run-times are reported in seconds. We allowed a maximum of 1000 seconds for each run.

⁵ Available at: https://dtai.cs.kuleuven.be/software/wfomc
Obtained results can be viewed in Fig. 1. These results are consistent with our theory and indicate the clear advantage of using the domain recursion rule in practice. In Fig. 1(a), the slope of the diagram for domain recursion is approximately 4, which indicates the degree of the polynomial for the time complexity. A similar analysis can be done for the results on the S4 clause and the symmetric-transitivity clauses represented in Fig. 1(b), (c). The slopes of these two diagrams are around 5 and 2 respectively, indicating that the time complexities for finding their WFOMC are n⁵ and n² respectively, where n is the size of the population.
7 Discussion
We can categorize theories with respect to the domain recursion rule as: (1) theories proved to be domain-liftable using domain recursion (e.g., S4, symmetric transitivity, and theories in S²FO²), (2) theories that are domain-liftable using domain recursion, but we have not yet identified them as such, and (3) theories that are not domain-liftable even when using domain recursion. We leave discovering and characterizing the theories in categories 2 and 3 as future work. But here we show that even though the theories in category 3 are not domain-liftable using domain recursion, this rule may still result in exponential speedups for these theories.
Consider the (non-symmetric) transitivity rule: ∀x, y, z ∈ Δp : ¬Friend(x, y) ∨ ¬Friend(y, z) ∨ Friend(x, z). Since none of the rules in R apply to the above theory, the existing lifted inference engines ground Δp and calculate the weighted model count (WMC) of the ground theory. By grounding Δp, these engines lose great amounts of symmetry. Suppose Δp = {A, B, C} and assume we select Friend(A, B) and Friend(A, C) as the first two random variables for case-analysis. Due to the exchangeability of the individuals, the case where Friend(A, B) and Friend(A, C) are assigned True and False respectively has the same WMC as the case where they are assigned False and True. However, the current engines fail to exploit this symmetry as they consider grounded individuals non-exchangeable.
By applying domain recursion to the above theory instead of fully grounding it, one can exploit the symmetries of the theory. Suppose Δp′ = Δp − {P}. Then we can rewrite the theory as follows:
∀y, z ∈ Δp′ : ¬Friend(P, y) ∨ ¬Friend(y, z) ∨ Friend(P, z)
∀x, z ∈ Δp′ : ¬Friend(x, P) ∨ ¬Friend(P, z) ∨ Friend(x, z)
∀x, y ∈ Δp′ : ¬Friend(x, y) ∨ ¬Friend(y, P) ∨ Friend(x, P)
∀x, y, z ∈ Δp′ : ¬Friend(x, y) ∨ ¬Friend(y, z) ∨ Friend(x, z)
Now if we apply lifted case-analysis on Friend(P, y) (or equivalently on Friend(P, z)), we do not get back the same theory with reduced population, and calculating the WFOMC is still exponential. However, we generate the branch for the case where Friend(P, y) is True only once. This branch covers both the symmetric cases mentioned above. Exploiting these symmetries reduces the time-complexity exponentially.
This suggests that for any given theory, when the rules in R are not applicable one may want to try
the domain recursion rule before giving up and resorting to grounding a population.
8 Conclusion
We identified new classes of domain-liftable theories called S²FO² and S²RU by reviving the domain recursion rule. We also demonstrated how this rule is useful for theories outside these classes. Our work opens up a future research direction for identifying and characterizing larger classes of theories that are domain-liftable using domain recursion. It also helps us get closer to finding a dichotomy between the theories that are domain-liftable and those that are not, similar to the dichotomy result of Dalvi and Suciu [6] for query answering in probabilistic databases.
It has been shown [15, 16] that compiling the WFOMC rules into low-level programs (e.g., C++ programs) offers an (approx.) 175× speedup compared to other approaches. While compiling the previously known rules to low-level programs was straightforward, compiling the domain recursion rule to low-level programs without using recursion might be tricky as it relies on the population size of the logical variables. A future research direction would be determining whether the domain recursion rule can be efficiently compiled into low-level programs, and measuring the amount of speedup it offers.
Acknowledgements. AK is supported by the Research Foundation Flanders (FWO). GVdB is partially supported
by NSF (#IIS-1633857).
References
[1] Babak Ahmadi, Kristian Kersting, and Sriraam Natarajan. Lifted online training of relational models with stochastic gradient methods. In ECML PKDD, pages 585–600, 2012.
[2] W. W. Rouse Ball. Other questions on probability. Mathematical Recreations and Essays, page 45, 1960.
[3] Paul Beame, Guy Van den Broeck, Eric Gribkoff, and Dan Suciu. Symmetric weighted first-order model counting. In PODS, pages 313–328, 2015.
[4] Hung Hai Bui, Tuyen N. Huynh, and Sebastian Riedel. Automorphism groups of graphical models and lifted variational inference. In UAI, page 132, 2013.
[5] Jaesik Choi, Rodrigo de Salvo Braz, and Hung H. Bui. Efficient methods for lifted inference with aggregate factors. In AAAI, 2011.
[6] Nilesh Dalvi and Dan Suciu. Efficient query evaluation on probabilistic databases. The VLDB Journal, 16(4):523–544, 2007.
[7] Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. ProbLog: A probabilistic Prolog and its application in link discovery. In IJCAI, volume 7, 2007.
[8] Luc De Raedt, Kristian Kersting, Sriraam Natarajan, and David Poole. Statistical relational artificial intelligence: Logic, probability, and computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(2):1–189, 2016.
[9] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In IJCAI, pages 1319–1325, 2005.
[10] Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning. MIT Press, 2007.
[11] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In UAI, pages 256–265, 2011.
[12] Manfred Jaeger. Relational Bayesian networks. In UAI. Morgan Kaufmann Publishers Inc., 1997.
[13] Yacine Jernite, Alexander M. Rush, and David Sontag. A fast variational approach for learning Markov random field language models. In ICML, 2015.
[14] Abhay Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In NIPS, pages 973–981, 2010.
[15] Seyed Mehran Kazemi and David Poole. Knowledge compilation for lifted probabilistic inference: Compiling to a low-level language. In KR, 2016.
[16] Seyed Mehran Kazemi and David Poole. Why is compiling lifted inference into a low-level language so effective? arXiv preprint arXiv:1606.04512, 2016.
[17] Kristian Kersting, Babak Ahmadi, and Sriraam Natarajan. Counting belief propagation. In UAI, pages 277–284, 2009.
[18] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge, MA, 2009.
[19] Timothy Kopp, Parag Singla, and Henry Kautz. Lifted symmetry detection and breaking for MAP inference. In NIPS, pages 1315–1323, 2015.
[20] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulae. In AAAI, pages 1062–1068, 2008.
[21] Mathias Niepert. Markov chains on orbits of permutation groups. In UAI, 2012.
[22] David Poole, Fahiem Bacchus, and Jacek Kisynski. Towards completely lifted search-based probabilistic inference. arXiv:1107.4035 [cs.AI], 2011.
[23] David Poole. First-order probabilistic inference. In IJCAI, pages 985–991, 2003.
[24] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62:107–136, 2006.
[25] Parag Singla and Pedro M. Domingos. Lifted first-order belief propagation. In AAAI, volume 8, pages 1094–1099, 2008.
[26] Dan Suciu, Dan Olteanu, Christopher Ré, and Christoph Koch. Probabilistic databases. Synthesis Lectures on Data Management, 3(2):1–180, 2011.
[27] Nima Taghipour, Daan Fierens, Guy Van den Broeck, Jesse Davis, and Hendrik Blockeel. Completeness results for lifted variable elimination. In AISTATS, pages 572–580, 2013.
[28] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In IJCAI, pages 2178–2185, 2011.
[29] Guy Van den Broeck, Arthur Choi, and Adnan Darwiche. Lifted relax, compensate and then recover: From approximate to exact lifted probabilistic inference. In UAI, 2012.
[30] Guy Van den Broeck, Wannes Meert, and Adnan Darwiche. Skolemization for weighted first-order model counting. In KR, 2014.
[31] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In NIPS, pages 1386–1394, 2011.
[32] Jan Van Haaren, Guy Van den Broeck, Wannes Meert, and Jesse Davis. Lifted generative learning of Markov logic networks. Machine Learning, pages 1–29, 2015.
[33] Deepak Venugopal and Vibhav Gogate. Evidence-based clustering for scalable inference in Markov logic. In ECML PKDD, pages 258–273, 2014.
[34] Deepak Venugopal and Vibhav G. Gogate. Scaling-up importance sampling for Markov logic networks. In NIPS, pages 2978–2986, 2014.
Conflict-free Asynchronous Machine Learning
Xinghao Pan?, Maximilian Lam?, Stephen Tu?, Dimitris Papailiopoulos?,
Ce Zhang?, Michael I. Jordan?,? Kannan Ramchandran?, Chris Re?, Benjamin Recht??
Abstract
We present C YCLADES, a general framework for parallelizing stochastic optimization algorithms in a shared memory setting. C YCLADES is asynchronous during
model updates, and requires no memory locking mechanisms, similar to H OG WILD !-type algorithms. Unlike H OGWILD !, C YCLADES introduces no conflicts
during parallel execution, and offers a black-box analysis for provable speedups
across a large family of algorithms. Due to its inherent cache locality and conflictfree nature, our multi-core implementation of C YCLADES consistently outperforms
H OGWILD !-type algorithms on sufficiently sparse datasets, leading to up to 40%
speedup gains compared to H OGWILD !, and up to 5? gains over asynchronous
implementations of variance reduction algorithms.
1 Introduction
Following the seminal work of HOGWILD! [17], many studies have demonstrated that near-linear speedups are achievable on several machine learning tasks via asynchronous, lock-free implementations [25, 13, 8, 16]. In all of these studies, classic algorithms are parallelized by simply running parallel and asynchronous model updates without locks. These lock-free, asynchronous algorithms exhibit speedups even when applied to large, non-convex problems, as demonstrated by deep learning systems such as Google's Downpour SGD [6] and Microsoft's Project Adam [4]. While these techniques have been remarkably successful, many of the above papers require delicate and tailored analyses to quantify the benefits of asynchrony for each particular learning task. Moreover, in non-convex settings, we currently have little quantitative insight into how much speedup is gained from asynchrony.
In this work, we present CYCLADES, a general framework for lock-free, asynchronous machine learning algorithms that obviates the need for specialized analyses. CYCLADES runs asynchronously and maintains serial equivalence, i.e., it produces the same outcome as the serial algorithm. Since it returns the same output as a serial implementation, any algorithm parallelized by our framework inherits the correctness proof of the serial counterpart without modifications. Additionally, if a particular serially run heuristic is popular, but does not have a rigorous analysis, CYCLADES still guarantees that its execution will return a serially equivalent output.
CYCLADES achieves serial equivalence by partitioning updates among cores, in a way that ensures that there are no conflicts across partitions. Such a partition can always be found efficiently by leveraging a powerful result on graph phase transitions [12]. When applied to our setting, this result guarantees that a sufficiently small sample of updates will have only a logarithmic number of conflicts. This allows us to evenly partition model updates across cores, with the guarantee that all conflicts are localized within each core. Given enough problem sparsity, CYCLADES guarantees a nearly linear
† Department of Electrical Engineering and Computer Science, UC Berkeley, Berkeley, CA.
‡ Department of Computer Science, Stanford University, Palo Alto, CA.
⋆ Department of Statistics, UC Berkeley, Berkeley, CA.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
speedup, while inheriting all the qualitative properties of the serial counterpart of the algorithm, e.g., proofs for rates of convergence. Enforcing a serially equivalent execution in CYCLADES comes with additional practical benefits. Serial equivalence is helpful for hyperparameter tuning, or locating the best model produced by the asynchronous execution, since experiments are reproducible, and solutions are easily verifiable. Moreover, a CYCLADES program is easy to debug because bugs are repeatable and we can examine the step-wise execution to localize them.
A significant benefit of the update partitioning in CYCLADES is that it induces considerable access locality compared to the more unstructured nature of the memory accesses during HOGWILD!. Cores will access the same data points and read/write the same subset of model variables. This has the additional benefit of reducing false sharing across cores. Because of these gains, CYCLADES can actually outperform HOGWILD! in practice on sufficiently sparse problems, despite appearing to require more computational overhead. Remarkably, because of the added locality, even a single-threaded implementation of CYCLADES can actually be faster than serial SGD. In our SGD experiments for matrix completion and word embedding problems, CYCLADES can offer a speedup gain of up to 40% compared to that of HOGWILD!. Furthermore, for variance reduction techniques such as SAGA [7] and SVRG [11], CYCLADES yields better accuracy and more significant speedups, with up to 5× performance gains over HOGWILD!-type implementations.
2 The Algorithmic Family of Stochastic Updates
We study parallel asynchronous iterative algorithms on the computational model used by [17]: several cores have access to the same shared memory, and each of them can read and update components of the shared memory. In this work, we consider a family of randomized algorithms that we refer to as Stochastic Updates (SU). The main algorithmic component of SU focuses on updating small subsets of a model variable x, according to prefixed access patterns, as sketched by Alg. 1.

Algorithm 1 Stochastic Updates
1: Input: x; f1, . . . , fn; T
2: for t = 1 : T do
3:    sample i ~ D
4:    x_{S_i} = u_i(x_{S_i}, f_i)
5: Output: x

In Alg. 1, S_i is a subset of the coordinates in x, each function f_i operates on the subset S_i of coordinates, and u_i is a local update function that computes a vector with support on S_i using as input x_{S_i} and f_i. Moreover, T is the total number of iterations, and D is the distribution with support {1, . . . , n} from which we draw i. Several machine learning algorithms belong to the SU algorithmic family, such as stochastic gradient descent (SGD), with or without weight decay and regularization, variance-reduced learning algorithms like SAGA and SVRG, and even some combinatorial graph algorithms. In our supplemental material, we explain how these algorithms can be phrased in the SU language.
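A minimal Python sketch of the SU template (an SGD instance on sparse least squares; the data layout and names are illustrative, not the paper's implementation): the update u_i touches only the coordinates S_i where data point a_i is non-zero.

```python
import numpy as np

def stochastic_updates(x, a_rows, b, n_iters, stepsize, rng):
    n = len(a_rows)
    for _ in range(n_iters):
        i = rng.integers(n)                    # sample i ~ D (uniform here)
        idx, vals = a_rows[i]                  # sparse support S_i of a_i
        residual = vals @ x[idx] - b[i]
        x[idx] -= stepsize * residual * vals   # update only x restricted to S_i
    return x

rng = np.random.default_rng(0)
# Two toy sparse rows, stored as (indices, values):
a_rows = [(np.array([0, 2]), np.array([1.0, -1.0])),
          (np.array([1, 2]), np.array([0.5, 2.0]))]
b = np.array([1.0, 0.0])
x = stochastic_updates(np.zeros(3), a_rows, b, n_iters=200, stepsize=0.1, rng=rng)
print(x)
```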
The updates conflict graph  A useful construct for our developments is the conflict graph between updates, which can be generated from the bipartite graph between the updates and the model variables. We define these graphs below, and provide an illustrative sketch in Fig. 1.
Definition 1. Let Gu denote the bipartite update-variable graph between the n updates and the d model variables. An update ui is linked to a variable xj if ui requires to read/write xj. Let Eu denote the number of edges in the bipartite graph, ΔL the max left degree of Gu, and Δ̄L the average left degree. Finally, we denote by Gc the conflict graph on the n updates. Two vertices in Gc are linked if the corresponding updates share at least one variable in Gu. We also denote by Δ the max vertex degree of Gc.
[Figure 1: In the bipartite graph Gu, an update ui is linked to variable xj when it needs to read/write it. From Gu we obtain the conflict graph Gc, whose max degree is Δ. If that is small, we expect that it is possible to parallelize updates without too many conflicts. CYCLADES exploits this intuition.]
We stress that the conflict graph is never constructed, but is useful for understanding CYCLADES.

Our Main Result  By exploiting the structure of the above graphs and through a light-weight sampling and allocation of updates, CYCLADES guarantees the following result for SU algorithms, which we establish in the following sections.
Theorem 1 (informal). Let an SU algorithm A be defined through n update rules, where the max conflict degree between the n updates is Δ, and the sampling distribution D is uniform with (or without) replacement from {1, . . . , n}. Moreover, assume that we wish to run A for T = Θ(n) iterations, and that ΔL/Δ̄L ≤ √n. Then on up to P = Õ(n/(Δ · Δ̄L)) cores, CYCLADES guarantees a Ω̃(P) speedup over A, while outputting the same solution x as A would do after the same random set of T iterations.⁴

We now provide two examples of how these guarantees translate for specific problem cases.
Example 1. In many applications we seek to minimize min_x (1/n) Σ_{i=1}^n ℓ_i(a_iᵀx), where a_i represents the i-th data point, x is the parameter vector, and ℓ_i is a loss. Several problems can be formulated in this way, such as logistic regression, least squares, binary classification, etc. If we tackle the above problem using SGD, or techniques like SVRG and SAGA, then (as we show in the supplemental) the update sparsity is determined by the gradient of a single sampled data point a_i. Here, we will have ΔL = max_i ||a_i||₀, and Δ will be equal to the maximum number of data points a_i that share at least one feature. As a toy example, let n/d = Θ(1) and let the non-zero support of a_i be of size n^δ and uniformly distributed. Then, one can show that with high probability Δ = Õ(n^{1/2+δ}) and hence CYCLADES achieves an Ω̃(P) speedup on up to P = Õ(n^{1/2−2δ}) cores.
Example 2. Consider the generic optimization min_{x_i, y_j} Σ_{i=1}^m Σ_{j=1}^m φ_{i,j}(x_i, y_j), which captures several problems like matrix completion and factorization [17], word embeddings [2], graph k-way cuts [17], etc. Assume that we aim to minimize the above by sampling a single function φ_{i,j} and then updating x_i and y_j using SGD. Here, the number of update functions is proportional to n = m², and each gradient update with respect to the sampled function φ_{i,j}(x_i, y_j) only interacts with the variables x_i and y_j, i.e., only two variable vectors out of the 2m vectors (i.e., ΔL = 2). This also implies a conflict degree of at most Δ = 2m. Here, CYCLADES can provably guarantee an Ω̃(P) speedup for up to P = O(m) cores.
In our experiments we test CYCLADES on several problems including least squares, classification with logistic models, matrix factorization, and word embeddings, and several algorithms including SGD, SVRG, and SAGA. We show that in most cases it can significantly outperform the HOGWILD! implementation of these algorithms, if the data is sparse.
Remark 1. We would like to note that there are several cases where there might be a few outlier updates with extremely high conflict degree. In the supplemental material, we prove that if there are no more than O(n^δ) vertices of high conflict degree Δo, and the rest of the vertices have max degree at most Δ, then the result of Theorem 1 still holds in expectation.
In the following section, we establish the theory of CYCLADES and provide the details behind our parallelization framework.
3 CYCLADES: Shattering Dependencies

CYCLADES consists of three computational components, as shown in Fig. 2. It starts by sampling (according to a distribution D) a number of B updates from the graph shown in Fig. 1, and assigns a label to each of them (a processing order). After sampling, it computes the connected components of the sampled subgraph induced by the B sampled updates, to determine the conflict groups. Once the conflict groups are formed, it allocates them across P cores. Finally, each core processes locally the conflict groups of updates that it has been assigned, following the order that each update has been labeled with. The above process is then repeated, for as many iterations as needed. The key component of CYCLADES is to carry out the sampling in such a way that we have as many connected components as possible, and all of them of small size, provably. In the next subsections, we explain how each part is carried out, and provide theoretical guarantees for each of them individually, which we combine at the end of this section for our main theorem.

[Figure 2: CYCLADES samples updates, finds conflict-groups, and allocates them across cores (sample batch + connected components; allocation across cores 1, . . . , P; asynchronous and lock-free stochastic updates; batch synchronization). Each core asynchronously updates the model, without access conflicts. This is possible by processing the conflicting updates within the same core.]

⁴ Ω̃(·) and Õ(·) hide polylog factors.
A key technical aspect that we exploit in CYCLADES is that appropriate sampling and allocation of updates can lead to near-optimal parallelization of SU algorithms. To do that we expand upon the following result established in [12].
Theorem 2. Let G be a graph on n vertices, with max degree Δ. Let us sample each vertex independently with probability p = (1 − ε)/Δ and define as G′ the induced subgraph on the sampled vertices. Then, the largest connected component of G′ has size at most (4/ε²) · log n, with high probability.
The above result pays homage to the giant component phase transition phenomena in random Erdős–Rényi graphs. What is surprising is that similar phase transitions apply to any given graph!
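A quick empirical illustration of Theorem 2 (a sketch, not part of the paper's experiments): build a graph whose degrees concentrate around a target Δ, sample vertices with probability (1 − ε)/Δ, and measure the largest induced connected component.

```python
import random
from collections import defaultdict, deque

def largest_sampled_cc(n, Delta, eps, rng):
    adj = defaultdict(set)
    for u in range(n):                            # ~Delta/2 random neighbors each,
        for v in rng.sample(range(n), Delta // 2):  # so degrees concentrate near Delta
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    kept = {u for u in range(n) if rng.random() < (1 - eps) / Delta}
    seen, best = set(), 0
    for s in kept:
        if s in seen:
            continue
        seen.add(s)
        size, queue = 0, deque([s])
        while queue:                              # BFS over the induced subgraph
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v in kept and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

rng = random.Random(0)
print(largest_sampled_cc(n=20000, Delta=20, eps=0.1, rng=rng))  # O(log n) sized
```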
In practice, for most SU algorithms of interest, the sampling distribution of updates is either with or without replacement from the n updates. As it turns out, morphing Theorem 2 into a with-/without-replacement result is not straightforward. We defer the analysis needed to the supplemental material, and present our main theorem about graph sampling here.
Theorem 3. Let G be a graph on n vertices, with max degree Δ. Let us sample B = (1 − ε)n/Δ vertices with or without replacement, and define as G′ the induced subgraph on the sampled vertices. Then, the largest connected component of G′ has size at most O(log n / ε²), with high probability.
The key idea from the above is that if one samples no more than B = (1 − ε)n/Δ updates, then there will be at least O(ε²B/log n) conflict groups to allocate across cores, each of size at most O(log n/ε²). Since there are no conflicts between different conflict-groups, the processing of updates per any single group will never interact with the variables corresponding to the updates of another conflict group.
The next step of CYCLADES is to form and allocate the connected components (CCs) across cores, efficiently. We address this in the following subsection. In the following, for brevity we focus on the with-replacement sampling case, but the results can be extended to the without-replacement case.
Identifying groups of conflict  In CYCLADES, we sample batches of updates of size B multiple times, and for each batch we need to identify the conflict groups across the updates. Let us refer to G^i_u as the subgraph induced by the i-th sampled batch of updates on the update-variable graph Gu. In the following we always assume that we sample n_b = c · Δ/(1 − ε) batches, where c ≥ 1 is a constant. This number of batches results in a constant number of passes over the dataset. Then, identifying the conflict groups in G^i_u can be done with a connected components (CC) algorithm. The main question we need to address is what is the best way to parallelize this graph partitioning part. In the supplemental, we provide the details of this part, and prove the following result:
Lemma 1. Let the number of cores be P = O(n/(Δ · Δ̄L)) and let ΔL/Δ̄L ≤ √n. Then, the overall computation of CCs for n_b = c · Δ/(1 − ε) batches, each of size B = (1 − ε)n/Δ, costs no more than O(Eu/P · log² n).
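A sketch of this step (the paper's implementation is in C++; names here are illustrative): updates in a sampled batch that share any model variable are merged with union-find, yielding the conflict groups, each of which can then be processed serially on one core.

```python
def find_conflict_groups(batch, update_to_vars):
    parent = {u: u for u in batch}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    owner = {}  # first update in the batch seen to touch each variable
    for u in batch:
        for var in update_to_vars[u]:
            if var in owner:
                union(u, owner[var])
            else:
                owner[var] = u

    groups = {}
    for u in batch:
        groups.setdefault(find(u), []).append(u)
    # Within each group, preserve the sampling order (serial equivalence).
    return [sorted(g) for g in groups.values()]

update_to_vars = {0: [0, 1], 1: [1, 2], 2: [5], 3: [6, 7]}
print(find_conflict_groups([0, 1, 2, 3], update_to_vars))  # [[0, 1], [2], [3]]
```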
Allocating updates to cores  Once we compute the CCs (i.e., the conflict groups of the sampled updates), we have to allocate them across cores. Once a core has been assigned with CCs, it will process the updates included in these CCs, according to the order that each update has been labeled with. Due to Theorem 3, each connected component will contain at most O(log n/ε²) updates. Assuming that the cost of the j-th update in the batch is w_j, the cost of a single connected component C will be w_C = Σ_{j∈C} w_j. To proceed with characterizing the maximum load among the P cores, we assume that the cost of a single update u_i, for i ∈ {1, . . . , n}, is proportional to the out-degree of that update (according to the update-variable graph Gu) times a constant cost which we shall refer to as κ. Hence, w_j = O(d_{L,j} · κ), where d_{L,j} is the degree of the j-th left vertex of Gu. In the supplemental material, we establish that a near-uniform allocation of CCs according to their weights leads to the following guarantee.
Lemma 2. Let the number of cores be bounded as P = O(n/(Δ · Δ̄L)), and let ΔL/Δ̄L ≤ √n. Then, computing the stochastic updates across all n_b = c · Δ/(1 − ε) batches can be performed in time O((Eu · κ/P) · log² n), with high probability, where κ is the per-edge cost for computing one of the n updates defined on Gu.
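A sketch of one simple way to realize such a near-uniform allocation (a greedy longest-processing-time heuristic; the paper's exact allocation procedure is in the supplemental): repeatedly assign the heaviest remaining conflict group to the least-loaded core.

```python
import heapq

def allocate_groups(groups, weights, P):
    loads = [(0.0, core, []) for core in range(P)]
    heapq.heapify(loads)
    for g in sorted(groups, key=lambda g: -weights[g]):
        load, core, assigned = heapq.heappop(loads)   # least-loaded core
        assigned.append(g)
        heapq.heappush(loads, (load + weights[g], core, assigned))
    return {core: assigned for _, core, assigned in loads}

weights = {0: 5.0, 1: 3.0, 2: 2.0, 3: 2.0, 4: 1.0}
print(allocate_groups(list(weights), weights, P=2))
```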
Stitching the pieces together  Now that we have described the sampling, conflict computation, and allocation strategies, we are ready to put all the pieces together and detail CYCLADES in full. Let us assume that we sample a total number of n_b = c · Δ/(1 − ε) batches of size B = (1 − ε)n/Δ, and that each update is sampled uniformly at random. For the i-th batch let us denote as C^i_1, . . . , C^i_{m_i} the connected components on the induced subgraph G^i_u. Due to Theorem 3, each connected component C contains at most O(log n/ε²) updates; each update carries an ID (the order in which it would have been sampled by the serial algorithm). Using the above notation, we give the pseudocode for CYCLADES in Alg. 2. Note that the inner loop that is parallelized (i.e., the SU processing loop in lines 6–9) can be performed asynchronously; cores do not have to synchronize, and do not need to lock any memory variables, as they are all accessing non-overlapping subsets of x. This also provides for better cache coherence. Moreover, each core potentially accesses the same coordinates several times, leading to good cache locality. These improved cache locality and coherence properties experimentally lead to substantial performance gains, as we see in the next section. We can now combine the results of the previous subsections to obtain our main theorem for CYCLADES.
Theorem 4. Let us assume any given update-variable graph Gu with ΔL and Δ̄L, such that ΔL/Δ̄L ≤ √n, and with induced max conflict degree Δ. Then, CYCLADES on P = O(n/(Δ · Δ̄L)) cores, with batch sizes B = (1 − ε)n/Δ, can execute T = c · n updates, for any constant c ≥ 1, selected uniformly at random with replacement, in time O((Eu · κ/P) · log² n), with high probability.
Algorithm 2 CYCLADES
1: Input: Gu, n_b.
2: Sample n_b subgraphs G¹_u, . . . , G^{n_b}_u from Gu
3: Compute in parallel CCs for sampled subgraphs
4: for batch i = 1 : n_b do
5:    Allocate C^i_1, . . . , C^i_{m_i} to P cores
6:    for each core in parallel do
7:       for each allocated component C do
8:          for each ordered update j from C do
9:             x_{S_j} = u_j(x_{S_j}, f_j)
10: Output: x

Observe that CYCLADES bypasses the need to establish convergence guarantees for the parallel algorithm. Hence, it could be the case for an application of interest that we cannot analyze how a serial SU algorithm performs in terms of, say, the accuracy of the solution, but CYCLADES can still provide black-box guarantees for speedup, since our analysis is completely oblivious to the qualitative performance of the serial algorithm. This is in contrast to recent studies similar to [5], where the authors provide speedup guarantees via a convergence-to-optimal proof for an asynchronous SGD on a nonconvex problem. Unfortunately these proofs can become complicated on a wider range of nonconvex objectives.
In the following section we show that CYCLADES is not only useful theoretically, but can consistently outperform HOGWILD! on sufficiently sparse datasets.
4 Evaluation
We implemented CYCLADES⁵ in C++ and tested it on a variety of problems and a number of stochastic updates algorithms, and compared against their HOGWILD! (i.e., asynchronous, lock-free) implementations. Since CYCLADES is intended to be a general SU parallelization framework, we do not compare against algorithms tailored to specific applications, nor do we expect CYCLADES to outperform every such highly-tuned, well-designed, specific algorithm. Our experiments were conducted on a machine with 72 CPUs (Intel(R) Xeon(R) CPU E7-8870 v3, 2.10 GHz) on 4 NUMA nodes, each with 18 CPUs, and 1 TB of memory. We ran CYCLADES and HOGWILD! with 1, 4, 8, 16 and 18 threads pinned to CPUs on a single NUMA node (i.e., the maximum physical number of cores per single node), to avoid well-known cache coherence and scaling issues across nodes [24].

⁵ Code is available at https://github.com/amplab/cyclades.
Dataset   | # datapoints | # features | av. sparsity / datapoint | Comments
NH2010    | 48,838       | 48,838     | 4.8026 | Topological graph
DBLP      | 5,425,964    | 5,425,964  | 3.1880 | Authorship network
MovieLens | ~10M         | 82,250     | 200    | 10M movie ratings
EN-Wiki   | 20,207,156   | 213,272    | 200    | Subset of English Wikipedia dump

Table 1: Details of datasets used in our experiments.
In our experiments, we measure overall running times which include the overheads for computing connected components and allocating work in CYCLADES. We also compute the objective value at the end of each epoch (i.e., one full pass over the data). We measure the speedup for each algorithm as the time for the serial algorithm to reach an ε objective divided by the time for the parallel algorithm to reach the same ε objective, where ε was chosen to be the smallest objective value that is achievable by all parallel algorithms on every choice of number of threads. The serial algorithm used for comparison is HOGWILD! running serially on one thread. In Table 1 we list some details of the datasets that we use in our experiments. We tune our constant stepsizes so as to maximize convergence without diverging, and use one random data reshuffling across all epochs. Batch sizes are picked to optimize performance for CYCLADES.
[Figure 3: Convergence of CYCLADES and HOGWILD! in terms of overall running time with 1, 8, 16, 18 threads: (a) least squares with SAGA on DBLP; (b) graph eigenvector with SVRG on NH2010; (c) matrix completion with ℓ2-regularized SGD on MovieLens 10M; (d) Word2Vec with SGD on EN-Wiki. CYCLADES is initially slower, but ultimately reaches convergence faster than HOGWILD!.]
[Figure 4: Speedup of CYCLADES and HOGWILD! versus number of threads, for the same four problems: (a) least squares with SAGA on DBLP; (b) graph eigenvector with SVRG on NH2010; (c) matrix completion with ℓ2-regularized SGD on MovieLens 10M; (d) Word2Vec with SGD on EN-Wiki. On multiple threads, CYCLADES always reaches the ε objective faster than HOGWILD!. In some cases CYCLADES is faster than HOGWILD! even on 1 thread, due to better cache locality. In Figs. 4(a) and 4(b), CYCLADES exhibits significant gains since HOGWILD! suffers from asynchrony noise, and we had to use comparatively smaller stepsizes to prevent it from diverging.]
Least squares via SAGA  The first problem we consider is least squares: min_x (1/n) Σ_{i=1}^n (a_iᵀx − b_i)², which we will solve using the SAGA algorithm [7], an incremental gradient algorithm with faster-than-SGD rates on convex, or strongly convex, functions. In SAGA, we initialize g_i = ∇f_i(x_0) and iterate the following two steps: x_{k+1} = x_k − γ(∇f_{s_k}(x_k) − g_{s_k} + (1/n) Σ_{i=1}^n g_i) and g_{s_k} = ∇f_{s_k}(x_k), where f_i(x) = (a_iᵀx − b_i)². In the above iteration it is useful to observe that the updates can be performed in a sparse and "lazy" way, as we explain in detail in our supplemental material.
Figure 5: Convergence of C YThe stepsizes chosen for each of C YCLADES and H OGWILD ! were CLADES and H OGWILD ! on least
largest such that the algorithms did not diverge. We used the DBLP squares using SAGA, with 16
and NH2010 datasets for this experiment, and set A as the adjacency threads, on DBLP dataset. H OG matrix of each graph. For NH2010, the values of b were set to WILD ! diverges with > 10 5 ;
population living in the Census Block. For DBLP we used synthetic thus, we were only able to use a
5
? and z
? were generated smaller step size = 10 for
values: we set b = A?
x + 0.1?
z, where x
H
OGWILD
!
on
multiple
threads.
randomly. The SAGA algorithm was run for 500 epochs for each
dataset. When running SAGA for least squares, we found that For H OGWILD ! on 1 thread (and
H OGWILD ! was divergent with the large stepsizes that we were C YCLADES on any number of
threads), we could use a larger
using for C YCLADES (Fig. 5). Thus, in the multi-thread setting, stepsize of = 3 ? 10 4 .
we were only able to use smaller stepsizes for H OGWILD !, which
resulted in slower convergence than C YCLADES, as seen in Fig. 3(a). The effects of a smaller stepsize
for H OGWILD ! are also manifested in terms of speedups in Fig. 4(a), since H OGWILD ! takes a longer
time to converge to an ? objective value.
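To make the discussion concrete, the following is a minimal dense NumPy sketch of the SAGA
iteration for this least-squares objective; the variable names (A, b, gamma) are our own, and the
sparse "lazy" bookkeeping described in the supplemental material is omitted for clarity.

    import numpy as np

    def saga_least_squares(A, b, gamma=1e-4, epochs=500, seed=0):
        """Minimal dense SAGA sketch for min_x (1/n) * sum_i (a_i^T x - b_i)^2.

        Gradient of f_i(x) = (a_i^T x - b_i)^2 is 2 * (a_i^T x - b_i) * a_i.
        """
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        # Table of stale per-example gradients, initialized at x_0 = 0.
        g = 2.0 * (A @ x - b)[:, None] * A          # shape (n, d)
        g_avg = g.mean(axis=0)
        for _ in range(epochs):
            for _ in range(n):
                i = rng.integers(n)
                g_new = 2.0 * (A[i] @ x - b[i]) * A[i]
                # SAGA step: x <- x - gamma * (g_new - g_old + mean of table).
                x -= gamma * (g_new - g[i] + g_avg)
                # Update the gradient table and its running mean.
                g_avg += (g_new - g[i]) / n
                g[i] = g_new
        return x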
Graph eigenvector via SVRG. Given an adjacency matrix A, the top eigenvector of A^T A is useful
in several applications such as spectral clustering, principal component analysis, and others. In a
recent work, [10] proposes an algorithm for computing the top eigenvector of A^T A by running
intermediate SVRG steps to approximate the shift-and-invert iteration. Specifically, at each step
SVRG is used to solve:
    min_x Σ_{i=1}^n [ (1/2) x^T (λ/n · I − a_i a_i^T) x − (1/n) b^T x ],
where a_i is the i-th column of A. According to [10], if we initialize y = x_0 and assume ||a_i|| = 1,
we have to iterate the updates
    x_{k+1} = x_k − γ · (n · (∇f_{s_k}(x_k) − ∇f_{s_k}(y)) + ∇f(y)),
where after every T iterations we update y = x_k, and the stochastic gradients are of the form
∇f_i(x) = (λ/n · I − a_i a_i^T) x − (1/n) b.
We apply CYCLADES to the above SVRG iteration (see supplemental) for parallelizing this problem.
We run experiments on two graphs: DBLP and NH2010. We ran SVRG for 50 and 100 epochs for
NH2010 and DBLP respectively. The convergence of SVRG for graph eigenvectors is shown in
Fig. 3(b). CYCLADES starts off slower than HOGWILD!, but always produces results equivalent to
the convergence on a single thread. HOGWILD! does not exhibit the same behavior on multiple
threads as it does serially; asynchrony noise causes HOGWILD! to converge slower on multiple
threads. This effect is clearly seen in Fig. 4(b), where HOGWILD! fails to converge faster than the
serial counterpart, and CYCLADES attains a significantly better speedup on 16 threads.
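For concreteness, here is a minimal NumPy sketch of the shift-and-invert SVRG iteration above;
lam, gamma, T, and outer are hypothetical values, and the CYCLADES-specific conflict-free
scheduling is not shown.

    import numpy as np

    def svrg_shift_invert(A, b, lam, gamma=1e-3, T=100, outer=50, seed=0):
        """Minimal SVRG sketch for
        min_x sum_i [ (1/2) x^T (lam/n * I - a_i a_i^T) x - (1/n) b^T x ],
        where a_i is the i-th column of A and
        grad f_i(x) = (lam/n * I - a_i a_i^T) x - (1/n) b.
        """
        rng = np.random.default_rng(seed)
        d, n = A.shape
        x = rng.standard_normal(d)

        def grad_i(v, i):
            a = A[:, i]
            return (lam / n) * v - a * (a @ v) - b / n

        def full_grad(v):
            # Sum of grad_i over all i: lam * v - A A^T v - b.
            return lam * v - A @ (A.T @ v) - b

        for _ in range(outer):
            y = x.copy()
            gy = full_grad(y)                    # anchor gradient at y
            for _ in range(T):
                i = rng.integers(n)
                # Variance-reduced step using the anchor point y.
                x -= gamma * (n * (grad_i(x, i) - grad_i(y, i)) + gy)
        return x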
Matrix completion and word embeddings via SGD. In matrix completion we are given a partially
observed matrix M, and wish to factorize it as M ≈ UV, where U and V are low-rank matrices with
dimensions n × r and r × m respectively. This may be achieved by optimizing
    min_{U,V} Σ_{(i,j)∈Ω} (M_{i,j} − U_{i,·} V_{·,j})^2 + (λ/2) (||U||_F^2 + ||V||_F^2),
where Ω is the set of observed entries, which can be approximated by SGD on the observed samples.
The regularized objective can be optimized by weighted SGD. In our experiments, we chose a rank
of r = 100, and ran SGD and weighted SGD for 200 epochs. We used the MovieLens 10M dataset
containing 10M ratings for 10K movies by 72K users.
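As an illustration, the following is a minimal sketch of plain SGD over the observed entries of the
objective above; the stepsize and regularization weight are hypothetical, and the sampling weights
of the weighted-SGD variant are omitted.

    import numpy as np

    def sgd_matrix_completion(entries, n, m, r=100, gamma=0.01, lam=0.1,
                              epochs=200, seed=0):
        """Minimal SGD sketch for min_{U,V} sum (M_ij - U_i V_j)^2
        + (lam/2) * (||U||_F^2 + ||V||_F^2); `entries` is a list of (i, j, M_ij)."""
        rng = np.random.default_rng(seed)
        U = 0.1 * rng.standard_normal((n, r))
        V = 0.1 * rng.standard_normal((r, m))
        for _ in range(epochs):
            for idx in rng.permutation(len(entries)):
                i, j, m_ij = entries[idx]
                err = m_ij - U[i] @ V[:, j]
                # Gradient step on the sampled entry plus local regularization.
                u_i = U[i].copy()
                U[i] += gamma * (2 * err * V[:, j] - lam * u_i)
                V[:, j] += gamma * (2 * err * u_i - lam * V[:, j])
        return U, V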
Our second task that uses SGD is word embeddings, which aim to represent the meaning of a word
w via a vector v_w ∈ R^d. A recent work by [2] proposes to solve:
    min_{{v_w}, C} Σ_{w,w'} A_{w,w'} (log(A_{w,w'}) − ||v_w + v_{w'}||_2^2 − C)^2,
where A_{w,w'} is the number of times words w and w' co-occur within τ words in the corpus. In our
experiments we set τ = 10 following the suggested recipe of the aforementioned paper. We can
approximate the solution to the above problem using SGD: we repeatedly sample entries A_{w,w'}
from A and update the corresponding vectors v_w, v_{w'}. Then, at the end of each full pass over the
data, we update the constant C by its locally optimal value, which can be calculated in closed form.
In our experiments, we optimized for a word embedding of dimension d = 100, and tested on an
80MB subset of the English Wikipedia dump. For our experiments, we run SGD for 200 epochs.
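A minimal sketch of this procedure is given below; `cooc` (a dict mapping word-index pairs to
co-occurrence counts) and the stepsize are our own illustrative choices, and the closed-form
update for C follows from minimizing the A-weighted quadratic in C.

    import numpy as np

    def word2vec_sgd(cooc, vocab_size, d=100, gamma=1e-3, epochs=200, seed=0):
        """Minimal SGD sketch for
        min sum_{w,w'} A_ww' * (log A_ww' - ||v_w + v_w'||^2 - C)^2."""
        rng = np.random.default_rng(seed)
        V = 0.1 * rng.standard_normal((vocab_size, d))
        C = 0.0
        pairs = list(cooc.items())
        for _ in range(epochs):
            for idx in rng.permutation(len(pairs)):
                (w, w2), a = pairs[idx]
                s = V[w] + V[w2]
                resid = np.log(a) - s @ s - C
                # d/dv_w of a * resid^2 is -4 * a * resid * (v_w + v_w2);
                # the gradient w.r.t. v_w2 is identical.
                grad = -4.0 * a * resid * s
                V[w] -= gamma * grad
                V[w2] -= gamma * grad
            # Locally optimal C: A-weighted mean of the residual target.
            num = sum(a * (np.log(a) - (V[w] + V[w2]) @ (V[w] + V[w2]))
                      for (w, w2), a in pairs)
            den = sum(a for _, a in pairs)
            C = num / den
        return V, C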
Figs. 3(c) and 3(d) show the convergence for the matrix completion and word embeddings problems.
CYCLADES is initially slower than HOGWILD! due to the overhead of computing connected
components. However, due to better cache locality and convergence properties, CYCLADES is able
to reach a lower objective value in less time than HOGWILD!. In fact, we observe that CYCLADES
is faster than HOGWILD! when both are run serially, demonstrating that the gains from (temporal)
cache locality outweigh the coordination overhead of CYCLADES. These results are reflected in the
speedups of CYCLADES and HOGWILD! (Figs. 4(c) and 4(d)). CYCLADES consistently achieves a
better speedup (up to 11× on 18 threads) compared to that of HOGWILD! (up to 9× on 18 threads).
Partitioning and allocation costs. The cost of partitioning and allocation for CYCLADES is given
in Table 2, relative to the time that HOGWILD! takes to complete a single pass over the dataset. For
matrix completion and the graph eigenvector problem, on 18 threads, CYCLADES takes the equivalent
of 4-6 epochs of HOGWILD! to complete its partitioning, as the problem is either very sparse or the
updates are expensive. For solving least squares using SAGA and word embeddings using SGD, the
cost of partitioning is equivalent to 11-14 epochs of HOGWILD! on 18 threads. However, we point
out that partitioning and allocation is a one-time cost which becomes cheaper with more stochastic
update epochs. Additionally, note that this cost can become amortized due to the extra experiments
one has to run for hyperparameter tuning, since the graph partitioning is identical across different
stepsizes one might want to test. (Note: it has come to our attention post submission that parts of
our partitioning and allocation code could be further parallelized; we refer the reader to our
arXiv paper 1605.09721 for the latest results.)
Binary classification and dense coordinates. Here we explore settings where CYCLADES is
expected to perform poorly due to the inherent density of updates (i.e., for data sets with dense
features). In particular, we test CYCLADES on a classification problem for text-based data.
Specifically, we run classification for the URL dataset [15], which contains ~2.4M URLs, labeled as
either benign or malicious,
# threads    Least Squares     Graph Eig.       Mat. Comp.            Word2Vec
             SAGA, DBLP        SVRG, NH2010     l2-SGD, MovieLens     SGD, EN-Wiki
1            2.2245            0.9039           0.5507                0.5299
18           14.1792           4.7639           5.5270                3.9362

Table 2: Ratio of the time that CYCLADES consumes for partition and allocation over the time that
HOGWILD! takes for 1 full pass over the dataset. On 18 threads, CYCLADES takes between 4-14
HOGWILD! epochs to perform partitioning. Note however, this computational effort is only required
once per dataset.
and 3.2M features, including a bag-of-words representation of tokens in the URL. For this
classification task, we used a logistic regression model, trained using SGD. Due to its power-law
nature, the dataset consists of a small number of extremely dense features which occur in nearly all
updates. Since CYCLADES explicitly avoids conflicts, it has a schedule of SGD updates that leads to
poor speedups.
However, we observe that most conflicts are caused by a small percentage of the densest features.
If these features are removed from the dataset, CYCLADES is able to obtain much better speedups.
The speedups that are obtained by CYCLADES and HOGWILD! on 16 threads for different filtering
percentages are shown in Figure 6, and a sketch of the filtering step is given below. Full results
of the experiment are presented in the supplemental material. CYCLADES fails to get much speedup
when nearly all the features are used. However, as more dense features are removed, CYCLADES
obtains a better speedup, almost equalling HOGWILD!'s speedup when 0.048% of the densest
features are filtered.
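To make the filtering step concrete, here is a minimal sketch that drops a given fraction of the
densest features from a sparse design matrix; the use of SciPy's CSC format and the default
fraction are our own illustrative choices, not part of the original pipeline.

    import numpy as np
    import scipy.sparse as sp

    def filter_densest_features(X, frac=0.00048):
        """Drop the `frac` fraction of densest columns (features) of sparse X.

        Dense columns touch nearly every update and create most conflicts, so
        removing a tiny fraction of them restores the sparsity needed for
        CYCLADES-style partitioning.
        """
        X = X.tocsc()
        # Nonzero count per column = how many datapoints each feature touches.
        counts = np.diff(X.indptr)
        k = int(np.ceil(frac * X.shape[1]))
        keep = np.argsort(counts)[: X.shape[1] - k]   # drop the k densest columns
        return X[:, np.sort(keep)]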
Figure 6: Speedups of CYCLADES and HOGWILD! on 16 threads, for different percentages of dense
features filtered. When only a very small number of features are filtered, CYCLADES is almost
serial. However, as we increase the percentage from 0.016% to 0.048%, the speedup of CYCLADES
improves and almost catches up with HOGWILD!.

5 Related work

The end of Moore's Law coupled with recent advances in parallel and distributed computing
technologies have triggered renewed interest in parallel stochastic optimization [26, 9, 1, 22]. Much
of this contemporary work is built upon the foundational work of Bertsekas, Tsitsiklis et al. [3, 23].
Inspired by HOGWILD!'s success at achieving nearly linear speedups for a variety of machine learning
tasks, several authors developed other lock-free and asynchronous optimization algorithms, such as
parallel stochastic coordinate descent [13]. Additional work in first-order optimization and beyond
[8, 21, 5] has further demonstrated that linear speedups are generically possible in the asynchronous
shared-memory setting.
Other machine learning algorithms have been parallelized using concurrency control, including
non-parametric clustering [18], submodular maximization [19], and correlation clustering [20].
Sparse, graph-based parallel computation is supported by systems like GraphLab [14]. These
frameworks require computation to be written in a specific programming model with associative,
commutative operations. GraphLab and PowerGraph support serializable execution via locking
mechanisms; this is in contrast to our partition-and-allocate coordination, which allows us to
provide guarantees on speedup.
6 Conclusion

We presented CYCLADES, a general framework for lock-free parallelization of stochastic optimization
algorithms, while maintaining serial equivalence. Our framework can be used to parallelize a
large family of stochastic update algorithms in a conflict-free manner, thereby ensuring that the
parallelized algorithm produces the same result as its serial counterpart. Theoretical properties, such
as convergence rates, are therefore preserved by the CYCLADES-parallelized algorithm, and we
provide a single unified theoretical analysis that guarantees near-linear speedups.
By eliminating conflicts across processors within each batch of updates, CYCLADES is able to avoid
all asynchrony errors and conflicts, and leads to better cache locality and cache coherence than
HOGWILD!. These features of CYCLADES translate to near-linear speedups in practice, where it can
outperform HOGWILD!-type implementations by up to a factor of 5× in terms of speedup.
In the future, we intend to explore hybrids of CYCLADES with HOGWILD!, pushing the boundaries
of what is possible in a shared-memory setting. We are also considering solutions for scaling out in a
distributed setting, where the cost of communication is significantly higher.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In NIPS, pages 873-881, 2011.
[2] S. Arora, Y. Li, Y. Liang, T. Ma, and A. Risteski. Rand-walk: A latent variable model approach to word
embeddings. arXiv:1502.03520, 2015.
[3] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and distributed computation: numerical methods, volume 23.
Prentice Hall, Englewood Cliffs, NJ, 1989.
[4] T. Chilimbi, Y. Suzue, J. Apacible, and K. Kalyanaraman. Project Adam: Building an efficient and scalable
deep learning training system. In USENIX OSDI, 2014.
[5] C. De Sa, C. Zhang, K. Olukotun, and C. Ré. Taming the wild: A unified analysis of hogwild!-style
algorithms. arXiv:1506.06438, 2015.
[6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al.
Large scale distributed deep networks. In NIPS, 2012.
[7] A. Defazio, F. Bach, and S. Lacoste-Julien. Saga: A fast incremental gradient method with support for
non-strongly convex composite objectives. In NIPS, pages 1646-1654, 2014.
[8] J. Duchi, M. I. Jordan, and B. McMahan. Estimation, optimization, and parallelism when data is sparse. In
NIPS, pages 2832-2840, 2013.
[9] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed
stochastic gradient descent. In Proceedings of the 17th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pages 69-77. ACM, 2011.
[10] C. Jin, S. M. Kakade, C. Musco, P. Netrapalli, and A. Sidford. Robust shift-and-invert preconditioning:
Faster and more sample efficient algorithms for eigenvector computation. arXiv:1510.08896, 2015.
[11] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In
NIPS, pages 315-323, 2013.
[12] M. Krivelevich. The phase transition in site percolation on pseudo-random graphs. The Electronic Journal
of Combinatorics, 23(1):1-12, 2016.
[13] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence
properties. SIAM Journal on Optimization, 25(1):351-376, 2015.
[14] Y. Low, J. E. Gonzalez, A. Kyrola, D. Bickson, C. E. Guestrin, and J. Hellerstein. Graphlab: A new
framework for parallel machine learning. arXiv:1408.2041, 2014.
[15] J. Ma, L. K. Saul, S. Savage, and G. M. Voelker. Identifying suspicious urls: an application of large-scale
online learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages
681-688. ACM, 2009.
[16] H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed iterate
analysis for asynchronous stochastic optimization. arXiv:1507.06970, 2015.
[17] F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient
descent. In NIPS, pages 693-701, 2011.
[18] X. Pan, J. E. Gonzalez, S. Jegelka, T. Broderick, and M. I. Jordan. Optimistic concurrency control for
distributed unsupervised learning. In NIPS 26, 2013.
[19] X. Pan, S. Jegelka, J. E. Gonzalez, J. K. Bradley, and M. I. Jordan. Parallel double greedy submodular
maximization. In NIPS 27, 2014.
[20] X. Pan, D. Papailiopoulos, S. Oymak, B. Recht, K. Ramchandran, and M. I. Jordan. Parallel correlation
clustering on big graphs. In NIPS, pages 82-90, 2015.
[21] S. J. Reddi, A. Hefny, S. Sra, B. Póczos, and A. Smola. On variance reduction in stochastic gradient
descent and its asynchronous variants. arXiv:1506.06840, 2015.
[22] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. arXiv:1212.0873,
2012.
[23] J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic
gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803-812, 1986.
[24] C. Zhang and C. Ré. Dimmwitted: A study of main-memory statistical analytics. Proceedings of the VLDB
Endowment, 7(12):1283-1294, 2014.
[25] Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin. A fast parallel sgd for matrix factorization in shared
memory systems. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 249-256.
ACM, 2013.
[26] M. Zinkevich, J. Langford, and A. J. Smola. Slow learners are fast. In NIPS, pages 2331-2339, 2009.
Wider and Deeper, Cheaper and Faster:
Tensorized LSTMs for Sequence Learning
Zhen He^{1,2}, Shaobing Gao^3, Liang Xiao^2, Daxue Liu^2, Hangen He^2, and David Barber^{1,4*}
1 University College London, 2 National University of Defense Technology, 3 Sichuan University,
4 Alan Turing Institute
Abstract
Long Short-Term Memory (LSTM) is a popular approach to boosting the ability
of Recurrent Neural Networks to store longer term temporal information. The
capacity of an LSTM network can be increased by widening and adding layers.
However, usually the former introduces additional parameters, while the latter
increases the runtime. As an alternative we propose the Tensorized LSTM in
which the hidden states are represented by tensors and updated via a cross-layer
convolution. By increasing the tensor size, the network can be widened efficiently
without additional parameters since the parameters are shared across different
locations in the tensor; by delaying the output, the network can be deepened
implicitly with little additional runtime since deep computations for each timestep
are merged into temporal computations of the sequence. Experiments conducted on
five challenging sequence learning tasks show the potential of the proposed model.
1 Introduction
We consider the time-series prediction task of producing a desired output y_t at each timestep
t ∈ {1, . . . , T} given an observed input sequence x_{1:t} = {x_1, x_2, · · · , x_t}, where x_t ∈ R^R and
y_t ∈ R^S are vectors^1. The Recurrent Neural Network (RNN) [17, 43] is a powerful model that
learns how to use a hidden state vector h_t ∈ R^M to encapsulate the relevant features of the entire
input history x_{1:t} up to timestep t. Let h^{cat}_{t-1} ∈ R^{R+M} be the concatenation of the
current input x_t and the previous hidden state h_{t-1}:
    h^{cat}_{t-1} = [x_t, h_{t-1}]                                                              (1)
The update of the hidden state h_t is defined as:
    a_t = h^{cat}_{t-1} W^h + b^h                                                               (2)
    h_t = φ(a_t)                                                                                (3)
where W^h ∈ R^{(R+M)×M} is the weight, b^h ∈ R^M the bias, a_t ∈ R^M the hidden activation, and
φ(·) the element-wise tanh function. Finally, the output y_t at timestep t is generated by:
    y_t = ϕ(h_t W^y + b^y)                                                                      (4)
where W^y ∈ R^{M×S} and b^y ∈ R^S, and ϕ(·) can be any differentiable function, depending on the task.
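As a reference point for the tensorized variants introduced below, a minimal NumPy sketch of one
step of eqs. (1)-(4) might look as follows; taking the output nonlinearity ϕ to be the identity is
our simplifying assumption.

    import numpy as np

    def rnn_step(x_t, h_prev, W_h, b_h, W_y, b_y):
        """One vanilla RNN step, eqs. (1)-(4), with row-vector convention.

        x_t: (R,); h_prev: (M,); W_h: (R + M, M); W_y: (M, S).
        """
        h_cat = np.concatenate([x_t, h_prev])    # eq. (1)
        a_t = h_cat @ W_h + b_h                  # eq. (2)
        h_t = np.tanh(a_t)                       # eq. (3)
        y_t = h_t @ W_y + b_y                    # eq. (4), identity output function
        return h_t, y_t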
However, this vanilla RNN has difficulties in modeling long-range dependencies due to the vanishing/exploding gradient problem [4]. Long Short-Term Memories (LSTMs) [19, 24] alleviate
* Corresponding authors: Shaobing Gao <[email protected]> and Zhen He <[email protected]>.
1: Vectors are assumed to be in row form throughout this paper.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
these problems by employing memory cells to preserve information for longer, and adopting gating
mechanisms to modulate the information flow. Given the success of the LSTM in sequence modeling,
it is natural to consider how to increase the complexity of the model and thereby increase the set of
tasks for which the LSTM can be profitably applied.
We consider the capacity of a network to consist of two components: the width (the amount of
information handled in parallel) and the depth (the number of computation steps) [5]. A naive way
to widen the LSTM is to increase the number of units in a hidden layer; however, the parameter
number scales quadratically with the number of units. To deepen the LSTM, the popular Stacked
LSTM (sLSTM) stacks multiple LSTM layers [20]; however, runtime is proportional to the number
of layers and information from the input is potentially lost (due to gradient vanishing/explosion) as it
propagates vertically through the layers.
In this paper, we introduce a way to both widen and deepen the LSTM whilst keeping the parameter
number and runtime largely unchanged. In summary, we make the following contributions:
(a) We tensorize RNN hidden state vectors into higher-dimensional tensors which allow more flexible
parameter sharing and can be widened more efficiently without additional parameters.
(b) Based on (a), we merge RNN deep computations into its temporal computations so that the
network can be deepened with little additional runtime, resulting in a Tensorized RNN (tRNN).
(c) We extend the tRNN to an LSTM, namely the Tensorized LSTM (tLSTM), which integrates a
novel memory cell convolution to help to prevent the vanishing/exploding gradients.
2 Method

2.1 Tensorizing Hidden States
It can be seen from (2) that in an RNN, the parameter number scales quadratically with the size of the
hidden state. A popular way to limit the parameter number when widening the network is to organize
parameters as higher-dimensional tensors which can be factorized into lower-rank sub-tensors that
contain significantly fewer elements [6, 15, 18, 26, 32, 39, 46, 47, 51], which is known as tensor
factorization. This implicitly widens the network since the hidden state vectors are in fact broadcast to
interact with the tensorized parameters. Another common way to reduce the parameter number is to
share a small set of parameters across different locations in the hidden state, similar to Convolutional
Neural Networks (CNNs) [34, 35].
We adopt parameter sharing to cut down the parameter number for RNNs, since compared with
factorization, it has the following advantages: (i) scalability, i.e., the number of shared parameters
can be set independent of the hidden state size, and (ii) separability, i.e., the information flow can be
carefully managed by controlling the receptive field, allowing one to shift RNN deep computations to
the temporal domain (see Sec. 2.2). We also explicitly tensorize the RNN hidden state vectors, since
compared with vectors, tensors have a better: (i) flexibility, i.e., one can specify which dimensions
to share parameters and then can just increase the size of those dimensions without introducing
additional parameters, and (ii) efficiency, i.e., with higher-dimensional tensors, the network can be
widened faster w.r.t. its depth when fixing the parameter number (see Sec. 2.3).
For ease of exposition, we first consider 2D tensors (matrices): we tensorize the hidden state
h_t ∈ R^M to become H_t ∈ R^{P×M}, where P is the tensor size, and M the channel size. We locally-connect the
first dimension of Ht in order to share parameters, and fully-connect the second dimension of Ht to
allow global interactions. This is analogous to the CNN which fully-connects one dimension (e.g.,
the RGB channel for input images) to globally fuse different feature planes. Also, if one compares
Ht to the hidden state of a Stacked RNN (sRNN) (see Fig. 1(a)), then P is akin to the number of
stacked hidden layers, and M the size of each hidden layer. We start to describe our model based on
2D tensors, and finally show how to strengthen the model with higher-dimensional tensors.
2.2 Merging Deep Computations
Since an RNN is already deep in its temporal direction, we can deepen an input-to-output computation
by associating the input xt with a (delayed) future output. In doing this, we need to ensure that the
output y_t is separable, i.e., not influenced by any future input x_{t'} (t' > t). Thus, we concatenate
the projection of x_t to the top of the previous hidden state H_{t-1}, then gradually shift the input
Figure 1: Examples of sRNN, tRNNs and tLSTMs. (a) A 3-layer sRNN. (b) A 2D tRNN without (−)
feedback (F) connections, which can be thought of as a skewed version of (a). (c) A 2D tRNN. (d) A 2D
tLSTM without (−) memory (M) cell convolutions. (e) A 2D tLSTM. In each model, the blank circles
in column 1 to 4 denote the hidden state at timestep t−1 to t+2, respectively, and the blue region
denotes the receptive field of the current output y_t. In (b)-(e), the outputs are delayed by L−1 = 2
timesteps, where L = 3 is the depth.
information down as the temporal computation proceeds, and finally generate y_t from the bottom
of H_{t+L−1}, where L−1 is the number of delayed timesteps for computations of depth L. An example
with L = 3 is shown in Fig. 1(b). This is in fact a skewed sRNN as used in [1] (also similar to [48]).
However, our method does not need to change the network structure and also allows different kinds
of interactions as long as the output is separable, e.g., one can increase the local connections and use
feedback (see Fig. 1(c)), which can be beneficial for sRNNs [10]. In order to share parameters, we
update H_t using a convolution with a learnable kernel. In this manner we increase the complexity of
the input-to-output mapping (by delaying outputs) and limit parameter growth (by sharing transition
parameters using convolutions).
To describe the resulting tRNN model, let H^{cat}_{t−1} ∈ R^{(P+1)×M} be the concatenated hidden
state, and p ∈ Z_+ the location at a tensor. The channel vector h^{cat}_{t−1,p} ∈ R^M at location p
of H^{cat}_{t−1} is defined as:
    h^{cat}_{t−1,p} = x_t W^x + b^x   if p = 1;    h^{cat}_{t−1,p} = h_{t−1,p−1}   if p > 1      (5)
where W^x ∈ R^{R×M} and b^x ∈ R^M. Then, the update of tensor H_t is implemented via a convolution:
    A_t = H^{cat}_{t−1} ⊛ {W^h, b^h}                                                            (6)
    H_t = φ(A_t)                                                                                (7)
where W^h ∈ R^{K×M^i×M^o} is the kernel weight of size K, with M^i = M input channels and
M^o = M output channels, b^h ∈ R^{M^o} is the kernel bias, A_t ∈ R^{P×M^o} is the hidden
activation, and ⊛ is the convolution operator (see Appendix A.1 for a more detailed definition).
Since the kernel convolves across different hidden layers, we call it the cross-layer convolution.
The kernel enables interaction, both bottom-up and top-down, across layers. Finally, we generate
y_t from the channel vector h_{t+L−1,P} ∈ R^M which is located at the bottom of H_{t+L−1}:
    y_t = ϕ(h_{t+L−1,P} W^y + b^y)                                                              (8)
where W^y ∈ R^{M×S} and b^y ∈ R^S. To guarantee that the receptive field of y_t only covers the
current and previous inputs x_{1:t} (see Fig. 1(c)), L, P, and K should satisfy the constraint:
    L = ⌈ 2P / (K − K mod 2) ⌉                                                                  (9)
where ⌈·⌉ is the ceil operation. For the derivation of (9), please see Appendix B.
We call the model defined in (5)-(8) the Tensorized RNN (tRNN). The model can be widened by
increasing the tensor size P, whilst the parameter number remains fixed (thanks to the convolution).
Also, unlike the sRNN of runtime complexity O(TL), tRNN breaks down the runtime complexity to
O(T+L), which means either increasing the sequence length T or the network depth L would not
significantly increase the runtime.
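To make the cross-layer convolution concrete, here is a minimal NumPy sketch of one tRNN step,
eqs. (5)-(8); the explicit loop implements a 1D convolution over the layer dimension, with zero
padding chosen so that the (P+1)-row concatenated state maps to P output locations, which is one
plausible reading of the convolution defined in Appendix A.1.

    import numpy as np

    def trnn_step(x_t, H_prev, W_x, b_x, W_h, b_h, W_y, b_y):
        """One tRNN step, eqs. (5)-(8).

        x_t: (R,) input; H_prev: (P, M) hidden tensor; W_x: (R, M);
        W_h: (K, M, M) cross-layer kernel; W_y: (M, S).
        """
        P, M = H_prev.shape
        K = W_h.shape[0]
        # Eq. (5): projected input at location p = 1, previous rows shifted below.
        H_cat = np.vstack([(x_t @ W_x + b_x)[None, :], H_prev])   # (P + 1, M)
        # Eq. (6): 1D convolution across the layer (tensor) dimension, padded so
        # that P + 1 input rows yield P output rows (our assumption about App. A.1).
        pad = max(K - 2, 0)
        H_pad = np.vstack([H_cat, np.zeros((pad, M))])
        A_t = np.stack([
            sum(H_pad[p + k] @ W_h[k] for k in range(K)) + b_h
            for p in range(P)
        ])
        H_t = np.tanh(A_t)                       # eq. (7)
        # Eq. (8): output read from the bottom channel vector; in the full model
        # this read happens L - 1 timesteps after x_t enters at the top.
        y_t = H_t[-1] @ W_y + b_y
        return H_t, y_t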
2.3 Extending to LSTMs
To allow the tRNN to capture long-range temporal dependencies, one can straightforwardly extend it
to an LSTM by replacing the tRNN tensor update equations of (6)-(7) as follows:
    [A^g_t, A^i_t, A^f_t, A^o_t] = H^{cat}_{t−1} ⊛ {W^h, b^h}                                   (10)
    [G_t, I_t, F_t, O_t] = [φ(A^g_t), σ(A^i_t), σ(A^f_t), σ(A^o_t)]                             (11)
    C_t = G_t ⊙ I_t + C_{t−1} ⊙ F_t                                                             (12)
    H_t = φ(C_t) ⊙ O_t                                                                          (13)
where the kernel {W^h, b^h} is of size K, with M^i = M input channels and M^o = 4M output channels,
A^g_t, A^i_t, A^f_t, A^o_t ∈ R^{P×M} are activations for the new content G_t, input gate I_t, forget
gate F_t, and output gate O_t, respectively, σ(·) is the element-wise sigmoid function, and
C_t ∈ R^{P×M} is the memory cell. However, since in (12) the previous memory cell C_{t−1} is only
gated along the temporal direction (see Fig. 1(d)), long-range dependencies from the input to output
might be lost when the tensor size P becomes large.
Memory Cell Convolution. To capture long-range dependencies from multiple directions, we
additionally introduce a novel memory cell convolution, by which the memory cells can have a larger
receptive field (see Fig. 1(e)). We also dynamically generate this convolution kernel so that it is
both time- and location-dependent, allowing for flexible control over long-range dependencies from
different directions. This results in our tLSTM tensor update equations:
    [A^g_t, A^i_t, A^f_t, A^o_t, A^q_t] = H^{cat}_{t−1} ⊛ {W^h, b^h}                            (14)
    [G_t, I_t, F_t, O_t, Q_t] = [φ(A^g_t), σ(A^i_t), σ(A^f_t), σ(A^o_t), ς(A^q_t)]              (15)
    W^c_t(p) = reshape(q_{t,p}, [K, 1, 1])                                                      (16)
    C^{conv}_{t−1} = C_{t−1} ⊛ W^c_t(p)                                                         (17)
    C_t = G_t ⊙ I_t + C^{conv}_{t−1} ⊙ F_t                                                      (18)
    H_t = φ(C_t) ⊙ O_t                                                                          (19)
where, in contrast to (10)-(13), the kernel {W^h, b^h} has additional ⟨K⟩ output channels^2 to generate
the activation A^q_t ∈ R^{P×⟨K⟩} for the dynamic kernel bank Q_t ∈ R^{P×⟨K⟩}, q_{t,p} ∈ R^{⟨K⟩} is
the vectorized adaptive kernel at the location p of Q_t, and W^c_t(p) ∈ R^{K×1×1} is the dynamic
kernel of size K with a single input/output channel, which is reshaped from q_{t,p} (see Fig. 2(a) for
an illustration). In (17), each channel of the previous memory cell C_{t−1} is convolved with W^c_t(p),
whose values vary with p, forming a memory cell convolution (see Appendix A.2 for a more detailed
definition), which produces a convolved memory cell C^{conv}_{t−1} ∈ R^{P×M}. Note that in (15) we
employ a softmax function ς(·) to normalize the channel dimension of Q_t, which, similar to [37], can
stabilize the value of memory cells and help to prevent the vanishing/exploding gradients (see
Appendix C for details).

2: The operator ⟨·⟩ returns the cumulative product of all elements in the input variable.

Figure 2: Illustration of generating the memory cell convolution kernel, where (a) is for 2D tensors
and (b) for 3D tensors.

The idea of dynamically generating network weights has been used in many works [6, 14, 15, 23, 44,
46], where in [14] location-dependent convolutional kernels are also dynamically generated to
improve CNNs. In contrast to these works, we focus on broadening the receptive field of tLSTM
memory cells. Whilst the flexibility is retained, fewer parameters are required to generate the kernel
since the kernel is shared by different memory cell channels.
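A minimal sketch of the memory cell convolution of eqs. (16)-(17) in the 2D case is given below;
the softmax-normalized kernel bank Q_t is assumed to be already computed via eq. (15), and 'same'
zero padding with an odd K along the tensor dimension is our assumption.

    import numpy as np

    def memory_cell_convolution(C_prev, Q_t):
        """Eqs. (16)-(17): convolve each location's memory channels with its own
        dynamic kernel.

        C_prev: (P, M) previous memory cell; Q_t: (P, K) softmax-normalized
        kernel bank, with K assumed odd ('same' zero padding).
        """
        P, M = C_prev.shape
        K = Q_t.shape[1]
        half = K // 2
        C_pad = np.vstack([np.zeros((half, M)), C_prev, np.zeros((half, M))])
        C_conv = np.empty_like(C_prev)
        for p in range(P):
            w = Q_t[p]                       # eq. (16): location-dependent kernel
            # eq. (17): the same K-tap kernel is shared by all M channels at p.
            C_conv[p] = w @ C_pad[p:p + K]
        return C_conv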
Channel Normalization. To improve training, we adapt Layer Normalization (LN) [3] to our
tLSTM. Similar to the observation in [3] that LN does not work well in CNNs where channel vectors
at different locations have very different statistics, we find that LN is also unsuitable for tLSTM
where lower level information is near the input while higher level information is near the output. We
therefore normalize the channel vectors at different locations with their own statistics, forming a
Channel Normalization (CN), with its operator CN(·):
    CN(Z; Γ, B) = Ẑ ⊙ Γ + B                                                                     (20)
where Z, Ẑ, Γ, B ∈ R^{P×M} are the original tensor, normalized tensor, gain parameter, and bias
parameter, respectively. The m_z-th channel of Z, i.e. z_{m_z} ∈ R^P, is normalized element-wisely:
    ẑ_{m_z} = (z_{m_z} − z^μ) / z^σ                                                             (21)
where z^μ, z^σ ∈ R^P are the mean and standard deviation along the channel dimension of Z,
respectively, and ẑ_{m_z} ∈ R^P is the m_z-th channel of Ẑ. Note that the number of parameters
introduced by CN/LN can be neglected as it is very small compared to the number of other parameters
in the model.
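A minimal sketch of CN as defined in (20)-(21) is shown below; the small epsilon for numerical
stability is our addition.

    import numpy as np

    def channel_norm(Z, gamma, beta, eps=1e-5):
        """Channel Normalization, eqs. (20)-(21).

        Z, gamma, beta: (P, M). Each of the P locations is normalized by the
        mean/std over its own M channels; eps is our numerical-stability addition.
        """
        mu = Z.mean(axis=1, keepdims=True)       # z^mu in eq. (21), one per location
        sigma = Z.std(axis=1, keepdims=True)     # z^sigma in eq. (21)
        Z_hat = (Z - mu) / (sigma + eps)         # eq. (21)
        return Z_hat * gamma + beta              # eq. (20)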
Using Higher-Dimensional Tensors. One can observe from (9) that when fixing the kernel size
K, the tensor size P of a 2D tLSTM grows linearly w.r.t. its depth L. How can we expand the tensor
volume more rapidly so that the network can be widened more efficiently? We can achieve this goal
by leveraging higher-dimensional tensors. Based on previous definitions for 2D tLSTMs, we replace
the 2D tensors with D-dimensional (D > 2) tensors, obtaining H_t, C_t ∈ R^{P_1×P_2×...×P_{D−1}×M}
with the tensor size P = [P_1, P_2, . . . , P_{D−1}]. Since the hidden states are no longer matrices, we
concatenate the projection of x_t to one corner of H_{t−1}, and thus (5) is extended as:
    h^{cat}_{t−1,p} = x_t W^x + b^x   if p_d = 1 for d = 1, 2, . . . , D−1;
    h^{cat}_{t−1,p} = h_{t−1,p−1}     if p_d > 1 for d = 1, 2, . . . , D−1;
    h^{cat}_{t−1,p} = 0               otherwise                                                 (22)
where h^{cat}_{t−1,p} ∈ R^M is the channel vector at location p ∈ Z_+^{D−1} of the concatenated
hidden state H^{cat}_{t−1} ∈ R^{(P_1+1)×(P_2+1)×...×(P_{D−1}+1)×M}. For the tensor update, the
convolution kernel W^h and W^c_t(·) also increase their dimensionality with kernel size
K = [K_1, K_2, . . . , K_{D−1}]. Note that W^c_t(·) is reshaped from the vector, as illustrated in
Fig. 2(b). Correspondingly, we generate the output y_t from the opposite corner of H_{t+L−1}, and
therefore (8) is modified as:
    y_t = ϕ(h_{t+L−1,P} W^y + b^y)                                                              (23)
For convenience, we set P_d = P and K_d = K for d = 1, 2, . . . , D−1 so that all dimensions of P and
K can satisfy (9) with the same depth L. In addition, CN still normalizes the channel dimension of
tensors.
3 Experiments

We evaluate tLSTM on five challenging sequence learning tasks under different configurations:
(a) sLSTM (baseline): our implementation of sLSTM [21] with parameters shared across all layers.
(b) 2D tLSTM: the standard 2D tLSTM, as defined in (14)-(19).
(c) 2D tLSTM−M: removing (−) memory (M) cell convolutions from (b), as defined in (10)-(13).
(d) 2D tLSTM−F: removing (−) feedback (F) connections from (b).
(e) 3D tLSTM: tensorizing (b) into 3D tLSTM.
(f) 3D tLSTM+LN: applying (+) LN [3] to (e).
(g) 3D tLSTM+CN: applying (+) CN to (e), as defined in (20).
To compare different configurations, we also use L to denote the number of layers of a sLSTM, and
M to denote the hidden size of each sLSTM layer. We set the kernel size K to 2 for 2D tLSTM−F
and 3 for other tLSTMs, in which case we have L = P according to (9).
For each configuration, we fix the parameter number and increase the tensor size to see if the
performance of tLSTM can be boosted without increasing the parameter number. We also investigate
how the runtime is affected by the depth, where the runtime is measured by the average GPU
milliseconds spent by a forward and backward pass over one timestep of a single example. Next, we
compare tLSTM against the state-of-the-art methods to evaluate its ability. Finally, we visualize the
internal working mechanism of tLSTM. Please see Appendix D for training details.
3.1 Wikipedia Language Modeling
The Hutter Prize Wikipedia dataset [25] consists of 100 million characters taken from 205 different
characters including alphabets, XML markups and special symbols. We model the dataset at the
character level, and try to predict the next character of the input sequence.
We fix the parameter number to 10M, corresponding to channel sizes M of 1120 for sLSTM and 2D
tLSTM−F, 901 for other 2D tLSTMs, and 522 for 3D tLSTMs. All configurations are evaluated with
depths L = 1, 2, 3, 4. We use Bits-per-character (BPC) to measure the model performance.
Results are shown in Fig. 3. When L ≤ 2, sLSTM and 2D tLSTM−F outperform other models
because of a larger M. With L increasing, the performances of sLSTM and 2D tLSTM−M improve
but become saturated when L ≥ 3, while tLSTMs with memory cell convolutions improve with
increasing L and finally outperform both sLSTM and 2D tLSTM−M. When L = 4, 2D tLSTM−F is
surpassed by 2D tLSTM, which is in turn surpassed by 3D tLSTM. The performance of 3D tLSTM+LN
benefits from LN only when L ≤ 2. However, 3D tLSTM+CN consistently improves 3D tLSTM with
different L.

Figure 3: Performance and runtime of different configurations on Wikipedia.
Whilst the runtime of sLSTM is almost proportional to L, it is nearly constant in each tLSTM
configuration and largely independent of L.
We compare a larger model, i.e. a 3D tLSTM+CN with L = 6 and M = 1200, to the state-of-the-art
methods on the test set, as reported in Table 1. Our model achieves 1.264 BPC with 50.1M parameters,
and is competitive to the best performing methods [38, 54] with similar parameter numbers.
Table 1: Test BPC on Wikipedia.

Method                              BPC      # Param.
MI-LSTM [51]                        1.44     ~17M
mLSTM [32]                          1.42     ~20M
HyperLSTM+LN [23]                   1.34     26.5M
HM-LSTM+LN [11]                     1.32     ~35M
Large RHN [54]                      1.27     ~46M
Large FS-LSTM-4 [38]                1.245    ~47M
2 × Large FS-LSTM-4 [38]            1.198    ~94M
3D tLSTM+CN (L = 6, M = 1200)       1.264    50.1M

3.2 Algorithmic Tasks
(a) Addition: The task is to sum two 15-digit integers. The network first reads two integers with
one digit per timestep, and then predicts the summation. We follow the processing of [30], where a
symbol '-' is used to delimit the integers as well as pad the input/target sequence. A 3-digit integer
addition task is of the form:
    Input:  - 1 2 3 - 9 0 0 - - - -
    Target: - - - - - - - - 1 0 2 3
(b) Memorization: The goal of this task is to memorize a sequence of 20 random symbols. Similar
to the addition task, we use 65 different symbols. A 5-symbol memorization task is of the form:
    Input:  -abccb------
    Target: -------abccb-
We evaluate all configurations with L = 1, 4, 7, 10 on both tasks, where M is 400 for addition and
100 for memorization. The performance is measured by the symbol prediction accuracy.

Figure 4: Performance and runtime of different configurations on the addition (left) and
memorization (right) tasks.
Fig. 4 shows the results. In both tasks, large L degrades the performances of sLSTM and 2D
tLSTM−M. In contrast, the performance of 2D tLSTM−F steadily improves with L increasing, and
is further enhanced by using feedback connections, higher-dimensional tensors, and CN, while LN
helps only when L = 1. Note that in both tasks, the correct solution can be found (when 100% test
accuracy is achieved) due to the repetitive nature of the task. In our experiment, we also observe
that for the addition task, 3D tLSTM+CN with L = 7 outperforms other configurations and finds the
solution with only 298K training samples, while for the memorization task, 3D tLSTM+CN with
L = 10 beats other configurations and achieves perfect memorization after seeing 54K training
samples. Also, unlike in sLSTM, the runtime of all tLSTMs is largely unaffected by L.
We further compare the best performing configurations to the state-of-the-art methods for both tasks
(see Table 2). Our models solve both tasks significantly faster (i.e., using fewer training samples)
than other models, achieving the new state-of-the-art results.

Table 2: Test accuracies on two algorithmic tasks.

                          Addition              Memorization
Method                    Acc.      # Samp.     Acc.      # Samp.
Stacked LSTM [21]         51%       5M          >50%      900K
Grid LSTM [30]            >99%      550K        >99%      150K
3D tLSTM+CN (L = 7)       >99%      298K        >99%      115K
3D tLSTM+CN (L = 10)      >99%      317K        >99%      54K

3.3 MNIST Image Classification
The MNIST dataset [35] consists of 50000/10000/10000 handwritten digit images of size 28×28 for
training/validation/test. We have two tasks on this dataset:
(a) Sequential MNIST: The goal is to classify the digit after sequentially reading the pixels in a
scanline order [33]. It is therefore a 784-timestep sequence learning task where a single output is
produced at the last timestep; the task requires very long range dependencies in the sequence.
(b) Sequential Permuted MNIST: We permute the original image pixels in a fixed random order as
in [2], resulting in a permuted MNIST (pMNIST) problem that has even longer range dependencies
across pixels and is harder.

Figure 5: Performance and runtime of different configurations on sequential MNIST (left) and
sequential pMNIST (right).

In both tasks, all configurations are evaluated with M = 100 and L = 1, 3, 5. The model performance
is measured by the classification accuracy.
Results are shown in Fig. 5. sLSTM and 2D tLSTM−M no longer benefit from the increased depth
when L = 5. Both increasing the depth and tensorization boost the performance of 2D tLSTM.
However, removing feedback connections from 2D tLSTM seems not to affect the performance. On
the other hand, CN enhances the 3D tLSTM, and when L ≥ 3 it outperforms LN. 3D tLSTM+CN
with L = 5 achieves the highest performances in both tasks, with a validation accuracy of 99.1% for
MNIST and 95.6% for pMNIST. The runtime of tLSTMs is negligibly affected by L, and all tLSTMs
become faster than sLSTM when L = 5.
Figure 6: Visualization of the diagonal channel means of the tLSTM memory cells for each task. In
each horizontal bar, the rows from top to bottom correspond to the diagonal locations from p_in to
p_out, the columns from left to right correspond to different timesteps (from 1 to T+L−1 for the full
sequence, where L−1 is the time delay), and the values are normalized to be in range [0, 1] for better
visualization. Both full sequences in (d) and (e) are zoomed out horizontally.
We also compare the configurations of the highest test accuracies to the state-of-the-art methods
(see Table 3). For sequential MNIST, our 3D tLSTM+CN with L = 3 performs as well as the
state-of-the-art Dilated GRU model [8], with a test accuracy of 99.2%. For the sequential pMNIST,
our 3D tLSTM+CN with L = 5 has a test accuracy of 95.7%, which is close to the state-of-the-art of
96.7% produced by the Dilated CNN [40] in [8].

Table 3: Test accuracies (%) on sequential MNIST/pMNIST.

Method                       MNIST    pMNIST
iRNN [33]                    97.0     82.0
LSTM [2]                     98.2     88.0
uRNN [2]                     95.1     91.4
Full-capacity uRNN [49]      96.9     94.1
sTANH [53]                   98.1     94.0
BN-LSTM [13]                 99.0     95.4
Dilated GRU [8]              99.2     94.6
Dilated CNN [40] in [8]      98.3     96.7
3D tLSTM+CN (L = 3)          99.2     94.9
3D tLSTM+CN (L = 5)          99.0     95.7
3.4 Analysis
The experimental results of different model configurations on different tasks suggest that the performance of tLSTMs can be improved by increasing the tensor size and network depth, requiring no
additional parameters and little additional runtime. As the network gets wider and deeper, we found
that the memory cell convolution mechanism is crucial to maintain improvement in performance.
Also, we found that feedback connections are useful for tasks of sequential output (e.g., our Wikipedia
and algorithmic tasks). Moreover, tLSTM can be further strengthened via tensorization or CN.
It is also intriguing to examine the internal working mechanism of tLSTM. Thus, we visualize the
memory cell, which gives insight into how information is routed. For each task, the best performing
tLSTM is run on a random example. We record the channel mean (the mean over channels, e.g., it is
of size P×P for 3D tLSTMs) of the memory cell at each timestep, and visualize the diagonal values
of the channel mean from location p_in = [1, 1] (near the input) to p_out = [P, P] (near the output).
Visualization results in Fig. 6 reveal the distinct behaviors of tLSTM when dealing with different tasks:
(i) Wikipedia: the input can be carried to the output location with less modification if it is sufficient
to determine the next character, and vice versa; (ii) addition: the first integer is gradually encoded
into memories and then interacts (performs addition) with the second integer, producing the sum; (iii)
memorization: the network behaves like a shift register that continues to move the input symbol to the
output location at the correct timestep; (iv) sequential MNIST: the network is more sensitive to the
pixel value change (representing the contour, or topology of the digit) and can gradually accumulate
evidence for the final prediction; (v) sequential pMNIST: the network is sensitive to high value pixels
(representing the foreground digit), and we conjecture that this is because the permutation destroys
the topology of the digit, making each high value pixel potentially important.
From Fig. 6 we can also observe common phenomena for all tasks: (i) at each timestep, the values at
different tensor locations are markedly different, implying that wider (larger) tensors can encode more
information, with less effort to compress it; (ii) from the input to the output, the values become increasingly distinct and are shifted temporally, revealing that deep computations are indeed performed
together with temporal computations, with long-range dependencies carried by memory cells.
Figure 7: Examples of models related to tLSTMs. (a) A single layer cLSTM [48] with vector array
input. (b) A 3-layer sLSTM [21]. (c) A 3-layer Grid LSTM [30]. (d) A 3-layer RHN [54]. (e) A
3-layer QRNN [7] with kernel size 2, where costly computations are done by temporal convolution.
4 Related Work
Convolutional LSTMs. Convolutional LSTMs (cLSTMs) are proposed to parallelize the computation of LSTMs when the input at each timestep is structured (see Fig. 7(a)), e.g., a vector array
[48], a vector matrix [41, 42, 50, 52], or a vector tensor [9, 45]. Unlike cLSTMs, tLSTM aims to
increase the capacity of LSTMs when the input at each timestep is non-structured, i.e., a single vector,
and is advantageous over cLSTMs in that: (i) it performs the convolution across different hidden
layers whose structure is independent of the input structure, and integrates information bottom-up
and top-down; while cLSTM performs the convolution within each hidden layer whose structure is
coupled with the input structure, thus will fall back to the vanilla LSTM if the input at each timestep
is a single vector; (ii) it can be widened efficiently without additional parameters by increasing the
tensor size; while cLSTM can be widened by increasing the kernel size or kernel channel, both of
which will introduce parameters; (iii) it can be deepened with little additional runtime by merging
deep computations; while cLSTM can be deepened by using more hidden layers which significantly
increases the runtime; (iv) it captures long-range dependencies from multiple directions through the
memory cell convolution; while cLSTM struggles to capture long-range dependencies from multiple
directions since memory cells are only gated along one direction.
Deep LSTMs. Deep LSTMs (dLSTMs) extend sLSTMs by making them deeper (see Fig. 7(b)-(d)).
To keep the parameter number small and ease training, Graves [22], Kalchbrenner et al. [30], Mujika
et al. [38], Zilly et al. [54] apply another RNN/LSTM along the depth direction of dLSTMs, which,
however, multiplies the runtime. Though there are implementations to accelerate the deep computation
[1, 16], they generally aim at simple architectures such as sLSTMs. Compared with dLSTMs, tLSTM
performs the deep computation with little additional runtime, and employs a cross-layer convolution to
enable the feedback mechanism. Moreover, the capacity of tLSTM can be increased more efficiently
by using higher-dimensional tensors, whereas in dLSTM all hidden layers as a whole only equal to a
2D tensor (i.e., a stack of hidden vectors), the dimensionality of which is fixed.
Other Parallelization Methods. Some methods [7, 8, 28, 29, 36, 40] parallelize the temporal
computation of the sequence (e.g., use the temporal convolution, as in Fig. 7(e)) during training, in
which case full input/target sequences are accessible. However, during the online inference when the
input presents sequentially, temporal computations can no longer be parallelized and will be blocked
by deep computations of each timestep, making these methods potentially unsuitable for real-time
applications that demand a high sampling/output frequency. Unlike these methods, tLSTM can speed
up not only training but also online inference for many tasks since it performs the deep computation
by the temporal computation, which is also human-like: we convert each signal to an action and
meanwhile receive new signals in a non-blocking way. Note that for the online inference of tasks
that use the previous output y_{t−1} for the current input x_t (e.g., autoregressive sequence generation),
tLSTM cannot parallelize the deep computation since it requires a delay of L−1 timesteps to get y_{t−1}.
5 Conclusion
We introduced the Tensorized LSTM, which employs tensors to share parameters and utilizes the
temporal computation to perform the deep computation for sequential tasks. We validated our model
on a variety of tasks, showing its potential over other popular approaches.
Acknowledgements
This work is supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1.
References
[1] Jeremy Appleyard, Tomas Kocisky, and Phil Blunsom. Optimizing performance of recurrent neural networks
on gpus. arXiv preprint arXiv:1604.01946, 2016. 3, 9
[2] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In ICML,
2016. 7, 8
[3] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint
arXiv:1607.06450, 2016. 4, 5
[4] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent
is difficult. IEEE TNN, 5(2):157-166, 1994. 1
[5] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009. 2
[6] Luca Bertinetto, João F Henriques, Jack Valmadre, Philip Torr, and Andrea Vedaldi. Learning feed-forward
one-shot learners. In NIPS, 2016. 2, 4
[7] James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural networks. In
ICLR, 2017. 9
[8] Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock,
Mark Hasegawa-Johnson, and Thomas Huang. Dilated recurrent neural networks. In NIPS, 2017. 8, 9
[9] Jianxu Chen, Lin Yang, Yizhe Zhang, Mark Alber, and Danny Z Chen. Combining fully convolutional and
recurrent neural networks for 3d biomedical image segmentation. In NIPS, 2016. 9
[10] Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural
networks. In ICML, 2015. 3, 13
[11] Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. In
ICLR, 2017. 6
[12] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for
machine learning. In NIPS Workshop, 2011. 13
[13] Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. In
ICLR, 2017. 8
[14] Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.
4
[15] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning.
In NIPS, 2013. 2, 4
[16] Greg Diamos, Shubho Sengupta, Bryan Catanzaro, Mike Chrzanowski, Adam Coates, Erich Elsen, Jesse
Engel, Awni Hannun, and Sanjeev Satheesh. Persistent rnns: Stashing recurrent weights on-chip. In ICML,
2016. 9
[17] Jeffrey L Elman. Finding structure in time. Cognitive science, 14(2):179–211, 1990. 1
[18] Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, and Dmitry Vetrov. Ultimate tensorization:
compressing convolutional and fc layers alike. In NIPS Workshop, 2016. 2
[19] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with lstm.
Neural computation, 12(10):2451–2471, 2000. 1
[20] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent
neural networks. In ICASSP, 2013. 2
[21] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
5, 7, 9
[22] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983,
2016. 9
[23] David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. In ICLR, 2017. 4, 6
[24] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780,
1997. 1
[25] Marcus Hutter. The human knowledge compression contest. URL http://prize.hutter1.net, 2012. 6
[26] Ozan Irsoy and Claire Cardie. Modeling compositionality with multiplicative recurrent neural networks.
In ICLR, 2015. 2
[27] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network
architectures. In ICML, 2015. 13
[28] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In NIPS, 2016. 9
[29] Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. In ICLR, 2016. 9
[30] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In ICLR, 2016. 6, 7, 9,
13
[31] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 13
[32] Ben Krause, Liang Lu, Iain Murray, and Steve Renals. Multiplicative lstm for sequence modelling. In
ICLR Workshop, 2017. 2, 6
[33] Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of
rectified linear units. arXiv preprint arXiv:1504.00941, 2015. 7, 8
[34] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard,
and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation,
1(4):541–551, 1989. 2
[35] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 2, 7
[36] Tao Lei and Yu Zhang. Training rnns as fast as cnns. arXiv preprint arXiv:1709.02755, 2017. 9
[37] Gundram Leifert, Tobias Strauß, Tobias Grüning, Welf Wustlich, and Roger Labahn. Cells in multidimensional recurrent neural networks. JMLR, 17(1):3313–3349, 2016. 4, 13
[38] Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. In NIPS, 2017. 6,
9
[39] Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks.
In NIPS, 2015. 2
[40] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal
Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv
preprint arXiv:1609.03499, 2016. 8, 9
[41] Viorica Patraucean, Ankur Handa, and Roberto Cipolla. Spatio-temporal video autoencoder with differentiable memory. In ICLR Workshop, 2016. 9
[42] Bernardino Romera-Paredes and Philip Hilaire Sean Torr. Recurrent instance segmentation. In ECCV,
2016. 9
[43] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by backpropagating errors. Nature, 323(6088):533–536, 1986. 1
[44] Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent
networks. Neural Computation, 4(1):131–139, 1992. 4
[45] Marijn F Stollenga, Wonmin Byeon, Marcus Liwicki, and Juergen Schmidhuber. Parallel multi-dimensional
lstm, with application to fast biomedical volumetric image segmentation. In NIPS, 2015. 9
[46] Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In
ICML, 2011. 2, 4
[47] Graham W Taylor and Geoffrey E Hinton. Factored conditional restricted boltzmann machines for modeling
motion style. In ICML, 2009. 2
[48] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In
ICML, 2016. 3, 9
[49] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary
recurrent neural networks. In NIPS, 2016. 8
[50] Lin Wu, Chunhua Shen, and Anton van den Hengel. Deep recurrent convolutional networks for video-based
person re-identification: An end-to-end approach. arXiv preprint arXiv:1606.01609, 2016. 9
[51] Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative
integration with recurrent neural networks. In NIPS, 2016. 2, 6
[52] SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015. 9
[53] Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan R Salakhutdinov, and
Yoshua Bengio. Architectural complexity measures of recurrent neural networks. In NIPS, 2016. 8
[54] Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway
networks. In ICML, 2017. 6, 9
| 6606 |@word cnn:3 version:1 compression:1 seems:1 advantageous:1 paredes:1 r:2 rgb:1 bn:1 thereby:1 shot:1 harder:1 configuration:14 series:1 document:1 romera:1 outperforms:2 freitas:1 current:4 com:1 blank:1 activation:4 gmail:1 intriguing:1 aft:5 gpu:1 danny:1 john:2 ronald:1 concatenate:2 ronan:1 diederik:1 enables:1 atlas:1 update:6 implying:1 generative:1 fewer:3 ivo:1 plane:1 rp1:1 podoprikhin:2 vanishing:4 short:4 prize:2 record:1 boosting:1 location:14 zhang:6 five:2 along:4 donnie:1 become:4 persistent:1 consists:2 shubho:1 manner:1 introduce:3 indeed:1 behavior:1 merity:1 p1:2 examine:1 kiros:1 elman:1 multi:1 andrea:1 wavenet:1 salakhutdinov:2 globally:1 little:5 param:1 increasing:11 becomes:1 conv:3 precipitation:1 moreover:2 factorized:1 kind:1 whilst:4 finding:1 guarantee:1 temporal:16 multidimensional:1 continual:1 growth:1 runtime:20 zaremba:1 rm:8 k2:1 control:2 unit:3 grant:1 wayne:1 organize:1 producing:2 encapsulate:1 danihelka:1 felix:1 vertically:1 local:1 ceil:1 limit:2 struggle:1 vetrov:2 parallelize:2 laurent:2 shiyu:1 merge:1 might:1 rnns:3 blunsom:1 steger:1 ankur:1 dynamically:3 challenging:2 catanzaro:1 ease:2 factorization:2 range:11 lecun:2 lost:2 backpropagation:1 digit:8 jan:1 rnn:12 empirical:1 yan:1 significantly:4 thought:1 projection:2 revealing:1 vedaldi:1 seeing:1 suggest:1 get:2 convenience:1 scu:1 close:1 operator:3 bh:7 cannot:1 applying:2 memorization:8 wong:1 marten:1 shi:1 yt:14 phil:1 jesse:1 sepp:1 delimit:1 attention:1 jimmy:2 hypernetworks:1 williams:1 tomas:1 roux:1 shen:1 marijn:1 insight:1 iain:1 array:2 factored:1 bertinetto:1 analogous:1 updated:1 sar:1 controlling:1 target:4 enhanced:1 strengthen:1 tan:1 samy:1 jaitly:1 element:5 trend:1 recognition:3 rumelhart:1 located:1 continues:1 predicts:1 blocking:1 observed:1 bottom:5 ft:5 negligibly:1 epsrc:1 ep:1 capture:4 preprint:8 mike:1 wang:2 region:1 compressing:1 mz:2 highest:2 pd:6 environment:1 complexity:5 nowcasting:1 tobias:2 dynamic:4 neglected:1 babak:1 angelika:1 zilly:2 efficiency:1 learner:1 tinne:1 srnn:5 accelerate:1 icassp:1 convolves:1 chip:1 represented:1 cat:7 derivation:1 stacked:4 alphabet:1 distinct:2 fast:4 describe:2 london:1 liwicki:1 configura:1 kalchbrenner:4 whose:3 encoded:1 larger:4 solve:1 saizheng:2 otherwise:1 ability:2 statistic:2 simonyan:1 amar:1 reshaped:2 tuytelaars:1 final:1 online:3 patrice:1 sequence:19 rr:2 differentiable:2 advantage:1 net:1 propose:1 jamie:1 interaction:3 product:1 zoomed:1 ment:1 renals:1 relevant:1 combining:1 rapidly:1 flexibility:2 achieve:1 normalize:2 scalability:1 sutskever:3 extending:1 produce:1 generating:3 perfect:1 adam:2 ben:1 wider:3 depending:1 recurrent:28 help:3 fixing:2 spent:1 measured:3 tions:1 bradbury:1 qt:7 novikov:2 andrew:2 p2:3 implemented:1 c:1 memorize:1 direction:8 ning:1 merged:1 correct:2 tensorization:3 cnns:4 filter:1 stochastic:1 exploration:1 human:2 nando:1 enable:1 fix:2 alleviate:1 koutn:1 ryan:1 summation:1 awni:1 trnn:9 lawrence:1 mapping:1 predict:1 visualize:3 algorithmic:3 dieleman:1 rgen:4 mo:1 vary:1 adopt:1 achieves:3 ruslan:2 integrates:2 tanh:1 yuhuai:2 jackel:1 sensitive:2 hubbard:1 highway:1 vice:1 engel:1 destroys:1 aim:2 modified:1 denil:1 boosted:1 profitably:1 encode:1 validated:1 focus:1 improvement:1 consistently:1 rank:1 modelling:1 contrast:3 baseline:1 inference:3 dependent:1 el:1 rupesh:1 entire:1 pad:1 hidden:30 expand:1 quasi:1 ukasz:2 tao:1 pmnist:7 pixel:6 classification:2 flexible:2 multiplies:1 sengupta:1 art:7 softmax:1 special:1 initialize:1 integration:1 
field:5 equal:1 frasconi:1 beach:1 sampling:1 koray:3 yu:2 icml:8 nearly:1 foreground:1 future:2 others:1 yoshua:8 richard:2 employ:3 widen:2 preserve:1 national:1 cheaper:1 delayed:3 connects:1 alber:1 jeffrey:1 maintain:1 investigate:1 aqt:3 bpc:4 saturated:1 introduces:1 henderson:1 misha:1 stollenga:1 explosion:1 iv:2 taylor:1 desired:1 circle:1 re:1 elsen:1 hutter:2 increased:3 column:2 modeling:5 classify:1 instance:1 cover:1 juergen:1 introducing:1 ating:1 deviation:1 pout:2 witbrock:1 byeon:1 delay:2 conducted:1 johnson:1 gr:1 reported:1 straightforwardly:1 dependency:11 connect:2 cho:1 st:1 thanks:1 lstm:29 person:1 accessible:1 oord:2 memisevic:1 zhouhan:1 michael:1 together:1 ilya:3 sanjeev:1 jo:1 rafal:1 broadcast:1 huang:1 zen:1 corner:2 cognitive:1 simard:1 chung:2 bx:2 return:1 wojciech:1 style:1 potential:2 jeremy:1 valmadre:1 de:2 sec:2 stabilize:1 dilated:5 satisfy:2 explicitly:1 register:1 collobert:1 multiplicative:3 tion:1 break:1 try:1 performed:1 doing:1 scanline:1 start:1 competitive:1 parallel:3 samp:2 jia:1 contribution:1 shakibi:1 accuracy:8 convolutional:8 greg:1 largely:3 efficiently:5 correspond:2 wisdom:1 anton:2 handwritten:2 raw:1 kavukcuoglu:3 identification:1 produced:2 cardie:1 lu:1 rectified:1 unaffected:1 history:1 acc:2 influenced:1 sharing:3 farabet:1 wai:1 definition:3 xiaoxiao:1 against:1 chrzanowski:1 volumetric:1 frequency:1 steadily:1 james:2 mohamed:1 mi:1 gain:1 dataset:4 popular:4 knowledge:1 dimensionality:2 improves:2 segmentation:3 sean:1 carefully:1 back:1 feed:1 steve:1 higher:9 follow:1 patraucean:1 specify:1 improved:1 mujika:2 wei:2 hershey:1 evaluated:2 done:1 though:1 just:1 biomedical:2 roger:1 rahman:1 working:2 hand:1 horizontal:1 lstms:9 replacing:1 multiscale:1 reveal:1 lei:2 grows:1 irnn:1 usa:1 xiaodong:1 contain:1 normalized:3 managed:1 requiring:1 former:1 evolution:1 kyunghyun:1 read:1 illustrated:1 skewed:2 width:1 during:2 please:2 backpropagating:1 clstm:5 performs:6 motion:1 image:6 wise:2 jack:1 novel:2 handa:1 common:2 sigmoid:1 wikipedia:6 permuted:2 behaves:1 tively:1 irsoy:1 hki:3 volume:1 million:1 extend:3 he:1 ballas:1 accumulate:1 dinh:1 blocked:1 jozefowicz:1 versa:1 xingjian:1 ai:1 vanilla:2 grid:3 erich:1 contest:1 language:1 vectors1:1 longer:6 han:1 ahn:1 gt:5 patrick:1 own:1 optimizing:1 chunhua:1 schmidhuber:5 store:1 success:1 seen:1 arjovsky:1 additional:12 dai:1 florian:1 locationdependent:1 zip:1 parallelized:1 determine:1 cummins:1 exploding:3 ii:5 multiple:4 full:5 signal:2 stephen:1 alan:2 faster:4 adapt:1 cross:3 long:16 wtc:6 lin:3 luca:1 roland:1 prediction:4 surpassed:2 navdeep:1 arxiv:16 repetitive:1 kernel:22 adopting:1 normalization:5 yeung:1 achieved:1 cell:25 hochreiter:1 liu2:1 addition:10 whereas:1 receive:1 krause:1 crucial:1 ot:5 parallelization:1 unlike:4 tim:1 markedly:1 flow:2 mod:1 leveraging:1 call:2 integer:6 unitary:2 near:4 yang:2 iii:2 bengio:9 sander:1 variety:1 affect:1 timesteps:4 architecture:3 associating:1 opposite:1 topology:2 reduce:1 idea:1 cn:23 haffner:1 slstm:15 shift:3 t0:1 handled:1 urnn:2 defense:1 ultimate:1 torch7:1 url:1 effort:1 akin:1 f:2 routed:1 karen:1 speech:1 action:1 matlab:1 deep:19 useful:1 generally:1 detailed:2 n510129:1 amount:1 locally:1 wonmin:1 dit:1 generate:6 http:1 outperform:2 wisely:1 aot:5 millisecond:1 shifted:1 coates:1 hutter1:1 per:2 bryan:1 blue:1 zd:1 affected:2 paolo:1 georg:1 achieving:1 prevent:2 nal:3 ht:34 backward:1 timestep:16 fuse:1 sum:2 convert:1 pix:1 turing:2 run:1 powerful:1 throughout:1 almost:1 yann:2 wu:3 
utilizes:1 architectural:1 appendix:5 graham:1 bit:1 layer:28 ct:13 courville:1 constraint:1 alex:5 x2:1 shaobing:2 speed:1 kumar:1 performing:3 separable:2 martin:1 conjecture:1 gpus:2 structured:2 according:1 cui:1 kd:2 across:7 beneficial:1 increasingly:1 separability:1 character:6 modification:1 making:3 alike:1 quoc:2 den:3 gradually:3 restricted:1 taken:1 ln:12 equation:2 visualization:3 remains:1 hannun:1 turn:1 pin:2 mechanism:5 end:2 gulcehre:1 operation:1 xiao2:1 observe:3 apply:1 hierarchical:1 ozan:1 reshape:1 caiming:1 denker:1 xiong:1 alternative:2 batch:1 shah:1 gate:3 rp:11 convolved:2 original:2 compress:1 top:4 denotes:1 ensure:1 thomas:2 widens:1 unsuitable:2 concatenated:2 k1:1 murray:1 unchanged:1 tensor:42 move:1 already:1 kaiser:2 receptive:5 degrades:1 costly:1 diagonal:3 interacts:1 che:1 enhances:1 gradient:5 iclr:10 capacity:6 concatenation:1 philip:2 marcus:2 length:1 code:1 retained:1 illustration:2 julian:1 ying:1 liang:2 difficult:1 potentially:3 hasegawa:1 hao:1 ba:2 implementation:2 satheesh:1 boltzmann:1 gated:3 allowing:2 perform:1 convolution:23 observation:1 howard:1 tensorizing:3 caglar:1 descent:1 beat:1 extended:1 hinton:6 delaying:2 stack:2 bert:1 compositionality:1 kocisky:1 david:3 introduced:2 namely:1 widened:7 required:1 hyperlstm:1 connection:6 gru:2 meier:1 quadratically:2 boser:1 heiga:1 boost:1 kingma:1 nip:16 deepen:3 proceeds:1 usually:1 bar:1 scott:1 reading:1 including:1 memory:32 video:2 timur:1 gool:1 power:1 widening:2 difficulty:1 natural:1 predicting:1 representing:2 improve:4 xml:1 technology:1 temporally:1 carried:2 zhen:2 hm:1 naive:1 coupled:1 autoencoder:1 roberto:1 woo:1 text:1 acknowledgement:1 graf:6 fully:3 permutation:1 generation:1 proportional:2 he1:1 geoffrey:6 validation:2 foundation:1 abdel:1 vectorized:1 sufficient:1 propagates:1 bank:1 share:5 claire:1 eccv:1 row:2 summary:1 normalizes:1 supported:1 last:1 keeping:1 henriques:1 bias:3 allow:3 deeper:3 senior:1 institute:2 fall:1 correspondingly:1 benefit:2 van:4 feedback:8 depth:13 dimension:9 transition:1 cumulative:1 rhn:2 contour:1 autoregressive:1 author:1 forward:2 adaptive:2 sungjin:1 fred:1 hengel:1 osokin:1 employing:1 deepened:4 implicitly:2 dmitry:3 bernhard:1 keep:1 dealing:1 global:1 sequentially:2 active:1 cooijmans:1 assumed:1 viorica:1 spatio:1 table:6 additionally:1 learn:1 channel:27 nature:2 nicolas:1 ca:1 obtaining:1 interact:1 permute:1 broadening:1 clstms:3 cl:1 meanwhile:1 bottou:1 domain:1 linearly:1 whole:1 zhourong:1 ait:5 he2:1 x1:4 xu:1 fig:16 junyoung:2 strengthened:1 slow:1 tong:1 sub:1 gers:1 tensorize:3 jmlr:1 learns:1 kin:1 down:4 rk:2 removing:3 xt:10 brabandere:1 gating:1 showing:1 tnn:1 learnable:1 symbol:7 chun:1 evidence:1 consist:1 socher:1 mnist:11 workshop:4 adding:1 merging:2 sequential:11 agt:5 diamos:1 racy:1 demand:1 chen:3 forget:2 fc:1 xt0:1 gao:1 forming:2 horizontally:1 vinyals:1 bernardino:1 chang:1 cipolla:1 yizhe:1 conditional:1 modulate:1 goal:3 exposition:1 shared:4 replace:2 content:1 change:2 luc:1 torr:2 pas:1 experimental:1 aaron:3 college:1 internal:2 mark:2 guo:1 latter:1 jonathan:1 alexander:2 oriol:1 evaluate:3 audio:1 phenomenon:1 srivastava:1 |
6,198 | 6,607 | Concentration of Multilinear Functions of the Ising
Model with Applications to Network Data
Constantinos Daskalakis*
EECS & CSAIL, MIT
[email protected]
Nishanth Dikkala*
EECS & CSAIL, MIT
[email protected]
Gautam Kamath*
EECS & CSAIL, MIT
[email protected]
Abstract
We prove near-tight concentration of measure for polynomial functions of the
Ising model under high temperature. For any degree d, we show that a degree-d
polynomial of an n-spin Ising model exhibits exponential tails that scale as
exp(-r^{2/d}) at radius r = Θ̃_d(n^{d/2}). Our concentration radius is optimal up to
logarithmic factors for constant d, improving known results by polynomial factors
in the number of spins. We demonstrate the efficacy of polynomial functions as
statistics for testing the strength of interactions in social networks in both synthetic
and real world data.
1 Introduction
The Ising model is a fundamental probability distribution defined in terms of a graph G = (V, E)
whose nodes and edges are associated with scalar parameters (θ_v)_{v∈V} and (θ_{u,v})_{{u,v}∈E} respectively.
The distribution samples a vector x ∈ {±1}^V with probability:

p(x) = \exp\left( \sum_{v \in V} \theta_v x_v + \sum_{(u,v) \in E} \theta_{u,v} x_u x_v - \Phi(\vec{\theta}) \right),    (1)

where Φ(θ⃗) serves to provide normalization. Roughly speaking, there is a random variable X_v at
every node of G, and this variable may be in one of two states, or spins: up (+1) or down (-1). The
scalar parameter θ_v models a local field at node v. The sign of θ_v represents whether this local field
favors X_v taking the value +1, i.e. the up spin, when θ_v > 0, or the value -1, i.e. the down spin,
when θ_v < 0, and its magnitude represents the strength of the local field. Similarly, θ_{u,v} represents
the direct interaction between nodes u and v. Its sign represents whether it favors equal spins, when
θ_{u,v} > 0, or opposite spins, when θ_{u,v} < 0, and its magnitude corresponds to the strength of the
direct interaction. Of course, depending on the structure of G and the node and edge parameters, there
may be indirect interactions between nodes, which may overwhelm local fields or direct interactions.
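For illustration, the unnormalized log-density in (1) can be evaluated directly from the parameters. The following minimal Python sketch is our own, not the paper's; the names ising_log_density_unnormalized, theta_v, and theta_uv are assumptions made for the example.

import numpy as np

def ising_log_density_unnormalized(x, theta_v, theta_uv):
    """log p(x) + Phi(theta) for a spin vector x in {-1, +1}^n.

    theta_v  : length-n array of node parameters.
    theta_uv : dict mapping an edge (u, v) to its interaction weight.
    """
    local = float(np.dot(theta_v, x))                      # sum_v theta_v x_v
    pairwise = sum(w * x[u] * x[v] for (u, v), w in theta_uv.items())
    return local + pairwise

# A 3-node path with no external field and uniform edge weight 0.2:
x = np.array([+1, -1, +1])
print(ising_log_density_unnormalized(x, np.zeros(3), {(0, 1): 0.2, (1, 2): 0.2}))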
Many popular models, for example the usual ferromagnetic Ising model [Isi25, Ons44], the
Sherrington-Kirkpatrick mean field model [SK75] of spin glasses, the Hopfield model [Hop82]
of neural networks, and the Curie-Weiss model [DCG68], all belong to the above family of distributions, with various special structures on G, the θ_{u,v}'s, and the θ_v's. Since its introduction in
* Authors are listed in alphabetical order.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Statistical Physics, the Ising model has found a myriad of applications in diverse research disciplines, including probability theory, Markov chain Monte Carlo, computer vision, theoretical
computer science, social network analysis, game theory, computational biology, and neuroscience;
see e.g. [LPW09, Cha05, Fel04, DMR11, GG86, Ell93, MS10] and their references. The ubiquity of
these applications motivate the problem of inferring Ising models from samples, or inferring statistical
properties of Ising models from samples. This type of problem has enjoyed much study in statistics,
machine learning, and information theory; see, e.g., [CL68, AKN06, CT06, Cha07, RWL10, JJR11,
SW12, BGS14, Bre15, VMLC16, BK16, Bha16, BM16, MdCCU16, KM17, HKM17, DDK18].
Despite the wealth of theoretical study and practical applications of this model, outlined above, there
are still aspects of it that are poorly understood. In this work, we focus on the important topic of
concentration of measure. We are interested in studying the concentration properties of polynomial
functions f (X) of the Ising model. That is, for a random vector X sampled from p as above and a
polynomial f , we are interested in the concentration of f (X) around its expectation E[f (X)]. Since
the coordinates of X take values in {±1}, we can without loss of generality restrict our attention to
multilinear functions f.
While the theory of concentration inequalities for functions of independent random variables has
reached a high level of sophistication, proving concentration of measure for functions of dependent
random variables is significantly harder, the main tools being martingale methods, logarithmic
Sobolev inequalities and transportation cost inequalities. One shortcoming of the latter methods is
that explicit constants are very hard or almost impossible to get. For the Ising model, in particular,
the log-Sobolev inequalities of Stroock and Zegarlinski [SZ92], known under high temperature,2 do
not give explicit constants, and it is also not clear whether they extend to systems beyond the lattice.
An alternative approach, proposed recently by Chatterjee [Cha05], is an adaptation to the Ising
model of Stein's method of exchangeable pairs. This powerful method is well-known in probability
theory, and has been used to derive concentration inequalities with explicit constants for functions
of dependent random variables (see [MJC+ 14] for a recent work). Chatterjee uses this technique to
establish concentration inequalities for Lipschitz functions of the Ising model under high temperature.
While these inequalities are tight (and provide Gaussian tails) for linear functions of the Ising model,
they are unfortunately not tight for higher degree polynomials, in that the concentration radius is
off by factors that depend on the dimension n = |V|. For example, consider the function
f_c(X) = \sum_{i \neq j} c_{ij} X_i X_j of an Ising model without external fields, where the c_{ij}'s are signs. Chatterjee's
results imply that this function concentrates at radius ±O(n^{1.5}), but as we show this is suboptimal by
a factor of Θ̃(√n).
In particular, our main technical contribution is to obtain near-tight concentration inequalities for
polynomial functions of the Ising model, whose concentration radii are tight up to logarithmic factors.
A corollary of our main result (Theorem 4) is as follows:
Theorem 1. Consider any degree-d multilinear function f with coefficients in [-1, 1], defined on
an Ising model p without external field in the high-temperature regime. Then there exists a constant
C = C(d) > 0 (depending only on d) such that for any r = Ω̃_d(n^{d/2}), we have

\Pr_{X \sim p}\big[ |f(X) - E[f(X)]| > r \big] \leq \exp\left( -C \cdot \frac{r^{2/d}}{n \log n} \right).

The concentration radius is tight up to logarithmic factors, and the tail bound is tight up to a
O_d(1/\log n) factor in the exponent of the tail bound.
Our formal theorem statements for bilinear and higher degree multilinear functions appear as Theorems 2 and 4 of Sections 3 and 4, respectively. Some further discussion of our results is in order:
- Under existence of external fields, it is easy to see that the above concentration does not
hold, even for bilinear functions. Motivated by our applications in Section 5, we extend the
above concentration of measure result to centered bilinear functions (where each variable
X_i appears as X_i - E[X_i] in the function) that also holds under arbitrary external fields; see
Theorem 3. We leave extensions of this result to higher degree multilinear functions to the
next version of this paper.
2 High temperature is a widely studied regime of the Ising model where it enjoys a number of useful properties,
such as decay of correlations and fast mixing of the Glauber dynamics. Throughout this paper we will take "high
temperature" to mean that Dobrushin's conditions of weak dependence are satisfied. See Definition 1.
- Moreover, notice that the tails for degree-2 functions are exponential rather than Gaussian,
and this is unavoidable; as the degree grows, the tails become heavier exponentials, which
is also unavoidable. In particular, the tightness of our bound is justified in the
supplementary material.
- Lastly, like Chatterjee and Stroock and Zegarlinski, we prove our results under high temperature. On the other hand, it is easy to construct low temperature Ising models where no
non-trivial concentration holds.3
With our theoretical understanding in hand, we proceed with an experimental evaluation of the efficacy
of multilinear functions applied to hypothesis testing. Specifically, given a binary vector, we attempt
to determine whether or not it was generated by an Ising model. Our focus is on testing whether
choices in social networks can be approximated as an Ising model, a common and classical assumption
in the social sciences [Ell93, MS10]. We apply our method to both synthetic and real-world data.
On synthetic data, we investigate when our statistics are successful in detecting departures from the
Ising model. For our real-world data study, we analyze the Last.fm dataset from HetRec '11 [CBK11].
Interestingly, when considering musical preferences on a social network, we find that the Ising model
may be more or less appropriate depending on the genre of music.
1.1 Related Work
As mentioned before, Chatterjee previously used the method of exchangeable pairs to prove variance
and concentration bounds for linear statistics of the Ising model [Cha05]. In [DDK18], the authors
prove variance bounds for bilinear statistics. The present work improves upon this by proving
concentration rather than bounding the variance, as well as considering general degrees d rather than
just d = 2. In simultaneous work, Gheissari, Lubetzky, and Peres proved concentration bounds which
are qualitatively similar to ours, though the techniques are somewhat different [GLP17].
2 Preliminaries
We will state some preliminaries here; see the supplementary material for further preliminaries.
We define the high-temperature regime, also known as Dobrushin's uniqueness condition; in this
paper, we will use the terms interchangeably.
Definition 1. Consider an Ising model p defined on a graph G = (V, E) with |V| = n and parameter
vector θ⃗. Suppose \max_{v \in V} \sum_{u \neq v} \tanh(|\theta_{uv}|) \leq 1 - \eta for some η > 0. Then p is said to satisfy
Dobrushin's uniqueness condition, or be in the η-high temperature regime.
In some situations, we may use the parameter η implicitly and simply say the Ising model is in the
high temperature regime.
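As a quick numerical illustration (ours, not the paper's), Definition 1 can be checked directly from the interaction matrix; the name dobrushin_slack is an assumption for the example.

import numpy as np

def dobrushin_slack(theta):
    """theta: symmetric (n, n) interaction matrix with zero diagonal.

    Returns eta = 1 - max_v sum_{u != v} tanh(|theta_uv|); a positive
    value certifies Dobrushin's uniqueness condition with that eta.
    """
    influence = np.tanh(np.abs(theta)).sum(axis=1)  # diagonal is zero
    return 1.0 - float(influence.max())

theta = 0.1 * (np.ones((4, 4)) - np.eye(4))  # complete graph on 4 nodes
print(dobrushin_slack(theta))                # 1 - 3*tanh(0.1) > 0: high temperature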
Glauber dynamics refers to the canonical Markov chain for sampling from an Ising model; see the
supplementary material for a formal definition. Glauber dynamics define a reversible, ergodic Markov
chain whose stationary distribution is identical to the corresponding Ising model. In many relevant
settings, including the high-temperature regime, the dynamics are rapidly mixing and hence offer an
efficient way to sample from Ising models. In particular, the mixing time in η-high-temperature is

t_{mix} = \frac{n \log n}{\eta}.
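For concreteness, here is a minimal single-site Glauber sampler (our own sketch, not the paper's; the conditional law of X_v given the rest follows from (1), and the function and parameter names are assumptions).

import numpy as np

def glauber_sample(theta_v, theta, steps, seed=0):
    """Run single-site Glauber dynamics for `steps` updates.

    theta_v : length-n array of node parameters.
    theta   : symmetric (n, n) interaction matrix, zero diagonal.
    """
    rng = np.random.default_rng(seed)
    n = len(theta_v)
    x = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        v = rng.integers(n)
        field = theta_v[v] + theta[v] @ x            # zero diagonal: no self-term
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))  # Pr[X_v = +1 | rest]
        x[v] = 1 if rng.random() < p_plus else -1
    return x

# Example: a small high-temperature model, run for a few multiples of n steps.
n = 16
theta = 0.05 * (np.ones((n, n)) - np.eye(n))
print(glauber_sample(np.zeros(n), theta, steps=10 * n))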
We may couple two executions of the Glauber dynamics using a greedy coupling (also known as a
monotone coupling). Roughly, this couples the choices made by the runs to maximize the probability
of agreement; see the supplementary material for a formal definition. One of the key properties of
this coupling is that it satisfies the following contraction property:
3 Consider an Ising model with no external fields, comprising two disjoint cliques of half the vertices with
infinitely strong bonds; i.e. θ_v = 0 for all v, and θ_{u,v} = ∞ if u and v belong to the same clique. Now consider
the multilinear function f(X) = \sum_{u \not\sim v} X_u X_v, where u ≁ v denotes that u and v are not neighbors (i.e. belong
to different cliques). It is easy to see that the maximum absolute value of f(X) is Θ(n^2) and that there is no
concentration at radius better than some Θ(n^2).
Lemma 1. If p is an Ising model in η-high temperature, then the greedy coupling between two
executions satisfies the following contraction in Hamming distance:

E\left[ d_H(X_t^{(1)}, X_t^{(2)}) \,\middle|\, X_0^{(1)}, X_0^{(2)} \right] \leq \left( 1 - \frac{\eta}{n} \right)^t d_H(X_0^{(1)}, X_0^{(2)}).
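The contraction can also be observed empirically. The sketch below (ours, under the assumption that sharing the site choice and the uniform draw realizes the greedy/monotone coupling for this parameterization) runs two coupled chains and tracks their Hamming distance.

import numpy as np

def coupled_glauber(theta_v, theta, steps, seed=0):
    """Two Glauber chains driven by shared randomness (a greedy coupling)."""
    rng = np.random.default_rng(seed)
    n = len(theta_v)
    x = rng.choice([-1, 1], size=n)
    y = x.copy()
    y[0] = -y[0]                                 # start at Hamming distance 1
    dists = []
    for _ in range(steps):
        v, u = rng.integers(n), rng.random()     # shared site and uniform draw
        for z in (x, y):
            field = theta_v[v] + theta[v] @ z
            z[v] = 1 if u < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
        dists.append(int(np.sum(x != y)))
    return dists

n = 32
theta = 0.02 * (np.ones((n, n)) - np.eye(n))     # satisfies Dobrushin's condition
print(coupled_glauber(np.zeros(n), theta, steps=5 * n)[-10:])  # typically reaches 0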
The key technical tool we use is the following concentration inequality for martingales:
Lemma 2 (Freedman's Inequality (Proposition 2.1 in [Fre75])). Let X_0, X_1, ..., X_t be a sequence
of martingale increments, such that S_i = \sum_{j=0}^{i} X_j forms a martingale sequence. Let τ be a stopping
time and K ≥ 0 be such that Pr[|X_i| ≤ K for all i ≤ τ] = 1. Let v_i = Var[X_i | X_{i-1}] and V_t = \sum_{i=0}^{t} v_i.
Then

\Pr\big[ |S_t| \geq r \text{ and } V_t \leq b \text{ for some } t \leq \tau \big] \leq 2 \exp\left( -\frac{r^2}{2(rK + b)} \right).

3 Concentration of Measure for Bilinear Functions
In this section, we describe our main concentration result for bilinear functions of the Ising model.
This is not as technically involved as the result for general-degree multilinear functions, but exposes
many of the main conceptual ideas. The theorem statement is as follows:
Theorem 2. Consider any bilinear function f_a(x) = \sum_{u,v} a_{uv} x_u x_v on an Ising model p (defined
on a graph G = (V, E) such that |V| = n) in the η-high-temperature regime with no external field. Let
‖a‖_∞ = \max_{u,v} a_{uv}. If X ∼ p, then for any r ≥ 300‖a‖_∞ n \log^2 n / η + 2, we have

\Pr\big[ |f_a(X) - E[f_a(X)]| \geq r \big] \leq 5 \exp\left( -\frac{r}{1735 \|a\|_\infty n \log n} \right).
Remark 1. We note that η-high-temperature is not strictly needed for our results to hold; we only
need Hamming contraction of the "greedy coupling" (see Lemma 1). This condition implies rapid
mixing of the Glauber dynamics (in O(n log n) steps) via path coupling (Theorem 15.1 of [LPW09]).
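As an illustration of the statement (our own sketch, not the paper's experiment), the bilinear statistic is cheap to compute, and its empirical fluctuations can be estimated from samples. For simplicity the demo below uses the all-zero Ising model, whose samples are i.i.d. uniform spins; the function names are assumptions.

import numpy as np

def bilinear_stat(x, a):
    """f_a(x) = sum_{u,v} a_{uv} x_u x_v for a spin vector x."""
    return float(x @ a @ x)

def empirical_deviation(samples, a):
    """Standard deviation of f_a over an (m, n) array of Ising samples.
    Theorem 2 predicts O(n log n) fluctuations, not the naive n^{3/2}."""
    vals = np.array([bilinear_stat(x, a) for x in samples])
    return vals.std()

rng = np.random.default_rng(1)
n = 20
a = rng.choice([-1.0, 1.0], size=(n, n))
samples = rng.choice([-1, 1], size=(50, n))   # theta = 0 model: i.i.d. spins
print(empirical_deviation(samples, a))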
3.1 Overview of the Technique
A well known approach to proving concentration inequalities for functions of dependent random
variables is via martingale tail bounds. For instance, Azuma's inequality gives useful tail bounds
whenever one can bound the martingale increments (i.e., the differences between consecutive terms
of the martingale sequence) of the underlying martingale in absolute value, without requiring any
form of independence. Such an approach is fruitful in showing concentration of linear functions on
the Ising model in high temperature. The Glauber dynamics associated with Ising models in high
temperature are fast mixing and offer a natural way to define a martingale sequence. In particular,
consider the Doob martingale corresponding to any linear function f for which we wish to show
concentration, defined on the state of the dynamics at some time step t*, i.e. f(X_{t*}). If we choose
t* larger than O(n log n) then f(X_{t*}) would be very close to a sample from p irrespective of the
starting state. We set the first term of the martingale sequence as E[f(X_{t*}) | X_0] and the last term is
simply f(X_{t*}). By bounding the martingale increments we can show that |f(X_{t*}) - E[f(X_{t*}) | X_0]|
concentrates at the right radius with high probability. By making t* large enough we can argue that
E[f(X_{t*}) | X_0] ≈ E[f(X)]. Also, crucially, t* need not be too large since the dynamics are fast
mixing. Hence we don't incur too big a hit when applying Azuma's inequality, and one can argue
that linear functions are concentrated with a radius of Õ(√n). Crucial to this argument is the fact
that linear functions are O(1)-Lipschitz (when the entries of a are constant), bounding the Doob
martingale differences to be O(1).
The challenge with bilinear functions is that they are O(n)-Lipschitz; a naive application of the
same approach gives a radius of concentration of Õ(n^{3/2}), which, albeit better than the trivial radius
of O(n^2), is not optimal. To show stronger concentration for bilinear functions, at a high level, the
idea is to bootstrap the known fact that linear functions of the Ising model concentrate well at high
temperature.
The key insight is that, when we have a d-linear function, its Lipschitz constants are bounds on the
absolute values of certain (d-1)-linear functions. In particular, this implies that the Lipschitz constants
of a bilinear function are bounds on the absolute values of certain associated linear functions. And
although a worst case bound on the absolute value of linear functions with bounded coefficients
would be O(n), the fact that linear functions are concentrated within a radius of Õ(√n) means
that bilinear functions are Õ(√n)-Lipschitz in spirit. In order to exploit this intuition, we turn to
more sophisticated concentration inequalities, namely Freedman's inequality (Lemma 2). This is
a generalization of Azuma's inequality, which handles the case when the martingale differences
are only bounded until some stopping time (very roughly, the first time we reach a state where the
expectation of the linear function after mixing is large). To apply Freedman's inequality, we would
need to define a stopping time which has two properties:
1. The stopping time is larger than t* with high probability. Hence, with good probability the
process doesn't stop too early. The harm if the process stops too early (at t < t*) is that we
will not be able to effectively decouple E[f_a(X_t) | X_0] from the choice of X_0. t* is chosen
to be larger than the mixing time of the Glauber dynamics precisely because it allows us to
argue that E[f_a(X_{t*}) | X_0] ≈ E[f_a(X_{t*})] = E[f_a(X)].
2. For all times i + 1 less than the stopping time, the martingale increments are bounded, i.e.
|B_{i+1} - B_i| = Õ(√n), where {B_i}_{i≥0} is the martingale sequence.
We observe that the martingale increments corresponding to a martingale defined on a bilinear
function have the flavor of the conditional expectations of certain linear functions, which can be shown
to concentrate at a radius Õ(√n) when the process starts at its stationary distribution. This provides
us with a nice way of defining the stopping time to be the first time when one of these conditional
expectations deviates by more than Ω(√n · polylog n) from the origin. More precisely, we define
a set G_K^a(t) of configurations x_t, which is parameterized by a function f_a(X) and a parameter K
(which we will take to be Θ̃(√n)). The objects of interest are linear functions f_a^v(X_{t*}) conditioned
on X_t = x_t, where the f_a^v are linear functions which arise when examining the evolution of f_a over
steps of the Glauber dynamics. G_K^a(t) is the set of configurations for which all such linear functions
satisfy certain conditions, including bounded expectation and concentration around their mean. The
stopping time for our process T_K is defined as the first time we have a configuration which leaves
this set G_K^a(t). We can show that the stopping time is large via the following lemma:
Lemma 3. For any t ≥ 0, for t* = 3 t_{mix},

\Pr\big[ X_t \notin G_K^a(t) \big] \leq 8n \exp\left( -\frac{K^2}{8 t^*} \right).
Next, we require a bound on the conditional variance of the martingale increments. This can be
shown using the property that the martingale increments are bounded up until the stopping time:
Lemma 4. Consider the Doob martingale where B_i = E[f_a(X_{t*}) | X_i]. Suppose X_i ∈ G_K^a(i) and
X_{i+1} ∈ G_K^a(i + 1). Then

|B_{i+1} - B_i| \leq 16K + 16 n^2 \exp\left( -\frac{K^2}{16 t^*} \right).
With these two pieces in hand, we can apply Freedman's inequality to bound the desired quantity.
It is worth noting that the martingale approach described above closely relates to the technique
of exchangeable pairs exposited by Chatterjee [Cha05]. When we look at differences for the
martingale sequence defined using the Glauber dynamics, we end up analyzing an exchangeable
pair of the following form: sample X ∼ p from the Ising model. Take a step along the Glauber
dynamics starting from X to reach X'. (X, X') forms an exchangeable pair. This is precisely how
Chatterjee's application of exchangeable pairs is set up. Chatterjee then goes on to study a function
of X and X' which serves as a proxy for the variance of f(X) and obtains concentration results
by bounding the absolute value of this function. The definition of the function involves considering two greedily coupled runs of the Glauber dynamics, just as we do in our martingale based approach.
To summarize, our proof of bilinear concentration involves showing various concentration properties for linear functions via Azuma's inequality, showing that the martingale has Õ(√n)-bounded
differences before our stopping time, proving that the stopping time is larger than the mixing time
with high probability, and combining these ingredients using Freedman's inequality. Full details are
provided in the supplementary material.
3.2 Concentration Under an External Field
Under an external field, not all bilinear functions concentrate nicely even in the high temperature
regime; in particular, they may concentrate with a radius of Θ(n^{1.5}) instead of O(n). As such,
we must instead consider "recentered" statistics to obtain the same radius of concentration. The
following theorem is proved in the supplementary material:
Theorem 3. 1. Bilinear functions on the Ising model of the form f_a(X) = \sum_{u,v} a_{uv} (X_u -
E[X_u])(X_v - E[X_v]) satisfy the following inequality at high temperature. There exist
absolute constants c and c' such that, for r ≥ c n \log^2 n / η,

\Pr\big[ |f_a(X) - E[f_a(X)]| \geq r \big] \leq 4 \exp\left( -\frac{r}{c' n \log n} \right).

2. Bilinear functions on the Ising model of the form f_a(X^{(1)}, X^{(2)}) = \sum_{u,v} a_{uv} (X_u^{(1)} -
X_u^{(2)})(X_v^{(1)} - X_v^{(2)}), where X^{(1)}, X^{(2)} are two i.i.d. samples from the Ising model, satisfy
the following inequality at high temperature. There exist absolute constants c and c' such
that, for r ≥ c n \log^2 n / η,

\Pr\big[ |f_a(X^{(1)}, X^{(2)}) - E[f_a(X^{(1)}, X^{(2)})]| \geq r \big] \leq 4 \exp\left( -\frac{r}{c' n \log n} \right).
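Both recentered forms are straightforward to compute; the sketch below is ours (not the paper's code), with marginals either supplied in closed form or estimated empirically when E[X_u] is not known.

import numpy as np

def recentered_stat(x, a, marginals):
    """f_a(X) = sum_{u,v} a_{uv} (X_u - E[X_u]) (X_v - E[X_v])."""
    z = x - marginals
    return float(z @ a @ z)

def two_sample_stat(x1, x2, a):
    """f_a(X1, X2) = sum_{u,v} a_{uv} (X1_u - X2_u) (X1_v - X2_v)."""
    z = x1 - x2
    return float(z @ a @ z)

rng = np.random.default_rng(2)
n = 10
a = rng.standard_normal((n, n))
x1, x2 = rng.choice([-1, 1], size=(2, n))
print(recentered_stat(x1, a, marginals=np.zeros(n)))  # zero field: E[X_u] = 0
print(two_sample_stat(x1, x2, a))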
4 Concentration of Measure for d-linear Functions
More generally, we can show concentration of measure for d-linear functions on an Ising model in
high temperature, when d ≥ 3. Again, we will focus on the setting with no external field. Although
we will follow a recipe similar to that used for bilinear functions, the proof is more involved and
requires some new definitions and tools. The proof will proceed by induction on the degree d. Due to
the proof being more involved, for ease of exposition, we present the proof of Theorem 4 without
explicit values for constants.
Our main theorem statement is the following:
Theorem 4. Consider any degree-d multilinear function f_a(x) = \sum_{U \subseteq V : |U| = d} a_U \prod_{u \in U} x_u on an
Ising model p (defined on a graph G = (V, E) such that |V| = n) in the η-high-temperature regime
with no external field. Let ‖a‖_∞ = \max_{U \subseteq V : |U| = d} |a_U|. There exist constants C_1 = C_1(d) > 0 and
C_2 = C_2(d) > 0 depending only on d, such that if X ∼ p, then for any r ≥ C_1 ‖a‖_∞ (n \log^2 n / η)^{d/2},
we have

\Pr\big[ |f_a(X) - E[f_a(X)]| > r \big] \leq 2 \exp\left( -\frac{r^{2/d}}{C_2 \|a\|_\infty^{2/d} \, n \log n} \right).
Similar to Remark 1, our theorem statement still holds under the weaker assumption of Hamming
contraction. This bound is also tight up to polylogarithmic factors in the radius of concentration and
the exponent of the tail bound; see Remark 1 in the supplementary material.
4.1 Overview of the Technique
Our approach uses induction and is similar to the one used for bilinear functions. To show concentration for d-linear functions we will use the concentration of (d-1)-linear functions together with
Freedman's martingale inequality.
Consider the following process: Sample X_0 ∼ p from the Ising model of interest. Starting at X_0,
run the Glauber dynamics associated with p for t* = (d + 1) t_{mix} steps. We will study the target
quantity, Pr[|f_a(X_{t*}) - E[f_a(X_{t*}) | X_0]| > K], by defining a martingale sequence similar to the one
in the bilinear proof. However, to bound the increments of the martingale for d-linear functions we
will require an induction hypothesis which is more involved. The reason is that with higher degree
multilinear functions (d > 2), the argument for bounding increments of the martingale sequence runs
into multilinear terms which are a function of not just a single instance of the dynamics X_t, but also
of the configuration obtained from the coupled run, X_t'. We call such multilinear terms hybrid terms,
and multilinear functions involving hybrid terms hybrid multilinear functions, henceforth. Since the
two runs (of the Glauber dynamics) are coupled greedily to maximize the probability of agreement
and they start with a small Hamming distance from each other (≤ 1), these hybrid terms behave very
similarly to the non-hybrid multilinear terms. Showing that their behavior is similar, however, requires
some supplementary statements about them which are presented in the supplementary material.
In addition to the martingale technique of Section 3, an ingredient that is crucial to proving
concentration for d ≥ 3 is a bound on the magnitude of the (d-1)-order marginals of the Ising
model:
Lemma 5. Consider any Ising model p at high temperature. Let d be a positive integer. We have

\sum_{u_1, \ldots, u_d} E_p[X_{u_1} X_{u_2} \cdots X_{u_d}] \leq 2 \left( \frac{4 n d \log n}{\eta} \right)^{d/2}.
This is because when studying degree d ≥ 3 functions we find ourselves having to bound expected
values of degree d-1 multilinear functions on the Ising model. A naive bound of O_d(n^{d-1}) can
be argued for these functions, but by exploiting the fact that we are in high temperature, we can
show a bound of O_d(n^{(d-1)/2}) via a coupling with the Fortuin-Kasteleyn model. When d = 2,
(d-1)-linear functions are just linear functions, which are zero mean. However, for d ≥ 3, this is not
the case. Hence, we first need to prove this desired bound on the marginals of an Ising model in high
temperature.
Further details are provided in the supplementary material.
5 Experiments
In this section, we apply our family of bilinear statistics on the Ising model to a problem of statistical
hypothesis testing. Given a single sample from a multivariate distribution, we attempt to determine
whether or not this sample was generated from an Ising model in the high-temperature regime. More
specifically, the null hypothesis is that the sample is drawn from an Ising model with a known graph
structure, with a common edge parameter and a uniform node parameter (which may potentially be
known to be 0). In Section 5.1, we apply our statistics to synthetic data. In Section 5.2, we turn our
attention to the Last.fm dataset from HetRec 2011 [CBK11].
The running theme of our experimental investigation is testing the classical and common assumption
which models choices in social networks as an Ising model [Ell93, MS10]. To be more concrete,
choices in a network could include whether to buy an iPhone or an Android phone, or whether
to vote for a Republican or Democratic candidate. Such choices are naturally influenced by one's
neighbors in the network: one may be more likely to buy an iPhone if he sees all his friends have
one, corresponding to an Ising model with positive-weight edges.4 In our synthetic data study, we will
leave these choices as abstract, referring to them only as "values," but in our Last.fm data study, these
choices will be whether or not one listens to a particular artist.
Our general algorithmic approach is as follows. Given a single multivariate sample, we first run the
maximum pseudo-likelihood estimator (MPLE) to obtain an estimate of the model's parameters. The
MPLE is a canonical estimator for the parameters of the Ising model, and it enjoys strong consistency
guarantees in many settings of interest [Cha07, BM16]. If the MPLE gives a large estimate of the
model's edge parameter, this is sufficient evidence to reject the null hypothesis. Otherwise, we use
Markov Chain Monte Carlo (MCMC) on a model with the MPLE parameters to determine a range
of values for our statistic. We note that, to be precise, we would need to quantify the error incurred
by the MPLE; in favor of simplicity in our exploratory investigation, we eschew this detail, and
at this point attempt to reject the null hypothesis of the model learned by the MPLE. Our statistic
is bilinear in the Ising model, and thus enjoys the strong concentration properties explained earlier
in this paper. Note that since the Ising model will be in the high-temperature regime, the Glauber
dynamics mix rapidly, and we can efficiently sample from the model using MCMC. Finally, given the
range of values for the statistic determined by MCMC, we reject the null hypothesis if p ≤ 0.05.
4 Note that one may also decide against buying an iPhone in this scenario, if one places high value on
individuality and uniqueness; this corresponds to negative-weight edges.
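A minimal version of the MPLE step described above is sketched here (ours, not the paper's implementation). It fits the two-parameter null model, a common edge weight beta and a common field h, by maximizing the pseudo-likelihood, i.e. the product of each spin's conditional likelihood given its neighbors; adj is an assumed adjacency matrix, and the placeholder sample is only for the demo.

import numpy as np
from scipy.optimize import minimize

def neg_pseudo_loglik(params, x, adj):
    """Negative log pseudo-likelihood for theta_uv = beta * adj_uv, theta_v = h."""
    beta, h = params
    field = h + beta * (adj @ x)                 # local field at every node
    # Pr[X_v = x_v | rest] = sigmoid(2 * x_v * field_v)
    return float(np.sum(np.log1p(np.exp(-2.0 * x * field))))

def mple(x, adj):
    res = minimize(neg_pseudo_loglik, x0=np.zeros(2), args=(x, adj))
    return res.x                                 # (beta_hat, h_hat)

rng = np.random.default_rng(3)
n = 25
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T         # symmetric, zero diagonal
x = rng.choice([-1.0, 1.0], size=n)              # placeholder sample
print(mple(x, adj))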
5.1 Synthetic Data
We proceed with our investigation on synthetic data. Our null hypothesis is that the sample is
generated from an Ising model in the high temperature regime on the grid, with no external field (i.e.
θ_u = 0 for all u) and a common (unknown) edge parameter β (i.e., θ_{uv} = β iff nodes u and v are
adjacent in the grid, and 0 otherwise). For the Ising model on the grid, the critical edge parameter for
high-temperature is

\beta_c = \frac{\ln(1 + \sqrt{2})}{2}.

In other words, we are in high-temperature if and only if β ≤ β_c,
and we can reject the null hypothesis if the MPLE estimate β̂ > β_c.
To generate departures from the null hypothesis, we give a construction parameterized by τ ∈ [0, 1].
We provide a rough description of the departures; for a precise description, see the supplemental
material. Each node x selects a random node y at Manhattan distance at most 2, and sets y's value
to x's with probability τ. The intuition behind this construction is that each individual selects a
friend or a friend-of-a-friend, and tries to convince them to take his value; he is successful with
probability τ. Selecting either a friend or a friend-of-a-friend is in line with the concept of strong
triadic closure [EK10] from the social sciences, which suggests that two individuals with a mutual
friend are likely to either already be friends (which the social network may not have knowledge of) or
become friends in the future.
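A rough rendering of this construction as code follows (ours; the paper's supplemental material gives the precise version, and details such as the update order are our assumptions).

import numpy as np

def perturb_grid(x, tau, side, seed=0):
    """Each node picks a random node within Manhattan distance 2 and, with
    probability tau, overwrites that node's value with its own. Updates are
    applied to a copy, reading from the original configuration."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    offsets = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
               if 0 < abs(di) + abs(dj) <= 2]
    for i in range(side):
        for j in range(side):
            di, dj = offsets[rng.integers(len(offsets))]
            ni, nj = i + di, j + dj
            if 0 <= ni < side and 0 <= nj < side and rng.random() < tau:
                y[ni, nj] = x[i, j]
    return y

side = 40
x0 = np.random.default_rng(4).choice([-1, 1], size=(side, side))
print(np.mean(perturb_grid(x0, tau=0.04, side=side) != x0))  # fraction flipped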
An example of a sample generated from this distribution with τ = 0.04 is provided in Figure 1 of the
supplementary material, alongside a sample from the Ising model generated with the corresponding
MPLE parameters. We consider this distribution to pass the "eye test": one cannot easily distinguish
these two distributions by simply glancing at them. However, as we will see, our multilinear statistic
is able to correctly reject the null a large fraction of the time.
Our experimental process was as follows. We started with a 40 x 40 grid, corresponding to a
distribution with n = 1600 dimensions. We generated values for this grid according to the departures
from the null described above, with some parameter τ. We then ran the MPLE estimator to obtain an
estimate for the edge parameter β̂, immediately rejecting the null if β̂ > β_c. Otherwise, we ran the
Glauber dynamics for O(n log n) steps to generate a sample from the grid Ising model with parameter
β̂. We repeated this process to generate 100 samples, and for each sample, computed the value of
the statistic

Z_{local} = \sum_{u = (i,j)} \; \sum_{v = (k,l) : d(u,v) \leq 2} X_u X_v,

where d(·, ·) is the Manhattan distance on the grid. This statistic can be justified since we wish to account for the possibility of connections
between friends-of-friends of which the social network may be lacking knowledge. We then compare
with the value of the statistic Z_{local} on the provided sample, and reject the null hypothesis if this
statistic corresponds to a p-value of ≤ 0.05. We repeat this for a wide range of values of τ ∈ [0, 1],
and repeat 500 times for each τ.
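The statistic reduces to a sum over nearby grid pairs; a direct implementation (ours) is below. Whether the paper counts ordered or unordered pairs is our assumption; the code counts each unordered pair once.

import numpy as np

def z_local(x):
    """Z_local = sum over grid pairs u, v at Manhattan distance <= 2 of X_u X_v."""
    side = x.shape[0]
    total = 0.0
    pairs = [(di, dj) for di in range(-2, 3) for dj in range(-2, 3)
             if 0 < abs(di) + abs(dj) <= 2]
    for i in range(side):
        for j in range(side):
            for di, dj in pairs:
                ni, nj = i + di, j + dj
                if 0 <= ni < side and 0 <= nj < side:
                    total += x[i, j] * x[ni, nj]
    return total / 2.0  # each unordered pair was counted twice above

x = np.random.default_rng(5).choice([-1, 1], size=(40, 40))
print(z_local(x))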
Our results are displayed in Figure 1. The x-axis marks the value of the parameter τ, and the y-axis
indicates the fraction of repetitions in which we successfully rejected the null hypothesis. The
performance of the MPLE alone is indicated by the orange line, while the performance of our statistic
is indicated by the blue line. We find that our statistic is able to correctly reject the null at a much
earlier point than the MPLE alone. In particular, our statistic manages to reject the null for τ ≥ 0.04,
while the MPLE requires a parameter which is an order of magnitude larger, at 0.4. As mentioned
before, in the former regime (when τ ≈ 0.04), it appears impossible to distinguish the distribution
from a sample from the Ising model with the naked eye.
5.2 Last.fm Dataset
We now turn our focus to the Last.fm dataset from HetRec '11 [CBK11]. This dataset consists of
data from n = 1892 users on the Last.fm online music system. On Last.fm, users can indicate
(bi-directional) friend relationships, thus constructing a social network; our dataset has m = 12717
such edges. The dataset also contains users' listening habits: for each user we have a list of their
fifty favorite artists, whose tracks they have listened to the most times. We wish to test whether users'
preference for a particular artist is distributed according to a high-temperature Ising model.
Fixing some artist a of interest, we consider the vector X^{(a)}, where X_u^{(a)} is +1 if user u has artist
a in his favorite artists, and -1 otherwise. We wish to test the null hypothesis that X^{(a)} is
distributed according to an Ising model in the high temperature regime on the known social network
graph, with common (unknown) external field h (i.e. θ_u = h for all u) and edge parameter β (i.e.,
θ_{uv} = β iff u and v are neighbors in the graph, and 0 otherwise).
Figure 1: Power of our statistic on synthetic data. (Plot: probability of rejecting the null, on the y-axis, against the model parameter value, on a log-scaled x-axis, comparing the local correlation statistic with the MPLE.)
Our overall experimental process was very similar to the synthetic data case. We gathered a list of the
ten most-common favorite artists, and repeated the following process for each artist a. We consider
the vector X^{(a)} (defined above) and run the MPLE estimator on it, obtaining estimates ĥ and β̂. We
then run MCMC to generate 100 samples from the Ising model with these parameters, and for each
sample, computed the value of the statistics

Z_k = \sum_{u} \; \sum_{v : d(u,v) \leq k} (X_u - \tanh(\hat{h}))(X_v - \tanh(\hat{h})),

where d(·, ·) is the distance on the graph, and k = 1 (the neighbor correlation statistic) or 2 (the
local correlation statistic). Motivated by our theoretical results (Theorem 3), we consider a statistic
where the variables are recentered by their marginal expectations, as this statistic experiences sharper
concentration. We again consider k = 2 to account for the possibility of edges which are unknown to
the social network.
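Computing Z_k on a general graph amounts to recentering by tanh(ĥ) and summing over pairs within graph distance k; the sketch below is ours, dist is an assumed all-pairs shortest-path matrix, and counting each unordered pair once is our convention.

import numpy as np

def z_k(x, dist, h_hat, k):
    """Z_k = sum over pairs (u, v) with 0 < d(u, v) <= k of
    (X_u - tanh(h_hat)) * (X_v - tanh(h_hat))."""
    z = x - np.tanh(h_hat)
    mask = ((dist <= k) & (dist > 0)).astype(float)  # exclude u = v
    return float(z @ mask @ z) / 2.0                 # unordered pairs

# Toy example: a 4-cycle, with graph distances written out by hand.
dist = np.array([[0, 1, 2, 1],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [1, 2, 1, 0]], dtype=float)
x = np.array([1, -1, 1, 1], dtype=float)
print(z_k(x, dist, h_hat=0.1, k=1), z_k(x, dist, h_hat=0.1, k=2))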
Strikingly, we found that the plausibility of the Ising modelling assumption varies significantly
depending on the artist. We highlight some of our more interesting findings here; see the supplemental
material for more details. The most popular artist in the dataset was Lady Gaga, who was a favorite
artist of 611 users in the dataset. We found that X^{(Lady Gaga)} had statistics Z_1 = 9017.3 and
Z_2 = 106540. The range of these statistics computed by MCMC can be seen in Figure 2 of the
supplementary material; clearly, the computed statistics fall far outside these ranges, and we can
reject the null hypothesis with p << 0.01. Similar results held for other popular pop musicians,
including Britney Spears, Christina Aguilera, Rihanna, and Katy Perry.
However, we observed qualitatively different results for The Beatles, the fourth most popular artist,
being a favorite of 480 users. We found that X^{(The Beatles)} had statistics Z_1 = 2157.8 and Z_2 =
22196. The range of these statistics computed by MCMC can be seen in Figure 3 of the supplementary
material. This time, the computed statistics fall near the center of this range, and we cannot reject
the null. Similar results held for the rock band Muse.
Based on our investigation, our statistic seems to indicate that for the pop artists, the null fails to
effectively model the distribution, while it performs much better for the rock artists. We conjecture
that this may be due to the highly divisive popularity of pop artists like Lady Gaga and Britney Spears:
while some users may love these artists (and may form dense cliques within the graph), others have
little to no interest in their music. The null would have to be expanded to accommodate heterogeneity
to model such effects. On the other hand, rock bands like The Beatles and Muse seem to be much
more uniform in their appeal: users seem to be much more homogeneous when it comes to preference
for these groups.
Acknowledgments
Research was supported by NSF CCF-1617730, CCF-1650733, and ONR N00014-12-1-0999. Part
of this work was done while GK was an intern at Microsoft Research New England.
References
[AKN06] Pieter Abbeel, Daphne Koller, and Andrew Y. Ng. Learning factor graphs in polynomial
time and sample complexity. Journal of Machine Learning Research, 7(Aug):1743–1788, 2006.
[BGS14] Guy Bresler, David Gamarnik, and Devavrat Shah. Structure learning of antiferromagnetic Ising models. In Advances in Neural Information Processing Systems 27, NIPS
'14, pages 2852–2860. Curran Associates, Inc., 2014.
[Bha16] Bhaswar B. Bhattacharya. Power of graph-based two-sample tests. arXiv preprint
arXiv:1508.07530, 2016.
[BK16] Guy Bresler and Mina Karzand. Learning a tree-structured Ising model in order to
make predictions. arXiv preprint arXiv:1604.06749, 2016.
[BM16] Bhaswar B. Bhattacharya and Sumit Mukherjee. Inference in Ising models. Bernoulli,
2016.
[Bre15] Guy Bresler. Efficiently learning Ising models on arbitrary graphs. In Proceedings
of the 47th Annual ACM Symposium on the Theory of Computing, STOC '15, pages
771–782, New York, NY, USA, 2015. ACM.
[CBK11] Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. Second workshop on information
heterogeneity and fusion in recommender systems (HetRec 2011). In Proceedings of
the 5th ACM Conference on Recommender Systems, RecSys '11, pages 387–388, New
York, NY, USA, 2011. ACM.
[Cha05] Sourav Chatterjee. Concentration Inequalities with Exchangeable Pairs. PhD thesis,
Stanford University, June 2005.
[Cha07] Sourav Chatterjee. Estimation in spin glasses: A first step. The Annals of Statistics,
35(5):1931–1946, October 2007.
[CL68] C.K. Chow and C.N. Liu. Approximating discrete probability distributions with
dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[CT06] Imre Csiszár and Zsolt Talata. Consistent estimation of the basic neighborhood of
Markov random fields. The Annals of Statistics, 34(1):123–145, 2006.
[DCG68] Stanley Deser, Max Chrétien, and Eugene Gross. Statistical Physics, Phase Transitions,
and Superfluidity. Gordon and Breach, 1968.
[DDK18] Constantinos Daskalakis, Nishanth Dikkala, and Gautam Kamath. Testing Ising models.
In Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms,
SODA '18, Philadelphia, PA, USA, 2018. SIAM.
[DMR11] Constantinos Daskalakis, Elchanan Mossel, and Sébastien Roch. Evolutionary trees
and the Ising model on the Bethe lattice: A proof of Steel's conjecture. Probability
Theory and Related Fields, 149(1):149–189, 2011.
[EK10] David Easley and Jon Kleinberg. Networks, Crowds, and Markets: Reasoning about a
Highly Connected World. Cambridge University Press, 2010.
[Ell93] Glenn Ellison. Learning, local interaction, and coordination. Econometrica, 61(5):1047–1071, 1993.
[Fel04] Joseph Felsenstein. Inferring Phylogenies. Sinauer Associates Sunderland, 2004.
[Fre75] David A. Freedman. On tail probabilities for martingales. The Annals of Probability,
3(1):100–118, 1975.
[GG86] Stuart Geman and Christine Graffigne. Markov random field image models and their
applications to computer vision. In Proceedings of the International Congress of
Mathematicians, pages 1496–1517. American Mathematical Society, 1986.
[GLP17] Reza Gheissari, Eyal Lubetzky, and Yuval Peres. Concentration inequalities for polynomials of contracting Ising models. arXiv preprint arXiv:1706.00121, 2017.
[HKM17] Linus Hamilton, Frederic Koehler, and Ankur Moitra. Information theoretic properties
of Markov random fields, and their algorithmic applications. In Advances in Neural
Information Processing Systems 30, NIPS '17. Curran Associates, Inc., 2017.
[Hop82] John J. Hopfield. Neural networks and physical systems with emergent collective
computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
[Isi25] Ernst Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik A Hadrons
and Nuclei, 31(1):253–258, 1925.
[JJR11] Ali Jalali, Christopher C. Johnson, and Pradeep K. Ravikumar. On learning discrete
graphical models using greedy methods. In Advances in Neural Information Processing
Systems 24, NIPS '11, pages 1935–1943. Curran Associates, Inc., 2011.
[KM17] Adam Klivans and Raghu Meka. Learning graphical models using multiplicative
weights. In Proceedings of the 58th Annual IEEE Symposium on Foundations of
Computer Science, FOCS ?17, Washington, DC, USA, 2017. IEEE Computer Society.
[LPW09] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing
Times. American Mathematical Society, 2009.
[MdCCU16] Abraham Mart?n del Campo, Sarah Cepeda, and Caroline Uhler. Exact goodness-of-fit
testing for the Ising model. Scandinavian Journal of Statistics, 2016.
[MJC+ 14] Lester Mackey, Michael I. Jordan, Richard Y. Chen, Brendan Farrell, and Joel A. Tropp.
Matrix concentration inequalities via the method of exchangeable pairs. The Annals of
Probability, 42(3):906?945, 2014.
[MS10] Andrea Montanari and Amin Saberi. The spread of innovations in social networks.
Proceedings of the National Academy of Sciences, 107(47):20196?20201, 2010.
[Ons44] Lars Onsager. Crystal statistics. I. a two-dimensional model with an order-disorder
transition. Physical Review, 65(3?4):117, 1944.
[RWL10] Pradeep Ravikumar, Martin J. Wainwright, and John D. Lafferty. High-dimensional
ising model selection using `1 -regularized logistic regression. The Annals of Statistics,
38(3):1287?1319, 2010.
[SK75] David Sherrington and Scott Kirkpatrick. Solvable model of a spin-glass. Physical
Review Letters, 35(26):1792, 1975.
[SW12] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information
Theory, 58(7):4117?4134, 2012.
[SZ92] Daniel W. Stroock and Boguslaw Zegarlinski. The logarithmic Sobolev inequality
for discrete spin systems on a lattice. Communications in Mathematical Physics,
149(1):175?193, 1992.
[VMLC16] Marc Vuffray, Sidhant Misra, Andrey Lokhov, and Michael Chertkov. Interaction
screening: Efficient and sample-optimal learning of Ising models. In Advances in
Neural Information Processing Systems 29, NIPS ?16, pages 2595?2603. Curran
Associates, Inc., 2016.
11
| 6607 |@word version:1 polynomial:10 stronger:1 seems:1 nd:4 c0:2 physik:1 closure:1 pieter:1 crucially:1 contraction:4 harder:1 configuration:4 contains:1 efficacy:2 selecting:2 liu:1 daniel:1 ours:1 interestingly:1 z2:2 od:3 si:1 must:1 john:2 maxv:1 mackey:1 stationary:2 greedy:4 half:1 leaf:1 alone:2 detecting:1 provides:1 node:10 gautam:2 preference:3 daphne:1 narayana:1 mathematical:3 along:1 c2:3 direct:3 become:2 symposium:3 focs:1 prove:5 consists:1 x0:14 market:1 expected:1 behavior:1 andrea:1 love:1 rapid:1 multi:1 roughly:3 buying:1 little:1 considering:3 provided:4 moreover:1 underlying:1 bounded:6 null:20 mathematician:1 supplemental:2 finding:1 onsager:1 guarantee:1 pseudo:1 every:1 tie:1 k2:2 hit:1 exchangeable:8 lester:1 appear:1 hamilton:1 before:3 positive:2 understood:1 local:8 xv:11 congress:1 limit:1 zeitschrift:1 despite:1 bilinear:21 analyzing:1 sidhant:1 path:1 au:2 studied:1 ankur:1 suggests:1 ease:1 bi:7 range:7 practical:1 acknowledgment:1 testing:7 alphabetical:1 graffigne:1 bootstrap:1 habit:1 significantly:2 reject:10 word:1 refers:1 fav:2 get:1 lady:3 close:1 selection:1 impossible:2 applying:1 fruitful:1 transportation:1 musician:1 center:1 go:1 attention:2 starting:3 ergodic:1 simplicity:1 disorder:1 immediately:1 insight:1 estimator:4 u6:2 his:3 proving:5 handle:1 exploratory:1 coordinate:1 increment:9 annals:5 pt:1 suppose:2 target:1 construction:2 user:10 exact:1 homogeneous:1 us:2 curran:4 hypothesis:14 origin:1 agreement:2 associate:5 pa:1 approximated:1 mukherjee:1 ising:74 geman:1 ep:1 observed:1 preprint:3 worst:1 ferromagnetic:1 connected:1 ran:2 mentioned:2 wher:1 intuition:2 gross:1 complexity:1 econometrica:1 dynamic:19 motivate:1 depend:1 tight:8 ellison:1 ali:1 myriad:1 technically:1 upon:1 incur:1 strikingly:1 easily:1 indirect:1 hopfield:2 emergent:1 various:2 genre:1 easley:1 fast:3 shortcoming:1 describe:1 monte:2 bhaswar:2 outside:1 neighborhood:1 crowd:1 whose:4 tmix:3 widely:1 supplementary:13 larger:5 say:1 tightness:1 recentered:2 otherwise:5 stanford:1 favor:3 statistic:38 ability:1 online:1 sequence:9 rock:3 interaction:7 adaptation:1 relevant:1 hop82:2 combining:1 rapidly:2 mixing:10 poorly:1 iff:2 ernst:1 academy:2 amin:1 description:2 csisz:1 recipe:1 exploiting:1 adam:1 leave:2 object:1 tk:1 depending:5 derive:1 coupling:7 fixing:1 friend:13 andrew:1 sarah:1 aug:1 strong:4 involves:2 implies:2 indicate:2 quantify:1 come:1 concentrate:6 radius:16 closely:1 lars:1 centered:1 material:14 ms10:4 require:2 argued:1 abbeel:1 generalization:1 preliminary:3 investigation:4 proposition:1 multilinear:15 extension:1 strictly:1 hold:5 around:2 exp:10 maxu:2 algorithmic:2 lokhov:1 consecutive:1 early:2 uniqueness:3 estimation:2 bond:1 tanh:3 expose:1 coordination:1 repetition:1 successfully:1 tool:3 beitrag:1 mit:6 rough:1 clearly:1 gaussian:2 rather:2 imre:1 corollary:1 focus:5 june:1 modelling:1 likelihood:1 indicates:1 bernoulli:1 kuflik:1 brendan:1 greedily:2 glass:3 inference:1 dependent:3 stopping:11 chow:1 koller:1 sunderland:1 doob:3 interested:2 comprising:1 selects:2 overall:1 exponent:2 special:1 orange:1 mutual:1 marginal:1 field:21 equal:1 construct:1 nicely:1 beach:1 sampling:1 having:1 biology:1 represents:4 identical:1 look:1 stuart:1 constantinos:3 cantador:1 jon:1 linus:1 future:1 others:1 gordon:1 richard:1 national:2 individual:2 phase:1 ourselves:1 n1:2 microsoft:1 attempt:3 uhler:1 interest:5 screening:1 investigate:1 possibility:2 highly:2 evaluation:1 joel:1 kirkpatrick:2 zsolt:1 pradeep:2 behind:1 held:2 chain:5 
edge:11 experience:1 elchanan:1 tree:3 iv:1 desired:2 theoretical:4 android:1 instance:2 earlier:2 goodness:1 stroock:3 lattice:3 cost:1 vertex:1 entry:1 uniform:2 successful:2 examining:1 johnson:1 sumit:1 too:4 levin:1 listened:1 varies:1 eec:3 synthetic:9 referring:1 st:2 convince:1 fundamental:1 siam:2 international:1 andrey:1 csail:6 off:1 physic:3 discipline:1 michael:2 together:1 concrete:1 again:2 thesis:1 satisfied:1 unavoidable:2 moitra:1 choose:1 henceforth:1 guy:3 external:12 american:2 account:2 de:1 coefficient:2 inc:4 satisfy:4 xu1:1 farrell:1 vi:2 piece:1 multiplicative:1 try:1 eyal:1 analyze:1 reached:1 start:2 ferromagnetismus:1 curie:1 contribution:1 spin:11 musical:1 variance:5 efficiently:2 who:1 gathered:1 directional:1 weak:1 artist:16 rejecting:2 manages:1 carlo:2 worth:1 britney:2 simultaneous:1 caroline:1 reach:2 influenced:1 whenever:1 definition:6 against:1 vuffray:1 involved:4 naturally:1 associated:4 proof:7 hamming:4 couple:2 sampled:1 stop:2 dataset:9 proved:2 popular:4 aguilera:1 knowledge:2 improves:1 stanley:1 sophisticated:1 appears:2 higher:4 follow:1 wei:1 done:1 though:1 generality:1 just:4 rejected:1 lastly:1 correlation:5 until:2 hand:4 beatles:3 christopher:1 tropp:1 reversible:1 perry:1 del:1 logistic:1 indicated:2 grows:1 usa:5 effect:1 requiring:1 concept:1 ccf:2 evolution:1 hence:4 former:1 glauber:15 gaga:3 adjacent:1 game:1 interchangeably:1 cha07:3 kak:6 mina:1 crystal:1 theoretic:2 demonstrate:1 sherrington:2 performs:1 temperature:34 christine:1 saberi:1 reasoning:1 image:1 gamarnik:1 recently:1 common:6 physical:3 overview:2 reza:1 tail:10 belong:3 extend:2 he:2 marginals:2 cambridge:1 meka:1 enjoyed:1 uv:3 outlined:1 consistency:1 similarly:1 i6:1 grid:7 had:2 scandinavian:1 multivariate:2 recent:1 phone:1 scenario:1 certain:4 n00014:1 misra:1 inequality:27 binary:2 success:1 onr:1 vt:2 tien:1 seen:2 somewhat:1 hadron:1 campo:1 determine:3 maximize:2 relates:1 full:1 mix:1 technical:2 england:1 plausibility:1 offer:2 long:1 christina:1 ravikumar:2 prediction:1 involving:1 basic:1 regression:1 vision:2 expectation:6 arxiv:6 normalization:1 c1:3 justified:2 addition:1 zur:1 wealth:1 crucial:2 fifty:1 lafferty:1 spirit:1 seem:2 jordan:1 call:1 integer:1 near:3 noting:1 easy:3 enough:1 xj:2 independence:1 fit:1 fm:7 opposite:1 suboptimal:1 idea:2 cn:2 listening:1 whether:11 motivated:2 heavier:1 peter:1 lubetzky:2 speaking:1 proceed:3 york:2 remark:3 useful:2 generally:1 clear:1 listed:1 stein:1 ten:1 band:2 concentrated:2 generate:4 exist:3 canonical:2 nsf:1 notice:1 talata:1 sign:3 neuroscience:1 disjoint:1 correctly:2 track:1 popularity:1 blue:1 diverse:1 discrete:4 santhanam:1 group:1 key:3 drawn:1 graph:12 monotone:1 fraction:2 run:9 parameterized:2 powerful:1 fourth:1 soda:1 letter:1 place:1 almost:1 family:2 throughout:1 decide:1 sobolev:3 fortuin:1 bgs14:2 bound:22 individuality:1 distinguish:2 auv:4 annual:3 strength:3 precisely:3 kleinberg:1 aspect:1 argument:2 klivans:1 expanded:1 martin:2 conjecture:2 structured:1 according:3 felsenstein:1 elizabeth:1 joseph:1 making:1 explained:1 pr:9 brusilovsky:1 ln:1 previously:1 overwhelm:1 turn:3 devavrat:1 needed:1 serf:2 end:1 raghu:1 studying:2 apply:5 observe:1 appropriate:1 ubiquity:1 triadic:1 washington:1 bhattacharya:2 alternative:1 shah:1 existence:1 denotes:1 running:1 include:1 mple:15 log2:4 muse:2 graphical:3 music:3 exploit:1 establish:1 approximating:1 classical:2 society:3 iphone:3 xu2:1 already:1 quantity:2 koehler:1 fa:21 concentration:47 dependence:2 usual:1 jalali:1 
said:1 exhibit:1 evolutionary:1 distance:5 gak:5 recsys:1 topic:1 argue:3 trivial:2 reason:1 induction:3 relationship:1 innovation:1 unfortunately:1 cij:2 october:1 kamath:2 statement:5 potentially:1 gk:2 sharper:1 negative:1 stoc:1 steel:1 theorie:1 proper:1 collective:1 unknown:3 recommender:2 markov:8 behave:1 displayed:1 peres:3 situation:1 defining:2 precise:2 heterogeneity:2 dc:1 communication:1 arbitrary:2 david:5 pair:8 namely:1 connection:1 z1:2 learned:1 polylogarithmic:1 pop:3 nip:5 beyond:1 able:3 alongside:1 roch:1 scott:1 departure:3 azuma:4 regime:14 democratic:1 challenge:1 summarize:1 including:4 eschew:1 max:1 wainwright:2 power:2 critical:1 natural:1 hybrid:5 regularized:1 solvable:1 republican:1 imply:1 eye:2 mossel:1 axis:2 irrespective:1 started:1 naive:2 coupled:3 philadelphia:1 breach:1 deviate:1 nice:1 understanding:1 spear:2 eugene:1 review:2 sinauer:1 manhattan:2 lacking:1 loss:1 bresler:3 highlight:1 contracting:1 interesting:1 var:1 ingredient:2 foundation:1 nucleus:1 hetrec:4 degree:14 incurred:1 sufficient:1 proxy:1 consistent:1 pi:1 naked:1 course:1 repeat:2 last:8 supported:1 wilmer:1 enjoys:3 formal:3 weaker:1 neighbor:4 wide:1 taking:1 fall:2 absolute:8 distributed:2 cepeda:1 dimension:3 world:4 transition:2 doesn:1 author:2 qualitatively:2 made:1 far:1 social:13 sourav:2 transaction:2 obtains:1 implicitly:1 clique:4 buy:2 conceptual:1 harm:1 xi:9 don:1 daskalakis:3 glenn:1 favorite:5 bethe:1 zk:1 ca:1 obtaining:1 improving:1 listens:1 poly:1 constructing:1 marc:1 spread:1 main:6 dense:1 abraham:1 bounding:5 big:1 arise:1 freedman:7 n2:3 montanari:1 repeated:2 xu:11 x1:1 ng:1 martingale:31 ny:2 fails:1 inferring:3 theme:1 explicit:4 wish:4 exponential:3 candidate:1 chertkov:1 down:2 theorem:15 rk:1 xt:22 bastien:1 showing:4 r2:3 decay:1 list:2 appeal:1 evidence:1 fusion:1 exists:1 workshop:1 frederic:1 albeit:1 effectively:2 phd:1 magnitude:4 execution:2 accomodate:1 conditioned:1 chatterjee:10 chen:1 flavor:1 logarithmic:5 sophistication:1 fc:1 simply:3 infinitely:1 xt0:1 likely:2 intern:1 katy:1 scalar:2 rihanna:1 corresponds:3 antiferromagnetic:1 satisfies:2 dh:2 acm:5 mart:1 conditional:3 exposition:1 lipschitz:6 hard:1 specifically:2 determined:1 yuval:2 decouple:1 lemma:8 pas:1 experimental:4 divisive:1 vote:1 phylogeny:1 chr:1 mark:1 latter:1 dobrushin:3 mcmc:6 |
6,199 | 6,608 | Deep Subspace Clustering Networks
Pan Ji*
University of Adelaide
Tong Zhang*
Australian National University
Mathieu Salzmann
EPFL - CVLab
Hongdong Li
Australian National University
Ian Reid
University of Adelaide
Abstract
We present a novel deep neural network architecture for unsupervised subspace
clustering. This architecture is built upon deep auto-encoders, which non-linearly
map the input data into a latent space. Our key idea is to introduce a novel
self-expressive layer between the encoder and the decoder to mimic the "self-expressiveness" property that has proven effective in traditional subspace clustering.
Being differentiable, our new self-expressive layer provides a simple but effective
way to learn pairwise affinities between all data points through a standard backpropagation procedure. Being nonlinear, our neural-network based method is able
to cluster data points having complex (often nonlinear) structures. We further
propose pre-training and fine-tuning strategies that let us effectively learn the
parameters of our subspace clustering networks. Our experiments show that
our method significantly outperforms the state-of-the-art unsupervised subspace
clustering techniques.
1 Introduction
In this paper, we tackle the problem of subspace clustering [42], a sub-field of unsupervised learning, which aims to cluster data points drawn from a union of low-dimensional subspaces in an
unsupervised manner. Subspace clustering has become an important problem as it has found various
applications in computer vision, e.g., image segmentation [50, 27], motion segmentation [17, 9],
and image clustering [14, 10]. For example, under Lambertian reflectance, the face images of one
subject obtained with a fixed pose and varying lighting conditions lie in a low-dimensional subspace
of dimension close to nine [2]. Therefore, one can employ subspace clustering to group images of
multiple subjects according to their respective subjects.
Most recent works on subspace clustering [49, 6, 10, 23, 46, 26, 16, 52] focus on clustering linear
subspaces. However, in practice, the data do not necessarily conform to linear subspace models. For
instance, in the example of face image clustering, reflectance is typically non-Lambertian and the
pose of the subject often varies. Under these conditions, the face images of one subject rather lie in
a non-linear subspace (or sub-manifold). A few works [5, 34, 35, 51, 47] have proposed to exploit
the kernel trick [40] to address the case of non-linear subspaces. However, the selection of different
kernel types is largely empirical, and there is no clear reason to believe that the implicit feature space
corresponding to a predefined kernel is truly well-suited to subspace clustering.
In this paper, by contrast, we introduce a novel deep neural network architecture to learn (in an
unsupervised manner) an explicit non-linear mapping of the data that is well-adapted to subspace
clustering. To this end, we build our deep subspace clustering networks (DSC-Nets) upon deep
auto-encoders, which non-linearly map the data points to a latent space through a series of encoder
* Authors contributed equally to this work.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
layers. Our key contribution then consists of introducing a novel self-expressive layer (a fully connected layer without bias and non-linear activations) at the junction between the encoder and the decoder. This layer encodes the "self-expressiveness" property [38, 9] of data drawn from a union
of subspaces, that is, the fact that each data sample can be represented as a linear combination of
other samples in the same subspace. To the best of our knowledge, our approach constitutes the
first attempt to directly learn the affinities (through combination coefficients) between all data points
within one neural network. Furthermore, we propose effective pre-training and fine-tuning strategies
to learn the parameters of our DSC-Nets in an unsupervised manner and with a limited amount of
data.
We extensively evaluate our method on face clustering, using the Extended Yale B [21] and ORL [39]
datasets, and on general object clustering, using COIL20 [31] and COIL100 [30]. Our experiments
show that our DSC-Nets significantly outperform the state-of-the-art subspace clustering methods.
2 Related Work
Subspace Clustering. Over the years, many methods have been developed for linear subspace
clustering. In general, these methods consist of two steps: the first and also most crucial one aims to
estimate an affinity for every pair of data points to form an affinity matrix; the second step then applies
normalized cuts [41] or spectral clustering [32] using this affinity matrix. The resulting methods
can then be roughly divided into three categories [42]: factorization methods [7, 17, 44, 29, 16],
higher-order model based methods [49, 6, 33, 37], and self-expressiveness based methods [9, 24, 26,
46, 15, 12, 22, 52]. In essence, factorization methods build the affinity matrix by factorizing the data
matrix, and methods based on higher-order models estimate the affinities by exploiting the residuals
of local subspace model fitting. Recently, self-expressiveness based methods, which seek to express
the data points as a linear combination of other points in the same subspace, have become the most
popular ones. These methods build the affinity matrix using the matrix of combination coefficients.
Compared to factorization techniques, self-expressiveness based methods are often more robust to
noise and outliers when relying on regularization terms to account for data corruptions. They also
have the advantage over higher-order model based methods of considering connections between all
data points rather than exploiting local models, which are often suboptimal. To handle situations
where data points do not exactly reside in a union of linear subspaces, but rather in non-linear ones,
a few works [34, 35, 51, 47] have proposed to replace the inner product of the data matrix with a
pre-defined kernel matrix (e.g., polynomial kernel and Gaussian RBF kernel). There is, however, no
clear reason why such kernels should correspond to feature spaces that are well-suited to subspace
clustering. By contrast, here, we propose to explicitly learn one that is.
Auto-Encoders. Auto-encoders (AEs) can non-linearly transform data into a latent space. When
this latent space has lower dimension than the original one [13], this can be viewed as a form of
non-linear PCA. An auto-encoder typically consists of an encoder and a decoder to define the data
reconstruction cost. With the success of deep learning [20], deep (or stacked) AEs have become
popular for unsupervised learning. For instance, deep AEs have proven useful for dimensionality
reduction [13] and image denoising [45]. Recently, deep AEs have also been used to initialize deep
embedding networks for unsupervised clustering [48]. A convolutional version of deep AEs was also
applied to extract hierarchical features and to initialize convolutional neural networks (CNNs) [28].
There has been little work in the literature combining deep learning with subspace clustering. To the
best of our knowledge, the only exception is [36], which first extracts SIFT [25] or HOG [8] features
from the images and feeds them to a fully connected deep auto-encoder with a sparse subspace
clustering (SSC) [10] prior. The final clustering is then obtained by applying k-means or SSC on the
learned auto-encoder features. In essence, [36] can be thought of as a subspace clustering method
based on k-means or SSC with deep auto-encoder features. Our method significantly differs from [36]
in that our network is designed to directly learn the affinities, thanks to our new self-expressive layer.
3 Deep Subspace Clustering Networks (DSC-Nets)
Our deep subspace clustering networks leverage deep auto-encoders and the self-expressiveness
property. Before introducing our networks, we first discuss this property in more detail.
Figure 1: Deep Convolutional Auto-Encoder: The input $x_i$ is mapped to $z_i$ through an encoder, and then reconstructed as $\hat{x}_i$ through a decoder. We use shaded circles to denote data vectors and shaded squares to denote the channels after convolution or deconvolution. We do not enforce the weights of the corresponding encoder and decoder layers to be coupled (or the same).
3.1 Self-Expressiveness
Given data points $\{x_i\}_{i=1,\dots,N}$ drawn from multiple linear subspaces $\{S_i\}_{i=1,\dots,K}$, one can express
a point in a subspace as a linear combination of other points in the same subspace. In the literature [38,
9], this property is called self-expressiveness. If we stack all the points xi into columns of a data
matrix X, the self-expressiveness property can be simply represented as one single equation, i.e.,
X = XC, where C is the self-representation coefficient matrix. It has been shown in [15] that,
under the assumption that the subspaces are independent, by minimizing certain norms of C, C is
guaranteed to have a block-diagonal structure (up to certain permutations), i.e., $c_{ij} \neq 0$ iff point $x_i$ and point $x_j$ lie in the same subspace. So we can leverage the matrix C to construct the affinity
matrix for spectral clustering. Mathematically, this idea is formalized as the optimization problem
$$\min_C \|C\|_p \quad \text{s.t.} \quad X = XC, \; (\mathrm{diag}(C) = 0), \tag{1}$$
where $\|\cdot\|_p$ represents an arbitrary matrix norm, and the optional diagonal constraint on C prevents trivial solutions for sparsity-inducing norms, such as the $\ell_1$ norm. Various norms for C have been proposed in the literature, e.g., the $\ell_1$ norm in Sparse Subspace Clustering (SSC) [9, 10], the
nuclear norm in Low Rank Representation (LRR) [24, 23] and Low Rank Subspace Clustering
(LRSC) [11, 43], and the Frobenius norm in Least-Squares Regression (LSR) [26] and Efficient
Dense Subspace Clustering (EDSC) [15]. To account for data corruptions, the equality constraint
in (1) is often relaxed as a regularization term, leading to
$$\min_C \|C\|_p + \frac{\lambda}{2}\|X - XC\|_F^2 \quad \text{s.t.} \quad (\mathrm{diag}(C) = 0). \tag{2}$$
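To make this concrete, the Frobenius-norm instance of (2), as used in LSR/EDSC, admits a closed-form solution. The following NumPy sketch (ours, with toy data and an arbitrary regularization weight) illustrates it; the $\ell_1$ (SSC) and nuclear-norm (LRR) variants have no closed form and require iterative solvers instead:

```python
import numpy as np

def self_expressive_lsr(X, lam=0.1):
    """Closed-form solution of min_C lam*||C||_F^2 + 0.5*||X - XC||_F^2.

    X is d x N with data points as columns. Setting the gradient to zero
    gives (X^T X + 2*lam*I) C = X^T X.
    """
    N = X.shape[1]
    G = X.T @ X                                   # Gram matrix, N x N
    return np.linalg.solve(G + 2 * lam * np.eye(N), G)

# Two toy 1-D subspaces in R^3; large |C| entries form diagonal blocks,
# i.e., each point is expressed by points from its own subspace.
X = np.hstack([np.outer([1.0, 0, 0], [1, 2, 3]),
               np.outer([0, 1.0, 1], [1, -1, 2])])
print(np.round(np.abs(self_expressive_lsr(X)), 2))
```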
Unfortunately, the self-expressiveness property only holds for linear subspaces. While kernel based
methods [34, 35, 51, 47] aim to tackle the non-linear case, it is not clear that pre-defined kernels yield
implicit feature spaces that are well-suited for subspace clustering. In this work, we aim to learn an
explicit mapping that makes the subspaces more separable. To this end, and as discussed below, we
propose to build our networks upon deep auto-encoders.
3.2 Self-Expressive Layer in Deep Auto-Encoders
Our goal is to train a deep auto-encoder, such as the one depicted by Figure 1, such that its latent
representation is well-suited to subspace clustering. To this end, we introduce a new layer that
encodes the notion of self-expressiveness.
Specifically, let $\Theta$ denote the auto-encoder parameters, which can be decomposed into encoder parameters $\Theta_e$ and decoder parameters $\Theta_d$. Furthermore, let $Z_{\Theta_e}$ denote the output of the encoder,
i.e., the latent representation of the data matrix X. To encode self-expressiveness, we introduce a
new loss function defined as
$$L(\Theta, C) = \frac{1}{2}\|X - \hat{X}_\Theta\|_F^2 + \lambda_1\|C\|_p + \frac{\lambda_2}{2}\|Z_{\Theta_e} - Z_{\Theta_e}C\|_F^2 \quad \text{s.t.} \quad (\mathrm{diag}(C) = 0), \tag{3}$$
where $\hat{X}_\Theta$ represents the data reconstructed by the auto-encoder. To minimize (3), we propose to
leverage the fact that, as discussed below, C can be thought of as the parameters of an additional
network layer, which lets us solve for $\Theta$ and C jointly using backpropagation.¹

¹ Note that one could also alternate minimization between $\Theta$ and C. However, since the loss is non-convex, this would not provide better convergence guarantees and would require investigating the influence of the number of steps in the optimization w.r.t. $\Theta$ on the clustering results.
Figure 2: Deep Subspace Clustering Networks: As an example, we show a deep subspace clustering
network with three convolutional encoder layers, one self-expressive layer, and three deconvolutional
decoder layers. During training, we first pre-train the deep auto-encoder without the self-expressive
layer; we then fine-tune our entire network using this pre-trained model for initialization.
Specifically, consider the self-expressiveness term in (3), $\|Z_{\Theta_e} - Z_{\Theta_e}C\|_F^2$. Since each data point $z_i$ (in the latent space) is approximated by a weighted linear combination of other points $\{z_j\}_{j=1,\dots,N}$ (optionally, $j \neq i$) with weights $c_{ij}$, this linear operation corresponds exactly to a set of linear neurons without non-linear activations. Therefore, if we take each $z_i$ as a node in the network, we
can then represent the self-expressiveness term with a fully-connected linear layer, which we call
the self-expressive layer. The weights of the self-expressive layer correspond to the matrix C in (3),
which are further used to construct affinities between all data points. Therefore, our self-expressive
layer essentially lets us directly learn the affinity matrix via the network. Moreover, minimizing
$\|C\|_p$ simply translates to adding a regularizer to the weights of the self-expressive layer. In this work, we consider two kinds of regularizations on C: (i) the $\ell_1$ norm, resulting in a network denoted by DSC-Net-L1; (ii) the $\ell_2$ norm, resulting in a network denoted by DSC-Net-L2.
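A minimal sketch of such a layer, written here in PyTorch for illustration (the paper's released code is in TensorFlow, so this is our own re-expression), could look as follows; masking the diagonal is one simple way to honor the diag(C) = 0 constraint:

```python
import torch
import torch.nn as nn

class SelfExpressive(nn.Module):
    """Fully connected layer without bias or activation: Z -> C Z.

    The weight matrix C (N x N) holds the self-expression coefficients for
    N samples; the diagonal is masked out so that no point trivially
    reconstructs itself (the diag(C) = 0 constraint).
    """
    def __init__(self, n_samples):
        super().__init__()
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def _masked(self):
        return self.C * (1.0 - torch.eye(self.C.shape[0]))

    def forward(self, z):            # z: N x latent_dim, one row per sample
        return self._masked() @ z    # row i becomes sum_j c_ij * z_j

    def regularizer(self, p=2):
        # p=1 gives the l1 penalty (DSC-Net-L1), p=2 the l2 one (DSC-Net-L2)
        return torch.norm(self._masked(), p=p)
```

The `regularizer` method then supplies the $\|C\|_p$ term of (3) during training.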
For notational consistency, let us denote the parameters of the self-expressive layer (which are just the
elements of C) as $\Theta_s$. As can be seen from Figure 2, we then take the input to the decoder part of our network to be the transformed latent representation $Z_{\Theta_e}\Theta_s$. This lets us re-write our loss function as
$$\tilde{L}(\tilde{\Theta}) = \frac{1}{2}\|X - \hat{X}_{\tilde{\Theta}}\|_F^2 + \lambda_1\|\Theta_s\|_p + \frac{\lambda_2}{2}\|Z_{\Theta_e} - Z_{\Theta_e}\Theta_s\|_F^2 \quad \text{s.t.} \quad (\mathrm{diag}(\Theta_s) = 0), \tag{4}$$
where the network parameters $\tilde{\Theta}$ now consist of encoder parameters $\Theta_e$, self-expressive layer parameters $\Theta_s$, and decoder parameters $\Theta_d$, and where the reconstructed data $\hat{X}$ is now a function of $\{\Theta_e, \Theta_s, \Theta_d\}$ rather than just $\{\Theta_e, \Theta_d\}$ in (3).
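Written out directly, the loss in (4) is straightforward to evaluate; a minimal NumPy version (our own variable names and conventions, with points stored as columns) is:

```python
import numpy as np

def dsc_loss(X, X_hat, Z, C, lam1=1.0, lam2=1.0, p=2):
    """Evaluate loss (4). Columns of X, X_hat (d x N) and Z (k x N) are data
    points, matching X = XC in the text; C is N x N with zero diagonal."""
    recon = 0.5 * np.linalg.norm(X - X_hat, 'fro') ** 2
    reg = np.abs(C).sum() if p == 1 else np.linalg.norm(C, 'fro')
    selfexp = 0.5 * np.linalg.norm(Z - Z @ C, 'fro') ** 2
    return recon + lam1 * reg + lam2 * selfexp
```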
3.3 Network Architecture
Our network consists of three parts, i.e., stacked encoders, a self-expressive layer, and stacked
decoders. The overall network architecture is shown in Figure 2. In this paper, since we focus on
image clustering problems, we advocate the use of convolutional auto-encoders that have fewer
parameters than the fully connected ones and are thus easier to train. Note, however, that fully-connected auto-encoders are also compatible with our self-expressive layer. In the convolutional
layers, we use kernels with stride 2 in both horizontal and vertical directions, and rectified linear unit
(ReLU) [19] for the non-linear activations. Given N images to be clustered, we use all the images in
a single batch. Each input image is mapped by the convolutional encoder layers to a latent vector (or
node) $z_i$, represented as a shaded circle in Figure 2. In the self-expressive layer, the nodes are fully
connected using linear weights without bias and non-linear activations. The latent vectors are then
mapped back to the original image space via the deconvolutional decoder layers.
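As an illustration, this kind of encoder/decoder pair could be sketched in PyTorch as below; the [10, 20, 30]-channel, stride-2 design follows the Extended Yale B configuration given later in Table 1, while the padding values are our assumptions and `output_padding` may be needed to recover the exact input size:

```python
import torch.nn as nn

class ConvAE(nn.Module):
    """Stride-2 convolutional auto-encoder with ReLU activations, a sketch
    of the three-layer [10, 20, 30]-channel design described in the text."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 10, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(10, 20, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(20, 30, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(30, 20, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(20, 10, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(10, 1, 5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation
        return z, self.decoder(z)    # latent codes and reconstruction
```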
For the $i$-th encoder layer with $n_i$ channels of kernel size $k_i \times k_i$, the number of weight parameters is $k_i^2 n_{i-1} n_i$, with $n_0 = 1$. Since the encoders and decoders have symmetric structures, their total number of parameters is $\sum_i 2 k_i^2 n_{i-1} n_i$, plus the number of bias parameters $\sum_i 2 n_i - n_1 + 1$. For $N$ input images, the number of parameters for the self-expressive layer is $N^2$.
Figure 3: From the parameters of the self-expressive layer, we construct an affinity matrix, which we use to perform spectral clustering to get the final clusters. Best viewed in color.
For example, if we have three encoder layers with 10, 20, and 30 channels, respectively, and all convolutional kernels are of size $3 \times 3$, then the number of parameters for the encoders and decoders is $\sum_{i=1}^{3} 2(k_i^2 n_{i-1} + 1) n_i - n_1 + 1 = 14671$. If we have 1000 input images, then the number of parameters in the self-expressive layer is $10^6$. Therefore, the network parameters are typically dominated by those of the self-expressive layer.
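These counts are easy to verify layer by layer; the small helper below (ours, not the paper's code) reproduces the 14671 figure once one accounts for the decoder's final layer mapping back to a single channel:

```python
def conv_params(channels, kernels):
    """k^2 * n_in * n_out weights plus n_out biases, summed over layers."""
    return sum(k * k * n_in * n_out + n_out
               for k, n_in, n_out in zip(kernels, channels[:-1], channels[1:]))

enc = conv_params([1, 10, 20, 30], [3, 3, 3])   # 100 + 1820 + 5430 = 7350
dec = conv_params([30, 20, 10, 1], [3, 3, 3])   # 5420 + 1810 + 91  = 7321
print(enc + dec)                                 # 14671, as in the text
print(1000 ** 2)                                 # 10^6 self-expressive weights
```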
3.4 Training Strategy
Since the size of datasets for unsupervised subspace clustering is usually limited (e.g., in the order of
thousands of images), our networks remain of a tractable size. However, for the same reason, it also
remains difficult to directly train a network with millions of parameters from scratch. To address this,
we design the pre-training and fine-tuning strategies described below. Note that this also allows us to
avoid the trivial all-zero solution while minimizing the loss (4).
As illustrated in Figure 2, we first pre-train the deep auto-encoder without the self-expressive layer on
all the data we have. We then use the trained parameters to initialize the encoder and decoder layers
of our network. After this, in the fine-tuning stage, we build a big batch using all the data to minimize
the loss $\tilde{L}(\tilde{\Theta})$ defined in (4) with a gradient descent method. Specifically, we use Adam [18], an adaptive momentum based gradient descent method, to minimize the loss, where we set the learning rate to $1.0 \times 10^{-3}$ in all our experiments. Since we always use the same batch in each training epoch,
our optimization strategy is a deterministic momentum-based gradient method rather than a stochastic gradient method. Note also that, since we only have access to images for training and not to cluster
labels, our training strategy is unsupervised (or self-supervised).
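A minimal fine-tuning loop in this spirit, using the illustrative `ConvAE` and `SelfExpressive` modules sketched above (PyTorch, our own re-expression of the released TensorFlow code, with placeholder hyper-parameters), might read:

```python
import torch

def fine_tune(model, selfexp, X, lam1, lam2, epochs, p=2, lr=1e-3):
    """Deterministic full-batch Adam on the loss in (4).

    model: ConvAE sketch from above; selfexp: SelfExpressive sketch;
    X: all N images as one N x 1 x H x W tensor (the paper's single big batch).
    """
    params = list(model.parameters()) + list(selfexp.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        z = model.encoder(X)                  # N x C x h x w latent maps
        z_flat = z.flatten(1)                 # N x latent_dim, one row per image
        zc = selfexp(z_flat)                  # C Z, same shape
        x_hat = model.decoder(zc.view_as(z))  # decode the self-expressed codes
        loss = (0.5 * (X - x_hat).pow(2).sum()
                + lam1 * selfexp.regularizer(p)
                + 0.5 * lam2 * (z_flat - zc).pow(2).sum())
        loss.backward()
        opt.step()
    return selfexp.C.detach()                 # coefficients for spectral clustering
```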
Once the network is trained, we can use the parameters of the self-expressive layer to construct an
affinity matrix for spectral clustering [32], as illustrated in Figure 3. Although such an affinity matrix
could in principle be computed as $|C| + |C^T|$, over the years, researchers in the field have developed
many heuristics to improve the resulting matrix. Since there is no globally-accepted solution for this
step in the literature, we make use of the heuristics employed by SSC [10] and EDSC [15]. Due to
the lack of space, we refer the reader to the publicly available implementation of SSC and Section 5
of [15], as well as to the TensorFlow implementation of our algorithm² for more detail.
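Setting those heuristics aside, the basic pipeline from the learned coefficients to cluster labels can be sketched with scikit-learn as follows; this is our simplification, and the released code applies the additional SSC/EDSC post-processing first:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def clusters_from_C(C, n_clusters):
    """Turn learned coefficients into an affinity and spectrally cluster it."""
    A = np.abs(C) + np.abs(C).T                 # the basic |C| + |C^T| affinity
    sc = SpectralClustering(n_clusters=n_clusters, affinity='precomputed',
                            assign_labels='discretize', random_state=0)
    return sc.fit_predict(A)
```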
4 Experiments
We implemented our method in Python with Tensorflow-1.0 [1], and evaluated it on four standard
datasets, i.e., the Extended Yale B and ORL face image datasets, and the COIL20/100 object image
datasets. We compare our methods against the following baselines: Low Rank Representation
(LRR) [23], Low Rank Subspace Clustering (LRSC) [43], Sparse Subspace Clustering (SSC) [10],
Kernel Sparse Subspace Clustering (KSSC) [35], SSC by Orthogonal Matching Pursuit (SSC-OMP) [53], Efficient Dense Subspace Clustering (EDSC) [15], SSC with the pre-trained convolutional
auto-encoder features (AE+SSC), and EDSC with the pre-trained convolutional auto-encoder features
(AE+EDSC). For all the baselines, we used the source codes released by the authors and tuned the
parameters by grid search to achieve the best results on each dataset. Since the code for the deep
subspace clustering method of [36] is not publicly available, we are only able to provide a comparison
² https://github.com/panji1990/Deep-subspace-clustering-networks
Figure 4: Sample images from (a) Extended Yale B, (b) ORL, and (c) COIL20 and COIL100.
layers            kernel size   channels   parameters
encoder-1         5 x 5         10         260
encoder-2         3 x 3         20         1820
encoder-3         3 x 3         30         5430
self-expressive   -             -          5914624
decoder-1         3 x 3         30         5420
decoder-2         3 x 3         20         1810
decoder-3         5 x 5         10         251
Table 1: Network settings for Extended Yale B.
against this approach on Extended Yale B and COIL20, for which the results are provided in [36].
Note that this comparison already clearly shows the benefits of our approach.
For all quantitative evaluations, we make use of the clustering error rate, defined as
$$\text{err}\,\% = \frac{\#\text{ of wrongly clustered points}}{\text{total }\#\text{ of points}} \times 100\%. \tag{5}$$
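Because cluster indices are arbitrary, evaluating (5) requires matching predicted clusters to ground-truth labels; the paper does not spell this step out, but a standard implementation uses the Hungarian algorithm, as in this sketch of ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(pred, truth):
    """err% under the best one-to-one matching of predicted to true labels."""
    K = int(max(pred.max(), truth.max())) + 1
    hits = np.zeros((K, K), dtype=int)
    for p, t in zip(pred, truth):
        hits[p, t] += 1
    row, col = linear_sum_assignment(-hits)     # maximize total agreement
    return 100.0 * (1.0 - hits[row, col].sum() / len(pred))
```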
4.1 Extended Yale B Dataset
The Extended Yale B dataset [21] is a popular benchmark for subspace clustering. It consists of 38
subjects, each of which is represented with 64 face images acquired under different illumination
conditions (see Figure 4(a) for sample images from this dataset). Following the experimental setup
of [10], we down-sampled the original face images from $192 \times 168$ to $42 \times 42$ pixels, which makes it computationally feasible for the baselines [10, 23]. In each experiment, we pick $K \in \{10, 15, 20, 25, 30, 35, 38\}$ subjects (each subject with 64 face images) to test the robustness w.r.t.
an increasing number of clusters. Taking all possible combinations of K subjects out of 38 would
result in too many experimental trials. To get a manageable size of experiments, we first number the
subjects from 1 to 38 and then take all possible K consecutive subjects. For example, in the case
of 10 subjects, we take all the images from subjects 1-10, 2-11, ..., 29-38, giving rise to 29
experimental trials.
We experimented with different architectures for the convolutional layers of our network, e.g., different
network depths and number of channels. While increasing these values increases the representation
power of the network, it also increases the number of network parameters, thus requiring larger
training datasets. Since the size of Extended Yale B is quite limited, with only 2432 images, we
found having three-layer encoders and decoders with [10, 20, 30] channels to be a good trade-off for
this dataset. The detailed network settings are described in Table 1. In the fine-tuning phase, since
the number of epochs required for gradient descent increases as the number of subjects K increases,
we defined the number of epochs for DSC-Net-L1 as $160 + 20K$ and for DSC-Net-L2 as $50 + 25K$. We set the regularization parameters to $\lambda_1 = 1.0$, $\lambda_2 = 1.0 \times 10^{K/10 - 3}$.
The clustering performance of different methods for different numbers of subjects is provided in
Table 2. For the experiments with K subjects, we report the mean and median errors of $39 - K$
experimental trials. From these results, we can see that the performance of most of the baselines
decreases dramatically as the number of subjects K increases. By contrast, the performance of our
deep subspace clustering methods, DSC-Net-L1 and DSC-Net-L2, remains relatively stable w.r.t.
the number of clusters. Specifically, our DSC-Net-L2 achieves 2.67% error rate for 38 subjects,
which is only around 1/5 of the best performing baseline EDSC. We also observe that using the
pre-trained auto-encoder features does not necessarily improve the performance of SSC and EDSC,
which confirms the benefits of our joint optimization of all parameters in one network. The results
of [36] on this dataset for 38 subjects was reported to be 92.08 ± 2.42% in terms of clustering accuracy, or equivalently 7.92 ± 2.42% in terms of clustering error, which is worse than both our
methods ? DSC-Net-L1 and DSC-Net-L2. We further notice that DSC-Net-L1 performs slightly
worse than DSC-Net-L2 in the current experimental settings. We conjecture that this is due to the
difficulty in optimization introduced by the $\ell_1$ norm, as it is non-differentiable at zero.
Subjects       LRR     LRSC    SSC     AE+SSC  KSSC    SSC-OMP  EDSC    AE+EDSC  DSC-Net-L1  DSC-Net-L2
10   Mean      22.22   30.95   10.22   17.06   14.49   12.08    5.64    5.46     2.23        1.59
     Median    23.49   29.38   11.09   17.75   15.78   8.28     5.47    6.09     2.03        1.25
15   Mean      23.22   31.47   13.13   18.65   16.22   14.05    7.63    6.70     2.17        1.69
     Median    23.49   31.64   13.40   17.76   17.34   14.69    6.41    5.52     2.03        1.72
20   Mean      30.23   28.76   19.75   18.23   16.55   15.16    9.30    7.67     2.17        1.73
     Median    29.30   28.91   21.17   16.80   17.34   15.23    10.31   6.56     2.11        1.80
25   Mean      27.92   27.81   26.22   18.72   18.56   18.89    10.67   10.27    2.53        1.75
     Median    28.13   26.81   26.66   17.88   18.03   18.53    10.84   10.22    2.19        1.81
30   Mean      37.98   30.64   28.76   19.99   20.49   20.75    11.24   11.56    2.63        2.07
     Median    36.82   30.31   28.59   20.00   20.94   20.52    11.09   10.36    2.81        2.19
35   Mean      41.85   31.35   28.55   22.13   26.07   20.29    13.10   13.28    3.09        2.65
     Median    41.81   31.74   29.04   21.74   25.92   20.18    13.10   13.21    3.10        2.64
38   Mean      34.87   29.89   27.51   25.33   27.75   24.71    11.64   12.66    3.33        2.67
     Median    34.87   29.89   27.51   25.33   27.75   24.71    11.64   12.66    3.33        2.67
Table 2: Clustering error (in %) on Extended Yale B. The lower the better.
layers            kernel size   channels   parameters
encoder-1         5 x 5         5          130
encoder-2         3 x 3         3          138
encoder-3         3 x 3         3          84
self-expressive   -             -          160000
decoder-1         3 x 3         3          84
decoder-2         3 x 3         3          140
decoder-3         5 x 5         5          126
Table 3: Network settings for ORL.
4.2 ORL Dataset
The ORL dataset [39] is composed of 400 human face images, with 40 subjects each having 10
samples. Following [4], we down-sampled the original face images from $112 \times 92$ to $32 \times 32$. For
each subject, the images were taken under varying lighting conditions with different facial expressions
(open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses) (see Figure 4(b)
for sample images). Compared to Extended Yale B, this dataset is more challenging for subspace
clustering because (i) the face subspaces have more non-linearity due to varying facial expressions and
details; (ii) the dataset size is much smaller (400 vs. 2432). To design a trainable deep auto-encoder
on 400 images, we reduced the number of network parameters by decreasing the number of channels
in each encoder and decoder layer. The resulting network is specified in Table 3.
Since we already verified the robustness of our method to the number of clusters in the previous
experiment, here, we only provide results for clustering all 40 subjects. In this setting, we set
$\lambda_1 = 1$ and $\lambda_2 = 0.2$ and run 700 epochs for DSC-Net-L2 and 1500 epochs for DSC-Net-L1 during
fine-tuning. Note that, since the size of this dataset is small, we can even use the whole data as a
single batch in pre-training. We found this to be numerically more stable and converge faster than
stochastic gradient descent using randomly sampled mini-batches.
Figure 5(a) shows the error rates of the different methods, where different colors denote different
subspace clustering algorithms and the length of the bars reflects the error rate. Since there are
much fewer samples per subject, all competing methods perform worse than on Extended Yale B.
Note that both EDSC and SSC achieve moderate clustering improvement by using the features of
pre-trained convolutional auto-encoders, but their error rates are still around twice as high as those of
our methods.
4.3 COIL20 and COIL100 Datasets
The previous experiments both target face clustering. To show the generality of our algorithm, we
also evaluate it on the COIL object image datasets: COIL20 [31] and COIL100 [30]. COIL20
consists of 1440 gray-scale image samples, distributed over 20 objects such as duck and car model
(see sample images in Figure 4(c)). Similarly, COIL100 consists of 7200 images distributed over
100 objects. Each object was placed on a turntable against a black background, and 72 images were
taken at pose intervals of 5 degrees. Following [3], we down-sampled the images to 32 ? 32. In
contrast with the previous human face datasets, in which faces are well aligned and have similar
structures, the object images from COIL20 and COIL100 are more diverse, and even samples from
Figure 5: Subspace clustering error (in %) on the (a) ORL, (b) COIL20, and (c) COIL100 datasets. Different colors indicate different methods. The height of the bars encodes the error, so the lower the better.
          layers            kernel size   channels   parameters
COIL20    encoder-1         3 x 3         15         150
          self-expressive   -             -          2073600
          decoder-1         3 x 3         15         136
COIL100   encoder-1         5 x 5         50         1300
          self-expressive   -             -          51840000
          decoder-1         5 x 5         50         1251
Table 4: Network settings for COIL20 and COIL100.
the same object differ from each other due to the change of viewing angle. This makes these datasets
challenging for subspace clustering techniques. For these datasets, we used shallower networks with
one encoder layer, one self-expressive layer, and one decoder layer. For COIL20, we set the number
of channels to 15 and the kernel size to $3 \times 3$. For COIL100, we increased the number of channels to 50 and the kernel size to $5 \times 5$. The settings for both networks are provided in Table 4. Note
that with these network architectures, the dimension of the latent space representation $z_i$ increases
by a factor of 15/4 for COIL20 (as the spatial resolution of each channel shrinks to 1/4 of the input
image after convolutions with stride 2, and we have 15 channels) and 50/4 for COIL100. Thus our
networks perform dimensionality lifting rather than dimensionality reduction. This, in some sense, is
similar to the idea of Hilbert space mapping in kernel methods [40], but with the difference that, in
our case, the mapping is explicit, via the neural network. In our experiments, we found that these
shallow, dimension-lifting networks performed better than deep, bottle-neck ones on these datasets.
While it is also possible to design deep, dimension-lifting networks, the number of channels has to
increase by a factor of 4 after each layer to compensate for the resolution loss. For example, if we
want the latent space dimension to increase by a factor of 15/4, we need $15 \times 4$ channels in the second layer for a 2-layer encoder, $15 \times 4^2$ channels in the third layer for a 3-layer encoder, and so forth. In
the presence of limited data, this increasing number of parameters makes training less reliable. In
our fine-tuning stage, we ran 30 epochs (COIL20) / 100 epochs (COIL100) for DSC-Net-L1 and 30
epochs (COIL20) / 120 epochs (COIL100) for DSC-Net-L2, and set the regularization parameters to
$\lambda_1 = 1$, $\lambda_2 = 150/30$ (COIL20/COIL100).
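The lifting factor is easy to verify: a 32 x 32 input has 1024 dimensions, while one stride-2 convolution with 15 channels produces a 16 x 16 x 15 code. A two-line check (ours):

```python
input_dim = 32 * 32                        # 1024 pixels per image
latent_dim = (32 // 2) * (32 // 2) * 15    # one stride-2 conv, 15 channels
print(latent_dim / input_dim)              # 3.75 = 15/4; with 50 channels: 50/4
```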
Figure 5(b) and (c) depict the error rates of the different methods on clustering 20 classes for COIL20
and 100 classes for COIL100, respectively. Note that, in both cases, our DSC-Net-L2 achieves the
lowest error rate. In particular, for COIL20, we obtain an error of 5.14%, which is roughly 1/3 of the
error rate of the best-performing baseline AE+EDSC. The results of [36] on COIL20 were reported
to be 14.24 ± 4.70% in terms of clustering error, which is also much higher than ours.
5 Conclusion
We have introduced a deep auto-encoder framework for subspace clustering by developing a novel
self-expressive layer to harness the "self-expressiveness" property of a union of subspaces. Our deep
subspace clustering network allows us to directly learn the affinities between all data points with a
single neural network. Furthermore, we have proposed pre-training and fine-tuning strategies to train
our network, demonstrating the ability to handle challenging scenarios with small-size datasets, such
as the ORL dataset. Our experiments have demonstrated that our deep subspace clustering methods
provide significant improvement over the state-of-the-art subspace clustering solutions in terms of
clustering accuracy on several standard datasets.
Acknowledgements
This research was supported by the Australian Research Council (ARC) through the Centre of
Excellence in Robotic Vision, CE140100016, and through Laureate Fellowship FL130100102 to
IDR. TZ was supported by the ARC's Discovery Projects funding scheme (project DP150104645).
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467, 2016.
[2] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. TPAMI, 25(2):218-233, 2003.
[3] D. Cai, X. He, J. Han, and T. Huang. Graph regularized nonnegative matrix factorization for data representation. TPAMI, 33(8):1548-1560, 2011.
[4] D. Cai, X. He, Y. Hu, J. Han, and T. Huang. Learning a spatially smooth subspace for face recognition. In CVPR, pages 1-7. IEEE, 2007.
[5] G. Chen, S. Atev, and G. Lerman. Kernel spectral curvature clustering (KSCC). In ICCV Workshops, pages 765-772. IEEE, 2009.
[6] G. Chen and G. Lerman. Spectral curvature clustering (SCC). IJCV, 81(3):317-330, 2009.
[7] J. Costeira and T. Kanade. A multibody factorization method for independently moving objects. IJCV, 29(3):159-179, 1998.
[8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, pages 886-893. IEEE, 2005.
[9] E. Elhamifar and R. Vidal. Sparse subspace clustering. In CVPR, pages 2790-2797, 2009.
[10] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. TPAMI, 35(11):2765-2781, 2013.
[11] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In CVPR, pages 1801-1807. IEEE, 2011.
[12] J. Feng, Z. Lin, H. Xu, and S. Yan. Robust subspace segmentation with block-diagonal prior. In CVPR, pages 3818-3825, 2014.
[13] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[14] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman. Clustering appearances of objects under varying illumination conditions. In CVPR, volume 1, pages 11-18. IEEE, 2003.
[15] P. Ji, M. Salzmann, and H. Li. Efficient dense subspace clustering. In WACV, pages 461-468. IEEE, 2014.
[16] P. Ji, M. Salzmann, and H. Li. Shape interaction matrix revisited and robustified: Efficient subspace clustering with corrupted and incomplete data. In ICCV, pages 4687-4695, 2015.
[17] K.-i. Kanatani. Motion segmentation by subspace separation and model selection. In ICCV, volume 2, pages 586-591. IEEE, 2001.
[18] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[20] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[21] K.-C. Lee, J. Ho, and D. J. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. TPAMI, 27(5):684-698, 2005.
[22] C.-G. Li and R. Vidal. Structured sparse subspace clustering: A unified optimization framework. In CVPR, pages 277-286, 2015.
[23] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. TPAMI, 35(1):171-184, 2013.
[24] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML, pages 663-670, 2010.
[25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[26] C.-Y. Lu, H. Min, Z.-Q. Zhao, L. Zhu, D.-S. Huang, and S. Yan. Robust and efficient subspace segmentation via least squares regression. In ECCV, pages 347-360. Springer, 2012.
[27] Y. Ma, H. Derksen, W. Hong, and J. Wright. Segmentation of multivariate mixed data via lossy data coding and compression. TPAMI, 29(9), 2007.
[28] J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In Artificial Neural Networks and Machine Learning - ICANN 2011, pages 52-59, 2011.
[29] Q. Mo and B. A. Draper. Semi-nonnegative matrix factorization for motion segmentation with missing data. In ECCV, pages 402-415. Springer, 2012.
[30] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-100). Technical Report CUCS-006-96, 1996.
[31] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-20). Technical Report CUCS-005-96, 1996.
[32] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. In NIPS, volume 14, pages 849-856, 2001.
[33] P. Ochs and T. Brox. Higher order motion models and spectral clustering. In CVPR, 2012.
[34] V. M. Patel, H. Van Nguyen, and R. Vidal. Latent space sparse subspace clustering. In ICCV, pages 225-232, 2013.
[35] V. M. Patel and R. Vidal. Kernel sparse subspace clustering. In ICIP, pages 2849-2853. IEEE, 2014.
[36] X. Peng, S. Xiao, J. Feng, W.-Y. Yau, and Z. Yi. Deep subspace clustering with sparsity prior. In IJCAI, 2016.
[37] P. Purkait, T.-J. Chin, H. Ackermann, and D. Suter. Clustering with hypergraphs: the case for large hyperedges. In ECCV, pages 672-687. Springer, 2014.
[38] S. R. Rao, R. Tron, R. Vidal, and Y. Ma. Motion segmentation via robust subspace separation in the presence of outlying, incomplete, or corrupted trajectories. In CVPR, pages 1-8. IEEE, 2008.
[39] F. S. Samaria and A. C. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the Second IEEE Workshop on Applications of Computer Vision, pages 138-142. IEEE, 1994.
[40] J. Shawe-Taylor and N. Cristianini. Kernel methods for pattern analysis. Cambridge University Press, 2004.
[41] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8):888-905, 2000.
[42] R. Vidal. Subspace clustering. IEEE Signal Processing Magazine, 28(2):52-68, 2011.
[43] R. Vidal and P. Favaro. Low rank subspace clustering (LRSC). Pattern Recognition Letters, 43:47-61, 2014.
[44] R. Vidal, R. Tron, and R. Hartley. Multiframe motion segmentation with missing data using PowerFactorization and GPCA. IJCV, 79(1):85-105, 2008.
[45] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11(Dec):3371-3408, 2010.
[46] Y.-X. Wang, H. Xu, and C. Leng. Provable subspace clustering: When LRR meets SSC. In Advances in Neural Information Processing Systems, pages 64-72, 2013.
[47] S. Xiao, M. Tan, D. Xu, and Z. Y. Dong. Robust kernel low-rank representation. IEEE Transactions on Neural Networks and Learning Systems, 27(11):2268-2281, 2016.
[48] J. Xie, R. Girshick, and A. Farhadi. Unsupervised deep embedding for clustering analysis. In ICML, 2016.
[49] J. Yan and M. Pollefeys. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In ECCV, pages 94-106. Springer, 2006.
[50] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry. Unsupervised segmentation of natural images via lossy data compression. CVIU, 110(2):212-225, 2008.
[51] M. Yin, Y. Guo, J. Gao, Z. He, and S. Xie. Kernel sparse subspace clustering on symmetric positive definite manifolds. In CVPR, pages 5157-5164, 2016.
[52] C. You, C.-G. Li, D. P. Robinson, and R. Vidal. Oracle based active set algorithm for scalable elastic net subspace clustering. In CVPR, pages 3928-3937, 2016.
[53] C. You, D. Robinson, and R. Vidal. Scalable sparse subspace clustering by orthogonal matching pursuit. In CVPR, pages 3918-3927, 2016.