Convergent Temporal-Difference Learning with
Arbitrary Smooth Function Approximation
Hamid R. Maei
University of Alberta
Edmonton, AB, Canada
Csaba Szepesvári*
University of Alberta
Edmonton, AB, Canada
Shalabh Bhatnagar
Indian Institute of Science
Bangalore, India
Doina Precup
McGill University
Montreal, QC, Canada
David Silver
University of Alberta,
Edmonton, AB, Canada
Richard S. Sutton
University of Alberta,
Edmonton, AB, Canada
Abstract
We introduce the first temporal-difference learning algorithms that converge with
smooth value function approximators, such as neural networks. Conventional
temporal-difference (TD) methods, such as TD(λ), Q-learning and Sarsa have
been used successfully with function approximation in many applications. However, it is well known that off-policy sampling, as well as nonlinear function approximation, can cause these algorithms to become unstable (i.e., the parameters
of the approximator may diverge). Sutton et al. (2009a, 2009b) solved the problem of off-policy learning with linear TD algorithms by introducing a new objective function, related to the Bellman error, and algorithms that perform stochastic
gradient-descent on this function. These methods can be viewed as natural generalizations to previous TD methods, as they converge to the same limit points when
used with linear function approximation methods. We generalize this work to nonlinear function approximation. We present a Bellman error objective function and
two gradient-descent TD algorithms that optimize it. We prove the asymptotic
almost-sure convergence of both algorithms, for any finite Markov decision process and any smooth value function approximator, to a locally optimal solution.
The algorithms are incremental and the computational complexity per time step
scales linearly with the number of parameters of the approximator. Empirical results obtained in the game of Go demonstrate the algorithms' effectiveness.
1 Introduction
We consider the problem of estimating the value function of a given stationary policy of a Markov
Decision Process (MDP). This problem arises as a subroutine of generalized policy iteration and
is generally thought to be an important step in developing algorithms that can learn good control
policies in reinforcement learning (e.g., see Sutton & Barto, 1998). One widely used technique
for value-function estimation is the TD(λ) algorithm (Sutton, 1988). A key property of the TD(λ) algorithm is that it can be combined with function approximators in order to generalize the observed data to unseen states. This generalization ability is crucial when the state space of the MDP is large or infinite (e.g., TD-Gammon, Tesauro, 1995; elevator dispatching, Crites & Barto, 1997; job-shop scheduling, Zhang & Dietterich, 1997). TD(λ) is known to converge when used with linear function approximators, if states are sampled according to the policy being evaluated, a scenario called on-policy learning (Tsitsiklis & Van Roy, 1997). However, the absence of either of these requirements
can cause the parameters of the function approximator to diverge when trained with TD methods
(e.g., Baird, 1995; Tsitsiklis & Van Roy, 1997; Boyan & Moore, 1995). The question of whether it
is possible to create TD-style algorithms that are guaranteed to converge when used with nonlinear
function approximation has remained open until now. Residual gradient algorithms (Baird, 1995)
*On leave from MTA SZTAKI, Hungary.
attempt to solve this problem by performing gradient descent on the Bellman error. However, unlike
TD, these algorithms usually require two independent samples from each state. Moreover, even if
two samples are provided, the solution to which they converge may not be desirable (Sutton et al.,
2009b provides an example).
In this paper we define the first TD algorithms that are stable when used with smooth nonlinear
function approximators (such as neural networks). Our starting point is the family of TD-style algorithms introduced recently by Sutton et al. (2009a, 2009b). Their goal was to address the instability
of TD learning with linear function approximation, when the policy whose value function is sought
differs from the policy used to generate the samples (a scenario called off-policy learning). These algorithms were designed to approximately follow the gradient of an objective function whose unique
optimum is the fixed point of the original TD(0) algorithm. Here, we extend the ideas underlying
this family of algorithms to design TD-like algorithms which converge, under mild assumptions,
almost surely, with smooth nonlinear approximators. Under some technical conditions, the limit
points of the new algorithms correspond to the limit points of the original (not necessarily convergent) nonlinear TD algorithm. The algorithms are incremental, and the cost of each update is linear
in the number of parameters of the function approximator, as in the original TD algorithm.
Our development relies on three main ideas. First, we extend the objective function of Sutton et
al. (2009b), in a natural way, to the nonlinear function approximation case. Second, we use the
weight-duplication trick of Sutton et al. (2009a) to derive a stochastic gradient algorithm. Third,
in order to implement the parameter update efficiently, we exploit a nice idea due to Pearlmutter
(1994), allowing one to compute exactly the product of a vector and a Hessian matrix in linear
time. To overcome potential instability issues, we introduce a projection step in the weight update.
The almost sure convergence of the algorithm then follows from standard two-time-scale stochastic
approximation arguments.
In the rest of the paper, we first introduce the setting and our notation (Section 2), review previous relevant work (Section 3), introduce the algorithms (Section 4), analyze them (Section 5) and
illustrate the algorithms' performance (Section 6).
2 Notation and Background
We consider policy evaluation in finite state and action Markov Decision Processes (MDPs).¹ An MDP is described by a 5-tuple $(S, A, P, r, \gamma)$, where $S$ is the finite state space, $A$ is the finite action space, $P = (P(s'|s,a))_{s,s' \in S, a \in A}$ are the transition probabilities ($P(s'|s,a) \ge 0$, $\sum_{s' \in S} P(s'|s,a) = 1$, for all $s \in S$, $a \in A$), $r = (r(s,a,s'))_{s,s' \in S, a \in A}$ are the real-valued immediate rewards and $\gamma \in (0,1)$ is the discount factor. The policy to be evaluated is a mapping $\pi : S \times A \to [0,1]$. The value function of $\pi$, $V^\pi : S \to \mathbb{R}$, maps each state $s$ to a number representing the infinite-horizon expected discounted return obtained if policy $\pi$ is followed from state $s$. Formally, let $s_0 = s$ and for $t \ge 0$ let $a_t \sim \pi(s_t, \cdot)$, $s_{t+1} \sim P(\cdot|s_t, a_t)$ and $r_{t+1} = r(s_t, a_t, s_{t+1})$. Then $V^\pi(s) = E[\sum_{t=0}^{\infty} \gamma^t r_{t+1}]$. Let $R^\pi : S \to \mathbb{R}$, with $R^\pi(s) = \sum_{a \in A} \pi(s,a) \sum_{s' \in S} P(s'|s,a)\, r(s,a,s')$, and let $P^\pi : S \times S \to [0,1]$ be defined as $P^\pi(s,s') = \sum_{a \in A} \pi(s,a) P(s'|s,a)$. Assuming a canonical ordering on the elements of $S$, we can treat $V^\pi$ and $R^\pi$ as vectors in $\mathbb{R}^{|S|}$, and $P^\pi$ as a matrix in $\mathbb{R}^{|S| \times |S|}$. It is well-known that $V^\pi$ satisfies the so-called Bellman equation:
$$V^\pi = R^\pi + \gamma P^\pi V^\pi.$$
Defining the operator $T^\pi : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ as $T^\pi V = R^\pi + \gamma P^\pi V$, the Bellman equation can be written compactly as $V^\pi = T^\pi V^\pi$. To simplify the notation, from now on we will drop the superscript $\pi$ everywhere, since the policy to be evaluated will be kept fixed.
Assume that the policy to be evaluated is followed and it gives rise to the trajectory $(s_0, a_0, r_1, s_1, a_1, r_2, s_2, \ldots)$. The problem is to estimate $V$, given a finite prefix of this trajectory. More generally, we may assume that we are given an infinite sequence of 3-tuples, $(s_k, r_k, s'_k)$, that satisfies the following:

Assumption A1. $(s_k)_{k \ge 0}$ is an $S$-valued stationary Markov process, $s_k \sim d(\cdot)$, $r_k = R(s_k)$ and $s'_k \sim P(s_k, \cdot)$.
¹Under appropriate technical conditions, our results can be generalized to MDPs with infinite state spaces, but we do not address this here.
We call $(s_k, r_k, s'_k)$ the $k$-th transition. Since we assume stationarity, we will sometimes drop the index $k$ and use $(s, r, s')$ to denote a random transition. Here $d(\cdot)$ denotes the probability distribution over initial states for a transition; let $D \in \mathbb{R}^{|S| \times |S|}$ be the corresponding diagonal matrix. The problem is still to estimate $V$ given a finite number of transitions.
When the state space is large (or infinite) a function approximation method can be used to facilitate the generalization of observed transitions to unvisited or rarely visited states. In this paper we focus on methods that are smoothly parameterized with a finite-dimensional parameter vector $\theta \in \mathbb{R}^n$. We denote by $V_\theta(s)$ the value of state $s \in S$ returned by the function approximator with parameters $\theta$. The goal of policy evaluation becomes to find $\theta$ such that $V_\theta \approx V$.
3 TD Algorithms with function approximation
The classical TD(0) algorithm with function approximation (Sutton, 1988; Sutton & Barto, 1998) starts with an arbitrary value of the parameters, $\theta_0$. Upon observing the $k$-th transition, it computes the scalar-valued temporal-difference error,
$$\delta_k = r_k + \gamma V_{\theta_k}(s'_k) - V_{\theta_k}(s_k),$$
which is then used to update the parameter vector as follows:
$$\theta_{k+1} \leftarrow \theta_k + \alpha_k \delta_k \nabla V_{\theta_k}(s_k). \qquad (1)$$
Here $\alpha_k$ is a deterministic positive step-size parameter, which is typically small, or (for the purpose of convergence analysis) is assumed to satisfy the Robbins-Monro conditions: $\sum_{k=0}^{\infty} \alpha_k = \infty$, $\sum_{k=0}^{\infty} \alpha_k^2 < \infty$. We denote by $\nabla V_\theta(s) \in \mathbb{R}^n$ the gradient of $V$ w.r.t. $\theta$ at $s$.

When the TD algorithm converges, it must converge to a parameter value where, in expectation, the parameters do not change:
$$E[\delta\, \nabla V_\theta(s)] = 0, \qquad (2)$$
where $s, \delta$ are random and share the common distribution underlying $(s_k, \delta_k)$; in particular, $(s, r, s')$ are drawn as in Assumption A1 and $\delta = r + \gamma V_\theta(s') - V_\theta(s)$.
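As a concrete illustration, the sketch below implements iteration (1) for a generic differentiable approximator. The `value`/`grad_value` interface and the three-state cyclic chain are assumptions made for this demo, not part of the paper; with one-hot features the update reduces to classical tabular TD(0).

```python
import numpy as np

def td0_step(theta, s, r, s_next, value, grad_value, alpha, gamma):
    """One TD(0) update, Eq. (1)."""
    delta = r + gamma * value(theta, s_next) - value(theta, s)   # TD error
    return theta + alpha * delta * grad_value(theta, s)

# Illustrative setup: one-hot features on a 3-state deterministic cycle with
# reward 1 on the transition back to state 0 (an assumption for the demo).
phi = np.eye(3)
value = lambda th, s: th @ phi[s]
grad_value = lambda th, s: phi[s]

theta, s = np.zeros(3), 0
for _ in range(5000):
    s_next = (s + 1) % 3
    r = 1.0 if s_next == 0 else 0.0
    theta = td0_step(theta, s, r, s_next, value, grad_value, alpha=0.1, gamma=0.9)
    s = s_next
```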
However, it is well known that TD(0) may not converge; the stability of the algorithm is affected both by the actual function approximator $V_\theta$ and by the way in which transitions are sampled. Sutton et al. (2009a, 2009b) tackled this problem in the case of linear function approximation, in which $V_\theta(s) = \theta^\top \phi(s)$, where $\phi : S \to \mathbb{R}^n$, but where transitions may be sampled in an off-policy manner. From now on we use the shorthand notation $\phi = \phi(s)$, $\phi' = \phi(s')$.

Sutton et al. (2009b) rely on an error function, called mean-square projected Bellman error (MSPBE)², which has the same unique optimum as Equation (2). This function, which we denote $J$, projects the Bellman error measure, $T V_\theta - V_\theta$, onto the linear space $\mathcal{M} = \{V_\theta \mid \theta \in \mathbb{R}^n\}$ with respect to the metric $\|\cdot\|_D$. Hence, $\Pi V = \arg\min_{V' \in \mathcal{M}} \|V' - V\|_D^2$. More precisely:
$$J(\theta) = \|\Pi(T V_\theta - V_\theta)\|_D^2 = \|\Pi T V_\theta - V_\theta\|_D^2 = E[\delta\phi]^\top E[\phi\phi^\top]^{-1} E[\delta\phi], \qquad (3)$$
where $\|V\|_D^2$ is the weighted quadratic norm defined by $\|V\|_D^2 = \sum_{s \in S} d(s) V(s)^2$, and the scalar TD(0) error for a given transition $(s, r, s')$ is $\delta = r + \gamma\theta^\top\phi' - \theta^\top\phi$.
The negative gradient of the MSPBE objective function is:
$$-\tfrac{1}{2}\nabla J(\theta) = E\big[(\phi - \gamma\phi')\phi^\top\big]\, w = E[\delta\phi] - \gamma E\big[\phi'\phi^\top\big]\, w, \qquad (4)$$
where $w = E[\phi\phi^\top]^{-1} E[\delta\phi]$. Note that $\delta$ depends on $\theta$, hence $w$ depends on $\theta$. In order to develop an efficient ($O(n)$) stochastic gradient algorithm, Sutton et al. (2009a) use a weight-duplication trick. They introduce a new set of weights, $w_k$, whose purpose is to estimate $w$ for a fixed value of the $\theta$ parameter. These weights are updated on a "fast" timescale, as follows:
$$w_{k+1} = w_k + \beta_k (\delta_k - \phi_k^\top w_k)\,\phi_k. \qquad (5)$$
The parameter vector $\theta_k$ is updated on a "slower" timescale. Two update rules can be obtained, based on two slightly different calculations:
$$\theta_{k+1} = \theta_k + \alpha_k (\phi_k - \gamma\phi'_k)(\phi_k^\top w_k) \quad \text{(an algorithm called GTD2), or} \qquad (6)$$
$$\theta_{k+1} = \theta_k + \alpha_k \delta_k \phi_k - \alpha_k \gamma \phi'_k (\phi_k^\top w_k) \quad \text{(an algorithm called TDC).} \qquad (7)$$
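For reference, here is a minimal sketch of one transition of the linear GTD2 and TDC updates, Eqs. (5)-(7). Computing both increments from the current pair $(\theta_k, w_k)$ is one reasonable reading of the updates; the two-time-scale requirement is reflected in choosing $\beta_k$ larger than $\alpha_k$.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, r, alpha, beta, gamma):
    """One transition of linear GTD2, Eqs. (5) and (6)."""
    delta = r + gamma * theta @ phi_next - theta @ phi                 # TD error
    theta_new = theta + alpha * (phi - gamma * phi_next) * (phi @ w)   # Eq. (6)
    w_new = w + beta * (delta - phi @ w) * phi                         # Eq. (5)
    return theta_new, w_new

def tdc_step(theta, w, phi, phi_next, r, alpha, beta, gamma):
    """One transition of linear TDC, Eqs. (5) and (7)."""
    delta = r + gamma * theta @ phi_next - theta @ phi
    theta_new = theta + alpha * (delta * phi - gamma * phi_next * (phi @ w))  # Eq. (7)
    w_new = w + beta * (delta - phi @ w) * phi                                # Eq. (5)
    return theta_new, w_new
```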
²This error function was also described in (Antos et al., 2008), although the algorithmic issue of how to minimize it is not pursued there. Algorithmic issues in a batch setting are considered by Farahmand et al. (2009) who also study regularization.
4 Nonlinear Temporal Difference Learning
Our goal is to generalize this approach to the case in which $V_\theta$ is a smooth, nonlinear function approximator. The first step is to find a good objective function on which to do gradient descent. In the linear case, MSPBE was chosen as a projection of the Bellman error on a natural hyperplane: the subspace to which $V_\theta$ is restricted. However, in the nonlinear case, the value function is no longer restricted to a plane, but can move on a nonlinear surface. More precisely, assuming that $V_\theta$ is a differentiable function of $\theta$, $\mathcal{M} = \{V_\theta \in \mathbb{R}^{|S|} \mid \theta \in \mathbb{R}^n\}$ becomes a differentiable submanifold of $\mathbb{R}^{|S|}$. Projecting onto a nonlinear manifold is not computationally feasible; to get around this problem, we will assume that the parameter vector $\theta$ changes very little in one step (given that learning rates are usually small); in this case, the surface is locally close to linear, and we can project onto the tangent plane at the given point. We now detail this approach and show that this is indeed a good objective function.
The tangent plane $\mathcal{PM}_\theta$ of $\mathcal{M}$ at $\theta$ is the hyperplane of $\mathbb{R}^{|S|}$ that (i) passes through $V_\theta$ and (ii) is orthogonal to the normal of $\mathcal{M}$ at $\theta$. The tangent space $\mathcal{TM}_\theta$ is the translation of $\mathcal{PM}_\theta$ to the origin. Note that $\mathcal{TM}_\theta = \{\Phi_\theta a \mid a \in \mathbb{R}^n\}$, where $\Phi_\theta \in \mathbb{R}^{|S| \times n}$ is defined by $(\Phi_\theta)_{s,i} = \frac{\partial}{\partial\theta_i} V_\theta(s)$. Let $\Pi_\theta$ be the projection that projects vectors of $(\mathbb{R}^{|S|}, \|\cdot\|_D)$ to $\mathcal{TM}_\theta$. If $\Phi_\theta^\top D \Phi_\theta$ is non-singular then $\Pi_\theta$ can be written as:
$$\Pi_\theta = \Phi_\theta \big(\Phi_\theta^\top D \Phi_\theta\big)^{-1} \Phi_\theta^\top D. \qquad (8)$$
The objective function that we will optimize is:
$$J(\theta) = \|\Pi_\theta(T V_\theta - V_\theta)\|_D^2. \qquad (9)$$
This is a natural generalization of the objective function defined by (3), as the plane on which we project is parallel to the tangent plane at $\theta$. More precisely, let $\hat\Pi_\theta$ be the projection to $\mathcal{PM}_\theta$ and let $\Pi_\theta$ be the projection to $\mathcal{TM}_\theta$. Because the two hyperplanes are parallel, for any $V \in \mathbb{R}^{|S|}$, $\hat\Pi_\theta V - V_\theta = \Pi_\theta(V - V_\theta)$. In other words, projecting onto the tangent space gives exactly the same distance as projecting onto the tangent plane, while being mathematically more convenient. Fig. 1 illustrates visually this objective function.

Figure 1: The MSPBE objective for nonlinear function approximation at two points in the value function space. The figure shows a point, $V_\theta$, at which $J(\theta)$ is not 0, and a point, $V_{\theta^*}$, where $J(\theta^*) = 0$, thus $\Pi_{\theta^*} T V_{\theta^*} = V_{\theta^*}$, so this is a TD(0) solution.
We now show that $J(\theta)$ can be re-written in the same way as done in (Sutton et al., 2009b).

Lemma 1. Assume $V_\theta(s_0)$ is continuously differentiable as a function of $\theta$, for any $s_0 \in S$ s.t. $d(s_0) > 0$. Let $(s, \delta)$ be jointly distributed random variables as in Section 3 and assume that $E[\nabla V_\theta(s) \nabla V_\theta(s)^\top]$ is nonsingular. Then
$$J(\theta) = E[\delta\, \nabla V_\theta(s)]^\top\, E[\nabla V_\theta(s) \nabla V_\theta(s)^\top]^{-1}\, E[\delta\, \nabla V_\theta(s)]. \qquad (10)$$

Proof. The identity is obtained similarly to Sutton et al. (2009b), except that here $\Pi_\theta$ is expressed by (8). Details are omitted for brevity.

Note that the assumption that $E[\nabla V_\theta(s) \nabla V_\theta(s)^\top]$ is non-singular is akin to the assumption that the feature vectors are independent in the linear function approximation case. We make this assumption here for convenience; it can be lifted, but the proofs become more involved.

Corollary 1. Under the conditions of Lemma 1, $J(\theta) = 0$ if and only if $V_\theta$ satisfies (2).
This is an important corollary, because it shows that the global optima of the proposed objective
function will not modify the set of solutions that the usual TD(0) algorithm would find (if it would
indeed converge). We now proceed to compute the gradient of this objective.
Theorem 1. Assume that (i) $V_\theta(s_0)$ is twice continuously differentiable in $\theta$ for any $s_0 \in S$ s.t. $d(s_0) > 0$ and (ii) $W(\cdot)$, defined by $W(\hat\theta) = E[\nabla V_{\hat\theta}(s) \nabla V_{\hat\theta}(s)^\top]$, is non-singular in a small neighborhood of $\theta$. Let $(s, \delta)$ be jointly distributed random variables as in Section 3. Let $\phi \equiv \nabla V_\theta(s)$, $\phi' \equiv \nabla V_\theta(s')$ and
$$h(\theta, u) = -E\big[(\delta - \phi^\top u)\, \nabla^2 V_\theta(s)\, u\big], \qquad (11)$$
where $u \in \mathbb{R}^n$. Then
$$-\tfrac{1}{2}\nabla J(\theta) = -E[(\gamma\phi' - \phi)\phi^\top w] + h(\theta, w) = E[\delta\phi] - \gamma E[\phi'\phi^\top w] + h(\theta, w), \qquad (12)$$
where $w = E[\phi\phi^\top]^{-1} E[\delta\phi]$.

The main difference between Equation (12) and Equation (4), which shows the gradient for the linear case, is the appearance of the term $h(\theta, w)$, which involves second-order derivatives of $V_\theta$ (which are zero when $V_\theta$ is linear in $\theta$).
Proof. The conditions of Lemma 1 are satisfied, so (10) holds. Denote $\nabla_i = \frac{\partial}{\partial\theta_i}$. From its definition and the assumptions, $W(u)$ is a symmetric, positive definite matrix, so $\frac{d}{du}(W^{-1})|_{u=\theta} = -W^{-1}(\theta)\,\big(\frac{d}{du}W|_{u=\theta}\big)\,W^{-1}(\theta)$, where we use the assumption that $\frac{d}{du}W$ exists at $\theta$ and $W^{-1}$ exists in a small neighborhood of $\theta$. From this identity, we have:
$$
\begin{aligned}
-\tfrac{1}{2}[\nabla J(\theta)]_i
&= -(\nabla_i E[\delta\phi])^\top E[\phi\phi^\top]^{-1} E[\delta\phi] - \tfrac{1}{2} E[\delta\phi]^\top \nabla_i\big(E[\phi\phi^\top]^{-1}\big) E[\delta\phi] \\
&= -(\nabla_i E[\delta\phi])^\top E[\phi\phi^\top]^{-1} E[\delta\phi] + \tfrac{1}{2} E[\delta\phi]^\top E[\phi\phi^\top]^{-1} \big(\nabla_i E[\phi\phi^\top]\big) E[\phi\phi^\top]^{-1} E[\delta\phi] \\
&= -E[\nabla_i(\delta\phi)]^\top \big(E[\phi\phi^\top]^{-1} E[\delta\phi]\big) + \tfrac{1}{2} \big(E[\phi\phi^\top]^{-1} E[\delta\phi]\big)^\top E[\nabla_i(\phi\phi^\top)]\, \big(E[\phi\phi^\top]^{-1} E[\delta\phi]\big).
\end{aligned}
$$
The interchange between the gradient and expectation is possible here because of assumptions (i) and (ii) and the fact that $S$ is finite. Now consider the identity
$$\tfrac{1}{2}\, x^\top \nabla_i(\phi\phi^\top)\, x = \phi^\top x\, (\nabla_i \phi)^\top x,$$
which holds for any vector $x \in \mathbb{R}^n$. Hence, using the definition of $w$,
$$
\begin{aligned}
-\tfrac{1}{2}[\nabla J(\theta)]_i
&= -E[\nabla_i(\delta\phi)]^\top w + \tfrac{1}{2}\, w^\top E[\nabla_i(\phi\phi^\top)]\, w \\
&= -E[(\nabla_i\delta)\,\phi^\top w] - E[\delta\,(\nabla_i\phi)^\top w] + E[\phi^\top w\,(\nabla_i\phi)^\top w].
\end{aligned}
$$
Using $\nabla\delta = \gamma\phi' - \phi$ and $\nabla\phi^\top = \nabla^2 V_\theta(s)$, we get
$$-\tfrac{1}{2}\nabla J(\theta) = -E[(\gamma\phi' - \phi)\phi^\top w] - E[(\delta - \phi^\top w)\,\nabla^2 V_\theta(s)\, w].$$
Finally, observe that
$$E[(\gamma\phi' - \phi)\phi^\top w] = E[(\gamma\phi' - \phi)\phi^\top]\,\big(E[\phi\phi^\top]^{-1}E[\delta\phi]\big) = -E[\delta\phi] + \gamma E[\phi'\phi^\top w],$$
which concludes the proof.
Theorem 1 suggests straightforward generalizations of GTD2 and TDC (cf. Equations (6) and (7)) to the nonlinear case. Weight $w_k$ is updated as before on a "faster" timescale:
$$w_{k+1} = w_k + \beta_k (\delta_k - \phi_k^\top w_k)\,\phi_k. \qquad (13)$$
The parameter vector $\theta_k$ is updated on a "slower" timescale, either according to
$$\theta_{k+1} = \Gamma\Big(\theta_k + \alpha_k\big\{(\phi_k - \gamma\phi'_k)(\phi_k^\top w_k) - h_k\big\}\Big) \quad \text{(non-linear GTD2)}, \qquad (14)$$
or, according to
$$\theta_{k+1} = \Gamma\Big(\theta_k + \alpha_k\big\{\delta_k\phi_k - \gamma\phi'_k(\phi_k^\top w_k) - h_k\big\}\Big) \quad \text{(non-linear TDC)}, \qquad (15)$$
where
$$h_k = (\delta_k - \phi_k^\top w_k)\,\nabla^2 V_{\theta_k}(s_k)\, w_k. \qquad (16)$$
Besides $h_k$, the only new ingredient compared to the linear case is $\Gamma : \mathbb{R}^n \to \mathbb{R}^n$, a mapping that projects its argument into an appropriately chosen compact set $C$ with a smooth boundary. The purpose of this projection is to prevent the parameters from diverging in the initial phase of the algorithm, which could happen due to the presence of the nonlinearities in the algorithm. Projection is a common technique for stabilizing the transient behavior of stochastic approximation algorithms (see, e.g., Kushner & Yin, 2003). In practice, if one selects $C$ large enough so that it contains the set of possible solutions $U = \{\theta \mid E[\delta\, \nabla V_\theta(s)] = 0\}$ (by using known bounds on the size of the rewards and on the derivative of the value function), it is very likely that no projections will take place at all during the execution of the algorithm. We expect this to happen frequently in practice: the main reason for the projection is to facilitate convergence analysis.
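Putting Eqs. (13), (15) and (16) together, one nonlinear TDC transition can be sketched as below. The Euclidean-ball choice of the compact set $C$ (and hence of $\Gamma$) and the generic `value`/`grad_value`/`hess_vec` interface are illustrative assumptions; an exact $O(n)$ `hess_vec` is sketched after the next paragraph.

```python
import numpy as np

def project(theta, radius=1e3):
    """Gamma: projection onto the compact set C, taken here to be a Euclidean
    ball of the given radius (an illustrative choice of C)."""
    nrm = np.linalg.norm(theta)
    return theta if nrm <= radius else theta * (radius / nrm)

def nonlinear_tdc_step(theta, w, s, r, s_next, value, grad_value, hess_vec,
                       alpha, beta, gamma):
    """One transition of nonlinear TDC, Eqs. (13), (15) and (16).
    hess_vec(theta, s, w) must return the Hessian-vector product
    (second derivative of V_theta(s) w.r.t. theta) applied to w."""
    phi = grad_value(theta, s)
    phi_next = grad_value(theta, s_next)
    delta = r + gamma * value(theta, s_next) - value(theta, s)
    h = (delta - phi @ w) * hess_vec(theta, s, w)               # Eq. (16)
    theta_new = project(theta + alpha * (delta * phi
                                         - gamma * phi_next * (phi @ w)
                                         - h))                   # Eq. (15)
    w_new = w + beta * (delta - phi @ w) * phi                   # Eq. (13)
    return theta_new, w_new
```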
Let us now analyze the computational complexity per update. Assume that $V_\theta(s)$ and its gradient can each be computed in $O(n)$ time, the usual case for approximators of interest (e.g., neural networks). Equation (16) also requires computing the product of the Hessian of $V_\theta(s)$ and $w$. Pearlmutter (1994) showed that this can be computed exactly in $O(n)$ time. The key is to note that $\nabla^2 V_{\theta_k}(s_k)\, w_k = \nabla\big(\nabla V_{\theta_k}(s_k)^\top w_k\big)$, because $w_k$ does not depend on $\theta_k$. The scalar term $\nabla V_{\theta_k}(s_k)^\top w_k$ can be computed in $O(n)$ and its gradient, which is a vector, can also be computed in $O(n)$. Hence, the computation time per update for the proposed algorithms is linear in the number of parameters of the function approximator (just like in TD(0)).
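To illustrate Pearlmutter's trick, the sketch below forward-propagates a directional perturbation through a one-hidden-layer network and returns the exact Hessian-vector product $\nabla^2 V_\theta(s)\, w = \nabla(\nabla V_\theta(s)^\top w)$ in time linear in the number of parameters. The specific architecture is an illustrative assumption; modern automatic-differentiation packages provide the same operation for general smooth approximators.

```python
import numpy as np

def mlp_value_grad(theta, x):
    """V_theta(x) = u . tanh(W x + b) for a one-hidden-layer network
    (an illustrative architecture).  Returns V and the gradient w.r.t.
    theta = (W, b, u)."""
    W, b, u = theta
    h = np.tanh(W @ x + b)
    gW = np.outer(u * (1 - h**2), x)   # dV/dW
    gb = u * (1 - h**2)                # dV/db
    gu = h                             # dV/du
    return u @ h, (gW, gb, gu)

def mlp_hess_vec(theta, x, v):
    """Exact Hessian-vector product via Pearlmutter's R-operator: forward-
    propagate the perturbation v = (vW, vb, vu) alongside the gradient
    computation.  Cost is a small constant times one gradient evaluation."""
    W, b, u = theta
    vW, vb, vu = v
    z = W @ x + b
    h = np.tanh(z)
    Rz = vW @ x + vb                        # R{z}, directional derivative of z
    Rh = (1 - h**2) * Rz                    # R{h}
    Rgb = vu * (1 - h**2) - 2 * u * h * Rh  # R{dV/db}
    RgW = np.outer(Rgb, x)                  # R{dV/dW}
    Rgu = Rh                                # R{dV/du}
    return (RgW, Rgb, Rgu)

# Sanity check idea: (grad at theta + eps*v minus grad at theta) / eps
# approaches mlp_hess_vec(theta, x, v) as eps -> 0; finite differences are
# approximate, while the R-operator result is exact.
```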
5 Convergence Analysis
Given the compact set $C \subset \mathbb{R}^n$, let $\mathcal{C}(C)$ be the space of $C \to \mathbb{R}^n$ continuous functions. Given projection $\Gamma$ onto $C$, let operator $\hat\Gamma : \mathcal{C}(C) \to \mathcal{C}(\mathbb{R}^n)$ be
$$\hat\Gamma v(\theta) = \lim_{0 < \varepsilon \to 0} \frac{\Gamma\big(\theta + \varepsilon\, v(\theta)\big) - \theta}{\varepsilon}.$$
By assumption, $\Gamma(\theta) = \arg\min_{\theta' \in C} \|\theta - \theta'\|$ and the boundary of $C$ is smooth, so $\hat\Gamma$ is well defined. In particular, $\hat\Gamma v(\theta) = v(\theta)$ when $\theta \in C^\circ$; otherwise, if $\theta \in \partial C$, $\hat\Gamma v(\theta)$ is the projection of $v(\theta)$ to the tangent space of $\partial C$ at $\theta$. Consider the following ODE:
$$\dot\theta = \hat\Gamma\big(-\tfrac{1}{2}\nabla J\big)(\theta), \qquad \theta(0) \in C. \qquad (17)$$
Let $K$ be the set of all asymptotically stable equilibria of (17). By the definitions, $K \subseteq C$. Furthermore, $U \cap C \subseteq K$.
The next theorem shows that under some technical conditions, the iterates produced by nonlinear GTD2 converge to $K$ with probability one.

Theorem 2 (Convergence of nonlinear GTD2). Let $(s_k, r_k, s'_k)_{k \ge 0}$ be a sequence of transitions that satisfies A1. Consider the nonlinear GTD2 updates (13), (14), with positive step-size sequences that satisfy $\sum_{k=0}^\infty \alpha_k = \sum_{k=0}^\infty \beta_k = \infty$, $\sum_{k=0}^\infty \alpha_k^2, \sum_{k=0}^\infty \beta_k^2 < \infty$ and $\frac{\alpha_k}{\beta_k} \to 0$, as $k \to \infty$. Assume that for any $\theta \in C$ and $s_0 \in S$ s.t. $d(s_0) > 0$, $V_\theta(s_0)$ is three times continuously differentiable. Further assume that for each $\theta \in C$, $E[\phi_\theta \phi_\theta^\top]$ is nonsingular. Then $\theta_k \to K$, with probability one, as $k \to \infty$.
Proof. Let $(s, r, s')$ be a random transition. Let $\phi_\theta = \nabla V_\theta(s)$, $\phi'_\theta = \nabla V_\theta(s')$, $\phi_k = \nabla V_{\theta_k}(s_k)$, and $\phi'_k = \nabla V_{\theta_k}(s'_k)$. We begin by rewriting the updates (13)-(14) as follows:
$$w_{k+1} = w_k + \beta_k\big(f(\theta_k, w_k) + M_{k+1}\big), \qquad (18)$$
$$\theta_{k+1} = \Gamma\big(\theta_k + \alpha_k(g(\theta_k, w_k) + N_{k+1})\big), \qquad (19)$$
where
$$f(\theta_k, w_k) = E[\delta_k \phi_k \mid \theta_k] - E[\phi_k \phi_k^\top \mid \theta_k]\, w_k, \qquad M_{k+1} = (\delta_k - \phi_k^\top w_k)\phi_k - f(\theta_k, w_k),$$
$$g(\theta_k, w_k) = E\big[(\phi_k - \gamma\phi'_k)\phi_k^\top w_k - h_k \,\big|\, \theta_k, w_k\big], \qquad N_{k+1} = \big((\phi_k - \gamma\phi'_k)\phi_k^\top w_k - h_k\big) - g(\theta_k, w_k).$$
We need to verify that there exists a compact set $B \subset \mathbb{R}^{2n}$ such that (a) the functions $f(\theta, w)$, $g(\theta, w)$ are Lipschitz continuous over $B$, (b) $(M_k, \mathcal{G}_k)$, $(N_k, \mathcal{G}_k)$, $k \ge 1$ are martingale difference sequences, where $\mathcal{G}_k = \sigma(r_i, \theta_i, w_i, s_i, i \le k;\; s'_i, i < k)$, $k \ge 1$ are increasing sigma fields, (c) $\{(w_k(\theta), \theta)\}$, with $w_k(\theta)$ obtained as $\delta_k(\theta) = r_k + \gamma V_\theta(s'_k) - V_\theta(s_k)$, $\phi_k(\theta) = \nabla V_\theta(s_k)$,
$$w_{k+1}(\theta) = w_k(\theta) + \beta_k\big(\delta_k(\theta) - \phi_k(\theta)^\top w_k(\theta)\big)\phi_k(\theta),$$
almost surely stays in $B$ for any choice of $(w_0(\theta), \theta) \in B$, and (d) $\{(w, \theta_k)\}$ almost surely stays in $B$ for any choice of $(w, \theta_0) \in B$. From these and the conditions on the step-sizes, using standard arguments (cf. Theorem 2 of Sutton et al. (2009b)), it follows that $\theta_k$ converges almost surely to the set of asymptotically stable equilibria of $\dot\theta = \hat\Gamma F(\theta)$, $\theta(0) \in C$, where $F(\theta) = g(\theta, w_\theta)$. Here, for $\theta \in C$ fixed, $w_\theta$ is the (unique) equilibrium point of
$$\dot w = E[\delta_\theta \phi_\theta] - E[\phi_\theta \phi_\theta^\top]\, w, \qquad (20)$$
where $\delta_\theta = r + \gamma V_\theta(s') - V_\theta(s)$. Clearly, $w_\theta = E[\phi_\theta \phi_\theta^\top]^{-1} E[\delta_\theta \phi_\theta]$, which exists by assumption. Then by Theorem 1 it follows that $F(\theta) = -\tfrac{1}{2}\nabla J(\theta)$. Hence, the statement will follow once (a)-(d) are verified.

Note that (a) is satisfied because $V_\theta$ is three times continuously differentiable. For (b), we need to verify that for any $k \ge 0$, $E[M_{k+1} \mid \mathcal{G}_k] = 0$ and $E[N_{k+1} \mid \mathcal{G}_k] = 0$, which in fact follow from the definitions. Condition (c) follows since, by a standard argument (e.g., Borkar & Meyn, 2000), $w_k(\theta)$ converges to $w_\theta$, which by assumption stays bounded if $\theta$ comes from a bounded set. For condition (d), note that $\{\theta_k\}$ is uniformly bounded since for any $k \ge 0$, $\theta_k \in C$, and by assumption $C$ is a compact set.
Theorem 3 (Convergence of nonlinear TDC). Under the same conditions as in Theorem 2, the
iterates computed via (13), (15) satisfy $\theta_k \to K$, with probability one, as $k \to \infty$.
The proof follows in a similar manner as that of Theorem 2 and is omitted for brevity.
6 Empirical results
To illustrate the convergence properties of the algorithms, we applied them to the "spiral" counterexample of Tsitsiklis & Van Roy (1997), originally used to show the divergence of TD(0) with nonlinear function approximation. The Markov chain with 3 states is shown in the left panel of Figure 2. The reward is always zero and the discount factor is $\gamma = 0.9$. The value function has a single parameter, $\theta$, and takes the nonlinear spiral form
$$V_\theta(s) = \big(a(s)\cos(\hat\lambda\theta) - b(s)\sin(\hat\lambda\theta)\big)\, e^{\epsilon\theta}.$$
The true value function is $V = (0, 0, 0)^\top$, which is achieved as $\theta \to -\infty$. Here we used $V_0 = (100, -70, -30)^\top$, $a = V_0$, $b = (23.094, -98.15, 75.056)^\top$, $\hat\lambda = 0.866$ and $\epsilon = 0.05$. Note that this is a degenerate example, in which our theorems do not apply, because the optimal parameter values are infinite. Hence, we run our algorithms without a projection step. We also use constant learning rates, in order to facilitate gradient descent through an error surface which is essentially flat. For TDC we used $\alpha = 0.5$, $\beta = 0.05$, and for GTD2, $\alpha = 0.8$ and $\beta = 0.1$. For TD(0) we used $\alpha = 2 \times 10^{-3}$ (as argued by Tsitsiklis & Van Roy (1997), tuning the step-size does not help with the divergence problem). All step sizes are then normalized by $\|\nabla V_\theta^\top D \frac{d}{d\theta} V_\theta\|$. The graph shows the performance measure, $\sqrt{J}$, as a function of the number of updates (we used expected updates for all algorithms). GTD2 and TDC converge to the correct solution, while TD(0) diverges. We note that convergence happens despite the fact that this example is outside the scope of the theory.
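The spiral construction is easy to reproduce from the constants above; a sketch with the expected TD(0) update follows. The transition matrix (each state jumping to one of the other two with probability 1/2) is an assumption for the demo, since the exact chain appears only in Figure 2, so the qualitative behavior may differ in detail from the figure.

```python
import numpy as np

gamma, lam_hat, eps = 0.9, 0.866, 0.05
V0 = np.array([100.0, -70.0, -30.0])
a, b = V0, np.array([23.094, -98.15, 75.056])
P = (np.ones((3, 3)) - np.eye(3)) / 2   # assumed chain: jump to either other state
d = np.full(3, 1.0 / 3.0)               # its stationary distribution (uniform)

def V(th):                               # spiral value function, one parameter
    return (a * np.cos(lam_hat * th) - b * np.sin(lam_hat * th)) * np.exp(eps * th)

def dV(th):                              # dV/dtheta, componentwise over states
    c, s = np.cos(lam_hat * th), np.sin(lam_hat * th)
    return (lam_hat * (-a * s - b * c) + eps * (a * c - b * s)) * np.exp(eps * th)

def J(th):                               # MSPBE (10), scalar-parameter case
    delta = gamma * P @ V(th) - V(th)    # rewards are zero
    g = dV(th)
    return np.sum(d * delta * g) ** 2 / np.sum(d * g * g)

theta = 0.0
for _ in range(2000):                    # expected TD(0) updates (step-size
    delta = gamma * P @ V(theta) - V(theta)          # normalization omitted)
    theta += 2e-3 * np.sum(d * delta * dV(theta))    # |theta| grows: divergence
```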
To assess the performance of the new algorithms on a large-scale problem, we used them to learn an evaluation function in 9x9 computer Go. We used a version of RLGO (Silver, 2009), in which a logistic function is fit to evaluate the probability of winning from a given position. Positions were described using 969,894 binary features corresponding to all possible shapes in every 3x3, 2x2, and 1x1 region of the board. Using weight sharing to take advantage of symmetries, the million features were reduced to a parameter vector of $n = 63{,}303$ components. Experience was generated by self-play, with actions chosen uniformly randomly among the legal moves. All rewards were zero, except upon winning the game, when the reward was 1. We applied four algorithms to this problem: TD(0),
the proposed algorithms (GTD2 and TDC) and residual gradient (RG).

Figure 2: Empirical evaluation results. Left panel: example MDP from Tsitsiklis & Van Roy (1997), showing $\sqrt{J}$ versus time step for GTD2, TDC, RG, and TD. Right panel: RMSE versus learning rate $\alpha$ for 9x9 computer Go.

In the experiments, RG was
run with only one sample.³ In each run, $\theta$ was initialized to random values uniformly distributed in $[-0.1, 0.1]$; for GTD2 and TDC, the second parameter vector, $w$, was initialized to 0. Training then proceeded for 5000 complete games, after which $\theta$ was frozen. This problem is too large to compute the objective function $J$. Instead, to assess the quality of the solutions obtained, we estimated the average prediction error of each algorithm. More precisely, we generated 2500 test games; for every state occurring in a game, we computed the squared error between its predicted value and the actual return that was obtained in that game. We then computed the root of the mean-squared error, averaged over all time steps. The right panel in Figure 2 plots this measure over a range of values of the learning rate $\alpha$. The results are averages over 50 independent runs. For TDC and GTD2 we used several values of the $\beta$ parameter, which generate the different curves. As was noted in previous empirical work, TD provides slightly better estimates than the RG algorithm. TDC's performance is very similar to TD, for a wide range of parameter values. GTD2 is slightly worse. These results are very similar in flavor to those obtained in Sutton et al. (2009b) using the same domain, but with linear function approximation.
7 Conclusions and future work
In this paper, we solved a long-standing open problem in reinforcement learning, by establishing a
family of temporal-difference learning algorithms that converge with arbitrary differentiable function approximators (including neural networks). The algorithms perform gradient descent on a natural objective function, the projected Bellman error. The local optima of this function coincide with
solutions that could be obtained by TD(0). Of course, TD(0) need not converge with non-linear
function approximation. Our algorithms are on-line, incremental and their computational cost per
update is linear in the number of parameters. Our theoretical results guarantee convergence to a
local optimum, under standard technical assumptions. Local optimality is the best one can hope
for, since nonlinear function approximation creates non-convex optimization problems. The early
empirical results obtained for computer Go are very promising. However, more practical experience
with these algorithms is needed. We are currently working on extensions of these algorithms using
eligibility traces, and on using them for solving control problems.
Acknowledgments
This research was supported in part by NSERC, iCore, AICML and AIF. We thank the three anonymous reviewers for their useful comments on previous drafts of this paper.
³Unlike TD, RG would require two independent transition samples from a given state. This requires knowledge about the model of the environment, which is not always available. In the experiments only one transition sample was used, following Baird's original recommendation.
References
Antos, A., Szepesvári, Cs. & Munos, R. (2008). Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning 71: 89-129.
Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning, pp. 30-37. Morgan Kaufmann.
Borkar, V. S. & Meyn, S. P. (2000). The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization 38(2): 447-469.
Boyan, J. A. & Moore, A. W. (1995). Generalization in reinforcement learning: Safely approximating the value function. In Advances in Neural Information Processing Systems 7, pp. 369-376. MIT Press.
Crites, R. H. & Barto, A. G. (1995). Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8, pp. 1017-1023. MIT Press.
Farahmand, A. M., Ghavamzadeh, M., Szepesvári, Cs. & Mannor, S. (2009). Regularized policy iteration. In Advances in Neural Information Processing Systems 21, pp. 441-448.
Kushner, H. J. & Yin, G. G. (2003). Stochastic Approximation Algorithms and Applications. Second Edition, Springer-Verlag.
Pearlmutter, B. A. (1994). Fast exact multiplication by the Hessian. Neural Computation 6(1), pp. 147-160.
Silver, D. (2009). Reinforcement Learning and Simulation-Based Search in Computer Go. University of Alberta Ph.D. thesis.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning 3:9-44.
Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
Sutton, R. S., Szepesvári, Cs. & Maei, H. R. (2009a). A convergent O(n) algorithm for off-policy temporal-difference learning with linear function approximation. In Advances in Neural Information Processing Systems 21, pp. 1609-1616. MIT Press.
Sutton, R. S., Maei, H. R., Precup, D., Bhatnagar, S., Silver, D., Szepesvári, Cs. & Wiewiora, E. (2009b). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, pp. 993-1000. Omnipress.
Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning 8: 257-277.
Tsitsiklis, J. N. & Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control 42:674-690.
Zhang, W. & Dietterich, T. G. (1995). A reinforcement learning approach to job-shop scheduling. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pp. 1114-1120. AAAI Press.
Using Genetic Algorithms to Improve
Pattern Classification Performance
Eric I. Chang and Richard P. Lippmann
Lincoln Laboratory, MIT
Lexington, MA 02173-9108
Abstract
Genetic algorithms were used to select and create features and to select
reference exemplar patterns for machine vision and speech pattern classification tasks. For a complex speech recognition task, genetic algorithms
required no more computation time than traditional approaches to feature
selection but reduced the number of input features required by a factor of
five (from 153 to 33 features). On a difficult artificial machine-vision task,
genetic algorithms were able to create new features (polynomial functions
of the original features) which reduced classification error rates from 19%
to almost 0%. Neural net and k nearest neighbor (KNN) classifiers were
unable to provide such low error rates using only the original features. Genetic algorithms were also used to reduce the number of reference exemplar
patterns for a KNN classifier. On a 338 training pattern vowel-recognition
problem with 10 classes, genetic algorithms reduced the number of stored
exemplars from 338 to 43 without significantly increasing classification error rate. In all applications, genetic algorithms were easy to apply and
found good solutions in many fewer trials than would be required by exhaustive search. Run times were long, but not unreasonable. These results
suggest that genetic algorithms are becoming practical for pattern classification problems as faster serial and parallel computers are developed.
1 INTRODUCTION
Feature selection and creation are two of the most important and difficult tasks in
the field of pattern classification. Good features improve the performance of both
conventional and neural network pattern classifiers. Exemplar selection is another
task that can reduce the memory and computation requirements of a KNN classifier.
These three tasks require a search through a space which is typically so large that
exhaustive search is impractical. The purpose of this research was to explore the
usefulness of Genetic search algorithms for these tasks. Details concerning this
research are available in (Chang, 1990).
Genetic algorithms depend on the generation-by-generation development of possible
solutions, with selection eliminating bad solutions and allowing good solutions to
replicate and be modified. There are four stages in the genetic search process: creation, selection, crossover, and mutation. In the creation stage, a group of possible
solutions to a search problem is randomly generated. In most genetic algorithm
applications, each solution is a bit string with each bit initially randomly set to 1
or 0.
After the creation stage, each solution is evaluated using a fitness function and assigned a fitness value. The fitness function must be tightly linked to the eventual
goal. The usual criterion for success in pattern classification tasks is the percentage
of patterns classified correctly on test data. This was approximated in all experiments by using a leave-one-out cross-validation measure of classification accuracy
obtained using training data and a KNN classifier. After solutions are assigned
fitness values, a selection stage occurs, where the fitter solutions are given more
chance to reproduce. This gives the fitter solutions more and more influence over
the changes in the population so that eventually fitter solutions dominate.
A crossover operation occurs after two fitter solutions (called parent solutions) have
been selected . During crossover, portions of the parent solutions are exchanged.
This operation is performed in the hope of generating new solutions which will
contain the useful parts of both parent solutions and be even better solutions.
Crossover is responsible for generating most of the new solutions in genetic search.
When all solutions are similar, the crossover operation loses its ability to generate
new solutions since exchanging portions of identical solutions generates the same
solutions. Mutation (randomly altering bits) is performed on each new solution
to prevent the whole population from becoming similar. However, mutation does
not generally improve solutions by itself. The combination of both crossover and
mutation is required for good performance.
There are many varieties of genetic algorithms. A relatively new incremental static
population model proposed by (Whitley, 1989) was used in all experiments. In
the regular genetic algorithm model, the whole population undergoes selection and
reproduction, with a large portion of the strings replaced by new strings. It is thus
possible for good strings to be deleted from the population. In the static population
model, the population is ranked according to fitness. At each recombination cycle,
two strings are picked as parents according to their fitness values, and two new
strings are produced. These two new strings replace the lowest ranked strings in
the original population. This model automatically protects the better strings in the
population.
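A sketch of this static-population (steady-state) scheme follows. Population size, mutation rate, one-point crossover, and the rank-proportional selection weights are illustrative assumptions; the text above specifies only that two parents are picked by fitness rank and that the offspring replace the lowest-ranked strings.

```python
import numpy as np

def genetic_search(fitness, n_bits, pop_size=50, n_cycles=10000, p_mut=0.01, seed=0):
    """Steady-state GA: each recombination cycle picks two parents by fitness
    rank, applies one-point crossover and mutation, and the two offspring
    replace the lowest-ranked strings in the population."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    fit = np.array([fitness(s) for s in pop])
    for _ in range(n_cycles):
        ranks = np.argsort(np.argsort(-fit))            # 0 = currently best
        weights = (pop_size - ranks).astype(float)      # fitter -> more offspring
        i, j = rng.choice(pop_size, size=2, replace=False, p=weights / weights.sum())
        cut = rng.integers(1, n_bits)                   # one-point crossover
        for child in (np.concatenate([pop[i][:cut], pop[j][cut:]]),
                      np.concatenate([pop[j][:cut], pop[i][cut:]])):
            child[rng.random(n_bits) < p_mut] ^= 1      # mutation
            worst = np.argmin(fit)                      # replace worst string
            pop[worst], fit[worst] = child, fitness(child)
    best = int(np.argmax(fit))
    return pop[best], fit[best]
```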
2 FEATURE SELECTION
Adding more input features or input dimensions to a pattern classifier often degrades
rather than improves performance. This is because as the number of input features
increases, the number of training patterns required to maintain good generalization
and adequately describe class distributions also often increases rapidly. Performance
with limited training data may thus degrade.

Figure 1: Progress of a genetic algorithm search for those features from an original 153 features that provide high accuracy in "E" set classification for one female talker: (A) classification error rate and (B) number of features used.

Feature selection (dimensionality
reduction) is often required when training data is limited to select the subset of
features that best separates classes. It can improve performance and/or reduce
computation requirements.
Feature selection is difficult because the number of possible combinations of features
grows exponentially with the number of original features . For a moderate size
problem with 64 features, there are 264 possible subsets of features. Clearly an
exhaustive evaluation of each possible combination is impossible. Frequently, finding
a near optimal feature subset is adequate. An overview of many different approaches
to feature selection is available in (Siedlecki and Sklansky, 1988).
This work applies genetic search techniques to the problem of feature selection.
Every feature set is represented by a bit string with d bits, where d is the maximum
input dimension. Each bit determines whether a feature is used. The accuracy of
a KNN classifier with the leave-one-out approach to error rate estimation was used
as an evaluation function as described above. A KNN classifier has the advantage
of requiring no training time and providing results directly related to performance.
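A sketch of such a fitness function follows: leave-one-out accuracy of a k-nearest-neighbor classifier restricted to the features whose bits are set. Euclidean distance and majority voting are assumptions, since the text does not specify the metric; labels are assumed to be integers 0..C-1.

```python
import numpy as np

def loo_knn_fitness(bits, X, y, k=1):
    """Leave-one-out accuracy of a k-NN classifier restricted to the features
    whose bits are set (X: patterns x features, y: integer labels)."""
    feats = np.flatnonzero(bits)
    if feats.size == 0:
        return 0.0
    Z = X[:, feats]
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)                                 # exclude held-out pattern
    correct = 0
    for i in range(len(y)):
        nn = np.argsort(D[i])[:k]                               # k nearest neighbors
        correct += int(np.bincount(y[nn]).argmax() == y[i])     # majority vote
    return correct / len(y)
```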
"E-set" words (9 letters from the English alphabet that rhyme with the letter "E")
taken from a Texas Instruments 46-word speech database were used for experiments.
Waveforms were spectrally analyzed and encoded with a hidden Markov Model
speech recognizer as described in (Huang and Lippmann, 1990). Features were
the average log likelihood distance and duration from all the hidden Markov nodes
determined using Viterbi decoding. The final output of the hidden Markov model
was also included in the feature set. This resulted in 17 features per word class.
The 9 different word classes result in a total of 153 features. For each talker there
were 10 patterns in the training set and 16 patterns in the testing set per word
class. All experiments were talker dependent.
An experiment was performed using the data from one female talker. More conventional sequential forward and backward searches for the best feature subset were
first performed. The total number of KNN evaluations for each sequential search
was 11,781. The best feature subset found with sequential searches contained 33
features and the classification error rates were 2.2% and 18.5% on training and testing sets respectively. Genetic algorithms provided a lower error rate on the testing
set with fewer than half as many features. Fig. 1 shows the progress of the genetic
search. The bottom plot shows that near recombination 12,100, the number of
features used was reduced to 15. The top plot shows that classification error rates
were 3.3% and 17.5% for the training and testing sets respectively.
3 FEATURE CREATION
One of the most successful techniques for improving pattern classification performance with limited training data is to find more effective input features. An approach to creating more effective input features is to search through new features
that are polynomial functions of the original features. This difficult search problem
was explored using genetic algorithms. The fitness function was again determined
using the performance of a KNN classifier with leave-one-out testing.
Polynomial functions of the original features taken two at a time were created as
new features. New features were represented by a bit string consisting of substrings
identifying the original features used, their exponents, and the operation to be
applied between the original features. A gradual buildup of feature complexity over
multiple stages was enforced by limiting the complexity of the created features.
Once the accuracy of a KNN classifier had converged at one stage, another stage
was begun where more complex high order features were allowed. This improves
generalization by creating simple features first and by creating more complicated
features only when simpler features are not satisfactory.
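One way to realize this encoding is sketched below: a created feature is decoded from a bit substring giving two source-feature indices, their exponents, and the operation. The exact field widths, exponent range, and operation set are illustrative assumptions; the text states only that substrings identify the features used, their exponents, and the operation applied between them.

```python
import numpy as np

def decode_feature(bits, n_feats):
    """Decode one created feature from a bit substring: two source-feature
    indices, two exponents (here 1 or 2), and one operation bit (multiply or
    divide).  This field layout is an illustrative assumption."""
    idx_bits = max(1, int(np.ceil(np.log2(n_feats))))
    i = int("".join(str(int(b)) for b in bits[:idx_bits]), 2) % n_feats
    j = int("".join(str(int(b)) for b in bits[idx_bits:2 * idx_bits]), 2) % n_feats
    e1 = 1 + int(bits[2 * idx_bits])
    e2 = 1 + int(bits[2 * idx_bits + 1])
    multiply = int(bits[2 * idx_bits + 2]) == 0
    def feature(X):                     # X: patterns x original features
        num, den = X[:, i] ** e1, X[:, j] ** e2
        return num * den if multiply else num / np.where(den == 0, 1e-12, den)
    return feature
```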
A parallel vector problem, where the input data consists of $\Delta x, \Delta y$ of two vectors, was used. Parallel vectors are identified as one class while nonparallel vectors are identified as another. There were 300 training patterns and 100 testing patterns. During an experiment, the ratio features $\Delta x_2/\Delta x_1$ and $\Delta y_2/\Delta y_1$ were first found near recombination 700. After the error rate had not changed for 2,000 recombinations, the complexity of the created features was allowed to increase at recombination 2,700. At this point the two ratio features and the four original features were treated as if they were six original features. The final feature found after this point was $(\Delta x_2 \cdot \Delta y_2)/(\Delta x_1 \cdot \Delta y_1)$. Classification error rates for the training set and the
testing set decreased to 0% with this feature. The classification error rate on the
testing set using the original four features was 19% using a KNN classifier. Tests
using the original features with two more complex classifiers also used in (Ng and Lippmann, 1991) resulted in error rates of 13.3% for a GMDH classifier and 8.3%
for a radial basis function classifier. Feature creation with a simple KNN classifier
was thus more effective than the use of more complex classifiers with the original
features.
Figure 2: Decision boundaries of a nearest neighbor classifier for the vowel problem using all 338 original exemplars (axes: formant frequencies F1 and F2, in Hz).
4 EXEMPLAR SELECTION
The performance of a KNN classifier typically improves as more training patterns
are stored. This often makes KNN classifiers impractical because both classification time and memory requirements increase linearly with the number of training
patterns. Previous approaches to reducing the classification time and memory requirements of KNN classifiers include using KD trees and condensed k nearest
neighbor (CKNN) classifiers as described in (Ng and Lippmann, 1991). KD trees,
however, are effective only if the input dimensionality is low, and CKNN classifiers
use a heuristic that may not result in minimal memory requirements.

Figure 3: Decision boundaries of a nearest neighbor classifier for the vowel problem using 43 exemplars selected using genetic search (axes: formant frequencies F1 and F2, in Hz).

An alternate
approach is to use genetic algorithms.
Genetic algorithms were used to manipulate bit strings identifying useful exemplar
patterns. A bonus proportional to the number of unused exemplars was given to
strings with classifier accuracy above a user-preset threshold. The value k was also
selected by genetic algorithms in some experiments. The k value was encoded with
three bits which were attached to the end of each string. Exemplar selection was
tested with the vowel database used by (Ng and Lippmann, 1991). There were
ten classes, each class being a word starting with "h" and ending with "d", with a
vowel in between ("head" , "hid" , "hod" , "had" , "hawed" , "heard" , "heed" , "hud" ,
"who'd", and "hood"). A total of 338 patterns was used as the training set and
333 patterns were used as the testing set. Each pattern consisted of two features
which were the two formant frequencies of the vowel determined by spectrographic
analysis.
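A sketch of the corresponding fitness function follows: leave-one-out accuracy of a KNN classifier that stores only the selected exemplars, plus a bonus proportional to the number of unused exemplars once accuracy exceeds the user-preset threshold, with the last three bits of each string encoding k. The threshold value, bonus weight, and the mapping of three bits to k in {1, ..., 8} are illustrative assumptions.

```python
import numpy as np

def exemplar_fitness(bits, X, y, acc_threshold=0.75, bonus_weight=0.5):
    """Fitness for exemplar selection: leave-one-out accuracy of a k-NN
    classifier storing only the selected exemplars, plus a bonus for unused
    exemplars once accuracy exceeds the threshold.  The last 3 bits encode k."""
    k = 1 + int("".join(str(int(b)) for b in bits[-3:]), 2)  # k in 1..8
    keep = np.flatnonzero(bits[:-3])                         # selected exemplars
    if keep.size == 0:
        return 0.0
    correct = 0
    for i in range(len(y)):
        ref = keep[keep != i]                # never match a pattern to itself
        if ref.size == 0:
            continue
        dist = np.linalg.norm(X[ref] - X[i], axis=1)
        nn = ref[np.argsort(dist)[:k]]
        correct += int(np.bincount(y[nn]).argmax() == y[i])
    acc = correct / len(y)
    if acc >= acc_threshold:                 # reward small exemplar sets
        acc += bonus_weight * (len(bits) - 3 - keep.size) / (len(bits) - 3)
    return acc
```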
Genetic algorithms were effective in both reducing the number of exemplars and
selecting k. Classification error rates with selected exemplars were roughly 20%
on both training and test data. Selecting k typically resulted in fewer exemplars
and the number of exemplars required was reduced by a factor of roughly 8 (from
338 to 43). Genetic search was thus much more effective than the CKNN classifier
described in (Ng and Lippmann, 1991) which reduced the number of exemplars by
a factor of roughly 2 (from 338 to 152). Decision boundaries with all 338 original
exemplars are shown in Fig. 2. Boundaries are excessively complex and provide
perfect performance on the training patterns but perform poorly on the testing
patterns (25% error rate). Decision boundaries with the 43 exemplars selected
using genetic algorithms are shown in Fig. 3. Boundaries with the smaller number
of exemplars are smoother and provide an error rate of 20.1 % on test data.
5 CONCLUSIONS
Genetic algorithms proved to be a good search technique which is widely applicable
in pattern classification. Genetic algorithms were relatively easy to apply to feature
selection, feature creation, and exemplar selection problems. Solutions were found
that were better than those provided by heuristic approaches including forward and
backward feature selection and condensed k nearest neighbor algorithms. Genetic
algorithms also required far fewer evaluations than required by exhaustive search
and sometimes required only little more computation than heuristic approaches.
Run times on a Sun-3 workstation were long (hours and sometimes one or two
days) but not impractical. Run times are becoming less of an issue as
single-processor workstations become more powerful and as parallel computers become more available. Compared to developing a heuristic search technique for each
type of search problem, genetic algorithms offer the benefit of simplicity and good
performance on all problems. Further experiments should explore the use of genetic
algorithms in other application areas and also compare alternative search techniques
including simulated annealing.
Acknowledgements
This work was sponsored by the Air Force Office of Scientific Research and the
Department of the Air Force.
References
Eric I. Chang. Using Genetic Algorithms to Select and Create Features for Pattern
Classification. Master's Thesis, Massachusetts Institute of Technology, Department
of Electrical Engineering and Computer Science, Cambridge, MA, May 1990.
William Y. Huang and Richard P. Lippmann. HMM Speech Recognition Systems
with Neural Net Discrimination. In D. Touretzky (Ed.) Advances in Neural Information Processing Systems 2, 194-202, 1990.
Kenney Ng and Richard P. Lippmann. A Comparative Study of the Practical Characteristics of Neural Network and Conventional Pattern Classifiers. In Lippmann,
R., Moody, J., Touretzky, D., (Eds.) Advances in Neural Information Processing
Systems 3, 1991.
W. Siedlecki and J. Sklansky. On Automatic Feature Selection. International Journal of Pattern Recognition and Artificial Intelligence, 2:197-220, 1988.
Darrel Whitley. The GENITOR Algorithm and Selection Pressure: Why Rank-Based Allocation of Reproductive Trials is Best. In Proceedings of the Third International Conference on Genetic Algorithms, Washington, DC, June 1989.
3,102 | 3,810 | Local Rules for Global MAP: When Do They Work ?
Kyomin Jung*
KAIST
Daejeon, Korea
[email protected]
Pushmeet Kohli
Microsoft Research
Cambridge, UK
[email protected]
Devavrat Shah
MIT
Cambridge, MA, USA
[email protected]
Abstract
We consider the question of computing Maximum A Posteriori (MAP) assignment
in an arbitrary pair-wise Markov Random Field (MRF). We present a randomized
iterative algorithm based on simple local updates. The algorithm, starting with an
arbitrary initial assignment, updates it in each iteration by first, picking a random
node, then selecting an (appropriately chosen) random local neighborhood and
optimizing over this local neighborhood. Somewhat surprisingly, we show that
this algorithm finds a near optimal assignment within n log² n iterations with high
probability for any n node pair-wise MRF with geometry (i.e. MRF graph with
polynomial growth) with the approximation error depending on (in a reasonable
manner) the geometric growth rate of the graph and the average radius of the local
neighborhood; this allows for a graceful tradeoff between the complexity of the
algorithm and the approximation error. Through extensive simulations, we show
that our algorithm finds extremely good approximate solutions for various kinds
of MRFs with geometry.
1 Introduction
The abstraction of Markov random field (MRF) allows one to utilize graphical representation to
capture inter-dependency between large number of random variables in a succinct manner. The MRF
based models have been utilized successfully in the context of coding (e.g. the low density parity
check code [15]), statistical physics (e.g. the Ising model [5]), natural language processing [13]
and image processing in computer vision [11, 12, 19]. In most applications, the primary inference
question of interest is that of finding maximum a posteriori (MAP) solution ? e.g. finding a most
likely transmitted message based on the received signal.
Related Work. Computing the exact MAP solution in general probabilistic models is an NP-hard
problem. This has led researchers to resort to fast approximate algorithms. Various such algorithmic approaches have been developed over the past three decades. In essence, all such
approaches try to find a locally optimal solution of the problem through iterative procedure. These
?local update? algorithms start from an initial solution and proceed by making a series of changes
which lead to solutions having lower energy (or better likelihood), and hence are also called ?move
making algorithms?. At each step, the algorithms search the space of all possible local changes that
can be made to the current solution (also called move space), and choose the one which leads to the
solution having the highest probability or lowest energy.
One such algorithm (which has been rediscovered multiple times) is called Iterated Conditional
Modes or ICM for short. Its local update involves selecting (randomly or deterministically) a variable of the problem. Keeping the values of all other variables fixed, the value of the selected variable
is chosen which results in a solution with the maximum probability. This process is repeated by selecting other variables until the probability cannot be increased further.
(*This work was partially carried out while the author was visiting Microsoft Research Cambridge, and was partially supported by NSF CAREER project CNS-0546590.)
The size of the move space is the defining characteristic of any such move making algorithm. A large
move space means that more extensive changes to the current solution can be made. This makes the
algorithm less prone to getting stuck in local minima and also results in a faster rate of convergence.
Expansion and Swap are move making algorithms which search for the optimal move in a move
space of size 2n where n is the number of random variables. For energy functions composed of
metric pairwise potentials, the optimal move can be found in polynomial time by minimizing a
submodular quadratic pseudo-boolean function [3] (or solving an equivalent minimum cost st-cut
problem).
The last few years have seen a lot of interest in st-mincut based move algorithms for energy minimization. Komodakis et al. [9] recently gave an alternative interpretation of the expansion algorithm.
They showed that expansion can be seen as solving the dual of a linear programming relaxation of
the energy minimization problem. Researchers have also proposed a number of novel move encoding strategies for solving particular forms of energy functions. Veksler [18] proposed a move
algorithm in which variables can choose any label from a range of labels. They showed that this
move space allowed them to obtain better minima of energy functions with truncated convex pairwise terms. Kumar and Torr [10] have since shown that the range move algorithm achieves the same
guarantees as the ones obtained by methods based on the standard linear programming relaxation.
A related popular algorithmic approach is based on max-product belief propagation (cf. [14] and
[22]). In a sense, it can be viewed as an iterative algorithm that makes local updates by optimizing
over the immediate graphical structure. There is a long line of literature on understanding the
conditions under which the max-product belief propagation algorithm finds the correct solution. Specifically,
in recent years a sequence of results suggests that there is an intimate relation between the max-product algorithm and a natural linear programming relaxation; for example, see [1, 2, 8, 16, 21].
We also note that the Swendsen-Wang algorithm (SW) [17], a local flipping algorithm, has a philosophy
similar to ours in that it repeats a process of randomly partitioning the graph, and computing an
assignment. However, the graph partitioning of SW is fundamentally different from ours and there
is no known guarantee for the error bound of SW.
In summary, all the approaches thus far with provable guarantees for local update based algorithms
are primarily for linear or, more generally, convex optimization setups.
Our Contribution. As the main result of this paper, we propose a randomized iterative local algorithm that is based on simple local updates. The algorithm, starting with an arbitrary initial assignment, updates it in each iteration by first picking a random node, then its (appropriate) random
local neighborhood and optimizing over this local neighborhood. Somewhat surprisingly, we show
that this algorithm finds a near optimal assignment within n log² n iterations with high probability for
graphs with geometry, i.e. graphs in which the neighborhood of each node within distance r grows
no faster than a polynomial in r. Such graphs can have arbitrary structure subject to this polynomial growth condition. We show that the approximation error depends gracefully on the average
random radius of the local neighborhood and the degree of polynomial growth of the graph. Overall, our
algorithm can provide an ε-approximate MAP with C(ε) n log² n total computation, with C(ε)
depending only on ε and the degree of polynomial growth. The crucial novel feature of our algorithm is the appropriate selection of a random local neighborhood, rather than a deterministic one, in order to
achieve a provable performance guarantee.
We note that the near optimality of our algorithm does not depend on a convexity property or a tree-like
structure, as in many of the previous works, but relies only on the geometry of the graphical structure, which
is present in many graphical models of interest such as those arising in image processing, wireless
networks, etc.
We use our algorithm to verify its performance in simulation. Specifically, we apply our
algorithm in two popular settings: (a) a grid graph based pairwise MRF with varying node and edge
interaction strengths, and (b) a grid graph based MRF on the weighted independent set (or hardcore)
model. We find that with a very small radius (at most 3), we obtain an assignment within 1% (a 0.99
factor) of the MAP for a large range of parameters and for graphs of up to 1000 nodes.
Organization. We start by formally stating our problem and main theorem (Theorem 1)
in Section 2. This is followed by a detailed description of the algorithm in Section 3. We present
the sketch proof of the main result in Section 4. Finally, we provide a detailed simulation results in
Section 5.
2 Main Results
We start with the formal problem description and useful definitions/notations followed by the statement of the main result about performance of the algorithm. The algorithm will be stated in the next
section.
Definitions & Problem Statement. Our interest is in a pair-wise MRF defined next. We note that,
formally all (non pair-wise) MRFs are equivalent to pair-wise MRFs; e.g. see [20].
Definition 1 (Pair-wise MRF). A pair-wise MRF based on a graph G = (V, E) with n = |V| vertices
and edge set E is defined by associating a random variable X_v with each vertex v ∈ V, taking values
in a finite alphabet set Σ; the joint distribution of X = (X_v)_{v∈V} is defined as
$$\Pr[X = x] \;\propto\; \prod_{v \in V} \psi_v(x_v) \prod_{(u,v) \in E} \psi_{uv}(x_u, x_v), \qquad (1)$$
where ψ_v : Σ → ℝ₊ and ψ_uv : Σ² → ℝ₊ are called node and edge potential functions.¹
In this paper, the question of interest is to find the maximum a posteriori (MAP) assignment x* ∈ Σⁿ, i.e.
$$x^* \in \arg\max_{x \in \Sigma^n} \Pr[X = x].$$
Equivalently, from the optimization point of view, we wish to find an optimal assignment of the problem
$$\text{maximize } H(x) \quad \text{over} \quad x \in \Sigma^n,$$
where
$$H(x) = \sum_{v \in V} \ln \psi_v(x_v) + \sum_{(u,v) \in E} \ln \psi_{uv}(x_u, x_v).$$
For completeness and simplicity of exposition, we assume that the function H is finite valued over
Σⁿ. However, the results of this paper extend to hard-constrained problems such as the hardcore or
independent set model.
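As a concrete reference point, H(x) is cheap to evaluate for any candidate assignment. The following minimal Python sketch is entirely our illustration; the containers `node_pot` and `edge_pot` holding ψ_v and ψ_uv are hypothetical names, not from the paper:

```python
import math

def score(x, V, E, node_pot, edge_pot):
    """H(x) = sum_v ln psi_v(x_v) + sum_{(u,v)} ln psi_uv(x_u, x_v).

    x        : dict mapping each vertex to a label in the alphabet Sigma
    node_pot : dict, node_pot[v][label] = psi_v(label) > 0
    edge_pot : dict, edge_pot[(u, v)][(label_u, label_v)] = psi_uv(...) > 0
    """
    h = sum(math.log(node_pot[v][x[v]]) for v in V)
    h += sum(math.log(edge_pot[e][(x[e[0]], x[e[1]])]) for e in E)
    return h
```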
In this paper, we will design algorithms for the approximate MAP problem. Specifically, we call
an assignment x̂ an ε-approximate MAP if
$$(1 - \epsilon)\,H(x^*) \;\le\; H(\hat{x}) \;\le\; H(x^*).$$
Graphs with Geometry. We define the notion of graphs with geometry here. To this end, a graph
G = (V, E) induces a natural "graph metric" on the vertices V, denoted by d_G : V × V → ℝ₊, with
d_G(v, u) the length of the shortest path between u and v, defined as ∞ if there is no path
between them.
Definition 2 (Graph with Polynomial Growth). We call a graph G a graph with polynomial growth of degree
(or growth rate) ρ if, for any v ∈ V and r ∈ ℕ,
$$|B_G(v, r)| \le C \cdot r^{\rho},$$
where C > 0 is a universal constant and B_G(v, r) = {w ∈ V | d_G(w, v) < r}.
A large class of graph models naturally falls into the class of graphs with polynomial growth. To begin with,
the standard d-dimensional regular grid graphs have polynomial growth rate d; e.g., d = 1 is the
line graph. More generally, in recent years, in the context of computational geometry and metric
embedding, graphs with finite doubling dimension have become a popular object of study [6, 7].
¹We assume the positivity of the ψ_v's and ψ_uv's for simplicity of analysis.
It can be checked that a graph with doubling dimension ρ is also a graph with polynomial growth
rate ρ. Finally, consider the popular geometric graph model where nodes are placed arbitrarily on a two
dimensional surface with a minimum distance separation and two nodes share an edge
if they are within a certain finite distance; such a graph indeed has finite polynomial growth rate.
Statement of Main Result. The main result of this paper is a randomized iterative algorithm based
on simple local updates. In essence the algorithm works as follows. It starts with an arbitrary initial
assignment. In each iteration, it picks a node, say v, from the n nodes of V uniformly at random and
picks a random radius Q (as per a specific distribution). The algorithm re-assigns values to all nodes
within distance Q of node v with respect to the graph distance d_G by finding the optimal assignment
for this local neighborhood, subject to keeping the assignment of all other nodes the same. The
algorithm LOC-ALGO described in Section 3 repeats this process n log² n many times. We show
that LOC-ALGO finds a near optimal solution with high probability as long as the graph has finite
polynomial growth rate.
Theorem 1. Given an MRF based on a graph G = (V, E) of n = |V| nodes with polynomial growth rate
ρ and approximation parameter ε ∈ (0, 1), our algorithm LOC-ALGO with O(log(1/δ) n log n)
iterations produces a solution x̂ such that
$$\Pr\big[H(x^*) - H(\hat{x}) \le 2\epsilon H(x^*)\big] \;\ge\; 1 - \delta - \frac{1}{\mathrm{poly}(n)}.$$
And each iteration takes at most Φ(ε, ρ) computation, with
$$\Phi(\epsilon, \rho) \;\le\; |\Sigma|^{C\,K(\epsilon,\rho)^{\rho}},$$
where K(ε, ρ) is defined as
$$K = K(\epsilon, \rho) = \frac{8\rho}{\hat{\epsilon}}\log\frac{8\rho}{\hat{\epsilon}} + \frac{4}{\hat{\epsilon}}\log C + \frac{4}{\hat{\epsilon}}\log\frac{1}{\hat{\epsilon}} + 2,
\qquad \text{with} \qquad \hat{\epsilon} = \frac{\epsilon}{5C\,2^{\rho}}.$$
In a nutshell, Theorem 1 says that the complexity of the algorithm for obtaining an ε-approximation
scales almost linearly in n and double exponentially in 1/ε and ρ. On one hand, this result establishes
that it is indeed possible to have a polynomial (or almost linear) time approximation algorithm for
an arbitrary pair-wise MRF with polynomial growth. On the other hand, though the theoretical bound on
the pre-constant Φ(ε, ρ) as a function of 1/ε and ρ is not very exciting, our simulations suggest (see
Section 5) that even for hard problem setups, the performance is much more optimistic than predicted
by these theoretical bounds. Therefore, as a recommendation for a system designer, we suggest using
a smaller "radius" distribution in the algorithm described in Section 3 to obtain a good algorithm.
3 Algorithm Description
In this section, we provide details of the algorithm intuitively described in the previous section. As
noted earlier, the algorithm iteratively updates its estimate of the MAP, denoted by x̂. Initially, x̂ is
chosen arbitrarily. Iteratively, at each step a vertex v ∈ V is chosen uniformly at random, along with
a random radius Q that is chosen independently as per distribution Q. Then, select R ⊂ V, the local
neighborhood (or ball) of radius Q around v as per the graph distance d_G, i.e. {w ∈ V | d_G(v, w) < Q}.
Then, while keeping the assignment of all nodes in V \ R fixed as per x̂ = (x̂_v)_{v∈V}, find the MAP
assignment x^{*,R} restricted to the nodes of R. Finally, update the assignment of each node v ∈ R as per
x^{*,R}. A caricature of an iteration is described in Figure 1. The precise description of the algorithm
is given in Figure 2.
In order to have good performance, it is essential to choose an appropriate distribution Q for the selection
of the random radius Q each time. Next, we define this distribution, which is essentially a truncated
geometric distribution. Specifically, given the parameter ε ∈ (0, 1) and the polynomial growth rate ρ
(with constant C) of the graph, define ε̂ = ε/(5C·2^ρ) and
$$K = K(\epsilon, \rho) = \frac{8\rho}{\hat{\epsilon}}\log\frac{8\rho}{\hat{\epsilon}} + \frac{4}{\hat{\epsilon}}\log C + \frac{4}{\hat{\epsilon}}\log\frac{1}{\hat{\epsilon}} + 2.$$
Then, the distribution (or random variable) Q is defined over the integers from 1 to K(ε, ρ) as
$$\Pr[Q = i] = \begin{cases} \hat{\epsilon}(1-\hat{\epsilon})^{i-1} & \text{if } 1 \le i < K(\epsilon, \rho), \\ (1-\hat{\epsilon})^{K-1} & \text{if } i = K(\epsilon, \rho). \end{cases}$$
Figure 1: Pictorial description of an iteration of LOC-ALGO: a ball of random radius Q, drawn from the truncated geometric distribution, around a randomly chosen node u of the graph G.
LOC-ALGO(ε̂, K)
(0) Input: MRF G = (V, E) with ψ_i(·), i ∈ V, and ψ_ij(·, ·), (i, j) ∈ E.
(1) Initially, select x̂ ∈ Σⁿ arbitrarily.
(2) Do the following n log² n many times:
    (a) Choose an element u ∈ V uniformly at random.
    (b) Draw a random number Q according to the distribution Q.
    (c) Let R ← {w ∈ V | d_G(u, w) < Q}.
    (d) Through dynamic programming (or exhaustive computation), find an exact MAP x^{*,R} for R while fixing all the other assignment values of x̂ outside R.
    (e) Change the values of x̂ on R to x^{*,R}.
(3) Output x̂.
Figure 2: Algorithm for approximate MAP computation.
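To make the procedure concrete, here is a minimal Python sketch of one plausible reading of LOC-ALGO, using exhaustive search over the local ball in place of dynamic programming (which the paper explicitly allows). It reuses the `score` helper and the hypothetical potential containers from the sketch in Section 2; all function names are ours, and `V` is assumed to be a list with `adj` an adjacency dictionary:

```python
import itertools
import math
import random

def sample_radius(eps_hat, K):
    # Truncated geometric: Pr[Q = i] = eps_hat * (1 - eps_hat)**(i - 1) for i < K,
    # with the remaining mass (1 - eps_hat)**(K - 1) placed on i = K.
    u = random.random()
    for i in range(1, K):
        p = eps_hat * (1 - eps_hat) ** (i - 1)
        if u < p:
            return i
        u -= p
    return K

def ball(adj, u, radius):
    # All vertices w with graph distance d_G(u, w) < radius (breadth-first search).
    dist = {u: 0}
    frontier = [u]
    while frontier:
        nxt = []
        for v in frontier:
            if dist[v] + 1 < radius:
                for w in adj[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        nxt.append(w)
        frontier = nxt
    return list(dist)

def loc_algo(V, E, adj, node_pot, edge_pot, labels, eps_hat, K, iters):
    x = {v: random.choice(labels) for v in V}        # step (1): arbitrary start
    for _ in range(iters):                           # ~ n log^2 n iterations
        u = random.choice(V)                         # step (2a)
        R = ball(adj, u, sample_radius(eps_hat, K))  # steps (2b)-(2c)
        best, best_h = None, -math.inf
        for assign in itertools.product(labels, repeat=len(R)):  # step (2d)
            for v, lab in zip(R, assign):
                x[v] = lab
            h = score(x, V, E, node_pot, edge_pot)   # exhaustive local MAP
            if h > best_h:
                best, best_h = dict(zip(R, assign)), h
        x.update(best)                               # step (2e)
    return x
```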
4 Proof of Theorem 1
In this section, we present the proof of Theorem 1. To that end, we will prove the following lemma.
Lemma 1. If we run LOC-ALGO for (2n ln n) iterations, then with probability at least 1 − 1/n, we have
$$(1 - \epsilon)\,H(x^*) \;\le\; E[H(\hat{x})] \;\le\; H(x^*).$$
From Lemma 1, we obtain Theorem 1 as follows. Define T = 2 log(1/δ), and consider LOC-ALGO with (2T n ln n) iterations. From the fact that H(x*) − H(x̂) ≥ 0, and by the Markov
inequality applied to H(x*) − H(x̂) with Lemma 1, we have that after (2n ln n) iterations,
$$\Pr\big[H(x^*) - H(\hat{x}) \ge 2\epsilon H(x^*)\big] \le \frac{1}{2}. \qquad (2)$$
Note that (2) is true for any initial assignment of LOC-ALGO. Hence for each 1 ≤ t ≤ T, after
(2tn ln n) iterations, (2) holds independently with probability 1 − 1/n. Also, note that H(x̂) is
increasing monotonically. Hence, H(x*) − H(x̂) > 2εH(x*) holds after (2T n ln n) iterations only
if the same holds after (2tn ln n) iterations for all 1 ≤ t ≤ T. Hence, after (2T n ln n) iterations, we
have Pr[H(x*) − H(x̂) ≤ 2εH(x*)] ≥ 1 − δ − 1/poly(n), which proves the first part of Theorem 1.
For the total computation bound in Theorem 1, note that each iteration of LOC-ALGO involves
dynamic programming over a local neighborhood of radius at most K = K(ε, ρ) around a node.
This involves, due to the polynomial growth condition, at most CK^ρ nodes. Each variable can take
at most |Σ| different values. Therefore, dynamic programming (or exhaustive search) takes at
most |Σ|^{CK^ρ} operations, as claimed.
Proof of Lemma 1. First observe that by the standard argument in the classical coupon collector
problem with n coupons (e.g. see [4]), it follows that after 2n ln n iterations, with probability at
least 1 − 1/n, all the vertices of V will be chosen as "ball centers" at least once.
Error bound. Now we prove that if all the vertices of V are chosen as "ball centers" at least once, the
answer x̂ generated by LOC-ALGO after 2n ln n iterations is indeed an ε-approximation on average.
To this end, we construct an imaginary set of edges as follows. Imagine that the procedure (2) of
LOC-ALGO is done with an iteration parameter t ∈ ℤ₊. Then for each vertex v ∈ V, we assign the
largest iteration number t such that the ball R chosen at iteration t contains v. That is,
$$T(v) = \max\{t \in \mathbb{Z}_+ \mid \text{LOC-ALGO chooses } v \text{ as a member of } R \text{ at iteration } t\}.$$
Clearly, this is well defined once the algorithm has been run until each node is chosen as a "ball center" at least once.
Now define an imaginary boundary set of LOC-ALGO as
$$B = \{(u, w) \in E \mid T(u) \ne T(w)\}.$$
Now consider the graph (V, E \ B) obtained by removing the edges B from G. In this graph, nodes
in the same connected component have the same T(·) value. Next, we state two lemmas that will be
crucial to the proof of the theorem. The proofs of Lemmas 2 and 3 are omitted.
Lemma 2. Given two MRFs X¹ and X² on the same graph G = (V, E) with identical edge potentials {ψ_ij(·, ·)}, (i, j) ∈ E, but distinct node potentials {ψ¹_i(·)} and {ψ²_i(·)}, i ∈ V, respectively. For
each i ∈ V, define Δ_i = max_{σ∈Σ} |ψ¹_i(σ) − ψ²_i(σ)|. Finally, for ℓ ∈ {1, 2} and any x ∈ Σⁿ,
define H^ℓ(x) = Σ_{i∈V} ψ^ℓ_i(x_i) + Σ_{(i,j)∈E} ψ_ij(x_i, x_j), with x^{*,ℓ} being a MAP assignment of MRF
X^ℓ. Then, we have
$$|H^1(x^{*,1}) - H^1(x^{*,2})| \;\le\; 2\sum_{i \in V} \Delta_i.$$
Lemma 3. Given an MRF X defined on G (as in (1)), the algorithm LOC-ALGO produces an output x̂
such that
$$|H(x^*) - H(\hat{x})| \;\le\; 5 \sum_{(i,j) \in B} \big(\psi^U_{ij} - \psi^L_{ij}\big),$$
where B is the (random) imaginary boundary set of LOC-ALGO, ψ^U_ij = max_{σ,σ'∈Σ} ψ_ij(σ, σ'), and
ψ^L_ij = min_{σ,σ'∈Σ} ψ_ij(σ, σ').
Now we obtain the following lemma, which utilizes the fact that the distribution Q follows a geometric
distribution with rate (1 − ε̂); its proof is omitted.
Lemma 4. For any edge e ∈ E of G,
$$\Pr[e \in B] \le \hat{\epsilon}.$$
From Lemma 4, we obtain that
$$E\Big[\sum_{(i,j) \in B} \big(\psi^U_{ij} - \psi^L_{ij}\big)\Big] \;\le\; \hat{\epsilon} \sum_{(i,j) \in E} \big(\psi^U_{ij} - \psi^L_{ij}\big). \qquad (3)$$
Finally, we establish the following lemma, which bounds Σ_{(i,j)∈E} (ψ^U_ij − ψ^L_ij); its proof is omitted.
Lemma 5. If G has maximum vertex degree d*, then
$$\sum_{(i,j) \in E} \big(\psi^U_{ij} - \psi^L_{ij}\big) \;\le\; (d^* + 1)\,H(x^*). \qquad (4)$$
Now recall that the maximum vertex degree d* of G is less than 2^ρ C by the definition of a polynomially growing graph. Therefore, by Lemma 3, (3), and Lemma 5, the output produced by the
LOC-ALGO algorithm is such that
$$E\big[|H(x^*) - H(\hat{x})|\big] \;\le\; 5(d^* + 1)\,\hat{\epsilon}\,H(x^*) \;\le\; \epsilon\,H(x^*),$$
where recall that ε̂ = ε/(5C·2^ρ). This completes the proof of Lemma 1.
5 Experiments
Our algorithm provides a provable approximation for any MRF on a polynomially growing graph.
In this section, we present experimental evaluations of our algorithm for two popular models: (a)
synthetic Ising model, and (b) hardcore (independent set) model. As the reader will notice, the experimental results not only confirm the qualitative behavior established by our theoretical result, but
also suggest that much tighter approximation guarantees should be expected in practice compared
to what is guaranteed by the theoretical results.
Setup 1.² Consider a binary (i.e. Σ = {0, 1}) MRF on an n₁ × n₂ grid G = (V, E):
$$\Pr(x) \;\propto\; \exp\Big(\sum_{i \in V} \theta_i x_i + \sum_{(i,j) \in E} \theta_{ij} x_i x_j\Big), \quad \text{for } x \in \{0,1\}^{n_1 n_2}.$$
We consider the following scenario for choosing the parameters (with the notation U[a, b] for the uniform distribution over the interval [a, b]); a generator sketch follows the list:
1. For each i ∈ V, choose θ_i independently as per the distribution U[−1, 1].
2. For each (i, j) ∈ E, choose θ_ij independently from U[−α, α]. Here the interaction parameter α is chosen from {0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, 64}.
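A minimal sketch of how this random model might be instantiated; the grid indexing and the conversion to multiplicative potentials (so the model plugs into the scoring sketch of Section 2) are our own choices, not the authors':

```python
import math
import random

def random_ising_grid(n1, n2, alpha, seed=None):
    """Random grid Ising model from Setup 1: theta_i ~ U[-1, 1], theta_ij ~ U[-alpha, alpha].
    Returns potentials psi_i(s) = exp(theta_i * s) and psi_ij(s, t) = exp(theta_ij * s * t)."""
    rng = random.Random(seed)
    V = [(i, j) for i in range(n1) for j in range(n2)]
    E = [((i, j), (i + 1, j)) for i in range(n1 - 1) for j in range(n2)] + \
        [((i, j), (i, j + 1)) for i in range(n1) for j in range(n2 - 1)]
    node_pot = {}
    for v in V:
        th = rng.uniform(-1.0, 1.0)
        node_pot[v] = {s: math.exp(th * s) for s in (0, 1)}
    edge_pot = {}
    for e in E:
        th = rng.uniform(-alpha, alpha)
        edge_pot[e] = {(s, t): math.exp(th * s * t) for s in (0, 1) for t in (0, 1)}
    return V, E, node_pot, edge_pot
```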
Figure 3: (A) plots the error of the local update algorithm for a random Ising model on the grid graph of
size 10 × 10, and (B) plots the error on the grid of size 100 × 10 (average error against α ∈ {0.125, ..., 64}, with one curve for each of r = 1, 2, 3).
To compare the effectiveness of our algorithm for each size of the local updates, in our simulation
we fix the square size to a constant instead of choosing it from a distribution. We run the simulation
for local squares of size r × r with r = 1, 2, 3, where r = 1 is the case when each square consists of a
single vertex. We computed an exact MAP assignment x* by dynamic programming, and computed
the output x̂ of our local update algorithm for each r by doing 4n₁n₂ log(n₁n₂) local updates
for the n₁ × n₂ grid graph. Then we compare the error as follows:
$$\text{Error} = \frac{H(x^*) - H(\hat{x})}{H(x^*)}.$$
We run the simulation for 100 trials and compute the average error for each case. Figure 3(A)
plots the error for the grid of size 10 × 10, while Figure 3(B) plots the error for the grid of size
100 × 10.
²Though this setup has θ_i, θ_ij taking negative values, it is equivalent to the setup considered in the
paper, since an affine shift will make them non-negative without changing the distribution.
Recall that the approximation guarantee of Theorem 1 is an error bound for the worst case. As the
simulation results suggest, for any graph and any range of α, the error of the local update algorithm
decreases dramatically as r increases. Moreover, even when r is as small as 3, the output
of the local update algorithm achieves a remarkably good approximation. Hence we observe that our
algorithm performs well not only theoretically, but also practically.
Setup 2. We consider the vertex weighted independent set model defined on a grid graph. To this
end, we start with a description of the weighted independent set problem as an MRF model. Specifically,
consider a binary MRF on an n₁ × n₂ grid G = (V, E):
$$\Pr(x) \;\propto\; \exp\Big(\sum_{i \in V} \theta_i x_i + \sum_{(i,j) \in E} \psi(x_i, x_j)\Big), \quad \text{for } x \in \{0,1\}^{n_1 n_2}.$$
Here, the parameters are chosen as follows.
1. For each i ∈ V, θ_i is chosen independently as per the distribution U[0, 1].
2. The function ψ(·, ·) is defined as
$$\psi(\sigma, \tau) = \begin{cases} -M & \text{if } (\sigma, \tau) = (1, 1), \\ 0 & \text{otherwise,} \end{cases}$$
where M is a large number.
For this model, we ran simulations on grid graphs of size 10 × 10, 30 × 10, and 100 × 10, respectively.
For each graph, we computed the average error as in Setup 1, over 100 trials. The result is shown
in the following table. As the result shows, our local update algorithm achieves a remarkably good
approximation of the MAP, or equivalently in this setup the maximum weight independent set, even
with very small r values!

           10 × 10     30 × 10     100 × 10
r = 1     0.219734    0.205429    0.208446
r = 2     0.016032    0.019145    0.019305
r = 3     0.001539    0.002616    0.002445
It is worth noting that choosing θ_i from U[0, c] for any c > 0 will give the same approximation
results, since H(x*) and H(x̂) are both linear in c.
6 Conclusion
We considered the question of designing a simple, iterative algorithm with local updates for finding
the MAP in any pair-wise MRF. As the main result of this paper, we presented such a randomized, local,
iterative algorithm that can find an ε-approximate solution of the MAP in any pair-wise MRF based on
G within 2n ln n iterations, where the computation per iteration is a constant C(ε, ρ) depending on the
accuracy parameter ε as well as the growth rate ρ of the polynomially growing graph G. That is,
ours is a local, iterative, randomized PTAS for the MAP problem in MRFs with geometry. Our results are
somewhat surprising given that thus far the known theoretical justifications for such local algorithms
strongly depended on some form of convexity of the "energy" function. In contrast, our results
do not require any such condition, but only the geometry of the underlying MRF. We believe that
our algorithm will be of great practical interest in the near future, as a large class of problems that utilize
MRF-based modeling and inference in practice have an underlying graphical structure possessing
some form of geometry naturally.
References
[1] M. Bayati, D. Shah, and M. Sharma. Maximum weight matching via max-product belief propagation. In IEEE ISIT, 2005.
[2] M. Bayati, D. Shah, and M. Sharma. Max-product for maximum weight matching: Convergence, correctness, and LP duality. IEEE Transactions on Information Theory, 54(3):1241-1251, 2008.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell., 23(11):1222-1239, 2001.
[4] William Feller. An Introduction to Probability Theory and Its Applications. Wiley, 1957.
[5] Hans-Otto Georgii. Gibbs Measures and Phase Transitions. Walter de Gruyter, 1988.
[6] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In Proceedings of the 44th Annual Symposium on Foundations of Computer Science, 2003.
[7] S. Har-Peled and M. Mendel. Fast construction of nets in low dimensional metrics, and their applications. In Proceedings of the Twenty-First Annual Symposium on Computational Geometry, pages 150-158. ACM, New York, NY, USA, 2005.
[8] B. Huang and T. Jebara. Loopy belief propagation for bipartite maximum weight b-matching. In Artificial Intelligence and Statistics (AISTATS), 2007.
[9] N. Komodakis and G. Tziritas. A new framework for approximate labeling via graph cuts. In International Conference on Computer Vision, pages 1018-1025, 2005.
[10] M. Pawan Kumar and Philip H. S. Torr. Improved moves for truncated convex models. In NIPS, pages 889-896, 2008.
[11] Stan Z. Li. Markov Random Field Modeling in Image Analysis. Springer, 2001.
[12] M. Malfait and D. Roose. Wavelet-based image denoising using a Markov random field a priori model. IEEE Transactions on Image Processing, 6(4):549-565, 1997.
[13] Christopher D. Manning and Hinrich Schutze. Foundations of Statistical Natural Language Processing. The MIT Press, 1999.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco, CA: Morgan Kaufmann, 1988.
[15] Thomas Richardson and Ruediger Urbanke. Modern Coding Theory. Cambridge University Press, 2008.
[16] S. Sanghavi, D. Shah, and A. Willsky. Message-passing for maximum weight independent set. In Proceedings of NIPS, 2007.
[17] R. Swendsen and J. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86-88, 1987.
[18] O. Veksler. Graph cut based optimization for MRFs with truncated convex priors. In CVPR, 2007.
[19] Paul Viola and Michael J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137-154, 2004.
[20] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. UC Berkeley, Dept. of Statistics, Technical Report 649, 2003.
[21] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches. IEEE Transactions on Information Theory, 2005.
[22] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. Mitsubishi Elect. Res. Lab., TR-2000-26, 2000.
3,103 | 3,811 | A General Projection Property for Distribution Families
Yao-Liang Yu
Yuxi Li    Dale Schuurmans    Csaba Szepesvári
Department of Computing Science
University of Alberta
Edmonton, AB, T6G 2E8 Canada
{yaoliang,yuxi,dale,szepesva}@cs.ualberta.ca
Abstract
Surjectivity of linear projections between distribution families with fixed mean
and covariance (regardless of dimension) is re-derived by a new proof. We further
extend this property to distribution families that respect additional constraints,
such as symmetry, unimodality and log-concavity. By combining our results with
classic univariate inequalities, we provide new worst-case analyses for natural
risk criteria arising in classification, optimization, portfolio selection and Markov
decision processes.
1 Introduction
In real applications, the model of the problem at hand inevitably embodies some form of uncertainty:
the parameters of the model are usually (roughly) estimated from data, which themselves can be
uncertain due to various kinds of noises. For example, in finance, the return of a financial product
can seldom be known exactly beforehand. Despite this uncertainty, one still usually has to take action
in the underlying application. However, due to uncertainty, any attempt to behave "optimally" in the
world must take into account plausible alternative models.
Focusing on problems where uncertain data/parameters are treated as random variables and the
model consists of a joint distribution over these variables, we initially assume prior knowledge
that the first and second moments of the underlying distribution are known, but the distribution is
otherwise arbitrary. A parametric approach to handling uncertainty in such a setting would be to fit a
specific parametric model to the known moments and then apply stochastic programming techniques
to solve for an optimal decision. For example, fitting a Gaussian model to the constraints would be
a popular choice. However, such a parametric strategy can be too bold, hard to justify, and might
incur significant loss if the fitting distribution does not match the true underlying distribution very
well. A conservative, but more robust approach would be to take a decision that was "protected" in
the worst-case sense; that is, behaves optimally assuming that nature has the freedom to choose an
adverse distribution. Such a minimax formulation has been studied in several fields [1; 2; 3; 4; 5; 6]
and is also the focus of this paper. Although Bayesian optimal decision theory is a rightfully well-established approach for decision making under uncertainty, minimax has proved to be a useful
alternative in many domains, such as finance, where it is difficult to formulate appropriate priors
over models. In these fields, minimax formulations combined with stochastic programming [7] have
been extensively studied and successfully applied.
We make a contribution to minimax probability theory and apply the results to problems arising in
four different areas. Specifically, we generalize a classic result on the linear projection property of
distribution families: we show that any linear projection between distribution families with fixed
mean and covariance, regardless of their dimensions, is surjective. That is, given any matrix X and
any random vector r with mean X^Tμ and covariance X^TΣX, one can always find another random
vector R with mean μ and covariance Σ such that X^TR = r (almost surely). Our proof imposes no
conditions on the deterministic matrix X, hence extends the classic projection result in [6], which
assumes X is a vector. We furthermore extend this surjective property to some restricted distribution
families, which allows additional prior information to be incorporated and hence less conservative
solutions to be obtained. In particular, we prove that the surjectivity of linear projections continues to hold
for distribution families that are additionally symmetric, log-concave, or symmetric linear unimodal.
In each case, our proof strategy allows one to construct the worst-case distribution(s).
An immediate application of these results is to reduce the worst-case analysis of multivariate expectations to the univariate (or reduced multivariate) ones, which have been long studied and produced
many fruitful results. In this direction, we conduct worst-case analyses of some common restricted
distribution families. We illustrate our results on problems that incorporate a classic worst case
value-at-risk constraint: minimax probability classification [2]; chance constrained linear programming (CCLP) [3]; portfolio selection [4]; and Markov decision processes (MDPs) with reward uncertainty [8]. Although some of the results we obtain have been established in the respective fields
[2; 3; 4], we unify them through a much simpler proof strategy. Additionally, we provide extensions
to other constrained distribution families, which makes the minimax formulation less conservative
in each case. These results are then extended to the more recent conditional value-at-risk constraint,
and new bounds are proved, including a new bound on the survival function for symmetric unimodal
distributions.
2 A General Projection Property
First we establish a generalized linear projection property for distribution families. The key application will be to reduce worst-case multivariate stochastic programming problems to lower dimensional equivalents; see Corollary 1. Popescu [6] has proved the special case of reduction to one
dimension; however, we provide a simpler proof that can be more easily extended to other distribution families.¹
(¹In preparing the final version of this paper, we noticed that a very recent work [9] proved the one-dimensional case by a similar technique to ours.)
Let (μ, Σ) denote the family of distributions sharing common mean μ and covariance Σ, and let
μ_X = X^Tμ and Σ_X = X^TΣX. Below we denote random variables by boldface letters, and use I
to denote the identity matrix. We use † to denote the pseudo-inverse.
Theorem 1 (General Projection Property (GPP)) For all μ and Σ ⪰ 0, and X ∈ ℝ^{m×d}, the projection X^TR = r from m-variate distributions R ∈ (μ, Σ) to d-variate distributions r ∈ (μ_X, Σ_X) is
surjective and many-to-one. That is, every r ∈ (μ_X, Σ_X) can be obtained from some R ∈ (μ, Σ)
via X^TR = r (almost surely).
Proof: The proof is constructive. Given an r ∈ (μ_X, Σ_X), we can construct a pre-image R by letting
$$R = \Sigma X \Sigma_X^{\dagger}\, r + (I_m - \Sigma X \Sigma_X^{\dagger} X^T)\, M,$$
where M ∈ (μ, Σ) is independent of r; for example, one can choose M as a Gaussian random vector. It is easy to verify that R ∈ (μ, Σ) and X^TR = Σ_XΣ_X^†r + (I_d − Σ_XΣ_X^†)X^TM = r. The last equality holds since (I_d − Σ_XΣ_X^†)r = (I_d − Σ_XΣ_X^†)X^TM
(the two random vectors on both sides have the same mean and zero covariance). Note that since M
can be chosen arbitrarily in (μ, Σ), the projections are always many-to-one.
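The construction in this proof is explicit enough to code directly. Below is a small NumPy sketch (our illustration, with a Gaussian playing the role of M) that lifts samples of r to samples of R; it can be used to spot-check that the lifted samples have mean μ, covariance Σ, and satisfy X^TR = r:

```python
import numpy as np

def lift(r_samples, X, mu, Sigma, seed=0):
    """Lift samples of r ~ (mu_X, Sigma_X) to samples of
    R = Sigma X Sigma_X^+ r + (I_m - Sigma X Sigma_X^+ X^T) M,
    with M ~ N(mu, Sigma) independent, so that R has mean mu,
    covariance Sigma, and X^T R = r (almost surely)."""
    rng = np.random.default_rng(seed)
    m = len(mu)
    A = Sigma @ X @ np.linalg.pinv(X.T @ Sigma @ X)   # Sigma X Sigma_X^+
    P = np.eye(m) - A @ X.T
    M = rng.multivariate_normal(mu, Sigma, size=len(r_samples))
    return r_samples @ A.T + M @ P.T                   # one sample of R per row
```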
Although this establishes the general result, we extend it to distribution families under additional
constraints below. That is, one often has additional prior information about the underlying distribution, such as symmetry, unimodality, and/or support. In such cases, if a general linear projection
property can still be shown to hold, the additional assumptions can be used to make the minimax
approach less conservative in a simple, direct manner. We thus consider a number of additionally
restricted distribution families.
Definition 1 A random vector X is called (centrally) symmetric about μ if, for all vectors x,
Pr(X ≥ μ + x) = Pr(X ≤ μ − x). A univariate random variable is called unimodal about
a if its cumulative distribution function (c.d.f.) is convex on (−∞, a] and concave on [a, ∞). A
random vector X is called log-concave if its c.d.f. is log-concave. A random m-vector X is called
linear unimodal about 0_m if for all a ∈ ℝ^m, a^TX is (univariate) unimodal about 0.
Let (μ, Σ)_S denote the family of distributions in (μ, Σ) that are additionally symmetric about μ,
and similarly, let (μ, Σ)_L denote the family of distributions that are additionally log-concave, and
let (μ, Σ)_SU denote the family of distributions that are additionally symmetric and linear unimodal
about μ. For each of these restricted families, we require the following properties to establish our
next main result.
Lemma 1 (a) If a random vector X is symmetric about 0, then AX + μ is symmetric about μ. (b) If
X, Y are independent and both symmetric about 0, then Z = X + Y is also symmetric about 0.
Although once misbelieved, it is now clear that the convolution of two (univariate) unimodal distributions need not be unimodal. However, for symmetric, unimodal distributions we have
Lemma 2 ([10] Theorem 1.6) If two independent random variables x and y are both symmetric
and unimodal about 0, then z = x + y is also unimodal about 0.
There are several non-equivalent extensions of unimodality to multivariate random variables. We
consider two specific (multivariate) unimodalities in this paper: log-concave and linear unimodal.²
Lemma 3 ([10] Lemma 2.1, Theorem 2.4, Theorem 2.18)
1. Linearity: If a random m-vector X is log-concave, then a^TX is also log-concave for all a ∈ ℝ^m.
2. Cartesian Product: If X and Y are log-concave, then Z = [X; Y] is also log-concave.
3. Convolution: If X and Y are independent and log-concave, then Z = X + Y is also log-concave.
Given the above properties, we can now extend Theorem 1 to (μ, Σ)_S, (μ, Σ)_L and (μ, Σ)_SU.
Theorem 2 (GPP for Symmetric, Log-concave, and Symmetric Linear Unimodal Distributions)
For all μ and Σ ⪰ 0 and X ∈ ℝ^{m×d}, the projection X^TR = r from m-variate R ∈ (μ, Σ)_S to
d-variate r ∈ (μ_X, Σ_X)_S is surjective and many-to-one. The same is true for (μ, Σ)_L and
(μ, Σ)_SU.³
Proof: The proofs follow the same basic outline as Theorem 1, except that in the first step we
now choose N ∈ (0_m, I_m)_S or (0_m, I_m)_L or (0_m, I_m)_SU. Then, respectively, symmetry of the
constructed R follows from Lemma 1; log-concavity of R follows from Lemma 3; and linear unimodality of R follows from the definition and Lemma 2. The maps remain many-to-one.
An immediate application of the general projection property is to reduce worst-case analyses of
multivariate expectations to the univariate case. Note that in the following corollary, the optimal
distribution of R can be easily constructed from the optimal distribution of r.
Corollary 1 For any matrix X and any function g(·) (including in particular when X is a vector),
$$\sup_{R \in (\mu, \Sigma)} E\big[g(X^T R)\big] \;=\; \sup_{r \in (X^T\mu,\, X^T\Sigma X)} E\big[g(r)\big]. \qquad (1)$$
The equality continues to hold if we restrict (μ, Σ) to (μ, Σ)_S, (μ, Σ)_L, or (μ, Σ)_SU, respectively.
Proof: It is obvious that the right hand side is an upper bound on the left hand side, since for every
R ∈ (μ, Σ) there exists an r ∈ (X^Tμ, X^TΣX) given by r = X^TR. Similarly for (μ, Σ)_S,
(μ, Σ)_L, and (μ, Σ)_SU. However, given Theorems 1 and 2, one can then establish the converse.⁴
3 Application to Worst-case Value-at-Risk
We now apply these projection properties to analyze the worst case value-at-risk (VaR), a useful
risk criterion in many application areas. Consider the following constraint on a distribution R:
$$\Pr(-x^T R \le \gamma) \;\ge\; 1 - \epsilon, \qquad (2)$$
²A sufficient but not necessary condition for log-concavity is having a log-concave density. This can be used
to verify log-concavity of normal and uniform distributions. In the univariate case, log-concave distributions
are called strongly unimodal, which form only a proper subset of univariate unimodal distributions [10].
³If X is a vector, we can also extend this theorem to other multivariate unimodalities such as symmetric
star/block/convex unimodal.
⁴The closure of (μ, Σ), (μ, Σ)_S, (μ, Σ)_L, and (μ, Σ)_SU under linear projection is critical for Corollary 1
to hold. Corollary 1 fails for other kinds of multivariate unimodalities, such as symmetric star/block/convex
unimodal. It also fails for (μ, Σ)₊, a distribution family whose support is contained in the nonnegative orthant.
This is not surprising since determining whether the set (μ, Σ)₊ is empty is already NP-hard [11].
for given x, γ and ε ∈ (0, 1). In this case, the infimum over γ such that (2) is satisfied is referred
to as the ε-VaR of R. Within certain restricted distribution families, such as Q-radially symmetric
distributions, (2) can be (equivalently) transformed into a deterministic second order cone constraint
(depending on the range of ε) [3]. Unfortunately, determining whether (2) can be satisfied for given
x, γ and ε ∈ (0, 1) is NP-hard in general [8]. Suppose however that one knew the distribution of
R belonged to a certain family, such as (μ, Σ).⁵ Given such knowledge, it is natural to consider
whether (2) can be satisfied in a worst case sense. That is, consider
$$\Big[\inf_{R \in (\mu, \Sigma)} \Pr(-x^T R \le \gamma)\Big] \;\ge\; 1 - \epsilon. \qquad (3)$$
Here the infimum of γ values satisfying (3) is referred to as the worst-case ε-VaR. If we have additional information about the underlying distribution, such as symmetry or unimodality, the worst-case ε-VaR can be reduced. Importantly, using the results of the previous section, we can easily
determine the worst-case ε-VaR for various distribution families. These can also be used to provide
a tractable bound on the ε-VaR even when the distribution is known.
Proposition 1 For alternative distribution families, the worst-case ε-VaR constraint (3) is given by:
$$\text{if } R \in (\mu, \Sigma) \text{ then } \gamma \ge -\mu_x + \sqrt{\tfrac{1-\epsilon}{\epsilon}}\,\sigma_x, \qquad (4)$$
$$\text{if } R \in (\mu, \Sigma)_S \text{ then } \begin{cases} \gamma \ge -\mu_x + \sqrt{\tfrac{1}{2\epsilon}}\,\sigma_x, & \text{if } \epsilon \in (0, \tfrac12), \\ \gamma \ge -\mu_x, & \text{if } \epsilon \in [\tfrac12, 1), \end{cases} \qquad (5)$$
$$\text{if } R \in (\mu, \Sigma)_{SU} \text{ then } \begin{cases} \gamma \ge -\mu_x + \tfrac{2}{3}\sqrt{\tfrac{1}{2\epsilon}}\,\sigma_x, & \text{if } \epsilon \in (0, \tfrac12), \\ \gamma \ge -\mu_x, & \text{if } \epsilon \in [\tfrac12, 1), \end{cases} \qquad (6)$$
$$\text{if } R \sim N(\mu, \Sigma) \text{ then } \gamma \ge -\mu_x + \Phi^{-1}(1-\epsilon)\,\sigma_x, \qquad (7)$$
where μ_x = x^Tμ, σ_x = √(x^TΣx), and Φ(·) is the c.d.f. of the standard normal distribution N(0, 1).
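The four thresholds are simple closed forms. The following helper (ours, not from the paper) evaluates the minimal feasible γ for each family; it assumes SciPy for the normal quantile:

```python
import math
from scipy.stats import norm

def worst_case_var(mu_x, sigma_x, eps, family):
    """Minimal gamma satisfying the worst-case eps-VaR constraint (3),
    per Proposition 1; 'family' selects among bounds (4)-(7)."""
    if family == "general":                            # (4)
        k = math.sqrt((1 - eps) / eps)
    elif family == "symmetric":                        # (5)
        k = math.sqrt(1 / (2 * eps)) if eps < 0.5 else 0.0
    elif family == "symmetric_unimodal":               # (6)
        k = (2 / 3) * math.sqrt(1 / (2 * eps)) if eps < 0.5 else 0.0
    elif family == "gaussian":                         # (7)
        k = norm.ppf(1 - eps)
    else:
        raise ValueError(family)
    return -mu_x + k * sigma_x
```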
It turns out that some of the results in Proposition 1 are already known. In fact, the first bound (4) has been extensively
studied. However, given the results of the previous section, we can now provide a much simpler
proof.⁶ (This simplicity will also allow us to achieve some useful new bounds in Section 4 below.)
Proof: From Corollary 1 it follows that
$$\inf_{R \in (\mu, \Sigma)} \Pr(-x^T R \le \gamma) \;=\; \inf_{r \in (-\mu_x,\, \sigma_x^2)} \Pr(r \le \gamma) \;=\; 1 - \sup_{r \in (-\mu_x,\, \sigma_x^2)} \Pr(r > \gamma). \qquad (8)$$
Given that the problem is reduced to the univariate case, we simply exploit classical inequalities:
$$\text{if } x \in (\mu, \sigma^2) \text{ then } \Pr(x > t) \le \frac{\sigma^2}{\sigma^2 + (t-\mu)^2}, \qquad (9)$$
$$\text{if } x \in (\mu, \sigma^2)_S \text{ then } \Pr(x > t) \le \frac{1}{2}\min\Big(1,\; \frac{\sigma^2}{(t-\mu)^2}\Big), \qquad (10)$$
$$\text{if } x \in (\mu, \sigma^2)_{SU} \text{ then } \Pr(x > t) \le \frac{1}{2}\min\Big(1,\; \frac{4}{9}\,\frac{\sigma^2}{(t-\mu)^2}\Big), \qquad (11)$$
for t ≥ μ.⁷ Now to prove (4), simply plug (8) into (3) and notice that an application of (9) leads to
$$\gamma \ge -\mu_x \qquad \text{and} \qquad 1 - \frac{\sigma_x^2}{\sigma_x^2 + (\gamma + \mu_x)^2} \;\ge\; 1 - \epsilon.$$
(4) then follows by simple rearrangement. The same procedure can be used to prove (5), (6), and (7).
⁵We will return to the question of when such moment information is itself subject to uncertainty in Section 5.
⁶[2] and [3] provide a proof of (4) based on the multivariate Chebyshev inequality in [12]; [4] proves (4)
from dual optimality; and the proof in [6] utilizes the two-point support property of the general constraint (3).
⁷(9) is known as the (one-sided) Chebyshev inequality. The two-sided version of (11) is known as the Gauss
inequality. These classical bounds are tight. Proofs can be found in [13], for example.
Figure 1: Comparison of the coefficients in front of σ_x for different distribution families in Proposition 1 (left) and Proposition 2 (right). Only the range ε ∈ (0, ½) is depicted.
Proposition 1 clearly illustrates the benefit of prior knowledge. Figure 1 compares the coefficients
on σ_x among the different worst case VaR bounds for the different distribution families. The large gap between
the coefficients for general and symmetric (linear) unimodal distributions demonstrates how additional
constraints can generate much less conservative solutions while still ensuring robustness.
Beyond simplifying existing proofs, Proposition 1 can be used to extend some of the uses of the VaR
criterion in different application areas.
Minimax probability classification [2]: Lanckriet et al. [2] first studied the value-at-risk constraint
in binary classification. In this scenario, one is given labeled data from two different sources and
seeks a robust separating hyperplane. From the data, the distribution families (μ₁, Σ₁) and (μ₂, Σ₂)
can be estimated. Then a robust hyperplane can be recovered by minimizing the worst-case error:
$$\min_{x \ne 0,\, \gamma,\, \epsilon}\ \epsilon \quad \text{s.t.} \quad \Big[\inf_{R_1 \in (\mu_1, \Sigma_1)} \Pr(x^T R_1 \ge \gamma)\Big] \ge 1 - \epsilon \ \text{ and } \ \Big[\inf_{R_2 \in (\mu_2, \Sigma_2)} \Pr(x^T R_2 \le \gamma)\Big] \ge 1 - \epsilon, \qquad (12)$$
where x is the normal vector of the hyperplane, γ is the offset, and ε controls the error probability.
Note that the results in [2] follow from using the bound (4). However, interesting additional facts
arise when considering alternative distribution families. For example, consider symmetric distributions. In this case, suppose we knew in advance that the optimal ε lay in [½, 1), meaning that
no hyperplane predicts better than random guessing. Then the constraints in (12) become linear,
covariance information becomes useless in determining the optimal hyperplane, and the optimization concentrates solely on separating the means of the two classes. Although such a result might seem
surprising, it is a direct consequence of symmetry: the worst-case distributions are forced to put
probability mass arbitrarily far away on both sides of the mean, thereby eliminating any information
brought by the covariance. When the optimal ε lies in (0, ½), however, covariance information becomes
meaningful, since the worst-case distributions can no longer put probability mass arbitrarily far
away on both sides of the mean (owing to the existence of a hyperplane that predicts labels better
than random guessing). In this case, the optimization problems involving (μ, Σ)_S and (μ, Σ)_SU are
equivalent to that for (μ, Σ), except that the maximum error probability ε becomes smaller, which
is to be expected, since more information about the marginal distributions should make one more
confident in predicting the labels of future data.
Chance Constrained Linear Programming (CCLP) [3]: Consider a linear program
min_x a^Tx s.t. r^Tx ≥ 0. If the coefficient r is uncertain, it is clear that solving the linear program
merely using the expected value of r could result in a solution x that is sub-optimal or even infeasible. Calafiore and El Ghaoui studied this problem in [3], and imposed the inequality constraint
with high probability, leading to the so-called chance constrained linear program (CCLP):
$$\min_x\ a^T x \quad \text{s.t.} \quad \Big[\inf_{R \in (\mu, \Sigma)} \Pr(-x^T R \le 0)\Big] \;\ge\; 1 - \epsilon. \qquad (13)$$
In this case, γ is simply 0 and ε is given by the user. Depending on the value of ε, the chance
constraint can be equivalently transformed into a second order cone constraint or a linear constraint.
The work in [3] concentrates on the general and symmetric distribution families. In the latter case,
[3] uses the first part of inequality (5) as a sufficient condition for guaranteeing robust solutions.
Note however that from Corollary 1 and Proposition 1 one can now see that (5) is also a necessary
condition. Although the symmetric linear unimodal case is not discussed in [3], from Proposition 1
again one can see that incorporating bound (6) in (13) yields a looser constraint than does (5);
hence the feasible region will be enlarged and the optimum value of the CCLP potentially reduced,
corresponding to the intuition that increased prior knowledge leads to more optimized results.
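For the general family, bound (4) turns the chance constraint in (13) into the second order cone constraint μ^Tx ≥ √((1−ε)/ε)·‖Σ^{1/2}x‖₂. The following is a sketch of the resulting SOCP using cvxpy; it is our illustration only, and a real instance would also carry the application's other constraints:

```python
import cvxpy as cp
import numpy as np

def robust_cclp(a, mu, Sigma, eps):
    """Deterministic equivalent of the CCLP (13) under the general-family bound (4):
    min a^T x  s.t.  mu^T x >= sqrt((1 - eps) / eps) * ||Sigma^{1/2} x||_2."""
    n = len(a)
    L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(n))   # L @ L.T = Sigma
    x = cp.Variable(n)
    kappa = np.sqrt((1 - eps) / eps)
    constraints = [mu @ x >= kappa * cp.norm(L.T @ x, 2)]
    # Without the application's other linear constraints the objective can be
    # unbounded; they would be appended to `constraints` here.
    prob = cp.Problem(cp.Minimize(a @ x), constraints)
    prob.solve()
    return x.value
```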
Portfolio Selection [4]: In portfolio selection, let R represent the (uncertain) returns of a suite of
financial assets, and x the weighting one would like to put on the various assets. Here γ > −x^TR
represents an upper bound on the loss one might suffer with weighting x. The goal is to minimize
an upper bound on the loss that holds with high probability,⁸ say 1 − ε, specified by the user:
$$\min_{x,\, \gamma}\ \gamma \quad \text{s.t.} \quad \Big[\inf_{R \in (\mu, \Sigma)} \Pr(-x^T R \le \gamma)\Big] \;\ge\; 1 - \epsilon. \qquad (14)$$
This criterion has been studied by El Ghaoui et al. [4] in the worst case setting. Previous work has not
addressed the case when additional symmetry or linear unimodality information is available. However,
comparing the minimal values of γ in Proposition 1, we see that such additional information, such
as symmetry or unimodality, indeed decreases our potential loss, as shown clearly in Figure 1. This
makes sense, since the more one knows about the uncertain returns, the less risk one should have to bear.
Note also that when incorporating additional information, the optimal portfolio, represented by x, is
changed as well, but remains mean-variance efficient when ε ∈ (0, ½).
Uncertain MDPs with reward uncertainty: The standard planning problem in Markov decision
processes (MDPs) is to find a policy that maximizes the expected total discounted return. This
nonlinear optimization problem can be efficiently solved by dynamic programming, provided that
the model parameters (transition kernel and reward function) are exactly known. Unfortunately, this
is rarely the case in practice. Delage and Mannor [8] extend this problem to the uncertain case by
employing the value-at-risk type constraint (2) and assuming the unknown reward model and transition kernel are drawn from a known distribution (Gaussian and Dirichlet, respectively). Unfortunately,
[8] also proves that the constraint (2) is generally NP-hard to satisfy unless one assumes some very
restricted form of distribution, such as Gaussian. Alternatively, note that one can use the worst case
value-at-risk formulation (3) to obtain a tractable approximation to (2):
$$\min_{x,\, \gamma}\ \gamma \quad \text{s.t.} \quad \Big[\inf_{R \in (\mu, \Sigma)} \Pr(-x^T R \le \gamma)\Big] \;\ge\; 1 - \epsilon, \qquad (15)$$
where R is the reward function (unknown but assumed to belong to (μ, Σ)) and x represents a
discounted-stationary state-action visitation distribution (which can be used to recover an optimal
behavior policy). Although this worst case formulation (15) might appear to be conservative compared to working with a known distribution on R and using (2), when additional information about
the distribution is available, such as symmetry or unimodality, (15) can be brought very close to using a Gaussian distribution, as shown in Figure 1. Thus, given reasonable constraints, the minimax
approach does not have to be overly conservative, while providing robustness and tractability.
4 Application to Worst-case Conditional Value-at-risk
Finally, we investigate the more refined conditional value-at-risk (CVaR) criterion that bounds the
conditional expectation of losses beyond the value-at-risk (VaR). This criterion has been of growing
prominence in many areas recently. Consider the following quantity, defined as the mean of a tail
distribution:
\[
\tilde{f} = \mathbb{E}\big[-x^T R \,\big|\, -x^T R \ge \gamma^*\big], \quad \text{where } \gamma^* = \arg\min_\gamma \big\{\gamma : \Pr(-x^T R \le \gamma) \ge 1-\epsilon\big\}. \tag{16}
\]
Here, γ* is the value-at-risk and f̃ is the conditional value-at-risk of R. It is well known that the
CVaR, f̃, is always an upper bound on the VaR, γ*.
8 Note that seeking an upper bound on the loss that holds surely leads to a meaningless outcome. For example, if ε = 0, the optimization problem trivially says only that the loss of any portfolio will be no larger than ∞.
Although it might appear that dealing with the CVaR criterion entails greater complexity than the VaR, since VaR is directly involved in the
definition of CVaR, it turns out that CVaR can be more directly expressed as
\[
\tilde{f} \;=\; \min_\gamma \Big[\gamma + \frac{1}{\epsilon}\,\mathbb{E}\big(-x^T R - \gamma\big)^+\Big], \tag{17}
\]
where (x)^+ = max(0, x) [14]. Unlike the VaR constraint (2), (17) is always (jointly) convex in x
and γ. Thus if R were discrete, f̃ could be easily computed by a linear program [14; 5]. However,
the expectation in (17) involves a high dimensional integral in general, whose analytical solution
is not always available, thus f̃ is still hard to compute in practice. Although one potential remedy
might be to use Monte Carlo techniques to approximate the expectation, we instead take a robust
approach: as before, suppose one knew the distribution of R belonged to a certain family, such as
(μ, Σ). Given such knowledge, it is natural to consider the worst-case CVaR
\[
f \;=\; \sup_{R \sim (\mu,\Sigma)} \min_\gamma \Big[\gamma + \frac{1}{\epsilon}\,\mathbb{E}\big(-x^T R - \gamma\big)^+\Big]
\;=\; \min_\gamma \sup_{R \sim (\mu,\Sigma)} \Big[\gamma + \frac{1}{\epsilon}\,\mathbb{E}\big(-x^T R - \gamma\big)^+\Big], \tag{18}
\]
where the interchangeability of the min and sup operators follows from the classic minimax theorem
[15]. Importantly, as in the previous section, we can determine the worst-case CVaR for various
distribution families. If one has additional information about the underlying distribution, such as
symmetry or unimodality, the worst-case CVaR can be reduced. These can be used to provide a
tractable bound on the CVaR even when the distribution is known.
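Before turning to the worst-case results, here is a small sketch of how the Rockafellar-Uryasev form (17) behaves on a discrete (empirical) distribution, where the minimization over γ can be done by sorting; the estimator and all names below are our own illustrative choices, not from the paper.

```python
import numpy as np

def empirical_cvar(losses, eps):
    """gamma + E[(loss - gamma)^+]/eps evaluated at the empirical eps-quantile gamma,
    which is the mean of the worst eps-tail when eps*len(losses) is an integer."""
    losses = np.sort(losses)[::-1]                 # descending
    m = max(int(np.ceil(eps * len(losses))), 1)    # size of the eps-tail
    gamma = losses[m - 1]                          # empirical VaR at level eps
    return gamma + np.maximum(losses - gamma, 0.0).mean() / eps

rng = np.random.default_rng(1)
print(empirical_cvar(rng.normal(size=100_000), eps=0.05))  # ~2.06, cf. the Gaussian case (22)
```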
Proposition 2 For alternative distribution families, the worst-case CVaR is given by:
\[
\text{if } R \sim (\mu,\Sigma) \text{ then } \gamma^* = -\mu_x + \frac{1-2\epsilon}{2\sqrt{\epsilon(1-\epsilon)}}\,\sigma_x, \qquad
f = -\mu_x + \sqrt{\frac{1-\epsilon}{\epsilon}}\,\sigma_x, \tag{19}
\]
\[
\text{if } R \sim (\mu,\Sigma)_S \text{ then }
\begin{cases}
\gamma^* = -\mu_x + \frac{1}{\sqrt{8\epsilon}}\,\sigma_x, & f = -\mu_x + \frac{1}{\sqrt{2\epsilon}}\,\sigma_x, & \text{if } \epsilon \in (0, \tfrac{1}{2}] \\[4pt]
\gamma^* = -\mu_x - \frac{1}{\sqrt{8(1-\epsilon)}}\,\sigma_x, & f = -\mu_x + \frac{\sqrt{2(1-\epsilon)}}{2\epsilon}\,\sigma_x, & \text{if } \epsilon \in [\tfrac{1}{2}, 1)
\end{cases} \tag{20}
\]
\[
\text{if } R \sim (\mu,\Sigma)_{SU} \text{ then }
\begin{cases}
\gamma^* = -\mu_x + \frac{1}{3\sqrt{\epsilon}}\,\sigma_x, & f = -\mu_x + \frac{2}{3\sqrt{\epsilon}}\,\sigma_x, & \text{if } \epsilon \in (0, \tfrac{1}{3}] \\[4pt]
\gamma^* = -\mu_x + \sqrt{3}\,(1-2\epsilon)\,\sigma_x, & f = -\mu_x + \sqrt{3}\,(1-\epsilon)\,\sigma_x, & \text{if } \epsilon \in [\tfrac{1}{3}, \tfrac{2}{3}] \\[4pt]
\gamma^* = -\mu_x - \frac{1}{3\sqrt{1-\epsilon}}\,\sigma_x, & f = -\mu_x + \frac{2\sqrt{1-\epsilon}}{3\epsilon}\,\sigma_x, & \text{if } \epsilon \in [\tfrac{2}{3}, 1)
\end{cases} \tag{21}
\]
\[
\text{if } R \sim N(\mu,\Sigma) \text{ then } f = -\mu_x + \frac{e^{-(\Phi^{-1}(1-\epsilon))^2/2}}{\sqrt{2\pi}\,\epsilon}\,\sigma_x, \tag{22}
\]
where \(\mu_x = x^T\mu\), \(\sigma_x^2 = x^T\Sigma x\), and \(\Phi(\cdot)\) is the c.d.f. of a standard normal distribution N(0, 1).
The results of Proposition 2 are a novel contribution of this paper, with the exception of (22), which
is a standard result in stochastic programming [7].
Proof: We know from Corollary 1 that
\[
\sup_{R \sim (\mu,\Sigma)} \mathbb{E}\big(-x^T R - \gamma\big)^+ \;=\; \sup_{r \sim (-\mu_x,\,\sigma_x^2)} \mathbb{E}\big(r - \gamma\big)^+, \tag{23}
\]
which reduces the problem to the univariate case. To proceed, we will need to make use of the
univariate results given in Proposition 3 below. Assuming Proposition 3 for now, we show how to
prove (19): in this case, substitute (23) into (18) and apply (24) from Proposition 3 below to obtain
\[
f \;=\; \min_\gamma \Big[\gamma + \frac{1}{2\epsilon}\Big((-\mu_x - \gamma) + \sqrt{\sigma_x^2 + (-\mu_x - \gamma)^2}\Big)\Big].
\]
This is a convex univariate optimization problem in γ. Taking the derivative with respect to γ and
setting it to zero gives \(\gamma^* = -\mu_x + \frac{1-2\epsilon}{2\sqrt{\epsilon(1-\epsilon)}}\,\sigma_x\). Substituting back, we obtain \(f = -\mu_x + \sqrt{\frac{1-\epsilon}{\epsilon}}\,\sigma_x\).
A similar strategy can be used to prove (20), (21), and (22).
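As a numerical sanity check on this derivation, the following sketch minimizes the substituted objective directly and compares it with the closed forms in (19); the parameter values are arbitrary illustrative choices of our own.

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu_x, sigma_x, eps = 0.3, 1.5, 0.25
# g(gamma) = gamma + (1/(2 eps)) * ((-mu_x - gamma) + sqrt(sigma_x^2 + (-mu_x - gamma)^2))
g = lambda t: t + ((-mu_x - t) + np.hypot(sigma_x, mu_x + t)) / (2 * eps)
res = minimize_scalar(g)
gamma_star = -mu_x + (1 - 2 * eps) / (2 * np.sqrt(eps * (1 - eps))) * sigma_x
f_star = -mu_x + np.sqrt((1 - eps) / eps) * sigma_x
print(res.x, gamma_star)   # both ~0.566
print(res.fun, f_star)     # both ~2.298
```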
As with Proposition 1, Proposition 2 illustrates the benefit of prior knowledge. Figure 1 (right)
compares the coefficients on σ_x among different worst-case CVaR quantities for different families.
Comparing VaR and CVaR in Figure 1 shows that unimodality has less impact on improving CVaR.
A key component of Proposition 2 is its reliance on the following important univariate results. The
following proposition gives tight bounds on the expected survival function for the various families.
Proposition 3 For alternative distribution families, the expected univariate survival functions are:
\[
\sup_{x \sim (\mu,\sigma^2)} \mathbb{E}(x-t)^+ \;=\; \frac{1}{2}\Big[(\mu - t) + \sqrt{\sigma^2 + (\mu - t)^2}\Big], \tag{24}
\]
\[
\sup_{x \sim (\mu,\sigma^2)_S} \mathbb{E}(x-t)^+ \;=\;
\begin{cases}
\dfrac{\mu - t + \sigma}{2}, & \text{if } \mu - \frac{\sigma}{2} \le t \le \mu + \frac{\sigma}{2} \\[4pt]
\dfrac{\sigma^2}{8(t-\mu)}, & \text{if } t > \mu + \frac{\sigma}{2} \\[4pt]
(\mu - t) + \dfrac{\sigma^2}{8(\mu - t)}, & \text{if } t < \mu - \frac{\sigma}{2}
\end{cases} \tag{25}
\]
\[
\sup_{x \sim (\mu,\sigma^2)_{SU}} \mathbb{E}(x-t)^+ \;=\;
\begin{cases}
\dfrac{(\sqrt{3}\,\sigma - t + \mu)^2}{4\sqrt{3}\,\sigma}, & \text{if } \mu - \frac{\sigma}{\sqrt{3}} \le t \le \mu + \frac{\sigma}{\sqrt{3}} \\[4pt]
\dfrac{\sigma^2}{9(t-\mu)}, & \text{if } t > \mu + \frac{\sigma}{\sqrt{3}} \\[4pt]
(\mu - t) + \dfrac{\sigma^2}{9(\mu - t)}, & \text{if } t < \mu - \frac{\sigma}{\sqrt{3}}
\end{cases} \tag{26}
\]
Here (26) is a further novel contribution of this paper. Proofs of (24) and (25) can be found in [1].
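For reference, here is a direct transcription of the bounds (24)-(26) as reconstructed above, together with a quick Monte Carlo sanity check on one member of the symmetric unimodal family; the test distribution and the function names are our own choices.

```python
import numpy as np

def bound_general(mu, s2, t):                      # eq. (24)
    return 0.5 * ((mu - t) + np.sqrt(s2 + (mu - t) ** 2))

def bound_symmetric(mu, s2, t):                    # eq. (25)
    s = np.sqrt(s2)
    if t > mu + s / 2:  return s2 / (8 * (t - mu))
    if t < mu - s / 2:  return (mu - t) + s2 / (8 * (mu - t))
    return (mu - t + s) / 2

def bound_sym_unimodal(mu, s2, t):                 # eq. (26)
    s = np.sqrt(s2)
    if t > mu + s / np.sqrt(3):  return s2 / (9 * (t - mu))
    if t < mu - s / np.sqrt(3):  return (mu - t) + s2 / (9 * (mu - t))
    return (np.sqrt(3) * s - t + mu) ** 2 / (4 * np.sqrt(3) * s)

# Uniform(-sqrt(3), sqrt(3)) is symmetric unimodal with mean 0 and variance 1;
# its empirical E(x - t)^+ must sit below all three bounds (and is tight for (26)
# in the middle regime, where the widest uniform is the extremal distribution).
rng = np.random.default_rng(2)
x = rng.uniform(-np.sqrt(3), np.sqrt(3), 10 ** 6)
for t in (-1.0, 0.0, 0.5, 2.0):
    emp = np.maximum(x - t, 0.0).mean()
    print(t, emp, bound_sym_unimodal(0, 1, t), bound_symmetric(0, 1, t), bound_general(0, 1, t))
```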
Interestingly, to the best of our knowledge, the worst-case CVaR criterion has not yet been applied
to any of the four problems mentioned in the previous section.9 Given the space constraints, we can
only discuss the direct application of worst-case CVaR to the portfolio selection problem. We note
that CVaR has been recently applied to ν-SVM learning in [16].
Implications for Portfolio Selection: By comparing Propositions 1 and 2, the first interesting conclusion one can reach about portfolio selection is that, without considering any additional information, the worst-case CVaR criterion yields the same optimal portfolio weighting x as the worst-case
VaR criterion (recall that VaR minimizes γ in Proposition 1 by adjusting x, while CVaR minimizes
f by adjusting x in Proposition 2). However, the worst-case distributions for the two approaches are
not the same, which can be seen from the relation (16) between VaR and CVaR and by observing that γ
in (4) is not the same as in (19). Next, when additional symmetry information is taken into account
and ε ∈ (0, 1/2), CVaR and VaR again select the same portfolio but under different worst-case distributions. When unimodality is added, the CVaR criterion finally begins to select different portfolios
than VaR.
5 Concluding Remarks
We have provided a simpler yet broader proof of the general linear projection property for distribution families with given mean and covariance. The proof strategy can be easily extended to more
restricted distribution families. A direct implication of our results is that worst-case analyses of multivariate expectations can often be reduced to those of univariate ones. By combining this trick with
classic univariate inequalities, we were able to provide worst-case analyses of two widely adopted
constraints (based on value-at-risk criteria). Our analysis recovers some existing results in a simpler
way while also providing new insights on incorporating additional information.
Above, we assumed the first and second moments of the underlying distribution were precisely
known, which of course is questionable in practice. Fortunately, there are standard techniques for
handling such additional uncertainty. One strategy, proposed in [2], is to construct a (bounded and
convex) uncertainty set U over (μ, Σ), and then apply a similar minimax formulation but with
respect to (μ, Σ) ∈ U. As shown in [2], appropriately chosen uncertainty sets amount to adding
straightforward regularizations to the original problem. A second approach is simply to lower one's
confidence of the constraints and rely on the fact that the moment estimates are close to their true
values within some additional confidence bound [17]. That is, instead of enforcing the constraint
(3) or (18) surely, one can instead plug in the estimated moments and argue that the constraints will be
satisfied with some diminished probability. For an application of this strategy in CCLP, see [3].
Acknowledgement
We gratefully acknowledge support from the Alberta Ingenuity Centre for Machine Learning, the
Alberta Ingenuity Fund, iCORE and NSERC. Csaba Szepesvári is on leave from MTA SZTAKI, Bp.,
Hungary.
9 Except the very recent work of [9] on portfolio selection.
References
[1] R. Jagannathan. "Minimax procedure for a class of linear programs under uncertainty". Operations Research, vol. 25(1):pp. 173-177, 1977.
[2] Gert R. G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya and Michael I. Jordan. "A robust minimax approach to classification". Journal of Machine Learning Research, vol. 3:pp. 555-582, 2002.
[3] G. C. Calafiore and Laurent El Ghaoui. "On distributionally robust chance-constrained linear programs". Journal of Optimization Theory and Applications, vol. 130(1):pp. 1-22, 2006.
[4] Laurent El Ghaoui, Maksim Oks and Francois Oustry. "Worst-case value-at-risk and robust portfolio optimization: a conic programming approach". Operations Research, vol. 51(4):pp. 542-556, 2003.
[5] Shu-Shang Zhu and Masao Fukushima. "Worst-case conditional value-at-risk with application to robust portfolio management". Operations Research, vol. 57(5):pp. 1155-1168, 2009.
[6] Ioana Popescu. "Robust mean-covariance solutions for stochastic optimization". Operations Research, vol. 55(1):pp. 98-112, 2007.
[7] András Prékopa. Stochastic Programming. Springer, 1995.
[8] Erick Delage and Shie Mannor. "Percentile optimization for Markov decision processes with parameter uncertainty". Operations Research, to appear 2009.
[9] Li Chen, Simai He and Shuzhong Zhang. "Tight Bounds for Some Risk Measures, with Applications to Robust Portfolio Selection". Tech. rep., Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, 2009.
[10] Sudhakar Dharmadhikari and Kumar Joag-Dev. Unimodality, Convexity, and Applications. Academic Press, 1988.
[11] Dimitris Bertsimas and Ioana Popescu. "Optimal inequalities in probability theory: a convex optimization approach". SIAM Journal on Optimization, vol. 15(3):pp. 780-804, 2005.
[12] Albert W. Marshall and Ingram Olkin. "Multivariate Chebyshev inequalities". Annals of Mathematical Statistics, vol. 31(4):pp. 1001-1014, 1960.
[13] Ioana Popescu. "A semidefinite programming approach to optimal moment bounds for convex classes of distributions". Mathematics of Operations Research, vol. 30(3):pp. 632-657, 2005.
[14] R. Tyrrell Rockafellar and Stanislav Uryasev. "Optimization of conditional value-at-risk". Journal of Risk, vol. 2(3):pp. 493-517, 2000.
[15] Ky Fan. "Minimax Theorems". Proceedings of the National Academy of Sciences, vol. 39(1):pp. 42-47, 1953.
[16] Akiko Takeda and Masashi Sugiyama. "ν-support vector machine as conditional value-at-risk minimization". In Proceedings of the 25th International Conference on Machine Learning, pp. 1056-1063. 2008.
[17] John Shawe-Taylor and Nello Cristianini. "Estimating the moments of a random vector with applications". In Proceedings of GRETSI 2003 Conference, pp. 47-52. 2003.
Streaming k-means approximation
Nir Ailon
Google Research
[email protected]
Ragesh Jaiswal∗
Columbia University
[email protected]
Claire Monteleoni†
Columbia University
[email protected]
Abstract
We provide a clustering algorithm that approximately optimizes the k-means objective, in the one-pass streaming setting. We make no assumptions about the
data, and our algorithm is very light-weight in terms of memory and computation. This setting is applicable to unsupervised learning on massive data sets, or
resource-constrained devices. The two main ingredients of our theoretical work
are: a derivation of an extremely simple pseudo-approximation batch algorithm
for k-means (based on the recent k-means++), in which the algorithm is allowed
to output more than k centers, and a streaming clustering algorithm in which batch
clustering algorithms are performed on small inputs (fitting in memory) and combined in a hierarchical manner. Empirical evaluations on real and simulated data
reveal the practical utility of our method.
1 Introduction
As commercial, social, and scientific data sources continue to grow at an unprecedented rate, it is
increasingly important that algorithms to process and analyze this data operate in online, or one-pass
streaming settings. The goal is to design light-weight algorithms that make only one pass over the
data. Clustering techniques are widely used in machine learning applications, as a way to summarize
large quantities of high-dimensional data, by partitioning them into 'clusters' that are useful for
the specific application. The problem with many heuristics designed to implement some notion of
clustering is that their outputs can be hard to evaluate. Approximation guarantees, with respect to
some reasonable objective, are therefore useful. The k-means objective is a simple, intuitive, and
widely-cited clustering objective for data in Euclidean space. However, although many clustering
algorithms have been designed with the k-means objective in mind, very few have approximation
guarantees with respect to this objective.
In this work, we give a one-pass streaming algorithm for the k-means problem. We are not aware
of previous approximation guarantees with respect to the k-means objective that have been shown
for simple clustering algorithms that operate in either online or streaming settings. We extend work
of Arthur and Vassilvitskii [AV07] to provide a bi-criterion approximation algorithm for k-means,
in the batch setting. They define a seeding procedure which chooses a subset of k points from a
batch of points, and they show that this subset gives an expected O(log(k))-approximation to the k-means objective. This seeding procedure is followed by Lloyd's algorithm,1 which works very well
in practice with the seeding. The combined algorithm is called k-means++, and is an O(log(k))-approximation algorithm, in expectation.2 We modify k-means++ to obtain a new algorithm, k-means#, which chooses a subset of O(k log(k)) points, and we show that the chosen subset of
∗ Department of Computer Science. Research supported by DARPA award HR0011-08-1-0069.
† Center for Computational Learning Systems
1 Lloyd's algorithm is popularly known as the k-means algorithm.
2 Since the approximation guarantee is proven based on the seeding procedure alone, for the purposes of this exposition we denote the seeding procedure as k-means++.
points gives a constant approximation to the k-means objective. Apart from giving us a bi-criterion
approximation algorithm, our modified seeding procedure is very simple to analyze.
[GMMM+03] defines a divide-and-conquer strategy to combine multiple bi-criterion approximation
algorithms for the k-medoid problem to yield a one-pass streaming approximation algorithm for
k-median. We extend their analysis to the k-means problem and then use k-means++ and k-means#
in the divide-and-conquer strategy, yielding an extremely efficient single pass streaming algorithm
with an O(c^r log(k))-approximation guarantee, where r ≈ log n/log M, n is the number of input
points in the stream and M is the amount of work memory available to the algorithm. Empirical
evaluations, on simulated and real data, demonstrate the practical utility of our techniques.
1.1 Related work
There is much literature on both clustering algorithms [Gon85, Ind99, VW02, GMMM+03,
KMNP+04, ORSS06, AV07, CR08, BBG09, AL09], and streaming algorithms [Ind99, GMMM+03,
M05, McG07].3 There has also been work on combining these settings: designing clustering algorithms that operate in the streaming setting [Ind99, GMMM+03, CCP03]. Our work is inspired by
that of Arthur and Vassilvitskii [AV07], and Guha et al. [GMMM+03], which we mentioned above
and will discuss in further detail. k-means++, the seeding procedure in [AV07], had previously been
analyzed by [ORSS06], under special assumptions on the input data.
In order to be useful in machine learning applications, we are concerned with designing algorithms
that are extremely light-weight and practical. k-means++ is efficient, very simple, and performs
well in practice. There do exist constant approximations to the k-means objective, in the nonstreaming setting, such as a local search technique due to [KMNP+04].4 A number of works
[LV92, CG99, Ind99, CMTS02, AGKM+04] give constant approximation algorithms for the related k-median problem in which the objective is to minimize the sum of distances of the points to
their nearest centers (rather than the square of the distances as in k-means), and the centers must be
a subset of the input points. It is popularly believed that most of these algorithms can be extended to
work for the k-means problem without too much degradation of the approximation; however, there
is no formal evidence for this yet. Moreover, the running times of most of these algorithms depend
worse than linearly on the parameters (n, k, and d) which makes these algorithms less useful in practice. As future work, we propose analyzing variants of these algorithms in our streaming clustering
algorithm, with the goal of yielding a streaming clustering algorithm with a constant approximation
to the k-means objective.
Finally, it is important to make a distinction from some lines of clustering research which involve
assumptions on the data to be clustered. Common assumptions include i.i.d. data, e.g. [BL08], and
data that admits a clustering with well separated means e.g. in [VW02, ORSS06, CR08]. Recent
work [BBG09] assumes a ?target? clustering for the specific application and data set, that is close
to any constant approximation of the clustering objective. In contrast, we prove approximation
guarantees with respect to the optimal k-means clustering, with no assumptions on the input data.5
As in [AV07], our probabilistic guarantees are only with respect to randomness in the algorithm.
1.1.1 Preliminaries
The k-means clustering problem is defined as follows: Given n points X ⊂ R^d and a weight
function w : X → R,6 the goal is to find a subset C ⊂ R^d, |C| = k, such that the following quantity is
minimized: \(\phi_C = \sum_{x \in X} w(x)\, D(x, C)^2\), where D(x, C) denotes the ℓ2 distance of x to the nearest
point in C. When the subset C is clear from the context, we denote this distance by D(x). Also,
for two points x, y, D(x, y) denotes the ℓ2 distance between x and y. The subset C is alternatively
called a clustering of X and φ_C is called the potential function corresponding to the clustering. We
will use the term 'center' to refer to any c ∈ C.
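For concreteness, here is a small helper implementing D(x, C)² and the potential φ_C as just defined; the function names are our own and are reused informally in the sketches that follow.

```python
import numpy as np

def dists_sq(X, C):
    """D(x, C)^2 for every row x of X; X is (n, d), C is (m, d)."""
    diff = X[:, None, :] - np.asarray(C)[None, :, :]
    return (diff ** 2).sum(axis=-1).min(axis=1)

def potential(X, C, w=None):
    """phi_C = sum_x w(x) * D(x, C)^2; w defaults to all-ones (unweighted case)."""
    d2 = dists_sq(X, C)
    return d2.sum() if w is None else (np.asarray(w) * d2).sum()
```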
3 For a comprehensive survey of streaming results and literature, refer to [M05].
4 In recent, independent work, Aggarwal, Deshpande, and Kannan [ADK09] extend the seeding procedure of k-means++ to obtain a constant factor approximation algorithm which outputs O(k) centers. They use similar techniques to ours, but reduce the number of centers by using a stronger concentration property.
5 It may be interesting future work to analyze our algorithm in special cases, such as well-separated clusters.
6 For the unweighted case, we can assume that w(x) = 1 for all x.
Definition 1.1 (Competitive ratio, b-approximation). Given an algorithm B for the k-means problem, let φ_C be the potential of the clustering C returned by B (on some input set which is implicit)
and let φ_{C_OPT} denote the potential of the optimal clustering C_OPT. Then the competitive ratio is
defined to be the worst case ratio φ_C / φ_{C_OPT}. The algorithm B is said to be a b-approximation algorithm
if φ_C / φ_{C_OPT} ≤ b.
The previous definition might be too strong for an approximation algorithm for some purposes. For
example, the clustering algorithm performs poorly when it is constrained to output k centers but it
might become competitive when it is allowed to output more centers.
Definition 1.2 ((a, b)-approximation). We call an algorithm B an (a, b)-approximation for the k-means problem if it outputs a clustering C with at most ak centers with potential φ_C such that φ_C / φ_{C_OPT} ≤ b in
the worst case, where a ≥ 1, b ≥ 1.
Note that for simplicity, we measure memory in terms of words, which essentially means that
we assume a point in R^d can be stored in O(1) space.
2 k-means#: The advantages of careful and liberal seeding
The k-means++ algorithm is an expected Θ(log k)-approximation algorithm. In this section, we
extend the ideas in [AV07] to get an (O(log k), O(1))-approximation algorithm. Here is the k-means++ algorithm:
1. Choose an initial center c1 uniformly at random from X.
2. Repeat (k − 1) times:
3.   Choose the next center ci, selecting ci = x′ ∈ X with probability D(x′)² / Σ_{x∈X} D(x)².
(Here D(·) denotes the distance w.r.t. the subset of centers chosen in the previous rounds.)
Algorithm 1: k-means++
In the original definition of k-means++ in [AV07], the above algorithm is followed by Lloyd's
algorithm. The above algorithm is used as a seeding step for Lloyd's algorithm, which is known
to give the best results in practice. On the other hand, the theoretical guarantee of k-means++
comes from analyzing this seeding step and not Lloyd's algorithm. So, for our analysis we focus on
this seeding step. The running time of the algorithm is O(nkd).
In the above algorithm X denotes the set of given points and for any point x, D(x) denotes the
distance of this point from the nearest center among the centers chosen in the previous rounds. To
get an (O(log k), O(1))-approximation algorithm, we make a simple change to the above algorithm.
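First, here is a compact sketch of the seeding step of Algorithm 1, with the incremental distance update that gives the O(nkd) running time; the optional integer weights anticipate the weighted variant needed in Section 3. The code and its names are our own illustration, and it assumes the sampling probabilities never become identically zero (i.e., X has more than k distinct points).

```python
import numpy as np

def kmeanspp_seed(X, k, w=None, rng=None):
    """D^2-sampling seeding of Algorithm 1; w are optional integer multiplicities."""
    rng = rng or np.random.default_rng()
    n = len(X)
    w = np.ones(n) if w is None else np.asarray(w, dtype=float)
    i = rng.choice(n, p=w / w.sum())                # step 1: first center
    C = [X[i]]
    d2 = ((X - X[i]) ** 2).sum(axis=1)              # D(x)^2 w.r.t. centers so far
    for _ in range(k - 1):                          # steps 2-3
        p = w * d2
        i = rng.choice(n, p=p / p.sum())            # D^2 sampling
        C.append(X[i])
        d2 = np.minimum(d2, ((X - X[i]) ** 2).sum(axis=1))  # incremental update -> O(nkd)
    return np.array(C)
```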
We first set up the tools for analysis. These are the basic lemmas from [AV07]. We will need the
following definition first:
Definition 2.1 (Potential w.r.t. a set). Given a clustering C, its potential with respect to some set A
is denoted by φ_C(A) and is defined as \(\phi_C(A) = \sum_{x \in A} D(x)^2\), where D(x) is the distance of the
point x from the nearest point in C.
Lemma 2.2 ([AV07], Lemma 3.1). Let A be an arbitrary cluster in C_OPT, and let C be the clustering
with just one center, chosen uniformly at random from A. Then E[φ_C(A)] = 2·φ_{C_OPT}(A).
Corollary 2.3. Let A be an arbitrary cluster in C_OPT, and let C be the clustering with just one
center, which is chosen uniformly at random from A. Then Pr[φ_C(A) < 8·φ_{C_OPT}(A)] ≥ 3/4.
Proof. The proof follows from Markov's inequality.
Lemma 2.4 ([AV07], Lemma 3.2). Let A be an arbitrary cluster in C_OPT, and let C be an arbitrary
clustering. If we add a random center to C from A, chosen with D² weighting, to get C′, then
E[φ_{C′}(A)] ≤ 8·φ_{C_OPT}(A).
Corollary 2.5. Let A be an arbitrary cluster in C_OPT, and let C be an arbitrary clustering. If
we add a random center to C from A, chosen with D² weighting, to get C′, then Pr[φ_{C′}(A) <
32·φ_{C_OPT}(A)] ≥ 3/4.
We will use k-means++ and the above two lemmas to obtain an (O(log k), O(1))-approximation
algorithm for the k-means problem. Consider the following algorithm:
1. Choose 3·log k centers independently and uniformly at random from X.
2. Repeat (k − 1) times:
3.   Choose 3·log k centers independently, each with probability D(x′)² / Σ_{x∈X} D(x)².
(Here D(·) denotes the distance w.r.t. the subset of centers chosen in the previous rounds.)
Algorithm 2: k-means#
Note that the algorithm is almost the same as the k-means++ algorithm except that in each round
of choosing centers, we pick O(log k) centers rather than a single center. The running time of the
above algorithm is clearly O(ndk log k).
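A corresponding sketch of Algorithm 2 follows; the only change from the previous sketch is drawing several centers per round, all from the same D²-distribution of the preceding rounds. The paper writes 3·log k without a base, so base 2 below is our assumption.

```python
import numpy as np

def kmeans_sharp_seed(X, k, rng=None):
    """k-means# seeding: 3*ceil(log2 k) centers per round (log base assumed)."""
    rng = rng or np.random.default_rng()
    n = len(X)
    t = 3 * int(np.ceil(np.log2(max(k, 2))))
    idx = rng.choice(n, size=t)                          # round 1: uniform draws
    C = list(X[idx])
    d2 = ((X[:, None, :] - X[idx][None]) ** 2).sum(-1).min(1)
    for _ in range(k - 1):                               # k-1 D^2-sampling rounds
        idx = rng.choice(n, size=t, p=d2 / d2.sum())     # t i.i.d. draws, same law
        C.extend(X[idx])
        new = ((X[:, None, :] - X[idx][None]) ** 2).sum(-1).min(1)
        d2 = np.minimum(d2, new)                         # update only between rounds
    return np.array(C)
```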
Let A = {A1, ..., Ak} denote the set of clusters in the optimal clustering C_OPT. Let C^i denote the
clustering after the i-th round of choosing centers. Let A_c^i denote the subset of clusters of A such that
∀A ∈ A_c^i, φ_{C^i}(A) ≤ 32·φ_{C_OPT}(A).
We call this subset of clusters the 'covered' clusters. Let A_u^i = A \ A_c^i be the subset of 'uncovered'
clusters. The following simple lemma shows that with constant probability step (1) of k-means#
picks a center such that at least one of the clusters gets covered, or in other words, |A_c^1| ≥ 1. Let us
call this event E.
Lemma 2.6. Pr[E] ≥ 1 − 1/k.
Proof. The proof easily follows from Corollary 2.3.
Let X_c^i = ∪_{A∈A_c^i} A and let X_u^i = X \ X_c^i. Now after the i-th round, either φ_{C^i}(X_c^i) ≤ φ_{C^i}(X_u^i)
or otherwise. In the former case, using Corollary 2.5, we show that the probability of covering an
uncovered cluster in the (i+1)-th round is large. In the latter case, we will show that the current set
of centers is already competitive, with constant approximation ratio. Let us start with the latter case.
Lemma 2.7. If event E occurs (i.e., |A_c^1| ≥ 1) and for any i > 1, φ_{C^i}(X_c^i) > φ_{C^i}(X_u^i), then φ_{C^i} ≤ 64·φ_{C_OPT}.
Proof. We get the main result using the following sequence of inequalities: φ_{C^i} = φ_{C^i}(X_c^i) + φ_{C^i}(X_u^i) ≤ φ_{C^i}(X_c^i) + φ_{C^i}(X_c^i) ≤ 2·32·φ_{C_OPT}(X_c^i) ≤ 64·φ_{C_OPT} (using the definition of X_c^i).
Lemma 2.8. If for any i ≥ 1, φ_{C^i}(X_c^i) ≤ φ_{C^i}(X_u^i), then Pr[|A_c^{i+1}| ≥ |A_c^i| + 1] ≥ 1 − 1/k.
Proof. Note that for each center drawn in the (i+1)-th round, the probability that it is chosen from a cluster
not in A_c^i is at least φ_{C^i}(X_u^i) / (φ_{C^i}(X_c^i) + φ_{C^i}(X_u^i)) ≥ 1/2. Conditioned on this event, by
Corollary 2.5, with probability at least 3/4 the drawn center x satisfies φ_{C^i ∪ {x}}(A) ≤ 32·φ_{C_OPT}(A) for the
uncovered cluster A ∈ A_u^i it was drawn from. This means that each of the 3·log k independently chosen
centers x in round (i+1) covers some uncovered cluster A ∈ A_u^i with probability at least 3/8, which further implies that
with probability at least 1 − 1/k at least one of the chosen centers x in round (i+1) satisfies
φ_{C^i ∪ {x}}(A) ≤ 32·φ_{C_OPT}(A) for some uncovered cluster A ∈ A_u^i.
We use the above two lemmas to prove our main theorem.
Theorem 2.9. k-means# is an (O(log k), O(1))-approximation algorithm.
Proof. From Lemma 2.6 we know that event E (i.e., |A_c^1| ≥ 1) occurs with probability at least 1 − 1/k; assume it does. Given this, suppose for
some i > 1, after the i-th round φ_{C^i}(X_c^i) > φ_{C^i}(X_u^i). Then from Lemma 2.7 we have φ_C ≤ φ_{C^i} ≤
64·φ_{C_OPT}. If no such i exists, then from Lemma 2.8 we get that the probability that there exists a
cluster A ∈ A such that A is not covered even after k rounds (i.e., the end of the algorithm) is at most
1 − (1 − 1/k)^k ≤ 3/4. So with probability at least 1/4, the algorithm covers all the clusters in A.
In this case, from Lemma 2.8 and the definition of covered clusters, we have φ_C = φ_{C^k} ≤ 32·φ_{C_OPT}.
We have shown that k-means# is a randomized algorithm for clustering which with probability at
least 1/4 gives a clustering with competitive ratio 64.
3 A single pass streaming algorithm for k-means
In this section, we will provide a single pass streaming algorithm. The basic ingredient for the algorithm is a divide and conquer strategy defined by [GMMM+03] which uses bi-criterion approximation algorithms in the batch setting. We will use k-means++, which is a (1, O(log k))-approximation
algorithm, and k-means#, which is an (O(log k), O(1))-approximation algorithm, to construct a single
pass streaming O(log k)-approximation algorithm for the k-means problem. In the next subsection, we
3.1 A streaming (a,b)-approximation for k-means
We will show that a simple streaming divide-and-conquer scheme, analyzed by [GMMM+03] with
respect to the k-medoid objective, can be used to approximate the k-means objective. First we
present the scheme due to [GMMM+03], where in this case we use k-means-approximating algorithms as input.
Inputs: (a) Point set S ⊆ R^d. Let n = |S|.
(b) Number of desired clusters, k ∈ N.
(c) A, an (a, b)-approximation algorithm to the k-means objective.
(d) A′, an (a′, b′)-approximation algorithm to the k-means objective.
1. Divide S into groups S1, S2, ..., Sℓ
2. For each i ∈ {1, 2, ..., ℓ}
3.   Run A on Si to get ≤ ak centers Ti = {ti1, ti2, ...}
4.   Denote the induced clusters of Si as Si1 ∪ Si2 ∪ ···
5. Sw ← T1 ∪ T2 ∪ ··· ∪ Tℓ, with weights w(tij) ← |Sij|
6. Run A′ on Sw to get ≤ a′k centers T
7. Return T
Algorithm 3: [GMMM+03] Streaming divide-and-conquer clustering
First note that when every batch Si has size √(nk), this algorithm takes one pass, and O(a√(nk))
memory. Now we will give an approximation guarantee.
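A one-pass driver for Algorithm 3 might look as follows; A and A′ are seeding routines such as the sketches above, and the weights w(t_ij) = |S_ij| come from assigning each batch point to its nearest returned center. The names and structure are our own illustration, not code from the paper.

```python
import numpy as np

def stream_cluster(batches, k, A, Aprime):
    """One pass of Algorithm 3. A(S, k) returns <= a*k centers; Aprime(S, k, w)
    clusters the weighted union of all returned centers."""
    centers, weights = [], []
    for S in batches:                                   # single pass over the stream
        T = A(S, k)                                     # step 3
        j = ((S[:, None, :] - T[None]) ** 2).sum(-1).argmin(1)  # induced clusters
        centers.append(T)
        weights.append(np.bincount(j, minlength=len(T)).astype(float))
    Sw, w = np.concatenate(centers), np.concatenate(weights)    # steps 5-6
    return Aprime(Sw, k, w)
```

Here `batches` would be any iterator yielding arrays of roughly √(nk) rows each, so only one batch is ever held in memory.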
Theorem 3.1. The algorithm above outputs a clustering that is an (a′, 2b + 4b′(b + 1))-approximation to the k-means objective.
The a′ approximation of the desired number of centers follows directly from the approximation
property of A′, with respect to the number of centers, since A′ is the last algorithm to be run. It
remains to show the approximation of the k-means objective. The proof, which appears in the
Appendix, involves extending the analysis of [GMMM+03] to the case of the k-means objective.
Using the exposition in Dasgupta's lecture notes [Das08] of the proof due to [GMMM+03], our
extension is straightforward, and differs in the following ways from the k-medoid analysis.
1. The k-means objective involves squared distance (as opposed to k-medoid, in which the
distance is not squared), so the triangle inequality cannot be invoked directly. We replace it
with an application of the triangle inequality, followed by (a + b)² ≤ 2a² + 2b², everywhere
it occurs, introducing several factors of 2.
2. Cluster centers are chosen from R^d for the k-means problem, so in various parts of the
proof we save an approximation factor of 2 relative to the k-medoid problem, in which cluster
centers must be chosen from the input data.
3.2 Using k-means++ and k-means# in the divide-and-conquer strategy
In the previous subsection, we saw how an (a, b)-approximation algorithm A and an (a′, b′)-approximation algorithm A′ can be used to get a single pass (a′, 2b + 4b′(b + 1))-approximation
streaming algorithm. We now have two randomized algorithms: k-means#, which with probability
at least 1/4 is a (3 log k, 64)-approximation algorithm, and k-means++, which is a (1, O(log k))-approximation algorithm (the approximation factor being in expectation). We can now use these
two algorithms in the divide-and-conquer strategy to obtain a single pass streaming algorithm.
We use the following as algorithms A and A′ in the divide-and-conquer strategy (Algorithm 3):
A: 'Run k-means# on the data 3 log n times independently, and pick the clustering
with the smallest cost.'
A′: 'Run k-means++.'
Weighted versus non-weighted. Note that k-means++ and k-means# are approximation algorithms
for the non-weighted case (i.e., w(x) = 1 for all points x). On the other hand, in the divide-and-conquer strategy we need the algorithm A′ to work for the weighted case, where the weights are
integers. Note that both k-means++ and k-means# can be easily generalized to the weighted case
when the weights are integers. Both algorithms compute probabilities based on the cost with respect
to the current clustering. This cost can be computed by taking the weights into account. For the
analysis, we can assume points with multiplicities equal to the integer weight of the point. The
memory required per point remains logarithmic in the input size, including storing the weights.
Analysis. With probability at least 1 − (3/4)^{3 log n} ≥ 1 − 1/n, algorithm A is a (3 log k, 64)-approximation algorithm. Moreover, the space requirement remains logarithmic in the input size. In
step (3) of Algorithm 3 we run A on batches of data. Since each batch is of size √(nk), the number of
batches is √(n/k), and the probability that A is a (3 log k, 64)-approximation algorithm for all of these
batches is at least (1 − 1/n)^{√(n/k)} ≥ 1/2. Conditioned on this event, the divide-and-conquer strategy
gives an O(log k)-approximation algorithm. The memory required is O(log(k)·√(nk)) times the
logarithm of the input size. Moreover, the algorithm has running time O(dnk log n log k).
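The specific A analyzed above, as a sketch: the best of 3·⌈log₂ n⌉ independent k-means# runs, scored by the (possibly weighted) potential. It relies on `kmeans_sharp_seed` and `potential` from the earlier sketches, and the log base is again our own assumption.

```python
import numpy as np

def A_repeated_sharp(S, k, rng=None):
    """Repeat k-means# seeding 3*ceil(log2 n) times and keep the cheapest clustering."""
    rng = rng or np.random.default_rng()
    reps = 3 * int(np.ceil(np.log2(max(len(S), 2))))
    cands = (kmeans_sharp_seed(S, k, rng) for _ in range(reps))
    return min(cands, key=lambda C: potential(S, C))
```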
3.3 Improved memory-approximation tradeoffs
We saw in the last section how to obtain a single-pass (a′, cbb′)-approximation for k-means using
first an (a, b)-approximation on input blocks and then an (a′, b′)-approximation on the union of the
output center sets, where c is some global constant. The optimal memory required for this scheme
was O(a√(nk)). This immediately implies a tradeoff between the memory requirements (growing
like a), the number of centers outputted (which is a′k) and the approximation to the potential (which
is cbb′) with respect to the optimal solution using k centers. A more subtle tradeoff is possible by a
recursive application of the technique in multiple levels. Indeed, the (a, b)-approximation could be
broken up in turn into two levels, and so on. This idea was used in [GMMM+03]. Here we make a
more precise account of the tradeoff between the different parameters.
Assume we have subroutines for performing (a_i, b_i)-approximation for k-means in batch mode, for
i = 1, ..., r (we will choose a_1, ..., a_r, b_1, ..., b_r later). We will hold r buffers B_1, ..., B_r as
work areas, where the size of buffer B_i is M_i. In the topmost level, we will divide the input into
equal blocks of size M_1, and run our (a_1, b_1)-approximation algorithm on each block. Buffer B_1
will be repeatedly reused for this task, and after each application of the approximation algorithm,
the outputted set of (at most) ka_1 centers will be added to B_2. When B_2 is filled, we will run
the (a_2, b_2)-approximation algorithm on the data and add the ka_2 outputted centers to B_3. This
will continue until buffer B_r fills, and the (a_r, b_r)-approximation algorithm outputs the final a_r·k
centers. Let t_i denote the number of times the i-th level algorithm is executed. Clearly we have
t_i·k·a_i = M_{i+1}·t_{i+1} for i = 1, ..., r − 1. For the last stage we have t_r = 1, which means that t_{r−1} =
M_r/(k·a_{r−1}), t_{r−2} = M_{r−1}·M_r/(k²·a_{r−2}·a_{r−1}), and generally t_i = M_{i+1}···M_r/(k^{r−i}·a_i···a_{r−1}).7 But
we must also have t_1 = n/M_1, implying n = M_1···M_r/(k^{r−1}·a_1···a_{r−1}). In order to minimize the total memory
Σ_i M_i under the last constraint, using standard arguments in multivariate analysis we must have
M_1 = ··· = M_r, or in other words M_i = (n·k^{r−1}·a_1···a_{r−1})^{1/r} ≤ n^{1/r}·k·(a_1···a_{r−1})^{1/r} for all i.
The resulting one-pass algorithm will have an approximation guarantee of (a_r, c^{r−1}·b_1···b_r) (using
a straightforward extension of the result in the previous section) and memory requirement of at most
r·n^{1/r}·k·(a_1···a_{r−1})^{1/r}.
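A quick numerical check of the counting identity just derived, for illustrative parameters of our own choosing: with equal buffers M_i = (n·k^{r−1}·a_1···a_{r−1})^{1/r}, unrolling t_i·k·a_i = M_{i+1}·t_{i+1} from t_r = 1 recovers t_1 = n/M_1.

```python
import math

n, k, r = 10 ** 8, 20, 3                          # illustrative parameters
a = [3 * math.log2(k)] * (r - 1)                  # k-means# at levels 1..r-1
M = (n * k ** (r - 1) * math.prod(a)) ** (1 / r)  # common buffer size M_i = M
t = [0.0] * (r + 1)
t[r] = 1.0
for i in range(r - 1, 0, -1):                     # t_i = M_{i+1} t_{i+1} / (k a_i)
    t[i] = M * t[i + 1] / (k * a[i - 1])
print(M, t[1], n / M)                             # t_1 equals n / M_1 (up to float error)
```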
Assume now that we are in the realistic setting in which the available memory is of fixed size
M ≫ k. We will choose r (below), and for each i = 1, ..., r − 1 we choose to either run k-means++
or the repeated k-means# (algorithm A in the previous subsection), i.e., (a_i, b_i) = (1, O(log k))
or (3 log k, O(1)) for each i. For i = r, we choose k-means++, i.e., (a_r, b_r) = (1, O(log k)) (we
are interested in outputting exactly k centers as the final solution).
7 We assume all quotients are integers for simplicity of the proof, but note that fractional blocks would arise in practice.
[Figure 1: Cost vs. k: (a) mixture-of-Gaussians simulation, (b) Cloud data, (c) Spam data. Each panel compares Batch Lloyd's, Divide and Conquer with km# and km++, and Divide and Conquer with km++; panel (c) also includes Online Lloyd's.]
Let q denote the number of indexes i ∈ [r − 1] such that (a_i, b_i) = (3 log k, O(1)). By the above discussion, the memory is
used optimally if M = r·n^{1/r}·k·(3 log k)^{q/r}, in which case the final approximation guarantee will be
c̃^{r−1}·(log k)^{r−q}, for some global c̃ > 0. We concentrate on the case of M growing polynomially in n,
say M = n^α for some α < 1. In this case, the memory optimality constraint implies r ≈ 1/α for
n large enough (regardless of the choice of q). This implies that the final approximation guarantee
is best if q = r − 1; in other words, we choose the repeated k-means# for levels 1, ..., r − 1, and
k-means++ for level r. Summarizing, we get:
Theorem 3.2. If there is access to memory of size M = n^α for some fixed α > 0, then for sufficiently
large n the best application of the multi-level scheme described above is obtained by running r =
⌈1/α⌉ = ⌈log n/log M⌉ levels, and choosing the repeated k-means# for all but the last level, in which
k-means++ is chosen. The resulting algorithm is a randomized one-pass streaming approximation
to k-means, with an approximation ratio of O(c̃^{r−1}(log k)), for some global c̃ > 0. The running
time of the algorithm is O(dnk² log n log k).
We should compare the above multi-level streaming algorithm with the state-of-the-art (in terms of
memory vs. approximation tradeoff) streaming algorithm for the k-median problem. Charikar,
O'Callaghan, and Panigrahy [CCP03] give a one-pass streaming algorithm for the k-median problem
which gives a constant factor approximation and uses O(k·poly log(n)) memory. The main problem
with this algorithm from a practical point of view is that the average processing time per item is
large. It is proportional to the amount of memory used, which is poly-logarithmic in n. This might
be undesirable in practical scenarios where we need to process a data item quickly when it arrives. In
contrast, the average per item processing time using the divide-and-conquer strategy is constant and
furthermore the algorithm can be pipelined (i.e., data items can be temporarily stored in a memory
buffer and quickly processed before the next memory buffer is filled). So, even if [CCP03] can
be extended to the k-means setting, streaming algorithms based on the divide-and-conquer strategy
would be more interesting from a practical point of view.
4 Experiments
Datasets. In our discussion, n denotes the number of points in the data, d denotes the dimension,
and k denotes the number of clusters. Our first evaluation, detailed in Tables 1a)-c) and Figure 1,
compares our algorithms on the following data: (1) norm25 is synthetic data generated in the following manner: we choose 25 random vertices from a 15-dimensional hypercube of side length 500.
We then add 400 Gaussian random points (with variance 1) around each of these points.8 So, for this
data n = 10,000 and d = 15. The optimum cost for k = 25 is 1.5026 × 10^5. (2) The UCI Cloud
dataset consists of cloud cover data [AN07]. Here n = 1024 and d = 10. (3) The UCI Spambase
dataset is data for an e-mail spam detection task [AN07]. Here n = 4601 and d = 58.
To compare against a baseline method known to be used in practice, we used Lloyd's algorithm,
commonly referred to as the k-means algorithm. Standard Lloyd's algorithm operates in the batch
setting, which is an easier problem than the one-pass streaming setting, so we ran experiments with
this algorithm to form a baseline. We also compare to an online version of Lloyd's algorithm;
however, the performance is worse than the batch version, and our methods, for all problems, so we
do not include it in our plots for the real data sets.9
8 Testing clustering algorithms on this simulation distribution was inspired by [AV07].
Table 1: Columns 2-5 give the clustering cost and columns 6-9 the time in seconds.

a) norm25 dataset
 k | BL cost  | OL cost  | DC-1 cost | DC-2 cost | BL time | OL time | DC-1 time | DC-2 time
 5 | 5.1154e9 | 6.5967e9 | 7.9398e9  | 7.8474e9  |   1.25  |  1.32   |   14.37   |   9.93
10 | 3.3080e9 | 6.0146e9 | 4.5954e9  | 4.6829e9  |   2.05  |  2.45   |   45.39   |  21.09
15 | 2.0123e9 | 4.3743e9 | 2.5468e9  | 2.5898e9  |   3.88  |  3.49   |   95.22   |  30.34
20 | 1.4225e9 | 3.7794e9 | 1.0718e9  | 1.1403e9  |   8.62  |  4.69   |  190.73   |  41.49
25 | 0.8602e9 | 2.8859e9 | 2.7842e5  | 2.7298e5  |  13.13  |  6.04   |  283.19   |  53.07

b) Cloud dataset
 k | BL cost  | OL cost  | DC-1 cost | DC-2 cost | BL time | OL time | DC-1 time | DC-2 time
 5 | 4.9139e8 | 1.7001e9 | 3.4021e8  | 3.3963e8  |   1.12  |  0.13   |    1.73   |   0.92
10 | 1.6952e8 | 1.6930e9 | 1.0206e8  | 1.0463e8  |   1.20  |  0.25   |    5.64   |   1.87
15 | 1.5670e8 | 1.4762e9 | 5.5095e7  | 5.3557e7  |   2.18  |  0.35   |   10.98   |   2.67
20 | 1.5196e8 | 1.4766e9 | 3.3400e7  | 3.2994e7  |   2.59  |  0.47   |   25.72   |   4.19
25 | 1.5168e8 | 1.4754e9 | 2.3151e7  | 2.3391e7  |   2.43  |  0.52   |   36.17   |   4.82

c) Spambase dataset
 k | BL cost  | OL cost  | DC-1 cost | DC-2 cost | BL time | OL time | DC-1 time | DC-2 time
 5 | 1.7707e7 | 1.2401e8 | 2.2924e7  | 2.2617e7  |   9.68  |  0.70   |   11.65   |   5.14
10 | 0.7683e7 | 8.5684e7 | 8.3363e6  | 8.7788e6  |  34.78  |  1.31   |   40.14   |   9.75
15 | 0.5012e7 | 8.4633e7 | 4.9667e6  | 4.8806e6  |  67.54  |  1.88   |   77.75   |  14.41
20 | 0.4388e7 | 6.5110e7 | 3.7479e6  | 3.7536e6  | 100.44  |  2.57   |  194.01   |  22.76
25 | 0.3839e7 | 6.3758e7 | 2.8895e6  | 2.9014e6  | 109.41  |  3.04   |  274.42   |  27.10
Table 2: Multi-level hierarchy evaluation. The memory size decreases as the number of levels of the hierarchy increases (0 levels means running batch k-means++ on the data).

a) Cloud dataset, k = 10
Memory/#levels |  Cost   | Time
   1024 / 0    | 8.74e6  |  5.5
    480 / 1    | 8.59e6  |  3.6
    360 / 2    | 8.61e6  |  3.8

b) Subset of the norm25 dataset, n = 2048, k = 25
Memory/#levels |  Cost   | Time
   2048 / 0    | 5.78e4  | 30
   1250 / 1    | 5.36e4  | 25
   1125 / 2    | 5.15e4  | 26

c) Spambase dataset, k = 10
Memory/#levels |  Cost   | Time
   4601 / 0    | 1.06e8  | 34
    880 / 1    | 0.99e8  | 20
    600 / 2    | 1.03e8  | 19.5
Tables 1a)-c) show the average k-means cost (over 10 random restarts for the randomized algorithms: all but Online Lloyd's) for these algorithms:
(1) BL: Batch Lloyd's, initialized with random centers in the input data, and run to convergence.10
(2) OL: Online Lloyd's.
(3) DC-1: The simple 1-stage divide and conquer algorithm of Section 3.2.
(4) DC-2: The simple 1-stage divide and conquer Algorithm 3 of Section 3.1. The sub-algorithms
used are A = 'run k-means++ 3·log n times and pick the best clustering,' and A′ is k-means++. In our
context, k-means++ and k-means# denote only the seeding step, not followed by Lloyd's algorithm.
In all problems, our streaming methods achieve much lower cost than Online Lloyd?s, for all settings
of k, and lower cost than Batch Lloyd?s for most settings of k (including the correct k = 25, in
norm25). The gains with respect to batch are noteworthy, since the batch problem is less constrained
than the one-pass streaming problem. The performance of DC-1 and DC-2 is comparable.
Table 2 shows an evaluation of the one-pass multi-level hierarchical algorithm of Section 3.3, on the
different datasets, simulating different memory restrictions. Although our worst-case theoretical results imply an exponential clustering cost as a function of the number of levels, our results show a far
more optimistic outcome in which adding levels (and limiting memory) actually improves the outcome. We conjecture that our data contains enough information for clustering even on chunks that fit
in small buffers, and therefore the results may reflect the benefit of the hierarchical implementation.
Acknowledgements. We thank Sanjoy Dasgupta for suggesting the study of approximation algorithms for k-means in the streaming setting, for excellent lecture notes, and for helpful discussions.
9 Despite the poor performance we observed, this algorithm is apparently used in practice; see [Das08].
10 We measured convergence by change in cost less than 1.
References
[ADK09] Ankit Aggarwal, Amit Deshpande and Ravi Kannan: Adaptive Sampling for k-means Clustering. APPROX, 2009.
[AL09] Nir Ailon and Edo Liberty: Correlation Clustering Revisited: The "True" Cost of Error Minimization Problems. To appear in ICALP 2009.
[AMS96] Noga Alon, Yossi Matias, and Mario Szegedy: The space complexity of approximating the frequency moments. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 20-29, 1996.
[AV06] David Arthur and Sergei Vassilvitskii: Worst-case and smoothed analyses of the ICP algorithm, with an application to the k-means method. FOCS, 2006.
[AV07] David Arthur and Sergei Vassilvitskii: k-means++: the advantages of careful seeding. SODA, 2007.
[AGKM+04] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit: Local search heuristics for k-median and facility location problems. SIAM Journal on Computing, 33(3):544-562, 2004.
[AN07] A. Asuncion and D. J. Newman: UCI Machine Learning Repository. http://www.ics.uci.edu/~mlearn/MLRepository.html, University of California, Irvine, School of Information and Computer Sciences, 2007.
[BBG09] Maria-Florina Balcan, Avrim Blum, and Anupam Gupta: Approximate Clustering without the Approximation. SODA, 2009.
[BL08] S. Ben-David and U. von Luxburg: Relating clustering stability to properties of cluster boundaries. COLT, 2008.
[CCP03] Moses Charikar, Liadan O'Callaghan and Rina Panigrahy: Better streaming algorithms for clustering problems. STOC, 2003.
[CG99] Moses Charikar and Sudipto Guha: Improved combinatorial algorithms for the facility location and k-medians problem. FOCS, 1999.
[CMTS02] M. Charikar, S. Guha, E. Tardos, and D. Shmoys: A Constant Factor Approximation Algorithm for the k-Median Problem. Journal of Computer and System Sciences, 2002.
[CR08] Kamalika Chaudhuri and Satish Rao: Learning Mixtures of Product Distributions using Correlations and Independence. COLT, 2008.
[Das08] Sanjoy Dasgupta: Course notes, CSE 291: Topics in unsupervised learning. http://www-cse.ucsd.edu/~dasgupta/291/index.html, University of California, San Diego, Spring 2008.
[Gon85] T. F. Gonzalez: Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38, pages 293-306, 1985.
[GMMM+03] Sudipto Guha, Adam Meyerson, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan: Clustering Data Streams: Theory and Practice. IEEE Transactions on Knowledge and Data Engineering, 15(3): 515-528, 2003.
[Ind99] Piotr Indyk: Sublinear Time Algorithms for Metric Space Problems. STOC, 1999.
[JV01] K. Jain and Vijay Vazirani: Approximation Algorithms for Metric Facility Location and k-Median Problems Using the Primal-Dual Schema and Lagrangian Relaxation. Journal of the ACM, 2001.
[KMNP+04] T. Kanungo, D. M. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Y. Wu: A Local Search Approximation Algorithm for k-Means Clustering. Computational Geometry: Theory and Applications, 28, 89-112, 2004.
[LV92] J. Lin and J. S. Vitter: Approximation Algorithms for Geometric Median Problems. Information Processing Letters, 1992.
[McG07] Andrew McGregor: Processing Data Streams. Ph.D. Thesis, Computer and Information Science, University of Pennsylvania, 2007.
[M05] S. Muthukrishnan: Data Streams: Algorithms and Applications. NOW Publishers, Inc., Hanover, MA.
[ORSS06] Rafail Ostrovsky, Yuval Rabani, Leonard J. Schulman, Chaitanya Swamy: The effectiveness of Lloyd-type methods for the k-means problem. FOCS, 2006.
[VW02] V. Vempala and G. Wang: A spectral algorithm of learning mixtures of distributions. Pages 113-123, FOCS, 2002.
Fast subtree kernels on graphs
Nino Shervashidze, Karsten M. Borgwardt
Interdepartmental Bioinformatics Group
Max Planck Institutes Tübingen, Germany
{nino.shervashidze,karsten.borgwardt}@tuebingen.mpg.de
Abstract
In this article, we propose fast subtree kernels on graphs. On graphs with n nodes
and m edges and maximum degree d, these kernels comparing subtrees of height
h can be computed in $O(mh)$, whereas the classic subtree kernel by Ramon &
Gärtner scales as $O(n^2 4^d h)$. Key to this efficiency is the observation that the
Weisfeiler-Lehman test of isomorphism from graph theory elegantly computes a
subtree kernel as a byproduct. Our fast subtree kernels can deal with labeled
graphs, scale up easily to large graphs and outperform state-of-the-art graph kernels on several classification benchmark datasets in terms of accuracy and runtime.
1 Introduction
Graph kernels have recently evolved into a branch of kernel machines that reaches deep into graph
mining. Several different graph kernels have been defined in machine learning which can be categorized into three classes: graph kernels based on walks [5, 7] and paths [2], graph kernels based on
limited-size subgraphs [6, 11], and graph kernels based on subtree patterns [9, 10].
While fast computation techniques have been developed for graph kernels based on walks [12]
and on limited-size subgraphs [11], it is unclear how to compute subtree kernels efficiently. As a
consequence, they have been applied to relatively small graphs representing chemical compounds [9]
or handwritten digits [1], with approximately twenty nodes on average. But could one speed up
subtree kernels to make them usable on graphs with hundreds of nodes, as they arise in protein
structure models or in program flow graphs?
It is a general limitation of graph kernels that they scale poorly to large, labeled graphs with more
than 100 nodes. While the efficient kernel computation strategies from [11, 12] are able to compare
unlabeled graphs efficiently, the efficient comparison of large, labeled graphs remains an unsolved
challenge. Could one speed up subtree kernels to make them the kernel of choice for comparing
large, labeled graphs?
The goal of this article is to address both of the aforementioned questions, that is, to develop a fast
subtree kernel that scales up to large, labeled graphs.
The remainder of this article is structured as follows. In Section 2, we review the subtree kernel from
the literature and its runtime complexity. In Section 3, we describe an alternative subtree kernel and
its efficient computation based on the Weisfeiler-Lehman test of isomorphism. In Section 4, we
compare these two subtree kernels to each other, as well as to a set of four other state-of-the-art
graph kernels and report results on kernel computation runtime and classification accuracy on graph
benchmark datasets.
2 The Ramon-Gärtner subtree kernel
Terminology We define a graph G as a triplet (V, E, L), where V is the set of vertices, E the set of undirected edges, and $L : V \to \Sigma$ a function that assigns labels from an alphabet $\Sigma$ to nodes in the graph¹. The neighbourhood N(v) of a node v is the set of nodes to which v is connected by an edge, that is $N(v) = \{v' \mid (v, v') \in E\}$. For simplicity, we assume that every graph has n nodes, m edges, a maximum degree of d, and that there are N graphs in our given set of graphs.
A walk is a sequence of nodes in a graph, in which consecutive nodes are connected by an edge. A
path is a walk that consists of distinct nodes only. A (rooted) subtree is a subgraph of a graph, which
has no cycles, but a designated root node. A subtree of G can thus be seen as a connected subset
of distinct nodes of G with an underlying tree structure. The height of a subtree is the maximum
distance between the root and any other node in the graph plus one. The notion of walk is extending
the notion of path by allowing nodes to be equal. Similarly, the notion of subtrees can be extended to
subtree patterns (also called ?tree-walks? [1]), which can have nodes that are equal. These repetitions
of the same node are then treated as distinct nodes, such that the pattern is still a cycle-free tree. Note
that all subtree kernels compare subtree patterns in two graphs, not (strict) subtrees. Let S(G) refer
to the set of all subtree patterns in graph G.
Definition The first subtree kernel on graphs was defined by [10]. It compares all pairs of nodes
from graphs G = (V, E, L) and G′ = (V′, E′, L′) by iteratively comparing their neighbourhoods:

$$k_{Ramon}^{(h)}(G, G') = \sum_{v \in V} \sum_{v' \in V'} k_h(v, v'), \qquad (1)$$

where

$$k_h(v, v') = \begin{cases} \delta(L(v), L'(v')), & \text{if } h = 1, \\ \lambda_r \lambda_s \sum_{R \in \mathcal{M}(v,v')} \prod_{(w,w') \in R} k_{h-1}(w, w'), & \text{if } h > 1, \end{cases} \qquad (2)$$

and

$$\mathcal{M}(v, v') = \{R \subseteq N(v) \times N(v') \mid (\forall (u,u'), (w,w') \in R : u = w \Leftrightarrow u' = w') \wedge (\forall (u,u') \in R : L(u) = L'(u'))\}. \qquad (3)$$

Intuitively, $k_{Ramon}$ iteratively compares all matchings $\mathcal{M}(v, v')$ between neighbours of two nodes v from G and v′ from G′.
Complexity The runtime complexity of the subtree kernel for a pair of graphs is $O(n^2 h 4^d)$, including a comparison of all pairs of nodes ($n^2$), and a pairwise comparison of all matchings in their neighbourhoods in $O(4^d)$, which is repeated in h iterations. h is a multiplicative factor, not an exponent, as one can implement the subtree kernel recursively, starting with $k_1$ and recursively computing $k_h$ from $k_{h-1}$. For a dataset of N graphs, the resulting runtime complexity is then obviously in $O(N^2 n^2 h 4^d)$.
Related work The subtree kernels in [9] and [1] refine the above definition for applications in chemoinformatics and hand-written digit recognition. Mahé and Vert [9] define extensions of the classic subtree kernel that avoid tottering [8] and consider unbalanced subtrees. Both [9] and [1] propose to consider α-ary subtrees with at most α children per node. This restricts the set of matchings to matchings of up to α nodes, but the runtime complexity is still exponential in this parameter α, which both papers describe as feasible on small graphs (with approximately 20 nodes) with many distinct node labels. We present a subtree kernel that is efficient to compute on graphs with hundreds and thousands of nodes next.
¹ The extension of this definition and our results to graphs with edge labels is straightforward, but omitted for clarity of presentation.
3 Fast subtree kernels
3.1 The Weisfeiler-Lehman test of isomorphism
Our algorithm for computing a fast subtree kernel builds upon the Weisfeiler-Lehman test of isomorphism [14], more specifically its 1-dimensional variant, also known as 'naive vertex refinement', which we describe in the following.
Assume we are given two graphs G and G′ and we would like to test whether they are isomorphic. The 1-dimensional Weisfeiler-Lehman test proceeds in iterations, which we index by h and which comprise the following steps:
Algorithm 1 One iteration of the 1-dimensional Weisfeiler-Lehman test of graph isomorphism
1: Multiset-label determination
   • For h = 1, set $M_h(v) := l_0(v) = L(v)$ for labeled graphs, and $M_h(v) := l_0(v) = |N(v)|$ for unlabeled graphs.
   • For h > 1, assign a multiset-label $M_h(v)$ to each node v in G and G′ which consists of the multiset $\{l_{h-1}(u) \mid u \in N(v)\}$.
2: Sorting each multiset
   • Sort elements in $M_h(v)$ in ascending order and concatenate them into a string $s_h(v)$.
   • Add $l_{h-1}(v)$ as a prefix to $s_h(v)$.
3: Sorting the set of multisets
   • Sort all of the strings $s_h(v)$ for all v from G and G′ in ascending order.
4: Label compression
   • Map each string $s_h(v)$ to a new compressed label, using a function $f : \Sigma^* \to \Sigma$ such that $f(s_h(v)) = f(s_h(w))$ if and only if $s_h(v) = s_h(w)$.
5: Relabeling
   • Set $l_h(v) := f(s_h(v))$ for all nodes in G and G′.
The sorting step 3 allows for a straightforward definition and implementation of f for the compression step 4: one keeps a counter variable for f that records the number of distinct strings that f has compressed before. f assigns the current value of this counter to a string if an identical string has been compressed before; when one encounters a new string, one increments the counter by one and f assigns its value to the new string. The sorted order from step 3 guarantees that all identical strings are mapped to the same number, because they occur in a consecutive block.

The Weisfeiler-Lehman algorithm terminates after step 5 of iteration h if $\{l_h(v) \mid v \in V\} \neq \{l_h(v') \mid v' \in V'\}$, that is, if the sets of newly created labels are not identical in G and G′. The graphs are then not isomorphic. If the sets are identical after n iterations, the algorithm stops without giving an answer.
Complexity The runtime complexity of the Weisfeiler-Lehman algorithm with h iterations is O(hm). Defining the multisets in step 1 for all nodes is an O(m) operation. Sorting each multiset is an O(m) operation for all nodes. This efficiency can be achieved by using Counting Sort, which is an instance of Bucket Sort, due to the limited range that the elements of the multiset are from. The elements of each multiset are a subset of $\{f(s_h(v)) \mid v \in V\}$. For a fixed h, the cardinality of this set is upper-bounded by n, which means that we can sort all multisets in O(m) by the following procedure: We assign the elements of all multisets to their corresponding buckets, recording which multiset they came from. By reading through all buckets in ascending order, we can then extract the sorted multisets for all nodes in a graph. The runtime is O(m) as there are O(m) elements in the multisets of a graph in iteration h. Sorting the resulting strings is of time complexity O(m) via Radix Sort. The label compression requires one pass over all strings and their characters, that is O(m). Hence all these steps result in a total runtime of O(hm) for h iterations.
3.2 The Weisfeiler-Lehman kernel on pairs of graphs
Based on the Weisfeiler-Lehman algorithm, we define the following kernel function.
Definition 1 The Weisfeiler-Lehman kernel on two graphs G and G′ is defined as:

$$k_{WL}^{(h)}(G, G') = |\{(s_i(v), s_i(v')) \mid f(s_i(v)) = f(s_i(v')),\ i \in \{1, \ldots, h\},\ v \in V,\ v' \in V'\}|, \qquad (4)$$

where f is injective and the sets $\{f(s_i(v)) \mid v \in V \cup V'\}$ and $\{f(s_j(v)) \mid v \in V \cup V'\}$ are disjoint for all $i \neq j$.
That is, the Weisfeiler-Lehman kernel counts common multiset strings in two graphs.
Theorem 2 The Weisfeiler-Lehman kernel is positive definite.
Proof Intuitively, $k_{WL}^{(h)}$ is a kernel because it counts matching subtree patterns of up to height h in two graphs. More formally, let us define a mapping $\phi$ that counts the occurrences of a particular label sequence s in G (generated in h iterations of Weisfeiler-Lehman). Let $\phi_s^{(h)}(G)$ denote the number of occurrences of s in G, and analogously $\phi_s^{(h)}(G')$ for G′. Then

$$k_s^{(h)}(G, G') = \phi_s^{(h)}(G)\,\phi_s^{(h)}(G') = |\{(s_i(v), s_i(v')) \mid s_i(v) = s_i(v'),\ i \in \{1, \ldots, h\},\ v \in V,\ v' \in V'\}|, \qquad (5)$$

and if we sum over all s from $\Sigma^*$, we obtain

$$k_{WL}^{(h)}(G, G') = \sum_{s \in \Sigma^*} k_s^{(h)}(G, G') = \sum_{s \in \Sigma^*} \phi_s^{(h)}(G)\,\phi_s^{(h)}(G') = |\{(s_i(v), s_i(v')) \mid s_i(v) = s_i(v'),\ i \in \{1, \ldots, h\},\ v \in V,\ v' \in V'\}| = |\{(s_i(v), s_i(v')) \mid f(s_i(v)) = f(s_i(v')),\ i \in \{1, \ldots, h\},\ v \in V,\ v' \in V'\}|, \qquad (6)$$

where the last equality follows from the fact that f is injective.
As $f(s) \neq s$, each string s corresponds to exactly one subtree pattern t, and hence $k_{WL}^{(h)}$ defines a kernel with corresponding feature map $\phi_{WL}^{(h)}$, such that

$$\phi_{WL}^{(h)}(G) = (\phi_s^{(h)}(G))_{s \in \Sigma^*} = (\phi_t^{(h)}(G))_{t \in S(G)}. \qquad (7)$$
Theorem 3 The Weisfeiler-Lehman kernel on a pair of graphs G and G′ can be computed in O(hm).

Proof This follows directly from the definition of the Weisfeiler-Lehman kernel and the runtime complexity of the Weisfeiler-Lehman test, as described in Section 3.1. The number of matching multiset strings can be counted as part of step 3, as they occur consecutively in the sorted order.
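Definition 1 then translates into a short pairwise implementation; the sketch below reuses wl_iteration from the earlier snippet (naming is again our own) and assumes both graphs share one compression dictionary so that labels are comparable across graphs:

```python
from collections import Counter

def wl_kernel_pair(adj_G, lab_G, adj_H, lab_H, h):
    """Pairwise WL kernel k_WL^(h)(G, G') of equation (4) (a sketch).

    After each of the h relabeling iterations, every pair of nodes
    (one per graph) carrying the same compressed label contributes 1,
    so each iteration's count is a dot product of label histograms.
    """
    value, compressed = 0, {}   # one shared compression dict for both graphs
    for _ in range(h):
        lab_G = wl_iteration(adj_G, lab_G, compressed)
        lab_H = wl_iteration(adj_H, lab_H, compressed)
        cG, cH = Counter(lab_G.values()), Counter(lab_H.values())
        value += sum(cG[l] * cH[l] for l in cG.keys() & cH.keys())
    return value
```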
3.3 The Weisfeiler-Lehman kernel on N graphs
For computing the Weisfeiler-Lehman kernel on N graphs, we propose the following algorithm, which improves over the naive, $N^2$-fold application of the kernel from (4). We now process all N graphs simultaneously and conduct the steps given in Algorithm 2 in each of h iterations on each graph G.
The hash function g can be implemented efficiently: it again keeps a counter variable x which counts
the number of distinct strings that g has mapped to compressed labels so far. If g is applied to a string
that is different from all previous ones, then the string is mapped to x + 1, and x increments. As
before, g is required to keep sets of compressed labels from different iterations disjoint.
Theorem 4 For N graphs, the Weisfeiler-Lehman kernel on all pairs of these graphs can be computed in $O(Nhm + N^2hn)$.
Proof Naive application of the kernel from definition (4) for computing an $N \times N$ kernel matrix would require a runtime of $O(N^2hm)$. One can improve upon this runtime complexity by computing $\phi_{WL}^{(h)}$ explicitly. This can be achieved by replacing the compression mapping f in the classic Weisfeiler-Lehman algorithm by a hash function g that is applied to all N graphs simultaneously.
Algorithm 2 One iteration of the Weisfeiler-Lehman kernel on N graphs
1: Multiset-label determination
   • Assign a multiset-label $M_h(v)$ to each node v in G which consists of the multiset $\{l_{h-1}(u) \mid u \in N(v)\}$.
2: Sorting each multiset
   • Sort elements in $M_h(v)$ in ascending order and concatenate them into a string $s_h(v)$.
   • Add $l_{h-1}(v)$ as a prefix to $s_h(v)$.
3: Label compression
   • Map each string $s_h(v)$ to a compressed label using a hash function $g : \Sigma^* \to \Sigma$ such that $g(s_h(v)) = g(s_h(w))$ if and only if $s_h(v) = s_h(w)$.
4: Relabeling
   • Set $l_h(v) := g(s_h(v))$ for all nodes in G.
This has the following effects on the runtime of Weisfeiler-Lehman: Step 1, the multiset-label determination, still requires O(Nm). Step 2, the sorting of the elements in each multiset, can be done via a joint Bucket Sort (Counting Sort) of all strings, requiring O(Nn + Nm) time. The use of the hash function g renders the sorting of all strings unnecessary (Step 3 from Section 3.1), as identical strings will be mapped to the same (compressed) label anyway. Step 4 and Step 5 remain unchanged. The effort of computing $\phi_{WL}^{(h)}$ on all N graphs in h iterations is then O(Nhm), assuming that m > n. To get all pairwise kernel values we have to multiply all feature vectors, which requires a runtime of $O(N^2hn)$, as each graph G has at most hn non-zero entries in $\phi_{WL}^{(h)}(G)$.
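This proof translates almost directly into code. A sketch of the global variant (our own structure, not the authors' implementation): all N graphs share one compression dictionary in place of g, each graph accumulates a sparse label-count feature vector, and the kernel matrix is obtained by pairwise dot products; it reuses wl_iteration from the earlier sketch.

```python
import numpy as np
from collections import Counter

def wl_kernel_matrix(graphs, h):
    """Global WL kernel on N graphs (Algorithm 2; a sketch).

    graphs: list of (adjacency, labels) pairs as in the sketches above.
    `compressed` plays the role of the hash function g, and
    `features[i]` is the sparse feature vector phi_WL^(h) of graph i.
    """
    compressed = {}
    labels = [dict(lab) for _, lab in graphs]
    features = [Counter() for _ in graphs]
    for _ in range(h):
        for i, (adj, _) in enumerate(graphs):
            labels[i] = wl_iteration(adj, labels[i], compressed)
            features[i].update(labels[i].values())
    N = len(graphs)
    K = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):
            common = features[i].keys() & features[j].keys()
            K[i, j] = K[j, i] = sum(features[i][l] * features[j][l]
                                    for l in common)
    return K
```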
3.4 Link to the Ramon-Gärtner kernel
The Weisfeiler-Lehman kernel can be defined in a recursive fashion which elucidates its relation to the Ramon-Gärtner kernel.
Theorem 5 The kernel $k_{recursive}^{(h)}$ defined as

$$k_{recursive}^{(h)}(G, G') = \sum_{i=1}^{h} \sum_{v \in V} \sum_{v' \in V'} k_i(v, v'), \qquad (8)$$

where

$$k_i(v, v') = \begin{cases} \delta(L(v), L'(v')), & \text{if } i = 1, \\ k_{i-1}(v, v') \max_{R \in \mathcal{M}(v,v')} \prod_{(w,w') \in R} k_{i-1}(w, w'), & \text{if } i > 1 \text{ and } \mathcal{M} \neq \emptyset, \\ 0, & \text{if } i > 1 \text{ and } \mathcal{M} = \emptyset, \end{cases} \qquad (9)$$

and

$$\mathcal{M}(v, v') = \{R \subseteq N(v) \times N(v') \mid (\forall (u,u'), (w,w') \in R : u = w \Leftrightarrow u' = w') \wedge (\forall (u,u') \in R : L(u) = L'(u')) \wedge |R| = |N(v)| = |N(v')|\} \qquad (10)$$

is equivalent to the Weisfeiler-Lehman kernel $k_{WL}^{(h)}$.
Proof We prove this theorem by induction over h. Induction initialisation, h = 1:

$$k_{WL}^{(1)} = |\{(s_1(v), s_1(v')) \mid f(s_1(v)) = f(s_1(v')),\ v \in V,\ v' \in V'\}| = \sum_{v \in V} \sum_{v' \in V'} \delta(L(v), L'(v')) = k_{recursive}^{(1)}. \qquad (11, 12)$$

The equality follows from the definition of $\mathcal{M}(v, v')$.

Induction step $h \to h+1$: Assume that $k_{WL}^{(h)} = k_{recursive}^{(h)}$. Then

$$k_{recursive}^{(h+1)} = \sum_{v \in V} \sum_{v' \in V'} k_{h+1}(v, v') + \sum_{i=1}^{h} \sum_{v \in V} \sum_{v' \in V'} k_i(v, v') \qquad (13)$$
$$= |\{(s_{h+1}(v), s_{h+1}(v')) \mid f(s_{h+1}(v)) = f(s_{h+1}(v')),\ v \in V,\ v' \in V'\}| + k_{WL}^{(h)} = k_{WL}^{(h+1)}, \qquad (14)$$

where the equality of (13) and (14) follows from the fact that $k_{h+1}(v, v') = 1$ if and only if the neighbourhoods of v and v′ are identical, that is, if $f(s_{h+1}(v)) = f(s_{h+1}(v'))$.
[Figure 1: four panels plotting runtime in seconds for kernel matrix computation against the number of graphs N (log-log axes), the graph size n, the subtree height h, and the graph density c.]

Figure 1: Runtime in seconds for kernel matrix computation on synthetic graphs using the pairwise (red, dashed) and the global (green) Weisfeiler-Lehman kernel (default values: dataset size N = 10, graph size n = 100, subtree height h = 5, graph density c = 0.4).
Theorem 5 highlights the following differences between the Weisfeiler-Lehman and the Ramon-Gärtner kernel: In (8), Weisfeiler-Lehman considers all subtrees up to height h, while the Ramon-Gärtner kernel considers the subtrees of exactly height h. In (9) and (10), the Weisfeiler-Lehman kernel checks whether the neighbourhoods of v and v′ match exactly, whereas the Ramon-Gärtner kernel considers all pairs of matching subsets of the neighbourhoods of v and v′ in (3). In our experiments, we next examine the empirical differences between these two kernels in terms of runtime and prediction accuracy on classification benchmark datasets.
4 Experiments
4.1 Runtime behaviour of the Weisfeiler-Lehman kernel
Methods We empirically compared the runtime behaviour of our two variants of the Weisfeiler-Lehman (WL) kernel. The first variant computes kernel values pairwise in $O(N^2hm)$. The second variant computes the kernel values in $O(Nhm + N^2hn)$ on the dataset simultaneously. We will refer to the former variant as the 'pairwise' WL, and the latter as 'global' WL.
Experimental setup We assessed the behaviour on randomly generated graphs with respect to four parameters: dataset size N, graph size n, subtree height h and graph density c. The density of an undirected graph of n nodes without self-loops is defined as the number of its edges divided by n(n − 1)/2, the maximal number of edges. We kept 3 out of 4 parameters fixed at their default values and varied the fourth parameter. The default values we used were 10 for N, 100 for n, 5 for h and 0.4 for the graph density c. In more detail, we varied N and n in the range {10, 100, 1000}, h in {2, 4, 8} and c in {0.1, 0.2, . . . , 0.9}.

For each individual experiment, we generated N graphs with n nodes, and inserted edges randomly until the number of edges reached $\lfloor cn(n-1)/2 \rfloor$. We then computed the pairwise and the global WL kernel on these synthetic graphs. We report CPU runtimes in seconds in Figure 1, as measured in Matlab R2008a on an Apple MacPro with 3.0GHz Intel 8-Core with 16GB RAM.
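The generation protocol can be reproduced in a few lines; a Python sketch of the procedure above (the original experiments used Matlab, so this is only an illustration):

```python
import random

def random_graph(n, c):
    """Sample an n-node undirected graph of density c (a sketch).

    Edges are drawn uniformly without replacement until the edge count
    reaches floor(c * n * (n - 1) / 2), matching the protocol above.
    """
    m = int(c * n * (n - 1) / 2)
    all_pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
    edges = random.sample(all_pairs, m)
    adjacency = {v: [] for v in range(n)}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    return adjacency
```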
Results Empirically, we observe that the pairwise kernel scales quadratically with dataset size N. Interestingly, the global kernel scales linearly with N. The $N^2$ sparse vector multiplications that have to be performed for kernel computation with global WL do not dominate runtime here. This result on synthetic data indicates that the global WL kernel has attractive scalability properties for large datasets.
When varying the number of nodes n per graph, we observe that the runtime of global WL scales
linearly with n, and is much faster than the pairwise WL for large graphs.
We observe the same picture for the height h of the subtree patterns. The runtime of both kernels
grows linearly with h, but the global WL is more efficient in terms of runtime in seconds.
Varying the graph density c, both methods show again a linearly increasing runtime, although the
runtime of the global WL kernel is close to constant. The density c seems to be a graph property
that affects the runtime of the pairwise kernel more severely than that of global WL.
Across all different graph properties, the global WL kernel from Section 3.3 requires less runtime
than the pairwise WL kernel from Section 3.2. Hence the global WL kernel is the variant of our
Weisfeiler-Lehman kernel that we use in the following graph classification tasks.
4.2 Graph classification
Datasets We employed the following datasets in our experiments: MUTAG, NCI1, NCI109, and
D&D. MUTAG [3] is a dataset of 188 mutagenic aromatic and heteroaromatic nitro compounds
labeled according to whether or not they have a mutagenic effect on the Gram-negative bacterium
Salmonella typhimurium. We also conducted experiments on two balanced subsets of NCI1 and
NCI109, which classify compounds based on whether or not they are active in an anti-cancer screen
([13] and http://pubchem.ncbi.nlm.nih.gov). D&D is a dataset of 1178 protein structures [4]. Each protein is represented by a graph, in which the nodes are amino acids and two nodes
are connected by an edge if they are less than 6 Angstroms apart. The prediction task is to classify
the protein structures into enzymes and non-enzymes.
Experimental setup On these datasets, we compared our Weisfeiler-Lehman kernel to the Ramon-Gärtner kernel ($\lambda_r = \lambda_s = 1$), as well as to several state-of-the-art graph kernels for large graphs: the fast geometric random walk kernel from [12] that counts common labeled walks (with $\lambda$ chosen from the set $\{10^{-2}, 10^{-3}, \ldots, 10^{-6}\}$ by cross-validation on the training set), the graphlet kernel from [11] that counts common induced labeled connected subgraphs of size 3, and the shortest path kernel from [2] that counts pairs of labeled nodes with identical shortest path distance.
We performed 10-fold cross-validation of C-Support Vector Machine Classification, using 9 folds
for training and 1 for testing. All parameters of the SVM were optimised on the training dataset
only. To exclude random effects of fold assignments, we repeated the whole experiment 10 times.
We report average prediction accuracies and standard errors in Tables 1 and 2.
We choose h for our Weisfeiler-Lehman kernel by cross-validation on the training dataset for $h \in \{1, \ldots, 10\}$, which means that we computed 10 different WL kernel matrices in each experiment.
We report the total runtime of this computation (not the average per kernel matrix).
Results In terms of runtime the Weisfeiler-Lehman kernel can easily scale up even to graphs with
thousands of nodes. On D&D, subtree-patterns of height up to 10 were computed in 11 minutes,
while no other comparison method could handle this dataset in less than half an hour. The shortest
path kernel is competitive to the WL kernel on smaller graphs (MUTAG, NCI1, NCI109), but on
D&D its runtime degenerates to more than 23 hours. The Ramon and Gärtner kernel was computable on MUTAG in approximately 40 minutes, but for the large NCI datasets it only finished computation on a subsample of 100 graphs within two days. On D&D, it did not even finish on a subsample of 100 graphs within two days. The random walk kernel is competitive on MUTAG, but, like the Ramon-Gärtner kernel, does not finish computation on the full NCI datasets and on D&D within two days.
The graphlet kernel is faster than our WL kernel on MUTAG and the NCI datasets, and about a
Method/Dataset     | MUTAG         | NCI1          | NCI109        | D&D
Weisfeiler-Lehman  | 82.05 (±0.36) | 82.19 (±0.18) | 82.46 (±0.24) | 79.78 (±0.36)
Ramon & Gärtner    | 85.72 (±0.49) | –             | –             | –
Graphlet count     | 75.61 (±0.49) | 66.00 (±0.07) | 66.59 (±0.08) | 78.59 (±0.12)
Random walk        | 80.72 (±0.38) | –             | –             | –
Shortest path      | 87.28 (±0.55) | 73.47 (±0.11) | 73.07 (±0.11) | 78.45 (±0.26)
–: did not finish in 2 days.
Table 1: Prediction accuracy (± standard error) on graph classification benchmark datasets
Dataset            | MUTAG  | NCI1               | NCI109                | D&D
Maximum # nodes    | 28     | 111                | 111                   | 5748
Average # nodes    | 17.93  | 29.87              | 29.68                 | 284.32
# labels           | 7      | 37                 | 54                    | 89
Number of graphs   | 188    | 100 / 4110         | 100 / 4127            | 100 / 1178
Weisfeiler-Lehman  | 6″     | 5″ / 7′20″         | 5″ / 7′21″            | 58″ / 11′
Ramon & Gärtner    | 40′6″  | 25′9″ / 29 days*   | 26′40″ / 31 days*     | – / –
Graphlet count     | 3″     | 2″ / 1′27″         | 2″ / 1′27″            | 2′40″ / 30′21″
Random walk        | 12″    | 58′30″ / 68 days*  | 2h 9′41″ / 153 days*  | – / –
Shortest path      | 2″     | 3″ / 4′38″         | 3″ / 4′39″            | 58′45″ / 23h 17′2″
–: did not finish in 2 days, * = extrapolated.
Table 2: CPU runtime for kernel computation on graph classification benchmark datasets (for NCI1, NCI109 and D&D, the two entries per cell refer to the 100-graph subsample and the full dataset, respectively)
factor of 3 slower on D&D. However, this efficiency comes at a price, as the kernel based on size-3
graphlets turns out to lead to poor accuracy levels on three datasets. Using larger graphlets with 4
or 5 nodes that might have been more expressive led to infeasible runtime requirements in initial
experiments (not shown here).
On NCI1, NCI109 and D&D, the Weisfeiler-Lehman kernel reached the highest accuracy. On D&D
the shortest path and graphlet kernels yielded similarly good results, while on NCI1 and NCI109 the
Weisfeiler-Lehman kernel improves by more than 8% the best accuracy attained by other methods.
On MUTAG, it reaches the third best accuracy among all methods considered. We could not assess
the performance of the Ramon & Gärtner kernel and the random walk kernel on larger datasets,
as their computation did not finish in 48 hours. The labeled size-3 graphlet kernel achieves low
accuracy levels, except on D&D.
To summarize, the WL kernel turns out to be competitive in terms of runtime on all smaller datasets,
fastest on the large protein dataset, and its accuracy levels are highest on three out of four datasets.
5 Conclusions
We have defined a fast subtree kernel on graphs that combines scalability with the ability to deal
with node labels. It is competitive with state-of-the-art kernels on several classification benchmark
datasets in terms of accuracy, even reaching the highest accuracy level on three out of four datasets,
and outperforms them significantly in terms of runtime on large graphs, even the efficient computation schemes for random walk kernels [12] and graphlet kernels [11] that were recently defined.
This new kernel opens the door to applications of graph kernels on large graphs in bioinformatics, for
instance, protein function prediction via detailed graph models of protein structure on the amino acid
level, or on gene networks for phenotype prediction. An exciting algorithmic question for further
studies will be to consider kernels on graphs with continuous or high-dimensional node labels and
their efficient computation.
Acknowledgements
The authors would like to thank Kurt Mehlhorn, Pascal Schweitzer, and Erik Jan van Leeuwen for
fruitful discussions.
References
[1] F. R. Bach. Graph kernels between point clouds. In ICML, pages 25–32, 2008.
[2] K. M. Borgwardt and H.-P. Kriegel. Shortest-path kernels on graphs. In Proc. Intl. Conf. Data Mining, pages 74–81, 2005.
[3] A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem, 34:786–797, 1991.
[4] P. D. Dobson and A. J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. J Mol Biol, 330(4):771–783, Jul 2003.
[5] T. Gärtner, P. A. Flach, and S. Wrobel. On graph kernels: Hardness results and efficient alternatives. In B. Schölkopf and M. Warmuth, editors, Sixteenth Annual Conference on Computational Learning Theory and Seventh Kernel Workshop, COLT. Springer, 2003.
[6] T. Horvath, T. Gärtner, and S. Wrobel. Cyclic pattern kernels for predictive graph mining. In Proceedings of the International Conference on Knowledge Discovery and Data Mining, 2004.
[7] H. Kashima, K. Tsuda, and A. Inokuchi. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning (ICML), Washington, DC, United States, 2003.
[8] P. Mahé, N. Ueda, T. Akutsu, J.-L. Perret, and J.-P. Vert. Extensions of marginalized graph kernels. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[9] P. Mahé and J.-P. Vert. Graph kernels based on tree patterns for molecules. q-bio/0609024, September 2006.
[10] J. Ramon and T. Gärtner. Expressivity versus efficiency of graph kernels. Technical report, First International Workshop on Mining Graphs, Trees and Sequences (held with ECML/PKDD'03), 2003.
[11] N. Shervashidze, S. V. N. Vishwanathan, T. Petri, K. Mehlhorn, and K. M. Borgwardt. Efficient graphlet kernels for large graph comparison. In Artificial Intelligence and Statistics, 2009.
[12] S. V. N. Vishwanathan, Karsten Borgwardt, and Nicol N. Schraudolph. Fast computation of graph kernels. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, Cambridge MA, 2007. MIT Press.
[13] N. Wale and G. Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. In Proc. of ICDM, pages 678–689, Hong Kong, 2006.
[14] B. Weisfeiler and A. A. Lehman. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, Ser. 2, 9, 1968.
3,106 | 3,814 | Learning to Explore and Exploit in POMDPs
Chenghui Cai, Xuejun Liao, and Lawrence Carin
Department of Electrical and Computer Engineering
Duke University
Durham, NC 27708-0291, USA
Abstract
A fundamental objective in reinforcement learning is the maintenance of a proper
balance between exploration and exploitation. This problem becomes more challenging when the agent can only partially observe the states of its environment.
In this paper we propose a dual-policy method for jointly learning the agent behavior and the balance between exploration and exploitation, in partially observable
environments. The method subsumes traditional exploration, in which the agent
takes actions to gather information about the environment, and active learning, in
which the agent queries an oracle for optimal actions (with an associated cost for
employing the oracle). The form of the employed exploration is dictated by the
specific problem. Theoretical guarantees are provided concerning the optimality
of the balancing of exploration and exploitation. The effectiveness of the method
is demonstrated by experimental results on benchmark problems.
1 Introduction
A fundamental challenge facing reinforcement learning (RL) algorithms is to maintain a proper
balance between exploration and exploitation. The policy designed based on previous experiences
is by construction constrained, and may not be optimal as a result of inexperience. Therefore, it
is desirable to take actions with the goal of enhancing experience. Although these actions may not
necessarily yield optimal near-term reward toward the ultimate goal, they could, over a long horizon,
yield improved long-term reward. The fundamental challenge is to achieve an optimal balance
between exploration and exploitation; the former is performed with the goal of enhancing experience
and preventing premature convergence to suboptimal behavior, and the latter is performed with the
goal of employing available experience to define perceived optimal actions.
For a Markov decision process (MDP), the problem of balancing exploration and exploitation has
been addressed successfully by the $E^3$ [4, 5] and R-max [2] algorithms. Many important applications, however, have environments whose states are not completely observed, leading to partially
observable MDPs (POMDPs). Reinforcement learning in POMDPs is challenging, particularly in
the context of balancing exploration and exploitation. Recent work targeted on solving the exploration vs. exploitation problem is based on an augmented POMDP, with a product state space over
the environment states and the unknown POMDP parameters [9]. This, however, entails solving a
complicated planning problem, which has a state space that grows exponentially with the number
of unknown parameters, making the problem quickly intractable in practice. To mitigate this complexity, active learning methods have been proposed for POMDPs, which borrow similar ideas from
supervised learning, and apply them to selectively query an oracle (domain expert) for the optimal
action [3]. Active learning has found success in many collaborative human-machine tasks where
expert advice is available.
In this paper we propose a dual-policy approach to balance exploration and exploitation in POMDPs,
by simultaneously learning two policies with partially shared internal structure. The first policy,
termed the primary policy, defines actions based on previous experience; the second policy, termed
the auxiliary policy, is a meta-level policy maintaining a proper balance between exploration and
exploitation. We employ the regionalized policy representation (RPR) [6] to parameterize both
policies, and perform Bayesian learning to update the policy posteriors. The approach applies in
either of two cases: (i) the agent explores by randomly taking the actions that have been insufficiently
tried before (traditional exploration), or (ii) the agent explores by querying an oracle for the optimal
action (active learning). In the latter case, the agent is assessed a query cost from the oracle, in
addition to the reward received from the environment. Either (i) or (ii) is employed as an exploration
vehicle, depending upon the application.
The dual-policy approach possesses interesting convergence properties, similar to those of $E^3$ [5] and Rmax [2]. However, our approach assumes the environment is a POMDP while $E^3$ and Rmax
both assume an MDP environment. Another distinction is that our approach learns the agent policy
directly from episodes, without estimating the POMDP model. This is in contrast to E3 and Rmax
(both learn MDP models) and the active-learning method in [3] (which learns POMDP models).
2 Regionalized Policy Representation
We first provide a brief review of the regionalized policy representation, which is used to parameterize the primary policy and the auxiliary policy as discussed above. The material in this section is
taken from [6], with the proofs omitted here.
Definition 2.1 A regionalized policy representation is a tuple $(A, O, Z, W, \mu, \pi)$. The A and O are respectively a finite set of actions and observations. The Z is a finite set of belief regions. The W is the belief-region transition function, with $W(z, a, o', z')$ denoting the probability of transiting from z to z′ when taking action a in z results in observing o′. The $\mu$ is the initial distribution of belief regions, with $\mu(z)$ denoting the probability of initially being in z. The $\pi$ are the region-dependent stochastic policies, with $\pi(z, a)$ denoting the probability of taking action a in z.
We denote A = {1, 2, . . . , |A|}, where |A| is the cardinality of A. Similarly, O = {1, 2, . . . , |O|} and Z = {1, 2, . . . , |Z|}. We abbreviate $(a_0, a_1, \ldots, a_T)$ as $a_{0:T}$ and similarly, $(o_1, o_2, \ldots, o_T)$ as $o_{1:T}$ and $(z_0, z_1, \ldots, z_T)$ as $z_{0:T}$, where the subscripts index discrete time steps. The history $h_t = \{a_{0:t-1}, o_{1:t}\}$ is defined as a sequence of actions performed and observations received up to t. Let $\Theta = \{\mu, \pi, W\}$ denote the RPR parameters. Given $h_t$, the RPR yields a joint probability distribution of $z_{0:t}$ and $a_{0:t}$ as follows:

$$p(a_{0:t}, z_{0:t} \mid o_{1:t}, \Theta) = \mu(z_0)\,\pi(z_0, a_0) \prod_{\tau=1}^{t} W(z_{\tau-1}, a_{\tau-1}, o_\tau, z_\tau)\,\pi(z_\tau, a_\tau) \qquad (1)$$
By marginalizing $z_{0:t}$ out in (1), we obtain $p(a_{0:t} \mid o_{1:t}, \Theta)$. Furthermore, the history-dependent distribution of action choices is obtained as follows:

$$p(a_\tau \mid h_\tau, \Theta) = p(a_{0:\tau} \mid o_{1:\tau}, \Theta)\,[p(a_{0:\tau-1} \mid o_{1:\tau-1}, \Theta)]^{-1},$$

which gives a stochastic policy for choosing the action $a_\tau$. The action choice depends solely on the historical actions and observations, with the unobservable belief regions marginalized out.
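For illustration, this marginalization reduces to a standard forward recursion over belief regions; the following Python sketch (array shapes and names are our own assumptions, with mu, pi, W as NumPy arrays) computes $p(a_t \mid h_t, \Theta)$ for all actions at once:

```python
import numpy as np

def action_distribution(mu, pi, W, actions, observations):
    """History-dependent action distribution p(a_t | h_t, Theta) of an RPR.

    mu: (Z,) initial belief-region distribution
    pi: (Z, A) region-conditional action probabilities pi(z, a)
    W:  (Z, A, O, Z) belief-region transitions W(z, a, o', z')
    actions a_0..a_{t-1} and observations o_1..o_t define the history h_t.
    Returns a length-A vector, the marginal of (1) over belief regions.
    """
    # forward message: alpha[z] = p(a_{0:tau-1}, z_tau = z | o_{1:tau})
    alpha = mu.copy()
    for a, o in zip(actions, observations):
        alpha = (alpha * pi[:, a]) @ W[:, a, o, :]
    p_joint = alpha @ pi          # p(a_{0:t-1}, a_t = a | o_{1:t}) per action
    return p_joint / alpha.sum()  # divide by p(a_{0:t-1} | o_{1:t-1})
```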
2.1 Learning Criterion
Bayesian learning of the RPR is based on the experiences collected from the agent-environment
interaction. Assuming the interaction is episodic, i.e., it breaks into subsequences called episodes
[10], we represent the experiences by a set of episodes.
Definition 2.2 An episode is a sequence of agent-environment interactions terminated in an absorbing state that transits to itself with zero reward. An episode is denoted by $(a_0^k r_0^k o_1^k a_1^k r_1^k \cdots o_{T_k}^k a_{T_k}^k r_{T_k}^k)$, where the subscripts are discrete times, k indexes the episodes, and o, a, and r are respectively observations, actions, and immediate rewards.
Definition 2.3 (The RPR Optimality Criterion) Let $\mathcal{D}^{(K)} = \{(a_0^k r_0^k o_1^k a_1^k r_1^k \cdots o_{T_k}^k a_{T_k}^k r_{T_k}^k)\}_{k=1}^{K}$ be a set of episodes obtained by an agent interacting with the environment by following policy $\Pi$ to select actions, where $\Pi$ is an arbitrary stochastic policy with action-selecting distributions $p_\Pi(a_t \mid h_t) > 0$, ∀ action $a_t$, ∀ history $h_t$. The RPR optimality criterion is defined as

$$\widehat{V}(\mathcal{D}^{(K)}; \Theta) \stackrel{\mathrm{def.}}{=} \frac{1}{K} \sum_{k=1}^{K} \sum_{t=0}^{T_k} \gamma^t r_t^k \frac{\prod_{\tau=0}^{t} p(a_\tau^k \mid h_\tau^k, \Theta)}{\prod_{\tau=0}^{t} p_\Pi(a_\tau^k \mid h_\tau^k)} \qquad (2)$$

where $h_t^k = a_0^k o_1^k a_1^k \cdots o_t^k$ is the history of actions and observations up to time t in the k-th episode, $0 < \gamma < 1$ is the discount, and $\Theta$ denotes the RPR parameters.
Throughout the paper, we call $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ the empirical value function of $\Theta$. It is proven in [6] that $\lim_{K \to \infty} \widehat{V}(\mathcal{D}^{(K)}; \Theta)$ is the expected sum of discounted rewards by following the RPR policy parameterized by $\Theta$ for an infinite number of steps. Therefore, the RPR resulting from maximization of $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ approaches the optimal as K is large (assuming |Z| is appropriate). In the Bayesian setting discussed below, we use a noninformative prior for $\Theta$, leading to a posterior of $\Theta$ peaked at the optimal RPR; therefore the agent is guaranteed to sample the optimal or a near-optimal policy with overwhelming probability.
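Criterion (2) is straightforward to evaluate from data; a minimal sketch using the same forward message as in the earlier snippet (the episode encoding below is our own assumption):

```python
def empirical_value(episodes, behavior_probs, mu, pi, W, gamma):
    """Empirical value V-hat(D^(K); Theta) of equation (2) (a sketch).

    episodes:       list of episodes; each is a list of (a, r, o_next)
                    triples, with o_next a dummy value on the final step
                    (it is only used to advance the forward message)
    behavior_probs: matching nested lists of p_Pi(a_t | h_t) under the
                    sampling policy Pi
    mu, pi, W:      NumPy arrays as in the action_distribution sketch
    """
    total = 0.0
    for steps, probs in zip(episodes, behavior_probs):
        alpha = mu.copy()   # p(a_{0:t-1}, z_t | o_{1:t}), unnormalized
        weight = 1.0        # running importance weight over tau = 0..t
        for t, ((a, r, o_next), p_beh) in enumerate(zip(steps, probs)):
            p_act = (alpha @ pi)[a] / alpha.sum()  # p(a_t | h_t, Theta)
            weight *= p_act / p_beh
            total += (gamma ** t) * r * weight
            alpha = (alpha * pi[:, a]) @ W[:, a, o_next, :]
    return total / len(episodes)
```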
2.2 Bayesian Learning
Let $G_0(\Theta)$ represent the prior distribution of the RPR parameters. We define the posterior of $\Theta$ as

$$p(\Theta \mid \mathcal{D}^{(K)}, G_0) \stackrel{\mathrm{def.}}{=} \widehat{V}(\mathcal{D}^{(K)}; \Theta)\,G_0(\Theta)\,[\widehat{V}(\mathcal{D}^{(K)})]^{-1} \qquad (3)$$

where $\widehat{V}(\mathcal{D}^{(K)}) = \int \widehat{V}(\mathcal{D}^{(K)}; \Theta)\,G_0(\Theta)\,d\Theta$ is the marginal empirical value. Note that $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ is an empirical value function, thus (3) is a non-standard use of Bayes rule. However, (3) indeed gives a distribution whose shape incorporates both the prior and the empirical information.
Since each term in $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ is a product of multinomial distributions, it is natural to choose the prior as a product of Dirichlet distributions,

$$G_0(\Theta) = p(\mu \mid \upsilon)\,p(\pi \mid \rho)\,p(W \mid \omega) \qquad (4)$$

where $p(\mu \mid \upsilon) = \mathrm{Dir}(\mu(1), \cdots, \mu(|Z|) \mid \upsilon)$, $p(\pi \mid \rho) = \prod_{i=1}^{|Z|} \mathrm{Dir}(\pi(i,1), \cdots, \pi(i,|A|) \mid \rho_i)$, and $p(W \mid \omega) = \prod_{a=1}^{|A|} \prod_{o=1}^{|O|} \prod_{i=1}^{|Z|} \mathrm{Dir}(W(i,a,o,1), \cdots, W(i,a,o,|Z|) \mid \omega_{i,a,o})$; here $\rho_i = \{\rho_{i,m}\}_{m=1}^{|A|}$, $\upsilon = \{\upsilon_i\}_{i=1}^{|Z|}$, and $\omega_{i,a,o} = \{\omega_{i,a,o,j}\}_{j=1}^{|Z|}$ are hyper-parameters.
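For concreteness, a random RPR can be drawn from this prior in a few lines (a sketch; the hyper-parameter names follow our reconstruction of (4)):

```python
import numpy as np

def sample_rpr_prior(upsilon, rho, omega, rng=None):
    """Draw Theta = (mu, pi, W) from the Dirichlet-product prior (4).

    upsilon: (Z,) hyper-parameters for mu
    rho:     (Z, A) hyper-parameters, one Dirichlet row per belief region
    omega:   (Z, A, O, Z) hyper-parameters, one Dirichlet per (z, a, o')
    """
    rng = rng or np.random.default_rng()
    Z, A, O, _ = omega.shape
    mu = rng.dirichlet(upsilon)
    pi = np.stack([rng.dirichlet(rho[i]) for i in range(Z)])
    W = np.empty((Z, A, O, Z))
    for i in range(Z):
        for a in range(A):
            for o in range(O):
                W[i, a, o] = rng.dirichlet(omega[i, a, o])
    return mu, pi, W
```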
With the prior thus chosen, the posterior in (3) is a large mixture of Dirichlet products, and therefore posterior analysis by Gibbs sampling is inefficient. To overcome this, we employ the variational Bayesian technique [1] to obtain a variational posterior by maximizing a lower bound to $\ln \int \widehat{V}(\mathcal{D}^{(K)}; \Theta)\,G_0(\Theta)\,d\Theta$:

$$LB(\{q_t^k\}, g(\Theta)) = \ln \int \widehat{V}(\mathcal{D}^{(K)}; \Theta)\,G_0(\Theta)\,d\Theta - KL\big(\{q_t^k(z_{0:t}^k)\,g(\Theta)\}\,\big\|\,\{\nu_t^k\,p(z_{0:t}^k, \Theta \mid a_{0:t}^k, o_{1:t}^k)\}\big)$$
where $\{q_t^k\}$ and $g(\Theta)$ are variational distributions satisfying $q_t^k(z_{0:t}^k) \geq 0$, $g(\Theta) \geq 0$, $\int g(\Theta)\,d\Theta = 1$, and $\frac{1}{K} \sum_{k=1}^{K} \sum_{t=1}^{T_k} \sum_{z_0^k, \cdots, z_t^k = 1}^{|Z|} q_t^k(z_{0:t}^k) = 1$; here

$$\nu_t^k = \frac{\gamma^t r_t^k\, p(a_{0:t}^k \mid o_{1:t}^k, \Theta)}{\prod_{\tau=0}^{t} p_\Pi(a_\tau^k \mid h_\tau^k)\; \widehat{V}(\mathcal{D}^{(K)})}$$

and $KL(q \| p)$ denotes the Kullback-Leibler (KL) distance between probability measures q and p.
The factorized form $\{q_t(z_{0:t})\,g(\Theta)\}$ represents an approximation of the weighted joint posterior of $\Theta$ and the z's when the lower bound reaches the maximum, and the corresponding $g(\Theta)$ is called the variational approximate posterior of $\Theta$. The lower bound maximization is accomplished by solving for $\{q_t(z_{0:t})\}$ and $g(\Theta)$ alternately, keeping one fixed while solving for the other. The solutions are summarized in Theorem 2.4; the proof is in [6].
Theorem 2.4 Given the initialization $\hat{\upsilon} = \upsilon$, $\hat{\rho} = \rho$, $\hat{\omega} = \omega$, iterative application of the following updates produces a sequence of monotonically increasing lower bounds $LB(\{q_t^k\}, g(\Theta))$, which converges to a maximum. The update of $\{q_t^k\}$ is

$$q_t^k(z_{0:t}^k) = \nu_t^k\, p(z_{0:t}^k \mid a_{0:t}^k, o_{1:t}^k, \widetilde{\Theta})$$

where $\widetilde{\Theta} = \{\widetilde{\mu}, \widetilde{\pi}, \widetilde{W}\}$ is a set of under-normalized probability mass functions, with $\widetilde{\pi}(i, m) = e^{\psi(\hat{\rho}_{i,m}) - \psi(\sum_{m=1}^{|A|} \hat{\rho}_{i,m})}$, $\widetilde{\mu}(i) = e^{\psi(\hat{\upsilon}_i) - \psi(\sum_{i=1}^{|Z|} \hat{\upsilon}_i)}$, and $\widetilde{W}(i, a, o, j) = e^{\psi(\hat{\omega}_{i,a,o,j}) - \psi(\sum_{j=1}^{|Z|} \hat{\omega}_{i,a,o,j})}$, and $\psi$ is the digamma function. The $g(\Theta)$ has the same form as the prior $G_0$ in (4), except that the hyper-parameters are updated as

$$\hat{\upsilon}_i = \upsilon_i + \sum_{k=1}^{K} \sum_{t=0}^{T_k} \nu_t^k\, \xi_{t,0}^k(i)$$
$$\hat{\rho}_{i,a} = \rho_{i,a} + \sum_{k=1}^{K} \sum_{t=0}^{T_k} \sum_{\tau=0}^{t} \nu_t^k\, \xi_{t,\tau}^k(i)\, \delta(a_\tau^k, a)$$
$$\hat{\omega}_{i,a,o,j} = \omega_{i,a,o,j} + \sum_{k=1}^{K} \sum_{t=0}^{T_k} \sum_{\tau=1}^{t} \nu_t^k\, \zeta_{t,\tau-1}^k(i, j)\, \delta(a_{\tau-1}^k, a)\, \delta(o_\tau^k, o) \qquad (5)$$

where $\xi_{t,\tau}^k(i) = p(z_\tau^k = i \mid a_{0:t}^k, o_{1:t}^k, \widetilde{\Theta})$, $\zeta_{t,\tau}^k(i, j) = p(z_\tau^k = i, z_{\tau+1}^k = j \mid a_{0:t}^k, o_{1:t}^k, \widetilde{\Theta})$, and

$$\nu_t^k = \gamma^t r_t^k\, p(a_{0:t}^k \mid o_{1:t}^k, \widetilde{\Theta}) \Big[ \prod_{\tau=0}^{t} p_\Pi(a_\tau^k \mid h_\tau^k)\; \widehat{V}(\mathcal{D}^{(K)} \mid \widetilde{\Theta}) \Big]^{-1}.$$
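All three under-normalized quantities share the form $\exp(\psi(\cdot) - \psi(\sum \cdot))$, so a single helper suffices; a sketch using SciPy's digamma (our own helper, not from the paper):

```python
import numpy as np
from scipy.special import digamma

def under_normalized(hyper):
    """Compute e.g. pi~(i, m) = exp(psi(rho_hat[i, m]) - psi(sum_m rho_hat[i, m])).

    Works for upsilon_hat (1-D), rho_hat (2-D) and omega_hat (4-D) alike,
    normalizing over the last axis of the hyper-parameter array.
    """
    return np.exp(digamma(hyper) - digamma(hyper.sum(axis=-1, keepdims=True)))
```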
3 Dual-RPR: Joint Policy for the Agent Behavior and the Trade-Off Between Exploration and Exploitation
Assume that the agent uses the RPR described in Section 2 to govern its behavior in the unknown POMDP environment (the primary policy). Bayesian learning employs the empirical value function $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ in (2) in place of a likelihood function, to obtain the posterior of the RPR parameters $\Theta$. The episodes $\mathcal{D}^{(K)}$ may be obtained from the environment by following an arbitrary stochastic policy $\Pi$ with $p_\Pi(a \mid h) > 0$, ∀ a, ∀ h. Although any such $\Pi$ guarantees optimality of the resulting RPR, the choice of $\Pi$ affects the convergence speed. A good choice of $\Pi$ avoids episodes that do not bring new information to improve the RPR, and thus the agent does not have to see all possible episodes before the RPR becomes optimal.
In batch learning, all episodes are collected before the learning begins, and thus $\Pi$ is pre-chosen
and does not change during the learning [6]. In online learning, however, the episodes are collected
during the learning, and the RPR is updated upon completion of each episode. Therefore there is
a chance to exploit the RPR to avoid repeated learning in the same part of the environment. The
agent should recognize belief regions it is familiar with, and exploit the existing RPR policy there;
in belief regions inferred as new, the agent should explore. This balance between exploration and
exploitation is performed with the goal of accumulating a large long-run reward.
We consider online learning of the RPR (as the primary policy) and choose $\Pi$ as a mixture of two policies: one is the current RPR $\Theta$ (exploitation) and the other is an exploration policy $\Pi_e$. This gives the action-choosing probability $p_\Pi(a \mid h) = p(y = 0 \mid h)\,p(a \mid h, \Theta, y = 0) + p(y = 1 \mid h)\,p(a \mid h, \Pi_e, y = 1)$, where y = 0 (y = 1) indicates exploitation (exploration). The problem of choosing a good $\Pi$ then reduces to a proper balance between exploitation and exploration: the agent should exploit $\Theta$ when doing so is highly rewarding, while following $\Pi_e$ to enhance experience and improve $\Theta$.
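Operationally, the mixture can be sampled one step at a time. The sketch below (our own names) draws y and then an action; it reuses the forward message of the earlier action_distribution snippet, which is reasonable here because, as described next, the auxiliary policy shares $\{\mu, W\}$ with the primary one:

```python
def select_action(alpha, pi, lam, explore_policy, history, rng):
    """Dual-policy action selection (a sketch).

    alpha:          forward message over belief regions for current history
    pi:             (Z, A) primary-RPR action probabilities
    lam:            (Z, 2) auxiliary probabilities lambda(z, y), y in {0, 1}
    explore_policy: callable mapping the history to an exploration action
    Returns (action, y), with y = 1 when the exploration policy was used.
    """
    belief = alpha / alpha.sum()
    if rng.random() < belief @ lam[:, 1]:       # p(y = 1 | h)
        return explore_policy(history), 1       # explore / query the oracle
    action_probs = belief @ pi                  # p(a | h, Theta, y = 0)
    return rng.choice(len(action_probs), p=action_probs), 0
```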
An auxiliary RPR is employed to represent the policy for balancing exploration and exploitation, i.e., the history-dependent distribution $p(y \mid h)$. The auxiliary RPR shares the parameters $\{\mu, W\}$ with the primary RPR, but with $\pi = \{\pi(z, a) : a \in A, z \in Z\}$ replaced by $\lambda = \{\lambda(z, y) : y = 0 \text{ or } 1, z \in Z\}$, where $\lambda(z, y)$ is the probability of choosing exploitation (y = 0) or exploration (y = 1) in belief region z. Let $\lambda$ have the prior

$$p(\lambda \mid u) = \prod_{i=1}^{|Z|} \mathrm{Beta}(\lambda(i, 0), \lambda(i, 1) \mid u_0, u_1). \qquad (6)$$
In order to encourage exploration when the agent has little experience, we choose u0 = 1 and u1 > 1
so that, at the beginning of learning, the auxiliary RPR always suggests exploration. As the agent
accumulates episodes of experience, it comes to know a certain part of the environment in which the
episodes have been collected. This knowledge is reflected in the auxiliary RPR, which, along with
the primary RPR, is updated upon completion of each new episode.
Since the environment is a POMDP, the agent's knowledge should be represented in the space of belief states. However, the agent cannot directly access the belief states, because computation of belief states requires knowing the true POMDP model, which is not available. Fortunately, the RPR formulation provides a compact representation of $\mathcal{H} = \{h\}$, the space of histories, where each history h corresponds to a belief state in the POMDP. Within the RPR formulation, $\mathcal{H}$ is represented internally as the set of distributions over belief regions $z \in Z$, which allows the agent to access $\mathcal{H}$ based on a subset of samples from $\mathcal{H}$. Let $\mathcal{H}_{known}$ be the part of $\mathcal{H}$ that has become known to the agent, i.e., the primary RPR is optimal in $\mathcal{H}_{known}$ and thus the agent should begin to exploit upon entering $\mathcal{H}_{known}$. As will be clear below, $\mathcal{H}_{known}$ can be identified by $\mathcal{H}_{known} = \{h : p(y = 0 \mid h, \Theta, \lambda) \approx 1\}$, if the posterior of $\lambda$ is updated by
$$\hat{u}_{i,0} = u_0 + \sum_{k=1}^{K} \sum_{t=0}^{T_k} \sum_{\tau=0}^{t} \nu_t^k\, \xi_{t,\tau}^k(i), \qquad (7)$$
$$\hat{u}_{i,1} = \max\Big(\epsilon,\; u_1 - \sum_{k=1}^{K} \sum_{t=0}^{T_k} \sum_{\tau=0}^{t} y_t^k\, \gamma^t c\; \xi_{t,\tau}^k(i)\Big), \qquad (8)$$
where $\epsilon$ is a small positive number, and $\nu_t^k$ is the same as in (5) except that $r_t^k$ is replaced by $m_t^k$, the meta-reward received at t in episode k. We have $m_t^k = r_{meta}$ if the goal is reached at time t in episode k, and $m_t^k = 0$ otherwise, where $r_{meta} > 0$ is a constant. When $\Pi_e$ is provided by an oracle (active learning), a query cost c > 0 is taken into account in (8), by subtracting c from $u_1$. Thus, the probability of exploration is reduced each time the agent makes a query to the oracle (i.e., $y_t^k = 1$). After a certain number of queries, $\hat{u}_{i,1}$ becomes the small positive number $\epsilon$ (it never becomes zero due to the max operator), at which point the agent stops querying in belief region z = i.
In (7) and (8), exploitation always receives a 'credit', while exploration never receives credit (exploration is actually discredited when $\Pi_e$ is an oracle). This update makes sure that the chance of exploitation monotonically increases as the episodes accumulate. Exploration receives no credit because it has been pre-assigned a credit ($u_1$) in the prior, and the chance of exploration should monotonically decrease with the accumulation of episodes. The parameter $u_1$ represents the agent's prior for the amount of needed exploration. When c > 0, $u_1$ is discredited by the cost and the agent needs a larger $u_1$ (than when c = 0) to obtain the same amount of exploration. The fact that the amount of exploration monotonically increases with $u_1$ implies that one can always find a large enough $u_1$ to ensure that the primary RPR is optimal in $\mathcal{H}_{known} = \{h : p(y = 0 \mid h, \Theta, \lambda) \approx 1\}$. However, an unnecessarily large $u_1$ makes the agent over-explore and leads to slow convergence.
Let $u_1^{min}$ denote the minimum $u_1$ that ensures optimality in $\mathcal{H}_{known}$. We assume $u_1^{min}$ exists in the analysis below. The possible range of $u_1^{min}$ is examined in the experiments.
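A sketch of the updates (7)-(8) in code form (the statistics interface is our own assumption; in practice $\nu_t^k$ and $\xi_{t,\tau}^k$ would come from the inference of Theorem 2.4 run with meta-rewards, and the $\gamma^t c$ term follows our reconstruction of (8)):

```python
import numpy as np

def update_exploration_prior(u0, u1, stats, cost, eps, Z):
    """Posterior updates (7)-(8) for the auxiliary Beta parameters (a sketch).

    stats: list of (nu, xi, y, gamma_t) tuples, one per (k, t, tau) term,
    where nu = nu_t^k, xi = xi_{t,tau}^k (a length-Z responsibility vector),
    y = y_t^k in {0, 1}, and gamma_t = gamma ** t.
    """
    u_hat0 = np.full(Z, float(u0))
    u_hat1 = np.full(Z, float(u1))
    for nu, xi, y, gamma_t in stats:
        u_hat0 += nu * np.asarray(xi)                  # exploitation credit, (7)
        u_hat1 -= y * gamma_t * cost * np.asarray(xi)  # query cost, (8)
    return u_hat0, np.maximum(eps, u_hat1)             # never reaches zero
```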
4 Optimality and Convergence Analysis
Let M be the true POMDP model. We first introduce an equivalent expression for the empirical value function in (2),

$$\widehat{V}(\mathcal{E}_T^{(K)}; \Theta) = \sum_{\mathcal{E}_T^{(K)}} \sum_{t=0}^{T} \gamma^t r_t\, p(a_{0:t}, o_{1:t}, r_t \mid y_{0:t} = 0, \Theta, M), \qquad (9)$$

where the first summation is over all elements in $\mathcal{E}_T^{(K)} \subseteq \mathcal{E}_T$, and $\mathcal{E}_T = \{(a_{0:T}, o_{1:T}, r_{0:T}) : a_t \in A, o_t \in O, t = 0, 1, \cdots, T\}$ is the complete set of episodes of length T in the POMDP, with no repeated elements. The condition $y_{0:t} = 0$, which is an abbreviation for $y_\tau = 0\ \forall\, \tau = 0, 1, \cdots, t$, indicates that the agent always follows the RPR ($\Theta$) here. Note that $\widehat{V}(\mathcal{E}_T^{(K)}; \Theta)$ is the empirical value function of $\Theta$ defined on $\mathcal{E}_T^{(K)}$, as is $\widehat{V}(\mathcal{D}^{(K)}; \Theta)$ on $\mathcal{D}^{(K)}$. When $T = \infty$ (see footnote 1), the two are identical up to a difference in acquiring the episodes: $\mathcal{E}_T^{(K)}$ is a simple enumeration of distinct episodes while $\mathcal{D}^{(K)}$ may contain identical episodes. The multiplicity of an episode in $\mathcal{D}^{(K)}$ results from the sampling process (by following a policy to interact with the environment). Note that the empirical value function defined using $\mathcal{E}_T^{(K)}$ is interesting only for theoretical analysis, because the evaluation requires knowing the true POMDP model, not available in practice. We define the optimistic value function
$$\widehat{V}_f(\mathcal{E}_T^{(K)}; \Theta, \lambda, \Pi_e) = \sum_{\mathcal{E}_T^{(K)}} \sum_{t=0}^{T} \sum_{y_0, \cdots, y_t = 0}^{1} \gamma^t \Big[ r_t + (R_{max} - r_t) \bigvee_{\tau=0}^{t} y_\tau \Big]\, p(a_{0:t}, o_{1:t}, r_t, y_{0:t} \mid \Theta, \lambda, M, \Pi_e) \qquad (10)$$

where $\bigvee_{\tau=0}^{t} y_\tau$ indicates that the agent receives $r_t$ if and only if $y_\tau = 0$ at all time steps $\tau = 1, 2, \cdots, t$; otherwise, it receives $R_{max}$ at t, which is an upper bound of the rewards in the environment. Similarly we can define $\widehat{V}_f(\mathcal{D}^{(K)}; \Theta, \lambda, \Pi_e)$, the equivalent expression for $\widehat{V}_f(\mathcal{E}_T^{(K)}; \Theta, \lambda, \Pi_e)$.
The following lemma is proven in the Appendix.

Lemma 4.1 Let $\hat{V}(E_T^{(K)};\Theta)$, $\hat{V}_f(E_T^{(K)};\Theta,\lambda,\pi_e)$, and $R_{\max}$ be defined as above. Let $P_{\rm explore}(E_T^{(K)},\Theta,\lambda,\pi_e)$ be the probability of executing the exploration policy $\pi_e$ at least once in some episode in $E_T^{(K)}$, under the auxiliary RPR $(\Theta,\lambda)$ and the exploration policy $\pi_e$. Then
$$P_{\rm explore}(E_T^{(K)},\Theta,\lambda,\pi_e) \ \geq\ \frac{1-\gamma}{R_{\max}}\, \big|\hat{V}(E_T^{(K)};\Theta) - \hat{V}_f(E_T^{(K)};\Theta,\lambda,\pi_e)\big|.$$

$^1$An episode almost always terminates in finite time steps in practice, and the agent stays in the absorbing state with zero reward for the remaining infinite steps after an episode is terminated [10]. The infinite horizon is only to ensure theoretically that all episodes have the same horizon length.
Proposition 4.2 Let $\Theta$ be the optimal RPR on $E_\infty^{(K)}$ and $\Theta^*$ be the optimal RPR in the complete POMDP environment. Let the auxiliary RPR hyper-parameters ($\lambda$) be updated according to (7) and (8), with $u_1 \geq u_1^{\min}$. Let $\pi_e$ be the exploration policy and $\epsilon \geq 0$. Then either (a) $\hat{V}(E_\infty;\Theta) \geq \hat{V}(E_\infty;\Theta^*) - \epsilon$, or (b) the probability that the auxiliary RPR suggests executing $\pi_e$ in some episode unseen in $E_\infty^{(K)}$ is at least $\frac{\epsilon(1-\gamma)}{R_{\max}}$.
Proof: It is sufficient to show that if (a) does not hold, then (b) must hold. Let us assume $\hat{V}(E_\infty;\Theta) < \hat{V}(E_\infty;\Theta^*) - \epsilon$. Because $\Theta$ is optimal in $E_\infty^{(K)}$, $\hat{V}(E_\infty^{(K)};\Theta) \geq \hat{V}(E_\infty^{(K)};\Theta^*)$, which implies $\hat{V}(E_\infty^{(\backslash K)};\Theta) < \hat{V}(E_\infty^{(\backslash K)};\Theta^*) - \epsilon$, where $E_\infty^{(\backslash K)} = E_\infty \setminus E_\infty^{(K)}$. We show below that $\hat{V}_f(E_\infty^{(\backslash K)};\Theta,\lambda,\pi_e) \geq \hat{V}(E_\infty^{(\backslash K)};\Theta^*)$ which, together with Lemma 4.1, implies
$$P_{\rm explore}(E_\infty^{(\backslash K)},\Theta,\lambda,\pi_e) \ \geq\ \frac{1-\gamma}{R_{\max}}\Big[\hat{V}_f(E_\infty^{(\backslash K)};\Theta,\lambda,\pi_e) - \hat{V}(E_\infty^{(\backslash K)};\Theta)\Big] \ \geq\ \frac{1-\gamma}{R_{\max}}\Big[\hat{V}(E_\infty^{(\backslash K)};\Theta^*) - \hat{V}(E_\infty^{(\backslash K)};\Theta)\Big] \ \geq\ \frac{\epsilon(1-\gamma)}{R_{\max}}.$$
We now show $\hat{V}_f(E_\infty^{(\backslash K)};\Theta,\lambda,\pi_e) \geq \hat{V}(E_\infty^{(\backslash K)};\Theta^*)$. By construction, $\hat{V}_f(E_\infty^{(\backslash K)};\Theta,\lambda,\pi_e)$ is an optimistic value function, in which the agent receives $R_{\max}$ at any time $t$ unless $y_\tau = 0$ at $\tau = 0,1,\cdots,t$. However, $y_\tau = 0$ at $\tau = 0,1,\cdots,t$ implies that $\{h_\tau : \tau = 0,1,\cdots,t\} \subset H_{\rm known}$. By the premise, $\lambda$ is updated according to (7) and (8) and $u_1 \geq u_1^{\min}$; therefore $\Theta$ is optimal in $H_{\rm known}$ (see the discussion following (7) and (8)), which implies $\Theta$ is optimal in $\{h_\tau : \tau = 0,1,\cdots,t\}$. Thus, the inequality holds.
Q.E.D.
Proposition 4.2 shows that whenever the primary RPR achieves less accumulative reward than the optimal RPR by $\epsilon$, the auxiliary RPR suggests exploration with a probability exceeding $\epsilon(1-\gamma)R_{\max}^{-1}$. Conversely, whenever the auxiliary RPR suggests exploration with a probability smaller than $\epsilon(1-\gamma)R_{\max}^{-1}$, the primary RPR achieves $\epsilon$-near optimality. This ensures that the agent is either receiving sufficient rewards or performing sufficient exploration.
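At run time, the interplay between the two policies can be pictured with the following minimal control loop (a sketch under assumed interfaces: belief_region, p_exploit, act, and env.step are hypothetical stand-ins, not the paper's API):

```python
import random

def run_episode(env, primary, auxiliary, explore_policy, max_steps=251):
    """Dual-policy sketch: at each step the auxiliary RPR decides whether
    to exploit (follow the primary RPR) or explore (follow pi_e)."""
    history = []
    obs = env.reset()
    for _ in range(max_steps):
        region = auxiliary.belief_region(history)                 # belief region z = i
        exploit = random.random() < auxiliary.p_exploit(region)   # y = 0 w.p. p(y=0|z=i)
        action = primary.act(history) if exploit else explore_policy.act(history)
        obs, reward, done = env.step(action)
        history.append((action, obs, reward, 0 if exploit else 1))  # record y
        if done:
            break
    return history
```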
5 Experimental Results
Our experiments are based on Shuttle, a benchmark POMDP problem [7], with the following setup. The primary policy is an RPR with $|Z| = 10$ and a prior in (4), with all hyper-parameters initially set to one (which makes the initial prior non-informative). The auxiliary policy is an RPR sharing $\{\pi, W\}$ with the primary RPR and having a prior for $\lambda$ as in (6). The prior of $\lambda$ is initially biased towards exploration by using $u_0 = 1$ and $u_1 > 1$. We consider various values of $u_1$ to examine the different effects. The agent performs online learning: upon termination of each new episode, the primary and auxiliary RPR posteriors are updated using the previous posteriors as the current priors. The primary RPR update follows Theorem 2.4 with $K = 1$, while the auxiliary RPR update follows (7) and (8) for $\lambda$ (it shares the same update with the primary RPR for $\pi$ and $W$). We perform 100 independent Monte Carlo runs. In each run, the agent starts learning from a random position in the environment and stops learning when $K_{\rm total}$ episodes are completed. We compare various methods that the agent uses to balance exploration and exploitation: (i) following the auxiliary RPR, with various values of $u_1$, to adaptively switch between exploration and exploitation; (ii) randomly switching between exploration and exploitation with a fixed exploration rate $P_{\rm explore}$ (various values of $P_{\rm explore}$ are examined). When performing exploitation, the agent follows the current primary RPR (using the $\Theta$ that maximizes the posterior); when performing exploration, it follows an exploration policy $\pi_e$. We consider two types of $\pi_e$: (i) taking random actions and (ii) following the policy obtained by solving the true POMDP using PBVI [8] with 2000 belief samples. In either case, $r_{\rm meta} = 1$ and $\epsilon = 0.001$. In case (ii), the PBVI policy is the oracle and incurs a query cost $c$.
We report: (i) the sum of discounted rewards accrued within each episode during learning; these rewards result from both exploitation and exploration; (ii) the quality of the primary RPR upon termination of each learning episode, represented by the sum of discounted rewards averaged over 251 episodes of following the primary RPR (using the standard testing procedure for Shuttle: each episode is terminated when either the goal is reached or a maximum of 251 steps is taken); these rewards result from exploitation alone; (iii) the exploration rate $P_{\rm explore}$ in each learning episode, which is the number of time steps at which exploration is performed divided by the total time steps in a given episode. In order to examine optimality, the rewards in (i)-(ii) have the corresponding optimal rewards subtracted, where the optimal rewards are obtained by following the PBVI policy; the differences are reported, with zero difference indicating optimality and negative difference indicating sub-optimality. All results are averaged over the 100 Monte Carlo runs. The results are summarized in Figure 1 when $\pi_e$ takes random actions and in Figure 2 when $\pi_e$ is an oracle (the PBVI policy).
[Figure 1: three panels plotting, against the number of episodes used in the learning phase (0-3000), the accrued learning reward minus the optimal reward (left), the accrued testing reward minus the optimal reward (middle), and the exploration rate (right), for Dual-RPR with $u_1 \in \{2, 20, 200\}$ and for RPR with fixed $P_{\rm explore} \in \{0, 0.1, 0.3, 1.0\}$.]
Figure 1: Results on Shuttle with a random exploration policy, with $K_{\rm total} = 3000$. Left: accumulative discounted reward accrued within each learning episode, with the corresponding optimal reward subtracted. Middle: accumulative discounted rewards averaged over 251 episodes of following the primary RPR obtained after each learning episode, again with the corresponding optimal reward subtracted. Right: the rate of exploration in each learning episode. All results are averaged over 100 independent Monte Carlo runs.
[Figure 2: two rows of three panels each, mirroring Figure 1, for Dual-RPR with $u_1 \in \{2, 10, 20\}$ and RPR with matched fixed exploration rates, over 100 learning episodes; top row $c = 1$, bottom row $c = 3$.]
Figure 2: Results on Shuttle with an oracle exploration policy incurring cost $c = 1$ (top row) and $c = 3$ (bottom row), and $K_{\rm total} = 100$. Each figure in a row is a counterpart of the corresponding figure in Figure 1, with the random $\pi_e$ replaced by the oracle $\pi_e$. See the caption there for details.
It is seen from Figure 1 that, with random exploration and $u_1 = 2$, the primary policy converges to optimality and, accordingly, $P_{\rm explore}$ drops to zero, after about 1500 learning episodes. When $u_1$ increases to 20, the convergence is slower: it does not occur (and $P_{\rm explore} > 0$) until after around 2500 learning episodes. With $u_1$ increased to 200, the convergence does not happen and $P_{\rm explore} > 0.2$ within the first 3000 learning episodes. These results verify our analysis in Sections 3 and 4: (i) the primary policy improves as $P_{\rm explore}$ decreases; (ii) the agent explores when it is not acting optimally and is acting optimally when it stops exploring; (iii) there exists a finite $u_1$ such that the primary policy is optimal if $P_{\rm explore} = 0$. Although $u_1 = 2$ may still be larger than $u_1^{\min}$, it is small enough to ensure convergence within 1500 episodes. We also observe from Figure 1 that: (i) the agent explores more efficiently when it is adaptively switched between exploration and exploitation by the auxiliary policy than when the switch is random; (ii) the primary policy cannot converge to optimality when the agent never explores; (iii) the primary policy may converge to optimality when the agent always takes random actions, but it may need infinitely many learning episodes to converge.
The results in Figure 2, with $\pi_e$ being an oracle, provide similar conclusions to those in Figure 1 when $\pi_e$ is random. However, there are two special observations from Figure 2: (i) $P_{\rm explore}$ is affected by the query cost $c$: with a larger $c$, the agent performs less exploration; (ii) the convergence rate of the primary policy is not significantly affected by the query cost. The reason for (ii) is that the oracle always provides optimal actions, so over-exploration does not harm optimality; as long as the agent takes optimal actions, the primary policy continually improves if it is not yet optimal, or remains optimal if it is already optimal.
6 Conclusions
We have presented a dual-policy approach for jointly learning the agent behavior and the optimal balance between exploitation and exploration, assuming the unknown environment is a POMDP. By identifying a known part of the environment in terms of histories (parameterized by the RPR), the approach adaptively switches between exploration and exploitation depending on whether the agent is in the known part. We have provided theoretical guarantees for the agent to either explore efficiently or exploit efficiently. Experimental results show good agreement with our theoretical analysis and show that our approach finds the optimal policy efficiently. Although we empirically demonstrated the existence of a small $u_1$ that ensures efficient convergence to optimality, further theoretical analysis is needed to find $u_1^{\min}$, the tight lower bound of $u_1$, which ensures convergence to optimality with just the right amount of exploration (without over-exploration). Finding the exact $u_1^{\min}$ is difficult because of the partial observability. However, it may be possible to find a good approximation to $u_1^{\min}$. In the worst case, the agent can always choose to be optimistic, as in E3 and R-max. An optimistic agent uses a large $u_1$, which usually leads to over-exploration but ensures convergence to optimality.
7 Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. This work is supported by AFOSR.
Appendix
Proof of Lemma 4.1: We expand (10) as
$$\hat{V}_f(E_T^{(K)};\Theta,\lambda,\pi_e) = \sum_{E_T^{(K)}} \sum_{t=0}^{T} \gamma^t r_t\, p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t}=0,\Theta,M)\, p(y_{0:t}=0\,|\,\Theta,\lambda)$$
$$\qquad\qquad +\, \sum_{E_T^{(K)}} \sum_{t=0}^{T} \gamma^t R_{\max} \sum_{y_{0:t}\neq 0} p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t},\Theta,M,\pi_e)\, p(y_{0:t}\,|\,\Theta,\lambda) \qquad (11)$$
where $y_{0:t}=0$ is an abbreviation for $y_\tau = 0\ \forall\,\tau = 0,\cdots,t$, and $y_{0:t}\neq 0$ is an abbreviation for $\exists\, 0 \leq \tau \leq t$ satisfying $y_\tau \neq 0$. The sum $\sum_{E_T^{(K)}}$ is over all episodes in $E_T^{(K)}$. The difference between (9) and (11) is
$$\big|\hat{V}(E_T^{(K)};\Theta) - \hat{V}_f(E_T^{(K)};\Theta,\lambda,\pi_e)\big|$$
$$= \Big|\sum_{E_T^{(K)}}\sum_{t=0}^{T} \gamma^t r_t\, p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t}=0,\Theta,M)\big(1 - p(y_{0:t}=0\,|\,\Theta,\lambda)\big) - \sum_{E_T^{(K)}}\sum_{t=0}^{T} \gamma^t R_{\max} \sum_{y_{0:t}\neq 0} p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t},\Theta,M,\pi_e)\, p(y_{0:t}\,|\,\Theta,\lambda)\Big|$$
$$= \Big|\sum_{E_T^{(K)}}\sum_{t=0}^{T} \gamma^t r_t \sum_{y_{0:t}\neq 0} \Big[p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t}=0,\Theta,M) - \frac{R_{\max}}{r_t}\, p(a_{0:t},o_{1:t},r_t\,|\,y_{0:t},\Theta,M,\pi_e)\Big]\, p(y_{0:t}\,|\,\Theta,\lambda)\Big|$$
$$\leq \sum_{E_T^{(K)}}\sum_{t=0}^{T} \gamma^t R_{\max} \sum_{y_{0:t}\neq 0} p(y_{0:t}\,|\,\Theta,\lambda) \ =\ \sum_{E_T^{(K)}}\sum_{t=0}^{T} \gamma^t R_{\max} \big(1 - p(y_{0:t}=0\,|\,\Theta,\lambda)\big)$$
$$\leq \sum_{E_T^{(K)}} \big(1 - p(y_{0:T}=0\,|\,\Theta,\lambda)\big) \sum_{t=0}^{T} \gamma^t R_{\max} \ \leq\ \frac{R_{\max}}{1-\gamma} \sum_{E_T^{(K)}} \big(1 - p(y_{0:T}=0\,|\,\Theta,\lambda)\big),$$
where $\sum_{y_{0:t}\neq 0}$ is a sum over all sequences $\{y_{0:t} : \exists\, 0 \leq \tau \leq t \text{ satisfying } y_\tau \neq 0\}$.
Q.E.D.
References
[1] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[2] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2002.
[3] F. Doshi, J. Pineau, and N. Roy. Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs. In Proceedings of the 25th International Conference on Machine Learning, pages 256-263. ACM, 2008.
[4] M. Kearns and D. Koller. Efficient reinforcement learning in factored MDPs. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 740-747, 1999.
[5] M. Kearns and S. P. Singh. Near-optimal performance for reinforcement learning in polynomial time. In Proc. ICML, pages 260-268, 1998.
[6] H. Li, X. Liao, and L. Carin. Multi-task reinforcement learning in partially observable stochastic environments. Journal of Machine Learning Research, 10:1131-1186, 2009.
[7] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: scaling up. In ICML, 1995.
[8] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI), pages 1025-1032, August 2003.
[9] P. Poupart and N. Vlassis. Model-based Bayesian reinforcement learning in partially observable domains. In International Symposium on Artificial Intelligence and Mathematics (ISAIM), 2008.
[10] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Conditional Random Fields with High-Order
Features for Sequence Labeling
Nan Ye, Wee Sun Lee
Department of Computer Science
National University of Singapore
{yenan,leews}@comp.nus.edu.sg
Hai Leong Chieu
DSO National Laboratories
[email protected]
Dan Wu
Singapore MIT Alliance
National University of Singapore
[email protected]
Abstract
Dependencies among neighbouring labels in a sequence are an important source of information for sequence labeling problems. However, only dependencies between adjacent labels are commonly exploited in practice because of the high
computational complexity of typical inference algorithms when longer distance
dependencies are taken into account. In this paper, we show that it is possible to
design efficient inference algorithms for a conditional random field using features
that depend on long consecutive label sequences (high-order features), as long as
the number of distinct label sequences used in the features is small. This leads
to efficient learning algorithms for these conditional random fields. We show experimentally that exploiting dependencies using high-order features can lead to
substantial performance improvements for some problems and discuss conditions
under which high-order features can be effective.
1 Introduction
In a sequence labeling problem, we are given an input sequence x and need to label each component
of x with its class to produce a label sequence y. Examples of sequence labeling problems include
labeling words in sentences with their types in named-entity recognition problems [16], handwriting
recognition problems [15], and deciding whether each DNA base in a DNA sequence is part of a
gene in gene prediction problems [2].
Conditional random fields (CRFs) [8] have been successfully applied in many sequence labeling problems. Their chief advantage lies in the fact that they model the conditional distribution P(y|x) rather than the joint distribution P(y, x). In addition, a CRF can effectively encode arbitrary dependencies of y on x, as the learning cost mainly depends on the parts of y involved in the dependencies. However,
the use of high-order features, where a feature of order k is a feature that encodes the dependency
between x and (k + 1) consecutive elements in y, can potentially lead to an exponential blowup in
the computational complexity of inference. Hence, dependencies are usually assumed to exist only
between adjacent components of y, giving rise to linear-chain CRFs, which limit the order of the
features to one.
In this paper, we show that it is possible to learn and predict CRFs with high-order features efficiently under the following pattern sparsity assumption (which is often observed in real problems): the number of observed label sequences of length, say, $k$ that the features depend on is much smaller than $n^k$, where $n$ is the number of possible labels. We give an algorithm for computing the marginals and the CRF log-likelihood gradient that runs in time polynomial in the number and length of the label sequences that the features depend on. The gradient can be used with quasi-Newton methods to efficiently solve the convex log-likelihood optimization problem [14]. We also provide an efficient decoding algorithm for finding the most probable label sequence in the presence of long label
sequence features. This can be used with cutting plane methods to train max-margin solutions for
sequence labeling problems in polynomial time [18].
We show experimentally that using high-order features can improve performance in sequence labeling problems. We show that in handwriting recognition, using even simple high-order indicator
features improves performance over using linear-chain CRFs, and impressive performance improvement is observed when the maximum order of the indicator features is increased. We also use a
synthetic data set to discuss the conditions under which higher order features can be helpful. We
further show that higher order label features can sometimes be more stable under change of data
distribution using a named entity data set.
2 Related Work
Conditional random fields [8] are discriminatively trained, undirected Markov models, which have been shown to perform well in various sequence labeling problems. Although a CRF can be used
to capture arbitrary dependencies among components of x and y, in practice, this flexibility of the
CRF is not fully exploited as inference in Markov models is NP-hard in general (see e.g. [1]), and
can only be performed efficiently for special cases such as linear chains. As such, most applications
involving CRFs are limited to some tractable Markov models. This observation also applies to other
structured prediction methods such as structured support vector machines [15, 18].
A commonly used inference algorithm for CRFs is the clique tree algorithm [5]. Defining a feature depending on $k$ (not necessarily consecutive) labels will require forming a clique of size $k$, resulting in a clique tree with tree-width greater than or equal to $k$. Inference on such a clique tree will be exponential in $k$. For sequence models, a feature of order $k$ can be incorporated into a $k$-order Markov chain, but the complexity of inference is again exponential in $k$. Under the pattern sparsity assumption, our algorithm achieves efficiency by maintaining only information related to the few patterns that actually occur, while previous algorithms maintain information about all (exponentially many) possible patterns.
In the special case of semi-Markov random fields, where high-order features depend on segments of identical labels, the complexity of inference is linear in the maximum length of the segments [13]. The semi-Markov assumption can be seen as defining a sparse feature representation: though the number of length-$k$ label patterns is exponential in $k$, the semi-Markov assumption effectively allows only $n^2$ of them ($n$ is the cardinality of the label set), as the features are defined on a sequence of identical labels that can only depend on the label of the preceding segment. Compared to this approach, our algorithm has the advantage of being able to efficiently handle high-order features having arbitrary label patterns.
Long distance dependencies can also be captured using hierarchical models such as the Hierarchical Hidden Markov Model (HHMM) [4] or the Probabilistic Context Free Grammar (PCFG) [6]. The time complexity of inference in an HHMM is $O(\min\{nl^3, n^2 l\})$ [4, 10], where $n$ is the number of states and $l$ is the length of the sequence. Discriminative versions such as hierarchical CRFs have also been studied [17]. Inference in PCFGs and their discriminative versions can also be done efficiently, in $O(ml^3)$, where $m$ is the number of productions in the grammar [6]. These methods are able to capture dependencies of arbitrary lengths, unlike $k$-order Markov chains. However, to do efficient learning with these methods, the hierarchical structure of the examples needs to be provided. For example, if we use a PCFG to do named entity recognition, we need to provide the parse trees for efficient learning; providing the named entity labels for each word is not sufficient. Hence, a training set that has not been labeled with hierarchical labels will need to be relabeled before it can be trained on efficiently. Alternatively, methods that employ hidden variables can be used (e.g., to infer the hidden parse tree), but the optimization problem is no longer convex, and local optima can sometimes be a problem. Using high-order features captures a less expressive form of dependencies than these models but allows efficient learning without relabeling the training set with hierarchical labels.
Similar work on using higher order features for CRFs was independently done in [11]. Their work applies to a larger class of CRFs, including those requiring exponential time for inference, and they
did not identify subclasses for which inference is guaranteed to be efficient.
3 CRF with High-order Features
Throughout the remainder of this paper, x, y, z (with or without decorations) respectively denote
an observation sequence of length T , a label sequence of length T , and an arbitrary label sequence.
The function $|\cdot|$ denotes the length of any sequence. The set of labels is $Y = \{1, \ldots, n\}$. If $z = (y_1, \ldots, y_t)$, then $z_{i:j}$ denotes $(y_i, \ldots, y_j)$. When $j < i$, $z_{i:j}$ is the empty sequence (denoted by $\epsilon$). Let the features being considered be $f_1, \ldots, f_m$. Each feature $f_i$ is associated with a label sequence $z^i$, called $f_i$'s label pattern, and $f_i$ has the form
$$f_i(x, y, t) = \begin{cases} g_i(x, t), & \text{if } y_{t-|z^i|+1:t} = z^i \\ 0, & \text{otherwise.} \end{cases}$$
We call $f_i$ a feature of order $|z^i| - 1$. Consider, for example, the problem of named entity recognition. The observations $x = (x_1, \ldots, x_T)$ may be a word sequence; $g_i(x, t)$ may be an indicator function for whether $x_t$ is capitalized, or may output a precomputed term weight if $x_t$ matches a particular word; and $z^i$ may be a sequence of two labels, such as (person, organization) for the named entity recognition task, giving a feature of order one.
A CRF defines conditional probability distributions $P(y|x) = Z_x(y)/Z_x$, where $Z_x(y) = \exp\big(\sum_{i=1}^{m} \sum_{t=|z^i|}^{T} \lambda_i f_i(x, y, t)\big)$, and $Z_x = \sum_y Z_x(y)$. The normalization factor $Z_x$ is called the partition function. In this paper, we will use the notation $\sum_{x: Pred(x)} f(x)$ to denote the summation of $f(x)$ over all elements $x$ satisfying the predicate $Pred(x)$.
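As a concrete illustration, here is a small sketch (our own, with hypothetical data structures, and 0-based positions rather than the 1-based notation above) of a high-order feature and the unnormalized score $Z_x(y)$ of a labeling:

```python
import math

def score(x, y, features):
    """Unnormalized score Z_x(y) = exp(sum_i sum_t lambda_i f_i(x, y, t)).

    features: list of (pattern, g, lam) triples; pattern is a tuple of
    labels z^i, g(x, t) is the observation function g_i, lam the weight.
    """
    total = 0.0
    for pattern, g, lam in features:
        k = len(pattern)
        for t in range(k - 1, len(y)):                 # possible window ends
            if tuple(y[t - k + 1:t + 1]) == pattern:   # y_{t-|z^i|+1:t} = z^i
                total += lam * g(x, t)
    return math.exp(total)
```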
3.1 Inference for High-order CRF
In this section, we describe the algorithms for computing the partition function, the marginals, and the most likely label sequence for high-order CRFs. We give rough polynomial time complexity bounds to give an idea of the effectiveness of the algorithms. These bounds are pessimistic compared to the practical performance of the algorithms. It can also be verified that the algorithms for linear-chain CRFs [8] are special cases of our algorithms when only zeroth- and first-order features are considered. We show a worked example illustrating the computations in the supplementary material.
3.1.1 Basic Notations
As in the case of hidden Markov models (HMMs) [12], our algorithm uses a forward and a backward pass. First, we describe the equivalent of states used in the forward and backward computation. We shall work with three sets: the pattern set $Z$, the forward-state set $P$, and the backward-state set $S$. The pattern set $Z$ is the set of distinct label patterns used in the $m$ features. For notational simplicity, assume $Z = \{z^1, \ldots, z^M\}$. The forward-state set $P = \{p^1, \ldots, p^{|P|}\}$ consists of the distinct elements in $Y \cup \{z^j_{1:k}\}_{0 \leq k \leq |z^j|-1,\, 1 \leq j \leq M}$; that is, $P$ consists of all labels and all proper prefixes (including $\epsilon$) of label patterns, with duplicates removed. Similarly, $S = \{s^1, \ldots, s^{|S|}\}$ consists of the labels and proper suffixes (including $\epsilon$): the distinct elements in $Y \cup \{z^j_{k:|z^j|}\}_{2 \leq k \leq |z^j|+1,\, 1 \leq j \leq M}$.
The transitions between states are based on the prefix and suffix relationships defined below. Let $z_1 \leq^p z_2$ denote that $z_1$ is a prefix of $z_2$, and let $z_1 \leq^s z_2$ denote that $z_1$ is a suffix of $z_2$. We define the longest prefix and suffix relations with respect to the sets $S$ and $P$ as follows:
$$z_1 \leq^p_S z_2 \ \text{ if and only if }\ (z_1 \in S) \text{ and } (z_1 \leq^p z_2) \text{ and } (\forall z \in S,\ z \leq^p z_2 \Rightarrow z \leq^p z_1),$$
$$z_1 \leq^s_P z_2 \ \text{ if and only if }\ (z_1 \in P) \text{ and } (z_1 \leq^s z_2) \text{ and } (\forall z \in P,\ z \leq^s z_2 \Rightarrow z \leq^s z_1).$$
Finally, the subsequence relationships defined below are used when combining forward and backward variables to compute marginals. Let $z \sqsubseteq z'$ denote that $z$ is a subsequence of $z'$, and let $z \sqsubset z'$ denote that $z$ is a subsequence of $z'_{2:|z'|-1}$. The addition of a subscript $j$, as in $\sqsubseteq_j$ and $\sqsubset_j$, indicates that the condition $z \leq^s z'_{1:j}$ is satisfied as well (that is, $z$ ends at position $j$ in $z'$).
We shall give rough time bounds in terms of m (the total number of features), n (the number of
labels), T (the length of the sequence), M (the number of distinct label patterns in Z), and the
maximum order $K = \max\{|z^1| - 1, \ldots, |z^M| - 1\}$.
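The pattern and state sets are simple to construct; the following sketch (our own; tuples encode label sequences and () plays the role of $\epsilon$) builds $P$ and $S$ as described above:

```python
def build_state_sets(patterns, labels):
    """Build the forward-state set P (labels plus proper prefixes of the
    patterns, including the empty sequence) and the backward-state set S
    (labels plus proper suffixes, including the empty sequence)."""
    P = {(l,) for l in labels} | {()}
    S = {(l,) for l in labels} | {()}
    for z in patterns:                  # z is a tuple of labels
        for k in range(len(z)):         # z[:0] = (), ..., z[:len(z)-1]
            P.add(z[:k])
        for k in range(1, len(z) + 1):  # z[1:], ..., z[len(z):] = ()
            S.add(z[k:])
    return P, S
```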
3.1.2 The Forward and Backward Variables
We now define the forward vector $\alpha_x$ and the backward vector $\beta_x$. Suppose $z \leq^p y$; then define $y$'s prefix score $Z^p_x(z) = \exp\big(\sum_{i=1}^{m} \sum_{t=|z^i|}^{|z|} \lambda_i f_i(x, y, t)\big)$. Similarly, if $z \leq^s y$, then define $y$'s suffix score $Z^s_x(z) = \exp\big(\sum_{i=1}^{m} \sum_{t=T-|z|+|z^i|}^{T} \lambda_i f_i(x, y, t)\big)$. $Z^p_x(z)$ and $Z^s_x(z)$ only depend on $z$. Let
$$\alpha_x(t, p^i) = \sum_{z:\, |z|=t,\ p^i \leq^s_P z} Z^p_x(z), \qquad \beta_x(t, s^i) = \sum_{z:\, |z|=T+1-t,\ s^i \leq^p_S z} Z^s_x(z).$$
The variable $\alpha_x(t, p^i)$ computes, for $x_{1:t}$, the sum of the scores of all its label sequences $z$ having $p^i$ as the longest suffix. Similarly, the variable $\beta_x(t, s^i)$ computes, for $x_{t:T}$, the sum of the scores of all its label sequences $z$ having $s^i$ as the longest prefix. Each vector $\alpha_x(t, \cdot)$ is of dimension $|P|$, while $\beta_x(t, \cdot)$ has dimension $|S|$. We shall compute the $\alpha_x$ and $\beta_x$ vectors with dynamic programming.
Let $\psi^p_x(t, p) = \exp\big(\sum_{i:\, z^i \leq^s p} \lambda_i g_i(x, t)\big)$. For $y$ with $p \leq^s y_{1:t}$, this function counts the contribution towards $Z_x(y)$ of all features $f_i$ whose label patterns end at position $t$ and are suffixes of $p$. Let $p^i y$ be the concatenation of $p^i$ with a label $y$. The following proposition is immediate.
Proposition 1
(a) For any $z$, there is a unique $p^i$ such that $p^i \leq^s_P z$.
(b) For any $z, y$, if $p^i \leq^s_P z$ and $p^k \leq^s_P p^i y$, then $p^k \leq^s_P zy$ and $Z^p_x(zy) = \psi^p_x(t, p^i y)\, Z^p_x(z)$.
Proposition 1(a) means that we can induce partitions of label sequences using the forward states, and Proposition 1(b) shows how to make a well-defined transition from one forward state at a time slice to another forward state at the next time slice. By definition, $\alpha_x(0, \epsilon) = 1$, and $\alpha_x(0, p^i) = 0$ for all $p^i \neq \epsilon$. Using Proposition 1(b), the recurrence for $\alpha_x$ is
$$\alpha_x(t, p^k) = \sum_{(p^i, y):\, p^k \leq^s_P p^i y} \psi^p_x(t, p^i y)\, \alpha_x(t-1, p^i), \quad \text{for } 1 \leq t \leq T.$$
Similarly, for the backward vectors $\beta_x$, let $\psi^s_x(t, s) = \exp\big(\sum_{i:\, z^i \leq^p s} \lambda_i g_i(x, t + |z^i| - 1)\big)$. By definition, $\beta_x(T+1, \epsilon) = 1$, and $\beta_x(T+1, s^i) = 0$ for all $s^i \neq \epsilon$. The recurrence for $\beta_x$ is
$$\beta_x(t, s^k) = \sum_{(s^i, y):\, s^k \leq^p_S y s^i} \psi^s_x(t, y s^i)\, \beta_x(t+1, s^i), \quad \text{for } 1 \leq t \leq T.$$
Once $\alpha_x$ or $\beta_x$ is computed, then using Proposition 1(a), $Z_x$ can be easily obtained:
$$Z_x = \sum_{i=1}^{|P|} \alpha_x(T, p^i) = \sum_{i=1}^{|S|} \beta_x(1, s^i).$$
Time Complexity: We assume that each evaluation of the function $g_i(\cdot, \cdot)$ can be performed in unit time for all $i$. All relevant values of $\psi^p_x$ that are used can hence be computed in $O(mn|P|T)$ (thus $O(mnMKT)$) time. In practice, this is pessimistic, and the computation can be done more quickly. For all following analyses, we assume that $\psi^p_x$ has already been computed and stored in an array. Now all values of $\alpha_x$ can be computed in $\Theta(n|P|T)$, thus $O(nMKT)$, time. Similar bounds hold for $\psi^s_x$ and $\beta_x$.
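A minimal sketch of the forward recursion follows (our own, not the paper's implementation; dictionaries stand in for dense vectors, psi_p(t, p) stands for $\psi^p_x(t, p)$, which we assume has been precomputed, and longest_suffix_in_P implements the $\leq^s_P$ relation by scanning $P$):

```python
def longest_suffix_in_P(seq, P):
    """Return the longest element of P that is a suffix of seq."""
    best = ()
    for p in P:
        if len(p) <= len(seq) and seq[len(seq) - len(p):] == p and len(p) >= len(best):
            best = p
    return best

def forward(T, labels, P, psi_p):
    """Compute alpha_x(t, p) for t = 0..T and the partition function Z_x."""
    alpha = [{p: 0.0 for p in P} for _ in range(T + 1)]
    alpha[0][()] = 1.0                                   # alpha_x(0, eps) = 1
    for t in range(1, T + 1):
        for p, a in alpha[t - 1].items():
            if a == 0.0:
                continue
            for y in labels:                             # transition on label y
                pk = longest_suffix_in_P(p + (y,), P)    # unique p^k <=^s_P p y
                alpha[t][pk] += psi_p(t, p + (y,)) * a
    Z = sum(alpha[T].values())                           # Z_x = sum_i alpha_x(T, p^i)
    return alpha, Z
```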
3.1.3 Computing the Most Likely Label Sequence
As in the case of HMMs [12], Viterbi decoding (calculating the most likely label sequence) is obtained by replacing the sum operator in the forward-backward algorithm with the max operator. Formally, let $\delta_x(t, p^i) = \max_{z:\, |z|=t,\ p^i \leq^s_P z} Z^p_x(z)$. By definition, $\delta_x(0, \epsilon) = 1$ and $\delta_x(0, p^i) = 0$ for all $p^i \neq \epsilon$, and using Proposition 1, we have
$$\delta_x(t, p^k) = \max_{(p^i, y):\, p^k \leq^s_P p^i y} \psi^p_x(t, p^i y)\, \delta_x(t-1, p^i), \quad \text{for } 1 \leq t \leq T.$$
We use $\Phi_x(t, p^k)$ to record the pair $(p^i, y)$ chosen to obtain $\delta_x(t, p^k)$:
$$\Phi_x(t, p^k) = \arg\max_{(p^i, y):\, p^k \leq^s_P p^i y} \psi^p_x(t, p^i y)\, \delta_x(t-1, p^i).$$
Let $p^*_T = \arg\max_{p^i} \delta_x(T, p^i)$; then the most likely path $y^* = (y^*_1, \ldots, y^*_T)$ has $y^*_T$ as the last label in $p^*_T$, and the full sequence can be traced backwards using $\Phi_x(\cdot, \cdot)$ as follows:
$$(p^*_t, y^*_t) = \Phi_x(t+1, p^*_{t+1}), \quad \text{for } 1 \leq t < T.$$
Time Complexity: Either $\psi^p_x$ or $\psi^s_x$ can be used for decoding; hence decoding can be done in $\Theta(n \min\{|P|, |S|\} T)$ time.
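The corresponding Viterbi-style decoder, with sums replaced by max and backpointers recorded, can be sketched as follows (again our own sketch, reusing the hypothetical longest_suffix_in_P helper from the forward sketch):

```python
def decode(T, labels, P, psi_p):
    """Return the most likely label sequence under the high-order CRF."""
    delta = [{p: 0.0 for p in P} for _ in range(T + 1)]
    back = [dict() for _ in range(T + 1)]
    delta[0][()] = 1.0
    for t in range(1, T + 1):
        for p, d in delta[t - 1].items():
            if d == 0.0:
                continue
            for y in labels:
                pk = longest_suffix_in_P(p + (y,), P)
                s = psi_p(t, p + (y,)) * d
                if s > delta[t][pk]:
                    delta[t][pk], back[t][pk] = s, (p, y)  # record Phi_x(t, p^k)
    p_star = max(delta[T], key=delta[T].get)               # best final state
    ys = []
    for t in range(T, 0, -1):                              # trace backpointers
        p_star, y = back[t][p_star]
        ys.append(y)
    return list(reversed(ys))
```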
3.1.4 Computing the Marginals
We need to compute marginals of label sequences and single variables, that is, compute $P(y_{t-|z|+1:t} = z \,|\, x)$ for $z \in Z \cup Y$. Unlike in the traditional HMM, additional care needs to be taken regarding features having label patterns that are super- or sub-sequences of $z$. We define
$$W_x(t, z) = \exp\Big(\sum_{(i,j):\, z^i \sqsubset_j z} \lambda_i g_i(x, t - |z| + j)\Big).$$
This function computes the sum of all features that may activate strictly within $z$.
If $z_{1:|z|-1} \leq^s p^i$ and $z_{2:|z|} \leq^p s^j$, define $[p^i, z, s^j]$ as the sequence $p^i_{1:|p^i|-(|z|-1)}\, z\, s^j_{|z|:|s^j|}$, and
$$O_x(t, p^i, s^j, z) = \exp\Big(\sum_{(k, k'):\, z \sqsubseteq z^k,\ z^k \sqsubseteq_{k'} [p^i, z, s^j]} \lambda_k g_k(x, t - |p^i| + k' - 1)\Big).$$
$O_x(t, p^i, s^j, z)$ counts the contribution of features with their label patterns properly containing $z$ but within $[p^i, z, s^j]$.
Proposition 2 Let $z \in Z \cup Y$. For any $y$ with $y_{t-|z|+1:t} = z$, there exist unique $p^i, s^j$ such that $z_{1:|z|-1} \leq^s p^i$, $z_{2:|z|} \leq^p s^j$, $p^i \leq^s_P y_{1:t-1}$, and $s^j \leq^p_S y_{t-|z|+2:T}$. In addition,
$$Z_x(y) = \frac{1}{W_x(t, z)}\, Z^p_x(y_{1:t-1})\, Z^s_x(y_{t-|z|+2:T})\, O_x(t, p^i, s^j, z).$$
Multiplying by $O_x$ counts features that are not counted in $Z^p_x Z^s_x$, while division by $W_x$ removes features that are double-counted. By Proposition 2, we have
$$P(y_{t-|z|+1:t} = z \,|\, x) = \frac{\sum_{(i,j):\, z_{1:|z|-1} \leq^s p^i,\ z_{2:|z|} \leq^p s^j} \alpha_x(t-1, p^i)\, \beta_x(t-|z|+2, s^j)\, O_x(t, p^i, s^j, z)}{Z_x\, W_x(t, z)}.$$
Time Complexity: Both $W_x(t, z)$ and $O_x(t, p^i, s^j, z)$ can be computed in $O(|p^i||s^j|) = O(K^2)$ time (with some precomputation). Thus a very pessimistic time bound for computing $P(y_{t-|z|+1:t} = z \,|\, x)$ is $O(K^2 |P||S|) = O(M^2 K^4)$.
3.2 Training
Given a training set $\mathcal{T}$, the model parameters $\lambda_i$ can be chosen by maximizing the regularized log-likelihood $L_{\mathcal{T}} = \log \prod_{(x,y) \in \mathcal{T}} P(y|x) - \sum_{i=1}^{m} \frac{\lambda_i^2}{2\sigma_{\rm reg}^2}$, where $\sigma_{\rm reg}$ is a parameter that controls the degree of regularization. Note that $L_{\mathcal{T}}$ is a concave function of $\lambda_1, \ldots, \lambda_m$, and its maximum is achieved when
$$\frac{\partial L_{\mathcal{T}}}{\partial \lambda_i} = \tilde{E}(f_i) - E(f_i) - \frac{\lambda_i}{\sigma_{\rm reg}^2} = 0,$$
where $\tilde{E}(f_i) = \sum_{(x,y) \in \mathcal{T}} \sum_{t=|z^i|}^{|x|} f_i(x, y, t)$ is the empirical sum of the feature $f_i$ in the observed data, and $E(f_i) = \sum_{(x,y) \in \mathcal{T}} \sum_{y':\, |y'|=|x|} P(y'|x) \sum_{t=|z^i|}^{|x|} f_i(x, y', t)$ is the expected sum of $f_i$. Given the gradient and value of $L_{\mathcal{T}}$, we use the L-BFGS optimization method [14] for maximizing the regularized log-likelihood.
The function $L_{\mathcal{T}}$ can now be computed because we have shown how to compute $Z_x$, and computing the value of $Z_x(y)$ is straightforward, for all $(x, y) \in \mathcal{T}$. For the gradient, computing $\tilde{E}(f_i)$ is straightforward, and $E(f_i)$ can be computed using the marginals computed in the previous section:
$$E(f_i) = \sum_{(x,y) \in \mathcal{T}} \sum_{t=|z^i|}^{|x|} P(y'_{t-|z^i|+1:t} = z^i \,|\, x)\, g_i(x, t).$$
Time Complexity: Computing the gradient is clearly more time-consuming than $L_{\mathcal{T}}$, so we consider only the time needed to compute the gradient. Let $X = \sum_{(x,y) \in \mathcal{T}} |x|$. We need to compute at most $MX$ marginals, so the total time needed to compute all the marginals has $O(M^3 K^4 X)$ as an upper bound. Given the marginals, we can compute the gradient in $O(mX)$ time. If the total number of gradient computations needed in the maximization is $I$, then the total running time of training is bounded by $O((M^3 K^4 + m)XI)$ (very pessimistic).
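Assembling the regularized gradient from the marginals is then mechanical; in the sketch below (our own data layout, 0-based positions), marginal(x, t, z) stands for the quantity $P(y_{t-|z|+1:t} = z \,|\, x)$ computed as in Section 3.1.4:

```python
def gradient(train, features, lam, sigma_reg, marginal):
    """dL/dlam_i = E~(f_i) - E(f_i) - lam_i / sigma_reg^2.

    train: list of (x, y) pairs; features: list of (pattern, g) pairs,
    where pattern is the label pattern z^i and g is g_i.
    """
    grad = []
    for i, (pattern, g) in enumerate(features):
        k = len(pattern)
        emp, exp_ = 0.0, 0.0
        for x, y in train:
            for t in range(k - 1, len(x)):
                if tuple(y[t - k + 1:t + 1]) == pattern:
                    emp += g(x, t)                         # empirical sum
                exp_ += marginal(x, t, pattern) * g(x, t)  # expected sum
        grad.append(emp - exp_ - lam[i] / sigma_reg ** 2)
    return grad
```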
4 Experiments
The practical feasibility of making use of high-order features with our algorithm lies in the observation that the pattern sparsity assumption often holds. Our algorithm can be applied to take those high-order features into consideration; high-order features now form a component that one can play with in feature engineering.
Now, the question is whether high-order features are practically significant. We first use a synthetic data set to explore conditions under which high-order features can be expected to help. We then use a handwritten character recognition problem to demonstrate that even incorporating simple high-order features can lead to impressive performance improvement on a naturally occurring dataset. Finally, we use a named entity data set to show that for some data sets, higher order label features may be more robust to changes in data distribution than observation features.
4.1 Synthetic Data Generated Using a k-Order Markov Model
We randomly generate an order-$k$ Markov model with $n$ states $s_1, \ldots, s_n$ as follows. To increase pattern sparsity, we allow at most $r$ randomly chosen possible next states given the previous $k$ states. This limits the number of possible label sequences in each length-$(k+1)$ segment from $n^{k+1}$ to $n^k r$. The conditional probabilities of these $r$ next states are generated by randomly selecting a vector from the uniform distribution over $[0,1]^r$ and normalizing it. Each state $s_i$ generates an observation $(a_1, \ldots, a_m)$ such that $a_j$ follows a Gaussian distribution with mean $\mu_{ij}$ and standard deviation $\sigma$. Each $\mu_{ij}$ is independently randomly generated from the uniform distribution over $[0,1]$. In the experiments, we use the values $n = 5$, $r = 2$ and $m = 3$.
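For concreteness, the generator just described can be sketched as follows (NumPy; the data layout is our own, not the paper's):

```python
import numpy as np

def make_korder_model(n, k, r, rng):
    """Random order-k Markov model: for each length-k history, pick r
    allowed next states with random normalized probabilities."""
    trans = {}
    for hist in np.ndindex(*([n] * k)):            # all n^k histories
        nxt = rng.choice(n, size=r, replace=False)
        probs = rng.random(r)
        trans[hist] = (nxt, probs / probs.sum())
    return trans

def sample_sequence(trans, mu, sigma, length, k, rng):
    """Sample a state sequence and Gaussian observations (one row per step).

    mu has shape (n, m): mean observation vector per state.
    """
    n = mu.shape[0]
    states = [int(s) for s in rng.integers(0, n, size=k)]  # random initial history
    for _ in range(length - k):
        nxt, probs = trans[tuple(states[-k:])]
        states.append(int(rng.choice(nxt, p=probs)))
    obs = rng.normal(mu[states], sigma)                    # shape (length, m)
    return states, obs
```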
The standard deviation $\sigma$ has an important role in determining the characteristics of the data generated by this Markov model. If $\sigma$ is very small compared to most $\mu_{ij}$'s, then using the observations alone as features is likely to be good enough to obtain a good classifier of the states; the label correlations become less important for classification. However, if $\sigma$ is large, then it is difficult to distinguish the states based on the observations alone, and the label correlations, particularly those captured by higher order features, are likely to be helpful. In short, the standard deviation $\sigma$ is used to control how much information the observations reveal about the states.
We use the current, previous and next observations, rather than just the current observation, as features, exploiting the conditional probability modeling strength of CRFs. For higher order features, we simply use all indicator features that appeared in the training data, up to a maximum order. We considered the cases $k = 2$ and $k = 3$, and varied $\sigma$ and the maximum order. The training set and test set each contain 500 sequences of length 20; each sequence was initialized with a random sequence of length $k$ and generated using the randomly generated order-$k$ Markov model. Training was done by maximizing the regularized log-likelihood with regularization parameter $\sigma_{\rm reg} = 1$ in all experiments in this paper. The experimental results are shown in Figure 1.
Figure 1 shows that the high-order indicator features are useful in this case. In particular, we can see that it is beneficial to increase the order of the high-order features when the underlying model has longer distance correlations. As expected, increasing the order of the features beyond the order of the underlying model is not helpful.
[Figure 1: two panels (data generated by 2nd- and 3rd-order Markov models) plotting accuracy against the maximum order of features (1-4) for $\sigma = 0.01$, $0.05$, and $0.10$.]
Figure 1: Accuracy as a function of maximum order on the synthetic data set.
[Figure 2: left panel plots handwritten character recognition accuracy against the maximum order of features (1-5); right panel plots per-iteration and total training time against the maximum order (2-5).]
Figure 2: Accuracy (left) and running time (right) as a function of maximum order for the handwriting recognition data set.
The results also suggest that, in general, if the observations are closely coupled with the states (in the sense that different states correspond to very different observations), then feature engineering on the observations is generally enough to perform well, and it is less important to use high-order features to capture label correlations. On the other hand, when such coupling is not clear, it becomes important to capture the label correlations, and high-order features can be useful.
4.2 Handwriting Recognition
We used the handwriting recognition data set from [15], consisting of around 6100 handwritten words with an average length of around 8 characters. The data was originally collected by Kassel [7] from around 150 human subjects. The words were segmented into characters, and each character was converted into an image of 16 by 8 binary pixels. In this labeling problem, each $x_i$ is the image of a character, and each $y_i$ is a lower-case letter. The experimental setup is the same as that used in [15]: the data set was divided into 10 folds, with each fold having approximately 600 training and 5500 test examples, and the zeroth-order features for a character are the pixel values.
For higher order features, we again used all indicator features that appeared in the training data, up to a maximum order. The average accuracy over the 10 folds is shown in Figure 2, where strong improvements are observed as the maximum order increases. Figure 2 also shows the total training time and the running time per iteration of the L-BFGS algorithm (which requires computation of the gradient and value of the function at each iteration). The running time appears to grow no more than linearly with the maximum order of the features for this data set.
4.3 Named Entity Recognition with Distribution Change
The Named Entity Recognition (NER) problem asks for the identification of named entities in text. With carefully engineered observation features, there does not appear to be very much to be gained from using higher order features. However, in some situations, the training data does not come from the same distribution as the test data. In such cases, we hypothesize that higher order label features may be more stable than observation features and can sometimes offer a performance gain.
In our experiment, we used the Automatic Content Extraction (ACE) data [9], which is labeled with seven classes: Person, Organization, Geo-political, Location, Facility, Vehicle, and Weapon. The ACE data comes from several genres, and we use the following in our experiment: Broadcast conversation (BC), Newswire (NW), Weblog (WL) and Usenet (UN).
We use all pairs of genres as training and test
data. Scoring was done with the F1 score [16].
The features used are previous word, next word,
current word, case patterns for these words, and
all indicator label features of order up to k. The
results for the case k = 1 and k = 2 are shown
in Figure 3. Introducing second order indicator
features shows improvement in 10 out of the 12
combinations and degrades performance in two
of the combinations. However, the overall effect
is small, with an average improvement of 0.62 in
F1 score.
[Figure 3: F1 scores of the linear-chain and second-order CRFs for each training:test domain pair; average improvement = 0.62.]
Figure 3: Named entity recognition results.
4.4 Discussion
In our experiments, we used indicator features of all label patterns that appear in the training data. For real applications where the pattern sparsity assumption is not satisfied, but certain patterns do not appear frequently enough or are not really important, it would be useful to automatically select a subset of features with few distinct label patterns. One possible approach would be to use boosting-type methods [3] to sequentially select useful features.
An alternative approach to feature selection is to use all possible features and maximize the margin of the solution instead. Generalization error bounds [15] show that it is possible to obtain good generalization with a relatively small training set size, despite having a very large number of features, if the margin is large. This indicates that feature selection may not be critical in some cases. Theoretically, it is also interesting to note that minimizing the regularized training cost when all possible high-order features of arbitrary length are used is computationally tractable. This is because the representer theorem [19] tells us that the optimal solution for minimizing quadratically regularized cost functions lies in the span of the training examples. Hence, even when we are learning with arbitrary sets of high-order features, we only need to use the features that appear in the training set to obtain the optimal solution. Given a training set of $N$ sequences of length $l$, only $O(l^2 N)$ label sequences of all orders are observed. Using cutting plane techniques [18], the computational complexity of optimization is polynomial in the inverse accuracy parameter, the training set size, and the maximum length of the sequences.
It should also be possible to use kernels within the approach here. On the handwritten character problem, [15] reports a substantial improvement in performance with the use of kernels. Using kernels together with high-order features may lead to further improvements. However, we note that the advantage of the higher order features may become less substantial as the observations become more powerful in distinguishing the classes. Whether the use of higher order features together with kernels brings substantial improvement in performance is likely to be problem dependent. Similarly, observation features that are more distribution-invariant, such as comprehensive name lists, could be used for the NER task we experimented with and may reduce the improvements offered by higher order features.
5 Conclusion
The pattern sparsity assumption often holds in real applications, and we give efficient inference algorithms for CRFs with high-order features when this assumption is satisfied. This
allows high-order features to be explored in feature engineering for real applications. We studied the
conditions that are favourable for using high-order features using a synthetic data set, and demonstrated that using simple high-order features can lead to performance improvement on a handwriting
recognition problem and a named entity recognition problem.
Acknowledgements
This work is supported by DSO grant R-252-000-390-592 and AcRF grant R-252-000-327-112.
References
[1] B. A. Cipra, "The Ising model is NP-complete," SIAM News, vol. 33, no. 6, 2000.
[2] A. Culotta, D. Kulp, and A. McCallum, "Gene prediction with conditional random fields," University of Massachusetts, Amherst, Tech. Rep. UM-CS-2005-028, 2005.
[3] T. G. Dietterich, A. Ashenfelter, and Y. Bulatov, "Training conditional random fields via gradient tree boosting," in Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[4] S. Fine, Y. Singer, and N. Tishby, "The hierarchical hidden Markov model: Analysis and applications," Machine Learning, vol. 32, no. 1, pp. 41-62, 1998.
[5] C. Huang and A. Darwiche, "Inference in belief networks: A procedural guide," International Journal of Approximate Reasoning, vol. 15, no. 3, pp. 225-263, 1996.
[6] F. Jelinek, J. D. Lafferty, and R. L. Mercer, "Basic methods of probabilistic context free grammars," in Speech Recognition and Understanding. Recent Advances, Trends, and Applications. Springer Verlag, 1992.
[7] R. H. Kassel, "A comparison of approaches to on-line handwritten character recognition," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995.
[8] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," in Proceedings of the Eighteenth International Conference on Machine Learning, 2001, pp. 282-289.
[9] Linguistic Data Consortium, "ACE (Automatic Content Extraction) English Annotation Guidelines for Entities," 2005.
[10] K. P. Murphy and M. A. Paskin, "Linear-time inference in hierarchical HMMs," in Advances in Neural Information Processing Systems 14, vol. 14, 2002.
[11] X. Qian, X. Jiang, Q. Zhang, X. Huang, and L. Wu, "Sparse higher order conditional random fields for improved sequence labeling," in ICML, 2009, p. 107.
[12] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1990.
[13] S. Sarawagi and W. W. Cohen, "Semi-Markov conditional random fields for information extraction," in Advances in Neural Information Processing Systems 17. Cambridge, MA: MIT Press, 2005, pp. 1185-1192.
[14] F. Sha and F. Pereira, "Shallow parsing with conditional random fields," in Proceedings of the Twentieth International Conference on Machine Learning, 2003, pp. 282-289.
[15] B. Taskar, C. Guestrin, and D. Koller, "Max-margin Markov networks," in Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press, 2004.
[16] E. Tjong and F. D. Meulder, "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition," in Proceedings of the Conference on Computational Natural Language Learning, 2003.
[17] T. T. Tran, D. Phung, H. Bui, and S. Venkatesh, "Hierarchical semi-Markov conditional random fields for recursive sequential data," in NIPS'08: Advances in Neural Information Processing Systems 20. Cambridge, MA: MIT Press, 2008, pp. 1657-1664.
[18] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, "Support vector machine learning for interdependent and structured output spaces," in Proceedings of the Twenty-First International Conference on Machine Learning, 2004, pp. 104-112.
[19] G. Wahba, Spline models for observational data, ser. CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 1990, vol. 59.
3,108 | 3,816 | Quantification and the language of thought
Charles Kemp
Department of Psychology
Carnegie Mellon University
[email protected]
Abstract
Many researchers have suggested that the psychological complexity of a concept
is related to the length of its representation in a language of thought. As yet,
however, there are few concrete proposals about the nature of this language. This
paper makes one such proposal: the language of thought allows first-order quantification (quantification over objects) more readily than second-order quantification
(quantification over features). To support this proposal we present behavioral results from a concept learning study inspired by the work of Shepard, Hovland and
Jenkins.
Humans can learn and think about many kinds of concepts, including natural kinds such as elephant
and water and nominal kinds such as grandmother and prime number. Understanding the mental
representations that support these abilities is a central challenge for cognitive science. This paper
proposes that quantification plays a role in conceptual representation; for example, an animal X
qualifies as a predator if there is some animal Y such that X hunts Y. The concepts we consider
are much simpler than real-world examples such as predator, but even simple laboratory studies can
provide important clues about the nature of mental representation.
Our approach to mental representation is based on the language of thought hypothesis [1]. As
pursued here, the hypothesis proposes that mental representations are constructed in a compositional
language of some kind, and that the psychological complexity of a concept is closely related to
the length of its representation in this language [2, 3, 4]. Following previous researchers [2, 4],
we operationalize the psychological complexity of a concept in terms of the ease with which it is
learned and remembered. Given these working assumptions, the remaining challenge is to specify
the representational resources provided by the language of thought. Some previous studies have
relied on propositional logic as a representation language [2, 5], but we believe that the resources
of predicate logic are needed to capture the structure of many human concepts. In particular, we
suggest that the language of thought can accommodate relations, functions, and quantification, and
focus here on the role of quantification.
Our primary proposal is that quantification is supported by the language of thought, but that quantification over objects is psychologically more natural than quantification over features. To test this
idea we compare concept learning in two domains which are very similar except for one critical
difference: one domain allows quantification over objects, and the other allows quantification over
features. We consider several logical languages that can be used to formulate concepts in both domains, and find that learning times are best predicted by a language that supports quantification over
objects but not features.
Our work illustrates how theories of mental representation can be informed by comparing concept
learning across two or more domains. Existing studies work with a range of domains, and it is useful
to consider a "conceptual universe" that includes these possibilities along with many others that have
not yet been studied. Table 1 charts a small fragment of this universe, and the penultimate column
shows example stimuli that will be familiar from previous studies of concept learning. Previous
studies have made important contributions by choosing a single domain in Table 1 and explaining
why some concepts within this domain are easier to learn than others [2, 4, 6, 7, 8, 9]. Comparisons
across domains can also provide important information about learning and mental representation,
and we illustrate this claim by comparing learning times across Domains 3 and 4.
The next section introduces the conceptual universe in Table 1 in more detail. We then present a
formal approach to concept learning that relies on a logical language and compare three candidate
languages. Language OQ (for object quantification) supports quantification over objects but not features, language F Q (for feature quantification) supports quantification over features but not objects,
and language OQ + F Q supports quantification over both objects and features. We use these languages to predict learning times across Domains 3 and 4, and present an experiment which suggests
that language OQ comes closest to the language of thought.
1 The conceptual universe
Table 1 provides an organizing framework for thinking about the many domains in which learning
can occur. The table includes 8 domains, each of which is defined by specifying some number of
objects, features, and relations, and by specifying the range of each feature and each relation. We
refer to the elements in each domain as items, and the penultimate column of Table 1 shows items
from each domain. The first row shows a domain commonly used by studies of Boolean concept
learning. Each item in this domain includes a single object a and specifies whether that object
has value v1 (small) or v2 (large) on feature F (size), value v3 (white) or v4 (gray) on feature G
(color), and value v5 (vertical) or v6 (horizontal) on feature H (texture). Domain 2 also includes
three features, but now each item includes three objects and each feature applies to only one of the
objects. For example, feature H (texture) applies to only the third object in the domain (i.e. the third
square on each card). Domain 3 is similar to Domain 1, but now the three features can be aligned:
for any given item each feature will be absent (value 0) or present. The example in Table 1 uses three
features (boundary, dots, and slash) that can each be added to an unadorned gray square. Domain 4
is similar to Domain 2, but again the feature values can be aligned, and the feature for each object
will be absent (value 0) or present. Domains 5 and 6 are similar to domains 2 and 4 respectively, but
each one includes relations rather than features. In Domain 6, for example, the relation R assigns
value 0 (absent) or value 1 (present) to each undirected pair of objects.
The first six domains in Table 1 are all variants of Domain 1, which is the domain typically used by
studies of Boolean concept learning. Focusing on six related domains helps to establish some of the
dimensions along which domains can differ, but the final two domains in Table 1 show some of the
many alternative possibilities. Domain 7 includes two categorical features, each of which takes three
rather than two values. Domain 8 is similar to Domain 6, but now the number of objects is 6 rather
than 3 and relation R is directed rather than undirected. To mention just a handful of possibilities
which do not appear in Table 1, domains may also have categorical features that are ordered (e.g.
a size feature that takes values small, medium, and large), continuous valued features or relations,
relations with more than two places, and objects that contain sub-objects or parts.
Several learning problems can be formulated within any given domain. The most basic is to learn a
single item; for example, a single item from Domain 8 [4]. A second problem is to learn a class of
items; for example, a class that includes four of the items in Domain 1 and excludes the remaining
four [6]. Learning an item class can be formalized as learning a unary predicate defined over items,
and a natural extension is to consider predicates with two or more arguments. For example, problems
of the form A is to B as C is to ? can be formulated as problems where the task is to learn a binary
relation analogous(·, ·) given the single example analogous(A, B). Here, however, we focus on the
task of learning item classes or unary predicates.
Since we focus on the role of quantification, we will work with domains where quantification is
appropriate. Quantification over objects is natural in cases like Domain 4 where the feature values
for all objects can be aligned. Note, for example, that the statement "every object has its feature"
picks out the final example item in Domain 4 but that no such statement is possible in Domain 2.
Quantification over features is natural in cases like Domain 3 where the ranges of each feature can be
aligned. For example, "object a has all three features" picks out the final example item in Domain 3
but no such statement is possible in Domain 1. We therefore focus on Domains 3 and 4, and explore
the problem of learning item classes in each domain.
Table 1: The conceptual universe. Eight domains are shown, and each one is defined by a set of objects, a set of features, and a set of relations. We call the members of each domain items, and an item is created by specifying the extension of each feature and relation in the domain. The six domains above the double lines are closely related to the work of Shepard et al. [6]. Each one includes eight items which differ along three dimensions. These dimensions, however, emerge from different underlying representations in the six cases.

# | Objects O | Features | Relations | Ref.
1 | {a} | F: O → {v1, v2}, G: O → {v3, v4}, H: O → {v5, v6} | — | [2, 6, 7, 10, 11]
2 | {a, b, c} | F: a → {v1, v2}, G: b → {v3, v4}, H: c → {v5, v6} | — | [6]
3 | {a} | F: O → {0, v1}, G: O → {0, v2}, H: O → {0, v3} | — | [12]
4 | {a, b, c} | F: a → {0, v1}, G: b → {0, v2}, H: c → {0, v3} | — | [6]
5 | {a, b, c} | — | R: (a, b) → {v1, v2}, S: (a, c) → {v3, v4}, T: (b, c) → {v5, v6} | [13]
6 | {a, b, c} | — | R: O × O → {0, 1} | [13]
7 | {a} | F: O → {v1, v2, v3}, G: O → {v4, v5, v6} | — | [8, 9]
8 | {a, b, c, d, e, f} | — | R: O × O → {0, 1} | [4]

(The Example Items column of the original table shows picture cards for each domain and could not be recovered from the extraction.)
Figure 1: (a) A stimulus lattice for domains (e.g. Domains 3, 4, and 6) that can be encoded as a
triple of binary values where 0 represents "absent" and 1 represents "present." The vertices of the
lattice are the eight triples 111, 110, 101, 011, 100, 010, 001 and 000. (b) If the order of
the values in the triple is not significant, there are 10 distinct ways to partition the lattice into two
classes of four items. The SHJ type for each partition is shown in parentheses: 1 (I), 2 (II), 3 (III),
4 (III), 5 (IV), 6 (IV), 7 (V), 8 (V), 9 (V) and 10 (VI).
Domains 3 and 4 both include 8 items each and we will consider classes that include exactly four
of these items. Each item in these domains can be represented as a triple of binary values, where 0
indicates that a feature is absent and value 1 indicates that a feature is present. Each triple represents
the values of the three features (Domain 3) or the feature values for the three objects (Domain 4).
By representing each domain in this way, we have effectively adopted domain specifications that
are simplifications of those shown in Table 1. Domain 3 is represented using three features of
the form F, G, H: O → {0, 1}, and Domain 4 is represented using a single feature of the form
F: O → {0, 1}. Simplifications of this kind are possible because the features in each domain can
be aligned; notice that no corresponding simplifications are possible for Domains 1 and 2.
The eight binary triples in each domain can be organized into the lattice shown in Figure 1a. Here
we consider all ways to partition the vertices of the lattice into two groups of four. If partitions that
differ only up to a permutation of the features (Domain 3) or objects (Domain 4) are grouped into
equivalence classes, there are ten of these classes, and a representative of each is shown in Figure 1b.
Previous researchers [6] have pointed out that the stimuli in Domain 1 can be organized into a cube
similar to Figure 1a, and that there are six ways to partition these stimuli into two groups of four
up to permutations of the features and permutations of the range of each feature. We refer to these
equivalence classes as the six Shepard-Hovland-Jenkins types (or SHJ types), and each partition in
Figure 1b is labeled with its corresponding SHJ type label. Note, for example, that partitions 3 and 4
are both examples of SHJ type III. For us, partitions 3 and 4 are distinct since items 000 (all absent)
and 111 (all present) are uniquely identifiable, and partition 3 assigns these items to different classes
but partition 4 does not.
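This count is easy to verify mechanically. The sketch below is illustrative Python written for this discussion (not part of the original study): it enumerates every way of splitting the eight triples into two groups of four, canonicalizes each partition under the six permutations of the three positions, and confirms that exactly ten equivalence classes remain.

from itertools import combinations, permutations

# The eight items of Domains 3 and 4, encoded as binary triples (Figure 1a).
items = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def canon(partition):
    """Canonical form of an unordered 4-4 partition under the 6 permutations
    of the three positions (features in Domain 3, objects in Domain 4)."""
    keys = []
    for perm in permutations(range(3)):
        sides = sorted(
            tuple(sorted(tuple(item[p] for p in perm) for item in side))
            for side in partition
        )
        keys.append(tuple(sides))
    return min(keys)

classes = set()
for subset in combinations(items, 4):
    complement = tuple(sorted(set(items) - set(subset)))
    classes.add(canon((tuple(sorted(subset)), complement)))

print(len(classes))  # 10, matching the ten partitions of Figure 1b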
Previous researchers have considered differences between some of the first six domains in Table 1.
Shepard et al. [6] ran experiments using compact stimuli (Domain 1) and distributed stimuli (Domains 2 and 4), and observed the same difficulty ranking of the six SHJ types in all cases. Their
work, however, does not acknowledge that Domain 4 leads to 10 distinct types rather than 6, and
therefore fails to address issues such as the relative complexities of concepts 5 and 6 in Figure 1.
Social psychologists [13, 14] have studied Domain 6 and found that learning patterns depart from
the standard SHJ order; in particular, SHJ type VI (Concept 10 in Figure 1) is simpler than
types III, IV and V. This finding has been used to support the claim that social learning relies on
a domain-specific principle of structural balance [14]. We will see, however, that the relative simplicity of type VI in domains like 4 and 6 is consistent with a domain-general account based on
representational economy.
2 A representation length approach to concept learning
The conceptual universe in Table 1 calls for an account of learning that can apply across many
domains. One candidate is the representation length approach, which proposes that concepts are
mentally represented in a language of thought, and that the subjective complexity of a concept is
determined by the length of its representation in this language [4]. We consider the case where
a concept corresponds to a class of items, and explore the idea that these concepts are mentally
represented in a logical language. More formally, a concept is represented as a logical sentence, and
the concept includes all models of this sentence, or all items that make the sentence true.
The predictions of this representation length approach depend critically on the language chosen.
Here we consider three languages: an object quantification language OQ that supports quantification over objects, a feature quantification language F Q that supports quantification over features,
and a language OQ + F Q that supports quantification over both objects and features. Language
OQ is based on a standard logical language known as predicate logic with equality. The language
includes symbols representing objects (e.g. a and b), and features (e.g. F and G) and these symbols
can be combined to create literals that indicate that an object does (Fa) or does not have a certain
feature (Fa′). Literals can be combined using two connectives: AND (Fa Ga) and OR (Fa + Ga). The
language includes two quantifiers, for all (∀) and there exists (∃), and allows quantification over
objects (e.g. ∀x Fx, where x is a variable that ranges over all objects in the domain). Finally, language
OQ includes equality and inequality relations (= and ≠) which can be used to compare objects and
object variables (e.g. =xa or ≠xy).
Table 2 shows several sentences formulated in language OQ. Suppose that the OQ complexity of
each sentence is defined as the number of basic propositions it contains, where a basic proposition
can be a positive or negative literal (Fa or Fa′) or an equality or inequality statement (=xa or ≠xy).
Equivalently, the complexity of a sentence is the total number of ANDs plus the total number of
ORs plus one. This measure is equivalent by design to Feldman's [2] notion of Boolean complexity
when applied to a sentence without quantification. The complexity values in Table 2 show minimal
complexity values for each concept in Domains 3 and 4. Table 2 also shows a single sentence
that achieves each of these complexity values, although some concepts admit multiple sentences of
minimal complexity.
The complexity values in Table 2 were computed using an "enumerate then combine" approach. We
began by enumerating a set of sentences according to criteria described in the next paragraph. Each
sentence has an extension that specifies which items in the domain are consistent with the sentence.
Given the extensions of all sentences generated during the enumeration phase, the combination
phase considered all possible ways to combine these extensions using conjunctions or disjunctions.
The procedure terminated once extensions corresponding to all of the concepts in the domain had
been found. Although the number of possible sentences grows rapidly as the complexity of these
sentences increases, the number of extensions is fixed and relatively small (2^8 for domains of size
8). The combination phase is tractable since sentences with the same extension can be grouped into
a single equivalence class.
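A minimal sketch of the propositional core of this procedure is given below; it is illustrative code written for this discussion (the paper does not give an implementation), and quantified sentences would simply be added to the pool before the combination phase. Extensions are stored as 8-bit masks over the items, so conjunction and disjunction become bitwise AND and OR.

from itertools import product

items = list(product((0, 1), repeat=3))       # Domain 3 items as (F, G, H) triples

def extension(pred):
    """8-bit mask of the items satisfying a predicate."""
    return sum(1 << i for i, it in enumerate(items) if pred(it))

# Enumeration phase (here: just the positive and negative literals).
best = {}                                     # extension mask -> minimal cost found
for f in range(3):
    for v in (0, 1):
        best[extension(lambda it, f=f, v=v: it[f] == v)] = 1

# Combination phase: close the pool under AND/OR, keeping minimal costs.
changed = True
while changed:
    changed = False
    pool = list(best.items())
    for m1, c1 in pool:
        for m2, c2 in pool:
            for m in (m1 & m2, m1 | m2):      # each connective sums the costs
                if c1 + c2 < best.get(m, float("inf")):
                    best[m] = c1 + c2
                    changed = True

concept1 = extension(lambda it: it[1] == 1)   # Concept 1: "G is present"
print(best[concept1])                         # 1, matching Table 2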
The enumeration phase considered all formulae which had at most two quantifiers and which
had a complexity value lower than four. For example, this phase did not include the formula
∃x ∃y ∃z ≠yz Fx′ Fy Fz (too many quantifiers) or the formula ∃x ∃y ≠xy Fy (Fx + Gx + Hx) (complexity
too high). Despite these restrictions, we believe that the complexity values in Table 2 are identical
to the values that would be obtained if we had considered all possible sentences.
Language F Q is similar to OQ but allows quantification over features rather than objects. For
example, F Q includes the statement ∀Q Qa, where Q is a variable that ranges over all features in
the domain. Language F Q also allows features and feature variables to be compared for equality
or inequality (e.g. =QF or ≠QR). Since F Q and OQ are closely related, it follows that the F Q
complexity values for Domains 3 and 4 are identical to the OQ complexity values for Domains 4
and 3. For example, F Q can express concept 5 in Domain 3 as ∃Q ∃R ≠QR Qa Ra.
We can combine OQ and F Q to create a language OQ + F Q that allows quantification over both
objects and features. Allowing both kinds of quantification leads to identical complexity values for
Domains 3 and 4. Language OQ + F Q can express each of the formulae for Domain 4 in Table 2,
and these formulae can be converted into corresponding formulae for Domain 3 by translating each
instance of object quantification into an instance of feature quantification.
Logicians distinguish between first-order logic, which allows quantification over objects but not
predicates, and second-order logic, which allows quantification over objects and predicates. The
difference between languages OQ and OQ + F Q is superficially similar to the difference between
first-order and second-order logic, but does not cut to the heart of this matter. Since language
# | Domain 3 | C | Domain 4 | C
1 | Ga | 1 | Fb | 1
2 | Fa Ha + Fa′ Ha′ | 4 | Fa Fc + Fa′ Fc′ | 4
3 | Fa′ Ga + Fa Ha | 4 | Fa′ Fb + Fa Fc | 4
4 | Fa′ Ga′ + Fa Ha | 4 | Fa′ Fb′ + Fa Fc | 4
5 | Ga (Fa + Ha) + Fa Ha | 5 | ∃x ∃y ≠xy Fx Fy | 3
10 | Ga′ (Fa Ha′ + Fa′ Ha) + Ga (Fa′ Ha′ + Fa Ha) | 10 | (∀x Fx) + ∃y ∀z Fy (=zy + Fz′) | 4

(Rows 6-9 of the original table could not be unambiguously recovered from the flattened extraction. The recoverable Domain 3 formulae for these rows are Ga (Fa + Ha) + Fa Ga Ha, Ha (Fa + Ga) + Fa Ga Ha and Fa (Ga + Ha) + Fa Ga Ha, each with C = 6; the recoverable Domain 4 formulae are (∀x Fx) + Fb ∀y Fy, (∀x Fx) + Fb (Fa + Fc), (∀x Fx) + Fa (Fb + Fc) and Fc (Fa + Fb) + Fa Fb Fc.)

Table 2: Complexity values C and corresponding formulae for language OQ. Boolean complexity
predicts complexity values for both domains that are identical to the OQ complexity values shown
here for Domain 3. Language F Q predicts complexity values for Domains 3 and 4 that are identical
to the OQ values for Domains 4 and 3 respectively. Language OQ + F Q predicts complexity values
for both domains that are identical to the OQ complexity values for Domain 4.
OQ + F Q only supports quantification over a pre-specified set of features, it is equivalent to a
typed first order logic that includes types for objects and features [15]. Future studies, however, can
explore the cognitive relevance of higher-order logic as developed by logicians.
3 Experiment
Now that we have introduced languages OQ, F Q and OQ + F Q our theoretical proposals can be
sharply formulated. We suggest that quantification over objects plays an important role in mental
representations, and predict that OQ complexity will account better for human learning than Boolean
complexity. We also propose that quantification over objects is more natural than quantification over
features, and predict that OQ complexity will account better for human learning than both F Q
complexity and OQ + F Q complexity. We tested these predictions by designing an experiment
where participants learned concepts from Domains 3 and 4.
Method. 20 adults participated for course credit. Each participant was assigned to Domain 3 or
Domain 4 and learned all ten concepts from that domain. The items used for each domain were the
cards shown in Table 1. Note, for example, that each Domain 3 card showed one square, and that
each Domain 4 card showed three squares. These items are based on stimuli developed by Sakamoto
and Love [12].
The experiment was carried out using a custom built graphical interface. For each learning problem
in each domain, all eight items were simultaneously presented on the screen, and participants were
able to drag them around and organize them however they liked. Each problem had three phases.
During the learning phase, the four items belonging to the current concept had red boundaries, and
the remaining four items had blue boundaries. During the memory phase, these colored boundaries
were removed, and participants were asked to sort the items into the red group and the blue group.
If they made an error they returned to the learning phase, and could retake the test whenever they
were ready. During the description phase, participants were asked to provide a written description of
the two groups of cards. The color assignments (red or blue) were randomized across participants;
in other words, the "red groups" learned by some participants were identical to the "blue groups"
learned by others. The order in which participants learned the 10 concepts was also randomized.
Model predictions. The OQ complexity values for the ten concepts in each domain are shown in
Table 2 and plotted in Figure 2a. The complexity values in Figure 2a have been normalized so that
they sum to one within each domain, and the differences of these normalized scores are shown in
the final row of Figure 2a. The two largest bars in the difference plot indicate that Concepts 10
and 5 are predicted to be easier to learn in Domain 4 than in Domain 3. Language OQ can express
Figure 2: Normalized OQ complexity values (a) and normalized learning times (b) for the 10 concepts in
Domains 3 and 4. Each panel shows a row for Domain 3, a row for Domain 4, and a final row plotting the
difference of the two normalized scores, with concepts 1 through 10 along the horizontal axis.
statements like "either 1 or 3 objects have F" (Concept 10 in Domain 4), or "2 or more objects have
F" (Concept 5 in Domain 4). Since quantification over features is not permitted, however, analogous
statements (e.g. "object a has either 1 or 3 features") cannot be formulated in Domain 3.
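Both statements are easy to check extensionally; the snippet below (illustrative only) lists the four items each concept picks out when items are encoded as triples of F-values for objects a, b and c.

from itertools import product

items = list(product((0, 1), repeat=3))                  # Domain 4 items

concept5 = [it for it in items if sum(it) >= 2]          # "2 or more objects have F"
concept10 = [it for it in items if sum(it) % 2 == 1]     # "either 1 or 3 objects have F"
print(concept5)   # [(0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
print(concept10)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]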
Concept 10 corresponds to SHJ type VI, which often emerges as the most difficult concept in studies
of Boolean concept learning. Our model therefore predicts that the standard ordering of the SHJ
types will not apply in Domain 4. Our model also predicts that concepts assigned to the same SHJ
type will have different complexities. In Domain 4 the model predicts that Concept 6 will be harder
to learn than Concept 5 (both are examples of SHJ type IV), and that Concept 8 will be harder to
learn than Concepts 7 or 9 (all three are examples of SHJ type V).
Results. The computer interface recorded the amount of time participants spent on the learning
phase for each concept. Domain 3 was a little more difficult than Domain 4 overall: on average,
Domain 3 participants took 557 seconds and Domain 4 participants took 467 seconds to learn the
10 concepts. For all remaining analyses, we consider learning times that are normalized to sum to 1
for each participant. Figure 2b shows the mean values for these normalized times, and indicates the
relative difficulties of the concepts within each condition.
The difference plot in Figure 2b supports the two main predictions identified previously. Concepts
10 and 5 are the cases that differ most across the domains, and both concepts are easier to learn in
Domain 4 than in Domain 3. As predicted, Concept 5 is substantially easier than Concept 6 in Domain
4 even though both correspond to the same SHJ type. Concepts 7 through 9 also correspond to the
same SHJ type, and the data for Domain 4 suggest that Concept 8 is the most difficult of the three,
although the difference between Concepts 8 and 7 is not especially large.
Four sets of complexity predictions are plotted against the human data in Figure 3. Boolean complexity and OQ complexity make identical predictions about Domain 3, and OQ complexity and
OQ + F Q complexity make identical predictions about Domain 4. Only OQ complexity, however,
accounts for the results observed in both domains.
The concept descriptions generated by participants provide additional evidence that there are psychologically important differences between Domains 3 and 4. If the descriptions for concepts 5 and
10 are combined, 18 out of 20 responses in Domain 4 referred to quantification or counting. One
representative description of Concept 5 stated that "red has multiple filled" and that "blue has one
filled or none." Only 3 of 20 responses in Domain 3 mentioned quantification. One representative
description of Concept 5 stated that "red = multiple features" and that "blue = only one feature."
Figure 3: Normalized learning times for each domain plotted against normalized complexity values
predicted by four languages: Boolean logic, OQ, F Q and OQ + F Q. The correlations are r = 0.84
(Boolean), r = 0.84 (OQ), r = 0.26 (F Q) and r = 0.26 (OQ + F Q) for Domain 3, and r = 0.27
(Boolean), r = 0.83 (OQ), r = 0.27 (F Q) and r = 0.83 (OQ + F Q) for Domain 4.
These results suggest that people can count or quantify over features, but that it is psychologically
more natural to quantify over objects rather than features.
Although we have focused on three specific languages, the results in Figure 2b can be used to
evaluate alternative proposals about the language of thought. One such alternative is an extension
of Language OQ that allows feature values to be compared for equality. This extended language
supports concise representations of Concept 2 in both Domain 3 (Fa = Ha ) and Domain 4 (Fa = Fc ),
and predicts that Concept 2 will be easier to learn than all other concepts except Concept 1. Note,
however, that this prediction is not compatible with the data in Figure 2b. Other languages might
also be considered, but we know of no simple language that will account for our data better than
OQ.
4 Conclusion
Comparing concept learning across qualitatively different domains can provide valuable information
about the nature of mental representation. We compared two domains that are similar in many
respects, but that differ according to whether they include a single object (Domain 3) or multiple
objects (Domain 4). Quantification over objects is possible in Domain 4 but not Domain 3, and this
difference helps to explain the different learning patterns we observed across the two domains. Our
results suggest that concept representations can incorporate quantification, and that quantifying over
objects is more natural than quantifying over features.
The model predictions we reported are based on a language (OQ) that is a generic version of first
order logic with equality. Our results therefore suggest that some of the languages commonly considered by logicians (e.g. first order logic with equality) may indeed capture some aspects of the
"laws of thought" [16]. A simple language like OQ offers a convenient way to explore the role of
quantification, but this language will need to be refined and extended in order to provide a more
accurate account of mental representation. For example, a comprehensive account of the language
of thought will need to support quantification over features in some cases, but might be formulated
so that quantification over features is typically more costly than quantification over objects.
Many possible representation languages can be imagined and a large amount of empirical data will
be needed to identify the language that comes closest to the language of thought. Many relevant
studies have already been conducted [2, 6, 8, 9, 13, 17], but there are vast regions of the conceptual
universe (Table 1) that remain to be explored. Navigating this universe is likely to involve several
challenges, but web-based experiments [18, 19] may allow it to be explored at a depth and scale
that are currently unprecedented. Characterizing the language of thought is undoubtedly a long-term
project, but modern methods of data collection may support rapid progress towards this goal.
Acknowledgments I thank Maureen Satyshur for running the experiment. This work was supported in part by
NSF grant CDI-0835797.
References
[1] J. A. Fodor. The language of thought. Harvard University Press, Cambridge, 1975.
[2] J. Feldman. Minimization of Boolean complexity in human concept learning. Nature, 407:630-633, 2000.
[3] D. Fass and J. Feldman. Categorization under complexity: A unified MDL account of human learning of regular and irregular categories. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 35-34. MIT Press, Cambridge, MA, 2003.
[4] C. Kemp, N. D. Goodman, and J. B. Tenenbaum. Learning and using relational theories. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 753-760. MIT Press, Cambridge, MA, 2008.
[5] N. D. Goodman, J. B. Tenenbaum, J. Feldman, and T. L. Griffiths. A rational analysis of rule-based concept learning. Cognitive Science, 32(1):108-154, 2008.
[6] R. N. Shepard, C. I. Hovland, and H. M. Jenkins. Learning and memorization of classifications. Psychological Monographs, 75(13), 1961. Whole No. 517.
[7] R. M. Nosofsky, M. Gluck, T. J. Palmeri, S. C. McKinley, and P. Glauthier. Comparing models of rule-based classification learning: A replication and extension of Shepard, Hovland, and Jenkins (1961). Memory and Cognition, 22:352-369, 1994.
[8] M. D. Lee and D. J. Navarro. Extending the ALCOVE model of category learning to featural stimulus domains. Psychonomic Bulletin and Review, 9(1):43-58, 2002.
[9] C. D. Aitkin and J. Feldman. Subjective complexity of categories defined over three-valued features. In R. Sun and N. Miyake, editors, Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 961-966. Psychology Press, New York, 2006.
[10] F. Mathy and J. Bradmetz. A theory of the graceful complexification of concepts and their learnability. Current Psychology of Cognition, 22(1):41-82, 2004.
[11] R. Vigo. A note on the complexity of Boolean concepts. Journal of Mathematical Psychology, 50:501-510, 2006.
[12] Y. Sakamoto and B. C. Love. Schematic influences on category learning and recognition memory. Journal of Experimental Psychology: General, 133(4):534-553, 2004.
[13] W. H. Crockett. Balance, agreement and positivity in the cognition of small social structures. In Advances in Experimental Social Psychology, Vol 15, pages 1-57. Academic Press, 1982.
[14] N. B. Cottrell. Heider's structural balance principle as a conceptual rule. Journal of Personality and Social Psychology, 31(4):713-720, 1975.
[15] H. B. Enderton. A mathematical introduction to logic. Academic Press, New York, 1972.
[16] G. Boole. An investigation of the laws of thought on which are founded the mathematical theories of logic and probabilities. 1854.
[17] B. C. Love and A. B. Markman. The nonindependence of stimulus properties in human category learning. Memory and Cognition, 31(5):790-799, 2003.
[18] L. von Ahn. Games with a purpose. Computer, 39(6):92-94, 2006.
[19] R. Snow, B. O'Connor, D. Jurafsky, and A. Ng. Cheap and fast, but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254-263. Association for Computational Linguistics, 2008.
3,109 | 3,817 | Accelerated Gradient Methods for
Stochastic Optimization and Online Learning
Chonghai Hu†,‡, James T. Kwok†, Weike Pan†
†Department of Computer Science and Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
‡Department of Mathematics, Zhejiang University
Hangzhou, China
[email protected], {jamesk,weikep}@cse.ust.hk
Abstract
Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., ℓ1-regularizer).
Gradient methods, though highly scalable and easy to implement, are known to
converge slowly. In this paper, we develop a novel accelerated gradient method
for stochastic optimization while still preserving their computational simplicity
and scalability. The proposed algorithm, called SAGE (Stochastic Accelerated
GradiEnt), exhibits fast convergence rates on stochastic composite optimization
with convex or strongly convex objectives. Experimental results show that SAGE
is faster than recent (sub)gradient methods including FOLOS, SMIDAS and SCD.
Moreover, SAGE can also be extended for online learning, resulting in a simple
algorithm but with the best regret bounds currently known for these problems.
1 Introduction
Risk minimization is at the heart of many machine learning algorithms. Given a class of models
parameterized by w and a loss function ℓ(·, ·), the goal is to minimize E_XY [ℓ(w; X, Y)] w.r.t. w,
where the expectation is over the joint distribution of input X and output Y. However, since the joint
distribution is typically unknown in practice, a surrogate problem is to replace the expectation by
its empirical average on a training sample {(x1, y1), . . . , (xm, ym)}. Moreover, a regularizer Ω(·)
is often added for well-posedness. This leads to the minimization of the regularized risk

    min_w (1/m) ∑_{i=1}^m ℓ(w; x_i, y_i) + λ Ω(w),        (1)

where λ is a regularization parameter. In optimization terminology, the deterministic optimization
problem in (1) can be considered as a sample average approximation (SAA) of the corresponding
stochastic optimization problem:

    min_w E_XY [ℓ(w; X, Y)] + λ Ω(w).        (2)
Since both ℓ(·, ·) and Ω(·) are typically convex, (1) is a convex optimization problem which can be
conveniently solved even with standard off-the-shelf optimization packages.
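As a concrete instance of (1), the snippet below (a hypothetical example, not code from the paper) evaluates the regularized risk for a hinge-loss SVM with an ℓ1 regularizer; both pieces are non-smooth, which is precisely the setting that motivates the methods developed here.

import numpy as np

def regularized_risk(w, X, y, lam):
    """Objective (1) with hinge loss and an l1 regularizer.
    X: (m, d) array of inputs, y: (m,) array of labels in {-1, +1}."""
    margins = y * (X @ w)
    empirical_loss = np.maximum(0.0, 1.0 - margins).mean()  # (1/m) sum of hinge losses
    return empirical_loss + lam * np.abs(w).sum()           # plus lambda * Omega(w)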
However, with the proliferation of data-intensive applications in the text and web domains, data sets
with millions or trillions of samples are nowadays not uncommon. Hence, off-the-shelf optimization
solvers are too slow to be used. Indeed, even tailor-made softwares for specific models, such as the
sequential minimization optimization (SMO) method for the SVM, have superlinear computational
complexities and thus are not feasible for large data sets. In light of this, the use of stochastic methods have recently drawn a lot of interest and many of these are highly successful. Most are based
on (variants of) the stochastic gradient descent (SGD). Examples include Pegasos [1], SGD-QN [2],
FOLOS [3], and stochastic coordinate descent (SCD) [4]. The main advantages of these methods
are that they are simple to implement, have low per-iteration complexity, and can scale up to large
data sets. Their runtime is independent of, or even decrease with, the number of training samples
[5, 6]. On the other hand, because of their simplicity, these methods have a slow convergence rate,
and thus may require a large number of iterations.
While standard gradient schemes have a slow convergence rate, they can often be "accelerated".
This stems from the pioneering work of Nesterov in 1983 [7], which is a deterministic algorithm
for smooth optimization. Recently, it is also extended for composite optimization, where the objective has a smooth component and a non-smooth component [8, 9]. This is particularly relevant to
machine learning since the loss ? and regularizer ? in (2) may be non-smooth. Examples include
loss functions such as the commonly-used hinge loss used in the SVM, and regularizers such as the
popular ?1 penalty in Lasso [10], and basis pursuit. These accelerated gradient methods have also
been successfully applied in the optimization problems of multiple kernel learning [11] and trace
norm minimization [12]. Very recently, Lan [13] made an initial attempt to further extend this for
stochastic composite optimization, and obtained the convergence rate of
    O(L/N² + (M + σ)/√N).        (3)
Here, N is the number of iterations performed by the algorithm, L is the Lipschitz parameter of
the gradient of the smooth term in the objective, M is the Lipschitz parameter of the nonsmooth
term, and σ is the variance of the stochastic subgradient. Moreover, note that the first term of (3)
is related to the smooth component in the objective while the second term is related to the nonsmooth component. Complexity results [14, 13] show that (3) is the optimal convergence rate for
any iterative algorithm solving stochastic (general) convex composite optimization.
However, as pointed out in [15], a very useful property that can improve the convergence rates in machine learning optimization problems is strong convexity. For example, (2) can be strongly convex
either because of the strong convexity of ? (e.g., log loss, square loss) or ? (e.g., ?2 regularization).
On the other hand, [13] is more interested in general convex optimization problems and so strong
convexity is not utilized. Moreover, though theoretically interesting, [13] may be of limited practical use as (1) the stepsize in its update rule depends on the often unknown ?; and (2) the number of
iterations performed by the algorithm has to be fixed in advance.
Inspired by the successes of Nesterov's method, we develop in this paper a novel accelerated subgradient scheme for stochastic composite optimization. It achieves the optimal convergence rate
of O(L/N² + σ/√N) for general convex objectives, and O((L + μ)/N² + σ²μ⁻¹/N) for μ-strongly convex objectives. Moreover, its per-iteration complexity is almost as low as that for standard (sub)gradient
methods. Finally, we also extend the accelerated gradient scheme to online learning. We obtain O(√N) regret for general convex problems and O(log N) regret for strongly convex
problems, which are the best regret bounds currently known for these problems.
2 Setting and Mathematical Background
First, we recapitulate a few notions in convex analysis.

(Lipschitz continuity) A function f(x) is L-Lipschitz if ∥f(x) − f(y)∥ ≤ L∥x − y∥.

Lemma 1. [14] The gradient of a differentiable function f(x) is Lipschitz continuous with Lipschitz
parameter L if, for any x and y,

    f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)∥x − y∥².        (4)

(Strong convexity) A function φ(x) is μ-strongly convex if φ(y) ≥ φ(x) + ⟨g(x), y − x⟩ + (μ/2)∥y − x∥²
for any x, y and subgradient g(x) ∈ ∂φ(x).

Lemma 2. [14] Let φ(x) be μ-strongly convex and x⋆ = arg min_x φ(x). Then, for any x,

    φ(x) ≥ φ(x⋆) + (μ/2)∥x − x⋆∥².        (5)
We consider the following stochastic convex optimization problem, with a composite
objective function

    min_x {φ(x) ≡ E[F(x, ξ)] + ψ(x)},        (6)

where ξ is a random vector, f(x) ≡ E[F(x, ξ)] is convex and differentiable, and ψ(x) is convex
but non-smooth. Clearly, this includes the optimization problem (2). Moreover, we assume that the
gradient of f(x) is L-Lipschitz and φ(x) is μ-strongly convex (with μ ≥ 0). Note that when φ(x) is
smooth (ψ(x) = 0), μ lower bounds the smallest eigenvalue of its Hessian.
Recall that in smooth optimization, the gradient update x_{t+1} = x_t − η∇f(x_t) on a function f(x)
can be seen as proximal regularization of the linearized f at the current iterate x_t [16]. In other
words, x_{t+1} = arg min_x (⟨∇f(x_t), x − x_t⟩ + (1/(2η))∥x − x_t∥²). With the presence of a non-smooth
component, we have the following more general notion.
(Gradient mapping) [8] In minimizing f(x) + ψ(x), where f is convex and differentiable and ψ is
convex and non-smooth,

    x_{t+1} = arg min_x ⟨∇f(x_t), x − x_t⟩ + (1/(2η))∥x − x_t∥² + ψ(x)        (7)

is called the generalized gradient update, and g = (1/η)(x_t − x_{t+1}) is the (generalized) gradient mapping. Note that the quadratic approximation is made to the smooth component only. It can be shown
that the gradient mapping is analogous to the gradient in smooth convex optimization [14, 8]. This
is also a common construct used in recent stochastic subgradient methods [3, 17].
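For concreteness, when ψ(x) = λ∥x∥₁ the minimization in (7) separates across coordinates and the generalized gradient update reduces to soft-thresholding, a standard closed form. The sketch below is illustrative (the function name and the ℓ1 choice are assumptions for this example, not code from the paper):

import numpy as np

def generalized_gradient_update(x_t, grad, eta, lam):
    """Solve (7) for psi(x) = lam * ||x||_1: a gradient step on the smooth
    part followed by coordinate-wise soft-thresholding."""
    v = x_t - eta * grad
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)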
3 Accelerated Gradient Method for Stochastic Learning
Let G(x_t, ξ_t) ≡ ∇_x F(x, ξ_t)|_{x=x_t} be the stochastic gradient of F(x, ξ_t). We assume that it is an
unbiased estimator of the gradient ∇f(x), i.e., E_ξ[G(x, ξ)] = ∇f(x). Algorithm 1 shows the proposed algorithm, which will be called SAGE (Stochastic Accelerated GradiEnt). It involves the
updating of three sequences {x_t}, {y_t} and {z_t}. Note that y_t is the generalized gradient update,
and x_{t+1} is a convex combination of y_t and z_t. The algorithm also maintains two parameter sequences {α_t} and {L_t}. We will see in Section 3.1 that different settings of these parameters lead
of the generalized gradient update yt , which is analogous to the subgradient computation in other
subgradient-based methods. In general, its computational complexity depends on the structure of
?(x). As will be seen in Section 3.3, this can often be efficiently obtained in many regularized risk
minimization problems.
Algorithm 1 SAGE (Stochastic Accelerated GradiEnt).
Input: Sequences {L_t} and {α_t}.
Initialize: y_{−1} = z_{−1} = 0, α_0 = Γ_0 = 1, L_0 = L + μ.
for t = 0 to N do
  x_t = (1 − α_t) y_{t−1} + α_t z_{t−1}.
  y_t = arg min_x {⟨G(x_t, ξ_t), x − x_t⟩ + (L_t/2)∥x − x_t∥² + ψ(x)}.
  z_t = z_{t−1} − (L_t α_t + μ)^{−1} [L_t (x_t − y_t) + μ(z_{t−1} − x_t)].
end for
Output y_N.
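The sketch below shows how Algorithm 1 could be implemented for ψ(x) = λ∥x∥₁, using the schedule (14) introduced in Theorem 1 below. It is illustrative code rather than the authors' implementation; grad_oracle is a hypothetical routine returning an unbiased stochastic gradient of f.

import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sage(grad_oracle, d, N, L, mu=0.0, b=1.0, lam=0.0):
    """SAGE (Algorithm 1) for psi(x) = lam * ||x||_1, with L_t = b(t+1)^{3/2} + L
    and alpha_t = 2/(t+2) as in (14). grad_oracle(x, t) returns an unbiased
    stochastic gradient of f at x."""
    y = np.zeros(d)                      # y_{-1}
    z = np.zeros(d)                      # z_{-1}
    for t in range(N + 1):
        alpha = 2.0 / (t + 2)
        L_t = b * (t + 1) ** 1.5 + L
        x = (1 - alpha) * y + alpha * z
        g = grad_oracle(x, t)
        # generalized gradient update for the l1 regularizer
        y = soft_threshold(x - g / L_t, lam / L_t)
        z = z - (L_t * (x - y) + mu * (z - x)) / (L_t * alpha + mu)
    return y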
3.1 Convergence Analysis
Define δ_t ≡ G(x_t, ξ_t) − ∇f(x_t). Because of the unbiasedness of G(x_t, ξ_t), E_{ξ_t}[δ_t] = 0. In the
following, we will show that the value of φ(y_t) − φ(x) can be related to that of φ(y_{t−1}) − φ(x) for
any x. Let g_t ≡ L_t(x_t − y_t) be the gradient mapping involved in updating y_t. First, we introduce
the following lemma.
Lemma 3. For t ≥ 0, φ(x) is quadratically bounded from below as

    φ(x) ≥ φ(y_t) + ⟨g_t, x − x_t⟩ + (μ/2)∥x − x_t∥² + ⟨δ_t, y_t − x⟩ + ((2L_t − L)/(2L_t²))∥g_t∥².
Proposition 1. Assume that for each t ≥ 0, ∥δ_t∥ ≤ σ and L_t > L. Then

    φ(y_t) − φ(x) + ((L_t α_t² + μα_t)/2)∥x − z_t∥²
      ≤ (1 − α_t)[φ(y_{t−1}) − φ(x)] + (L_t α_t²/2)∥x − z_{t−1}∥² + σ²/(2(L_t − L)) + α_t⟨δ_t, x − z_{t−1}⟩.        (8)
Proof. Define V_t(x) = ⟨g_t, x − x_t⟩ + (μ/2)∥x − x_t∥² + (L_t α_t/2)∥x − z_{t−1}∥². It is easy to see that
z_t = arg min_x V_t(x). Moreover, notice that V_t(x) is (L_t α_t + μ)-strongly convex. Hence on
applying Lemmas 2 and 3, we obtain that for any x,

    V_t(z_t) ≤ V_t(x) − ((L_t α_t + μ)/2)∥x − z_t∥²
            = ⟨g_t, x − x_t⟩ + (μ/2)∥x − x_t∥² + (L_t α_t/2)∥x − z_{t−1}∥² − ((L_t α_t + μ)/2)∥x − z_t∥²
            ≤ φ(x) − φ(y_t) − ((2L_t − L)/(2L_t²))∥g_t∥² + (L_t α_t/2)∥x − z_{t−1}∥² − ((L_t α_t + μ)/2)∥x − z_t∥² + ⟨δ_t, x − y_t⟩.

Then, φ(y_t) can be bounded from above, as:

    φ(y_t) ≤ φ(x) + ⟨g_t, x_t − z_t⟩ − ((2L_t − L)/(2L_t²))∥g_t∥² − (L_t α_t/2)∥z_t − z_{t−1}∥²
             + (L_t α_t/2)∥x − z_{t−1}∥² − ((L_t α_t + μ)/2)∥x − z_t∥² + ⟨δ_t, x − y_t⟩,        (9)

where the non-positive term −(μ/2)∥z_t − x_t∥² has been dropped from its right-hand side. On the other
hand, by applying Lemma 3 with x = y_{t−1}, we get

    φ(y_t) ≤ φ(y_{t−1}) + ⟨g_t, x_t − y_{t−1}⟩ + ⟨δ_t, y_{t−1} − y_t⟩ − ((2L_t − L)/(2L_t²))∥g_t∥²,        (10)

where the non-positive term −(μ/2)∥y_{t−1} − x_t∥² has also been dropped from the right-hand side. On
multiplying (9) by α_t and (10) by 1 − α_t, and then adding them together, we obtain

    φ(y_t) − φ(x) ≤ (1 − α_t)[φ(y_{t−1}) − φ(x)] − ((2L_t − L)/(2L_t²))∥g_t∥² + A + B + C − (L_t α_t²/2)∥z_t − z_{t−1}∥²,        (11)

where A = ⟨g_t, α_t(x_t − z_t) + (1 − α_t)(x_t − y_{t−1})⟩, B = α_t⟨δ_t, x − y_t⟩ + (1 − α_t)⟨δ_t, y_{t−1} − y_t⟩,
and C = (L_t α_t²/2)∥x − z_{t−1}∥² − ((L_t α_t² + μα_t)/2)∥x − z_t∥². In the following, we consider how to upper bound A
and B. First, by using the update rule of x_t in Algorithm 1 and the Young's inequality¹, we have

    A = ⟨g_t, α_t(x_t − z_{t−1}) + (1 − α_t)(x_t − y_{t−1})⟩ + α_t⟨g_t, z_{t−1} − z_t⟩
      = α_t⟨g_t, z_{t−1} − z_t⟩ ≤ (L_t α_t²/2)∥z_t − z_{t−1}∥² + ∥g_t∥²/(2L_t).        (12)

On the other hand, B can be bounded as

    B = ⟨δ_t, α_t x + (1 − α_t)y_{t−1} − x_t⟩ + ⟨δ_t, x_t − y_t⟩ = α_t⟨δ_t, x − z_{t−1}⟩ + ⟨δ_t, g_t⟩/L_t
      ≤ α_t⟨δ_t, x − z_{t−1}⟩ + σ∥g_t∥/L_t,        (13)

where the second equality is due to the update rule of x_t, and the last step is from the Cauchy-Schwartz inequality and the boundedness of δ_t. Hence, plugging (12) and (13) into (11),

    φ(y_t) − φ(x) ≤ (1 − α_t)[φ(y_{t−1}) − φ(x)] − ((L_t − L)/(2L_t²))∥g_t∥² + σ∥g_t∥/L_t + α_t⟨δ_t, x − z_{t−1}⟩ + C
                 ≤ (1 − α_t)[φ(y_{t−1}) − φ(x)] + σ²/(2(L_t − L)) + α_t⟨δ_t, x − z_{t−1}⟩ + C,

where the last step is due to the fact that −ax² + bx ≤ b²/(4a) with a, b > 0. On re-arranging terms, we
obtain (8).

¹ The Young's inequality states that ⟨x, y⟩ ≤ ∥x∥²/(2a) + a∥y∥²/2 for any a > 0.
Let the optimal solution in problem (6) be x⋆. From the update rules in Algorithm 1, we observe
that the triplet (x_t, y_{t−1}, z_{t−1}) depends on the random process ξ_{[t−1]} ≡ {ξ_0, . . . , ξ_{t−1}} and hence is
also random. Clearly, z_{t−1} and x⋆ are independent of ξ_t. Thus,

    E_{ξ_{[t]}}⟨δ_t, x⋆ − z_{t−1}⟩ = E_{ξ_{[t−1]}} E_{ξ_{[t]}}[⟨δ_t, x⋆ − z_{t−1}⟩ | ξ_{[t−1]}] = E_{ξ_{[t−1]}} E_{ξ_t}[⟨δ_t, x⋆ − z_{t−1}⟩]
                                 = E_{ξ_{[t−1]}}⟨x⋆ − z_{t−1}, E_{ξ_t}[δ_t]⟩ = 0,

where the first equality uses E_x[h(x)] = E_y E_x[h(x)|y], and the last equality is from our assumption
that the stochastic gradient G(x, ξ) is unbiased. Taking expectations on both sides of (8) with x =
x⋆, we obtain the following corollary, which will be useful in proving the subsequent theorems.

Corollary 1.

    E[φ(y_t)] − φ(x⋆) + ((L_t α_t² + μα_t)/2) E[∥x⋆ − z_t∥²]
      ≤ (1 − α_t)(E[φ(y_{t−1})] − φ(x⋆)) + (L_t α_t²/2) E[∥x⋆ − z_{t−1}∥²] + σ²/(2(L_t − L)).
So far, the choice of L_t and α_t in Algorithm 1 has been left unspecified. In the following, we show that with a good choice of L_t and α_t, (the expectation of) φ(y_t) converges rapidly to φ(x*).

Theorem 1. Assume that E[‖x* − z_t‖²] ≤ D² for some D. Set

    L_t = b(t+1)^{3/2} + L, \qquad \alpha_t = \frac{2}{t+2},        (14)

where b > 0 is a constant. Then the expected error of Algorithm 1 can be bounded as

    E[\phi(y_N)] - \phi(x^*) \le \frac{3D^2 L}{N^2} + \left(3D^2 b + \frac{5\sigma^2}{3b}\right)\frac{1}{\sqrt{N}}.        (15)

If σ were known, we could set b to the optimal choice of √5σ/(3D), and the bound in (15) becomes 3D²L/N² + 2√5σD/√N.
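For concreteness, the schedule (14) is trivial to compute at each iteration. The sketch below (our own illustration; the function name is hypothetical, and L and b are assumed to be known or tuned) shows the step-size choices used by Theorem 1.

import math

def theorem1_schedule(t, b, L):
    # Schedules from (14): L_t = b (t+1)^{3/2} + L and alpha_t = 2/(t+2).
    # b > 0 trades off the D^2 and sigma^2 terms in the bound (15).
    L_t = b * (t + 1) ** 1.5 + L
    alpha_t = 2.0 / (t + 2)
    return L_t, alpha_t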
Note that so far φ(x) is only assumed to be convex. As shown in the following theorem, the convergence rate can be further improved by assuming strong convexity. This also requires a setting of α_t and L_t different from that in (14).

Theorem 2. Assume the same conditions as in Theorem 1, except that φ(x) is μ-strongly convex. Set

    \alpha_t = \sqrt{\frac{\Lambda_{t-1}^2}{4} + \Lambda_{t-1}} - \frac{\Lambda_{t-1}}{2}, \qquad L_t = L + \mu\Lambda_{t-1}^{-1}, \qquad \text{for } t \ge 1,        (16)

where Λ_t ≡ ∏_{k=1}^{t}(1 − α_k) for t ≥ 1 and Λ_0 = 1. Then the expected error of Algorithm 1 can be bounded as

    E[\phi(y_N)] - \phi(x^*) \le \frac{2(L + \mu)D^2}{N^2} + \frac{6\sigma^2}{N\mu}.        (17)

In comparison, FOLOS only converges as O(log(N)/N) for strongly convex objectives.
3.2 Remarks
As in recent studies on stochastic composite optimization [13], the error bounds in (15) and (17) consist of two terms: a faster term related to the smooth component and a slower term related to the non-smooth component. SAGE benefits from using the structure of the problem and accelerates the convergence of the smooth component. On the other hand, many stochastic (sub)gradient-based algorithms like FOLOS do not separate the smooth from the non-smooth part, but simply treat the whole objective as non-smooth. Consequently, convergence of the smooth component is also slowed down to O(1/√N).
As can be seen from (15) and (17), the convergence of SAGE is essentially encumbered by the variance of the stochastic subgradient. Recall that the variance of the average of p i.i.d. random variables is equal to 1/p of the original variance. Hence, as in Pegasos [1], σ can be reduced by estimating the subgradient from a data subset.
Unlike the AC-SA algorithm in [13], the settings of L_t and α_t in (14) do not require knowledge of σ or the number of iterations, both of which can be difficult to estimate in practice. Moreover, with the use of a sparsity-promoting ψ(x), SAGE can produce a sparse solution (as will be experimentally demonstrated in Section 5) while AC-SA cannot. This is because in SAGE, the output y_t is obtained from a generalized gradient update. With a sparsity-promoting ψ(x), this reduces to a (soft-)thresholding step, and thus ensures a sparse solution. On the other hand, in each iteration of AC-SA, the output is a convex combination of two other variables. Unfortunately, adding two vectors is unlikely to produce a sparse vector.
3.3 Efficient Computation of y_t
The computational efficiency of Algorithm 1 hinges on the efficient computation of y_t. Recall that y_t is just the generalized gradient update, and so is not significantly more expensive than the gradient update in traditional algorithms. Indeed, the generalized gradient update is often a central component in various optimization and machine learning algorithms. In particular, Duchi and Singer [3] showed how it can be computed efficiently for various smooth and non-smooth regularizers, including the ℓ1, ℓ2, ℓ2², ℓ∞, Berhu and matrix norms. Interested readers are referred to [3] for details.
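As a concrete illustration, for ψ(x) = λ‖x‖₁ the generalized gradient update has the familiar soft-thresholding closed form. The following NumPy sketch is our own illustration of this special case (it is not code from [3]):

import numpy as np

def soft_threshold(v, tau):
    # Componentwise soft-thresholding: the prox of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def generalized_gradient_update(x_t, grad, L_t, lam):
    # y_t = argmin_x <grad, x - x_t> + (L_t/2)||x - x_t||^2 + lam * ||x||_1.
    # Completing the square shows this equals soft-thresholding of the
    # plain gradient step x_t - grad/L_t with threshold lam/L_t.
    return soft_threshold(x_t - grad / L_t, lam / L_t)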
4 Accelerated Gradient Method for Online Learning
In this section, we extend the proposed accelerated gradient scheme to online learning of (2). The algorithm, shown in Algorithm 2, is similar to the stochastic version in Algorithm 1.

Algorithm 2 SAGE-based Online Learning Algorithm.
Inputs: sequences {L_t} and {α_t}, where L_t > L and 0 < α_t < 1.
Initialize: z_1 = y_1.
loop
    x_t = (1 − α_t) y_{t−1} + α_t z_{t−1}.
    Output y_t = argmin_x { ⟨∇f_{t−1}(x_t), x − x_t⟩ + (L_t/2)‖x − x_t‖² + ψ(x) }.
    z_t = z_{t−1} − α_t (L_t + μα_t)^{−1} [L_t(x_t − y_t) + μ(z_{t−1} − x_t)].
end loop
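A direct transcription of Algorithm 2 into Python might look as follows. This is a sketch under the assumption ψ(x) = λ‖x‖₁ (so the y_t update is soft-thresholding, as in Section 3.3); the callback grad_ft, which returns the gradient of the revealed loss f_{t−1}, is our own device and not part of the paper.

import numpy as np

def sage_online(grad_ft, y1, L_seq, alpha_seq, lam, mu=0.0, T=100):
    # Sketch of Algorithm 2 for psi(x) = lam * ||x||_1.
    # grad_ft(t, x): assumed callback returning the gradient of f_{t-1} at x.
    # L_seq(t), alpha_seq(t): the sequences {L_t} and {alpha_t}.
    y = np.asarray(y1, dtype=float).copy()
    z = y.copy()                                  # initialize z_1 = y_1
    outputs = []
    for t in range(2, T + 1):
        L_t, a_t = L_seq(t), alpha_seq(t)
        x = (1.0 - a_t) * y + a_t * z             # x_t
        g = grad_ft(t, x)
        v = x - g / L_t                           # plain gradient step
        y = np.sign(v) * np.maximum(np.abs(v) - lam / L_t, 0.0)   # y_t
        # z_t = z_{t-1} - a_t (L_t + mu a_t)^{-1} [L_t(x_t - y_t) + mu(z_{t-1} - x_t)]
        z = z - a_t / (L_t + mu * a_t) * (L_t * (x - y) + mu * (z - x))
        outputs.append(y.copy())
    return outputs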
First, we introduce the following lemma, which plays a role similar to that of its stochastic counterpart, Lemma 3. Here, let δ_t ≡ L_t(x_t − y_t) be the gradient mapping related to the updating of y_t.

Lemma 4. For t > 1, φ_{t−1}(x) can be quadratically bounded from below as

    \phi_{t-1}(x) \ge \phi_{t-1}(y_t) + \langle\delta_t, x - x_t\rangle + \frac{\mu}{2}\|x - x_t\|^2 + \frac{2L_t - L}{2L_t^2}\|\delta_t\|^2.

Proposition 2. For any x and t ≥ 1, assume that there exists a subgradient ḡ(x) ∈ ∂ψ(x) such that ‖∇f_t(x) + ḡ(x)‖ ≤ Q. Then for Algorithm 2,

    \phi_{t-1}(y_{t-1}) - \phi_{t-1}(x) \le \frac{Q^2}{2(1-\alpha_t)(L_t - L)} + \frac{L_t}{2\alpha_t}\|x - z_{t-1}\|^2 - \frac{L_t + \mu\alpha_t}{2\alpha_t}\|x - z_t\|^2
        + \frac{(1-\alpha_t^2)L_t - \alpha_t(1-\alpha_t)L}{2}\|y_{t-1} - z_{t-1}\|^2 - \frac{L_t}{2}\|z_t - y_t\|^2.        (18)
Proof Sketch. Define θ_t ≡ L_t α_t^{−1}. From the update rule of z_t, one can check that

    z_t = \arg\min_x \Big\{ V_t(x) \equiv \langle\delta_t, x - x_t\rangle + \frac{\mu}{2}\|x - x_t\|^2 + \frac{\theta_t}{2}\|x - z_{t-1}\|^2 \Big\}.

Similar to the analysis used in obtaining (9), we can obtain

    \phi_{t-1}(y_t) - \phi_{t-1}(x) \le \langle\delta_t, x_t - z_t\rangle - \frac{2L_t - L}{2L_t^2}\|\delta_t\|^2 - \frac{\theta_t}{2}\|z_t - z_{t-1}\|^2 + \frac{\theta_t}{2}\|x - z_{t-1}\|^2 - \frac{\theta_t + \mu}{2}\|x - z_t\|^2.        (19)

On the other hand,

    \langle\delta_t, x_t - z_t\rangle - \frac{\|\delta_t\|^2}{2L_t} = \frac{L_t}{2}\big(\|z_t - x_t\|^2 - \|z_t - y_t\|^2\big)
        \le \frac{L_t}{2\alpha_t}\|z_t - z_{t-1}\|^2 + \frac{L_t(1-\alpha_t)}{2}\|z_{t-1} - y_{t-1}\|^2 - \frac{L_t}{2}\|z_t - y_t\|^2,        (20)

on using the convexity of ‖·‖². Using (20), inequality (19) becomes

    \phi_{t-1}(y_t) - \phi_{t-1}(x) \le \frac{L_t(1-\alpha_t)}{2}\|z_{t-1} - y_{t-1}\|^2 - \frac{L_t}{2}\|z_t - y_t\|^2 - \frac{L_t - L}{2L_t^2}\|\delta_t\|^2
        + \frac{\theta_t}{2}\|x - z_{t-1}\|^2 - \frac{\theta_t + \mu}{2}\|x - z_t\|^2.        (21)

On the other hand, by the convexity of φ_{t−1}(x) and Young's inequality, we have

    \phi_{t-1}(y_{t-1}) - \phi_{t-1}(y_t) \le \langle\nabla f_{t-1}(y_{t-1}) + \bar g_{t-1}(y_{t-1}),\ y_{t-1} - y_t\rangle
        \le \frac{Q^2}{2(1-\alpha_t)(L_t - L)} + \frac{(1-\alpha_t)(L_t - L)}{2}\|y_{t-1} - y_t\|^2.        (22)

Moreover, by using the update rule of x_t and the convexity of ‖·‖², we have

    \|y_{t-1} - y_t\|^2 = \|(y_{t-1} - x_t) + (x_t - y_t)\|^2 = \|\alpha_t(y_{t-1} - z_{t-1}) + (x_t - y_t)\|^2
        \le \alpha_t\|y_{t-1} - z_{t-1}\|^2 + (1-\alpha_t)^{-1}\|x_t - y_t\|^2 = \alpha_t\|y_{t-1} - z_{t-1}\|^2 + \frac{\|\delta_t\|^2}{(1-\alpha_t)L_t^2}.        (23)

On using (23), it follows from (22) that

    \phi_{t-1}(y_{t-1}) - \phi_{t-1}(y_t) \le \frac{Q^2}{2(1-\alpha_t)(L_t - L)} + \frac{\alpha_t(1-\alpha_t)(L_t - L)}{2}\|y_{t-1} - z_{t-1}\|^2 + \frac{L_t - L}{2L_t^2}\|\delta_t\|^2.

Inequality (18) then follows immediately by adding this to (21). □
Theorem 3. Assume that μ = 0, and ‖x* − z_t‖ ≤ D for t ≥ 1. Set α_t = a and L_t = aL√(t−1) + L, where a ∈ (0, 1) is a constant. Then the regret of Algorithm 2 can be bounded as

    \sum_{t=1}^{N} [\phi_t(y_t) - \phi_t(x^*)] \le \left(\frac{LD^2}{2a} + \frac{LD^2}{2} + \frac{Q^2}{a(1-a)L}\right)\sqrt{N}.

Theorem 4. Assume that μ > 0, and ‖x* − z_t‖ ≤ D for t ≥ 1. Set α_t = a, and L_t = aμt + L + a^{−1}(μ − L)_+, where a ∈ (0, 1) is a constant. Then the regret of Algorithm 2 can be bounded as

    \sum_{t=1}^{N} [\phi_t(y_t) - \phi_t(x^*)] \le \left(\frac{(2a + a^{-1})\mu}{2} + L\right) D^2 + \frac{Q^2}{2a(1-a)\mu}\log(N+1).

In particular, with a = 1/2, the regret bound reduces to

    \left(\frac{3\mu}{2} + L\right) D^2 + \frac{2Q^2}{\mu}\log(N+1).

5 Experiments
In this section, we perform experiments on the stochastic optimization of (2). Two data sets are used² (Table 1). The first one is the pcmac data set, a subset of the 20-newsgroup data set from [18], while the second one is the RCV1 data set, a filtered collection of the Reuters RCV1 from [19]. We choose the square loss for ℓ(·, ·) and the ℓ1 regularizer for ψ(·) in (2). As discussed in Section 3.3 and [3], the generalized gradient update can be efficiently computed by soft-thresholding in this case. Moreover, we do not use strong convexity, and so μ = 0.
We compare the proposed SAGE algorithm (with L_t and α_t as in (14)) with three recent algorithms: (1) FOLOS [3]; (2) SMIDAS [4]; and (3) SCD [4]. For a fair comparison, we compare their convergence behavior w.r.t. both the number of iterations and the number of data access operations, the latter of which has been advocated in [4] as an implementation-independent measure of time. Moreover, the efficiency tricks for sparse data described in [4] are also implemented. Following [4], we set the regularization parameter λ in (2) to 10⁻⁶. The η parameter in SMIDAS is searched over the range {10⁻⁶, 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹}, and the one with the lowest ℓ1-regularized loss is used. As in Pegasos [1], the (sub)gradient is computed from small sample subsets. The subset size p is set to min(0.01m, 500), where m is the data set size. This is used for all the algorithms except SCD, since SCD is based on coordinate descent and is quite different from the other stochastic subgradient algorithms.³ All the algorithms are trained with the same maximum amount of "time" (i.e., number of data access operations).

²Downloaded from http://people.cs.uchicago.edu/~vikass/svmlin.html and http://www.cs.ucsb.edu/~wychen/sc.html.
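The mini-batch variance-reduction trick described above is straightforward to implement. The sketch below is our own illustration of the setup used in the experiments (squared loss, gradient estimated from a random data subset); the function name is hypothetical.

import numpy as np

def minibatch_gradient(w, X, Y, batch_size):
    # Unbiased estimate of the gradient of f(w) = (1/(2m)) sum_i (x_i' w - y_i)^2,
    # computed from a random subset of the data (the Pegasos-style trick above,
    # with batch_size = min(0.01 m, 500) in the experiments).
    m = X.shape[0]
    idx = np.random.choice(m, size=min(batch_size, m), replace=False)
    Xb, Yb = X[idx], Y[idx]
    return Xb.T @ (Xb @ w - Yb) / len(idx)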
Table 1: Summary of the data sets.

    data set    #features    #instances    sparsity
    pcmac           7,511         1,946       0.73%
    RCV1           47,236       193,844       0.12%
Results are shown in Figure 1. As can be seen, SAGE requires many fewer iterations for convergence than the others (Figures 1(a) and 1(e)). Moreover, the additional cost of maintaining x_t and z_t is small, and the most expensive step in each SAGE iteration is computing the generalized gradient update. Hence, its per-iteration complexity is comparable with the other (sub)gradient schemes, and its convergence in terms of the number of data access operations is still the fastest (Figures 1(b), 1(c), 1(f) and 1(g)). Moreover, the sparsity of the SAGE solution is comparable with those of the other algorithms (Figures 1(d) and 1(h)).
[Figure 1 appears here: eight panels comparing SAGE, FOLOS, SMIDAS and SCD on pcmac (upper row, panels (a)-(d)) and RCV1 (lower row, panels (e)-(h)). Panels (a), (e): ℓ1-regularized loss vs. number of iterations; (b), (f): ℓ1-regularized loss vs. number of data accesses; (c), (g): error (%) vs. number of data accesses; (d), (h): density of w vs. number of data accesses.]
Figure 1: Performance of the various algorithms on the pcmac (upper) and RCV1 (below) data sets.
6 Conclusion
In this paper, we developed a novel accelerated gradient method (SAGE) for stochastic convex composite optimization. It enjoys the computational simplicity and scalability of traditional (sub)gradient methods but is much faster, both theoretically and empirically. Experimental results show that SAGE outperforms recent (sub)gradient descent methods. Moreover, SAGE can also be extended to online learning, obtaining the best regret bounds currently known.

Acknowledgment
This research has been partially supported by the Research Grants Council of the Hong Kong Special Administrative Region under grant 615209.

³For the same reason, an SCD iteration is also very different from an iteration in the other algorithms. Hence, SCD is not shown in the plots of the regularized loss versus the number of iterations.
References
[1] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, pages 807–814, Corvalis, Oregon, USA, 2007.
[2] A. Bordes, L. Bottou, and P. Gallinari. SGD-QN: Careful quasi-Newton stochastic gradient descent. Journal of Machine Learning Research, 10:1737–1754, 2009.
[3] J. Duchi and Y. Singer. Online and batch learning using forward looking subgradients. Technical report, 2009.
[4] S. Shalev-Shwartz and A. Tewari. Stochastic methods for ℓ1 regularized loss minimization. In Proceedings of the 26th International Conference on Machine Learning, pages 929–936, Montreal, Quebec, Canada, 2009.
[5] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, 2008.
[6] S. Shalev-Shwartz and N. Srebro. SVM optimization: Inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935, Helsinki, Finland, 2008.
[7] Y. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN SSSR (translated as Soviet Math. Docl.), 269:543–547, 1983.
[8] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Paper 2007/76, Catholic University of Louvain, September 2007.
[9] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2:183–202, 2009.
[10] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society: Series B, 58:267–288, 1996.
[11] S. Ji, L. Sun, R. Jin, and J. Ye. Multi-label multiple kernel learning. In Advances in Neural Information Processing Systems 21, 2009.
[12] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In Proceedings of the International Conference on Machine Learning, Montreal, Canada, 2009.
[13] G. Lan. An optimal method for stochastic composite optimization. Technical report, School of Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
[14] Y. Nesterov and I.U.E. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2003.
[15] S.M. Kakade and S. Shalev-Shwartz. Mind the duality gap: Logarithmic regret algorithms for online optimization. In Advances in Neural Information Processing Systems 21, 2009.
[16] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[17] S.J. Wright, R.D. Nowak, and M.A.T. Figueiredo. Sparse reconstruction by separable approximation. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Las Vegas, Nevada, USA, March 2008.
[18] V. Sindhwani and S.S. Keerthi. Large scale semi-supervised linear SVMs. In Proceedings of the SIGIR Conference on Research and Development in Information Retrieval, pages 477–484, Seattle, WA, USA, 2006.
[19] Y. Song, W.Y. Chen, H. Bai, C.J. Lin, and E.Y. Chang. Parallel spectral clustering. In Proceedings of the European Conference on Machine Learning, pages 374–389, Antwerp, Belgium, 2008.
Sufficient Conditions for Agnostic Active Learnable
Liwei Wang
Key Laboratory of Machine Perception, MOE,
School of Electronics Engineering and Computer Science,
Peking University,
[email protected]
Abstract
We study pool-based active learning in the presence of noise, i.e. the agnostic setting. Previous works have shown that the effectiveness of agnostic active learning
depends on the learning problem and the hypothesis space. Although there are
many cases on which active learning is very useful, it is also easy to construct
examples that no active learning algorithm can have advantage. In this paper, we
propose intuitively reasonable sufficient conditions under which agnostic active
learning algorithm is strictly superior to passive supervised learning. We show
that under some noise condition, if the Bayesian classification boundary and the
underlying distribution are smooth to a finite order, active learning achieves polynomial improvement in the label complexity; if the boundary and the distribution
are infinitely smooth, the improvement is exponential.
1 Introduction
Active learning addresses the problem that the algorithm is given a pool of unlabeled data drawn
i.i.d. from some underlying distribution. The algorithm can then pay for the label of any example
in the pool. The goal is to learn an accurate classifier by requesting as few labels as possible. This
is in contrast with the standard passive supervised learning, where the labeled examples are chosen
randomly.
The simplest example that demonstrates the potential of active learning is learning the optimal threshold on an interval. If there exists a perfect threshold separating the two classes (i.e. there is no noise), then binary search only needs O(ln(1/ε)) labels to learn an ε-accurate classifier, while passive learning requires O(1/ε) labels. Another encouraging example is learning a homogeneous linear separator for data uniformly distributed on the unit sphere of R^d. In this case active learning can still give exponential savings in the label complexity [Das05].
However, there are also very simple problems for which active learning does not help at all. Suppose the instances are uniformly distributed on [0, 1], and the positive class can be any interval on [0, 1]. Any active learning algorithm needs O(1/ε) label requests to learn an ε-accurate classifier [Han07]; there is no improvement over passive learning. All of the above are noise-free (realizable) problems. Of more interest, and more realistic, is the agnostic setting, where the class labels can be noisy so that the best classifier in the hypothesis space has a non-zero error ν. For agnostic active learning, no active learning algorithm can always reduce label requests, due to a lower bound Ω(ν²/ε²) on the label complexity [Kaa06].
It is known that whether active learning helps or not depends on the distribution of the instance-label pairs and the hypothesis space. Thus a natural question is: under what conditions is active learning guaranteed to require fewer labels than passive learning?
In this paper we propose intuitively reasonable sufficient conditions under which active learning
achieves lower label complexity than that of passive learning. Specifically, we focus on the A2 algorithm [BAL06] which works in the agnostic setting. Earlier work has discovered that the label
complexity of A2 can be upper bounded by a parameter of the hypothesis space and the data distribution called disagreement coefficient [Han07]. This parameter often characterizes the intrinsic
difficulty of the learning problem. By an analysis of the disagreement coefficient we show that, under some noise condition, if the Bayesian classification boundary and the underlying distribution are
smooth to a finite order, then A2 gives polynomial savings in the label complexity; if the boundary
and the distribution are infinitely smooth, A2 gives exponential savings.
1.1 Related Works
Our work is closely related to [CN07], in which the authors proved sample complexity bounds for problems with a smooth classification boundary under Tsybakov's noise condition [Tsy04]. They also assumed that the distribution of the instances is bounded from above and below. The main difference from our work is that their analysis is for the membership-query setting [Ang88], in which the learning algorithm can choose any point in the instance space and ask for its label, while the pool-based model analyzed here assumes the algorithm can only request labels of the instances it observes.
Another related work is due to Friedman [Fri09]. He introduced a different notion of smoothness and showed that it guarantees exponential improvement for active learning. But his work focused on the realizable case and does not apply to the agnostic setting studied here.
Soon after A2, Dasgupta, Hsu and Monteleoni [DHM07] proposed an elegant agnostic active learning algorithm, which reduces active learning to a series of supervised learning problems. If the hypothesis space has a finite VC dimension, it has a better label complexity than A2. However, this algorithm relies on the normalized uniform convergence bound for the VC class, and it is not known whether such bounds hold for more general hypothesis spaces such as the smooth boundary class analyzed in this paper. (For recent advances on this topic, see [GKW03].) It is left as an open problem whether our results apply to this algorithm via a refined analysis of the normalized bounds.
Preliminaries
Let X be an instance space, D a distribution over X ? {?1, 1}. Let H be the hypothesis space, a
set of classifiers from X to {?1}. Denote DX the marginal of D over X . In our active learning
model, the algorithm has access to a pool of unlabeled examples from DX . For any unlabeled point
x, the algorithm can ask for its label y, which is generated from the conditional distribution at x.
The error of a hypothesis h according to D is erD (h) = Pr(x,y)?D (h(x) 6= y). The empirical error
P
1
on a finite sample S is erS (h) = |S|
(x,y)?S I[h(x) 6= y], where I is the indicator function. We
use h? denote the best classifier in H. That is, h? = arg minh?H erD (h). Let ? = erD (h? ). Our
? ? H with error rate at most ? + ?, where ? is a predefined parameter.
goal is to learn a h
A2 is the first rigorous agnostic active learning algorithm. A description of the algorithm is given
in Fig.1. It was shown that A2 is never much worse than passive learning in terms of the label
complexity. The key observation that A2 can be superior to passive learning is that, since our goal is
? such that erD (h)
? ? erD (h? ) + ?, we only need to compare the errors of hypotheses.
to choose an h
Therefore we can just request labels of those x on which the hypotheses under consideration have
disagreement.
To do this, the algorithm keeps track of two spaces. One is the current version space Vi , consisting
of hypotheses that with statistical confidence are not too bad compared to h? . To achieve such a
statistical guarantee, the algorithm must be provided with a uniform convergence bound over the
hypothesis space. That is, with probability at least 1 ? ? over the draw of sample S according to D,
LB(S, h, ?) ? erD (h) ? U B(S, h, ?),
hold simultaneously for all h ? H, where the lower bound LB(S, h, ?) and upper bound
U B(S, h, ?) can be computed from the empirical error erS (h). The other space is the region of
disagreement DIS(Vi ), which is the set of all x ? X for which there are hypotheses in Vi that
disagree on x. Formally, for any V ? H,
DIS(V ) = {x ? X : ?h, h0 ? V, h(x) 6= h0 (x)}.
2
Input: concept space H, accuracy parameter ε ∈ (0, 1), confidence parameter δ ∈ (0, 1);
Output: classifier ĥ ∈ H;
Let n̂ = 2(2 log₂ λ + ln(1/δ)) log₂(2/ε) (λ depends on H and the problem; see Theorem 5);
Let δ′ = δ/n̂;
V₀ ← H, S₀ ← ∅, i ← 0, j₁ ← 0, k ← 1;
while ∆(V_i)(min_{h∈V_i} UB(S_i, h, δ′) − min_{h∈V_i} LB(S_i, h, δ′)) > ε do
    V_{i+1} ← {h ∈ V_i : LB(S_i, h, δ′) ≤ min_{h′∈V_i} UB(S_i, h′, δ′)};
    i ← i + 1;
    if ∆(V_i) < (1/2) ∆(V_{j_k}) then
        k ← k + 1; j_k ← i;
    end
    S′_i ← rejection-sample 2^{i−j_k} samples x from D satisfying x ∈ DIS(V_i);
    S_i ← {(x, y = label(x)) : x ∈ S′_i};
end
Return ĥ = argmin_{h∈V_i} UB(S_i, h, δ′).
Algorithm 1: The A2 algorithm (this is the version in [Han07])
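The Python sketch below mirrors the control flow of Algorithm 1 for the simplest setting we could make self-contained: a finite hypothesis set over a finite unlabeled pool. The UB/LB bounds use the O(n^{−1/2}) form of Theorem 5 with a given constant λ; the finite H, the pool-based DIS computation, and the choice δ′ = δ/max_rounds are our own simplifications for illustration, not the paper's exact construction.

import math, random

def a2(H, pool, label, eps, delta, lam=1.0, max_rounds=60):
    # Runnable sketch of A2: H is a list of functions X -> {-1,+1},
    # pool is a list of unlabeled points, label(x) queries the oracle.
    def dis(V):                      # region of disagreement within the pool
        return [x for x in pool if len({h(x) for h in V}) > 1]
    def err(h, S):                   # empirical error on labeled sample S
        return sum(h(x) != y for x, y in S) / max(len(S), 1)
    def ub(h, S, d):
        return err(h, S) + (lam * math.sqrt(math.log(1/d) / len(S)) if S else 1.0)
    def lb(h, S, d):
        return err(h, S) - (lam * math.sqrt(math.log(1/d) / len(S)) if S else 1.0)

    d_prime = delta / max_rounds     # simplification of delta' = delta / n_hat
    V, S = list(H), []
    i, j_k, last_vol = 0, 0, 1.0
    while True:
        region = dis(V)
        vol = len(region) / len(pool)        # stands in for Delta(V_i)
        gap = min(ub(h, S, d_prime) for h in V) - min(lb(h, S, d_prime) for h in V)
        if vol * gap <= eps or i >= max_rounds or not region:
            break
        thr = min(ub(h, S, d_prime) for h in V)
        V = [h for h in V if lb(h, S, d_prime) <= thr]   # prune version space
        i += 1
        if vol < 0.5 * last_vol:             # epoch bookkeeping: j_k <- i
            j_k, last_vol = i, vol
        xs = [random.choice(region) for _ in range(2 ** (i - j_k))]
        S = [(x, label(x)) for x in xs]      # fresh labeled sample, as in A2
    return min(V, key=lambda h: ub(h, S, d_prime))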
The volume of DIS(V) is denoted by ∆(V) = Pr_{X∼D_X}(X ∈ DIS(V)). Requesting labels only for instances from DIS(V_i) is what allows A2 to require fewer labels than passive learning. Hence the key issue is how fast ∆(V_i) shrinks. This process, and in turn the label complexity of A2, are nicely characterized by the disagreement coefficient θ introduced in [Han07].

Definition 1. Let ρ(·, ·) be the pseudo-metric on a hypothesis space H induced by D_X. That is, for h, h′ ∈ H, ρ(h, h′) = Pr_{X∼D_X}(h(X) ≠ h′(X)). Let B(h, r) = {h′ ∈ H : ρ(h, h′) ≤ r}. The disagreement coefficient θ(ε) is

    \theta(\varepsilon) = \sup_{r \ge \varepsilon} \frac{\Pr_{X\sim D_X}(X \in DIS(B(h^*, r)))}{r},        (1)

where h* = arg min_{h∈H} er_D(h).
Note that θ depends on H and D, and 1 ≤ θ(ε) ≤ 1/ε.
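To make the definition concrete, θ(ε) can be estimated numerically in simple cases. For threshold classifiers h_a(x) = sign(x − a) on [0, 1] with D_X uniform, ρ(h_a, h_{a*}) = |a − a*| and DIS(B(h*, r)) is the interval (a* − r, a* + r), so θ(ε) = sup_{r≥ε} 2r/r = 2. The brute-force check below is our own illustration (function name and discretization choices are ours):

import numpy as np

def theta_thresholds(a_star=0.5, eps=1e-3):
    # Disagreement coefficient of 1-D thresholds under uniform D_X (should be ~2).
    xs = (np.arange(1000) + 0.5) / 1000          # grid over the instance space
    best = 0.0
    for r in np.linspace(eps, 0.5, 50):          # sup over r >= eps
        ball = [a for a in np.linspace(0, 1, 201) if abs(a - a_star) <= r]
        in_dis = np.zeros_like(xs, dtype=bool)
        for a in ball:                            # x is in DIS iff some h in the
            in_dis |= (np.sign(xs - a) != np.sign(xs - a_star))   # ball disagrees with h*
        best = max(best, in_dis.mean() / r)
    return best

print(theta_thresholds())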
3 Main Results
As mentioned earlier, whether active learning helps or not depends on the distribution and the hypothesis space. There are simple examples, such as learning intervals, for which active learning has no advantage. However, these negative examples are more or less "artificial". It is important to understand whether problems of practical interest are actively learnable. In this section we provide intuitively reasonable conditions under which the A2 algorithm is strictly superior to passive learning. Our main results (Theorem 11 and Theorem 12) show that if the learning problem has a smooth Bayes classification boundary, and the distribution D_X has a density bounded by a smooth function, then under some noise condition A2 saves label requests: the improvement is polynomial for finite smoothness, and exponential for infinite smoothness.
In Section 3.1 we formally define smoothness and introduce the hypothesis space, which contains smooth classifiers. We show a uniform convergence bound of order O(n^{−1/2}) for this hypothesis space; this bound determines UB(S, h, δ) and LB(S, h, δ) in A2. Section 3.2 is the main technical part, where we give upper bounds for the disagreement coefficient of smooth problems. In Section 3.3 we show that under some noise condition, there is a sharper bound for the label complexity in terms of the disagreement coefficient. These lead to our main results.
3.1 Smoothness
Let f be a function defined on Ω ⊂ R^d. For any vector k = (k₁, ..., k_d) of d nonnegative integers, let |k| = Σ_{i=1}^d k_i. Define the K-norm as

    \|f\|_K := \max_{|k| \le K-1}\ \sup_{x\in\Omega} |D^{k} f(x)| + \max_{|k| = K-1}\ \sup_{x, x'\in\Omega} \frac{|D^{k} f(x) - D^{k} f(x')|}{\|x - x'\|},        (2)

where

    D^{k} = \frac{\partial^{|k|}}{\partial^{k_1} x_1 \cdots \partial^{k_d} x_d}

is the differential operator.

Definition 2 (Finite Smooth Functions). A function f is said to be Kth order smooth with respect to a constant C if ‖f‖_K ≤ C. The set of Kth order smooth functions is defined as

    F_C^K := \{f : \|f\|_K \le C\}.        (3)

Thus Kth order smooth functions have uniformly bounded partial derivatives up to order K − 1, and the (K − 1)th order partial derivatives are Lipschitz.

Definition 3 (Infinitely Smooth Functions). A function f is said to be infinitely smooth with respect to a constant C if ‖f‖_K ≤ C for all nonnegative integers K. The set of infinitely smooth functions is denoted by F_C^∞.

With these definitions of smoothness, we introduce the hypothesis space we use in the A2 algorithm.
Definition 4 (Hypotheses with Smooth Boundaries). A set of hypotheses H_C^K defined on [0,1]^{d+1} is said to have Kth order smooth boundaries if, for every h ∈ H_C^K, the classification boundary is a Kth order smooth function on [0,1]^d. To be precise, let x = (x¹, x², ..., x^{d+1}) ∈ [0,1]^{d+1}; the classification boundary is the graph of a function x^{d+1} = f(x¹, ..., x^d), where f ∈ F_C^K. Similarly, a hypothesis space H_C^∞ is said to have infinitely smooth boundaries if for every h ∈ H_C^∞ the classification boundary is the graph of an infinitely smooth function on [0,1]^d.

Previous results on the label complexity of A2 assume the hypothesis space has finite VC dimension. The goal is to ensure an O(n^{−1/2}) uniform convergence bound, so that UB(S, h, δ) − LB(S, h, δ) = O(n^{−1/2}). The hypothesis spaces H_C^K and H_C^∞ do not have finite VC dimensions. Compared with the VC class, H_C^K and H_C^∞ are exponentially larger in terms of the covering numbers [vdVW96]. But a uniform convergence bound still holds for H_C^K and H_C^∞ under a broad class of distributions. The following theorem is a consequence of known results on empirical processes.

Theorem 5. For any distribution D over [0,1]^{d+1} × {−1, 1} whose marginal distribution D_X on [0,1]^{d+1} has a density upper bounded by a constant M, and any 0 < δ ≤ δ₀ (δ₀ is a constant), with probability at least 1 − δ over the draw of a training set S of n examples,

    |er_D(h) - er_S(h)| \le \lambda \sqrt{\frac{\log(1/\delta)}{n}}        (4)

holds simultaneously for all h ∈ H_C^K, provided K > d (or K = ∞). Here λ is a constant depending only on d, K, C and M.

Proof. It can be seen from Corollary 2.7.3 in [vdVW96] that the bracketing numbers N_{[]} of H_C^K satisfy log N_{[]}(ε, H_C^K, L₂(D_X)) = O((1/ε)^{2d/K}). Since K > d, there exist constants c₁, c₂ such that

    P_D\left(\sup_{h\in H_C^K} |er(h) - er_S(h)| \ge t\right) \le c_1 \exp\left(-\frac{nt^2}{c_2}\right)

for all nt² ≥ t₀, where t₀ is some constant (see Theorem 5.11 and Lemma 5.10 of [vdG00]). Letting δ = c₁ exp(−nt²/c₂), the theorem follows. □
Now we can determine UB(S, h, δ) and LB(S, h, δ) for A2 by simply letting

    UB(S, h, \delta) = er_S(h) + \lambda\sqrt{\frac{\ln(1/\delta)}{n}}, \qquad LB(S, h, \delta) = er_S(h) - \lambda\sqrt{\frac{\ln(1/\delta)}{n}},

where S is of size n.
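In code, these bounds are one-liners (λ from Theorem 5 is assumed given; in practice one would use a conservative constant):

import math

def ub(err_S, n, delta, lam):
    # UB(S, h, delta) = er_S(h) + lam * sqrt(ln(1/delta)/n)
    return err_S + lam * math.sqrt(math.log(1.0 / delta) / n)

def lb(err_S, n, delta, lam):
    # LB(S, h, delta) = er_S(h) - lam * sqrt(ln(1/delta)/n)
    return err_S - lam * math.sqrt(math.log(1.0 / delta) / n)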
3.2 Disagreement Coefficient
The disagreement coefficient θ plays an important role in the label complexity of active learning algorithms. In fact, the previous negative examples for which active learning does not work are all the result of a large θ. For instance, in the interval learning problem, θ(ε) = 1/ε, which leads to the same label complexity as passive learning. In the following two theorems we show that the disagreement coefficient θ(ε) for smooth problems is small.

Theorem 6. Let the hypothesis space be H_C^K. If the distribution D_X has a density p(x¹, ..., x^{d+1}) such that there exist a Kth order smooth function g(x¹, ..., x^{d+1}) and two constants 0 < α ≤ β with αg(x¹, ..., x^{d+1}) ≤ p(x¹, ..., x^{d+1}) ≤ βg(x¹, ..., x^{d+1}) for all (x¹, ..., x^{d+1}) ∈ [0,1]^{d+1}, then

    \theta(\varepsilon) = O\left(\left(\frac{1}{\varepsilon}\right)^{\frac{d}{K+d}}\right).

Theorem 7. Let the hypothesis space be H_C^∞. If the distribution D_X has a density p(x¹, ..., x^{d+1}) such that there exist an infinitely smooth function g(x¹, ..., x^{d+1}) and two constants 0 < α ≤ β with αg ≤ p ≤ βg on [0,1]^{d+1}, then θ(ε) = O(log^d(1/ε)).
The key points in the theorems are that the classification boundaries are smooth, and that the density is bounded from above and below by constant multiples of a smooth function. These two conditions include a large class of learning problems. Note that the density itself is not necessarily smooth; we only require that it not change too rapidly.

The intuition behind the two theorems is as follows. Let f_{h*}(x) and f_h(x) be the classification boundaries of h* and h, and suppose ρ(h, h*) is small, where ρ(h, h*) = Pr_{x∼D_X}(h(x) ≠ h*(x)) is the pseudo-metric. If the classification boundaries and the density are all smooth, then the two boundaries have to be close to each other everywhere; that is, |f_h(x) − f_{h*}(x)| is small uniformly over x. Hence only points close to the classification boundary of h* can be in DIS(B(h*, ε)), which leads to a small disagreement coefficient.

The proofs of Theorem 6 and Theorem 7 rely on the following two lemmas.

Lemma 8. Let ∆ be a function defined on [0,1]^d with ∫_{[0,1]^d} |∆(x)| dx ≤ r. If there exist a Kth order smooth function ∆̄ and constants 0 < α ≤ β such that α|∆̄(x)| ≤ |∆(x)| ≤ β|∆̄(x)| for all x ∈ [0,1]^d, then

    \|\Delta\|_\infty = O\big(r^{\frac{K}{K+d}}\big) = O\left(r \cdot \left(\frac{1}{r}\right)^{\frac{d}{K+d}}\right),

where ‖∆‖_∞ = sup_{x∈[0,1]^d} |∆(x)|.

Lemma 9. Let ∆ be a function defined on [0,1]^d with ∫_{[0,1]^d} |∆(x)| dx ≤ r. If there exist an infinitely smooth function ∆̄ and constants 0 < α ≤ β such that α|∆̄(x)| ≤ |∆(x)| ≤ β|∆̄(x)| for all x ∈ [0,1]^d, then ‖∆‖_∞ = O(r · log^d(1/r)).

We briefly describe the ideas behind the proofs of these two lemmas in the Appendix. The formal proofs are given in the supplementary file.
Proof of Theorem 6. First of all, since we focus on binary classification, DIS(B(h*, r)) can be written equivalently as

    DIS(B(h^*, r)) = \{x \in X : \exists h \in B(h^*, r) \text{ s.t. } h(x) \ne h^*(x)\}.

Consider any h ∈ B(h*, r). Let f_h, f_{h*} ∈ F_C^K be the classification boundaries of h and h* respectively. If r is sufficiently small, we must have

    \rho(h, h^*) = \Pr_{X\sim D_X}(h(X) \ne h^*(X)) = \int_{[0,1]^d} dx^1 \cdots dx^d \left| \int_{f_{h^*}(x^1,\dots,x^d)}^{f_h(x^1,\dots,x^d)} p(x^1, \dots, x^{d+1})\, dx^{d+1} \right|.

Denote

    \Delta_h(x^1, \dots, x^d) = \int_{f_{h^*}(x^1,\dots,x^d)}^{f_h(x^1,\dots,x^d)} p(x^1, \dots, x^{d+1})\, dx^{d+1}.

We assert that there exist a Kth order smooth function ∆̄_h(x¹, ..., x^d) and two constants 0 < u ≤ v such that u|∆̄_h| ≤ |∆_h| ≤ v|∆̄_h|. To see this, remember that f_h and f_{h*} are Kth order smooth functions, that the density p is upper and lower bounded by constant multiples of a Kth order smooth function g, and note that

    \bar\Delta_h(x^1, \dots, x^d) = \int_{f_{h^*}(x^1,\dots,x^d)}^{f_h(x^1,\dots,x^d)} g(x^1, \dots, x^{d+1})\, dx^{d+1}

is a Kth order smooth function; the latter is easy to check by taking derivatives. By Lemma 8, we have ‖∆_h‖_∞ = O(r · (1/r)^{d/(K+d)}), since ∫|∆_h| = ρ(h, h*) ≤ r. Because this holds for all h ∈ B(h*, r), we have sup_{h∈B(h*,r)} ‖∆_h‖_∞ = O(r · (1/r)^{d/(K+d)}).

Now consider the region of disagreement of B(h*, r). Clearly DIS(B(h*, r)) = ∪_{h∈B(h*,r)} {x : h(x) ≠ h*(x)}. Hence

    \Pr_{X\sim D_X}(X \in DIS(B(h^*, r))) = \Pr_{X\sim D_X}\Big(X \in \bigcup_{h\in B(h^*,r)} \{x : h(x) \ne h^*(x)\}\Big)
        \le 2 \sup_{h\in B(h^*,r)} \int_{[0,1]^d} \|\Delta_h\|_\infty\, dx^1 \cdots dx^d = O\left(r \cdot \left(\frac{1}{r}\right)^{\frac{d}{K+d}}\right).

The theorem follows from the definition of θ(ε). □

Theorem 7 can be proved similarly, using Lemma 9.
3.3 Label Complexity
It was shown in [Han07] that the label complexity of A2 is

    O\left(\theta^2\left(\frac{\nu^2}{\varepsilon^2} + 1\right) \mathrm{polylog}\left(\frac{1}{\varepsilon}\right)\ln\frac{1}{\delta}\right),        (5)

where ν = min_{h∈H} er_D(h). When ν ≤ ε, our previous results on the disagreement coefficient already imply polynomial or exponential improvements for A2. However, when ε < ν, the label complexity becomes O(1/ε²), the same as passive learning, whatever θ is. In fact, without any assumption on the noise, the O(1/ε²) result is inevitable, due to the Ω(ν²/ε²) lower bound for agnostic active learning [Kaa06].
Recently, there has been considerable interest in how noise affects the learning rate. A remarkable notion is due to Tsybakov [Tsy04], and was first introduced for passive learning. Let η(x) = P(Y = 1 | X = x). Tsybakov's noise condition assumes that for some c > 0 and 0 < α ≤ ∞,

    \Pr_{X \sim D_X}\big(|\eta(X) - 1/2| \le t\big) \le c\, t^{\alpha},        (6)

for all 0 < t ≤ t₀, where t₀ is some constant. Condition (6) implies a connection between the pseudo-distance ρ(h, h*) and the excess risk er_D(h) − er_D(h*):

    \rho(h, h^*) \le c' \big(er_D(h) - er_D(h^*)\big)^{1/\kappa},        (7)

where h* is the Bayes classifier and c′ is some finite constant. Here κ = (1 + α)/α ≥ 1 is called the noise exponent. κ = 1 is the optimal case, where the problem has bounded noise; κ > 1 corresponds to unbounded noise.
Castro and Nowak [CN07] noticed that Tsybakov's noise condition is also important in active learning. They proved label complexity bounds in terms of κ for the membership-query setting. A notable fact is that Õ((1/ε)^{2−2/κ}) (for κ > 1) is both an upper and a lower bound for membership-query learning in the minimax sense. It is important to point out that the lower bound automatically applies to the pool-based model, since the pool model makes weaker assumptions than membership-query. Hence for large κ, active learning gives very limited improvement over passive learning, whatever the other factors are.

Recently, Hanneke [Han09] obtained a similar label complexity for the pool-based model. He showed that the number of labels requested by A2 is O(θ² ln(1/ε) ln(1/δ)) in the bounded noise case, i.e. κ = 1. Here we slightly generalize Hanneke's result to unbounded noise by introducing the following noise condition. We assume there exist c₁, c₂ > 0 and T₀ > 0 such that

    \Pr_{X \sim D_X}\left(|\eta(X) - 1/2| \le \frac{1}{T}\right) \le c_1 e^{-c_2 T},        (8)

for all T ≥ T₀. It is not difficult to show that (8) implies

    \rho(h, h^*) = O\left((er(h) - er(h^*)) \ln\frac{1}{er(h) - er(h^*)}\right).        (9)

This condition allows unbounded noise. Under this noise condition, A2 has a better label complexity.
Theorem 10. Assume that the learning problem satisfies the noise condition (8) and that D_X has a density upper bounded by a constant M. For any hypothesis space H that has an O(n^{−1/2}) uniform convergence bound, if the Bayes classifier h* is in H, then with probability at least 1 − δ, A2 outputs ĥ ∈ H with er_D(ĥ) ≤ er_D(h*) + ε, and the number of labels requested by the algorithm is at most O(θ²(ε) ln(1/δ) polylog(1/ε)).
Proof. As in the proof in [Han07], one can show that with probability 1 − δ we never remove h* from V_i, and that for any h, h′ ∈ V_i we have ∆(V_i)(er_i(h) − er_i(h′)) = er_D(h) − er_D(h′), where er_i(h) is the error rate of h conditioned on DIS(V_i). These guarantee er_D(ĥ) ≤ er_D(h*) + ε.

If ∆(V_i) ≤ 2εθ(ε), then due to the O(n^{−1/2}) uniform convergence bound, O(θ²(ε) ln(1/δ)) labels suffice to make ∆(V_i)(UB(S_i, h, δ′) − LB(S_i, h, δ′)) ≤ ε for all h ∈ DIS(V_i), and the algorithm stops. Hence we next consider ∆(V_i) > 2εθ(ε). Note that ∆(V_i) < (1/2)∆(V_{j_k}) occurs at most O(ln(1/ε)) times, so below we bound the number of labels needed to make ∆(V_i) < (1/2)∆(V_{j_k}) occur. By the definition of θ(ε), if ρ(h, h*) ≤ ∆(V_{j_k})/(2θ(ε)) for all h ∈ V_i, then ∆(V_i) < (1/2)∆(V_{j_k}). Let Γ(h) = er_D(h) − er_D(h*). By the noise assumption (9), we have that if

    \Gamma(h) \ln\frac{1}{\Gamma(h)} \le c\, \frac{\Delta(V_{j_k})}{2\theta(\varepsilon)},        (10)

then ∆(V_i) < (1/2)∆(V_{j_k}). Here and below, c is an appropriate constant that may differ from line to line. Note that (10) holds if

    \Gamma(h) \le c\, \frac{\Delta(V_{j_k})}{\theta(\varepsilon)} \Big/ \ln\frac{\theta(\varepsilon)}{\Delta(V_{j_k})},

and in turn if Γ(h) ≤ c (∆(V_{j_k})/θ(ε)) / ln(1/ε), since ∆(V_{j_k}) ≥ ∆(V_i) > 2εθ(ε). But to achieve the last inequality, the algorithm only needs to label O(θ²(ε) ln²(1/ε) ln(1/δ)) instances from DIS(V_i). So the total number of labels requested by A2 is O(θ²(ε) ln(1/δ) ln³(1/ε)). □
Now we give our main label complexity bounds for agnostic active learning.

Theorem 11. Let the instance space be [0,1]^{d+1} and the hypothesis space be H_C^K, where K > d. Assume that the Bayes classifier h* of the learning problem is in H_C^K, that the noise condition (8) holds, and that D_X has a density bounded by a Kth order smooth function as in Theorem 6. Then the A2 algorithm outputs ĥ with error rate er_D(ĥ) ≤ er_D(h*) + ε, and the number of labels requested is at most

    \tilde{O}\left(\left(\frac{1}{\varepsilon}\right)^{\frac{2d}{K+d}} \ln\frac{1}{\delta}\right),

where Õ hides the polylog(1/ε) term.

Proof. Note that if the density of D_X is upper bounded by a smooth function, then it is also upper bounded by a constant M. Combining Theorems 5, 6 and 10, the theorem follows. □

Combining Theorems 5, 7 and 10, we can show the following theorem.
Theorem 12. Let the instance space be [0,1]^{d+1} and the hypothesis space be H_C^∞. Assume that the Bayes classifier h* of the learning problem is in H_C^∞, that the noise condition (8) holds, and that D_X has a density bounded by an infinitely smooth function as in Theorem 7. Then the A2 algorithm outputs ĥ with error rate er_D(ĥ) ≤ er_D(h*) + ε, and the number of labels requested is at most

    O\left(\mathrm{polylog}\left(\frac{1}{\varepsilon}\right) \ln\frac{1}{\delta}\right).
4 Conclusion
We showed that if the Bayes classification boundary is smooth and the distribution is bounded by a smooth function, then under some noise condition active learning achieves polynomial or exponential improvement in the label complexity over passive supervised learning, according to whether the smoothness is of finite or infinite order.
Although we assumed that the classification boundary is the graph of a single function, our results generalize to the case where the boundary consists of a finite number of functions. To be precise, consider N functions f₁(x) ≤ ··· ≤ f_N(x) for all x ∈ [0,1]^d, and let f₀(x) ≡ 0, f_{N+1}(x) ≡ 1. The positive (or negative) set defined by these functions is {(x, x^{d+1}) : f_{2i}(x) ≤ x^{d+1} ≤ f_{2i+1}(x), i = 0, 1, ..., N/2}. Our theorems still hold in this case. In addition, by techniques in [Dud99] (page 259), our results may generalize to problems that have intrinsically smooth boundaries (not only graphs of functions).
Appendix
In this appendix we briefly describe the ideas used to prove Lemma 8 and Lemma 9. The formal proofs are given in the supplementary file.

Ideas to Prove Lemma 8. First consider the d = 1 case. Note that if f ∈ F_C^K, then |f^{(K−1)}(x) − f^{(K−1)}(x′)| ≤ C|x − x′| for all x, x′ ∈ [0,1]. It is not difficult to see that we only need to show the following: for any f with |f^{(K−1)}(x) − f^{(K−1)}(x′)| ≤ C|x − x′|, if ∫₀¹ |f(x)| dx = r, then ‖f‖_∞ = O(r^{K/(K+1)}).

To show this, note that in order for ‖f‖_∞ to achieve its maximum while ∫|f| = r, the derivatives of f must be as large as possible. Indeed, it can be shown that (one of) the optimal f has the form

    f(x) = \begin{cases} \frac{C}{K!}\,|x - \tau|^K & 0 \le x \le \tau, \\ 0 & \tau < x \le 1. \end{cases}        (11)

That is, |f^{(K−1)}(x) − f^{(K−1)}(x′)| = C|x − x′| (i.e. the (K − 1)th order derivative attains the upper bound of the Lipschitz constant) for all x, x′ ∈ [0, τ], where τ is determined by ∫₀¹ f(x) dx = r. It is then easy to check that ‖f‖_∞ = O(r^{K/(K+1)}).

For the general d > 1 case, we relax the constraint. Note that if all (K − 1)th order partial derivatives are Lipschitz, then all (K − 1)th order directional derivatives are Lipschitz too. Under the latter constraint, (one of) the optimal f has the form

    f(x) = \begin{cases} \frac{C}{K!}\,\big|\,\|x\| - \tau\,\big|^K & 0 \le \|x\| \le \tau, \\ 0 & \tau < \|x\|, \end{cases}

where τ is determined by ∫_{[0,1]^d} |f(x)| dx = r. This implies ‖f‖_∞ = O(r^{K/(K+d)}).
Ideas to Prove Lemma 9. Similar to the proof of Lemma 8, we only need to show that for any f ∈ F_C^∞, if ∫_{[0,1]^d} |f(x)| dx = r, then ‖f‖_∞ = O(r · log^d(1/r)).

Since f is infinitely smooth, we can choose K large and depending on r. For the d = 1 case, let

    K + 1 = \frac{\log\frac{1}{r}}{\log\log\frac{1}{r}}.

We know that the optimal f is of the form of Eq. (11). (This choice of K is approximately the largest K such that Eq. (11) is still of the optimal form; if K were larger, τ would fall outside [0,1].) Since ∫₀¹ |f(x)| dx = r, we have τ^{K+1} = (K+1)! r / C. Now ‖f‖_∞ = (C/K!) τ^K. Note that

    \left(\frac{1}{r}\right)^{\frac{1}{K+1}} = \left(\frac{1}{r}\right)^{\log\log\frac{1}{r} / \log\frac{1}{r}} = \log\frac{1}{r}.

By Stirling's formula we can then show ‖f‖_∞ = O(r · log(1/r)).

For the d > 1 case, let K + d = log(1/r) / log log(1/r). By similar arguments we can show ‖f‖_∞ = O(r · log^d(1/r)).
Acknowledgement
This work was supported by NSFC(60775005).
8
References
[Ang88] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[BAL06] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In 23rd International Conference on Machine Learning, 2006.
[CN07] R. Castro and R. Nowak. Minimax bounds for active learning. In 20th Annual Conference on Learning Theory, 2007.
[Das05] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems, 2005.
[DHM07] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, 2007.
[Dud99] R.M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[Fri09] E. Friedman. Active learning for smooth problems. In 22nd Annual Conference on Learning Theory, 2009.
[GKW03] V.E. Gine, V.I. Koltchinskii, and J. Wellner. Ratio limit theorems for empirical processes. Stochastic Inequalities and Applications, 56:249–278, 2003.
[Han07] S. Hanneke. A bound on the label complexity of agnostic active learning. In 24th International Conference on Machine Learning, 2007.
[Han09] S. Hanneke. Adaptive rates of convergence in active learning. In 22nd Annual Conference on Learning Theory, 2009.
[Kaa06] M. Kaariainen. Active learning in the non-realizable case. In 17th International Conference on Algorithmic Learning Theory, 2006.
[Tsy04] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32:135–166, 2004.
[vdG00] S. van de Geer. Applications of Empirical Process Theory. Cambridge University Press, 2000.
[vdVW96] A. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes with Application to Statistics. Springer Verlag, 1996.
Which graphical models are difficult to learn?
Andrea Montanari
Department of Electrical Engineering and
Department of Statistics
Stanford University
[email protected]
José Bento
Department of Electrical Engineering
Stanford University
[email protected]
Abstract
We consider the problem of learning the structure of Ising models (pairwise binary Markov random fields) from i.i.d. samples. While several methods have
been proposed to accomplish this task, their relative merits and limitations remain
somewhat obscure. By analyzing a number of concrete examples, we show that
low-complexity algorithms systematically fail when the Markov random field develops long-range correlations. More precisely, this phenomenon appears to be
related to the Ising model phase transition (although it does not coincide with it).
1
Introduction and main results
Given a graph G = (V = [p], E), and a positive parameter ? > 0 the ferromagnetic Ising model on
G is the pairwise Markov random field
Y
1
?G,? (x) =
(1)
e?xi xj
ZG,?
(i,j)?E
over binary variables x = (x1 , x2 , . . . , xp ). Apart from being one of the most studied models in
statistical mechanics, the Ising model is a prototypical undirected graphical model, with applications
in computer vision, clustering and spatial statistics. Its obvious generalization to edge-dependent
parameters ?ij , (i, j) ? E is of interest as well, and will be introduced in Section 1.2.2. (Let us
stress that we follow the statistical mechanics convention of calling (1) an Ising model for any graph
G.)
In this paper we study the following structural learning problem: Given n i.i.d. samples x(1) ,
x(2) ,. . . , x(n) with distribution ?G,? ( ? ), reconstruct the graph G. For the sake of simplicity, we
assume that the parameter ? is known, and that G has no double edges (it is a ?simple? graph).
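For intuition, and to generate synthetic data for the algorithms discussed below, samples from μ_{G,θ} can be drawn approximately by Gibbs sampling. The sketch below is a standard single-site sampler, not taken from the paper; its output is only asymptotically distributed as μ_{G,θ}, so in practice one would thin a long chain or restart it between samples to approximate i.i.d. draws.

import math, random

def gibbs_ising(adj, theta, n_sweeps=200):
    # One approximate sample from the ferromagnetic Ising model (1).
    # adj: list of neighbor lists for vertices 0..p-1; theta: coupling strength.
    # Conditional of x_i given the rest: P(x_i = +1 | x_rest) = 1/(1+exp(-2*theta*S_i)),
    # where S_i is the sum of neighboring spins.
    p = len(adj)
    x = [random.choice([-1, 1]) for _ in range(p)]
    for _ in range(n_sweeps):
        for i in range(p):
            s = sum(x[j] for j in adj[i])
            prob_plus = 1.0 / (1.0 + math.exp(-2.0 * theta * s))
            x[i] = 1 if random.random() < prob_plus else -1
    return x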
The graph learning problem is solvable with unbounded sample complexity and computational resources [1]. The question we address is: for which classes of graphs and values of the parameter θ is the problem solvable under appropriate complexity constraints? More precisely, given an algorithm Alg, a graph G, a value θ of the model parameter, and a small δ > 0, the sample complexity is defined as

    n_{\mathrm{Alg}}(G, \theta) \equiv \inf\big\{ n \in \mathbb{N} : \mathbb{P}_{n,G,\theta}\{\mathrm{Alg}(x^{(1)}, \dots, x^{(n)}) = G\} \ge 1 - \delta \big\},        (2)

where P_{n,G,θ} denotes probability with respect to n i.i.d. samples with distribution μ_{G,θ}. Further, we let χ_Alg(G, θ) denote the number of operations of the algorithm Alg, when run on n_Alg(G, θ) samples.¹

¹For the algorithms analyzed in this paper, the behavior of n_Alg and χ_Alg does not change significantly if we require only "approximate" reconstruction (e.g. in graph distance).
The general problem is therefore to characterize the functions n_Alg(G, θ) and χ_Alg(G, θ), in particular for an optimal choice of the algorithm. General bounds on n_Alg(G, θ) have been given in [2, 3], under the assumption of unbounded computational resources. A general characterization of how well low-complexity algorithms can perform is therefore lacking. Although we cannot prove such a general characterization, in this paper we estimate n_Alg and χ_Alg for a number of graph models, as a function of θ, and unveil a fascinating universal pattern: when the model (1) develops long-range correlations, low-complexity algorithms fail. Under the Ising model, the variables {x_i}_{i∈V} become strongly correlated for large θ. For a large class of graphs with degree bounded by Δ, this phenomenon corresponds to a phase transition beyond some critical value of θ uniformly bounded in p, with typically θ_crit ≃ const./Δ. In the examples discussed below, the failure of low-complexity algorithms appears to be related to this phase transition (although it does not coincide with it).
1.1 A toy example: the thresholding algorithm
In order to illustrate the interplay between graph structure, sample complexity and interaction strength θ, it is instructive to consider a warm-up example. The thresholding algorithm reconstructs G by thresholding the empirical correlations

    \widehat{C}_{ij} \equiv \frac{1}{n} \sum_{\ell=1}^{n} x_i^{(\ell)} x_j^{(\ell)} \qquad \text{for } i, j \in V.        (3)

THRESHOLDING(samples {x^{(ℓ)}}, threshold τ)
1: Compute the empirical correlations {Ĉ_ij}_{(i,j)∈V×V};
2: For each (i, j) ∈ V × V
3:     If Ĉ_ij ≥ τ, set (i, j) ∈ E;
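A direct NumPy transcription of this pseudocode (our own illustration; samples is an n × p matrix of ±1 spins, e.g. produced by the Gibbs sampler sketched earlier):

import numpy as np

def thresholding(samples, tau):
    # Thr(tau): reconstruct the edge set by thresholding the empirical
    # correlations (3).
    n, p = samples.shape
    C = samples.T @ samples / n          # C[i, j] = empirical correlation
    return {(i, j) for i in range(p) for j in range(i + 1, p) if C[i, j] >= tau}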
We will denote this algorithm by Thr(τ). Notice that its complexity is dominated by the computation of the empirical correlations, i.e. χ_{Thr(τ)} = O(p²n). The sample complexity n_{Thr(τ)} can be bounded for specific classes of graphs as follows (the proofs are straightforward and omitted from this paper).
Theorem 1.1. If G has maximum degree Δ > 1 and if θ < atanh(1/(2Δ)), then there exists τ = τ(θ) such that

    n_{\mathrm{Thr}(\tau)}(G, \theta) \le \frac{8}{\big(\tanh\theta - \frac{1}{2\Delta}\big)^2} \log\frac{2p}{\delta}.        (4)

Further, the choice τ(θ) = (tanh θ + (1/2Δ))/2 achieves this bound.

Theorem 1.2. There exists a numerical constant K such that the following is true. If Δ > 3 and θ > K/Δ, there are graphs of bounded degree Δ such that for any τ, n_{Thr(τ)} = ∞, i.e. the thresholding algorithm always fails with high probability.

These results confirm the idea that the failure of low-complexity algorithms is related to long-range correlations in the underlying graphical model. If the graph G is a tree, then correlations between far-apart variables x_i, x_j decay exponentially with the distance between vertices i, j. The same happens on bounded-degree graphs if θ ≤ const./Δ. However, for θ > const./Δ, there exist families of bounded-degree graphs with long-range correlations.
1.2 More sophisticated algorithms

In this section we characterize χ_Alg(G, θ) and n_Alg(G, θ) for more advanced algorithms. We again
obtain very distinct behaviors of these algorithms depending on long-range correlations. Due to
space limitations, we focus on two types of algorithms and only outline the proof of our most challenging result, namely Theorem 1.6.

In the following we denote by ∂i the neighborhood of a node i ∈ G (i ∉ ∂i), and assume the degree
to be bounded: |∂i| ≤ ∆.

1.2.1 Local Independence Test

A recurring approach to structural learning consists in exploiting the conditional independence structure encoded by the graph [1, 4, 5, 6].

Let us consider, to be definite, the approach of [4], specializing it to the model (1). Fix a vertex r,
whose neighborhood we want to reconstruct, and consider the conditional distribution of x_r given its
neighbors²: µ_{G,θ}(x_r | x_{∂r}). Any change of x_i, i ∈ ∂r, produces a change in this distribution which
is bounded away from 0. Let U be a candidate neighborhood, and assume U ⊆ ∂r. Then changing
the value of x_j, j ∈ U, will produce a noticeable change in the marginal of X_r, even if we condition
on the remaining values in U and in any W, |W| ≤ ∆. On the other hand, if U ⊄ ∂r, then it is
possible to find W (with |W| ≤ ∆) and a node i ∈ U such that changing its value after fixing all
other values in U ∪ W will produce no noticeable change in the conditional marginal. (Just choose
i ∈ U\∂r and W = ∂r\U). This procedure allows us to distinguish subsets of ∂r from other sets
of vertices, thus motivating the following algorithm.
LOCAL INDEPENDENCE TEST( samples {x^(ℓ)}, thresholds (ε, γ) )
1: Select a node r ∈ V;
2: Set as its neighborhood the largest candidate neighborhood U of size at most ∆ for which the score function SCORE(U) > ε/2;
3: Repeat for all nodes r ∈ V;

The score function SCORE(·) depends on ({x^(ℓ)}, ε, γ) and is defined as follows,

    SCORE(U) ≡ min_{W,j} max_{x_i, x_W, x_U, x_j} | P̂_{n,G,θ}{X_i = x_i | X_W = x_W, X_U = x_U}
                 − P̂_{n,G,θ}{X_i = x_i | X_W = x_W, X_{U\j} = x_{U\j}, X_j = x_j} |.        (5)

In the minimum, |W| ≤ ∆ and j ∈ U. In the maximum, the values must be such that

    P̂_{n,G,θ}{X_W = x_W, X_U = x_U} > γ/2,    P̂_{n,G,θ}{X_W = x_W, X_{U\j} = x_{U\j}, X_j = x_j} > γ/2,

where P̂_{n,G,θ} is the empirical distribution calculated from the samples {x^(ℓ)}. We denote this algorithm
by Ind(ε, γ). The search over candidate neighborhoods U, the search for minima and maxima in the
computation of SCORE(U), and the computation of P̂_{n,G,θ} all contribute to χ_Ind(G, θ).
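A brute-force sketch of SCORE(U) is shown below, assuming samples in an (n, p) array over {−1, +1} and writing the conditioned variable as x_r, the vertex being reconstructed. Since x_r is binary, maximizing over its value is redundant, and the two terms of Eq. (5) then differ only in the value assigned to x_j. The enumeration is exponential in ∆ and |U|, so this is for exposition only; all names are ours.

    import itertools
    import numpy as np

    def _cond_prob(samples, r, cond_vars, cond_vals):
        """Empirical P{X_r = +1 | X_cond = vals} and the mass of the conditioning event."""
        mask = np.all(samples[:, cond_vars] == cond_vals, axis=1)
        mass = mask.mean()
        return (None if mass == 0 else (samples[mask, r] == 1).mean()), mass

    def score(samples, r, U, Delta, gamma):
        """SCORE(U) of Eq. (5) for a nonempty candidate neighborhood U of node r."""
        p = samples.shape[1]
        U = list(U)
        others = [v for v in range(p) if v != r and v not in U]
        best = np.inf
        for j in U:                                      # min over j in U ...
            rest = [u for u in U if u != j]
            for k in range(Delta + 1):                   # ... and over W with |W| <= Delta
                for W in map(list, itertools.combinations(others, k)):
                    worst = 0.0                          # max over admissible assignments
                    for x_w in itertools.product((-1, 1), repeat=len(W)):
                        for x_rest in itertools.product((-1, 1), repeat=len(rest)):
                            probs = []
                            for x_j in (-1, 1):
                                pr, mass = _cond_prob(samples, r, W + rest + [j],
                                                      list(x_w) + list(x_rest) + [x_j])
                                if pr is not None and mass > gamma / 2:
                                    probs.append(pr)
                            if len(probs) == 2:
                                worst = max(worst, abs(probs[0] - probs[1]))
                    best = min(best, worst)
        return best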
Both theorems that follow are consequences of the analysis of [4].

Theorem 1.3. Let G be a graph of bounded degree ∆ ≥ 1. For every θ there exist (ε, γ), and a
numerical constant K, such that

    n_Ind(ε,γ)(G, θ) ≤ (100∆ / (ε²γ⁴)) log(2p/δ),    χ_Ind(ε,γ)(G, θ) ≤ K (2p)^{2∆+1} log p.

More specifically, one can take ε = (1/4) sinh(2θ), γ = e^{−4∆θ} 2^{−2∆}.

This first result implies in particular that G can be reconstructed with polynomial complexity for
any bounded ∆. However, the degree of this polynomial is quite high and non-uniform in ∆. This
makes the above approach impractical.
A way out was proposed in [4]. The idea is to identify a set of 'potential neighbors' of vertex r via
thresholding:

    B(r) = { i ∈ V : Ĉ_ri > κ/2 },        (6)

For each node r ∈ V, we evaluate SCORE(U) by restricting the minimum in Eq. (5) over W ⊆ B(r),
and search only over U ⊆ B(r). We call this algorithm IndD(ε, γ, κ). The basic intuition here is
that Ĉ_ri decreases rapidly with the graph distance between vertices r and i. As mentioned above,
this is true at small θ.
Theorem 1.4. Let G be a graph of bounded degree ∆ ≥ 1. Assume that θ < K/∆ for some small
enough constant K. Then there exist ε, γ, κ such that

    n_IndD(ε,γ,κ)(G, θ) ≤ 8(κ^{−2} + 8^∆) log(4p/δ),    χ_IndD(ε,γ,κ)(G, θ) ≤ K′ p ∆^∆ (log(4/δ))/κ + K′ ∆ p² log p.

More specifically, we can take κ = tanh θ, ε = (1/4) sinh(2θ) and γ = e^{−4∆θ} 2^{−2∆}.

² If a is a vector and R is a set of indices, then we denote by a_R the vector formed by the components of a
with index in R.
1.2.2 Regularized Pseudo-Likelihoods

A different approach to the learning problem consists in maximizing an appropriate empirical likelihood function [7, 8, 9, 10, 13]. To control the fluctuations caused by the limited number of samples,
and to select sparse graphs, a regularization term is often added [7, 8, 9, 10, 11, 12, 13].

As a specific low-complexity implementation of this idea, we consider the ℓ1-regularized pseudo-likelihood method of [7]. For each node r, the following likelihood function is considered

    L(θ; {x^(ℓ)}) = −(1/n) Σ_{ℓ=1}^{n} log P_{n,G,θ}(x_r^(ℓ) | x_\r^(ℓ))        (7)

where x_\r = x_{V\r} = {x_i : i ∈ V \ r} is the vector of all variables except x_r, and P_{n,G,θ} is defined
from the following extension of (1),

    µ_{G,θ}(x) = (1/Z_{G,θ}) Π_{i,j∈V} e^{θ_ij x_i x_j}        (8)

where θ = {θ_ij}_{i,j∈V} is a vector of real parameters. Model (1) corresponds to θ_ij = 0 for all (i, j) ∉ E,
and θ_ij = θ for all (i, j) ∈ E.

The function L(θ; {x^(ℓ)}) depends only on θ_r,· = {θ_rj : j ∈ V \ r} and is used to estimate the neighborhood of each node by the following algorithm, Rlr(λ):

REGULARIZED LOGISTIC REGRESSION( samples {x^(ℓ)}, regularization λ )
1: Select a node r ∈ V;
2: Calculate θ̂_r,· = argmin_{θ_r,· ∈ R^{p−1}} { L(θ_r,·; {x^(ℓ)}) + λ||θ_r,·||_1 };
3: If θ̂_rj > 0, set (r, j) ∈ E;
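In practice, step 2 is an ℓ1-penalized logistic regression: under model (8), P{X_r = +1 | x_\r} = 1/(1 + e^{−2 Σ_j θ_rj x_j}), so the logistic coefficients equal 2θ_rj. A scikit-learn sketch follows; the mapping C = 1/(nλ) between sklearn's C and the λ above, and the small tolerance standing in for the strict '> 0' test, are our reading of the objective, not something fixed by the text.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def rlr_neighborhood(samples, r, lam, tol=1e-6):
        """Estimate the neighborhood of node r by l1-regularized logistic regression."""
        n, p = samples.shape
        X = np.delete(samples, r, axis=1)                  # x_{\r}
        y = samples[:, r]
        clf = LogisticRegression(penalty="l1", solver="liblinear",
                                 C=1.0 / (n * lam), fit_intercept=False)
        clf.fit(X, y)
        theta_hat = clf.coef_.ravel() / 2.0                # coefficients are 2 * theta_rj
        cols = [j for j in range(p) if j != r]
        return {cols[k] for k, t in enumerate(theta_hat) if t > tol}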
Our first result shows that Rlr(λ) indeed reconstructs G if θ is sufficiently small.

Theorem 1.5. There exist numerical constants K1, K2, K3, such that the following is true. Let G
be a graph with degree bounded by ∆ ≥ 3. If θ ≤ K1/∆, then there exists λ such that

    n_Rlr(λ)(G, θ) ≤ K2 θ^{−2} ∆ log(8p²/δ).        (9)

Further, the above holds with λ = K3 θ ∆^{−1/2}.

This theorem is proved by noting that for θ ≤ K1/∆ correlations decay exponentially, which makes
all conditions in Theorem 1 of [7] (denoted there by A1 and A2) hold, and then computing the
probability of success as a function of n, while strengthening the error bounds of [7].

In order to prove a converse to the above result, we need to make some assumptions on λ. Given
θ > 0, we say that λ is 'reasonable' for that value of θ if the following conditions hold: (i) Rlr(λ)
is successful with probability larger than 1/2 on any star graph (a graph composed of a vertex r
connected to ∆ neighbors, plus isolated vertices); (ii) λ ≤ δ(n) for some sequence δ(n) ↓ 0.

Theorem 1.6. There exists a numerical constant K such that the following happens. If ∆ > 3,
θ > K/∆, then there exist graphs G of degree bounded by ∆ such that for all reasonable λ,
n_Rlr(λ)(G) = ∞, i.e. regularized logistic regression fails with high probability.

The graphs for which regularized logistic regression fails are not contrived examples. Indeed we will
prove that the claim in the last theorem holds with high probability when G is a uniformly random
graph of regular degree ∆.

The proof of Theorem 1.6 is based on showing that an appropriate incoherence condition is necessary
for Rlr to successfully reconstruct G. The analogous result was proven in [14] for model selection
using the Lasso. In this paper we show that such a condition is also necessary when the underlying
model is an Ising model. Notice that, given the graph G, checking the incoherence condition is
NP-hard for a general (non-ferromagnetic) Ising model, and requires significant computational effort
Figure 1: Learning random subgraphs of a 7 × 7 (p = 49) two-dimensional grid from n = 4500
Ising model samples, using regularized logistic regression. Left: success probability as a function
of the model parameter θ and of the regularization parameter λ0 (darker corresponds to highest
probability). Right: the same data plotted for several choices of λ versus θ. The vertical line
corresponds to the model critical temperature. The thick line is an envelope of the curves obtained
for different λ, and should correspond to optimal regularization.
even in the ferromagnetic case. Hence the incoherence condition does not provide, by itself, a clear
picture of which graph structures are difficult to learn. We will instead show how to evaluate it on
specific graph families.

Under the restriction λ → 0 the solutions given by Rlr converge to θ* with n [7]. Thus, for large
n we can expand L around θ* to second order in (θ − θ*). When we add the regularization term
to L we obtain a quadratic model analogous to the Lasso, plus the error term due to the quadratic
approximation. It is thus not surprising that, when λ → 0, the incoherence condition introduced for
the Lasso in [14] is also relevant for the Ising model.
2 Numerical experiments

In order to explore the practical relevance of the above results, we carried out extensive numerical
simulations using the regularized logistic regression algorithm Rlr(λ). Among other learning algorithms, Rlr(λ) strikes a good balance of complexity and performance. Samples from the Ising model
(1) were generated using Gibbs sampling (a.k.a. Glauber dynamics). Mixing time can be very large
for θ ≥ θ_crit, and was estimated using the time required for the overall bias to change sign (this is a
quite conservative estimate at low temperature). Generating the samples {x^(ℓ)} was indeed the bulk
of our computational effort and took about 50 days of CPU time on Pentium Dual Core processors (we
show here only part of these data). Notice that Rlr(λ) had been tested in [7] only on tree graphs G,
or in the weakly coupled regime θ < θ_crit. In these cases sampling from the Ising model is easy, but
structural learning is also intrinsically easier.
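For reference, one Glauber/Gibbs sweep for model (1) is sketched below; the adjacency format is an arbitrary choice of ours, and near θ_crit the number of sweeps must be far larger than any fixed default, as discussed above.

    import numpy as np

    def gibbs_sample(adj, theta, n_sweeps, rng=None):
        """Full Gibbs sweeps for the Ising measure (1); adj maps node -> iterable of neighbors."""
        rng = np.random.default_rng() if rng is None else rng
        p = len(adj)
        x = rng.choice(np.array([-1, 1]), size=p)
        for _ in range(n_sweeps):
            for i in range(p):
                h = theta * sum(x[j] for j in adj[i])      # local field from the neighbors
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * h))    # P{x_i = +1 | x_rest}
                x[i] = 1 if rng.random() < p_plus else -1
        return x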
Figure 1 reports the success probability of Rlr(λ) when applied to random subgraphs of a 7 × 7
two-dimensional grid. Each such graph was obtained by removing each edge independently with
probability ρ = 0.3. Success probability was estimated by applying Rlr(λ) to each vertex of 8
graphs (thus averaging over 392 runs of Rlr(λ)), using n = 4500 samples. We scaled the regularization parameter as λ = 2λ0 θ (log p / n)^{1/2} (this choice is motivated by the algorithm analysis and
is empirically the most satisfactory), and searched over λ0.

The data clearly illustrate the phenomenon discussed. Despite the large number of samples,
n ≫ log p, when θ crosses a threshold, the algorithm starts performing poorly irrespective of λ.
Intriguingly, this threshold is not far from the critical point of the Ising model on a randomly diluted
grid, θ_crit(ρ = 0.3) ≈ 0.7 [15, 16].
Figure 2: Learning uniformly random graphs of degree 4 from Ising model samples, using Rlr.
Left: success probability as a function of the number of samples n for several values of θ. Right:
the same data plotted for several choices of λ versus θ, as in Fig. 1, right panel.

Figure 2 presents similar data when G is a uniformly random graph of degree ∆ = 4, over p = 50
vertices. The evolution of the success probability with n clearly shows a dichotomy. When θ is
below a threshold, a small number of samples is sufficient to reconstruct G with high probability.
Above the threshold, even n = 10⁴ samples are too few. In this case we can predict the threshold
analytically, cf. Lemma 3.3 below, and get θ_thr(∆ = 4) ≈ 0.4203, which compares favorably with
the data.
3 Proofs

In order to prove Theorem 1.6, we need a few auxiliary results. It is convenient to introduce some
notation. If M is a matrix and R, P are index sets, then M_{RP} denotes the submatrix with row
indices in R and column indices in P. As above, we let r be the vertex whose neighborhood we are
trying to reconstruct, and define S = ∂r, S^c = V \ (∂r ∪ r). Since the cost function L(θ; {x^(ℓ)}) +
λ||θ||_1 only depends on θ through its components θ_r,· = {θ_rj}, we will hereafter neglect all the other
parameters and write θ as a shorthand for θ_r,·.

Let ẑ* be a subgradient of ||θ||_1 evaluated at the true parameter values, θ* = {θ*_rj : θ*_rj = 0, ∀j ∉
∂r; θ*_rj = θ, ∀j ∈ ∂r}. Let θ̂^n be the parameter estimate returned by Rlr(λ) when the number
of samples is n. Note that, since we assumed θ* ≥ 0, ẑ*_S = 1. Define Q^n(θ; {x^(ℓ)}) to be the
Hessian of L(θ; {x^(ℓ)}) and Q(θ) = lim_{n→∞} Q^n(θ; {x^(ℓ)}). By the law of large numbers, Q(θ) is
the Hessian of E_{G,θ} log P_{G,θ}(X_r | X_\r), where E_{G,θ} is the expectation with respect to (8) and X is a
random variable distributed according to (8). We will denote the maximum and minimum eigenvalues
of a symmetric matrix M by σ_max(M) and σ_min(M) respectively.

We will omit arguments whenever clear from the context. Any quantity evaluated at the true parameter values will be represented with a *, e.g. Q* = Q(θ*). Quantities carrying a hat depend on n.
Throughout this section G is a graph of maximum degree ∆.
3.1 Proof of Theorem 1.6

Our first auxiliary result establishes that, if λ is small, then ||Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S||_∞ > 1 is a sufficient
condition for the failure of Rlr(λ).

Lemma 3.1. Assume [Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S]_i ≥ 1 + ε for some ε > 0 and some row i ∈ V, σ_min(Q*_{SS}) ≥
C_min > 0, and λ < C_min³ √ε / (2⁹ ∆⁴). Then the success probability of Rlr(λ) is upper bounded as

    P_succ ≤ 4∆² e^{−n δ_A²} + 2∆ e^{−n λ² δ_B²},        (10)

where δ_A = (C_min²/100∆²) ε and δ_B = (C_min/8∆) ε.
The next lemma implies that, for λ to be 'reasonable' (in the sense introduced in Section 1.2.2),
nλ² must be unbounded.

Lemma 3.2. There exists M = M(K, θ) > 0 for θ > 0 such that the following is true: If G is the
graph with only one edge between nodes r and i and nλ² ≤ K, then

    P_succ ≤ e^{−M(K,θ) p} + e^{−n(1 − tanh θ)²/32}.        (11)

Finally, our key result shows that the condition ||Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S||_∞ ≤ 1 is violated with high
probability for large random graphs. The proof of this result relies on a local weak convergence
result for ferromagnetic Ising models on random graphs proved in [17].

Lemma 3.3. Let G be a uniformly random regular graph of degree ∆ > 3, and ε > 0 be sufficiently
small. Then, there exists θ_thr(∆, ε) such that, for θ > θ_thr(∆, ε), ||Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S||_∞ ≥ 1 + ε with
probability converging to 1 as p → ∞.

Furthermore, for large ∆, θ_thr(∆, 0+) = θ̃ ∆^{−1} (1 + o(1)). The constant θ̃ is given by θ̃ =
(tanh h̄)/h̄, where h̄ is the unique positive solution of h̄ tanh h̄ = (1 − tanh² h̄)². Finally, there exists
C_min > 0, dependent only on θ and ∆, such that σ_min(Q*_{SS}) ≥ C_min with probability converging to
1 as p → ∞.
The proofs of Lemmas 3.1 and 3.3 are sketched in the next subsection. Lemma 3.2 is more straightforward and we omit its proof for space reasons.
Proof. (Theorem 1.6) Fix ∆ > 3, θ > K/∆ (where K is a large enough constant independent of
∆), and ε, C_min > 0, both small enough. By Lemma 3.3, for any p large enough we can choose
a ∆-regular graph G_p = (V = [p], E_p) and a vertex r ∈ V such that |[Q*_{S^cS} (Q*_{SS})^{−1} 1_S]_i| > 1 + ε for
some i ∈ V \ r.

By Theorem 1 in [4] we can assume, without loss of generality, n > K′ ∆ log p for some small
constant K′. Further, by Lemma 3.2, nλ² ≥ F(p) for some F(p) ↑ ∞ as p → ∞, and the condition
of Lemma 3.1 on λ is satisfied, since by the 'reasonable' assumption λ → 0 with n. Using these
results in Eq. (10) of Lemma 3.1 we get the following upper bound on the success probability

    P_succ(G_p) ≤ 4∆² p^{−δ_A² K′} + 2∆ e^{−F(p) δ_B²}.        (12)

In particular P_succ(G_p) → 0 as p → ∞.

3.2 Proofs of auxiliary lemmas
Proof. (Lemma 3.1) We will show that, under the assumptions of the lemma, if θ̂ = (θ̂_S, θ̂_{S^C}) =
(θ̂_S, 0), then the probability that the i-th component of any subgradient of L(θ; {x^(ℓ)}) + λ||θ||_1 vanishes
for any θ̂_S > 0 (component-wise) is upper bounded as in Eq. (10). To simplify notation we will omit
{x^(ℓ)} in all the expressions derived from L.

Let ẑ be a subgradient of ||θ||_1 at θ̂ and assume ∇L(θ̂) + λẑ = 0. An application of the mean value
theorem yields

    ∇²L(θ*)[θ̂ − θ*] = W^n − λẑ + R^n,        (13)

where W^n = −∇L(θ*) and [R^n]_j = [∇²L(θ̄^(j)) − ∇²L(θ*)]_j^T (θ̂ − θ*), with θ̄^(j) a point on the line
from θ̂ to θ*. Notice that by definition ∇²L(θ*) = Q^{n*} = Q^n(θ*). To simplify notation we will
omit the * in all Q^{n*}. All Q^n in this proof are thus evaluated at θ*.

Breaking this expression into its S and S^c components, and since θ̂_{S^C} = θ*_{S^C} = 0, we can eliminate
θ̂_S − θ*_S from the two expressions obtained and write

    [W^n_{S^C} − R^n_{S^C}] − Q^n_{S^CS} (Q^n_{SS})^{−1} [W^n_S − R^n_S] + λ Q^n_{S^CS} (Q^n_{SS})^{−1} ẑ_S = −λ ẑ_{S^C}.

Now notice that Q^n_{S^CS} (Q^n_{SS})^{−1} = T_1 + T_2 + T_3 + T_4, where

    T_1 = Q*_{S^CS} [(Q^n_{SS})^{−1} − (Q*_{SS})^{−1}],    T_2 = [Q^n_{S^CS} − Q*_{S^CS}] (Q*_{SS})^{−1},
    T_3 = [Q^n_{S^CS} − Q*_{S^CS}] [(Q^n_{SS})^{−1} − (Q*_{SS})^{−1}],    T_4 = Q*_{S^CS} (Q*_{SS})^{−1}.        (14)
We will assume that the samples {x^(ℓ)} are such that the following event holds:

    E ≡ { ||Q^n_{SS} − Q*_{SS}||_∞ < ξ_A, ||Q^n_{S^CS} − Q*_{S^CS}||_∞ < ξ_B, ||W^n_S/λ||_∞ < ξ_C },        (15)

where ξ_A ≡ C_min ε/(16∆), ξ_B ≡ C_min² ε/(8√∆) and ξ_C ≡ C_min ε/(8∆). Since E_{G,θ}(Q^n) = Q*
and E_{G,θ}(W^n) = 0, and noticing that both Q^n and W^n are sums of bounded i.i.d. random variables,
a simple application of the Azuma–Hoeffding inequality upper bounds the probability of the complement of E as in (10).

From E it follows that σ_min(Q^n_{SS}) > σ_min(Q*_{SS}) − C_min/2 > C_min/2. We can therefore lower
bound the absolute value of the i-th component of ẑ_{S^C} by

    |[Q*_{S^CS} (Q*_{SS})^{−1} 1_S]_i| − ||T_{1,i}||_∞ − ||T_{2,i}||_∞ − ||T_{3,i}||_∞ − |W^n_i/λ| − |R^n_i/λ|
        − (1/C_min)( ||W^n_S/λ||_∞ + ||R^n_S/λ||_∞ ),

where the subscript i denotes the i-th row of a matrix.

The proof is completed by showing that the event E and the assumptions of the theorem imply that
each of the last 7 terms in this expression is smaller than ε/8. Since |[Q*_{S^CS} (Q*_{SS})^{−1} ẑ*_S]_i| ≥ 1 + ε by
assumption, this implies |ẑ_i| ≥ 1 + ε/8 > 1, which cannot be, since any subgradient of the 1-norm
has components of magnitude at most 1.
The last condition on E immediately bounds all terms involving W by ε/8. Some straightforward
manipulations imply (see Lemma 7 from [7])

    ||T_{1,i}||_∞ ≤ (∆/C_min²) ||Q^n_{SS} − Q*_{SS}||_∞,    ||T_{2,i}||_∞ ≤ (√∆/C_min) ||[Q^n_{S^CS} − Q*_{S^CS}]_i||_∞,
    ||T_{3,i}||_∞ ≤ (2∆/C_min²) ||Q^n_{SS} − Q*_{SS}||_∞ ||[Q^n_{S^CS} − Q*_{S^CS}]_i||_∞,

and thus all will be bounded by ε/8 when E holds. The upper bound on R^n follows along similar
lines via a mean value theorem, and is deferred to a longer version of this paper.
Proof. (Lemma 3.3.) Let us state explicitly the local weak convergence result mentioned in Sec. 3.1.
For t ∈ N, let T(t) = (V_T, E_T) be the regular rooted tree of t generations and define the associated
Ising measure as

    µ⁺_{T,θ}(x) = (1/Z_{T,θ}) Π_{(i,j)∈E_T} e^{θ x_i x_j} Π_{i∈∂T(t)} e^{h* x_i}.        (16)

Here ∂T(t) is the set of leaves of T(t) and h* is the unique positive solution of h = (∆ −
1) atanh{tanh θ tanh h}. It can be proved using [17] and uniform continuity with respect to the
'external field' that non-trivial local expectations with respect to µ_{G,θ}(x) converge to local expectations with respect to µ⁺_{T,θ}(x), as p → ∞.
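Numerically, h* can be obtained by iterating the recursion from a large initial field; the iteration converges monotonically to the positive solution when one exists (and collapses to 0 otherwise). The starting value and iteration count below are arbitrary choices of ours.

    import math

    def cavity_field(theta, Delta, iters=10000):
        """Iterate h <- (Delta - 1) * atanh(tanh(theta) * tanh(h)) to find h*."""
        h = 10.0                     # start above the fixed point; convergence is slow near threshold
        for _ in range(iters):
            h = (Delta - 1) * math.atanh(math.tanh(theta) * math.tanh(h))
        return h                     # ~0.0 when no positive solution exists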
More precisely, let B_r(t) denote a ball of radius t around node r ∈ G (the node whose neighborhood
we are trying to reconstruct). For any fixed t, the probability that B_r(t) is not isomorphic to T(t)
goes to 0 as p → ∞. Let g(x_{B_r(t)}) be any function of the variables in B_r(t) such that g(x_{B_r(t)}) =
g(−x_{B_r(t)}). Then almost surely over graph sequences G_p of uniformly random regular graphs with
p nodes (expectations here are taken with respect to the measures (1) and (16))

    lim_{p→∞} E_{G,θ}{g(X_{B_r(t)})} = E_{T(t),θ,+}{g(X_{T(t)})}.        (17)

The proof consists in considering [Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S]_i for t = dist(r, i) finite. We then write
(Q*_{SS})_{lk} = E{g_{l,k}(X_{B_r(t)})} and (Q*_{S^cS})_{il} = E{g_{i,l}(X_{B_r(t)})} for some functions g_{·,·}(X_{B_r(t)}) and
apply the weak convergence result (17) to these expectations. We thus reduce the calculation of
[Q*_{S^cS} (Q*_{SS})^{−1} ẑ*_S]_i to the calculation of expectations with respect to the tree measure (16). The latter
can be implemented explicitly through a recursive procedure, with simplifications arising thanks to
the tree symmetry and by taking t ≫ 1. The actual calculations consist in a (very) long exercise in
calculus and we omit them from this outline.

The lower bound on σ_min(Q*_{SS}) is proved by a similar calculation.
Acknowledgments
This work was partially supported by a Terman fellowship, the NSF CAREER award CCF-0743978
and the NSF grant DMS-0806211 and by a Portuguese Doctoral FCT fellowship.
References
[1] P. Abbeel, D. Koller and A. Ng, "Learning factor graphs in polynomial time and sample complexity". Journal of Machine Learning Research, 2006, Vol. 7, 1743–1788.
[2] M. Wainwright, "Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting", arXiv:math/0702301v2 [math.ST], 2007.
[3] N. Santhanam, M. Wainwright, "Information-theoretic limits of selecting binary graphical models in high dimensions", arXiv:0905.2639v1 [cs.IT], 2009.
[4] G. Bresler, E. Mossel and A. Sly, "Reconstruction of Markov Random Fields from Samples: Some Observations and Algorithms", Proceedings of the 11th international workshop APPROX 2008 and 12th international workshop RANDOM 2008, 2008, 343–356.
[5] I. Csiszár and Z. Talata, "Consistent estimation of the basic neighborhood structure of Markov random fields", The Annals of Statistics, 2006, Vol. 34, No. 1, 123–145.
[6] N. Friedman, I. Nachman, and D. Peer, "Learning Bayesian network structure from massive datasets: The sparse candidate algorithm". In UAI, 1999.
[7] P. Ravikumar, M. Wainwright and J. Lafferty, "High-Dimensional Ising Model Selection Using l1-Regularized Logistic Regression", arXiv:0804.4202v1 [math.ST], 2008.
[8] M. Wainwright, P. Ravikumar, and J. Lafferty, "Inferring graphical model structure using l1-regularized pseudolikelihood", In NIPS, 2006.
[9] H. Höfling and R. Tibshirani, "Estimation of Sparse Binary Pairwise Markov Networks using Pseudo-likelihoods", Journal of Machine Learning Research, 2009, Vol. 10, 883–906.
[10] O. Banerjee, L. El Ghaoui and A. d'Aspremont, "Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data", Journal of Machine Learning Research, March 2008, Vol. 9, 485–516.
[11] M. Yuan and Y. Lin, "Model Selection and Estimation in Regression with Grouped Variables", J. Royal Statist. Soc. B, 2006, Vol. 68, 49–67.
[12] N. Meinshausen and P. Bühlmann, "High dimensional graphs and variable selection with the lasso", Annals of Statistics, 2006, Vol. 34, No. 3.
[13] R. Tibshirani, "Regression shrinkage and selection via the lasso", Journal of the Royal Statistical Society, Series B, 1994, Vol. 58, 267–288.
[14] P. Zhao and B. Yu, "On model selection consistency of Lasso", Journal of Machine Learning Research, 2006, Vol. 7, 2541–2563.
[15] D. Zobin, "Critical behavior of the bond-dilute two-dimensional Ising model", Phys. Rev., 1978, Vol. 18, 2387–2390.
[16] M. Fisher, "Critical Temperatures of Anisotropic Ising Lattices. II. General Upper Bounds", Phys. Rev., Oct. 1967, Vol. 162, 480–485.
[17] A. Dembo and A. Montanari, "Ising Models on Locally Tree-Like Graphs", Ann. Appl. Prob. (2008), to appear, arXiv:0804.4726v2 [math.PR]
Networks
Patrice Y. Simard*
Computer Science Dept.
University of Rochester
Rochester, NY 14627
Jean Pierre Raysz
LIUC
Universite de Caen
14032 Caen Cedex
France
Bernard Victorri
ELSAP
Universite de Caen
14032 Caen Cedex
France
Abstract
Fully recurrent (asymmetrical) networks can be thought of as dynamic
systems. The dynamics can be shaped to perform content addressable
memories, recognize sequences, or generate trajectories. Unfortunately
several problems can arise: First, the convergence in the state space is
not guaranteed. Second, the learned fixed points or trajectories are not
necessarily stable. Finally, there might exist spurious fixed points and/or
spurious "attracting" trajectories that do not correspond to any patterns.
In this paper, we introduce a new energy function that presents solutions
to all of these problems. We present an efficient gradient descent algorithm
which directly acts on the stability of the fixed points and trajectories and
on the size and shape of the corresponding basin and valley of attraction.
The results are illustrated by the simulation of a small content addressable
memory.
1 INTRODUCTION
Recurrent neural networks have the capability of storing information in the state
of their units. The temporal evolution of these states constitutes the dynamics of
the system and depends on the weights and the input of the network. In the case
of symmetric connections, the dynamics have been shown to be convergent [2] and
various procedures are known for finding the weights to compute different tasks.
In unconstrained neural networks however, little is known about how to train the
weights of the network when the convergence of the dynamics is not guaranteed.
In his review paper [1], Hirsch defines the conditions which must be satisfied for
some given dynamics to converge, but does not provide mechanisms for finding the
weights to implement these dynamics.

*Now with AT&T Bell Laboratories, Crawfords Corner Road, Holmdel, NJ 07733
In this paper, a new energy function is introduced which reflects the convergence
and the stability of the dynamics of a network. A gradient descent procedure on
the weights provides an algorithm to control interesting properties of the dynamics
including contraction over a subspace, stability, and convergence.
2 AN ENERGY FUNCTION TO ENFORCE STABILITY

This section introduces a new energy function which can be used in combination with the backpropagation algorithm for recurrent networks (cf. [6;
5]). The continuous propagation rule is given by the equation:

    T_i dx_i/dt = −x_i + g_i( Σ_j w_ij x_j + I_i )        (1)

where x_i is the activation of unit i, g_i is a differentiable function, w_ij is the weight
from unit j to unit i, and T_i and I_i are respectively the time constant and the input
for unit i. A possible discretization of this equation is

    x̃_i^{t+1} = G_i(x̃^t),        (2)

    G_i(x̃^t) = (1 − Δt/T_i) x̃_i^t + (Δt/T_i) g_i( Σ_j w_ij x̃_j^t + I_i ),        (3)

where x̃_i^t is the activation of unit i at the discrete time step t. Henceforth, only
the discrete version of the propagation equation will be considered, and the tilde in
x̃ will be omitted to avoid heavy notation.
2.1 MAKING THE MAPPING CONTRACTING OR EXPANDING IN A GIVEN DIRECTION

Using the Taylor expansion, G(x^t + dx^t) can be written as

    G(x^t + dx^t) = G(x^t) + G′(x^t) · dx^t + o(||dx^t||)        (4)

where G′(x^t) is the linear application derived from G(x^t) and the term o(||dx^t||)
tends toward 0 faster than ||dx^t||. The mapping G is contracting in the direction of
the unitary vector D if

    ||G(x^t + εD) − G(x^t)|| < ||εD||
    ε ||G′(x^t) · D|| < ε
    ||G′(x^t) · D|| < 1        (5)

where ε is a small positive constant.

Accordingly, the following energy function is considered

    E_s(X, D) = ½ ( ||G′(X) · D||² − K_X )²        (6)
where K_X is the target contracting rate at X in the direction D. Depending on
whether we choose K_X larger or smaller than 1, minimizing E_s(X, D) will make
the mapping at X contracting or expanding in the direction D. Note that D can
be a complex vector.

The variation of E_s(X, D) with respect to w_mn is equal to:

    ∂E_s(X, D)/∂w_mn = 2( ||G′(X)D||² − K_X ) Σ_i ( Σ_j ∂G_i(X)/∂x_j D_j ) ( Σ_j ∂²G_i(X)/(∂w_mn ∂x_j) D_j )        (7)
Assuming the activation function is of the form (2)–(3), the gradient operator yields:

    ∂G_i(X)/∂x_j = δ_ij (1 − Δt/T_i) + (Δt/T_i) g′(u_i) w_ij        (8)

where u_i = Σ_k w_ik x_k. To compute ∂²G_i(X)/(∂w_mn ∂x_j), the following expression needs to be
evaluated:

    ∂²G_i(X)/(∂w_mn ∂x_j) = (Δt/T_i) [ g″(u_i) ( δ_im x_n + Σ_k w_ik ∂x_k/∂w_mn ) w_ij + δ_im δ_jn g′(u_i) ]        (9)

which in turn requires the evaluation of ∂x_k/∂w_mn. If we assume that for output units,
x_k = X_k and ∂x_k/∂w_mn = 0, we will improve the stability of the fixed point when
the visible units are clamped to the input. What we want, however, is to increase
stability for the network when the input units are unclamped (or hidden). This
means that for every unit (including output units), we have to evaluate ∂x_k/∂w_mn.
Since we are at the (unstable) fixed point, we have:

    x_i = g_i( Σ_j w_ij x_j + I_i ).        (10)

If we derive this equation with respect to w_mn we get:

    ∂x_i/∂w_mn = g′( Σ_j w_ij x_j ) ( δ_mi x_n + Σ_j w_ij ∂x_j/∂w_mn ).        (11)

In matrix form:

    e = g′(y + W e)        (12)

where e_i = ∂x_i/∂w_mn, g′ is a diagonal (square) matrix such that g′_ii = g′( Σ_j w_ij x_j )
and g′_ij = 0 for i ≠ j (note that g′W ≠ W g′), y is a vector such that y_i = δ_mi x_n, and
W is the weight matrix. If we solve this we get:

    e = (Id − g′W)^{−1} g′ y.        (13)

That is:

    ∂x_i/∂w_mn = [ L^{−1} g′ y ]_i ,        (14)

where the matrix L is given by:

    L_ij = δ_ij − g′( Σ_k w_ik x_k ) w_ij .        (15)

Here x_k is the activation of unit k at the fixed point, so it is the clamped value for a
visible unit and the settled value x_k^f for a hidden unit (the system converges to a stable fixed
point when the visible units are clamped).
To obtain the target rate of contraction K_X at X in the direction D, the weights
are updated iteratively according to the delta rule:

    Δw_ij = −η ∂E_s(X, D)/∂w_ij        (16)
This updating rule has the advantages and disadvantages of gradient descent algorithms.
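As a slow but self-contained rendering of rule (16), one can check the update with numerical differentiation instead of the analytic expressions (7)–(9). In the sketch below the state X is held fixed while the weights are perturbed, whereas Section 2.1 also tracks ∂x_k/∂w_mn through the fixed point, so this is a simplified check of our own rather than the full algorithm.

    import numpy as np

    def energy(G, x, D, K, h=1e-5):
        """E_s(X, D) = 0.5 (||G'(X) D||^2 - K)^2, with G'(X) D by a directional difference."""
        jvp = (G(x + h * D) - G(x)) / h
        return 0.5 * (jvp @ jvp - K) ** 2

    def delta_rule_step(make_G, W, x, D, K, eta, h=1e-5):
        """One update w <- w - eta dE_s/dw; make_G(W) returns the map G for weights W."""
        grad = np.zeros_like(W)
        for m in range(W.shape[0]):
            for n in range(W.shape[1]):
                Wp, Wm = W.copy(), W.copy()
                Wp[m, n] += h
                Wm[m, n] -= h
                grad[m, n] = (energy(make_G(Wp), x, D, K)
                              - energy(make_G(Wm), x, D, K)) / (2 * h)
        return W - eta * grad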
2.2 COMPLEXITY

The algorithm given above can be implemented in O(N²) storage and O(N³) steps,
where N is the number of units. This complexity however can be improved by
avoiding inverting the matrix L, using a local algorithm such as the one presented in
[7]. Another implementation of this energy function can be achieved using Lagrange
multipliers. This method exactly evaluates ∂x_k/∂w_mn by using a backward pass [9]. Its
complexity depends on how many steps the network is unfolded in time.
2.3 GLOBAL STABILITY

Global convergence can be obtained if D is parallel to the eigenvector corresponding
to the largest eigenvalue of G′(X). Indeed, in that case ||G′(X) · D|| is the modulus of the largest
eigenvalue of G′(X). If X is a fixed point, the Ostrowski theorem [4; 3] guarantees
X is stable if and only if the maximum eigenvalue of the Jacobian of G is less than
1 in modulus.

Fortunately, the eigenvector corresponding to the largest eigenvalue can easily be
computed using an efficient iterative method [8]. By choosing D in that direction,
fixed points can be made stable.
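A sketch of that computation for the discrete map (2)–(3): form the Jacobian of Eq. (8) and run power iteration. We assume tanh units (g′(u) = 1 − tanh²(u)), a uniform a = Δt/T, and a real dominant eigenvalue; as noted in Section 2.1, D may in general be complex.

    import numpy as np

    def dominant_eig(W, x, I, a, iters=200, seed=0):
        """Power iteration on J_ij = (1-a) delta_ij + a g'(u_i) w_ij (Eq. 8)."""
        u = W @ x + I
        J = (1 - a) * np.eye(len(x)) + a * (1 - np.tanh(u) ** 2)[:, None] * W
        v = np.random.default_rng(seed).normal(size=len(x))
        v /= np.linalg.norm(v)
        for _ in range(iters):
            v = J @ v
            v /= np.linalg.norm(v)
        return v @ J @ v, v                    # Rayleigh estimate of the eigenvalue, and D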
3 RESULTS

To simplify the following discussion, V is defined to be the unitary eigenvector
corresponding to the largest eigenvalue of the Jacobian of G.

The energy function E_s can be used in at least three ways. First, it can be used to
accelerate the convergence toward an internal state upon presentation of a specific
input p. This is done by increasing the rate of contraction in the direction of V. The
chosen value for K_X is therefore small with respect to 1. The resulting network
will settle faster and therefore compute its output sooner. Second, E_s can be used
to neutralize spurious fixed points by making them unstable. If the mapping G
is expanding in the direction of V, the fixed point will be unstable, and will never
be reached by the system. The corresponding target value K_X should be larger
than 1. Third, and most importantly, it can be used to force stability of the fixed
points when doing associative memory. Recurrent backpropagation (RBP) [7] can
be used to make the patterns fixed points, but there is no guarantee that these will
be stable. By making G contract along the direction V, this problem can be solved.
Furthermore, one can hope that by making the eigenvalue close to 1, smoothness in
the derivatives will make the basins of attraction larger. This can be used to absorb
and suppress spurious neighboring stable fixed points.
The following experiment illustrates how the unstable fixed points learned with RBP
can be made more stable using the energy function E_s. Consider a network of eight
fully connected units with two visible input/output units. The network is subject
to the dynamics specified by equation (2). Three patterns are presented on the
two visible units. They correspond to the coordinates of the three points (0.3, 0.7),
(0.8, 0.4) and (0.2, 0.1), which were chosen randomly. The learning phase for each
pattern consists of 1) clamping the visible units while propagating for five iterations
(to let the hidden units settle), 2) evaluating the difference between the activation
resulting from the incoming connections of the visible units and the value of the
presented pattern (this is the error), 3) backpropagating the corresponding error
signals and 4) updating the weights. This procedure can be used to make a pattern
a fixed point of the system [6]. Unfortunately, there is no guarantee that these
fixed points will be stable. Indeed, after learning with RBP only, the maximum
eigenvalue of the Jacobian of G for each fixed point is shown in Table 1 (column
"EV, no E_s"). As can be seen, the maximum eigenvalue of two of the three patterns
is larger than one.
              unit 0   unit 1   EV, no E_s   EV, using E_s
  pattern 0    0.30     0.70      1.063          0.966
  pattern 1    0.80     0.40      1.172          0.999
  pattern 2    0.20     0.10      0.783          0.710

Table 1: Patterns and corresponding norms of maximum eigenvalues (EV) of the
free system, with and without the stability constraint.
For a better understanding of what this means, the network can be viewed as a
dynamic system of 8 units. A projection of the dynamics of the system on the
visible units can be obtained by clamping these units while propagating for five
iterations, and computing the activation resulting from the incoming connection.
The difference between the latter value and the pattern value is a displacement (or
speed) indicating in which direction in the state space the activations are going.
The corresponding vector field is plotted on the top figure 1. It can easily be seen
that as predicted by the eigenvalues, patterns 0 and 1 are unstable (pattern 1 is
at a saddle point) and pattern 2 is stable. Furthermore there are two additional
spurious fixed points around (0.83,0.87) and (0.78,0.21).
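That projection can be written compactly as below, assuming a helper step() that applies one update of Eq. (2) to the full state, with the two visible units stored first; both conventions are ours.

    import numpy as np

    def displacement(step, pattern, hidden, n_settle=5):
        """Displacement ('speed') of the visible units at one point of the state space."""
        x = np.concatenate([pattern, hidden])
        for _ in range(n_settle):              # let the hidden units settle
            x = step(x)
            x[:2] = pattern                    # the visible units stay clamped
        return step(x)[:2] - pattern           # incoming activation minus the pattern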
The energy function E_s can be combined with RBP using the following procedure:
1) propagate a few epochs until the error is below a certain threshold (10⁻⁵), 2) for
each pattern, estimate the largest eigenvalue λ and the corresponding eigenvector
V, and 3) update the weights using E_s until |λ| < K in direction V. Steps 1 to 3
Figure 1: Vector fields representing the dynamics of the state space after learning the patterns (0.3, 0.7), (0.8, 0.4) and (0.2, 0.1). The field on the top represents
the dynamics of the network after training with the standard backpropagation algorithm. The field on the bottom represents the dynamics of the network after
training with the standard backpropagation algorithm combined with E_s.
are repeated until no more progress is made. The largest eigenvalues after learning
are shown in table 1 in the last column. As can be noticed, all the eigenvalues
are less than one and therefore the mapping G is contracting in all directions. The
dynamics of the network is plotted at the bottom of figure 1. As can clearly be seen,
all the patterns are now attractors. Furthermore the two spurious fixed points have
disappeared in the large basin of attraction of pattern 1. This is a net improvement
over RBP used alone, since the network can now be used as a content addressable
memory.
4 DISCUSSION
In this paper we have introduced mechanisms to control global aspects such as
stability, attractor size, or contraction speed, of the dynamics of a recurrent network.
The power of the algorithm is illustrated by implementing a content addressable
memory with an asymmetric neural network. After learning, the stable fixed points
of the system coincide with the target patterns. All spurious fixed points have been
eliminated by spreading the basins of attraction of the target patterns.
The main limitation of the algorithm resides in using a gradient descent to update
the weights. Parameters such as the learning rate have to be carefully chosen, for
optimal performance. Furthermore, there is always a possibility that the evolution
of the weights might be trapped in a local minimum.
The complexity of the algorithm can be further improved. In equation (10), for instance, it is assumed that we are at a fixed point. This assumption is not true
unless the RBP error is really small. This requires that the RBP and the E_s algorithms are run alternately. A faster and more robust method consists in using
backpropagation in time to compute ∂x_k/∂w_mn, and is presently under study.
Finally, the algorithm can be generalized to control the dynamics around target
trajectories, such as in [5]. The dynamics is projected onto the hyperplane orthogonal to the state space trajectory and constraints can be applied on the projected
dynamics.
Acknowledgements
This material is based upon work supported by the National Science Foundation
under Grant number IRI-8903582.
We thank the S.H.S department of C.N.R.S (France), Neuristic Inc. for allowing
the use of its neural net simulator SN2, and Corinna Cortes for helpful comments
and support.
References
[1] Morris W. Hirsch. Convergent activation dynamics in continuous time networks.
Neural Networks, 2:331-349, 1989.
[2] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences,
79:2554-2558, April 1982.
[3] J. Ortega and M. Rockoff. Nonlinear difference equations and gauss-seidel type
iterative methods. SIAM J. Numer. Anal., 3:497-513, 1966.
[4] A Ostrowski. Solutions of Equations and Systems of Equations. Academic Press,
New York, 1960.
[5] Barak Pearlmutter. Learning state space trajectories in recurrent neural networks. Neural Computation, 1(2):263-269, 1989.
[6] Fernando J. Pineda. Dynamics and architecture in neural computation. Journal
of Complexity, 4:216-245, 1988.
[7] Fernando J. Pineda. Generalization of backpropagation to recurrent and higher
order networks. In IEEE Conference on Neural Information Processing Systems,
pages 602-611. American Institute of Physics, 1987.
[8] Anthony Ralston and Philip Rabinowitz. A First Course in Numerical Analysis.
McGraw-Hill, New York, 1978.
[9] Patrice Y. Simard. Learning State Space Dynamics in Recurrent Networks. PhD
thesis, University of Rochester, 1991.
Joseph Schlecht
Department of Computer Science
University of Arizona
Kobus Barnard
Department of Computer Science
University of Arizona
[email protected]
[email protected]
Abstract
We present an approach for learning stochastic geometric models of object categories from single view images. We focus here on models expressible as a
spatially contiguous assemblage of blocks. Model topologies are learned across
groups of images, and one or more such topologies is linked to an object category (e.g. chairs). Fitting learned topologies to an image can be used to identify
the object class, as well as detail its geometry. The latter goes beyond labeling
objects, as it provides the geometric structure of particular instances. We learn
the models using joint statistical inference over category parameters, camera parameters, and instance parameters. These produce an image likelihood through a
statistical imaging model. We use trans-dimensional sampling to explore topology
hypotheses, and alternate between Metropolis-Hastings and stochastic dynamics
to explore instance parameters. Experiments on images of furniture objects such
as tables and chairs suggest that this is an effective approach for learning models
that encode simple representations of category geometry and the statistics thereof,
and support inferring both category and geometry on held out single view images.
1
Introduction
In this paper we develop an approach to learn stochastic 3D geometric models of object categories
from single view images. Exploiting such models for object recognition systems enables going
beyond simple labeling. In particular, fitting such models opens up opportunities to reason about
function or utility, how the particular object integrates into the scene (i.e., perhaps it is an obstacle), how the form of the particular instance is related to others in its category (i.e., perhaps it is a
particularly tall and narrow one), and how categories themselves are related.
Capturing the wide variation in both topology and geometry within object categories, and finding
good estimates for the underlying statistics, suggests a large scale learning approach. We propose
exploiting the growing number of labeled single-view images to learn such models. While our
approach is trivially extendable to exploit multiple views of the same object, large quantities of such
data are rare. Further, the key issue is to learn about the variation of the category. Put differently,
if we are limited to 100 images, we would prefer to have 100 images of different examples, rather
than, say, 10 views of 10 examples.
Representing, learning, and using object statistical geometric properties is potentially simpler in the
context of 3D models. In contrast, statistical models that encode image-based appearance characteristics and/or part configuration statistics must deal with confounds due to the imaging process. For
example, right angles in 3D can have a wide variety of angles in the image plane, leading to using
the same representations for both structure variation and pose variation. This means that the represented geometry is less specific and less informative. By contrast, encoding the structure variation
in 3D models is simpler and more informative because they are linked to the object alone.
To deal with the effect of an unknown camera, we estimate the camera parameters simultaneously
while fitting the model hypothesis. A 3D model hypothesis is a relatively strong hint as to what
1
the camera might be. Further, we make the observation that the variations due to standard camera
projection are quite unlike typical category variation. Hence, in the context of a given object model
hypothesis, the fact that the camera is not known is not a significant impediment, and much can be
estimated about the camera under that hypothesis.
We develop our approach with object models that are expressible as a spatially contiguous assemblage of blocks. We include in the model a constraint on right angles between blocks. We further
simplify matters by considering images where there are minimal distracting features in the background. We experiment with images from five categories of furniture objects. Within this domain,
we are able to automatically learn topologies. The models can then be used to identify the object
category using statistical inference. Recognition of objects in clutter is likely effective with this approach, but we have yet to integrate support for occlusion of object parts into our inference process.
We learn the parameters of each category model using Bayesian inference over multiple image
examples for the category. Thus we have a number of parameters specifying the category topology
that apply to all images of objects from the category. Further, as a side effect, the inference process
finds instance parameters that apply specifically to each object. For example, all tables have legs and
a top, but the proportions of the parts differ among the instances. In addition, the camera parameters
for each image are determined, as these are simultaneously fit with the object models. The object
and camera hypotheses are combined with an imaging model to provide the image likelihood that
drives the inference process.
For learning we need to find parameters that give a high likelihood of the data from multiple examples. Because we are searching for model topologies, we need to search among models with
varying dimension. For this we use the trans-dimensional sampling framework [7, 8]. We explore
the posterior space within a given probability space of a particular dimension by combining standard
Metropolis-Hastings [1, 14], with stochastic dynamics [18]. As developed further below, these two
methods have complementary strengths for our problem. Importantly, we arrange the sampling so
that the hybrid of samplers are guaranteed to converge to the posterior distribution. This ensures that
the space will be completely explored, given enough time.
Related work. Most work on learning representations for object categories has focused on image-based appearance characteristics and/or part configuration statistics (e.g., [4, 5, 6, 12, 13, 24]).
These approaches typically rely on effective descriptors that are somewhat resilient to pose
change (e.g., [16]). A second force favoring learning 2D representations is the explosion of readily available images compared with that for 3D structure, and thus treating category learning as
statistical pattern recognition is more convenient in the data domain (2D images). However, some
researchers have started imposing more projective geometry into the spatial models. For example,
Savarese and Fei-Fei [19, 20] build a model where arranged parts are linked by a fundamental matrix. Their training process is helped by multiple examples of the same objects, but notably they
are able to use training data with clutter. Their approach is different from ours in that models are
built more bottom-up, and this process is somewhat reliant on the presence of surface textures. A
different strategy proposed by Hoiem et al. [9] is to fit a deformable 3D blob to cars, driven largely
by appearance cues mapped onto the model. Our work also relates to recent efforts in learning abstract topologies [11, 26] and structure models for 2D images of objects constrained by grammar
representations [29, 30]. Also relevant is a large body of older work on representing objects with
3D parts [2, 3, 28] and detecting objects in images given a precise 3D model [10, 15, 25], such
as one for machined parts in an industrial setting. Finally, we have also been inspired by work
on fitting deformable models of known topology to 2D images in the case of human pose estimation (e.g., [17, 22, 23]).
2 Modeling object category structure
We use a generative model for image features corresponding to examples from object categories
(Fig. 1). A category is associated with a sampling from category level parameters which are the
number of parts, n, their interconnections (topology), t, the structure statistics r_s, and the camera
statistics, r_c. Associating camera distributional parameters with a category allows us to exploit
regularity in how different objects are photographed during learning. We support clusters within
categories to model multiple structural possibilities (e.g., chairs with and without arm rests). The
cluster variable, z, selects a category topology and structure distributional parameters for attachment
locations and part sizes. We denote the specific values for a particular example by s.

Figure 1: Graphical model for the generative approach to images of objects from categories described by stochastic geometric models. The category level parameters are the number of parts, n, their interconnections (topology), t, the structure statistics r_s, and the camera statistics r_c. Hyperparameters for category level parameters are omitted for clarity. A sample of category level parameters provides a statistical model for a given category, which is then sampled for the camera and object structure values c_d and s_d, optionally selected from a cluster within the category by z_d. c_d and s_d yield a distribution over image features x_d.

Similarly, we
denote the camera capturing it by c. The projected model image then generates image features,
x, for which we use edge points and surface pixels. In summary, the parameters for an image are
θ^(n) = (c, s, t, r_c, r_s, n).
Given a set of D images containing examples of an object category, our goal is to learn the model
θ^(n) generating them from detected feature sets X = x_1, ..., x_D. In addition to the category-level
parameters shared across instances, which are of most interest, θ^(n) comprises camera models
C = c_1, ..., c_D and structure part parameters S = s_1, ..., s_D, assuming a hard cluster assignment.
In other words, the camera and the geometry of the training examples are fit collaterally.
We separate the joint density into a likelihood and prior
$$p\left(X, \theta^{(n)}\right) \;=\; p^{(n)}(X, C, S \mid t, r_c, r_s)\; p^{(n)}(t, r_c, r_s, n), \qquad (1)$$
where we use the notation p^(n)(·) for a density function corresponding to n parts. Conditioned on
the category parameters, we assume that the D sets of image features and instance parameters are
independent, giving
$$p^{(n)}(X, C, S \mid t, r_c, r_s) \;=\; \prod_{d=1}^{D} p^{(n)}(x_d, c_d, s_d \mid t, r_c, r_s). \qquad (2)$$
The feature data and structure parameters are generated by a sub-category cluster with weights and
distributions defined by r_s = (π, μ_s, Σ_s). As previously mentioned, the camera is shared across
clusters, and drawn from a distribution defined by r_c = (μ_c, Σ_c). We formalize the likelihood of
an object, camera, and image features under M clusters as
$$p^{(n)}(x_d, c_d, s_d \mid t, r_c, r_s) \;=\; \sum_{m=1}^{M} \pi_m\, \underbrace{p^{(n_m)}(x_d \mid c_d, s_{md})}_{\text{Image}}\; \underbrace{p(c_d \mid \mu_c, \Sigma_c)}_{\text{Camera}}\; \underbrace{p^{(n_m)}(s_{md} \mid t_m, \mu_{s_m}, \Sigma_{s_m})}_{\text{Object}}. \qquad (3)$$
We arrive at equation (3) by introducing a binary assignment vector z for each image feature set,
such that z_m = 1 if the mth cluster generated it and 0 otherwise. The cluster weights are then given
by π_m = p(z_m = 1).
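To make the factorization concrete, here is a sketch of how the per-image likelihood (3) could be evaluated in log space; the cluster tuple layout and the `image_ll`/`object_ll` callables are our own placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood_eq3(x_d, c_d, s_d, clusters):
    """Evaluate log p(x_d, c_d, s_d | t, r_c, r_s) as in equation (3).
    Each cluster is (pi_m, image_ll, mu_c, Sigma_c, object_ll), where
    image_ll(x_d, c_d, s_d) and object_ll(s_d) return log densities."""
    log_terms = []
    for pi_m, image_ll, mu_c, Sigma_c, object_ll in clusters:
        log_terms.append(np.log(pi_m)
                         + image_ll(x_d, c_d, s_d)                         # Image
                         + multivariate_normal.logpdf(c_d, mu_c, Sigma_c)  # Camera
                         + object_ll(s_d))                                 # Object
    m = max(log_terms)                      # log-sum-exp over the M clusters
    return m + np.log(sum(np.exp(t - m) for t in log_terms))
```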
For the prior probability distribution, we assume category parameter independence, with the clustered topologies conditionally independent given the number of parts. The prior in (1) becomes
$$p^{(n)}(t, r_c, r_s, n) \;=\; p(r_c) \prod_{m=1}^{M} p^{(n_m)}(t_m \mid n_m)\; p^{(n_m)}(r_{s_m})\; p(n_m). \qquad (4)$$
For category parameters in the camera and structure models, rc and rs , we use Gaussian statistics
with weak Gamma priors that are empirically chosen. We set the number of parts in the object subcategories, n, to be geometrically distributed. We set the prior over edges in the topology given n to
be uniform.
2.1 Object model
We model object structure as a set of connected three-dimensional block constructs representing
object parts. We account for symmetric structure in an object category, e.g., the legs of a table or
chair, by introducing compound block constructs. We define two constructs for symmetrically aligned
pairs (2) or quartets (4) of blocks. Unless otherwise specified, we will use the term blocks for both
simple and compound blocks, as they are handled similarly.

Figure 2: The camera model is constrained to reduce the ambiguity introduced in learning from a single view of an object. We position the camera at a fixed distance and direct its focus at the origin; rotation is allowed about the x-axis. Since the object model is allowed to move about the scene and rotate, this model is capable of capturing most images of a scene.
The connections between blocks are made at a point on adjacent, parallel faces. We consider the
organization of these connections as a graph defining the structural topology of an object category,
where the nodes in the graph represent structural parts and the edges give the connections. We use
directed edges, inducing attachment dependence among parts.
Each block has three internal parameters representing its width, height, and length. Blocks representing symmetric pairs or quartets have one or two extra parameters defining the relative positioning
of the sub-blocks. Blocks potentially have two external attachment parameters, u, v, where one other
block is connected. We further constrain blocks to attach to at most one other block, giving a directed tree
for the topology and enabling conditional independence among attachments. Note that blocks can
be visually "attached" to additional blocks that they abut, but representing them as true attachments
makes the model more complex and is not necessary. Intuitively, the model is much like physically
building a piece of furniture block by block, but saving on glue by only connecting an added block
to one other block. Despite its simplicity, this model can approximate a surprising range of man
made objects.
For a set of n connected blocks of the form b = (w, h, l, u_1, v_1, ...), the structure model is
s = (φ, p_o, b_1, ..., b_n). We position the connected blocks in an object coordinate system defined
by a point p_o ∈ R^3 on one of the blocks and a y-axis rotation angle, φ, about this position. Since
we constrain the blocks to be connected at right angles on parallel faces, the position of other blocks
within the object coordinate system is entirely defined by p_o and the attachment points between
blocks.

The object structure instance parameters are assumed Gaussian distributed according to μ_s, Σ_s in
the likelihood (3). Since the instance parameters in the object model are conditionally independent
given the category, the covariance matrix is diagonal. Finally, for a block b_i attaching to b_j on faces
defined by the k-th size parameter, the topology edge set is defined as
$$t \;=\; \left\{ (i, j, k) : b_i \xrightarrow{\,k\,} b_j \right\}.$$
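As an illustration of the representation just described, a minimal (assumed, not the authors') encoding of blocks and the directed-tree topology might look like this:

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class Block:
    w: float                      # width
    h: float                      # height
    l: float                      # length
    u: Optional[float] = None     # attachment point on the parent-facing face
    v: Optional[float] = None     # (None for the root block)

@dataclass
class Structure:
    phi: float                          # y-axis rotation angle about p_o
    p_o: Tuple[float, float, float]     # anchor point in R^3 on one block
    blocks: List[Block] = field(default_factory=list)
    # Directed tree: (i, j, k) means block b_i attaches to b_j on the
    # parallel faces defined by the k-th size parameter.
    edges: List[Tuple[int, int, int]] = field(default_factory=list)
```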
2.2 Camera model
A full specification of the camera and the object position, pose, and scale leads to a redundant set
of parameters. We choose a minimal set for inference that retains full expressiveness as follows.
Since we are unable to distinguish the actual size of an object from its distance to the camera, we
constrain the camera to be at a fixed distance from the world origin. We reduce potential ambiguity
from objects of interest being variably positioned in R3 by constraining the camera to always look
at the world origin. Because we allow an object to rotate around its vertical axis, we only need to
specify the camera zenith angle, θ. Thus we set the horizontal x-coordinate of the camera in the
world to zero and allow θ to be the only variable extrinsic parameter. In other words, the position
of the camera is constrained to a circular arc on the y,z-plane (Figure 2). We model the amount of
perspective in the image from the camera by parameterizing its focal length, f. Our camera instance
parameters are thus c = (θ, f, s), where θ ∈ [−π/2, π/2] and f, s > 0. The camera instance
parameters in (3) are modeled as Gaussian with category parameters μ_c, Σ_c.
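A sketch of this minimal camera parameterization; the fixed camera-to-origin distance `dist` is an assumed constant (the paper fixes the distance but its value is not stated here):

```python
import numpy as np

def camera_position(theta, dist=10.0):
    """Camera on a circular arc in the y,z-plane at zenith angle theta,
    a fixed distance from the world origin (Figure 2)."""
    return np.array([0.0, dist * np.sin(theta), dist * np.cos(theta)])

def project(points, theta, f, s, dist=10.0):
    """Perspective projection of Nx3 world points for c = (theta, f, s).
    Rotating by theta about the x-axis makes the camera look at the origin."""
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c, -si],
                  [0.0,  si,   c]])
    p_cam = (points - camera_position(theta, dist)) @ R.T
    return s * f * p_cam[:, :2] / -p_cam[:, 2:3]   # pinhole division
```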
2.3 Image model
We represent an image as a collection of detected feature sets that are statistically generated by an
instance of our object and camera. Each image feature set is treated as arising from a corresponding
feature generator that depends on projected object information. For this work we generate edge points
from projected object contours and image foreground from colored surface points (Figure 3).
Figure 3: Example of the generative image model for detected features. The left side of the figure gives a rendering of the object and camera models fit to the image on the right side. The rightward arrows show the process of statistical generation of image features; the leftward arrows are feature detection in the image data.
We assume that feature responses are conditionally independent given the model and that the G
different types of features are also independent. Denoting the detected feature sets in the dth image
by x_d = x_{d1}, ..., x_{dG}, we expand the image component of equation (3) to
$$p^{(n_m)}(x_d \mid c_d, s_{md}, t_m) \;=\; \prod_{g=1}^{G} \prod_{i=1}^{N_x} f_{\theta_g}^{(n_m)}(x_{dgi}). \qquad (5)$$
The function $f_{\theta_g}^{(n_m)}(\cdot)$ measures the likelihood of a feature generator producing the response of a
detector at each pixel using our object and camera models. Effective construction and implementation
of the edge and surface point generators is intricate, and thus we only briefly summarize them.
Please refer to our technical report [21] for more details.
Edge point generator. We model edge point location and orientation as generated from projected
3D contours of our object model. Since the feature generator likelihood in (5) is computed over all
detection responses in an image, we define the edge generator likelihood as
$$\prod_{i=1}^{N_x} f_\theta(x_i) \;=\; \prod_{i=1}^{N_x} e_\theta(x_i)^{E_i}\; \bar{e}_\theta(x_i)^{(1-E_i)}, \qquad (6)$$
where the probability density function e_θ(·) gives the likelihood of a detected edge point at the ith
pixel, and ē_θ(·) is the density for pixel locations not containing an edge point. The indicator E_i is 1
if the pixel is an edge point and 0 otherwise. This can be approximated by [21]
$$\prod_{i=1}^{N_x} f_\theta(x_i) \;\approx\; \left\{ \prod_{i=1}^{N_x} \tilde{e}_\theta(x_i)^{E_i} \right\} e_{bg}^{N_{bg}}\; e_{miss}^{N_{miss}}, \qquad (7)$$
Nmiss
where ebg and emiss are the probabilities of background and missing detections and Nbg and Nmiss
are the number of background and missing detections. The density ee? approximates e? by estimating
the most likely correspondence between observed edge points and model edges.
To compute the edge point density e_θ, we assume correspondence and model the ith edge point,
generated from the jth model point, as a Gaussian distributed displacement d_ij in the direction
perpendicular to the projected model contour. We further define the gradient direction of the generated
edge point to have Gaussian error in its angle difference φ_ij with the perpendicular direction of the
projected contour. If m_j is the model point assumed to generate x_i, then
$$e_\theta(x_i) \;=\; c_e\, N(d_{ij};\, 0, \sigma_d)\; N(\phi_{ij};\, 0, \sigma_\phi), \qquad (8)$$
where the perpendicular distance between x_i and m_j and the angular difference between the edge point
gradient g_i and the model contour perpendicular v_j are defined as d_ij = ‖x_i − m_j‖ and
φ_ij = cos⁻¹( g_i^T v_j / (‖g_i‖ ‖v_j‖) ). The range of d_ij is ≥ 0, and the angle φ_ij is in [0, π].
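A direct transcription of equation (8), assuming the correspondence (x_i, m_j) has already been established and treating σ_d and σ_φ as standard deviations; c_e is left as a user-supplied constant:

```python
import numpy as np
from scipy.stats import norm

def edge_point_density(x_i, g_i, m_j, v_j, sigma_d, sigma_phi, c_e=1.0):
    """Equation (8): density of detected edge point x_i (image gradient g_i)
    given generating model contour point m_j with perpendicular v_j."""
    d_ij = np.linalg.norm(np.asarray(x_i) - np.asarray(m_j))
    cos_ang = np.dot(g_i, v_j) / (np.linalg.norm(g_i) * np.linalg.norm(v_j))
    phi_ij = np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return c_e * norm.pdf(d_ij, 0.0, sigma_d) * norm.pdf(phi_ij, 0.0, sigma_phi)
```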
Surface point generator. Surface points are the projected points of viewable surfaces in our object
model. Image foreground pixels are found using k-means clustering on pixel intensities; setting
k = 2 works well, as our training images were selected to have minimal clutter. Surface point
detections intersecting with the model surface projection lead to four easily identifiable cases:
foreground, background, missing, and noise. Similar to the edge point generator, the surface point
generator likelihood expands to
$$\prod_{i=1}^{N_x} f_\theta(x_i) \;=\; s_{fg}^{N_{fg}}\; s_{bg}^{N_{bg}}\; s_{noise}^{N_{noise}}\; s_{miss}^{N_{miss}}. \qquad (9)$$
3 Learning
To learn a category model, we sample the posterior, p(θ^(n) | X) ∝ p(X, θ^(n)), to find good parameters shared by images of multiple object examples from the category. Given enough iterations,
a good sampler converges to the target distribution and an optimal value can be readily discovered
in the process. However, our posterior distribution is highly convoluted with many sharp, narrow
ridges for close fits to the edge points and foreground. In our domain, as in many similar problems,
standard sampling techniques tend to get trapped in these local extrema for long periods of time.
Our strategy for inference is to combine a mixture of sampling techniques with different strengths
in exploring the posterior distribution while still maintaining convergence conditions.
Our sampling space is over all category and instance parameters for a set of input images. We denote
the space over an instance of the camera and object models with n parts as C × S^(n). Let T^(n) be
the space over all topologies and R_c^(n) × R_s^(n) the space over all category statistics. The complete
sampling space with m subcategories and D instances is then defined as
$$\Theta \;=\; \bigcup_{n \in \mathbb{N}^m} C^D \times S^{(n)D} \times T^{(n)} \times R_c^{(n)} \times R_s^{(n)}. \qquad (10)$$
Our goal is to sample the posterior with θ^(n) ∈ Θ such that we find the set of parameters that
maximizes it. Since the number of parameters in the sampling space is unknown, some proposals
must change the model dimension. In particular, these jump moves (following the terminology of Tu
and Zhu [27]) arise from changes in topology. Diffusion moves make changes to parameters within
a given topology. We cycle between the two kinds of moves.
Diffusion moves for sampling within topology. We found a multivariate Gaussian with small
covariance values on the diagonal to be a good proposal distribution for the instance parameters.
Proposals for block size changes are done in one of two ways: scaling or shifting attached blocks.
We found that both are useful for good exploration of the object structure parameter space. Category
parameters were sampled by making proposals from the Gamma priors.
Using standard Metropolis-Hastings (MH) [1, 14], the proposed moves are accepted with probability
$$\alpha \;=\; \min\left\{ 1,\; \frac{p(\tilde\theta^{(n)} \mid X)\; q(\theta^{(n)} \mid \tilde\theta^{(n)})}{p(\theta^{(n)} \mid X)\; q(\tilde\theta^{(n)} \mid \theta^{(n)})} \right\}. \qquad (11)$$
The MH diffusion moves exhibit a random walk behavior and can take extended periods of time
with many rejections to converge and properly mix well in regions of high probability in the target
distribution. Hence we occasionally follow a hybrid Markov chain based on stochastic dynamics,
where our joint density is used in a potential energy function. We use the common leapfrog discretization [18] to follow the dynamics and sample from phase space. The necessary derivative
calculations are approximated using numerical differentiation (details in [21]).
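A minimal sketch of one MH diffusion step under a symmetric Gaussian proposal, for which the q-ratio in (11) cancels; `log_post` is an assumed callable returning the unnormalized log posterior:

```python
import numpy as np

def mh_diffusion_step(theta, log_post, proposal_cov, rng):
    """One Metropolis-Hastings diffusion move over instance parameters."""
    theta_prop = rng.multivariate_normal(theta, proposal_cov)
    log_alpha = log_post(theta_prop) - log_post(theta)   # symmetric proposal
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return theta_prop, True    # accepted
    return theta, False            # rejected
```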
Jump moves for topology changes. For jump moves, we use the trans-dimensional sampling approach
outlined by Green [7]. For example, in the case of a block birth in the model, we modify the
standard MH acceptance probability to
$$\alpha \;=\; \min\left\{ 1,\; \frac{p(\tilde\theta^{(n+1)} \mid X)\; r_d}{p(\theta^{(n)} \mid X)\; q(\tilde b, \tilde t)\; r_b}\; \left| \frac{\partial\, \tilde\theta^{(n+1)}}{\partial\, (\theta^{(n)}, \tilde b, \tilde t)} \right| \right\}. \qquad (12)$$
The jump proposal distribution generates a new block and attachment edge in the topology that are
directly used in the proposed object model. Hence, the change of variable factor in the Jacobian
reduces to 1. The probability of selecting a birth move versus a death move is given by the ratio
r_d/r_b, which we have also defined to be 1. The complementary block death move is similar, with the
inverse ratio of posterior and proposal distributions. We additionally define split and merge moves.
These are essential moves in our case because the sampler often generates blocks with strong partial
fits, and proposals to split them are often accepted.
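Under the simplifications just stated (unit Jacobian, r_d/r_b = 1), a birth move reduces to the sketch below; `propose_block` and `log_q_block` are hypothetical helpers for drawing a new block with its attachment edge and evaluating its proposal density:

```python
import numpy as np

def rj_birth_step(theta, log_post, propose_block, log_q_block, rng):
    """One reversible-jump birth move (Equation 12), with the model
    represented as a list of block parameter tuples (an assumed encoding)."""
    block = propose_block(rng)
    theta_prop = theta + [block]             # model with one more part
    log_alpha = log_post(theta_prop) - log_post(theta) - log_q_block(block)
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return theta_prop, True
    return theta, False
```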
4 Results
We evaluated our model and its inference with image sets of furniture categories, including tables,
chairs, sofas, footstools, and desks. We have 30 images in each category containing a single arbitrary
view of the object instance. The images we selected for our data set have the furniture object
prominently in the foreground. This enables focusing on evaluating how well we learn 3D structure
models of objects.

Figure 4: Generated samples of tables (a) and chairs (b) from the learned structure topology and statistical category parameters. The table shows the confusion matrix for object category recognition.

                          Predicted
  Actual       Table  Chair  Footstool  Sofa  Desk
  Table          10     5        0        0     0
  Chair           5     9        0        1     0
  Footstool       4    10        1        0     0
  Sofa            0     5        3        7     0
  Desk            2     3        1        3     6
Inference of the object and camera instances was done on detected edge and surface points in the
images. We applied a Canny-based detector for the edges in each image, using the same parameterization each time. Thus, the images contain some edge points considered noise or that are missing
from obvious contours. To extract the foreground, we applied a dynamic threshold discovered in
each image with a k-means algorithm. Since the furniture objects in the images primarily occupy
the image foreground, the detection is quite effective.
We learned the object structure for each category over a 15-image subset of our data for training
purposes. We initialized each run of the sampler with a random draw of the category and instance
parameters. This is accomplished by first sampling the prior for the object position, rotation and
camera view; initially there are no structural elements in the model. We then sample the likelihoods
for the instance parameters. The reversible-jump moves in the sampler iteratively propose adding
and removing object constructs to the model. The mixture of moves in the sampler was 1-to-1 for
jump and diffusion moves, with a stochastic dynamics chain performed very infrequently. Figure 6 shows
examples of learned furniture categories and their instances fit to images after 100K iterations. We
visualize the inferred structure topology and statistics in Figure 4 with generated samples from the
learned table and chair categories. We observe that the topology of the object structure is quickly
established after roughly 10K iterations; this can be seen in Figure 5, which shows the simultaneous
inference of two table instances through roughly 10K iterations.
We tested the recognition ability of the learned models on a held out 15-image subset of our data for
each category. For each image, we draw a random sample from the category statistics and a topology
and begin the diffusion sampling process to fit it. The best overall fit according to the joint density
is declared the predicted category. The confusion matrix shown in Figure 4 shows mixed results.
Overall, recognition is substantively better than chance (20%), but we expect that much better results
are possible with our approach. We conclude from the learned models and confusion matrix that the
chair topology shares much of its structure with the other categories and causes the most mistakes.
We continue to experiment with larger training data sets, clustering category structure, and longer
run times to get better structure fits in the difficult training examples, each of which could help
resolve this confusion.
Figure 5: From left to right, successive random samples from 2 of 15 table instances, each after 2K
iterations of model inference. The category topology and statistics are learned simultaneously from
the set of images; the form of the structure is shared across instances.
Figure 6: Learning the topology of furniture objects. Sets of contiguous blocks were fit across five
image data sets. Model fitting is done jointly for the fifteen images of each set. The fits for the
training examples are shown by the blocks drawn in red. Detected edge points are shown in green.
Acknowledgments
This work is supported in part by NSF CAREER Grant IIS-0747511.
References
[1] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50(1):5–43, 2003.
[2] I. Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147, April 1987.
[3] M. B. Clowes. On seeing things. Artificial Intelligence, 2(1):79–116, 1971.
[4] D. Crandall and D. Huttenlocher. Weakly-supervised learning of part-based spatial models for visual object recognition. In 9th European Conference on Computer Vision, 2006.
[5] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative-Model Based Vision, 2004.
[6] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[7] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[8] P. J. Green. Trans-dimensional Markov chain Monte Carlo. In Highly Structured Stochastic Systems. 2003.
[9] D. Hoiem, C. Rother, and J. Winn. 3D LayoutCRF for multi-view object class recognition and segmentation. In CVPR, 2007.
[10] D. Huttenlocher and S. Ullman. Recognizing solid objects by alignment with an image. IJCV, 5(2):195–212, 1990.
[11] C. Kemp and J. B. Tenenbaum. The discovery of structural form. Proceedings of the National Academy of Sciences, 105(31):10687–10692, 2008.
[12] A. Kushal, C. Schmid, and J. Ponce. Flexible object models for category-level 3d object recognition. In CVPR, 2007.
[13] M. Leordeanu, M. Hebert, and R. Sukthankar. Beyond local appearance: Category recognition from pairwise interactions of simple features. In CVPR, 2007.
[14] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer-Verlag, 2001.
[15] D. G. Lowe. Fitting parameterized three-dimensional models to images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(5):441–450, 1991.
[16] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[17] G. Mori and J. Malik. Recovering 3d human body configurations using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
[18] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[19] S. Savarese and L. Fei-Fei. 3d generic object categorization, localization and pose estimation. In IEEE International Conference on Computer Vision (ICCV), 2007.
[20] S. Savarese and L. Fei-Fei. View synthesis for recognizing unseen poses of object classes. In European Conference on Computer Vision (ECCV), 2008.
[21] J. Schlecht and K. Barnard. Learning models of object structure. Technical report, University of Arizona, 2009.
[22] C. Sminchisescu. Kinematic jump processes for monocular 3d human tracking. In Computer Vision and Pattern Recognition, 2003.
[23] C. Sminchisescu and B. Triggs. Estimating articulated human motion with covariance scaled sampling. International Journal of Robotics Research, 22(6):371–393, 2003.
[24] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Learning hierarchical models of scenes, objects, and parts. In ICCV, 2005.
[25] K. Sugihara. A necessary and sufficient condition for a picture to represent a polyhedral scene. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(5):578–586, September 1984.
[26] J. B. Tenenbaum, T. L. Griffiths, and C. Kemp. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7):309–318, 2006.
[27] Z. Tu and S.-C. Zhu. Image segmentation by data-driven Markov chain Monte Carlo. IEEE Trans. Patt. Analy. Mach. Intell., 24(5):657–673, 2002.
[28] P. H. Winston. Learning structural descriptions from examples. In P. H. Winston, editor, The Psychology of Computer Vision, pages 157–209. McGraw-Hill, 1975.
[29] L. Zhu, Y. Chen, and A. Yuille. Unsupervised learning of a probabilistic grammar for object detection and parsing. In NIPS, 2006.
[30] S. Zhu and D. Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics and Vision, 4(2):259–362, 2006.
From PAC-Bayes Bounds to KL Regularization
Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand, Sara Shanian
Department of Computer Science and Software Engineering
Laval University, Québec (QC), Canada
[email protected]
Abstract
We show that convex KL-regularized objective functions are obtained from a
PAC-Bayes risk bound when using convex loss functions for the stochastic Gibbs
classifier that upper-bound the standard zero-one loss used for the weighted majority vote. By restricting ourselves to a class of posteriors, that we call quasi
uniform, we propose a simple coordinate descent learning algorithm to minimize
the proposed KL-regularized cost function. We show that standard ℓ_p-regularized
objective functions currently used, such as ridge regression and ℓ_p-regularized
boosting, are obtained from a relaxation of the KL divergence between the quasi
uniform posterior and the uniform prior. We present numerical experiments where
the proposed learning algorithm generally outperforms ridge regression and AdaBoost.
1 Introduction
What should a learning algorithm optimize on the training data in order to give classifiers having the
smallest possible true risk? Many different specifications of what should be optimized on the training data have been provided by using different inductive principles. But the universally accepted
guarantee on the true risk, however, always comes with a so-called risk bound that holds uniformly
over a set of classifiers. Since a risk bound can be computed from what a classifier achieves on the
training data, it automatically suggests that learning algorithms should find a classifier that minimizes a tight risk (upper) bound.
Among the data-dependent bounds that have been proposed recently, the PAC-Bayes bounds [6, 8,
4, 1, 3] seem to be especially tight. These bounds thus appear to be a good starting point for the
design of a bound-minimizing learning algorithm. In that respect, [4, 5, 3] have proposed to use
isotropic Gaussian posteriors over the space of linear classifiers. But a computational drawback of
this approach is the fact that the Gibbs empirical risk is not a quasi-convex function of the parameters
of the posterior. Consequently, the resultant PAC-Bayes bound may have several local minima for
certain data sets, thus giving an intractable optimization problem in the general case. To avoid
such computational problems, we propose here to use convex loss functions for stochastic Gibbs
classifiers that upper-bound the standard zero-one loss used for the weighted majority vote. By
restricting ourselves to a class of posteriors, that we call quasi uniform, we propose a simple coordinate descent learning algorithm to minimize the proposed KL-regularized cost function. We show
that there are no loss of discriminative power by restricting the posterior to be quasi uniform. We
also show that standard ℓ_p-regularized objective functions currently used, such as ridge regression
and ℓ_p-regularized boosting, are obtained from a relaxation of the KL divergence between the quasi
uniform posterior and the uniform prior. We present numerical experiments where the proposed
learning algorithm generally outperforms ridge regression and AdaBoost [7].
2 Basic Definitions

We consider binary classification problems where the input space X consists of an arbitrary subset
of R^d and the output space Y = {−1, +1}. An example is an input-output (x, y) pair where x ∈ X
and y ∈ Y. Throughout the paper, we adopt the PAC setting where each example (x, y) is drawn
according to a fixed, but unknown, distribution D on X × Y.
The risk R(h) of any classifier h : X → Y is defined as the probability that h misclassifies an
example drawn according to D. Given a training set S of m examples, the empirical risk R_S(h) of
any classifier h is defined by the frequency of training errors of h on S. Hence
$$R(h) \;\stackrel{\mathrm{def}}{=}\; \mathop{\mathbb{E}}_{(x,y)\sim D} I(h(x) \neq y); \qquad R_S(h) \;\stackrel{\mathrm{def}}{=}\; \frac{1}{m}\sum_{i=1}^{m} I(h(x_i) \neq y_i),$$
where I(a) = 1 if predicate a is true and 0 otherwise.
After observing the training set S, the task of the learner is to choose a posterior distribution Q
over a space H of classifiers such that the Q-weighted majority vote classifier B_Q will have the
smallest possible risk. On any input example x, the output B_Q(x) of the majority vote classifier B_Q
(sometimes called the Bayes classifier) is given by
$$B_Q(x) \;\stackrel{\mathrm{def}}{=}\; \mathrm{sgn}\left( \mathop{\mathbb{E}}_{h\sim Q} h(x) \right),$$
where sgn(s) = +1 if s > 0 and sgn(s) = −1 otherwise. The output of the deterministic majority
vote classifier B_Q is closely related to the output of a stochastic classifier called the Gibbs classifier
G_Q. To classify an input example x, the Gibbs classifier G_Q chooses randomly a (deterministic)
classifier h according to Q to classify x. The true risk R(G_Q) and the empirical risk R_S(G_Q) of the
Gibbs classifier are thus given by
$$R(G_Q) \;=\; \mathop{\mathbb{E}}_{h\sim Q} R(h); \qquad R_S(G_Q) \;=\; \mathop{\mathbb{E}}_{h\sim Q} R_S(h).$$
Any bound for R(G_Q) can straightforwardly be turned into a bound for the risk of the majority vote
R(B_Q). Indeed, whenever B_Q misclassifies x, at least half of the classifiers (under measure Q)
misclassify x. It follows that the error rate of G_Q is at least half of the error rate of B_Q. Hence
R(B_Q) ≤ 2R(G_Q). As shown in [5], this factor of 2 can sometimes be reduced to (1 + ε).
3 PAC-Bayes Bounds and General Loss Functions

In this paper, we use the following PAC-Bayes bound which is obtained directly from Theorem 1.2.1
of [1] and Corollary 2.2 of [3] by using 1 − exp(−x) ≤ x, ∀x ∈ R.
Theorem 3.1. For any distribution D, any set H of classifiers, any distribution P of support H,
any δ ∈ (0, 1], and any positive real number C′, we have
$$\Pr_{S\sim D^m}\left( \forall\, Q \text{ on } H:\; R(G_Q) \;\le\; \frac{1}{1-e^{-C'}}\left[ C'\cdot R_S(G_Q) + \frac{1}{m}\left( \mathrm{KL}(Q\|P) + \ln\frac{1}{\delta} \right) \right] \right) \;\ge\; 1-\delta,$$
where $\mathrm{KL}(Q\|P) \stackrel{\mathrm{def}}{=} \mathop{\mathbb{E}}_{h\sim Q} \ln\frac{Q(h)}{P(h)}$ is the Kullback-Leibler divergence between Q and P.
h?Q
Note that the dependence on Q of the upper bound on R(GQ ) is realized via Gibbs? empirical risk
RS (GQ ) and the PAC-Bayes regularizer KL(QkP ). As in boosting, we focus on the case where
the a priori defined class H consists (mostly) of ?weak? classifiers having large risk R(h) . In this
case, R(GQ ) is (almost) always large (near 1/2) for any Q even if the majority vote BQ has null
risk. In this case the disparity between R(BQ ) and R(GQ ) is enormous and the upper-bound on
R(GQ ) has very little relevance with R(BQ ). On way to obtain a more relevant bound on R(BQ )
from PAC-Bayes theory is to use a loss function ?Q (x, y) for stochastic classifiers which is distinct
from the loss used for the deterministic classifiers (the zero-one loss in our case). In order to obtain
a tractable optimization problem for a learning algorithm to solve, we propose here to use a loss
?Q (x, y) which is convex in Q and that upper-bounds as closely as possible the zero-one loss of the
deterministic majority vote BQ .
2
def
Consider WQ (x, y) = Eh?Q I(h(x) 6= y), the Q-fraction of binary classifiers that err on example (x, y). Then, R(GQ ) = E(x,y)?D WQ (x, y). Following [2], we consider any non-negative
convex loss ?Q (x, y) that can be expanded in a Taylor series around WQ (x, y) = 1/2:
k
?
?
X
X
def
k
?Q (x, y) = 1 +
ak (2WQ (x, y) ? 1) = 1 +
ak E ? yh(x)
,
k=1
h?Q
k=1
that upper bounds the risk of the majority vote BQ , i.e.,
1
?Q (x, y) ? I WQ (x, y) >
2
?Q, x, y .
It has been shown [2] that ?Q (x, y) can be expressed in terms of the risk on example (x, y) of a
Gibbs classifier described by a transformed posterior Q on N ? H? , i.e.,
h
i
?Q (x, y) = 1 + ca 2WQ (x, y) ? 1 ,
def
where ca =
P?
k=1
|ak | and where
def
WQ (x, y) =
?
1 X
|ak | E . . . E I (?y)k h1 (x) . . . hk (x) = ?sgn(ak ) .
h1 ?Q
hk ?Q
ca
k=1
Since $W_{\bar{Q}}(x, y)$ is the expectation of a boolean random variable, Theorem 3.1 holds if we replace
(P, Q) by $(\bar{P}, \bar{Q})$ with $R(G_{\bar{Q}}) \stackrel{\mathrm{def}}{=} \mathop{\mathbb{E}}_{(x,y)\sim D} W_{\bar{Q}}(x, y)$ and $R_S(G_{\bar{Q}}) \stackrel{\mathrm{def}}{=} \frac{1}{m}\sum_{i=1}^{m} W_{\bar{Q}}(x_i, y_i)$. Moreover, it has been shown [2] that
$$\mathrm{KL}(\bar{Q}\|\bar{P}) \;=\; \bar{k}\cdot \mathrm{KL}(Q\|P), \qquad \text{where } \bar{k} = \frac{1}{c_a}\sum_{k=1}^{\infty} |a_k|\cdot k.$$
If we define
$$\zeta_Q \;\stackrel{\mathrm{def}}{=}\; \mathop{\mathbb{E}}_{(x,y)\sim D} \zeta_Q(x, y) \;=\; 1 + c_a\left[ 2R(G_{\bar{Q}}) - 1 \right]$$
$$\widehat{\zeta}_Q \;\stackrel{\mathrm{def}}{=}\; \frac{1}{m}\sum_{i=1}^{m} \zeta_Q(x_i, y_i) \;=\; 1 + c_a\left[ 2R_S(G_{\bar{Q}}) - 1 \right],$$
Theorem 3.1 gives an upper bound on ζ_Q and, consequently, on the true risk R(B_Q) of the majority
vote. More precisely, we have the following theorem.
Theorem 3.2. For any D, any H, any P of support H, any δ ∈ (0, 1], any positive real number C′,
and any loss function ζ_Q(x, y) defined above, we have
$$\Pr_{S\sim D^m}\left( \forall\, Q \text{ on } H:\; \zeta_Q \le g(c_a, C') + \frac{C'}{1-e^{-C'}}\left[ \widehat{\zeta}_Q + \frac{2c_a}{mC'}\left( \bar{k}\cdot \mathrm{KL}(Q\|P) + \ln\frac{1}{\delta} \right) \right] \right) \;\ge\; 1-\delta,$$
where $g(c_a, C') \stackrel{\mathrm{def}}{=} 1 - c_a + \frac{C'}{1-e^{-C'}}\,(c_a - 1)$.
4 Bound Minimization Learning Algorithms
The task of the learner is to find the posterior Q that minimizes the upper bound on ζ_Q for a fixed
loss function given by the coefficients $\{a_k\}_{k=1}^{\infty}$ of the Taylor series expansion for ζ_Q(x, y). Finding
Q that minimizes the upper bound given by Theorem 3.2 is equivalent to finding Q that minimizes
$$f(Q) \;\stackrel{\mathrm{def}}{=}\; C\sum_{i=1}^{m} \zeta_Q(x_i, y_i) + \mathrm{KL}(Q\|P), \qquad \text{where } C \stackrel{\mathrm{def}}{=} C'/(2 c_a \bar{k}).$$
To compare the proposed learning algorithms with AdaBoost, we will consider, for ζ_Q(x, y), the
exponential loss given by
$$\exp\left( -\frac{1}{\gamma}\, y \sum_{h\in H} Q(h)h(x) \right) \;=\; \exp\left( \frac{1}{\gamma}\left[ 2W_Q(x,y) - 1 \right] \right).$$
For this choice of loss, we have $c_a = e^{1/\gamma} - 1$ and $\bar{k} = \gamma^{-1}/(1 - e^{-1/\gamma})$. Because of its simplicity,
we will also consider, for ζ_Q(x, y), the quadratic loss given by
$$\left( \frac{1}{\gamma}\, y \sum_{h\in H} Q(h)h(x) - 1 \right)^{\!2} \;=\; \left( \frac{1}{\gamma}\left[ 1 - 2W_Q(x,y) \right] - 1 \right)^{\!2}.$$
For this choice of loss, we have $c_a = 2\gamma^{-1} + \gamma^{-2}$ and $\bar{k} = (2\gamma + 2)/(2\gamma + 1)$. Note that this loss
has the minimum value of zero for examples having a margin $y\sum_{h\in H} Q(h)h(x) = \gamma$.
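Both losses, and the constants (c_a, k̄) of their Taylor expansions, are simple to compute; a minimal sketch with assumed helper names:

```python
import numpy as np

def exp_loss(margin, gamma):
    """Exponential loss of a margin y * w . h(x)."""
    return np.exp(-margin / gamma)

def quad_loss(margin, gamma):
    """Quadratic loss; zero exactly at margin == gamma."""
    return (margin / gamma - 1.0) ** 2

def loss_constants(loss, gamma):
    """(c_a, k_bar) from the Taylor expansion around W_Q = 1/2."""
    if loss == "exponential":
        return (np.exp(1.0 / gamma) - 1.0,
                (1.0 / gamma) / (1.0 - np.exp(-1.0 / gamma)))
    return (2.0 / gamma + 1.0 / gamma**2,
            (2.0 * gamma + 2.0) / (2.0 * gamma + 1.0))

margins = np.array([-0.5, 0.0, 0.05, 0.5])
print(exp_loss(margins, 0.5), quad_loss(margins, 0.5))
print(loss_constants("exponential", 0.5), loss_constants("quadratic", 0.5))
```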
With these two choices of loss functions, ζ_Q(x, y) is convex in Q. Moreover, KL(Q‖P) is also
convex in Q. Since a sum of convex functions is also convex, it follows that objective function f
is convex in Q (which has a convex domain). Consequently, f has a single local minimum which
coincides with the global minimum. We therefore propose to minimize f coordinate-wise, similarly
as it is done for AdaBoost [7]. However, to ensure that Q is a distribution (i.e., that $\sum_{h\in H} Q(h) = 1$),
each coordinate minimization will consist of a transfer of weight from one classifier to another.
4.1 Quasi Uniform Posteriors
We consider learning algorithms that work in a space H of binary classifiers such that for
each h ∈ H, the boolean complement of h is also in H. More specifically, we have H =
{h_1, ..., h_n, h_{n+1}, ..., h_{2n}} where h_i(x) = −h_{n+i}(x) ∀x ∈ X and ∀i ∈ {1, ..., n}. We thus
say that (h_i, h_{n+i}) constitutes a boolean complement pair of classifiers.

We consider a uniform prior distribution P over H, i.e., P_i = 1/(2n) ∀i ∈ {1, ..., 2n}.

The posterior distribution Q over H is constrained to be quasi uniform. By this, we mean that
Q_i + Q_{i+n} = 1/n ∀i ∈ {1, ..., n}, i.e., the total weight assigned to each boolean complement pair of
classifiers is fixed to 1/n. Let $w_i \stackrel{\mathrm{def}}{=} Q_i - Q_{i+n}$ ∀i ∈ {1, ..., n}. Then w_i ∈ [−1/n, +1/n] ∀i ∈
{1, ..., n}, whereas Q_i ∈ [0, 1/n] ∀i ∈ {1, ..., 2n}.
For any quasi uniform Q, the output B_Q(x) of the majority vote on any example x is given by
$$B_Q(x) \;\stackrel{\mathrm{def}}{=}\; \mathrm{sgn}\left( \sum_{i=1}^{2n} Q_i h_i(x) \right) \;=\; \mathrm{sgn}\left( \sum_{i=1}^{n} w_i h_i(x) \right) \;=\; \mathrm{sgn}\left( \mathbf{w}\cdot\mathbf{h}(x) \right).$$
Consequently, the set of majority votes B_Q over quasi uniform posteriors is isomorphic to the set
of linear separators with real weights. There is thus no loss of discriminative power if we restrict
ourselves to quasi uniform posteriors.
Since all loss functions that we consider are functions of $2W_Q(x,y) - 1 = -y\sum_i Q_i h_i(x)$, they
are thus functions of $y\,\mathbf{w}\cdot\mathbf{h}(x)$. Hence we will often write $\zeta(y\,\mathbf{w}\cdot\mathbf{h}(x))$ for ζ_Q(x, y).
The basic iteration for the learning algorithm consists of choosing (at random) a boolean complement
pair of classifiers, call it (h_1, h_{n+1}), and then attempting to change only Q_1, Q_{n+1}, w_1
according to:
$$Q_1 \leftarrow Q_1 + \frac{\delta}{2}; \qquad Q_{n+1} \leftarrow Q_{n+1} - \frac{\delta}{2}; \qquad w_1 \leftarrow w_1 + \delta, \qquad (1)$$
for some optimally chosen value of δ.
Let Q′ and w′ be, respectively, the new posterior and the new weight vector obtained with such a
change. The above-mentioned convex properties of objective function f imply that we only need to
look for the value of δ* satisfying
$$\frac{df(Q')}{d\delta} \;=\; 0. \qquad (2)$$
If w_1 + δ* > 1/n, then w_1 ← 1/n, Q_1 ← 1/n, Q_{n+1} ← 0. If w_1 + δ* < −1/n, then w_1 ←
−1/n, Q_1 ← 0, Q_{n+1} ← 1/n. Otherwise, we accept the change described by Equation 1 with
δ = δ*.
For objective function f we simply have
$$\frac{df(Q')}{d\delta} \;=\; Cm\,\frac{d\widehat{\zeta}_{Q'}}{d\delta} \;+\; \frac{d\,\mathrm{KL}(Q'\|P)}{d\delta}, \qquad (3)$$
where
$$\frac{d\,\mathrm{KL}(Q'\|P)}{d\delta} \;=\; \frac{d}{d\delta}\!\left[ \left( Q_1 + \tfrac{\delta}{2} \right)\ln\frac{Q_1 + \frac{\delta}{2}}{\frac{1}{2n}} + \left( Q_{n+1} - \tfrac{\delta}{2} \right)\ln\frac{Q_{n+1} - \frac{\delta}{2}}{\frac{1}{2n}} \right] \;=\; \frac{1}{2}\ln\frac{Q_1 + \delta/2}{Q_{n+1} - \delta/2}. \qquad (4)$$
For the quadratic loss, we find
$$m\,\frac{d\widehat{\zeta}_{Q'}}{d\delta} \;=\; \frac{2m\delta}{\gamma^2} \;+\; \frac{2}{\gamma^2}\sum_{i=1}^{m} D_{\mathbf{w}}^{ql}(i)\, y_i h_1(x_i), \qquad (5)$$
where
$$D_{\mathbf{w}}^{ql}(i) \;\stackrel{\mathrm{def}}{=}\; y_i\,\mathbf{w}\cdot\mathbf{h}(x_i) \;-\; \gamma. \qquad (6)$$
Consequently, for the quadratic loss case, the optimal value δ* satisfies
$$\frac{2Cm\delta}{\gamma^2} \;+\; \frac{2C}{\gamma^2}\sum_{i=1}^{m} D_{\mathbf{w}}^{ql}(i)\, y_i h_1(x_i) \;+\; \frac{1}{2}\ln\frac{Q_1 + \delta/2}{Q_{n+1} - \delta/2} \;=\; 0. \qquad (7)$$
For the exponential loss, we find
$$m\,\frac{d\widehat{\zeta}_{Q'}}{d\delta} \;=\; \frac{e^{\delta/\gamma}}{\gamma}\sum_{i=1}^{m} D_{\mathbf{w}}^{el}(i)\, I(h_1(x_i) \neq y_i) \;-\; \frac{e^{-\delta/\gamma}}{\gamma}\sum_{i=1}^{m} D_{\mathbf{w}}^{el}(i)\, I(h_1(x_i) = y_i), \qquad (8)$$
where
$$D_{\mathbf{w}}^{el}(i) \;\stackrel{\mathrm{def}}{=}\; \exp\left( -\frac{1}{\gamma}\, y_i\,\mathbf{w}\cdot\mathbf{h}(x_i) \right). \qquad (9)$$
Consequently, for the exponential loss case, the optimal value δ* satisfies
$$\frac{C e^{\delta/\gamma}}{\gamma}\sum_{i=1}^{m} D_{\mathbf{w}}^{el}(i)\, I(h_1(x_i) \neq y_i) \;-\; \frac{C e^{-\delta/\gamma}}{\gamma}\sum_{i=1}^{m} D_{\mathbf{w}}^{el}(i)\, I(h_1(x_i) = y_i) \;+\; \frac{1}{2}\ln\frac{Q_1 + \delta/2}{Q_{n+1} - \delta/2} \;=\; 0. \qquad (10)$$
After changing w_1, we need to recompute¹ D_w(i) ∀i ∈ {1, ..., m}. This can be done with the
following update rules:
$$D_{\mathbf{w}}^{ql}(i) \;\leftarrow\; D_{\mathbf{w}}^{ql}(i) + y_i h_1(x_i)\,\delta \quad \text{(quadratic loss case)} \qquad (11)$$
$$D_{\mathbf{w}}^{el}(i) \;\leftarrow\; D_{\mathbf{w}}^{el}(i)\; e^{-\frac{1}{\gamma} y_i h_1(x_i)\,\delta} \quad \text{(exponential loss case)}. \qquad (12)$$
Since, initially, we have
$$D_{\mathbf{w}}^{ql}(i) = -\gamma \;\;\forall i \in \{1, \ldots, m\} \quad \text{(quadratic loss case)} \qquad (13)$$
$$D_{\mathbf{w}}^{el}(i) = 1 \;\;\forall i \in \{1, \ldots, m\} \quad \text{(exponential loss case)}, \qquad (14)$$
the dot product present in Equations 6 and 9 never needs to be computed. Consequently, updating
D_w takes Θ(m) time.

The computation of the summations over the m examples in Equation 7 or 10 takes Θ(m) time.
Once these summations are computed, solving Equation 7 or 10 takes Θ(1) time. Consequently,
it takes Θ(m) time to perform one basic iteration of the learning algorithm, which consists of (1)
solving Equation 7 or 10 to find δ*, (2) modifying w_1, Q_1, Q_{n+1}, and (3) updating D_w according to
Equation 11 or 12. The complete algorithm, called f minimization, is described by the pseudo code
of Algorithm 1.
¹ D_w(i) stands for either D_w^{ql}(i) or D_w^{el}(i).
Algorithm 1 : f minimization

1: Initialization: Let Q_i = Q_{n+i} = 1/(2n), w_i = 0, ∀i ∈ {1, ..., n}.
   Initialize D_w according to Equation 13 or 14.
2: repeat
3:   Choose at random h ∈ H and call it h_1 (h_{n+1} is then the boolean complement of h_1).
4:   Find δ* that solves Equation 7 or 10.
5:   If [−1/n < w_1 + δ* < 1/n] then Q_1 ← Q_1 + δ*/2; Q_{n+1} ← Q_{n+1} − δ*/2; w_1 ← w_1 + δ*.
6:   If [w_1 + δ* ≥ 1/n] then Q_1 ← 1/n; Q_{n+1} ← 0; w_1 ← 1/n.
7:   If [w_1 + δ* ≤ −1/n] then Q_1 ← 0; Q_{n+1} ← 1/n; w_1 ← −1/n.
8:   Update D_w according to Equation 11 or 12.
9: until Convergence
The repeat-until loop in Algorithm 1 was implemented as follows. We first mix at random the n
boolean complement pairs of classifiers and then go sequentially over each pair (h_i, h_{n+i}) to update
w_i and D_w. We repeat this sequence until no weight change exceeds a specified small number ε.
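As a concrete illustration, here is a minimal sketch of Algorithm 1 for the quadratic loss. The paper does not specify a root-finding method for Equation 7; bisection is one valid choice, since the left-hand side is increasing in δ and diverges to −∞ and +∞ at the endpoints of (−2Q_1, 2Q_{n+1}), so the root is unique and interior. All names and default hyperparameter values below are ours, not the authors':

```python
import numpy as np

def solve_delta_quadratic(C, m, gamma, S, Q1, Qn1, iters=60, eps=1e-12):
    """Bisection on Equation 7. S = sum_i D_w(i) * y_i * h_1(x_i)."""
    def lhs(d):
        return (2.0 * C * m * d / gamma**2 + 2.0 * C * S / gamma**2
                + 0.5 * np.log((Q1 + d / 2.0) / (Qn1 - d / 2.0)))
    lo, hi = -2.0 * Q1 + eps, 2.0 * Qn1 - eps
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lhs(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f_minimization_quadratic(H, y, C=1.0, gamma=0.5, passes=50, tol=1e-6):
    """Coordinate descent of Algorithm 1, quadratic loss.
    H: (m, n) array with H[i, j] = h_j(x_i) in {-1, +1}; y: labels in {-1, +1}.
    Only the n base classifiers are stored; picking complement h_{n+j}
    is equivalent to negating delta, so it needs no separate handling."""
    m, n = H.shape
    w = np.zeros(n)                      # w_j = Q_j - Q_{n+j}
    Q = np.full(2 * n, 1.0 / (2 * n))    # quasi-uniform posterior
    D = np.full(m, -gamma)               # Equation 13
    for _ in range(passes):
        biggest = 0.0
        for j in np.random.permutation(n):
            S = float(np.sum(D * y * H[:, j]))
            d = solve_delta_quadratic(C, m, gamma, S, Q[j], Q[n + j])
            # Mirrors steps 6-7 of Algorithm 1 (a no-op here, since the
            # KL barrier keeps the root strictly inside the interval).
            d = float(np.clip(d, -1.0 / n - w[j], 1.0 / n - w[j]))
            Q[j] += d / 2.0
            Q[n + j] -= d / 2.0
            w[j] += d
            D += y * H[:, j] * d         # Equation 11
            biggest = max(biggest, abs(d))
        if biggest < tol:                # stopping rule from the text
            break
    return w, Q

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.3 * X[:, 1])
H = np.sign(X - np.median(X, axis=0))    # one crude stump per attribute
w, Q = f_minimization_quadratic(H, y)
print("training error:", np.mean(np.sign(H @ w) != y))
```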
4.2 From KL(Q‖P) to ℓ_p Regularization
We can recover ℓ_2 regularization if we upper-bound KL(Q‖P) by a quadratic function. Indeed, if
we use
$$q\ln q + \left( \frac{1}{n} - q \right)\ln\left( \frac{1}{n} - q \right) \;\le\; \frac{1}{n}\ln\frac{1}{2n} \;+\; 4n\left( q - \frac{1}{2n} \right)^{\!2} \qquad \forall q \in [0, 1/n], \qquad (15)$$
we obtain, for the uniform prior P_i = 1/(2n),
$$\mathrm{KL}(Q\|P) \;=\; \ln(2n) + \sum_{i=1}^{n}\left[ Q_i\ln Q_i + \left( \frac{1}{n} - Q_i \right)\ln\left( \frac{1}{n} - Q_i \right) \right] \;\le\; 4n\sum_{i=1}^{n}\left( Q_i - \frac{1}{2n} \right)^{\!2} \;=\; n\sum_{i=1}^{n} w_i^2. \qquad (16)$$
With this approximation, the objective function to minimize becomes
$$f_{\ell_2}(\mathbf{w}) \;=\; C''\sum_{i=1}^{m} \zeta\!\left( \frac{1}{\gamma}\, y_i\,\mathbf{w}\cdot\mathbf{h}(x_i) \right) + \|\mathbf{w}\|_2^2, \qquad (17)$$
subject to the ℓ_∞ constraint |w_j| ≤ 1/n ∀j ∈ {1, ..., n}. Here ‖w‖_2 denotes the Euclidean norm
of w, and ζ(x) = (x − 1)² for the quadratic loss and e^{−x} for the exponential loss.

If, instead, we minimize f_{ℓ_2} for $\mathbf{v} \stackrel{\mathrm{def}}{=} \mathbf{w}/\gamma$ and remove the ℓ_∞ constraint, we recover exactly ridge
regression for the quadratic loss case and ℓ_2-regularized boosting for the exponential loss case.
We can obtain an ℓ_1-regularized version of Equation 17 by repeating the above steps, using
$4n\left(q - \frac{1}{2n}\right)^{2} \le 2\left| q - \frac{1}{2n} \right|$ ∀q ∈ [0, 1/n], since, in that case, we find that
$\mathrm{KL}(Q\|P) \le \sum_{i=1}^{n} |w_i| \stackrel{\mathrm{def}}{=} \|\mathbf{w}\|_1$.
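A quick numerical check of these relaxations under the quasi-uniform parameterization Q_i = (1/n + w_i)/2 and Q_{n+i} = (1/n − w_i)/2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
w = rng.uniform(-1.0 / n, 1.0 / n, size=n)
Q = np.concatenate([(1.0 / n + w) / 2.0, (1.0 / n - w) / 2.0])
P = np.full(2 * n, 1.0 / (2 * n))
kl = float(np.sum(Q * np.log(Q / P)))
print(kl, n * np.sum(w**2), np.sum(np.abs(w)))  # KL <= n||w||_2^2 <= ||w||_1
```

For any such draw, the three printed values are increasing, since n·w_i² ≤ |w_i| whenever |w_i| ≤ 1/n.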
To sum up, the KL-regularized objective function f immediately follows from PAC-Bayes theory,
and ℓ_p regularization is obtained from a relaxation of f. Consequently, PAC-Bayes theory favors
the use of KL regularization if the goal of the learner is to produce a weighted majority vote with
good generalization.²

² Interestingly, [9] has recently proposed a KL-regularized version of LPBoost, but their objective function
was not derived from a uniform risk bound.
5 Empirical Results
For the sake of comparison, all learning algorithms of this subsection produce a weighted
majority vote classifier on the set of basis functions {h_1, ..., h_n} known as decision stumps. Each
decision stump h_i is a threshold classifier that depends on a single attribute: its output is +b if
the tested attribute exceeds a threshold value t, and −b otherwise, where b ∈ {−1, +1}. For each
attribute, at most ten equally-spaced possible values for t were determined a priori. Recall that,
although Algorithm 1 needs a set H of 2n classifiers containing n boolean complement pairs, it
outputs a majority vote with n real-valued weights defined on {h_1, ..., h_n}.
The results obtained for all tested algorithms are summarized in Table 1. We have compared Algorithm 1 with quadratic loss (KL-QL) and exponential loss (KL-EL) to AdaBoost [7] (AdB) and
ridge regression (RR).
Except for MNIST, all data sets were taken from the UCI repository. Each data set was randomly
split into a training set S of |S| examples and a testing set T of |T | examples. The number a
of attributes for each data set is also specified in Table 1. For AdaBoost, the number of boosting
rounds was fixed to 200. For all algorithms, RT refers to the frequency of errors, measured on the
testing set T .
In addition to this, the "C" and "γ" columns in Table 1 refer, respectively, to the C value of the
objective function f and to the γ parameter present in the loss functions. These hyperparameters
were determined from the training set only by performing the 10-fold cross validation (CV) method.
The hyperparameters that gave the smallest 10-fold CV error were then used to train the algorithms
on the whole training set, and the resulting classifiers were then run on the testing set.
Table 1: Summary of results.

Dataset        |S|   |T|    a  | (1) AdB | (2) RR       | (3) KL-EL           | (4) KL-QL
                               |   RT    |  RT     C    |  RT     C      γ    |  RT     C      γ
BreastCancer   343   340    9  |  0.053  | 0.050    10  | 0.047   0.1    0.1  | 0.047   0.02   0.4
Liver          170   175    6  |  0.320  | 0.309     5  | 0.360   0.5    0.02 | 0.286   0.02   0.3
Credit-A       353   300   15  |  0.170  | 0.157     2  | 0.227   0.1    0.2  | 0.183   0.02   0.05
Glass          107   107    9  |  0.178  | 0.206     5  | 0.187   500    0.01 | 0.196   0.02   0.01
Haberman       144   150    3  |  0.260  | 0.273   100  | 0.253   500    0.2  | 0.260   0.02   0.5
Heart          150   147   13  |  0.252  | 0.197     1  | 0.211   0.2    0.1  | 0.177   0.05   0.2
Ionosphere     176   175   34  |  0.120  | 0.131  0.05  | 0.120   20   0.0001 | 0.097   0.2    0.1
Letter:AB      500  1055   16  |  0.010  | 0.004   0.5  | 0.006   0.1    0.02 | 0.006   1000   0.1
Letter:DO      500  1058   16  |  0.036  | 0.026  0.05  | 0.019   500    0.01 | 0.020   0.02   0.05
Letter:OQ      500  1036   16  |  0.038  | 0.045   0.5  | 0.043   10   0.0001 | 0.047   0.1    0.05
MNIST:0vs8     500  1916  784  |  0.008  | 0.015  0.05  | 0.006   500   0.001 | 0.015   0.2    0.02
MNIST:1vs7     500  1922  784  |  0.013  | 0.012     1  | 0.014   500    0.02 | 0.014   1000   0.1
MNIST:1vs8     500  1936  784  |  0.025  | 0.024   0.2  | 0.016   0.2   0.001 | 0.031   1      0.02
MNIST:2vs3     500  1905  784  |  0.047  | 0.033   0.2  | 0.035   500  0.0001 | 0.029   0.02   0.05
Mushroom      4062  4062   22  |  0.000  | 0.001   0.5  | 0.000   10    0.001 | 0.000   1000   0.02
Ringnorm      3700  3700   20  |  0.043  | 0.037  0.05  | 0.025   500    0.01 | 0.039   0.05   0.05
Sonar          104   104   60  |  0.231  | 0.192  0.05  | 0.135   500    0.05 | 0.115   1000   0.1
Usvotes        235   200   16  |  0.055  | 0.060     2  | 0.060   0.5    0.1  | 0.055   1000   0.05
Waveform      4000  4000   21  |  0.085  | 0.079  0.02  | 0.080   0.2    0.05 | 0.080   0.02   0.05
Wdbc           285   284   30  |  0.049  | 0.049   0.2  | 0.039   500    0.02 | 0.046   1000   0.1

SSB column entries (the four statistically significant cases): (3) < (2, 4); (3) < (4); (4) < (1); (3) < (1, 2, 4).
We clearly see that the cross-validation method generally chooses very small values for γ. This, in
turn, gives a risk bound (computed from Theorem 3.2) having very large values (results not shown
here). We have also tried to choose C and γ from the risk bound values.³ This method for selecting
hyperparameters turned out to produce classifiers having larger testing errors (results not shown
here).

³ From the standard union bound argument, the bound of Theorem 3.2 holds simultaneously for k different
choices of (γ, C) if we replace δ by δ/k.

To determine whether or not a difference of empirical risk measured on the testing set T is statistically
significant, we have used the test set bound method of [4] (based on the binomial tail inversion)
with a confidence level of 95%. It turns out that no algorithm has succeeded in choosing a majority
vote classifier which was statistically significantly better (SSB) than the one chosen by another algorithm, except for the 4 cases that are listed in the column "SSB" of Table 1. We see that in these
cases, Algorithm 1 turned out to be statistically significantly better.
6 Conclusion
Our numerical results indicate that Algorithm 1 generally outperforms AdaBoost and ridge regression
when the hyperparameters C and γ are chosen by cross-validation. This indicates that the
empirical loss ζ̂_Q and the KL(Q‖P) regularizer that are present in the PAC-Bayes bound of Theorem 3.2 are key ingredients for learning algorithms to focus on. The fact that cross-validation turns
out to be more efficient than Theorem 3.2 at selecting good values for hyperparameters indicates that
PAC-Bayes theory does not yet capture quantitatively the proper tradeoff between ζ̂_Q and KL(Q‖P)
that learners should optimize on the training data. However, we feel that it is important to pursue
this research direction since it could potentially eliminate the need to perform the time-consuming
cross-validation method for selecting hyperparameters and provide better guarantees on the generalization error of classifiers output by learning algorithms. In short, it could perhaps yield the best
generic optimization problem for learning.
Acknowledgments
Work supported by NSERC discovery grants 122405 (M.M.) and 262067 (F.L.).
References
[1] Olivier Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. Monograph series of the Institute of Mathematical Statistics, http://arxiv.org/abs/0712.0248, December 2007.
[2] Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. A PAC-Bayes risk bound for general loss functions. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 449-456. MIT Press, Cambridge, MA, 2007.
[3] Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In Léon Bottou and Michael Littman, editors, Proceedings of the 26th International Conference on Machine Learning, pages 353-360, Montreal, June 2009. Omnipress.
[4] John Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273-306, 2005.
[5] John Langford and John Shawe-Taylor. PAC-Bayes & margins. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 423-430. MIT Press, Cambridge, MA, 2003.
[6] David McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5-21, 2003.
[7] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26:1651-1686, 1998.
[8] Matthias Seeger. PAC-Bayesian generalization bounds for Gaussian processes. Journal of Machine Learning Research, 3:233-269, 2002.
[9] Manfred K. Warmuth, Karen A. Glocer, and S.V.N. Vishwanathan. Entropy regularized LPBoost. In Proceedings of the 2008 Conference on Algorithmic Learning Theory, Springer LNAI 5254, pages 256-271, 2008.
| 3821 |@word repository:1 version:2 inversion:1 norm:1 c0:2 r:9 tried:1 q1:15 series:3 disparity:1 selecting:3 interestingly:1 outperforms:3 err:1 mushroom:1 yet:1 john:3 numerical:3 remove:1 update:3 half:2 warmuth:1 isotropic:1 short:1 manfred:1 boosting:6 org:1 mathematical:1 consists:3 indeed:2 automatically:1 little:1 haberman:1 becomes:1 provided:1 moreover:1 null:1 what:3 cm:2 minimizes:4 pursue:1 finding:2 guarantee:2 pseudo:1 voting:1 ebec:1 exactly:1 classifier:40 platt:1 grant:1 appear:1 producing:1 positive:2 engineering:1 local:2 ak:7 initialization:1 suggests:1 sara:1 ringnorm:1 statistically:3 acknowledgment:1 practical:1 testing:5 union:1 empirical:7 significantly:2 confidence:1 refers:1 selection:1 breastcancer:1 risk:24 optimize:2 equivalent:1 deterministic:4 go:1 starting:1 convex:13 qc:1 simplicity:1 immediately:1 rule:1 dw:19 coordinate:4 feel:1 qkp:14 annals:1 olivier:1 satisfying:1 updating:2 lpboost:2 capture:1 wj:1 sun:1 mentioned:1 monograph:1 littman:1 tight:2 solving:2 learner:4 basis:1 regularizer:2 train:1 distinct:1 kp:2 shanian:1 choosing:2 larger:1 solve:1 valued:1 say:1 otherwise:4 favor:1 statistic:2 sequence:1 rr:2 matthias:1 propose:5 gq:21 product:1 turned:3 relevant:1 loop:1 uci:1 olkopf:1 convergence:1 produce:2 montreal:1 liver:1 measured:2 solves:1 implemented:1 ois:3 come:1 indicate:1 trading:1 direction:1 waveform:1 drawback:1 closely:2 attribute:4 modifying:1 stochastic:5 sgn:7 mcallester:1 generalization:3 adb:2 summation:2 hold:3 around:1 credit:1 exp:4 algorithmic:1 achieves:1 adopt:1 smallest:3 currently:2 weighted:5 hoffman:1 minimization:4 mit:2 clearly:1 always:2 gaussian:2 avoid:1 pn:1 corollary:1 derived:1 focus:2 june:1 indicates:2 hk:2 seeger:1 glass:1 dependent:1 el:11 eliminate:1 accept:1 initially:1 lnai:1 quasi:11 transformed:1 among:1 classification:3 pascal:3 priori:2 misclassifies:3 constrained:1 initialize:1 once:1 never:1 having:5 look:1 constitutes:1 quantitatively:1 franc:3 randomly:2 wee:1 simultaneously:1 divergence:3 ourselves:3 n1:1 ab:2 succeeded:1 bq:19 taylor:3 euclidean:1 classify:2 column:2 boolean:8 yoav:1 cost:2 subset:1 uniform:15 predicate:1 optimally:1 straightforwardly:1 chooses:2 international:1 lee:1 michael:1 w1:16 containing:1 choose:3 hn:9 stump:2 summarized:1 coefficient:1 depends:1 h1:15 mario:3 observing:1 bayes:15 recover:2 minimize:5 spaced:1 yield:1 weak:1 bayesian:4 mc:1 whenever:1 definition:1 vs3:1 frequency:2 resultant:1 dataset:1 recall:1 subsection:1 alexandre:3 adaboost:7 done:2 until:3 langford:2 glocer:1 perhaps:1 name:1 true:5 inductive:1 regularization:5 hence:3 assigned:1 leibler:1 round:1 coincides:1 ulaval:1 ridge:7 complete:1 omnipress:1 wise:1 recently:2 laval:1 tail:1 kwk2:1 refer:1 significant:1 cambridge:2 gibbs:8 cv:2 rd:1 pm:1 similarly:1 shawe:1 dot:1 specification:1 posterior:15 certain:1 binary:3 kwk1:1 yi:14 minimum:4 determine:1 mix:1 exceeds:2 ing:1 cross:5 equally:1 dkl:2 usvotes:1 qi:13 prediction:1 regression:7 basic:3 expectation:1 df:2 arxiv:1 iteration:2 sometimes:2 whereas:1 addition:1 sch:1 kwk22:1 subject:1 december:1 oq:1 seem:1 effectiveness:1 call:4 near:1 split:1 gave:1 restrict:1 tradeoff:1 whether:1 bartlett:1 becker:1 peter:1 karen:1 generally:4 yw:2 listed:1 repeating:1 ten:1 reduced:1 http:1 schapire:1 tutorial:1 write:1 key:1 threshold:2 enormous:1 drawn:2 changing:1 ce:2 relaxation:3 fraction:1 sum:2 run:1 letter:3 throughout:1 almost:1 decision:2 def:22 bound:35 hi:7 fold:2 marchand:3 quadratic:9 precisely:1 constraint:2 vishwanathan:1 software:1 
sake:1 argument:1 attempting:1 expanded:1 performing:1 department:1 according:7 wi:6 qu:1 pr:2 taken:1 heart:1 ln:14 equation:10 turn:3 tractable:1 generic:1 vs7:1 denotes:1 binomial:1 ensure:1 laviolette:3 giving:1 eon:1 especially:1 objective:10 realized:1 dependence:1 rt:5 obermayer:1 thrun:1 majority:16 code:1 minimizing:1 ql:9 mostly:1 robert:1 potentially:1 negative:1 design:1 proper:1 unknown:1 perform:2 upper:11 lacasse:3 descent:2 arbitrary:1 canada:1 david:1 complement:7 germain:3 pair:7 kl:27 specified:2 optimized:1 vs8:2 firstname:1 wi2:1 secondname:1 explanation:1 power:2 eh:1 regularized:12 thermodynamics:1 imply:1 prior:4 discovery:1 freund:1 loss:39 ingredient:1 validation:5 principle:1 editor:3 pi:2 ift:1 summary:1 repeat:3 supported:1 institute:1 stand:1 qn:16 universally:1 kullback:1 global:1 sequentially:1 consuming:1 discriminative:2 xi:14 sonar:1 table:5 transfer:1 ca:15 h2n:1 expansion:1 bottou:1 separator:1 domain:1 whole:1 hyperparameters:6 exponential:8 yh:1 theorem:11 pac:19 ionosphere:1 intractable:1 consist:2 mnist:5 restricting:3 catoni:1 margin:3 wdbc:1 entropy:1 simply:1 expressed:1 nserc:1 springer:1 satisfies:2 ma:2 goal:1 consequently:9 replace:2 change:4 specifically:1 determined:2 uniformly:1 except:2 called:4 total:1 isomorphic:1 accepted:1 vote:16 wq:13 support:2 relevance:1 tested:2 |
3,115 | 3,822 | Convex Relaxation of Mixture Regression with
Efficient Algorithms
Novi Quadrianto, Tib?erio S. Caetano, John Lim
NICTA - Australian National University
Canberra, Australia
{firstname.lastname}@nicta.com.au
Dale Schuurmans
University of Alberta
Edmonton, Canada
[email protected]
Abstract
We develop a convex relaxation of maximum a posteriori estimation of a mixture
of regression models. Although our relaxation involves a semidefinite matrix variable, we reformulate the problem to eliminate the need for general semidefinite
programming. In particular, we provide two reformulations that admit fast algorithms. The first is a max-min spectral reformulation exploiting quasi-Newton descent. The second is a min-min reformulation consisting of fast alternating steps of
closed-form updates. We evaluate the methods against Expectation-Maximization
in a real problem of motion segmentation from video data.
1 Introduction
Regression is a foundational problem in machine learning and statistics. In practice, however, data
is often better modeled by a mixture of regressors, as demonstrated by the prominence of mixture
regression in a number of application areas. Gaffney and Smyth [1], for example, use mixture regression to cluster trajectories, i.e. sets of short sequences of data such as cyclone or object movements
in video sequences as a function of time. Each trajectory is believed to have been generated from one
of a number of components, where each component is associated with a regression model. Finney et
al. [2] have employed an identical mixture regression model in the context of planning: regression
functions are strategies for a given planning problem. Elsewhere, the mixture of regressors model
has been shown to be useful in addressing covariate shift, i.e. the situation where the distribution of
the training set used for modeling does not match the distribution of the test set in which the model
will be used. Storkey and Sugiyama [3] model the covariate shift process in a mixture regression
setting by assuming a shift in the mixing proportions of the components.
In each of these problems, one must estimate k distinct latent regression functions; that is, estimate
functions whose values correspond to the mean of response variables, under the assumption that
the response variable is generated by a mixture of k components. This estimation problem can
be easily tackled if it is known to which component each response variable belongs (yielding k
independent regression problems). However in general the component of a given observation is not
known and is modeled as a latent variable. A commonly adopted approach for maximum-likelihood
estimation with latent variables (in this case, component membership for each response variable) is
Expectation-Maximization (EM) [4]. Essentially, EM iterates inference over the hidden variables
and parameter estimation of the resulting decoupled models until a local optimum is reached. We
are not aware of any approach to maximum likelihood estimation of a mixture of regression models
that is not based on the non-convex marginal likelihood objective of EM.
In this paper we present a convex relaxation of maximum a posteriori estimation of a mixture of regression models. Recently, convex relaxations have gained considerable attention in machine learning (c.f. [5, 6]). By exploiting convex duality, we reformulate a relaxation of mixture regression as
a semidefinite program. To achieve a scalable approach, however, we propose two reformulations
that admit fast algorithms. The first is a max-min optimization problem which can be solved by iterations of quasi-Newton steps and eigenvector computations. The second is a min-min optimization
problem solvable by iterations of closed-form solutions. We present experimental results comparing
our methods against EM, both in synthetic problems and real computer vision problems, and show
some benefits of a convex approach over a local solution method.
Related work Goldfeld and Quandt [7] introduced a mixture regression model with two components called switching regressions. The problem is re-cast into a single composite regression equation by introducing a switching variable, and a consistent estimator is produced by a continuous relaxation of this switching variable. An EM algorithm for switching regressions was first presented by Hosmer [8]. Späth [9] introduced a problem called clusterwise linear regression, consisting of finding a k-partition of the data such that a least squares regression criterion within those partitions becomes a minimum; a non-probabilistic algorithm similar to k-means was proposed. Subsequently, the general k-partition case employing EM was developed (c.f. [10, 11, 1]) and extended to various situations, including the use of variable-length trajectory data and non-parametric regression models. In the extreme, each individual could have its own specific regression model, coupled at a higher level by a mixture on regression parameters [12]; an EM algorithm is again employed to handle the hidden data, in this case group membership of parameters. The Hierarchical Mixtures of Experts model [13] also shares some similarity with mixture regression, in that it defines gating networks which contain mixtures of generalized linear models. In principle, our algorithmic advances can be applied to many of these formulations.
2 The Model
Notation In the following we use uppercase letters (X, Θ, Φ) to denote matrices and lowercase letters (x, y, w, θ, φ, c) to denote vectors. We use t to denote the sample size, n the dimensionality of the data, and k the number of mixture components. Λ(a) denotes a diagonal matrix whose diagonal is equal to the vector a, and diag(A) is a vector equal to the diagonal of matrix A. Finally, we let 1 denote the vector of all ones, use ∘ to denote the Hadamard (componentwise) matrix product, and use ⊗ to denote the Kronecker product.
We are given a matrix of regressors X ∈ R^{t×n} and a vector of regressands y ∈ R^{t×1}, where the response variable y is generated by a mixture of k components, but we do not know which component of the mixture generated each response y_i. We therefore use the matrix Θ ∈ {0,1}^{t×k}, Θ1 = 1, to denote the hidden assignment of mixture labels to each observation: Θ_ij = 1 iff observation i has mixture label j. We use x_i to denote the ith row of X (i.e. observation i as a row vector), θ_i to denote the ith row of Θ, and y_i to denote the ith element of y. We assume a linear generative model for y_i on a feature representation φ_i = θ_i ⊗ x_i, under i.i.d. sampling
y_i | x_i, θ_i = φ_i w + ε_i,    ε_i ~ N(0, σ²),    (1)

where w ∈ R^{(n·k)×1} is the vector of stacked parameter vectors of the components. We therefore have the likelihood

p(y_i | x_i, θ_i; w) = (1/√(2πσ²)) exp( −(1/2σ²)(φ_i w − y_i)² )    (2)

for a single observation i (recalling that φ_i depends on both x_i and θ_i). We further impose a Gaussian prior on w for capacity control. Also, one may want to constrain the size of the largest mixture component. For that purpose one could constrain the solutions Θ such that max(diag(Θ^T Θ)) ≤ βt, where βt is an upper bound on the size of the largest component (β is an upper bound on the proportion of the largest component). Combining these assumptions and adopting matrix notation, we obtain the optimization problem: minimize the negative log-posterior of the entire sample,

min_{Θ,w} [ Σ_i A(φ_i, w) − (1/σ²) y^T Φw + (1/2σ²) y^T y + (η/2) w^T w ],  where    (3)

A(φ_i, w) = (1/2σ²) w^T φ_i^T φ_i w + (1/2) log(2πσ²).    (4)

Here Φ is the matrix whose rows are the vectors φ_i = θ_i ⊗ x_i. Since X is observed, note that the optimization only runs over Θ in Φ. The constraint max(diag(Θ^T Θ)) ≤ βt may also be added.
Eliminating constant terms, our final task will be to solve

min_{Θ,w} (1/2σ²) w^T Φ^T Φ w − (1/σ²) y^T Φ w + (η/2) w^T w.    (5)

Although marginally convex in w, this objective is not jointly convex in w and Θ (and involves non-convex constraints on Θ owing to its discreteness). The lack of joint convexity makes the optimization difficult. The typical approach in such situations is to use an alternating descent strategy, such as EM. Instead, in the following we develop a convex relaxation for problem (5).
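Since EM is the baseline we compare against later, a minimal sketch of it for this model may help fix ideas. Everything below (the function name, the ridge jitter, the fixed iteration count) is our own illustration rather than code from the paper; it alternates posterior responsibilities with per-component weighted least squares and only reaches a local optimum of (5).

import numpy as np

def em_mixture_regression(X, y, k, sigma2=1.0, n_iter=100, seed=0):
    # EM for y_i ~ sum_j pi_j N(x_i^T w_j, sigma2): a local-optimum
    # baseline for problem (5), not the convex relaxation itself.
    rng = np.random.default_rng(seed)
    t, n = X.shape
    W = rng.standard_normal((k, n))      # one regressor per component
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r_ij propto pi_j N(y_i; x_i^T w_j, sigma2)
        R = -0.5 * (y[:, None] - X @ W.T) ** 2 / sigma2 + np.log(pi)
        R = np.exp(R - R.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True)
        # M-step: weighted least squares for each component
        for j in range(k):
            Xw = X * R[:, j:j + 1]
            W[j] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(n), Xw.T @ y)
        pi = R.mean(axis=0)
    return W, pi, R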
3 Semidefinite Relaxation
To obtain a convex relaxation we proceed in three steps. First, we dualize the first term in (5).
Lemma 1 Define A(Φw) := (1/2σ²) w^T Φ^T Φ w. Then the Fenchel dual of A(Φw) is A*(c) = (σ²/2) c^T c, and therefore A(Φw) = max_c c^T Φw − (σ²/2) c^T c.
Proof From the definition of the Fenchel dual we have A*(u) := max_w u^T w − (1/2σ²) w^T Φ^T Φ w. Differentiating with respect to w and equating to zero, we obtain u = (1/σ²) Φ^T Φ w. Therefore u is only realizable if there exists a c such that u = Φ^T c. Solving for A*(c) we obtain A*(c) = (σ²/2) c^T c, and therefore by the definition of Fenchel duality A(Φw) = max_c c^T Φw − (σ²/2) c^T c.
A second Lemma is required to further establish the relaxation:
Lemma 2 The following set inclusion holds
{??T : ? ? {0, 1}t?k , ?1 = 1, max(diag(?T ?)) ? ?t}
?
{M : M ? R
t?t
, tr M = t, ?tI < M < 0}.
(6)
(7)
Proof Let ??T be an element of the first set. First notice that [??T ]ij ? {0, 1} since ? ? {0, 1}t?k
and ?1 = 1 together imply that ? has a single 1 per row (and the rest are zeros). In particular
[??T ]ii = 1 for all i, i.e. tr M = t. Finally, note that (??T )? = ?(?T ?) where ?T ? is a
diagonal matrix and therefore its diagonal elements are the eigenvalues of ??T and in particular
max(diag(?T ?)) ? ?t means that the largest possible eigenvalue of ??T is ?t, which implies
?tI < ??T . Since ??T is by construction positive semidefinite, we have ?tI < ??T < 0.
Therefore ??T is also a member of the second set.
The above two lemmas allow us to state our first main result below.
Theorem 3 The following convex optimization problem

min_{M: tr M = t, βtI ⪰ M ⪰ 0}  max_c  [ −(σ²/2) c^T c − (1/2η) (y/σ² − c)^T (M ∘ XX^T) (y/σ² − c) ]    (8)

is a relaxation of (5), only in the sense that domain (6) is replaced by domain (7).
Proof We first use Lemma 1 in order to rewrite the objective (5) and obtain

min_{Θ,w} max_c [ c^T Φw − (σ²/2) c^T c − (1/σ²) y^T Φw + (η/2) w^T w ].    (9)

Second, using the distributivity of the (max, +) semi-ring, the max_c can be pulled out, and we then use Sion's minimax theorem [14], which allows us to interchange max_c with min_w:

min_Θ max_c min_w [ c^T Φw − (σ²/2) c^T c − (1/σ²) y^T Φw + (η/2) w^T w ],    (10)

and we can solve for w first, obtaining

w = (1/η) Φ^T (y/σ² − c).    (11)

Substituting (11) into the objective of (10) results in

min_Θ max_c [ −(σ²/2) c^T c − (1/2η) (y/σ² − c)^T ΦΦ^T (y/σ² − c) ].    (12)

We now note the critical fact that Θ only shows up in the expression ΦΦ^T which, from the definition φ_i = θ_i ⊗ x_i, is seen to be equivalent to ΘΘ^T ∘ XX^T. Therefore the minimization over Θ effectively takes place over ΘΘ^T (since X is observed), and we have that (12) can be rewritten as

min_{ΘΘ^T} max_c [ −(σ²/2) c^T c − (1/2η) (y/σ² − c)^T (ΘΘ^T ∘ XX^T) (y/σ² − c) ].    (13)

So far no relaxation has taken place. By finally replacing the constraint (6) with constraint (7) from Lemma 2, we obtain the claimed semidefinite relaxation.
4 Max-Min Reformulation
By upper bounding the inner maximization in (8) and applying a Schur complement, problem (8) can be re-expressed as a semidefinite program. Unfortunately, such a formulation is computationally expensive to solve, requiring O(t⁶) operations for typical interior-point methods. Instead, we can reformulate problem (8) to allow for a fast algorithmic approach, without introducing any additional relaxation. The basis of our development is the following classical result.
Theorem 4 ([15]) Let V ∈ R^{t×t}, V = V^T, have eigenvalues λ₁ ≥ λ₂ ≥ · · · ≥ λ_t. Let P be the matrix whose columns are the normalized eigenvectors of V, i.e. P^T V P = Λ((λ₁, . . . , λ_t)). Let q ∈ {1, . . . , t} and let P_q be the matrix comprised of the top q eigenvectors of P. Then

max_{M: tr(M) = q, I ⪰ M ⪰ 0} tr(M V^T) = Σ_{i=1}^q λ_i    (14)

and

argmax_{M: tr(M) = q, I ⪰ M ⪰ 0} tr(M V^T) ∋ P_q P_q^T.    (15)

Proof See [15] for a proof of a slightly more general result (Theorem 3.4).
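In code, the spectral solution of Theorem 4 is a single symmetric eigendecomposition; the sketch below is our own illustration and returns both the optimal value (14) and the maximizer (15).

import numpy as np

def max_trace(V, q):
    # Maximize tr(M V^T) over {M : tr M = q, I >= M >= 0}: the value is
    # the sum of the q largest eigenvalues, attained at M = Pq Pq^T.
    vals, vecs = np.linalg.eigh((V + V.T) / 2.0)   # ascending eigenvalues
    Pq = vecs[:, -q:]
    return vals[-q:].sum(), Pq @ Pq.T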
We will now show how the optimization over M in problem (8) can be cast in the terms of Theorem 4. This will turn out to be critical for the efficiency of the optimization procedure, since Theorem 4 describes a purely spectral optimization routine, which is far more efficient (O(t³)) than the standard interior-point methods used for semidefinite programming (O(t⁶)).
Proposition 5 Define ỹ := y/σ². The following optimization problem

max_c [ −(σ²/2) c^T c − (1/2η) max_{M: tr M = t, βtI ⪰ M ⪰ 0} tr( M (XX^T ∘ (ỹ − c)(ỹ − c)^T) ) ]    (16)

is equivalent to optimization problem (8).
Proof By Sion's minimax theorem [14], min_M and max_c in (8) can be interchanged:

max_c min_{M: tr M = t, βtI ⪰ M ⪰ 0} [ −(σ²/2) c^T c − (1/2η) (ỹ − c)^T (M ∘ XX^T) (ỹ − c) ],    (17)

which, by the distributivity of the (min, +) semi-ring, is equivalent to

max_c [ −(σ²/2) c^T c + (1/2η) min_{M: tr M = t, βtI ⪰ M ⪰ 0} −(ỹ − c)^T (M ∘ XX^T) (ỹ − c) ].    (18)

Now, define K := XX^T. The objective of the minimization in (18) can then be written as

−(ỹ − c)^T (M ∘ K)(ỹ − c) = −tr( (M ∘ K) (ỹ − c)(ỹ − c)^T )    (19)
= −Σ_ij (M_ij K_ij) [(ỹ − c)(ỹ − c)^T]_ij = −Σ_ij M_ij [K ∘ (ỹ − c)(ỹ − c)^T]_ij    (20)
= −tr( M (K ∘ (ỹ − c)(ỹ − c)^T) ) = −tr( M (XX^T ∘ (ỹ − c)(ỹ − c)^T) ).    (21)

Finally, by writing min_M −f(M) as −max_M f(M), we obtain the claim.
We can now exploit the result in Theorem 4 for the purpose of our optimization problem.
Proposition 6 Let q = max{u ∈ {1, . . . , t} : u ≤ β⁻¹}. The following optimization problem

max_c [ −(σ²/2) c^T c − (t/2ηq) max_{M̃: tr M̃ = q, I ⪰ M̃ ⪰ 0} tr( M̃ (XX^T ∘ (ỹ − c)(ỹ − c)^T) ) ]    (22)

is equivalent to optimization problem (16).
Algorithm 1
1: Input: σ, η, β, XX^T
2: Output: (c*, M*)
3: Initialize c = 0
4: repeat
5:   Solve for the maximum value in the inner maximization of (22) using (14)
6:   Solve the outer maximization in (22) using nonsmooth BFGS [16], obtain new c
7: until c has converged (c = c*)
8: At c*, solve for the maximizer(s) P_q in the inner maximization of (22) using (15)
9: if P_q is unique then
10:   return M* = P_q P_q^T and break
11: else
12:   Assemble the top l eigenvectors in P_l
13:   Solve (24)
14:   return M* = P_l Λ(λ*) P_l^T
15: end if
Proof The only differences between (16) and (22) are (i) the factor t/q in the second term of (22) and (ii) the constraints {M : tr M = t, βtI ⪰ M ⪰ 0} in (16) versus {M̃ : tr M̃ = q, I ⪰ M̃ ⪰ 0} in (22). These differences are simply the result of a proper rescaling of M. If we define M̃ := (q/t)M, then I ⪰ M̃ since q ≤ β⁻¹. We then have tr M̃ = q. The result follows.
And finally we have the second main result:
Theorem 7 Optimization problem (22) is equivalent to optimization problem (8).
Proof The equivalence follows directly from Propositions 5 and 6.
Note that, crucially, the objective in (22) is concave in c. Our strategy is now clear. Instead of solving (8), which demands O(t⁶) operations, we instead solve (22), which has as its inner optimization a maximum-eigenvalue problem, demanding only O(t³) operations. In the next section we describe an algorithm to jointly optimize M and c in (22), which essentially consists of alternating the efficient spectral solution over M with a subgradient optimization over c.
4.1 Max-Min Algorithm
Algorithm 1 describes how we solve optimization problem (22). The idea of the algorithm is the following. First, having noted that (22) is concave in c, we can simply initialize c arbitrarily and pursue a fast subgradient ascent algorithm (e.g., nonsmooth BFGS [16]). So at each step we solve the eigenvalue problem and recompute a subgradient, until convergence to c*. We then need to recover M* such that (c*, M*) is a saddle point (note that problem (22) is concave in c and convex in M). For that purpose we use (15). If M* = P_q P_q^T is such that P_q is unique, then we are done and the labeling solution of mixture membership is M* (subject to roundoff). If P_q is not unique, then we have multiplicity of eigenvalues and we need to proceed as follows. Define P_l = [p_1 . . . p_q . . . p_l], l > q, where each of the additional l − q eigenvectors has an associated eigenvalue equal to the eigenvalue of some of the previous q eigenvectors. We then have that at the saddle point there must exist a diagonal matrix Λ such that M* = P_l Λ P_l^T, subject to Λ ⪰ 0 and tr Λ = q (if this were not the case, there would be an ascent direction at c*, contradicting the hypothesis that c* is optimal). To find such a Λ, and therefore recover the correct M, we need to enforce that we are at the optimal c (c*), i.e. we must have

‖ (d/dc) [ −(σ²/2) c^T c − (t/2ηq) max_{M̃: tr M̃ = q, I ⪰ M̃ ⪰ 0} tr( M̃ (XX^T ∘ (ỹ − c)(ỹ − c)^T) ) ] ‖² = 0.    (23)

Such a condition can be pursued by minimizing the above norm, which gives a quadratic program:

min_{λ ⪰ 0, λ^T 1 = q} ‖ σ² c* + (t/ηq) (P_l Λ(λ) P_l^T ∘ XX^T)(c* − ỹ) ‖².    (24)

We can then recover the final solution (subject to roundoff) by M* = P_l Λ(λ*) P_l^T, where λ* is the optimizer of (24). The optimal value of (24) should be very close to zero (since it is the norm of the derivative at the point c*). The pseudocode for the algorithm appears in Algorithm 1.
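As a concrete illustration of steps 5 and 6, the following sketch performs one max-min iteration: the inner problem is solved spectrally via (14)-(15), and the outer concave problem is ascended with a plain (super)gradient step standing in for nonsmooth BFGS [16]. The function name, the fixed step size, and the t/(ηq) scaling (our reading of (22)) are our own assumptions.

import numpy as np

def max_min_step(c, K, ytil, sigma2, eta, q, t, lr=0.1):
    # One iteration of Algorithm 1 (a sketch). K = X X^T, ytil = y / sigma^2.
    r = ytil - c
    V = K * np.outer(r, r)                     # XX^T o (ytil-c)(ytil-c)^T
    vals, vecs = np.linalg.eigh((V + V.T) / 2.0)
    Pq = vecs[:, -q:]                          # top-q eigenvectors, Theorem 4
    M = Pq @ Pq.T                              # inner maximizer, Eq. (15)
    grad = -sigma2 * c + (t / (eta * q)) * (M * K) @ r   # supergradient of (22)
    return c + lr * grad, M

By Danskin's theorem, evaluating the gradient at the inner maximizer M yields a valid supergradient of the concave outer objective.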
Algorithm 2
1: Input: σ, η, β, XX^T
2: Output: (c*, M*)
3: Initialize M = Λ((1/(βt), . . . , 1/(βt)))
4: repeat
5:   Solve for the minimum value in the inner minimization of (25), obtain A
6:   Solve the outer minimization in (25) given the SVD of A using Theorem 4.1 of [18], obtain new M
7: until M has converged (M = M*)
8: Recover c* = −(1/σ²) diag(X(A*)^T)
5 Min-Min Reformulation
Although the max-min formulation appears satisfactory, the recent literature on multitask learning [17, 18] has developed an alternative strategy for bypassing general semidefinite programming. Specifically, work in this area has led to convex optimization problems expressed jointly over two matrix variables, where each step is an alternating min-min descent that can be executed in closed form or by a very fast algorithm. Although it is not immediately apparent that this algorithmic strategy is applicable to the problem at hand, with some further reformulation of (8) we discover that in fact the same min-min algorithmic approach can be applied to our mixture of regression problem.
Theorem 8 The following optimization problem

min_{M: I ⪰ M ⪰ 0, tr M = 1/β}  min_A  [ (1/σ²) y^T diag(XA^T) + (1/2σ²) diag(XA^T)^T diag(XA^T) + (η/2βt) tr(A^T M⁻¹ A) ]    (25)

is equivalent to optimization problem (8).
Proof

min_{M: I ⪰ M ⪰ 0, tr M = 1/β}  max_c  [ −(σ²/2) c^T c − (βt/2η) (c − ỹ)^T (M ∘ XX^T)(c − ỹ) ]    (26)

= min_{M: I ⪰ M ⪰ 0, tr M = 1/β}  max_{c, C: C = Λ(c−ỹ)X}  [ −(σ²/2) c^T c − (βt/2η) tr(C^T M C) ]    (27)

= min_{M: I ⪰ M ⪰ 0, tr M = 1/β}  min_A  max_{c, C}  [ −(σ²/2) c^T c − (βt/2η) tr(C^T M C) + tr(A^T C) − tr(A^T Λ(c − ỹ) X) ].    (28)

We can then solve for c and C, obtaining c = −(1/σ²) diag(XA^T) and C = (η/βt) M⁻¹ A. Substituting these two variables into (28) proves the claim.

5.1 Min-Min Algorithm
The problem (25) is jointly convex in A and M [14] and Algorithm 2 describes how to solve it.
It is important to note that although each iteration in Algorithm 2 is efficient, many iterations are
required to reach a desired tolerance, since it is only first-order convergent. It is observed in our
experiments that the concave-convex max-min approach in Algorithm 1 is more efficient simply
because it has the same iteration cost but exploits a quasi-Newton descent in the outer optimization,
which converges faster.
Remark 9 In practice, similarly to [17], a regularizer on M is added to avoid singularity, resulting in the following regularized objective function:

min_{M: I ⪰ M ⪰ 0, tr M = 1/β}  min_A  [ (1/σ²) y^T diag(XA^T) + (1/2σ²) diag(XA^T)^T diag(XA^T) + (η/2βt) tr(A^T M⁻¹ A) + ε tr(M⁻¹) ].    (29)

The problem is still jointly convex in M and A.
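A rough sketch of one alternating iteration follows. The M-update mirrors the closed-form results used in the multitask literature [17, 18] (a scaled matrix square root of AA^T); the A-step and the constant scalings follow our reading of (25) and (29), and for simplicity use a few gradient steps in place of the paper's exact inner solution. All names and step sizes are our own choices.

import numpy as np

def update_M(A, beta, eps=1e-6):
    # For fixed A, min_M tr(A^T M^{-1} A) s.t. M > 0, tr M = 1/beta is
    # solved by a scaled square root of A A^T (eps avoids singularity,
    # in the spirit of Remark 9).
    vals, vecs = np.linalg.eigh(A @ A.T + eps * np.eye(A.shape[0]))
    R = vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T
    return R / (beta * np.trace(R))

def min_min_step(A, X, y, sigma2, eta, beta, eps=1e-6, lr=1e-3, inner=25):
    # One outer iteration of Algorithm 2 (a sketch).
    t = X.shape[0]
    M = update_M(A, beta, eps)
    Minv = np.linalg.inv(M + eps * np.eye(t))
    for _ in range(inner):
        d = np.sum(X * A, axis=1)                 # d = diag(X A^T)
        G = ((y + d) / sigma2)[:, None] * X + (eta / (beta * t)) * Minv @ A
        A = A - lr * G
    return A, M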
6 Experiments
Our primary objective in formulating this convex approach to mixture regression is to tackle a difficult problem in video analysis (see below). However, to initially evaluate the different approaches we conducted some synthetic experiments. We generated 30 synthetic data points according to y_i = (θ_i ⊗ x_i)w + ε_i, with x_i ∈ R, ε_i ~ N(0, 1) and w ~ U(0, 1). The response variable y_i is assumed to be generated from a mixture of 5 components. We compared the quality of the relaxation in (22) to EM; the max-min algorithm is used in this experiment. For EM, 100 random restarts were used to help avoid poor local optima. The experiment is repeated 10 times. The error rates are 0.347 ± 0.086 and 0.280 ± 0.063 for EM and the convex relaxation, respectively. The visualization of the recovered membership for one of the runs is given in Figure 1. This demonstrates that the relaxation can retain much of the structure of the problem.
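A data generator matching this setup is a few lines of code; the sketch below is our own illustration, and the sampling distributions of x_i and of the hidden labels (standard normal and uniform, respectively) are assumptions not stated in the text.

import numpy as np

def make_synthetic(t=30, k=5, seed=0):
    # y_i = (theta_i kron x_i) w + eps_i with scalar x_i (so n = 1).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(t)
    labels = rng.integers(0, k, size=t)       # hidden component labels
    Theta = np.eye(k)[labels]                 # rows are indicator vectors
    w = rng.uniform(0.0, 1.0, size=k)         # stacked per-component weights
    y = (Theta * x[:, None]) @ w + rng.standard_normal(t)
    return x, y, labels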
6.1 Vision Experiment
In a dynamic scene, various static and moving objects are viewed by a possibly moving observer. For example, consider a moving, hand-held camera filming a scene of several cars driving down the road. Each car has a separate motion, and even the static objects, such as trees, appear to move in the video due to the self-motion of the camera. The task of segmenting each object according to its motion, estimating the parameters of each motion, and recovering the structure of the scene is known as the multibody structure and motion problem. This is a missing variable problem: if the motions have been segmented correctly, it is easy to estimate the parameters of each motion. Naturally, models employing EM have been proposed to tackle such problems (c.f. [19, 20]).
From epipolar geometry, given a pair of corresponding points p_i and q_i from two images (p_i, q_i ∈ R^{3×1}), we have the epipolar equation q_i^T F p_i = 0. The fundamental matrix F encapsulates information about the translation and rotation of the camera between the positions where the two images were captured, as well as camera calibration parameters such as the focal length. In a static scene, where only the camera is moving, there is only one fundamental matrix, which arises from the camera self-motion. However, if some of the scene points are moving independently under multiple different motions, there are several fundamental matrices. If there are k motion groups, the epipolar equation can be expressed in terms of the multibody fundamental matrix [21], i.e. ∏_{j=1}^k (q_i^T F_j p_i) = 0. An algebraic method was proposed to recover this matrix via Generalized PCA [21]. An alternative approach, which we follow here, is by Li [22], who casts the problem as a mixture of fundamental matrices, i.e. q_i^T (Σ_{j=1}^k θ_ij F_j) p_i = 0, where the membership variable θ_ij = 1 when image point i belongs to motion group j, and zero otherwise. Furthermore, since q_i^T F p_i = 0 is bilinear in the image points, we can rewrite it as x_i^T w_j = 0, with the column vectors x_i = [q_i^x p_i^x, q_i^x p_i^y, q_i^x p_i^z, . . . , q_i^z p_i^z]^T and w_j = vec(F_j^T). Thus, we end up with the following linear equation: Σ_{j=1}^k θ_ij x_i^T w_j = 0. The weight vector w_j for motion group j can be recovered easily if the indicator variable θ_ij is known.
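The lifting from point correspondences to the mixture-regression features x_i is a single Kronecker product per pair; a minimal sketch (our own illustration) is:

import numpy as np

def epipolar_features(P, Q):
    # Rows of P, Q are homogeneous points p_i, q_i in R^3. Returns x_i
    # with entries q_i[j] * p_i[k] (row-major), so that
    # q_i^T F p_i = x_i^T w when w is the row-major flattening of F.
    return np.einsum('ij,ik->ijk', Q, P).reshape(len(P), -1)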
We are interested in assessing the effectiveness of EM-based and convex relaxation-based methods for this multibody structure and motion problem. We used the Hopkins 155 dataset [23]. The experimental results are summarized in Table 1. All hyperparameters (for EM: σ and η; for the convex relaxations: σ, η, and β) were tuned, and the best performance for each learning algorithm is reported. The EM algorithm was run with 100 random restarts to help avoid poor local optima. In terms of computation time, max-min runs comparably to the EM algorithm, while min-min runs on the order of 3 to 4 times slower. As an illustration, on a Pentium 4 3.6 GHz machine, the elapsed times (in seconds) for the two cranes dataset are 16.880, 23.536, and 60.003 for EM, max-min and min-min, respectively. Rounding for the convex versions was done by k-means, which introduces some differences in the final results for the two algorithms. Noticeably, both max-min and min-min outperform the EM algorithm. Visualizations of the motion segmentation on the two cranes, three cars, and cars2 07 datasets are given in Figure 2 (for kanatani2 and articulated, please refer to the Appendix).
7 Conclusion
The mixture regression problem is pervasive in many applications and known approaches for parameter estimation rely on variants of EM, which naturally have issues with local minima. In this paper
we introduced a semidefinite relaxation for the mixture regression problem, thus obtaining a convex formulation which does not suffer from local minima. In addition we showed how to avoid the
7
use of expensive interior-point methods typically needed to solve semidefinite programs. This was
achieved by introducing two reformulations amenable to the use of faster algorithms. Experimental
results with synthetic data as well as with real computer vision data suggest the proposed methods
can substantially improve on EM while one of the methods in addition has comparable runtimes.
Table 1: Error rate on several datasets from the Hopkins 155

Data set       m     EM      Max-Min Convex   Min-Min Convex
three cars     173   0.0532  0.0289           0.0347
kanatani2      63    0.0000  0.0000           0.0000
cars2 07       212   0.3396  0.2642           0.2594
two cranes     94    0.0532  0.0213           0.0106
articulated    150   0.0000  0.0000           0.0000
Figure 1: Recovered membership on synthetic data ((a) ground truth, (b) EM, (c) convex relaxation). 30 data points are generated according to y_i = (θ_i ⊗ x_i)w + ε_i, with x_i ∈ R, ε_i ~ N(0, 1) and w ~ U(0, 1).
Figure 2: Resulting motion segmentations produced by the various techniques on the Hopkins 155 dataset; each row shows ground truth, EM, max-min convex, and min-min convex. 2(a)-2(d): two cranes, 2(e)-2(h): three cars, and 2(i)-2(l): cars2 07. In two cranes (first row), EM produces more segmentation errors at the left crane. In three cars (second row), the max-min method gives the least segmentation error (at the front side of the middle car) and EM produces more segmentation errors at the front side of the left car. The contrast of EM and the convex methods is apparent for cars2 07 (third row): the convex methods correctly segment the static grass field object, while EM makes mistakes. Further, the min-min method can almost perfectly segment the car in the middle of the scene from the static tree background.
References
[1] S. Gaffney and P. Smyth. Trajectory clustering with mixtures of regression models. In ACM SIGKDD, volume 62, pages 63-72, 1999.
[2] S. Finney, L. Kaelbling, and T. Lozano-Perez. Predicting partial paths from planning problem parameters. In Proceedings of Robotics: Science and Systems, Atlanta, GA, USA, June 2007.
[3] A. J. Storkey and M. Sugiyama. Mixture regression for covariate shift. In Schölkopf, editor, Advances in Neural Information Processing Systems 19, pages 1337-1344, 2007.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1-38, 1977.
[5] T. De Bie, N. Cristianini, P. Bennett, and E. Parrado-Hernández. Fast SDP relaxations of graph cut clustering, transduction, and other combinatorial problems. JMLR, 7:1409-1436, 2006.
[6] Y. Guo and D. Schuurmans. Convex relaxations for latent variable training. In Platt et al., editor, Advances in Neural Information Processing Systems 20, pages 601-608, 2008.
[7] S. M. Goldfeld and R. E. Quandt. Nonlinear methods in econometrics. Amsterdam: North-Holland Publishing Co., 1972.
[8] D. W. Hosmer. Maximum likelihood estimates of the parameters of a mixture of two regression lines. Communications in Statistics, 3(10):995-1006, 1974.
[9] H. Späth. Algorithm 39: clusterwise linear regression. Computing, 22:367-373, 1979.
[10] W. S. DeSarbo and W. L. Cron. A maximum likelihood methodology for clusterwise linear regression. Journal of Classification, 5(1):249-282, 1988.
[11] P. N. Jones and G. J. McLachlan. Fitting finite mixture models in a regression context. Austral. J. Statistics, 34(2):233-240, 1992.
[12] S. Gaffney and P. Smyth. Curve clustering with random effects regression mixtures. In AISTATS, 2003.
[13] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181-214, 1994.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[15] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62:321-357, 1993.
[16] J. Yu, S.V.N. Vishwanathan, S. Günter, and N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization. In ICML, 2008.
[17] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73:243-272, 2008.
[18] J. Chen, L. Tang, J. Liu, and J. Ye. A convex formulation for learning shared structures from multiple tasks. In ICML, 2009.
[19] N. Vasconcelos and A. Lippman. Empirical Bayesian EM-based motion segmentation. In CVPR, 1997.
[20] P. Torr. Geometric motion segmentation and model selection. Philosophical Trans. of the Royal Society of London, 356(1740):1321-1340, 1998.
[21] R. Vidal, Y. Ma, S. Soatto, and S. Sastry. Two-view multibody structure from motion. IJCV, 68(1):7-25, 2006.
[22] H. Li. Two-view motion segmentation from linear programming relaxation. In CVPR, 2007.
[23] http://www.vision.jhu.edu/data/hopkins155/.
| 3822 |@word multitask:1 version:1 eliminating:1 middle:2 proportion:2 norm:2 crucially:1 prominence:1 jacob:1 tr:34 liu:1 series:1 tuned:1 recovered:3 com:1 comparing:1 bie:1 must:3 written:1 john:1 partition:3 update:1 grass:1 generative:1 pursued:1 ith:3 short:1 iterates:1 recompute:1 mathematical:1 ijcv:1 fitting:1 p1:1 planning:3 sdp:1 multi:1 alberta:1 xti:2 becomes:1 multibody:4 xx:14 notation:2 discover:1 estimating:1 pursue:1 eigenvector:1 substantially:1 developed:2 finding:1 ti:8 concave:4 tackle:2 demonstrates:1 platt:1 control:1 appear:1 segmenting:1 positive:1 local:6 mistake:1 switching:4 bilinear:1 path:1 au:1 equating:1 equivalence:1 co:1 roundoff:2 unique:3 camera:6 practice:2 lippman:1 procedure:1 pontil:1 foundational:1 area:2 empirical:1 jhu:1 composite:1 boyd:1 road:1 suggest:1 interior:3 close:1 ga:1 selection:1 context:2 applying:1 writing:1 optimize:1 equivalent:6 crane:6 demonstrated:1 missing:1 www:1 attention:1 independently:1 convex:41 pyi:1 immediately:1 estimator:1 vandenberghe:1 handle:1 construction:1 ualberta:1 smyth:3 programming:5 hypothesis:1 storkey:2 element:3 expensive:2 econometrics:1 cut:1 observed:3 tib:1 solved:1 wj:3 caetano:1 movement:1 dempster:1 convexity:1 cristianini:1 dynamic:1 solving:2 rewrite:2 segment:2 purely:1 efficiency:1 basis:1 easily:2 joint:1 various:3 regularizer:1 stacked:1 articulated:2 distinct:1 fast:7 describe:1 london:1 labeling:1 whose:4 apparent:2 solve:15 cvpr:2 otherwise:1 statistic:3 jointly:5 laird:1 final:3 sequence:2 eigenvalue:9 propose:1 product:2 hadamard:1 ath:2 combining:1 mixing:1 iff:1 achieve:1 olkopf:1 exploiting:2 convergence:1 cluster:1 optimum:3 assessing:1 produce:2 ring:2 converges:1 object:5 help:2 develop:2 ij:10 recovering:1 c:1 involves:2 implies:1 australian:1 direction:1 correct:1 owing:1 subsequently:1 australia:1 noticeably:1 clusterwise:3 andez:1 finney:2 proposition:3 singularity:1 pl:7 bypassing:1 hold:1 ground:4 exp:1 algorithmic:4 claim:2 substituting:2 driving:1 interchanged:1 optimizer:1 purpose:3 estimation:7 applicable:1 label:2 combinatorial:1 largest:5 maxm:1 minimization:4 mclachlan:1 gaussian:1 avoid:4 sion:2 pervasive:1 pxi:1 june:1 methodological:1 likelihood:7 pentium:1 contrast:1 sigkdd:1 realizable:1 sense:1 posteriori:2 inference:1 membership:6 lowercase:1 eliminate:1 entire:1 typically:1 initially:1 hidden:3 quasi:4 interested:1 issue:1 dual:2 classification:1 development:1 initialize:3 marginal:1 equal:3 aware:1 field:1 vasconcelos:1 having:1 sampling:1 runtimes:1 identical:1 evgeniou:1 jones:1 novi:1 yu:1 icml:2 nonsmooth:3 national:1 individual:1 replaced:1 argmax:1 consisting:2 geometry:1 recalling:1 atlanta:1 gaffney:3 introduces:1 mixture:36 extreme:1 semidefinite:11 yielding:1 uppercase:1 perez:1 held:1 xat:3 amenable:1 overton:1 partial:1 unter:1 minw:1 decoupled:1 tree:2 incomplete:1 re:2 desired:1 fenchel:3 column:2 modeling:1 kij:2 assignment:1 maximization:6 cost:1 introducing:2 addressing:1 kaelbling:1 comprised:1 rounding:1 conducted:1 front:2 reported:1 synthetic:5 fundamental:5 retain:1 probabilistic:1 together:1 hopkins:3 again:1 possibly:1 admit:2 expert:2 derivative:1 return:2 rescaling:1 li:2 bfgs:2 de:1 summarized:1 depends:1 break:1 observer:1 closed:3 view:2 reached:1 recover:5 hopkins155:1 minimize:1 square:1 qk:1 who:1 correspond:1 t3:2 bayesian:1 produced:2 comparably:1 marginally:1 trajectory:4 minm:2 converged:2 maxc:5 reach:1 definition:3 against:2 naturally:2 associated:2 proof:9 static:5 dataset:3 lim:1 ut:1 dimensionality:1 car:9 
segmentation:9 routine:1 appears:2 higher:1 restarts:2 hosmer:2 response:7 follow:1 methodology:1 formulation:5 done:2 furthermore:1 xa:4 until:4 hand:2 replacing:1 dualize:1 nonlinear:1 maximizer:1 lack:1 quality:1 usa:1 effect:1 ye:1 contain:1 requiring:1 normalized:1 y2:1 lozano:1 soatto:1 alternating:4 symmetric:1 satisfactory:1 self:2 lastname:1 please:1 noted:1 criterion:1 generalized:2 motion:20 fj:2 image:4 recently:1 womersley:1 rotation:1 pseudocode:1 volume:1 refer:1 cambridge:1 vec:1 focal:1 sastry:1 similarly:1 inclusion:1 sugiyama:2 pq:9 moving:5 calibration:1 austral:1 similarity:1 posterior:1 recent:1 showed:1 belongs:2 claimed:1 arbitrarily:1 yi:9 seen:1 minimum:4 additional:2 captured:1 impose:1 employed:2 ii:2 semi:2 multiple:2 segmented:1 match:1 faster:2 believed:1 schraudolph:1 qi:3 scalable:1 regression:35 variant:1 cron:1 essentially:2 expectation:2 vision:4 iteration:5 adopting:1 achieved:1 robotics:1 addition:2 want:1 background:1 else:1 sch:1 rest:1 ascent:2 subject:3 member:1 desarbo:1 schur:1 effectiveness:1 jordan:1 easy:1 perfectly:1 inner:5 idea:1 shift:4 expression:1 pca:1 suffer:1 algebraic:1 proceed:2 remark:1 useful:1 clear:1 eigenvectors:5 pqt:3 http:1 outperform:1 exist:1 notice:1 per:1 correctly:2 group:4 reformulation:5 discreteness:1 graph:1 relaxation:26 subgradient:3 sum:1 run:5 letter:2 place:2 almost:1 appendix:1 comparable:1 bound:2 ct:9 tackled:1 convergent:1 quadratic:1 assemble:1 kronecker:1 constraint:5 constrain:2 vishwanathan:1 scene:7 generates:1 min:63 formulating:1 optimality:1 according:3 alternate:1 plt:4 poor:2 describes:3 slightly:1 em:33 encapsulates:1 multiplicity:1 taken:1 computationally:1 equation:4 visualization:2 hern:1 turn:1 r3:1 needed:1 know:1 end:2 reformulations:3 adopted:1 operation:2 rewritten:1 vidal:1 hierarchical:2 spectral:3 enforce:1 alternative:1 slower:1 denotes:1 top:2 clustering:3 publishing:1 newton:4 qit:4 exploit:2 prof:1 establish:1 classical:1 society:2 objective:8 move:1 added:2 strategy:5 parametric:1 rt:3 primary:1 diagonal:6 separate:1 capacity:1 outer:3 nicta:2 assuming:1 length:2 modeled:2 reformulate:3 illustration:1 minimizing:2 difficult:2 unfortunately:1 executed:1 negative:1 proper:1 fjt:1 upper:3 observation:5 datasets:2 finite:1 descent:4 erio:1 situation:3 extended:1 communication:1 dc:1 canada:1 introduced:3 complement:1 cast:3 required:2 pair:1 componentwise:1 philosophical:1 elapsed:1 trans:1 below:2 firstname:1 program:4 max:38 including:1 video:4 epipolar:3 royal:2 critical:2 demanding:1 rely:1 regularized:1 predicting:1 solvable:1 indicator:1 minimax:2 improve:1 imply:1 coupled:1 prior:1 literature:1 geometric:1 relative:1 distributivity:2 versus:1 consistent:1 rubin:1 principle:1 editor:2 share:1 pi:6 translation:1 row:8 elsewhere:1 qix:3 repeat:2 t6:3 side:2 allow:2 pulled:1 differentiating:1 benefit:1 tolerance:1 ghz:1 curve:1 dale:2 interchange:1 commonly:1 regressors:3 employing:2 far:2 assumed:1 xi:12 continuous:1 latent:4 parrado:1 table:2 ca:1 obtaining:3 northholland:1 schuurmans:2 domain:2 diag:13 sp:2 pk:2 main:2 aistats:1 bounding:1 hyperparameters:1 quadrianto:1 contradicting:1 repeated:1 canberra:1 edmonton:1 transduction:1 cyclone:1 position:1 jmlr:1 third:1 tang:1 theorem:11 down:1 specific:1 covariate:3 gating:1 exists:1 consist:1 effectively:1 gained:1 demand:1 chen:1 simply:3 saddle:2 expressed:3 amsterdam:1 maxw:1 mij:2 truth:4 acm:1 ma:1 viewed:1 shared:1 bennett:1 considerable:1 typical:2 specifically:1 torr:1 wt:3 lemma:6 called:2 duality:3 experimental:3 svd:1 
guo:1 arises:1 evaluate:2 argyriou:1 |
3,116 | 3,823 | A Stochastic approximation method for inference
in probabilistic graphical models
Peter Carbonetto
Dept. of Human Genetics
University of Chicago
Chicago, IL, U.S.A.
pcarbo@uchicago.edu

Matthew King
Dept. of Botany
University of British Columbia
Vancouver, B.C., Canada
king@zoology.ubc.ca

Firas Hamze
D-Wave Systems
Burnaby, B.C., Canada
fhamze@dwavesys.com
Abstract
We describe a new algorithmic framework for inference in probabilistic models,
and apply it to inference for latent Dirichlet allocation (LDA). Our framework
adopts the methodology of variational inference, but unlike existing variational
methods such as mean field and expectation propagation it is not restricted to
tractable classes of approximating distributions. Our approach can also be viewed
as a "population-based" sequential Monte Carlo (SMC) method, but unlike existing SMC methods there is no need to design the artificial sequence of distributions. Significantly, our framework offers a principled means to exchange
the variance of an importance sampling estimate for the bias incurred through
variational approximation. We conduct experiments on a difficult inference problem in population genetics, a problem that is related to inference for LDA. The
results of these experiments suggest that our method can offer improvements in
stability and accuracy over existing methods, and at a comparable cost.
1 Introduction
Over the past several decades, researchers in many different fields?statistics, economics, physics,
genetics and machine learning?have focused on coming up with more accurate and more efficient
approximate solutions to intractable probabilistic inference problems. To date, there are three
widely-explored approaches to approximate inference in probabilistic models: obtaining a Monte
Carlo estimate by simulating a Markov chain (MCMC); obtaining a Monte Carlo estimate by
drawing samples from a distribution other than the target then reweighting the samples to account
for any discrepancies (importance sampling); and variational inference, in which the original
integration problem is transformed into an optimization problem.
The variational approach in particular has attracted wide interest in the machine learning community, and this interest has lead to a number of important innovations in approximate inference?
some of these more recent developments are described in the dissertations of Beal [3], Minka [22],
Ravikumar [27] and Wainwright [31]. The key idea behind variational inference is to come up
with a family of approximating distributions p̂(x; λ) that have "nice" analytic properties, then to optimize some criterion in order to find the distribution parameterized by λ that most closely
matches the target posterior p(x). All variational inference algorithms, including belief propagation and its generalizations [32], expectation propagation [22] and mean field [19], can be derived
from a common objective, the Kullback-Leibler (K-L) divergence [9]. The major drawback of
variational methods is that the best approximating distribution may still impose an unrealistic or
questionable factorization, leading to excessively biased estimates (see Fig. 1, left-hand side).
In this paper, we describe a new variational method that does not have this limitation: it adopts the
methodology of variational inference without being restricted to tractable classes of approximate
distributions (see Fig. 1, right-hand side). The catch is that the variational objective (the K-L
divergence) is difficult to optimize because its gradient cannot be computed exactly. So to descend
along the surface of the variational objective, we propose to employ stochastic approximation [28]
with Monte Carlo estimates of the gradient, and update these estimates over time with sequential
Monte Carlo (SMC) [12]?hence, a stochastic approximation method for probabilistic inference.
Large gradient descent steps may quickly lead to a degenerate sample, so we introduce a mechanism
that safeguards the variance of the Monte Carlo estimate at each iteration (Sec. 3.5). This variance
safeguard mechanism does not make the standard effective sample size (ESS) approximation [14],
hence it is likely to more accurately monitor the variance of the sample.
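For reference, the ESS approximation that our safeguard avoids is itself only a few lines; the sketch below (our own illustration) computes it from unnormalized log-weights and serves as the baseline diagnostic being improved upon.

import numpy as np

def effective_sample_size(logw):
    # Standard ESS approximation [14]: 1 / sum of squared normalized
    # weights; ranges from 1 (degenerate) to len(logw) (uniform).
    w = np.exp(logw - np.max(logw))
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)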
Indirectly, the variance safeguard provides a way to obtain an
estimator that has low variance in exchange for (hopefully small)
bias. To our knowledge, our algorithm is the first general means
of achieving such a trade-off and, in so doing, it draws meaningful connections between Monte Carlo and variational methods.
The advantage of our stochastic approximation method with respect to other variational methods is rather straightforward: it does not restrict the class of variational densities by making assumptions about their structure. However, the advantage of our approach compared to Monte Carlo methods such as annealed importance sampling (AIS) [24] is less obvious. One key advantage is that there is no need to design the sequence of SMC distributions, as it is a direct product of the algorithm's derivation (Sec. 3). It is our conjecture that this automatic selection, when combined with the variance safeguard, is more efficient than setting the sequence by hand, say, via tempered transitions [12, 18, 24]. The population genetics experiments we conduct in Sec. 4 provide some support for this claim.
We illustrate our approach on the problem of inferring population structure from a cohort of genotyped sequences using the mixture model of Pritchard et al. [26]. We show in Sec. 4 that Markov chain Monte Carlo (MCMC) is prone to producing very different answers in independent simulations, and that it fails to adequately capture the uncertainty in its solutions. For many population genetics applications, such as wildlife conservation [8], it is crucial to accurately characterize the confidence in a solution. Since variational methods employing mean field approximations [4, 30] tend to be overconfident, they are poorly suited for this problem. (This has generally not been an issue for semantic text analysis [4, 15].) As we show, SMC with a uniform sequence of tempered distributions fares little better than MCMC. The implementation of our approach on the population structure model demonstrates improvements in both accuracy and reliability over MCMC and SMC alternatives, and at a comparable computational cost.

Figure 1: The guiding principle behind standard variational methods (top) is to find the approximating density p̂(x; λ) that is closest to the distribution of interest p(x), yet remains within the defined set of tractable probability distributions. In our approach (bottom), the class of approximating densities always coincides with the target p(x).
The latent Dirichlet allocation (LDA) model [4] is very similar to the population structure model
of [26], under the assumption of fixed Dirichlet priors. Since LDA is already familiar to the
machine learning audience, it serves as a running example throughout our presentation.
1.1 Related work
The interface of optimization and simulation strategies for inference has been explored in a number
of papers, but none of the existing literature resembles the approach proposed in this paper. De
Freitas et al. [11] use a variational approximation to formulate a Metropolis-Hastings proposal. Recent work on adaptive MCMC [1] combines ideas from both stochastic approximation and MCMC
to automatically learn better proposal distributions. Our work is also unrelated to the paper [20]
with a similar title, where stochastic approximation is applied to improving the Wang-Landau
algorithm. Younes [33] employs stochastic approximation to compute the maximum likelihood
estimate of an undirected graphical model. Also, the cross-entropy method [10] uses importance
sampling and optimization for inference, but exhibits no similarity to our work beyond that.
2 Latent Dirichlet allocation
Latent Dirichlet allocation (LDA) is a generative model of a collection of text documents, or corpus. Its two key features are: the order of the words is unimportant, and each document is drawn from a mixture of topics. Each document d = 1, . . . , D is expressed as a "bag" of words, and each word w_di = j refers to a vocabulary item j ∈ {1, . . . , W}. (Here we assume each document has the same length N.) Also, each word has a latent topic indicator z_di ∈ {1, . . . , K}. Observing the jth vocabulary item in the kth topic occurs with probability β_kj. The word proportions for each topic are generated according to a Dirichlet distribution with fixed prior η. The latent topic indicators are generated independently according to p(z_di = k | θ_d) ∝ θ_dk, and θ_d in turn follows a Dirichlet with prior α. The generative process we just described defines a joint distribution over the observed data w and unknowns x = {θ, β, z} given the hyperparameters {α, η}:

p(w, x | α, η) = ∏_{k=1}^K p(β_k | η) × ∏_{d=1}^D p(θ_d | α) × ∏_{d=1}^D ∏_{i=1}^N p(w_di | z_di, β) p(z_di | θ_d).    (1)

The directed graphical model is given in Fig. 2.
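A minimal sketch of this generative process follows; it is our own illustration, and the use of symmetric Dirichlet priors is an assumption made for simplicity (the model allows vector-valued priors).

import numpy as np

def sample_lda_corpus(D, N, K, W, alpha, eta, seed=0):
    # Draw a corpus from the LDA generative process of Eq. (1).
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(W, eta), size=K)      # topic-word proportions
    theta = rng.dirichlet(np.full(K, alpha), size=D)   # document-topic proportions
    z = np.stack([rng.choice(K, size=N, p=theta[d]) for d in range(D)])
    w = np.stack([[rng.choice(W, p=beta[z[d, i]]) for i in range(N)]
                  for d in range(D)])
    return w, z, theta, beta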
Implementations of approximate inference in LDA include MCMC [15, 26] and variational inference with a mean field approximation [4, 30]. The advantages of our inference approach become clear when it is measured up against the variational mean field algorithm of [4]: first, we make no additional assumptions regarding the model's factorization; second, the number of variational parameters is independent of the size of the corpus, so there is no need to resort to coordinate-wise updates that are typically slow to converge.

Figure 2: Directed graphical model for LDA. Shaded nodes represent observations or fixed quantities.
3 Description of algorithm
The goal is to calculate the expectation of a function φ(x) with respect to the target distribution p(x):

E_{p(·)}[φ(X)] = ∫ φ(x) p(x) dx.    (2)
In LDA, the target density p(x) is the posterior of x = {θ, β, z} given w, derived via Bayes' rule. From the importance sampling identity [2], we can obtain an unbiased estimate of (2) by drawing n samples from a proposal q(x) and evaluating importance weights w(x) = p(x)/q(x). (Usually p(x) can only be evaluated up to a normalizing constant, in which case the asymptotically unbiased normalized importance sampling estimator [2] is used instead.) The Monte Carlo estimator is

E_{p(·)}[φ(X)] ≈ (1/n) Σ_{s=1}^n w(x^(s)) φ(x^(s)).    (3)

Unless great care is taken in designing the proposal q(x), the Monte Carlo estimator will exhibit astronomically high variance for all but the smallest problems.
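The normalized variant of (3) is the one used in practice when p(x) is known only up to a constant; a minimal sketch (our own illustration) is:

import numpy as np

def nis_estimate(phi, logp, logq, samples):
    # Normalized importance sampling: weights are exponentiated in a
    # numerically stable way and normalized to sum to one, so unknown
    # normalizing constants of p cancel (asymptotically unbiased).
    logw = np.array([logp(x) - logq(x) for x in samples])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return sum(wi * phi(x) for wi, x in zip(w, samples))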
Instead, we construct a Monte Carlo estimate (3) by replacing p(x) with an alternate target p̂(x; λ) that resembles it, so that all importance weights are evaluated with respect to this alternate target. (We elaborate on the exact form of p̂(x; λ) in Sec. 3.1.) This new estimator is biased, but we minimize the bias by solving a variational optimization problem.

• Draw samples from the initial density p̂(x; λ₁).
• for k = 2, 3, 4, . . .
  - Stochastic approximation step: take the gradient descent step λ_k = λ_{k−1} − ε_k g_k, where g_k is a Monte Carlo estimate of the gradient of the K-L divergence, and ε_k is the variance-safeguarded step size.
  - SMC step: update the samples and importance weights to reflect the new density p̂(x; λ_k).

Figure 3: Algorithm sketch.

Our algorithm has a dual interpretation: it can be interpreted as a stochastic approximation algorithm for solving a variational optimization problem, in which the iterates are the parameter vectors λ_k, and it can be equally viewed as a sequential Monte Carlo (SMC) method [12], in which each distribution p̂(x; λ_k) in the sequence is chosen dynamically based on samples from the previous iteration. The basic idea is
sequence is chosen dynamically based on samples from the previous iteration. The basic idea is
spelled out in Fig. 3. At each iteration, the algorithm selects a new target p?(x; ?k ) by optimizing
the variational objective. Next, the samples are revised in order to compute the stochastic gradient
gk+1 at the next iteration. Since SMC is effectively a framework for conducting importance sampling over a sequence of distributions, we describe a ?variance safeguard? mechanism (Sec. 3.5)
that directly regulates increases in variance at each step by preventing the iterates ?k from moving
too quickly. It is in this manner that we achieve a trade-off between bias and variance.
Since this is a stochastic approximation method, asymptotic convergence of ?k to a minimizer of
the objective is guaranteed under basic theory of stochastic approximation [29]. As we elaborate
below, this implies that p?(x; ?k ) will converge almost surely to the target distribution p(x) as k
approaches infinity. And asymptotic variance results from the SMC literature [12] tell us that the
Monte Carlo estimates will converge almost surely to the target expectation (2) so long as p?(x; ?k )
approaches p(x). A crucial condition is that the stochastic estimates of the gradient be unbiased.
There is no way to guarantee unbiased estimates under a finite number of samples, so convergence
holds only as the number of iterations and number of samples both approach infinity.
To recap, the probabilistic inference recipe we propose has five main ingredients: one, a family
of approximating distributions that admits the target (Sec. 3.1); two, a variational optimization
problem framed using the K-L divergence measure (Sec. 3.2); three, a stochastic approximation
method for finding a solution to the variational optimization problem (Sec. 3.3); four, the implementation of a sequential Monte Carlo method for constructing stochastic estimates of the gradient
of the variational objective (Sec 3.4); and five, a way to safeguard the variance of the importance
weights at each iteration of the stochastic approximation algorithm (Sec. 3.5).
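The loop of Fig. 3 can be written as straight-line code; in the sketch below, mc_gradient, safeguarded_step_size and smc_update are hypothetical placeholder names for the estimate of Eq. (7), the Sec. 3.5 safeguard, and the SMC reweighting/move step; none of these names come from the paper.

def stochastic_variational_smc(lmbda, samples, logw, n_iter):
    # Alternate a safeguarded stochastic-gradient step on the K-L
    # objective with an SMC move of the particle population.
    for k in range(n_iter):
        g = mc_gradient(lmbda, samples, logw)            # estimate of (7)
        eps = safeguarded_step_size(g, samples, logw)    # Sec. 3.5 safeguard
        lmbda = lmbda - eps * g
        samples, logw = smc_update(samples, logw, lmbda) # reweight / move
    return lmbda, samples, logw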
3.1 The family of approximating distributions
The first implementation step is the design of a family of approximating distributions $\tilde p(x;\theta)$ parameterized by vector $\theta$. In order to devise a useful variational inference procedure, the usual strategy is to restrict the class of approximating distributions to those that factorize in an analytically convenient fashion [4, 19] or, in the dual formulation, to introduce an approximate (but tractable) decomposition of the entropy [32]. Here, we impose no such restrictions on tractability; refer to Fig. 1. We allow any family of approximating distributions so long as it satisfies these three conditions: 1) there is at least one $\theta = \theta_1$ such that samples can be drawn from $\tilde p(x;\theta_1)$; 2) there is a $\theta = \theta^\star$ that recovers the target $\tilde p(x;\theta^\star) = p(x)$, hence an unbiased estimate of (2); and 3) the densities are members of the exponential family [13] expressed in standard form
$\tilde p(x;\theta) = \exp\{\langle a(x), \theta\rangle - c(\theta)\}$, (4)
in which $\langle\cdot,\cdot\rangle$ is an inner product, the vector-valued function $a(x)$ is the statistic of $x$, and $\theta$ is the natural or canonical parameterization. The log-normalization factor $c(\theta) \equiv \log \int \exp\langle a(x), \theta\rangle\,dx$ ensures that $\tilde p(x;\theta)$ represents a proper probability. We further assume that the random vector $x$ can be partitioned into two sets $A$ and $B$ such that it is always possible to draw samples from the conditionals $\tilde p(x_A \mid x_B;\theta)$ and $\tilde p(x_B \mid x_A;\theta)$. Hidden Markov models, mixture models, continuous-time Markov processes, and some Markov random fields are all models that satisfy this condition. This extra condition could be removed without great difficulty, but doing so would add several complications to the description of the algorithm. The restriction to the exponential family is not a strong one, as most conventionally-studied densities can be written in the form (4).
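As a concrete illustration of the standard form (4) (our example, not from the original text), a univariate Gaussian $N(\mu, \sigma^2)$ has statistic $a(x) = (x, x^2)$ and natural parameters $\theta = (\mu/\sigma^2,\, -1/(2\sigma^2))$:
\[
p(x) = \exp\{\theta_1 x + \theta_2 x^2 - c(\theta)\}, \qquad
c(\theta) = -\frac{\theta_1^2}{4\theta_2} + \frac{1}{2}\log\frac{\pi}{-\theta_2}.
\]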
For LDA, we chose a family of approximating densities of the form
$\tilde p(x;\theta) = \exp\Big\{ \sum_{d=1}^{D}\sum_{k=1}^{K} (\eta_k + n_{dk} - 1)\log\theta_{dk} + \sum_{k=1}^{K}\sum_{j=1}^{W} (\hat\eta_{kj} - 1)\log\beta_{kj} + \lambda \sum_{k=1}^{K}\sum_{j=1}^{W} m_{kj}\log\beta_{kj} + \delta \sum_{k=1}^{K}\sum_{j=1}^{W} (c_j - m_{kj})\log\beta_{kj} - c(\theta) \Big\}$, (5)
where $m_{kj} \equiv \sum_d \sum_i \delta_k(z_{di})\,\delta_j(w_{di})$ counts the number of times the jth word is assigned to the kth topic, $n_{dk} \equiv \sum_i \delta_k(z_{di})$ counts the number of words assigned to the kth topic in the dth document, and $c_j \equiv \sum_d \sum_i \delta_j(w_{di})$ is the number of times the jth vocabulary item is observed. The natural parameters are $\theta = \{\hat\eta, \lambda, \delta\}$, with $\lambda \geq 0$. The target posterior $\tilde p(x;\theta^\star) \propto p(w, x \mid \eta, \nu)$ is recovered by setting $\lambda = 1$, $\delta = 0$ and $\hat\eta = \nu$. A sampling density with a tractable expression for $c(\theta)$ is recovered whenever we set $\lambda$ equal to $\delta$. The graphical structure of LDA (Fig. 2) allows us to draw samples from the conditionals $\tilde p(\theta, \beta \mid z)$ and $\tilde p(z \mid \theta, \beta)$. Loosely speaking, this choice is meant to strike a balance between the mean field approximation [4] (with parameters $\hat\eta_{kj}$) and the tempered distribution (with "local" temperature parameters $\lambda$ and $\delta$).
3.2 The variational objective
The Kullback-Leibler (K-L) divergence [9] asymmetrically measures the distance between the target distribution $p(x) = \tilde p(x;\theta^\star)$ and approximating distribution $\tilde p(x;\theta)$,
$F(\theta) = \langle \mathbb{E}_{\tilde p(\cdot;\theta)}[a(X)],\, \theta - \theta^\star\rangle + c(\theta^\star) - c(\theta)$, (6)
the optimal choice being $\theta = \theta^\star$. This is our variational objective. The fact that we cannot compute $c(\theta)$ poses no obstacle to optimizing the objective (6); through application of basic properties of the exponential family, the gradient vector works out to be the matrix-vector product
$\nabla F(\theta) = \mathrm{Var}_{\tilde p(\cdot;\theta)}[a(X)]\,(\theta - \theta^\star)$, (7)
where $\mathrm{Var}[a(X)]$ is the covariance matrix of the statistic $a(x)$. The real obstacle is the presence of an integral in (7) that is most likely intractable. With a collection of samples $x^{(s)}$ with importance weights $w^{(s)}$, for $s = 1, \ldots, n$, that approximate $\tilde p(x;\theta)$, we have the Monte Carlo estimate
$\nabla F(\theta) \approx \sum_{s=1}^{n} w^{(s)}\, (a(x^{(s)}) - \bar a)(a(x^{(s)}) - \bar a)^{T}\, (\theta - \theta^\star)$, (8)
where $\bar a \equiv \sum_s w^{(s)} a(x^{(s)})$ denotes the Monte Carlo estimate of the mean statistic. Note that these samples $\{x^{(s)}, w^{(s)}\}$ serve to estimate both the expectation (2) and the gradient (7). The algorithm's performance hinges on a good search direction, so it is worth our while to reduce the variance of the gradient measurements when possible via Rao-Blackwellization [6]. Since we no longer have an exact value for the gradient, we appeal to the theory of stochastic approximation.
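A small Python sketch of the estimate (8), assuming the statistics $a(x^{(s)})$ are stacked as rows of a matrix and the weights are already normalized (names are illustrative):

import numpy as np

def grad_variational_objective(a_stats, weights, theta, theta_star):
    # a_stats: n x p array of a(x^(s)); weights: normalized importance weights.
    a_bar = weights @ a_stats                          # Monte Carlo mean statistic
    centered = a_stats - a_bar
    cov = (centered * weights[:, None]).T @ centered   # weighted covariance of a(X)
    return cov @ (theta - theta_star)                  # gradient estimate (8)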
3.3 Stochastic approximation
Instead of insisting on making progress toward a minimizer of $f(\theta)$ at every iteration, as in gradient descent, stochastic approximation only requires that progress be achieved on average. The Robbins-Monro algorithm [28] iteratively adjusts the control variable $\theta$ according to
$\theta_{k+1} = \theta_k - a_k g_k$, (9)
where $g_k$ is a noisy observation of $f(\theta_k)$, and $\{a_k\}$ is a sequence of step sizes. Provided the sequence of step sizes satisfies certain conditions, this algorithm is guaranteed to converge to the solution $f(\theta^\star) = 0$; see [29]. In our case, $f(\theta) = \nabla F(\theta) = 0$ is the first-order condition for an unconstrained minimum. Due to poor conditioning, we advocate replacing the gradient descent search direction $\Delta\theta_k = -g_k$ in (9) by the quasi-Newton search direction $\Delta\theta_k = -B_k^{-1} g_k$, where $B_k$ is a damped quasi-Newton (BFGS) approximation of the Hessian [25]. To handle the constraint $\lambda \geq 0$ introduced in Sec. 3.1, we use the stochastic interior-point method of [5].

After having taken a step along $\Delta\theta_k$, the samples must be updated to reflect the new distribution $\tilde p(x;\theta_{k+1})$. To accomplish this feat, we use SMC [12] to sample from a sequence of distributions.
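For intuition, a bare-bones Robbins-Monro iteration (9) looks like the following sketch, with a standard polynomially decaying step-size schedule; the quasi-Newton scaling and the interior-point handling of the constraint are omitted:

def robbins_monro(g_noisy, theta0, steps, a0=1.0, decay=0.6):
    # Step sizes a_k = a0/(1+k)^decay satisfy sum a_k = infinity and
    # sum a_k^2 < infinity for decay in (1/2, 1], as standard theory [29] requires.
    theta = theta0
    for k in range(steps):
        a_k = a0 / (1.0 + k) ** decay
        theta = theta - a_k * g_noisy(theta)  # g_noisy: noisy observation of f
    return theta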
3.4 Sequential Monte Carlo
In the first step of SMC, samples $x_1^{(s)}$ are drawn from a proposal density $q_1(x) = \tilde p(x;\theta_1)$ so that the initial importance weights are uniform. After $k$ steps the proposal density is
$\tilde q_k(x_{1:k}) = K_k(x_k \mid x_{k-1}) \cdots K_2(x_2 \mid x_1)\, \tilde p(x_1;\theta_1)$, (10)
where $K_k(x' \mid x)$ is the Markov kernel that extends the path at every iteration. The insight of [12] is that if we choose the densities $\tilde p_k(x_{1:k})$ wisely, we can update the importance weights $\tilde w_k(x_{1:k}) = \tilde p_k(x_{1:k}) / \tilde q_k(x_{1:k})$ without having to look at the entire history. This special construction is
$\tilde p_k(x_{1:k}) = L_1(x_1 \mid x_2) \cdots L_{k-1}(x_{k-1} \mid x_k)\, \tilde p(x_k;\theta_k)$, (11)
where we've introduced a series of artificial "backward" kernels $L_k(x \mid x')$. In this paper, the sequence of distributions is determined by the iterates $\theta_k$, so there remain two degrees of freedom: the choice of forward kernel $K_k(x' \mid x)$, and the backward kernel $L_k(x \mid x')$. From the assumptions made in Sec. 3.1, a natural choice for the forward transition kernel is the two-stage Gibbs sampler,
$K_k(x' \mid x) = \tilde p(x'_A \mid x'_B;\theta_k)\, \tilde p(x'_B \mid x_A;\theta_k)$, (12)
in which we first draw a sample of $x_B$ (in LDA, the variables $\theta$ and $\beta$) given $x_A$ (the discrete variables $z$), then update $x_A$ conditioned on $x_B$. A Rao-Blackwellized version of the sub-optimal backward kernel [12] then leads to the following expression for updating the importance weights:
$\tilde w_k(x_{1:k}) = \frac{\tilde p(x_A;\theta_k)}{\tilde p(x_A;\theta_{k-1})} \times \tilde w_{k-1}(x_{1:k-1})$, (13)
where $x_A$ is the component from time step $k-1$ restricted to the set $A$, and $\tilde p(x_A;\theta_k)$ is the unnormalized version of the marginal. It also follows from earlier assumptions (Sec. 3.1) that it is always possible to compute $\tilde p(x_A;\theta)$. Refer to [15] for the marginal of $z$ for LDA.
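A sketch of the weight update (13) in log space, assuming a routine that evaluates the log of the unnormalized marginal $\tilde p(x_A;\theta)$ (all names are illustrative):

import numpy as np

def update_weights(log_w_prev, x_A_particles, log_p_marginal, theta_k, theta_prev):
    # x_A_particles holds each particle's A-component from step k-1;
    # log_p_marginal(xA, theta) returns log of the unnormalized marginal.
    log_w = np.array([
        lw + log_p_marginal(xA, theta_k) - log_p_marginal(xA, theta_prev)
        for lw, xA in zip(log_w_prev, x_A_particles)
    ])
    log_w -= log_w.max()               # stabilize before exponentiating
    w = np.exp(log_w)
    return np.log(w / w.sum())         # normalized log-weights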
3.5 Safeguarding the variance
Figure 4: The proposed algorithm.
- Let $n$, $\theta_1$, $\theta^\star$, $A$, $B$, $\{a_k\}$ be given.
- Draw $x^{(s)} \sim \tilde p(x;\theta_1)$, set $w^{(s)} = 1/n$.
- Set inverse Hessian $H$ to the identity.
- for k = 2, 3, 4, . . .
  1. Compute $g_k = \nabla F(\theta_{k-1})$; see (8).
  2. if $k > 2$, then modify $H$ following damped quasi-Newton update.
  3. Compute variance-safeguarded step size $a_k \leq \hat a_k$ given $\Delta\theta_k = -H g_k$.
  4. Set $\theta_k = \theta_{k-1} + a_k \Delta\theta_k$.
  5. Update $w^{(s)}$ following (13).
  6. Run the two-stage Gibbs sampler:
     - Draw $x_B^{(s)} \sim \tilde p(\,\cdot \mid x_A^{(s)};\theta_k)$.
     - Draw $x_A^{(s)} \sim \tilde p(\,\cdot \mid x_B^{(s)};\theta_k)$.
  7. Resample particles, if necessary.

A key component of the algorithm is a mechanism that enables the practitioner to regulate the variance of the importance weights and, by extension, the variance of the Monte Carlo estimate of $\mathbb{E}[\phi(X)]$. The trouble with taking a full step (9) is that the Gibbs kernel (12) may be unable to effectively migrate the particles toward the new target, in which case the importance weights will overcompensate for this failure, quickly leading to a degenerate population. The remedy we propose is to find a step size $a_k$ that satisfies
$\gamma\, S_k(\theta_k) \leq S_{k-1}(\theta_{k-1})$, (14)
for $\gamma \in [0, 1]$, whereby a $\gamma$ near 1 leads to a stringent safeguard, and we've defined
$S_k(\theta_k) \equiv \sum_{s=1}^{n} \big(\tilde w_k(x_{1:k}) - \tfrac{1}{n}\big)^2$ (15)
to be the sample variance ($\times\, n$) for our choice of $L(x \mid x')$. Note that since our variance safeguard scheme is myopic, the behaviour of the algorithm can be sensitive to the number of iterations.
The safeguarded step size is derived as follows. The goal is to find the largest step size $a_k$ satisfying (14). Forming a Taylor-series expansion with second-order terms about the point $a_k = 0$, the safeguarded step size is the solution to
$\tfrac{1}{2}\, \Delta\theta_k^{T}\, \nabla^2 S_k(\theta_{k-1})\, \Delta\theta_k\, a_k^2 + \Delta\theta_k^{T}\, \nabla S_k(\theta_{k-1})\, a_k = \tfrac{1-\gamma}{\gamma}\, S_{k-1}(\theta_{k-1})$, (16)
where $\Delta\theta_k$ is the search direction at iteration $k$. In our experience, the quadratic approximation to the importance weights (13) was unstable as it occasionally recommended strange step sizes, but a naive importance weight update without Rao-Blackwellization yielded a reliable bound on (14). The derivatives of $S_k(\theta_k)$ work out to sample estimates of second and third moments that can be computed in $O(n)$ time. Since the importance weights initially have zero variance, no positive step size will satisfy (14). We propose to also permit step sizes that do not drive the ESS below a factor $\xi \in (0, 1)$ from the optimal sample. Resampling will still be necessary over long sequences to prevent the population from degenerating. The basic algorithm is summarized in Fig. 4.
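The paper obtains the step size from the quadratic model (16); purely for illustration, the criterion (14) can also be enforced by a simple backtracking search, as in this sketch, where S_of is a hypothetical routine that recomputes the sample variance (15) of the would-be weights for a trial step size:

def safeguarded_step_size(a_max, S_prev, S_of, gamma=0.95, shrink=0.5):
    # Return the largest tried step a <= a_max with gamma * S_k(a) <= S_prev,
    # shrinking geometrically until the safeguard (14) holds.
    a = a_max
    while a > 1e-12:
        if gamma * S_of(a) <= S_prev:
            return a
        a *= shrink
    return 0.0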
Microsatellite genetic markers have been used to determine the genealogy of human populations, and to assess individuals' ancestry in inferring disease risks [16]. The problem is that all these tasks require defining a priori population structure. The Bayesian model of Pritchard et al. [26] offers a solution to this conundrum by simultaneously identifying both patterns of population subdivision and the ancestry of individuals from highly variable genetic markers. (Figure 5: Correspondence between LDA [4] and the population structure [26] models; the recoverable pairings are text corpus ~ population, documents ~ individuals, topics ~ demes, languages ~ loci, vocabulary ~ alleles.) This model is the same as LDA assuming fixed Dirichlet priors and a single genetic marker; see Fig. 5 for the connection between the two domains. This model, however, can be frustrating to work with because independent MCMC simulations can produce remarkably different answers for the same data, even simulations millions of samples long. Such inference challenges have been observed in other mixture models [7]; MCMC can do a poor job exploring the hypothesis space when there are several divergent hypotheses that explain the data.
4 Application to population genetics
Method. We used the software CoaSim [21] to simulate the evolution of genetic markers following a coalescent process. The coalescent is a lineage of alleles in a sample traced backward in time to their common ancestor allele, and the coalescent process is the stochastic process that generates the genealogy [17]. We introduced divergence events at various coalescent times (see Fig. 6) so that we ended up with 4 isolated populations. We simulated 10 microsatellite markers, each with a maximum of 30 alleles. We simulated the markers twice with scaled mutation rates of 2 and 1/2, and for each rate we simulated 60 samples from the coalescent process (15 diploid individuals from each of the 4 populations). These samples are the words w in LDA. This may not seem like a large data set, but it will be large enough to impose major challenges to approximate inference.
Figure 7: Variance in estimates of the admixture distance and admixture level taken over 20 trials.
The goal is to obtain posterior estimates that recover the correct population structure (Fig. 6) and exhibit high agreement in independent simulations. Specifically, the goal is to recover the moments of two statistics: the admixture distance, a measure of two individuals' dissimilarity in their ancestry, and the admixture level, where 0 means an individual's alleles all come from a single population, and 1 means its ancestry is shared equally among the K populations. The admixture distance between individuals $d$ and $d'$ is
$\Delta(\theta_d, \theta_{d'}) \equiv \tfrac{1}{2} \sum_{k=1}^{K} |\theta_{dk} - \theta_{d'k}|$, (17)
and the admixture level of the dth individual is
$\rho(\theta_d) \equiv 1 - \tfrac{K}{2(K-1)} \sum_{k=1}^{K} \big|\theta_{dk} - \tfrac{1}{K}\big|$. (18)
Figure 6: The structured coalescent process with divergence events at coalescent times T = 0, 1/2, 1, 2. The width of the branches represents effective population size, and the arrow points backward in time. The present isolated populations are labeled left-to-right 1 through 4.
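Both statistics are straightforward to compute from the admixture proportions; a short sketch (array names are illustrative):

import numpy as np

def admixture_distance(theta_d, theta_e):
    # Equation (17): half the L1 distance between two admixture vectors.
    return 0.5 * np.sum(np.abs(theta_d - theta_e))

def admixture_level(theta_d):
    # Equation (18): 0 if all ancestry comes from one population,
    # 1 if ancestry is shared equally among the K populations.
    K = len(theta_d)
    return 1.0 - (K / (2.0 * (K - 1))) * np.sum(np.abs(theta_d - 1.0 / K))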
We compared our algorithm to MCMC as implemented in the software Structure [26], and to
another SMC algorithm, annealed importance sampling (AIS) [24], with a uniform tempering schedule. One possible limitation of our study is that the choice of temperature schedule can be critical to the success of AIS, and we did not thoroughly investigate alternative schedules. Also,
note that our intent was not to present an exhaustive comparison of Monte Carlo methods, so we
did not compare to population MCMC [18], for example, which has advantages similar to AIS.
For the two data sets, and for each K from 2 to 6 (the most appropriate setting being K = 4), we
carried out 20 independent trials of the three methods. For fair comparison, we ran the methods
with the same number of sampling events: for MCMC, a Markov chain of length 50,000 and
burn-in of 10,000; for both SMC methods, 100 particles and 500 iterations. Additional settings
included an ESS threshold of 50, maximum step sizes $\hat a_k = 1/(1+k)^{0.6}$, centering parameters $1/k^{0.9}$ for the stochastic interior-point method, safeguards $\gamma = 0.95$ and $\xi = 0.9$, and a quasi-Newton damping factor of 0.75. We set the initial iterate of stochastic approximation to $\lambda = \delta$ and $\hat\eta_{kj} = \nu_j$. We used uniform Dirichlet priors $\nu_j = \eta_k = 0.1$ throughout.
Results. First let's examine the variance in the answers. Fig. 7 shows the variance in the estimates of the admixture level and admixture distance over the independent trials. To produce these plots, at every $K$ we took the individual $d$ or pair $(d, d')$ that exhibited the most variance in the estimate of $\mathbb{E}[\Delta(\theta_d, \theta_{d'})]$ and $\mathbb{E}[\rho(\theta_d)]$. What we observe is that the stochastic approximation method produced significantly more consistent estimates in almost all cases, whereas AIS offered little or no improvement over MCMC. The next step is to examine the accuracy of these answers.
Fig. 8 shows estimates from MCMC and stochastic approximation for selected trials under a mutation rate of 1/2 and K = 4 (left-hand side), and under a mutation rate of 2 and K = 3 (right-hand side).
The trials were chosen to reflect the extent of variation in the answers. The mean and standard deviation of the admixture distance statistic are drawn as matrices. The 60 rows and 60 columns in each matrix correspond to individuals sorted by their true population label; the rows and columns are ordered so that they correspond to the populations 1 through 4 in Fig. 6. In each "mean" matrix, a light square means that two individuals share little ancestry in common, and a dark square means that two individuals have similar ancestry. In each "std. dev." matrix, the darker the square, the higher the variance. In the first trial (top-left), the MCMC algorithm mostly recovered the correct
Figure 8: Estimated mean and standard deviation ("std. dev.") of the admixture distance statistic for two independent trials and at two different simulation settings. See the text for a full explanation.
population structure; i.e. it successfully assigned individuals to their coalescent populations based
on the sampled alleles w. As expected, the individuals from populations 3 and 4 were hardest
to distinguish, hence the high standard deviation in the bottom-right entries of the matrix. The
results of the second trial are less satisfying: MCMC failed to distinguish between individuals
from populations 3 and 4, and it decided rather arbitrarily to partition the samples originating from
population 2. In all these experiments, AIS exhibited behaviour that was very similar to MCMC.
Under the same conditions, our algorithm (bottom-left) failed to distinguish between the third and fourth populations. The trials, however, are more consistent and do not mislead by placing high confidence in these answers; observe the large number of dark squares in the bottom-right portion of the "std. dev." matrix. This evidence suggests that these trials are more representative of the true posterior because the MCMC trials are inconsistent and occasionally spurious (trial #2). This trend is repeated in the more challenging inference scenario with K = 3 and a mutation rate of 2 (right-hand side). MCMC, as before, exhibited a great deal of variance in its estimates of the admixture distance: the estimates from the first trial are very accurate, but the second trial strangely failed to distinguish between populations 1 and 2, and did not correctly assign the individuals in populations 3 and 4. What's worse, MCMC placed disproportionate confidence in these estimates. The stochastic approximation method also exhibited some variance under these conditions, but importantly it did not place nearly so much confidence in its solutions; observe the high standard deviation in the matrix entries corresponding to the individuals from population 3.
5 Conclusions and discussion
In this paper, we proposed a new approach to probabilistic inference grounded on variational,
Monte Carlo and stochastic approximation methodology. We demonstrated that our sophisticated
method pays off in terms of producing more consistent, reliable estimates for a real and challenging
inference problem in population genetics. Some of the components such as the variance safeguard
have not been independently validated, so we cannot fully attest to how critical they are, at least
beyond the motivation we already gave. More standard tricks, such as Rao-Blackwellization, were
explicitly included to demonstrate that well-known techniques from the Monte Carlo literature
apply without modification to our algorithm. We have argued for the generality of our inference
approach, but ultimately the success of our scheme hinges on a good choice of the variational
approximation. Thus, it remains to be seen how well our results extend to probabilistic graphical
models beyond LDA, and how much ingenuity will be required to achieve favourable outcomes.
Another critical issue, as we mentioned in Sec. 3.5, is the sensitivity of our method to the number
of iterations. This issue is related to the bias-variance trade-off, and in the future we would like to
explore more principled ways to formulate this trade-off, in the process reducing this sensitivity.
Acknowledgments
We would like to thank Matthew Hoffman, Nolan Kane, Emtiyaz Khan, Hendrik K?ck and Pooja
Viswanathan for their input, and the reviewers for exceptionally detailed and thoughtful comments.
References
[1] C. Andrieu and E. Moulines. On the ergodicity properties of some adaptive MCMC algorithms. Annals
of Applied Probability, 16:1462?1505, 2006.
[2] C. Andrieu, N. de Freitas, A. Doucet, and M. I. Jordan. An introduction to MCMC for machine learning.
Machine Learning, 50:5?43, 2003.
[3] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, University College
London, 2003.
[4] D. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research,
3:993?1022, 2003.
[5] P. Carbonetto, M. Schmidt, and N. de Freitas. An interior-point stochastic approximation method and
an L1-regularized delta rule. In Advances in Neural Information Processing Systems, volume 21. 2009.
[6] G. Casella and C. P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83:81?94, 1996.
[7] G. Celeux, M. Hurn, and C. P. Robert. Computational and inferential difficulties with mixture posterior
distributions. Journal of the American Statistical Association, 95:957?970, 2000.
[8] D. W. Coltman. Molecular ecological approaches to studying the evolutionary impact of selective
harvesting in wildlife. Molecular Ecology, 17:221?235, 2007.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[10] P.-T. de Boer, D. P. Kroese, S. Mannor, and R. Y. Rubinstein. A tutorial on the cross-entropy method.
Annals of Operations Research, 134:19?67, 2005.
[11] N. de Freitas, P. H?jen-S?rensen, M. I. Jordan, and S. Russell. Variational MCMC. In Proceedings of
the 17th Conference on Uncertainty in Artificial Intelligence, pages 120?127, 2001.
[12] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical
Society, 68:411?436, 2006.
[13] A. J. Dobson. An Introduction to Generalized Linear Models. Chapman & Hall/CRC Press, 2002.
[14] A. Doucet, S. Godsill, and C. Andrieu. On sequential Monte Carlo sampling methods for Bayesian
filtering. Statistics and Computing, 10:197?208, 2000.
[15] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of
Sciences, 101:5228?5235, 2004.
[16] D. L. Hartl and A. G. Clark. Principles of population genetics. Sinauer Associates, 2007.
[17] J. Hein, M. H. Schierup, and C. Wiuf. Gene genealogies, variation and evolution: a primer in coalescent
theory. Oxford University Press, 2005.
[18] A. Jasra, D. Stephens, and C. Holmes. On population-based simulation for static inference. Statistics
and Computing, 17:263?279, 2007.
[19] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical
models. In M. Jordan, editor, Learning in Graphical Models, pages 105?161. MIT Press, 1998.
[20] F. Liang, C. Liu, and R. J. Carroll. Stochastic approximation in Monte Carlo computation. Journal of
the American Statistical Association, 102:305?320, 2007.
[21] T. Mailund, M. Schierup, C. Pedersen, P. Mechlenborg, J. Madsen, and L. Schauser. CoaSim: a flexible
environment for simulating genetic data under coalescent models. BMC Bioinformatics, 6, 2005.
[22] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
[23] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other
variants. In M. Jordan, editor, Learning in graphical models, pages 355?368. Kluwer Academic, 1998.
[24] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125?139, 2001.
[25] M. J. D. Powell. Algorithms for nonlinear constraints that use Lagrangian functions. Mathematical
Programming, 14:224?248, 1978.
[26] J. K. Pritchard, M. Stephens, and P. Donnelly. Inference of population structure using multilocus
genotype data. Genetics, 155:945?959, 2000.
[27] P. Ravikumar. Approximate Inference, Structure Learning and Feature Estimation in Markov Random
Fields. PhD thesis, Carnegie Mellon University, 2007.
[28] H. Robbins and S. Monro. A stochastic approximation method. Annals of Math. Statistics, 22, 1951.
[29] J. C. Spall. Introduction to stochastic search and optimization. Wiley-Interscience, 2003.
[30] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent
Dirichlet allocation. In Advances in Neural Information Processing Systems, volume 19, 2007.
[31] M. J. Wainwright. Stochastic processes on graphs with cycles: geometric and variational approaches.
PhD thesis, Massachusetts Institute of Technology, 2002.
[32] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized
belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282?2312, 2005.
[33] L. Younes. Stochastic gradient estimation strategies for Markov random fields. In Proceedings of the
Spatial Statistics and Imaging Conference, 1991.
Multi-Label Prediction via Compressed Sensing
Daniel Hsu
UC San Diego
[email protected]
Sham M. Kakade
TTI-Chicago
[email protected]
John Langford
Yahoo! Research
[email protected]
Tong Zhang
Rutgers University
[email protected]
Abstract
We consider multi-label prediction problems with large output spaces under the
assumption of output sparsity ? that the target (label) vectors have small support.
We develop a general theory for a variant of the popular error correcting output
code scheme, using ideas from compressed sensing for exploiting this sparsity.
The method can be regarded as a simple reduction from multi-label regression
problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this
approach radically more efficient than others. We also state and prove robustness
guarantees for this method in the form of regret transform bounds (in general),
and also provide a more detailed analysis for the linear prediction setting.
1 Introduction
Suppose we have a large database of images, and we want to learn to predict who or what is in any
given one. A standard approach to this task is to collect a sample of these images x along with
corresponding labels $y = (y_1, \ldots, y_d) \in \{0, 1\}^d$, where $y_i = 1$ if and only if person or object $i$
is depicted in image x, and then feed the labeled sample to a multi-label learning algorithm. Here,
d is the total number of entities depicted in the entire database. When $d$ is very large (e.g. $10^3$, $10^4$), the simple one-against-all approach of learning a single predictor for each entity can become
prohibitively expensive, both at training and testing time.
Our motivation for the present work comes from the observation that although the output (label)
space may be very high dimensional, the actual labels are often sparse. That is, in each image, only
a small number of entities may be present and there may only be a small amount of ambiguity in
who or what they are. In this work, we consider how this sparsity in the output space, or output
sparsity, eases the burden of large-scale multi-label learning.
Exploiting output sparsity. A subtle but critical point that distinguishes output sparsity from more
common notions of sparsity (say, in feature or weight vectors) is that we are interested in the sparsity
of E[y|x] rather than y. In general, E[y|x] may be sparse while the actual outcome y may not (e.g. if
there is much unbiased noise); and, vice versa, y may be sparse with probability one but E[y|x] may
have large support (e.g. if there is little distinction between several labels).
Conventional linear algebra suggests that we must predict d parameters in order to find the value of
the d-dimensional vector E[y|x] for each x. A crucial observation ? central to the area of compressed
sensing [1] ? is that methods exist to recover E[y|x] from just O(k log d) measurements when E[y|x]
is k-sparse. This is the basis of our approach.
Our contributions. We show how to apply algorithms for compressed sensing to the output coding
approach [2]. At a high level, the output coding approach creates a collection of subproblems of the form "Is the label in this subset or its complement?", solves these problems, and then uses their solution to predict the final label.
The role of compressed sensing in our application is distinct from its more conventional uses in data
compression. Although we do employ a sensing matrix to compress training data, we ultimately
are not interested in recovering data explicitly compressed this way. Rather, we learn to predict
compressed label vectors, and then use sparse reconstruction algorithms to recover uncompressed
labels from these predictions. Thus we are interested in reconstruction accuracy of predictions,
averaged over the data distribution.
The main contributions of this work are:
1. A formal application of compressed sensing to prediction problems with output sparsity.
2. An efficient output coding method, in which the number of required predictions is only
logarithmic in the number of labels d, making it applicable to very large-scale problems.
3. Robustness guarantees, in the form of regret transform bounds (in general) and a further
detailed analysis for the linear prediction setting.
Prior work. The ubiquity of multi-label prediction problems in domains ranging from multiple object recognition in computer vision to automatic keyword tagging for content databases has spurred
the development of numerous general methods for the task. Perhaps the most straightforward approach is the well-known one-against-all reduction [3], but this can be too expensive when the number of possible labels is large (especially if applied to the power set of the label space [4]). When
structure can be imposed on the label space (e.g. class hierarchy), efficient learning and prediction
methods are often possible [5, 6, 7, 8, 9]. Here, we focus on a different type of structure, namely
output sparsity, which is not addressed in previous work. Moreover, our method is general enough to
take advantage of structured notions of sparsity (e.g. group sparsity) when available [10]. Recently,
heuristics have been proposed for discovering structure in large output spaces that empirically offer
some degree of efficiency [11].
As previously mentioned, our work is most closely related to the class of output coding method
for multi-class prediction, which was first introduced and shown to be useful experimentally in [2].
Relative to this work, we expand the scope of the approach to multi-label prediction and provide
bounds on regret and error which guide the design of codes. The loss based decoding approach [12]
suggests decoding so as to minimize loss. However, it does not provide significant guidance in the
choice of encoding method, or the feedback between encoding and decoding which we analyze here.
The output coding approach is inconsistent when classifiers are used and the underlying problems
being encoded are noisy. This is proved and analyzed in [13], where it is also shown that using a
Hadamard code creates a robust consistent predictor when reduced to binary regression. Compared
to this method, our approach achieves the same robustness guarantees up to a constant factor, but
requires training and evaluating exponentially (in d) fewer predictors.
Our algorithms rely on several methods from compressed sensing, which we detail where used.
2 Preliminaries
Let $\mathcal{X}$ be an arbitrary input space and $\mathcal{Y} \subseteq \mathbb{R}^d$ be a $d$-dimensional output (label) space. We assume the data source is defined by a fixed but unknown distribution over $\mathcal{X} \times \mathcal{Y}$. Our goal is to learn a predictor $F : \mathcal{X} \to \mathcal{Y}$ with low expected $\ell_2^2$-error $\mathbb{E}_x \|F(x) - \mathbb{E}[y|x]\|_2^2$ (the sum of mean-squared-errors over all labels) using a set of $n$ training data $\{(x_i, y_i)\}_{i=1}^{n}$.

We focus on the regime in which the output space is very high-dimensional ($d$ very large), but for any given $x \in \mathcal{X}$, the expected value $\mathbb{E}[y|x]$ of the corresponding label $y \in \mathcal{Y}$ has only a few non-zero entries. A vector is $k$-sparse if it has at most $k$ non-zero entries.
3 Learning and Prediction
3.1 Learning to Predict Compressed Labels
Let $A : \mathbb{R}^d \to \mathbb{R}^m$ be a linear compression function, where $m \leq d$ (but hopefully $m \ll d$). We use $A$ to compress (i.e. reduce the dimension of) the labels $\mathcal{Y}$, and learn a predictor $H : \mathcal{X} \to A(\mathcal{Y})$ of these compressed labels. Since $A$ is linear, we simply represent $A \in \mathbb{R}^{m \times d}$ as a matrix.

Specifically, given a sample $\{(x_i, y_i)\}_{i=1}^{n}$, we form a compressed sample $\{(x_i, A y_i)\}_{i=1}^{n}$ and then learn a predictor $H$ of $\mathbb{E}[Ay|x]$ with the objective of minimizing the $\ell_2^2$-error $\mathbb{E}_x \|H(x) - \mathbb{E}[Ay|x]\|_2^2$.
3.2 Predicting Sparse Labels
To obtain a predictor $F$ of $\mathbb{E}[y|x]$, we compose the predictor $H$ of $\mathbb{E}[Ay|x]$ (learned using the compressed sample) with a reconstruction algorithm $R : \mathbb{R}^m \to \mathbb{R}^d$. The algorithm $R$ maps predictions of compressed labels $h \in \mathbb{R}^m$ to predictions of labels $y \in \mathcal{Y}$ in the original output space. These algorithms typically aim to find a sparse vector $y$ such that $Ay$ closely approximates $h$.

Recent developments in the area of compressed sensing have produced a spate of reconstruction algorithms with strong performance guarantees when the compression function $A$ satisfies certain properties. We abstract out the relevant aspects of these guarantees in the following definition.
Definition. An algorithm $R$ is a valid reconstruction algorithm for a family of compression functions $(\mathcal{A}_k \subseteq \bigcup_{m \geq 1} \mathbb{R}^{m \times d} : k \in \mathbb{N})$ and sparsity error $\mathrm{sperr} : \mathbb{N} \times \mathbb{R}^d \to \mathbb{R}$, if there exists a function $f : \mathbb{N} \to \mathbb{N}$ and constants $C_1, C_2 \in \mathbb{R}$ such that: on input $k \in \mathbb{N}$, $A \in \mathcal{A}_k$ with $m$ rows, and $h \in \mathbb{R}^m$, the algorithm $R(k, A, h)$ returns an $f(k)$-sparse vector $\hat y$ satisfying
$\|\hat y - y\|_2^2 \leq C_1 \cdot \|h - Ay\|_2^2 + C_2 \cdot \mathrm{sperr}(k, y)$
for all $y \in \mathbb{R}^d$. The function $f$ is the output sparsity of $R$ and the constants $C_1$ and $C_2$ are the regret factors.
Informally, if the predicted compressed label $H(x)$ is close to $\mathbb{E}[Ay|x] = A\,\mathbb{E}[y|x]$, then the sparse vector $\hat y$ returned by the reconstruction algorithm should be close to $\mathbb{E}[y|x]$; this latter distance $\|\hat y - \mathbb{E}[y|x]\|_2^2$ should degrade gracefully in terms of the accuracy of $H(x)$ and the sparsity of $\mathbb{E}[y|x]$. Moreover, the algorithm should be agnostic about the sparsity of $\mathbb{E}[y|x]$ (and thus the sparsity error $\mathrm{sperr}(k, \mathbb{E}[y|x])$), as well as the "measurement noise" (the prediction error $\|H(x) - \mathbb{E}[Ay|x]\|_2$). This is a subtle condition and precludes certain reconstruction algorithms (e.g. Basis Pursuit [14]) that require the user to supply a bound on the measurement noise. However, the condition is needed in our application, as such bounds on the prediction error (for each $x$) are not generally known beforehand.
We make a few additional remarks on the definition.
1. The minimum number of rows of matrices $A \in \mathcal{A}_k$ may in general depend on $k$ (as well as the ambient dimension $d$). In the next section, we show how to construct such $A$ with close to the optimal number of rows.
2. The sparsity error $\mathrm{sperr}(k, y)$ should measure how poorly $y \in \mathbb{R}^d$ is approximated by a $k$-sparse vector.
3. A reasonable output sparsity $f(k)$ for sparsity level $k$ should not be much more than $k$, e.g. $f(k) = O(k)$.
Concrete examples of valid reconstruction algorithms (along with the associated Ak , sperr, etc.) are
given in the next section.
4 Algorithms
Our prescribed recipe is summarized in Algorithms 1 and 2. We give some examples of compression
functions and reconstruction algorithms in the following subsections.
Algorithm 1 Training algorithm
parameters: sparsity level k, compression function A ∈ A_k with m rows, regression learning algorithm L
input: training data S ⊆ X × R^d
for i = 1, . . . , m do
  h_i ← L({(x, (Ay)_i) : (x, y) ∈ S})
end for
output: regressors H = [h_1, . . . , h_m]

Algorithm 2 Prediction algorithm
parameters: sparsity level k, compression function A ∈ A_k with m rows, valid reconstruction algorithm R for A_k
input: regressors H = [h_1, . . . , h_m], test point x ∈ X
output: ŷ = R(k, A, [h_1(x), . . . , h_m(x)])

Figure 1: Training and prediction algorithms.
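A minimal Python sketch of this reduction up to (but not including) reconstruction, assuming labels are stacked as rows of Y and fit_regressor is any regression learner (names are illustrative):

import numpy as np

def train_compressed(X, Y, A, fit_regressor):
    # Algorithm 1: compress labels with A (m x d) and fit one regressor
    # per compressed coordinate; fit_regressor(X, z) returns a callable h.
    Z = Y @ A.T                                    # compressed labels A y_i
    return [fit_regressor(X, Z[:, i]) for i in range(A.shape[0])]

def predict_compressed(H, x):
    # First half of Algorithm 2: predict the compressed label vector,
    # which is then handed to a reconstruction algorithm R.
    return np.array([h(x) for h in H])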
4.1 Compression Functions
Several valid reconstruction algorithms are known for compression matrices that satisfy a restricted isometry property.
Definition. A matrix $A \in \mathbb{R}^{m \times d}$ satisfies the $(k, \delta)$-restricted isometry property ($(k, \delta)$-RIP), $\delta \in (0, 1)$, if $(1 - \delta)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1 + \delta)\|x\|_2^2$ for all $k$-sparse $x \in \mathbb{R}^d$.
While some explicit constructions of $(k, \delta)$-RIP matrices are known (e.g. [15]), the best guarantees are obtained when the matrix is chosen randomly from an appropriate distribution, such as one of the following [16, 17].
- All entries i.i.d. Gaussian $N(0, 1/m)$, with $m = O(k \log(d/k))$.
- All entries i.i.d. Bernoulli $B(1/2)$ over $\{\pm 1/\sqrt{m}\}$, with $m = O(k \log(d/k))$.
- $m$ randomly chosen rows of the $d \times d$ Hadamard matrix over $\{\pm 1/\sqrt{m}\}$, with $m = O(k \log^5 d)$.
The hidden constants in the big-O notation depend inversely on $\delta$ and the probability of failure.
A striking feature of these constructions is the very mild dependence of m on the ambient dimension
d. This translates to a significant savings in the number of learning problems one has to solve after
employing our reduction.
Some reconstruction algorithms require a stronger guarantee of bounded coherence $\mu(A) \leq O(1/k)$, where $\mu(A)$ is defined as
$\mu(A) = \max_{1 \leq i < j \leq d} |(A^{\top}A)_{i,j}| \big/ \sqrt{|(A^{\top}A)_{i,i}|\,|(A^{\top}A)_{j,j}|}$.
It is easy to check that the Gaussian, Bernoulli, and Hadamard-based random matrices given above have coherence bounded by $O(\sqrt{(\log d)/m})$ with high probability. Thus, one can take $m = O(k^2 \log d)$ to guarantee $1/k$ coherence. This is a factor $k$ worse than what was needed for $(k, \delta)$-RIP, but the dependence on $d$ is still small.
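For illustration, here is a sketch of one of the random constructions above together with a direct coherence check (names are illustrative; a real deployment would choose m based on the target sparsity):

import numpy as np

def bernoulli_matrix(m, d, seed=0):
    # Entries i.i.d. uniform over {+1/sqrt(m), -1/sqrt(m)}.
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)

def coherence(A):
    # mu(A): maximum normalized correlation between distinct columns.
    G = A.T @ A
    norms = np.sqrt(np.abs(np.diag(G)))
    C = np.abs(G) / np.outer(norms, norms)
    np.fill_diagonal(C, 0.0)
    return C.max()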
4.2 Reconstruction Algorithms
In this section, we give some examples of valid reconstruction algorithms. Each of these algorithms is valid with respect to the sparsity error given by
$\mathrm{sperr}(k, y) = \|y - y_{(1:k)}\|_2^2 + \frac{1}{k}\, \|y - y_{(1:k)}\|_1^2$,
where $y_{(1:k)}$ is the best $k$-sparse approximation of $y$ (i.e. the vector with just the $k$ largest (in magnitude) coefficients of $y$).
The following theorem relates reconstruction quality to approximate sparse regression, giving a
sufficient condition for any algorithm to be valid for RIP matrices.
Algorithm 3 Prediction algorithm with R = OMP
parameters: sparsity level k, compression function A = [a_1 | . . . | a_d] ∈ A_k with m rows
input: regressors H = [h_1, . . . , h_m], test point x ∈ X
h ← [h_1(x), . . . , h_m(x)]^T (predict compressed label vector)
ŷ ← 0, J ← ∅, r ← h
for i = 1, . . . , 2k do
  j* ← arg max_j |r^T a_j| / ||a_j||_2 (column of A most correlated with residual r)
  J ← J ∪ {j*} (add j* to set of selected columns)
  ŷ_J ← (A_J)† h, ŷ_{J^c} ← 0 (least-squares restricted to columns in J)
  r ← h − Aŷ (update residual)
end for
output ŷ

Figure 2: Prediction algorithm specialized with Orthogonal Matching Pursuit.
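A direct Python transcription of Algorithm 3 might look as follows; a sketch, assuming dense numpy arrays and re-solving the restricted least-squares problem from scratch at each step:

import numpy as np

def omp(A, h, k):
    # Orthogonal Matching Pursuit: 2k greedy steps, each selecting the
    # column most correlated with the residual, then refitting by least
    # squares on the selected columns.
    m, d = A.shape
    col_norms = np.linalg.norm(A, axis=0)
    y_hat, J, r = np.zeros(d), [], h.copy()
    for _ in range(2 * k):
        j_star = int(np.argmax(np.abs(A.T @ r) / col_norms))
        if j_star not in J:
            J.append(j_star)
        coef, *_ = np.linalg.lstsq(A[:, J], h, rcond=None)
        y_hat = np.zeros(d)
        y_hat[J] = coef
        r = h - A @ y_hat                      # update residual
    return y_hat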
Theorem 1. Let $\mathcal{A}_k = \{(k + f(k), \delta)\text{-RIP matrices}\}$ for some function $f : \mathbb{N} \to \mathbb{N}$, and let $A \in \mathcal{A}_k$ have $m$ rows. If for any $h \in \mathbb{R}^m$, a reconstruction algorithm $R$ returns an $f(k)$-sparse solution $\hat y = R(k, A, h)$ satisfying
$\|A\hat y - h\|_2^2 \leq \inf_{y \in \mathbb{R}^d} C\, \|A y_{(1:k)} - h\|_2^2$,
then it is a valid reconstruction algorithm for $\mathcal{A}_k$ and $\mathrm{sperr}$ given above, with output sparsity $f$ and regret factors $C_1 = 2(1 + \sqrt{C})^2/(1 - \delta)$ and $C_2 = 4(1 + (1 + \sqrt{C})/(1 - \delta))^2$.
Proofs are deferred to Appendix B.
Iterative and greedy algorithms. Orthogonal Matching Pursuit (OMP) [18], FoBa [19], and
CoSaMP [20] are examples of iterative or greedy reconstruction algorithms. OMP is a greedy
forward selection method that repeatedly selects a new column of A to use in fitting h (see Algorithm 3). FoBa is similar, except it also incorporates backward steps to un-select columns that are
later discovered to be unnecessary. CoSaMP is also similar to OMP, but instead selects larger sets
of columns in each iteration.
FoBa and CoSaMP are valid reconstruction algorithms for RIP matrices ((8k, 0.1)-RIP and
(4k, 0.1)-RIP, respectively) and have linear output sparsity (8k and 2k). These guarantees are apparent from the cited references. For OMP, we give the following guarantee.
Theorem 2. If $\mu(A) \leq 0.1/k$, then after $f(k) = 2k$ steps of OMP, the algorithm returns $\hat y$ satisfying
$\|A\hat y - h\|_2^2 \leq 23\, \|A y_{(1:k)} - h\|_2^2 \quad \forall y \in \mathbb{R}^d$.
This theorem, combined with Theorem 1, implies that OMP is valid for matrices $A$ with $\mu(A) \leq 0.1/k$ and has output sparsity $f(k) = 2k$.
$\ell_1$ algorithms. Basis Pursuit (BP) [14] and its variants are based on finding the minimum $\ell_1$-norm solution to a linear system. While the basic form of BP is ill-suited for our application (it requires the user to supply the amount of measurement error $\|Ay - h\|_2$), its more advanced path-following or multi-stage variants may be valid [21].
5 Analysis
5.1 General Robustness Guarantees
We now state our main regret transform bound, which follows immediately from the definition of a valid reconstruction algorithm and linearity of expectation.
Theorem 3 (Regret Transform). Let $R$ be a valid reconstruction algorithm for $\{\mathcal{A}_k : k \in \mathbb{N}\}$ and $\mathrm{sperr} : \mathbb{N} \times \mathbb{R}^d \to \mathbb{R}$. Then there exist constants $C_1$ and $C_2$ such that the following holds.
Pick any $k \in \mathbb{N}$, $A \in \mathcal{A}_k$ with $m$ rows, and $H : \mathcal{X} \to \mathbb{R}^m$. Let $F : \mathcal{X} \to \mathbb{R}^d$ be the composition of $R(k, A, \cdot)$ and $H$, i.e. $F(x) = R(k, A, H(x))$. Then
$\mathbb{E}_x \|F(x) - \mathbb{E}[y|x]\|_2^2 \leq C_1 \cdot \mathbb{E}_x \|H(x) - \mathbb{E}[Ay|x]\|_2^2 + C_2 \cdot \mathrm{sperr}(k, \mathbb{E}[y|x])$.
The simplicity of this theorem is a consequence of the careful composition of the learned predictors
with the reconstruction algorithm meeting the formal specifications described above.
In order to compare this regret bound with the bounds afforded by Sensitive Error Correcting Output Codes (SECOC) [13], we need to relate $\mathbb{E}_x \|H(x) - \mathbb{E}[Ay|x]\|_2^2$ to the average scaled mean-squared-error over all induced regression problems; the error is scaled by the maximum difference $L_i = \max_{y \in \mathcal{Y}} (Ay)_i - \min_{y \in \mathcal{Y}} (Ay)_i$ between induced labels:
$\bar r = \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_x \left( \frac{H(x)_i - \mathbb{E}[(Ay)_i \mid x]}{L_i} \right)^2$.
In $k$-sparse multi-label problems, we have $\mathcal{Y} = \{y \in \{0,1\}^d : \|y\|_0 \leq k\}$. In these terms, SECOC can be tuned to yield $\mathbb{E}_x \|F(x) - \mathbb{E}[y|x]\|_2^2 \leq 4k^2 \cdot \bar r$ for general $k$.
For now, ignore the sparsity error. For simplicity, let $A \in \mathbb{R}^{m \times d}$ with entries chosen i.i.d. from the Bernoulli $B(1/2)$ distribution over $\{\pm 1/\sqrt{m}\}$, where $m = O(k \log d)$. Then for any $k$-sparse $y$, we have $\|Ay\|_\infty \leq k/\sqrt{m}$, and thus $L_i \leq 2k/\sqrt{m}$ for each $i$. This gives the bound
$C_1 \cdot \mathbb{E}_x \|H(x) - \mathbb{E}[Ay|x]\|_2^2 \leq 4 C_1 \cdot k^2 \cdot \bar r$,
which is within a constant factor of the guarantee afforded by SECOC. Note that our reduction induces exponentially (in $d$) fewer subproblems than SECOC.
Now we consider the sparsity error. In the extreme case $m = d$, $\mathbb{E}[y|x]$ is allowed to be fully dense ($k = d$) and $\mathrm{sperr}(k, \mathbb{E}[y|x]) = 0$. When $m = O(k \log d) < d$, we potentially incur an extra penalty in $\mathrm{sperr}(k, \mathbb{E}[y|x])$, which relates how far $\mathbb{E}[y|x]$ is from being $k$-sparse. For example, suppose $\mathbb{E}[y|x]$ has small $\ell_p$ norm for $0 \leq p < 2$. Then even if $\mathbb{E}[y|x]$ has full support, the penalty will decrease polynomially in $k \propto m/\log d$.
5.2 Linear Prediction
A danger of using generic reductions is that one might create a problem instance that is even harder
to solve than the original problem. This is an oft cited issue with using output codes for multiclass problems. In the case of linear prediction, however, the danger is mitigated, as we now show.
Suppose, for instance, there is a perfect linear predictor of $\mathbb{E}[y|x]$, i.e. $\mathbb{E}[y|x] = B^{\top}x$ for some $B \in \mathbb{R}^{p \times d}$ (here $\mathcal{X} = \mathbb{R}^p$). Then it is easy to see that $H = BA^{\top}$ is a perfect linear predictor of $\mathbb{E}[Ay|x]$:
$H^{\top}x = AB^{\top}x = A\,\mathbb{E}[y|x] = \mathbb{E}[Ay|x]$.
The following theorem generalizes this observation to imperfect linear predictors for certain well-behaved $A$.
Theorem 4. Suppose $\mathcal{X} \subseteq \mathbb{R}^p$. Let $B \in \mathbb{R}^{p \times d}$ be a linear function with
$\mathbb{E}_x \|B^{\top}x - \mathbb{E}[y|x]\|_2^2 = \epsilon$.
Let $A \in \mathbb{R}^{m \times d}$ have entries drawn i.i.d. from $N(0, 1/m)$, and let $H = BA^{\top}$. Then with high probability (over the choice of $A$),
$\mathbb{E}_x \|H^{\top}x - A\,\mathbb{E}[y|x]\|_2^2 \leq \big(1 + O(1/\sqrt{m})\big)\, \epsilon$.
Remark 5. Similar guarantees can be proven for the Bernoulli-based matrices. Note that $d$ does not appear in the bound, which is in contrast to the expected spectral norm of $A$: roughly $1 + O(\sqrt{d/m})$.
Theorem 4 implies that the errors of any linear predictor are not magnified much by the compression function. So a good linear predictor for the original problem implies an almost-as-good linear
predictor for the induced problem. Using this theorem together with known results about linear
prediction [22], it is straightforward to derive sample complexity bounds for achieving a given error
relative to that of the best linear predictor in some class. The bound will depend polynomially in k
but only logarithmically in d. This is cosmetically similar to learning bounds for feature-efficient
algorithms (e.g. [23, 22]) which are concerned with sparsity in the weight vector, rather than in the
output.
6 Experimental Validation
We conducted an empirical assessment of our proposed reduction on two labeled data sets with large
label spaces. These experiments demonstrate the feasibility of our method ? a sanity check that the
reduction does in fact preserve learnability ? and compare different compression and reconstruction
options.
6.1 Data
Image data.1 The first data set was collected by the ESP Game [24], an online game in which
players ultimately provide word tags for a diverse set of web images.
The set contains nearly 68000 images, with about 22000 unique labels. We retained just the 1000
most frequent labels: the least frequent of these occurs 39 times in the data, and the most frequent
occurs about 12000 times. Each image contains about four labels on average. We used half of the
data for training and half for testing.
We represented each image as a bag-of-features vector in a manner similar to [25]. Specifically, we
identified 1024 representative SURF feature points [26] from 10 × 10 gray-scale patches chosen
randomly from the training images; this partitions the space of image patches (represented with
SURF features) into Voronoi cells. We then built a histogram for each image, counting the number
of patches that fall in each cell.
Text data.2 The second data set was collected by Tsoumakas et al. [11] from del.icio.us, a
social bookmarking service in which users assign descriptive textual tags to web pages.
The set contains about 16000 labeled web page and 983 unique labels. The least frequent label
occurs 21 times and the most frequent occurs almost 6500 times. Each web page is assigned 19
labels on average. Again, we used half the data for training and half for testing.
Each web page is represented as a boolean bag-of-words vector, with the vocabulary chosen using a combination of frequency thresholding and $\chi^2$ feature ranking. See [11] for details.
Each binary label vector (in both data sets) indicates the labels of the corresponding data point.
6.2 Output Sparsity
We first performed a bit of exploratory data analysis to get a sense of how sparse the target in our
b ? Rp?d on the training data (without any
data is. We computed the least-squares linear regressor B
b ? x on the test data (clipping values
output coding) and predicted the label probabilities pb(x) = B
to the range [0, 1]). Using pb(x) as a surrogate for the actual target E[y|x], we examined the relative
Pd
?22 error of pb and its best k-sparse approximation ?(k, pb(x)) = i=k+1 pb(i) (x)2 /kb
p(x)k22 , where
pb(1) (x) ? . . . ? pb(d) (x).
Examining E_x sp(k, p̂(x)) as a function of k, we saw that in both the image and text data, the fall-off with k is eventually super-polynomial, but we are interested in the behavior for small k where it
appears polynomial, k^{-r} for some r. Around k = 10, we estimated an exponent of 0.50 for the image
data and 0.55 for the text data. This is somewhat below the standard of what is considered sparse
(e.g. vectors with small ℓ1-norm show k^{-1} decay). Thus, we expect the reconstruction algorithms
will have to contend with the sparsity error of the target.
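A minimal Python sketch of this analysis; the design and label matrices (X_train, Y_train, X_test) are assumed to be given, and the helper name is ours:

import numpy as np

def sparsity_error(p_hat, k):
    """Relative l2^2 error of the best k-sparse approximation:
    sp(k, p_hat) = sum_{i>k} p_hat_(i)^2 / ||p_hat||_2^2, where the
    coordinates p_hat_(i) are sorted by decreasing magnitude."""
    mass = np.sort(p_hat ** 2)[::-1]
    return mass[k:].sum() / mass.sum()

# Example usage (X_train, Y_train, X_test assumed given):
#   B_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)
#   P_hat = np.clip(X_test @ B_hat, 0.0, 1.0)   # surrogate for E[y|x]
#   avg_err = np.mean([sparsity_error(p, k=10) for p in P_hat])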
6.3 Procedure
We used least-squares linear regression as our base learning algorithm, with no regularization on the
image data and with ℓ2-regularization on the text data (λ = 0.01) for numerical stability. We did
not attempt any parameter tuning.
1 http://hunch.net/?learning/ESP-ImageSet.tar.gz
2 http://mlkd.csd.auth.gr/multilabel.html
The compression functions we used were generated by selecting m random rows of the 1024 × 1024
Hadamard matrix, for m ∈ {100, 200, 300, 400}. We also experimented with Gaussian matrices;
these yielded similar but uniformly worse results.
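A sketch of constructing such a compression function in Python, using scipy's Hadamard matrix. The 1/sqrt(m) scaling is a common near-isometry normalization and an assumption on our part; the label matrix Y is assumed given.

import numpy as np
from scipy.linalg import hadamard

d = 1024   # number of labels (a power of two, as hadamard() requires)
m = 200    # number of induced regression subproblems
rng = np.random.default_rng(0)

H_full = hadamard(d)                         # d x d matrix with +/-1 entries
rows = rng.choice(d, size=m, replace=False)  # m random rows
A = H_full[rows] / np.sqrt(m)                # assumed normalization

# Compressed targets: each column of Z defines one induced regression
# problem, where Y is the (num_examples x d) binary label matrix.
#   Z = Y @ A.T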
We tested the greedy and iterative reconstruction algorithms described earlier (OMP, FoBa, and
CoSaMP) as well as a path-following version of Lasso based on LARS [21]. Each algorithm was
used to recover a k-sparse label vector ŷ^k from the predicted compressed label H(x), for k =
1, …, 10. We measured the ℓ2² distance ‖ŷ^k − y‖₂² of the prediction to the true test label y. In
addition, we measured the precision of the predicted support at various values of k using the 10-sparse
label prediction. That is, we ordered the coefficients of each 10-sparse label prediction ŷ^10
by magnitude, and measured the precision of predicting the first k coordinates,
|supp(ŷ^10_(1:k)) ∩ supp(y)| / k. Actually, for k ≥ 6, we used ŷ^{2k} instead of ŷ^10.
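A small sketch of the precision-at-k measurement as we read it; the function name is hypothetical:

import numpy as np

def precision_at_k(y_hat_sparse, y_true, k):
    """Fraction of the k largest-magnitude predicted coordinates
    that are true labels: |supp(y_hat_(1:k)) ∩ supp(y)| / k."""
    top_k = np.argsort(-np.abs(y_hat_sparse))[:k]
    true_support = set(np.flatnonzero(y_true))
    return sum(i in true_support for i in top_k) / k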
We used correlation decoding (CD) as a baseline method, as it is a standard decoding method for
ECOC approaches. CD predicts using the top k coordinates in A^T H(x), ordered by magnitude. For
mean-squared-error comparisons, we used the least-squares approximation of H(x) using these k
columns of A. Note that CD is not a valid reconstruction algorithm when m < d.
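A sketch of this baseline, assuming A and the predicted compressed label h are dense numpy arrays; the function name is ours:

import numpy as np

def correlation_decode(A, h, k):
    """Correlation decoding baseline: pick the k columns of A most
    correlated with the predicted compressed label h, then compute
    the least-squares approximation of h on those columns."""
    scores = A.T @ h
    support = np.argsort(-np.abs(scores))[:k]
    coef, *_ = np.linalg.lstsq(A[:, support], h, rcond=None)
    y_hat = np.zeros(A.shape[1])
    y_hat[support] = coef
    return y_hat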
6.4 Results
As expected, the performance of the reduction, using any reconstruction algorithm, improves as the
number of induced subproblems m is increased (see figures in Appendix A). When m is small and
A ∉ A_K, the reconstruction algorithm cannot reliably choose k ≤ K coordinates, so its performance may degrade after this point by over-fitting. But when the compression function A is in A_K
for a sufficiently large K, then the squared error decreases as the output sparsity k increases up to
K. Note that the fact that precision-at-k decreases as k increases is expected, as fewer data will have at
least k correct labels.
All of the reconstruction algorithms at least matched or out-performed the baseline on the mean-squared-error criterion, except when m = 100. When A has few rows, (1) A ∈ A_K only for very
small K, and (2) many of its columns will have significant correlation. In this case, when choosing
k > K columns, it is better to choose correlated columns to avoid over-fitting. Both OMP and
FoBa explicitly avoid this and thus do not fare well; but CoSaMP, Lasso, and CD do allow selecting
correlated columns and thus perform better in this regime.
The results for precision-at-k are similar to those for mean-squared-error, except that choosing correlated columns does not necessarily help in the small m regime. This is because the extra correlated
columns need not correspond to accurate label coordinates.
In summary, the experiments demonstrate the feasibility and robustness of our reduction method for
two natural multi-label prediction tasks. They show that predictions of relatively few compressed
labels are sufficient to recover an accurate sparse label vector, and as our theory suggests, the robustness of the reconstruction algorithms is a key factor in their success.
Acknowledgments
We thank Andy Cotter for help processing the image features for the ESP Game data. This work
was completed while the first author was an intern at TTI-C in 2008.
References
[1] David Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, 2006.
[2] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.
[3] R. Rifkin and A. Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141, 2004.
[4] M. Boutell, J. Luo, X. Shen, and C. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
[5] A. Clare and R. D. King. Knowledge discovery in multi-label phenotype data. In European Conference on Principles of Data Mining and Knowledge Discovery, 2001.
[6] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[7] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Incremental algorithms for hierarchical classification. Journal of Machine Learning Research, 7:31–54, 2006.
[8] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[9] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor. Kernel-based learning of hierarchical multilabel classification models. Journal of Machine Learning Research, 7:1601–1626, 2006.
[10] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In ICML, 2009.
[11] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. In Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data, 2008.
[12] Erin Allwein, Robert Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[13] J. Langford and A. Beygelzimer. Sensitive error correcting output codes. In Proc. Conference on Learning Theory, 2005.
[14] Emmanuel Candès, Justin Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59:1207–1223, 2006.
[15] R. DeVore. Deterministic constructions of compressed sensing matrices. J. of Complexity, 23:918–925, 2007.
[16] Shahar Mendelson, Alain Pajor, and Nicole Tomczak-Jaegermann. Uniform uncertainty principle for Bernoulli and subgaussian ensembles. Constructive Approximation, 28(3):277–289, 2008.
[17] M. Rudelson and R. Vershynin. Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. In Proc. Conference on Information Sciences and Systems, 2006.
[18] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[19] Tong Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Proc. Neural Information Processing Systems, 2008.
[20] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 2007.
[21] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[22] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Proc. Neural Information Processing Systems, 2008.
[23] Andrew Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[24] Luis von Ahn and Laura Dabbish. Labeling images with a computer game. In Proc. ACM Conference on Human Factors in Computing Systems, 2004.
[25] Marcin Marszałek, Cordelia Schmid, Hedi Harzallah, and Joost van de Weijer. Learning object representations for visual object class recognition. In Visual Recognition Challenge Workshop, in conjunction with ICCV, 2007.
[26] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. Computer Vision and Image Understanding, 110(3):346–359, 2008.
[27] David Donoho, Michael Elad, and Vladimir Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Info. Theory, 52(1):6–18, 2006.
[28] Sanjoy Dasgupta. Learning Probability Distributions. PhD thesis, University of California, 2000.
Solving Stochastic Games
Charles Isbell
College of Computing
Georgia Tech
801 Atlantic Drive
Atlanta, GA 30332-0280
[email protected]
Liam Mac Dermed
College of Computing
Georgia Tech
801 Atlantic Drive
Atlanta, GA 30332-0280
[email protected]
Abstract
Solving multi-agent reinforcement learning problems has proven difficult because
of the lack of tractable algorithms. We provide the first approximation algorithm
which solves stochastic games with cheap-talk to within ε absolute error of the optimal game-theoretic solution, in time polynomial in 1/ε. Our algorithm extends
Murray and Gordon's (2007) modified Bellman equation which determines the
set of all possible achievable utilities; this provides us a truly general framework
for multi-agent learning. Further, we empirically validate our algorithm and find
the computational cost to be orders of magnitude less than what the theory predicts.
1 Introduction
In reinforcement learning, Bellman's dynamic programming equation is typically viewed as a
method for determining the value function: the maximum achievable utility at each state. Instead,
we can view the Bellman equation as a method of determining all possible achievable utilities. In the
single-agent case we care only about the maximum utility, but for multiple agents it is rare to be able
to simultaneously maximize all agents' utilities. In this paper we seek to find the set of all achievable
joint utilities (a vector of utilities, one for each player). This set is known as the feasible-set. Given
this goal we can reconstruct a proper multi-agent equivalent to the Bellman equation that operates
on feasible-sets for each state instead of values.
Murray and Gordon (2007) presented an algorithm for calculating the exact form of the feasible-set based Bellman equation and proved correctness and convergence; however, their algorithm is
not guaranteed to converge in a finite number of iterations. Worse, a particular iteration may not
be tractable. These are two separate problems. The first problem is caused by the intolerance of
an equilibrium to error, and the second results from a potential need for an unbounded number of
points to define the closed convex hull that is each state's feasible-set. We solve the first problem
by targeting ε-equilibria instead of exact equilibria, and we solve the second by approximating the
hull with a bounded number of points. Importantly, we achieve both solutions while bounding the
final error introduced by these approximations. Taken together this produces the first multi-agent
reinforcement learning algorithm with theoretical guarantees similar to single-agent value iteration.
2 Agenda
We model the world as a fully-observable n-player stochastic game with cheap talk (communication
between agents that does not affect rewards). Stochastic games (also called Markov games) are
the natural multi-agent extension of Markov decision processes with actions being joint actions and
rewards being a vector of rewards, one to each player. We assume an implicit inclusion of past joint
actions as part of state (we actually only rely on log2 n + 1 bits of history encoding whether, and which player, has
defected). We also assume that each player is rational in the game-theoretic sense.
Our goal is to produce a joint policy that is Pareto-optimal (no other viable joint policy gives a player
more utility without lowering another player's utility), fair (players agree on the joint policy), and
in equilibrium (no player can gain by deviating from the joint policy).1 This solution concept is the
game-theoretic solution.
We present the first approximation algorithm that can efficiently and provably converge to within
a given error of game-theoretic solution concepts for all such stochastic games. We factor out the
various game theoretic elements of the problem by taking in three functions which compute in turn:
the equilibrium Feq (such as correlated equilibrium), the threat Fth (such as grim trigger), and the
bargaining solution Fbs (such as the Nash bargaining solution). An error parameter ε1 controls the
degree of approximation. The final algorithm takes in a stochastic game, and returns a targeted
utility-vector and joint policy such that the policy achieves the targeted utility while guaranteeing
that the policy is an ε1/(1 − γ)-equilibrium (where γ is the discount factor) and there are no exact
equilibria that Pareto-dominate the targeted utility.
3 Previous approaches
Many attempts have been made to extend the Bellman equation to domains with multiple agents.
Most of these attempts have focused on retaining the idea of a value function as the memoized
solution to subproblems in Bellman's dynamic programming approach (Greenwald & Hall, 2003),
(Littman, 2001), (Littman, 2005). This has lead to a few successes particularly in the zero-sum case
where the same guarantees as standard reinforcement learning have been achieved (Littman, 2001).
Unfortunately, more general convergence results have not been achieved. Recently a negative result
has shown that any value function based approach cannot solve the general multi-agent scenario
(Littman, 2005). Consider a simple game (Figure 1-A):
[Figure 1 diagram: panel A shows the four-state game graph. In each middle state the acting player may pass (handing the turn to the other player) or exit; player 1 exiting yields reward {1, -2} and player 2 exiting yields reward {2, -1}. Panel B shows a feasible-set with extreme points (1, -0.5), (1.8, -0.9), and (1, -2).]
Figure 1: A) The Breakup Game demonstrates the limitation of traditional value-function based approaches.
Circles represent states, outgoing arrows represent deterministic actions. Unspecified rewards are zero. B) The
final feasible-set for player 1's state (γ = 0.9).
This game has four states with two terminal states. In the two middle states play alternates between
the two players until one of the players decides to exit the game. In this game the only equilibria
are stochastic (e.g. the randomized policy of each player passing and exiting with probability 1/2).
In each state only one of the agents takes an action, so an algorithm that depends only on a value
function will myopically choose to deterministically take the best action, and never converge to
the stochastic equilibrium. This result exposed the inadequacy of value functions to capture cyclic
equilibrium (where the equilibrium policy may revisit a state).
Several other complaints have been leveled against the motivation behind MAL research following
the Bellman heritage. One such complaint is that value function based algorithms inherently target
only stage-game equilibria and not full-game equilibria, potentially ignoring much better solutions
(Shoham & Grenager, 2006). Our approach solves this problem and allows a full-game equilibrium
to be reached. Another complaint goes even further, challenging the desire to even target equilibria
(Shoham et al., 2003). Game theorists have shown us that equilibrium solutions are correct when
agents are rational (infinitely intelligent), so the argument against targeting equilibria boils down
to either assuming other agents are not infinitely intelligent (which is reasonable) or that finding
equilibria is not computationally tractable (which we tackle here). We believe that although MAL
is primarily concerned with the case when agents are not fully rational, first assuming agents are
rational and subsequently relaxing this assumption will prove to be an effective approach.
1 The precise meaning of fair, and the type of equilibrium, is intentionally left unspecified for generality.
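To make the Breakup Game concrete, the following sketch evaluates the joint utility of any stochastic joint policy, with the rewards and γ = 0.9 taken from Figure 1. The closed form follows from the two-state linear recurrence; the code itself is our illustration, not the paper's.

import numpy as np

gamma = 0.9
r_exit1 = np.array([1.0, -2.0])   # joint reward if player 1 exits
r_exit2 = np.array([2.0, -1.0])   # joint reward if player 2 exits

def joint_utility(p1, p2):
    """Joint utility vector at player 1's state when player i passes
    with probability p_i and exits otherwise.  Solves
    V1 = (1-p1) r1 + p1*gamma*V2 and V2 = (1-p2) r2 + p2*gamma*V1."""
    return ((1 - p1) * r_exit1 + p1 * gamma * (1 - p2) * r_exit2) \
           / (1 - p1 * p2 * gamma ** 2)

print(joint_utility(1.0, 0.0))   # pass, then player 2 exits: (1.8, -0.9)
print(joint_utility(0.0, 1.0))   # player 1 exits immediately: (1.0, -2.0)
print(joint_utility(0.5, 0.5))   # a stochastic joint policy

The deterministic policies recover the extreme points of the feasible-set in Figure 1-B, while only stochastic policies reach the interior utilities that the equilibria here require.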
Murray and Gordon (2007) presented the first multidimensional extension to the Bellman equation
which overcame many of the problems mentioned above. In their later technical report (Murray &
Gordon, June 2007) they provided an exact solution equivalent to our solution targeting subgame
perfect correlated equilibrium with credible threats, while using the Nash bargaining solution for
equilibrium selection. In the same technical report they present an approximation method for their
exact algorithm that involved sampling the feasible-set. Their approach was a significant step forward; however, their approximation algorithm has no finite time convergence guarantees, and can
result in unbounded error.
4 Exact feasible-set solution
The key idea needed to extend reinforcement learning into multi-agent domains is to replace the
value-function, V(s), in Bellman's dynamic program with a feasible-set function: a mapping from
state to feasible-set. As a group of n agents follow a joint-policy, each player i receives rewards. The
discounted sum of these rewards is that player's utility, ui. The n-dimensional vector ~u containing
these utilities is known as the joint-utility. Thus a joint-policy yields a joint-utility which is a point
in n-dimensional space. If we examine all (including stochastic) joint-policies starting from state
s, discard those not in equilibrium, and compute the remaining joint-utilities we will have a set of
n-dimensional points - the feasible-set. This set is closed and convex, and can be thought of as
an n-dimensional convex polytope. As this set contains all possible joint-utilities, it will contain
the optimal joint-utility for any definition of optimal (the bargaining solution Fbs will select the
utility vector it deems optimal). After an optimal joint-utility has been chosen, a joint-policy can
be constructed to achieve that joint-utility using the computed feasible-sets (Murray & Gordon,
June 2007). Recall that agents care only about the utility they achieve and not the specific policy
used. Thus computing the feasible-set function solves stochastic games, just as computing the value
function solves MDPs.
Figure 1-B shows a final feasible-set in the breakup game. The set is a closed convex hull with
extreme points (1, -0.5), (1, -2), and (1.8, -0.9). This feasible-set depicts the fact that when
starting in player 1's state any full game equilibria will result in a joint-utility that is some weighted
average of these three points. For example the players can achieve (1, -0.5) by having player 1
always pass and player 2 exit with probability 0.55. If player 2 tries to cheat by passing when
they are supposed to exit, player 1 will immediately exit in retaliation (recall that history is implicitly
included in state).
An exact dynamic programming solution falls out naturally after replacing the value-function in Bellman's dynamic program with a feasible-set function; however, the changes in variable dimension
complicate the backup. An illustration of the modified backup is shown in Figure 2, where steps
A-C solve for the action-feasible-set (Q(s, ~a)), and steps D-E solve for V (s) given Q(s, ~a). What
is not depicted in Figure 2 is the process of eliminating non-equilibrium policies in steps D-E. We
assume an equilibrium filter function Feq is provided to the algorithm, which is applied to eliminate
non-equilibrium policies. Details of this process are given in section 5.4. The final dynamic program
starts by initializing each feasible-set to be some large over-estimate (a hypercube of the maximum
and minimum utilities possible for each player). Each iteration of the backup then contracts the
feasible-sets, eliminating unachievable utility-vectors. Eventually the algorithm converges and only
achievable joint-utilities remain. The invariant of feasible-sets always overestimating is crucial for
guaranteeing correctness, and is a point of great concern below. A more detailed examination of
the exact algorithm including a formal treatment of the backup, various game theoretic issues, and
convergence proofs are given in Murray and Gordon's technical report (June 2007). This paper does
not focus on the exact solution, instead focusing on creating a tractable generalized version.
5 Making a tractable algorithm
There are a few serious computational bottlenecks in the exact algorithm. The first problem is
that the size of the game itself is exponential in the number of agents, because joint actions are
exponential in the number of players. This problem is unavoidable unless we approximate the game,
which is outside the scope of this paper. The second problem is that although the exact algorithm
always converges, it is not guaranteed to converge in finite time (during the equilibrium backup,
an arbitrarily small update can lead to a drastically large change in the resulting contracted set). A
third big problem is that maintaining an exact representation of a feasible-set becomes unwieldy (the
number of faces of the polytope may blow up, such as if it is curved).
Figure 2: An example of the backup step (one iteration of our modified Bellman equation). The state
shown being calculated is an initial rock-paper-scissors game played to decide who goes first in the
breakup game from Figure 1. A tie results in a random winner. The backup shown depicts the 2nd
iteration of the dynamic program when feasible-sets are initialized to (0,0) and binding contracts
are allowed (Feq = set union). In step A the feasibility sets of the two successor states are shown
graphically. For each combination of points from each successor state the expected value is found
(in this case 1/2 of the bottom and 1/2 of the top). These points are shown in step B as circles. Next,
in step C, the minimum encircling polygon is found. This feasibility region is then scaled by the
discount factor and translated by the immediate reward. This is the feasibility-set of a particular
joint action from our original state. The process is repeated for each joint action in step D. Finally,
in step E, the feasible outcomes of all joint actions are fed into Feq to yield the updated feasibility
set of our state.
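A naive Python sketch of steps A-C of this backup for a single joint action, assuming feasible-sets are stored as vertex arrays. The enumeration over vertex combinations is exponential (the MOLP of section 5.3 avoids this), and degenerate point sets would need qhull's joggle option; the function name and interface are ours.

import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def backup_joint_action(successor_sets, probs, reward, gamma):
    """Steps A-C for one joint action: pick one vertex per successor
    feasible-set, take the probability-weighted average (step B),
    hull the results (step C), then discount and translate.

    successor_sets: list of (n_vertices_j, n_players) vertex arrays
    probs:          transition probabilities, one per successor
    reward:         immediate joint reward vector
    """
    candidates = [sum(p * v for p, v in zip(probs, combo))
                  for combo in product(*successor_sets)]
    pts = reward + gamma * np.array(candidates)
    hull = ConvexHull(pts)   # minimum enclosing polytope of the points
    return pts[hull.vertices]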
Two important modifications to the exact algorithm allow us to make the algorithm tractable: approximating the feasible-sets with a bounded number of vertices, and adding a stopping criterion.
Our approach is to approximate the feasible-set at the end of each iteration after first calculating it
exactly. The degree of approximation is captured by a user-specified parameter, ε1. The approximation scheme yields a solution that is an ε1/(1 − γ)-equilibrium of the full game while guaranteeing
there exists no exact equilibrium that Pareto-dominates the solution's utility. This means that despite
not being able to calculate the true utilities at each stage game, if other players did know the true utilities they would gain no more than ε1/(1 − γ) by defecting. Moreover our approximate solution is
as good as or better than any true equilibrium. By targeting an ε1/(1 − γ)-equilibrium we do not mean
that the backup's equilibrium filter function Feq is an ε-equilibrium (it could be, although making it
such would do nothing to alleviate the convergence problem). Instead we apply the standard filter
function but stop if no feasible-set has changed by more than ε1.
5.1 Consequences of a stopping criterion
Recall we have added a criterion to stop when all feasible-sets contract by less than ε1 (in terms
of Hausdorff distance). This is added to ensure that the algorithm makes ε1 absolute progress each
iteration and thus will take no more than O(1/ε1) iterations to converge. After our stopping criterion
is triggered the total error present in any state is no more than ε1/(1 − γ) (i.e. if agents followed
a prescribed policy they would find their actual rewards to be no more than ε1/(1 − γ) less than
promised). Therefore the feasible-sets must represent at least an ε1/(1 − γ)-equilibrium. In other words, after
a backup each feasible-set is in equilibrium (according to the filter function) with respect to the
previous iteration's estimation. If that previous estimation is off by at most ε1/(1 − γ) then the most
any one player could gain by deviating is ε1/(1 − γ). Because we are only checking for a stopping
condition, and not explicitly targeting the ε1/(1 − γ)-equilibrium in the backup, we cannot guarantee
that the algorithm will terminate with the best ε1/(1 − γ)-equilibrium. Instead we can guarantee that
when we do terminate we know that our feasible-sets contain all equilibria satisfying our original
equilibrium filter and no equilibrium with incentive greater than ε1/(1 − γ) to deviate.
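A sketch of the outer loop implied by this stopping criterion; `backup` stands in for the full equilibrium backup of Figure 2, and the interface (vertex arrays per state) is our assumption, not the authors' code.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two vertex arrays."""
    return max(directed_hausdorff(P, Q)[0], directed_hausdorff(Q, P)[0])

def solve(states, initial_set, backup, eps1, max_iters=10000):
    """Outer loop of the dynamic program.  initial_set(s) returns the
    conservative hyperrectangle for state s; backup(V, s) performs the
    full equilibrium backup and returns the new vertex array.  Stops
    once no state's feasible-set moves by more than eps1."""
    V = {s: initial_set(s) for s in states}
    for _ in range(max_iters):
        V_new = {s: backup(V, s) for s in states}
        if max(hausdorff(V[s], V_new[s]) for s in states) < eps1:
            return V_new
        V = V_new
    return V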
5.2 Bounding the number of vertices
Bounding the number of points defining each feasible-set is crucial for achieving a tractable algorithm. At the end of each iteration we can replace each state feasible-set (V (s)) with an N point
approximation. The computational geometry literature is rich with techniques for approximating
convex hulls. However, we want to ensure that our feasible estimation is always an over-estimation
and not an under-estimation, otherwise the equilibrium contraction step may erroneously eliminate
valid policies. Also, we need the technique to work in arbitrary dimensions and guarantee a bounded
number of vertices for a given error bound. A number of recent algorithms meet these conditions
and provide efficient running times and optimal worst-case performance (Lopez & Reisner, 2002),
(Chan, 2003), (Clarkson, 1993).
Despite the nice theoretical performance and error guarantees of these algorithms they admit a potential problem. The approximation step is controlled by a parameter ε2 (0 < ε2 < ε1) determining
the maximum tolerated error induced by the approximation. This error results in an expansion of
the feasible-set by at most ε2. On the other hand, by targeting ε1-equilibria we can terminate if
the backups fail to make ε1 progress. Unfortunately this ε1 progress is not uniform and may not
affect much of the feasible-set. If this is the case, the approximation expansion could potentially
expand past the original feasible-set (thus violating our need for progress to be made every iteration,
see Figure 3-A). Essentially our approximation scheme must also ensure that it is a subset of the
previous step's approximation. With this additional constraint in mind we develop the following
approximation inspired by (Chen, 2005):
Figure 3: A) (I) Feasible hull from previous iteration. (II) Feasible hull after equilibrium contraction.
The set contracts at least ε1. (III) Feasible hull after a poor approximation scheme. The set expands
at most ε2, but might sabotage progress. B) The hull from A-I is approximated using halfspaces
from a given regular approximation of a Euclidean ball. C) Subsequent approximations using the
same set of halfspaces will not backtrack.
We take a fixed set of hyperplanes which form a regular approximation of a Euclidean ball such that
the hyperplanes' normals form an angle of at most θ with their neighbors (e.g. an optimal Delaunay
triangulation). We then project these halfspaces onto the polytope we wish to approximate (i.e.
retain each hyperplane's normal but reduce its offset until it touches the given polytope). After
removing redundant hyperplanes the resulting polytope is returned as the approximation (Figure 3-B). To ensure a maximum error of ε2 with n players: θ ≤ 2 arccos[(r/(ε2 + r))^{1/n}] where r =
Rmax/(1 − γ).
The scheme trivially uses a bounded number of facets (only those from the predetermined set), and
hence a bounded number of vertices. Finally, by using a fixed set of approximating hyperplanes,
successive approximations will strictly be subsets of each other: no hyperplane will move farther
away when the set it is projecting onto shrinks (Figure 3-C). After both the ε1-equilibrium contraction
step and the ε2 approximation step we can guarantee at least ε1 − ε2 progress is made. Although
the final error depends only on ε1 and not ε2, the rate of convergence and the speed of each iteration
are heavily influenced by ε2. Our experiments (section 6) suggest that the theoretical requirement of
ε2 < ε1 is far too conservative.
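A sketch of the projection step, assuming the polytope is given by its vertices: each fixed normal keeps its direction, and its offset is set by the support function of the polytope. The function name is ours.

import numpy as np

def outer_approximation(vertices, normals):
    """Project a fixed set of halfspace normals onto a polytope: keep
    each normal but shrink its offset until the halfspace touches the
    polytope.  The intersection of the returned halfspaces contains
    the polytope and, because the normals are fixed, reusing them on a
    shrinking set never backtracks.  (Redundant halfspaces can then be
    pruned in a separate step.)"""
    # Support function: offset_i = max over vertices of <normal_i, v>.
    offsets = (normals @ vertices.T).max(axis=1)
    return normals, offsets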
5.3 Computing expected feasible-sets
Another difficulty occurs during the backup of Q(s, ~a). Finding the expectation over feasible-sets
involves a modified set sum (step B in Figure 2), which naively requires an exponential loop over
all possible combinations of taking one point from the feasible-set of each successor state. We can
help the problem by applying the set sum to an initial two sets and folding subsequent sets into the
result. This leads to polynomial performance, but of an uncomfortably high degree. Instead we can
describe the problem as the following multiobjective linear program (MOLP):
Simultaneously maximize for each player i from 1 to n:
$$ \sum_{s'} \sum_{\vec v \in V(s')} v_i \, x_{s'\vec v} $$
Subject to: for every state s',
$$ \sum_{\vec v \in V(s')} x_{s'\vec v} = P(s' \mid s, \vec a) $$
where we maximize over variables $x_{s'\vec v}$ (one for each $\vec v \in V(s')$ for all $s'$), $\vec v$ is a vertex in the
feasible-set $V(s')$, and $v_i$ is the value of that vertex to player i. This returns only the Pareto frontier.
An optimized version of the algorithm described in this paper would only need the frontier, not the
full set, as calculating the frontier depends only on the frontier (unless the threat function needs the
entire set). For the full feasible-set $2^n$ such MOLPs are needed, one for each orthant.
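The paper solves these MOLPs exactly (with the Evans-Steuer algorithm via ADBASE; see section 6). As a simpler hedged sketch, a single Pareto-frontier point of the expectation MOLP can be recovered by fixing a positive weighting of the players' objectives and solving an ordinary LP; the data-structure assumptions here are ours.

import numpy as np
from scipy.optimize import linprog

def frontier_point(weights, succ_vertices, succ_probs):
    """One Pareto-frontier point, found by maximizing a fixed positive
    weighting of the players' objectives.

    succ_vertices: dict s' -> (k_{s'}, n_players) vertex array V(s')
    succ_probs:    dict s' -> P(s'|s, a)
    """
    states = list(succ_vertices)
    # One variable x_{s'v} per vertex of each successor feasible-set;
    # linprog minimizes, so negate the weighted objective.
    c = -np.concatenate([succ_vertices[s] @ weights for s in states])
    total = sum(len(succ_vertices[s]) for s in states)
    A_eq, b_eq, col = [], [], 0
    for s in states:
        row = np.zeros(total)           # per successor: weights sum to P(s'|s,a)
        row[col:col + len(succ_vertices[s])] = 1.0
        col += len(succ_vertices[s])
        A_eq.append(row)
        b_eq.append(succ_probs[s])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    # Recover the joint utility achieved by this vertex mixture.
    util, col = np.zeros(len(weights)), 0
    for s in states:
        k = len(succ_vertices[s])
        util += res.x[col:col + k] @ succ_vertices[s]
        col += k
    return util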
Like our modified view of the Bellman equation as trying to find the entire set of achievable policy
payoffs, so too can we view linear programming as trying to find the entire set of achievable values
of the objective function. When there is a single objective function this is simply a maximum
and minimum value. When there is more than one objective function the solution then becomes
a multidimensional convex set of achievable vectors. This problem is known as multiobjective
linear programming and has been previously studied by a small community of operation researchers
under the umbrella subject of multiobjective optimization (Branke et al., 2005). MOLP is formally
defined as a technique to find the Pareto frontier of a set of linear objective functions subject to linear
inequality constraints. The most prominent exact method for MOLP is the Evans-Steuer algorithm
(Branke et al., 2005).
5.4 Computing correlated equilibria of sets
Our generalized algorithm requires an equilibrium-filter function Feq. Formally this is a monotonic
function Feq : P(R^n) × … × P(R^n) → P(R^n) which outputs a closed convex subset of the
smallest convex set containing the union of the input sets. Here P denotes the powerset. It is
monotonic as x ⊆ y ⇒ Feq(x) ⊆ Feq(y). The threat function Fth is also passed to Feq. Note
that requiring Feq to return a closed convex set disqualifies Nash equilibria and their refinements.
Due to the availability of cheap talk, reasonable choices for Feq include correlated equilibria (CE),
ε-CE, or a coalition-resistant variant of CE. Filtering non-equilibrium policies takes place when the
various action feasible-sets (Q) are merged together as shown in step E of Figure 2. Constructing
Feq is more complicated than computing the equilibria for a stage game so we describe below how
to target CE.
For a normal-form game the set of correlated equilibria can be determined by taking the intersection
of a set of halfspaces (linear inequality constraints) (Greenwald & Hall, 2003). Each variable of
these halfspaces represents the probability that a particular joint action is chosen (via a shared random variable) and each halfspace represents a rationality constraint that a player being told to take
one action would not want to switch to another action. There are $\sum_{i=1}^{n} |A_i|(|A_i| - 1)$ such rationality
constraints (where |Ai| is the number of actions player i can take).
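For this normal-form special case, the CE polytope can be searched directly with an LP. The sketch below (using scipy rather than the GLPK stack of section 6, so not the paper's implementation) returns one correlated equilibrium of a two-player stage game:

import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(R1, R2):
    """One CE of a 2-player normal-form game with payoff matrices
    R1, R2 of shape (|A1|, |A2|).  Variables are joint-action
    probabilities; each rationality constraint says a player told to
    play a cannot gain by switching to alt."""
    n1, n2 = R1.shape
    idx = lambda a1, a2: a1 * n2 + a2
    A_ub, b_ub = [], []
    for a1 in range(n1):                  # player 1's constraints
        for alt in range(n1):
            if alt == a1:
                continue
            row = np.zeros(n1 * n2)
            for a2 in range(n2):
                row[idx(a1, a2)] = R1[alt, a2] - R1[a1, a2]
            A_ub.append(row); b_ub.append(0.0)
    for a2 in range(n2):                  # player 2's constraints
        for alt in range(n2):
            if alt == a2:
                continue
            row = np.zeros(n1 * n2)
            for a1 in range(n1):
                row[idx(a1, a2)] = R2[a1, alt] - R2[a1, a2]
            A_ub.append(row); b_ub.append(0.0)
    res = linprog(np.zeros(n1 * n2), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), A_eq=[np.ones(n1 * n2)],
                  b_eq=[1.0], bounds=(0, 1))   # probabilities sum to 1
    return res.x.reshape(n1, n2)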
Unlike in a normal-form game, the rewards for following the correlation device or defecting (switching actions) are not directly given in our dynamic program. Instead we have a feasible-set of possible
outcomes for each joint action Q(s, ~a) and a threat function Fth . Recall that when following a policy to achieve a desired payoff, not only must a joint action be given, but also subsequent payoffs
for each successor state. Thus the halfspace variables must not only specify probabilities over joint
actions but also the subsequent payoffs (a probability distribution over the extreme points of each
successor feasible-set). Luckily, a mixture
P of probability distributions is still a probability distribution so our final halfspaces now have ~a |Q(s, ~a)| variables (we still have the same number of
halfspaces with the same meaning as before).
At the end of the day we do not want feasible probabilities over successor states, we want the
utility-vectors afforded by them. To achieve this without having to explicitly construct the polytope
described above (which can be exponential in the number of halfspaces) we can describe the problem
as the following MOLP (given Q(s, ~a) and Fth ):
Simultaneously maximize for each player i from 1 to n:
$$ \sum_{\vec a, \vec u} u_i \, x_{\vec a \vec u} $$
Subject to: probability constraints $\sum_{\vec a, \vec u} x_{\vec a \vec u} = 1$ and $x_{\vec a \vec u} \ge 0$,
and for each player i and actions $a_1, a_2 \in A_i$ ($a_2 \ne a_1$):
$$ \sum_{\vec a \vec u \,\mid\, a_i = a_1} u_i \, x_{\vec a \vec u} \;\ge\; \sum_{\vec a \vec u \,\mid\, a_i = a_2} F_{th}(s, \vec a) \, x_{\vec a \vec u} $$
where variables x~a~u represent the probability of choosing joint action ~a and subsequent payoff
~u ? Q(s, ~a) in state s and ui is the utility to player i.
5.5 Proof of correctness
Murray and Gordon (June 2007) proved correctness and convergence for the exact algorithm by
proving four properties: 1) Monotonicity (feasible-sets only shrink), 2) Achievability (after convergence, feasible-sets contain only achievable joint-utilities), 3) Conservative initialization (initialization is an over-estimate), and 4) Conservative backups (backups don't discard valid joint-utilities).
We show that our approximation algorithm maintains these properties.
1) Our feasible-set approximation scheme was carefully constructed so that it would not permit
backtracking, maintaining monotonicity (all other steps of the backup are exact). 2) We have broadened the definition of achievability to permit ε1/(1 − γ) error. After all feasible-sets shrink by less
than ε1 we could modify the game by giving a bonus reward less than ε1 to each player in each state
(equal to that state's shrinkage). This modified game would then have converged exactly (and thus
would have a perfectly achievable feasible-set, as proved by Murray and Gordon). Any joint-policy
of the modified game will yield at most ε1/(1 − γ) more than the same joint-policy of our original
game, thus all utilities of our original game are off by at most ε1/(1 − γ). 3) Conservative initialization
is identical to the exact solution (start with a huge hyperrectangle with sides R^i_max/(1 − γ)).
4) Backups remain conservative as our approximation scheme never underestimates (as shown in
section 5.2) and our equilibrium filter function Feq is required to be monotonic and thus will never
underestimate if operating on overestimates (this is why we require monotonicity of Feq). CE over
sets as presented in section 5.4 is monotonic. Thus our algorithm maintains the four crucial properties and terminates with all exact equilibria (as per conservative backups) while containing no
equilibrium with error greater than ε1/(1 − γ).
6 Empirical results
We implemented a version of our algorithm targeting exact correlated equilibrium using grim trigger
threats (defection is punished to the maximum degree possible by all other players, even at one's
own expense). The grim trigger threat reduces to a 2-person zero-sum game where the defector
receives their normal reward and all other players receive the opposite reward. Because the other
players receive the same reward in this game they can be viewed as a single entity. Zero-sum 2-player stochastic games can be quickly solved using FFQ-Learning (Littman, 2001). Note that grim
trigger threats can be computed separately before the main algorithm is run. When computing the
threats for each joint action, we use the GNU Linear Programming Kit (GLPK) to solve the zero-sum
stage games. Within the main algorithm itself we use ADBASE (Steuer, 2006) to solve our various
MOLPs. Finally we use QHull (Barber et al., 1995) to compute the convex hull of our feasible-sets
and to determine the normals of the set's facets. We use these normals to compute the approximation.
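The per-stage zero-sum solve behind the grim trigger threat is the standard minimax LP. A sketch follows, using scipy's linprog rather than GLPK, so it is an illustration rather than the paper's actual implementation:

import numpy as np
from scipy.optimize import linprog

def punishment_value(M):
    """Grim-trigger threat level for one stage game: the punishing
    coalition (rows) picks a mixed strategy p minimizing the
    defector's best response, min_p max_j (p^T M)_j.
    Returns the game value and the coalition's strategy."""
    n, m = M.shape
    # Variables: [p_1..p_n, v]; minimize v subject to (p^T M)_j <= v.
    c = np.concatenate([np.zeros(n), [1.0]])
    A_ub = np.hstack([M.T, -np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]   # sum(p) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1], res.x[:n]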
To improve performance our implementation does not compute the entire feasible hull, only those
points on the Pareto frontier. A final policy will exclusively choose targets from the frontier (using
Fbs ) (as will the computed intermediate equilibria) so we lose nothing by ignoring the rest of the
feasible-set (unless the threat function requires other sections of the feasible-set, for instance in the
case of credible threats). In other words, when computing the Pareto frontier during the backup the
algorithm relies on no points except those of the previous step's Pareto frontier. Thus computing
only the Pareto frontier at each iteration is not an approximation, but an exact simplification.
We tested our algorithm on a number of problems with known closed form solutions, including the
breakup game (Figure 4). We also tested the algorithm on a suite of random games varying across the
number of states, number of players, number of actions, number of successor states (stochasticity of
the game), coarseness of approximation, and density of rewards. All rewards were chosen at random
between 1 and -1, and γ was always set to 0.9.
[Figure 4 image: feasible-sets for the terminal state and player 1's state, shown at iterations 0, 10, 20, 30, 40, and 50.]
Figure 4: A visualization of feasible-sets for the terminal state and player 1's state of the breakup
game at various iterations of the dynamic program. By the 50th iteration the sets have converged.
An important empirical question is what degree of approximation should be adopted. Our testing
(see Figure 5) suggests that the theoretical requirement of ε2 < ε1 is overly conservative. While the
bound on ε2 is theoretically proportional to Rmax/(1 − γ) (the worst-case scale of the feasible-set),
a more practical choice for ε2 would be in scale with the final feasible-sets (as should a choice for
ε1).
[Figure 5 image: three panels plotted over 100 iterations, one curve per approximation level (120, 36, 12, and 6 hyperplanes): A) Feasible Set Size, B) Wall Clock Time (seconds), C) Average Set Change Each Iteration.]
Figure 5: Statistics from a random game (100 states, 2 players, 2 actions each, with ε1 = 0.02)
run with different levels of approximation. The numbers shown (120, 36, 12, and 6) represent the
number of predetermined hyperplanes used to approximate each Pareto frontier. A) The better approximations only use a fraction of the hyperplanes available to them. B) Wall clock time is directly
proportional to the size of the feasible-sets. C) Better approximations converge more each iteration
(the coarser approximations have a longer tail), however due to the additional computational costs
the 12-hyperplane approximation converged quickest (in total wall time). The 6, 12, and 36 hyperplane approximations are insufficient to guarantee convergence (ε2 = 0.7, 0.3, 0.1 respectively) yet
only the 6-face approximation occasionally failed to converge.
6.1 Limitations
Our approach is overkill when the feasible-sets are one dimensional (line segments) (as when the
game is zero-sum, or agents share a reward function), because CE-Q learning will converge to the
correct solution without additional overhead. When there are no cycles in the state-transition graph
(or one does not wish to consider cyclic equilibria) traditional game-theory approaches suffice. In
more general cases, our algorithm brings significant advantages. However despite scaling linearly
with the number of states, the multiobjective linear program for computing the equilibrium hull
scales very poorly. The MOLP remains tractable only up to about 15 joint actions (which results in
a few hundred variables and a few dozen constraints, depending on feasible-set size). This in turn
prevents the algorithm from running with more than four agents.
References
Barber, C. B., Dobkin, D. P., & Huhdanpaa, H. (1995). The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software, 22, 469–483.
Branke, J., Deb, K., Miettinen, K., & Steuer, R. E. (Eds.). (2005). Practical approaches to multiobjective optimization, 7.–12. November 2004, vol. 04461 of Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum (IBFI), Schloss Dagstuhl, Germany.
Chan, T. M. (2003). Faster core-set constructions and data stream algorithms in fixed dimensions. Comput. Geom. Theory Appl. (pp. 152–159).
Chen, L. (2005). New analysis of the sphere covering problems and optimal polytope approximation of convex bodies. J. Approx. Theory, 133, 134–145.
Clarkson, K. L. (1993). Algorithms for polytope covering and approximation, and for approximate closest-point queries.
Greenwald, A., & Hall, K. (2003). Correlated-Q learning. Proceedings of the Twentieth International Conference on Machine Learning (pp. 242–249).
Littman, M. L. (2001). Friend-or-foe Q-learning in general-sum games. Proc. 18th International Conf. on Machine Learning (pp. 322–328). Morgan Kaufmann, San Francisco, CA.
Zinkevich, M., Greenwald, A., & Littman, M. L. (2005). Cyclic equilibria in Markov games. Proceedings of Neural Information Processing Systems. Vancouver, BC, Canada.
Lopez, M. A., & Reisner, S. (2002). Linear time approximation of 3D convex polytopes. Comput. Geom. Theory Appl., 23, 291–301.
Murray, C., & Gordon, G. (June 2007). Finding correlated equilibria in general sum stochastic games (Technical Report). School of Computer Science, Carnegie Mellon University.
Murray, C., & Gordon, G. J. (2007). Multi-robot negotiation: Approximating the set of subgame perfect equilibria in general-sum stochastic games. In B. Schölkopf, J. Platt and T. Hoffman (Eds.), Advances in neural information processing systems 19, 1001–1008. Cambridge, MA: MIT Press.
Shoham, Y., Powers, R., & Grenager, T. (2006). If multi-agent learning is the answer, what is the question? Artificial Intelligence.
Shoham, Y., Powers, R., & Grenager, T. (2003). Multi-agent reinforcement learning: a critical survey (Technical Report).
Steuer, R. E. (2006). ADBASE: A multiple objective linear programming solver for efficient extreme points and unbounded efficient edges.
Submanifold density estimation
Alexander Gray
College of Computing
Georgia Institute of Technology
[email protected]
Arkadas Ozakin
Georgia Tech Research Institute
Georgia Institute of Technology
[email protected]
Abstract
Kernel density estimation is the most widely-used practical method for accurate
nonparametric density estimation. However, long-standing worst-case theoretical
results showing that its performance worsens exponentially with the dimension
of the data have quashed its application to modern high-dimensional datasets for
decades. In practice, it has been recognized that often such data have a much
lower-dimensional intrinsic structure. We propose a small modification to kernel density estimation for estimating probability density functions on Riemannian
submanifolds of Euclidean space. Using ideas from Riemannian geometry, we
prove the consistency of this modified estimator and show that the convergence
rate is determined by the intrinsic dimension of the submanifold. We conclude
with empirical results demonstrating the behavior predicted by our theory.
1 Introduction: Density estimation and the curse of dimensionality
Kernel density estimation (KDE) [8] is one of the most popular methods for estimating the underlying probability density function (PDF) of a dataset. Roughly speaking, KDE consists of having the data points "contribute" to the estimate at a given point according to their distances from the point. In the simplest multi-dimensional KDE [3], the estimate $\hat{f}_m(y_0)$ of the PDF $f(y_0)$ at a point $y_0 \in \mathbb{R}^N$ is given in terms of a sample $\{y_1, \ldots, y_m\}$ as,
$$\hat{f}_m(y_0) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{h_m^N} K\!\left(\frac{\|y_i - y_0\|}{h_m}\right), \qquad (1)$$
where $h_m > 0$, the bandwidth, is chosen to approach zero at a suitable rate as the number $m$ of data points increases, and $K : [0,\infty) \to [0,\infty)$ is a kernel function that satisfies certain properties such as boundedness. Various theorems exist on the different types of convergence of
the estimator to the correct result and the rates of convergence. The earliest result on the pointwise
convergence rate in the multivariable case seems to be given in [3], where it is stated that under
certain conditions for $f$ and $K$, assuming $h_m \to 0$ and $m h_m^N \to \infty$ as $m \to \infty$, the mean squared error in the estimate $\hat{f}_m(y_0)$ of the density at a point goes to zero at the rate $\mathrm{MSE}[\hat{f}_m(y_0)] = E\big[(\hat{f}_m(y_0) - f(y_0))^2\big] = O\big(h_m^4 + \tfrac{1}{m h_m^N}\big)$ as $m \to \infty$. If $h_m$ is chosen to be proportional to $m^{-1/(N+4)}$, one gets,
$$\mathrm{MSE}[\hat{f}_m(p)] = O\!\left(\frac{1}{m^{4/(N+4)}}\right), \qquad (2)$$
as $m \to \infty$. This is an example of a curse of dimensionality; the convergence rate slows as the
dimensionality N of the data set increases. In Table 4.2 of [12], Silverman demonstrates how the
sample size required for a given mean square error for the estimate of a multivariable normal distribution increases with the dimensionality. The numbers look as discouraging as the formula 2.
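To make Equation 1 concrete, here is a minimal sketch (our own illustration in Python with NumPy; the paper itself gives no code) that evaluates $\hat{f}_m(y_0)$ with a Gaussian kernel, a standard choice satisfying $\int_{\mathbb{R}^N} K(\|z\|)\, d^N z = 1$, using the $m^{-1/(N+4)}$ bandwidth rate quoted above:

import numpy as np

def kde(y0, Y, h):
    # Equation 1: f_hat(y0) = (1/m) * sum_i (1/h^N) * K(||y_i - y0|| / h)
    m, N = Y.shape
    t = np.linalg.norm(Y - y0, axis=1) / h
    K = (2 * np.pi) ** (-N / 2) * np.exp(-0.5 * t ** 2)  # Gaussian kernel
    return K.sum() / (m * h ** N)

# example: estimate a standard normal density at the origin
rng = np.random.default_rng(0)
m, N = 10000, 2
Y = rng.standard_normal((m, N))
h = m ** (-1.0 / (N + 4))      # bandwidth proportional to m^{-1/(N+4)}
print(kde(np.zeros(N), Y, h))  # should be near (2*pi)^{-1} ~ 0.159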
One source of optimism towards various curses of dimensionality is the fact that although the data
for a given problem may have many features, in reality the intrinsic dimensionality of the "data subspace" of the full feature space may be low. This may result in there being no curse at all, if
the performance of the method/algorithm under consideration can be shown to depend only on the
intrinsic dimensionality of the data. Alternatively, one may be able to avoid the curse by devising
ways to work with the low-dimensional data subspace by using dimensional reduction techniques
on the data. One example of the former case is the results on nearest neighbor search [6, 2] which
indicate that the performance of certain nearest-neighbor search algorithms is determined not by the
full dimensionality of the feature space, but only on the intrinsic dimensionality of the data subspace.
Riemannian manifolds. In this paper, we will assume that the data subspace is a Riemannian
manifold. Riemannian manifolds provide a generalization of the notion of a smooth surface in R3
to higher dimensions. As first clarified by Gauss in the two-dimensional case (and by Riemann in
the general case) it turns out that intrinsic features of the geometry of a surface such as lengths of
its curves or intrinsic distances between its points, etc., can be given in terms of the so-called metric
tensor1 g without referring to the particular way the the surface is embedded in R3 . A space whose
geometry is defined in terms of a metric tensor is called a Riemannian manifold (for a rigorous
definition, see, e.g., [5, 7, 1]).
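As a standard worked example (ours, not taken from the text): parametrize the unit sphere $S^2 \subset \mathbb{R}^3$ by $(\theta, \phi) \mapsto (\sin\theta \cos\phi,\, \sin\theta \sin\phi,\, \cos\theta)$. Pulling back the Euclidean metric of $\mathbb{R}^3$ gives
$$ds^2 = d\theta^2 + \sin^2\theta\, d\phi^2,$$
so $g_{\theta\theta} = 1$, $g_{\phi\phi} = \sin^2\theta$, and $g_{\theta\phi} = 0$. Curve lengths and geodesic distances on the sphere then follow from $g$ alone, with no further reference to the ambient embedding.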
Previous work. In [9], Pelletier defines an estimator of a PDF on a Riemannian manifold M by
using the distances measured on M via its metric tensor, and obtains the same convergence rate
as in (2), with N being replaced by the dimensionality of the Riemannian manifold. Thus, if we
know that the data lives on a Riemannian manifold M , the convergence rate of this estimator will
be determined by the dimensionality of M , instead of the full dimensionality of the feature space
on which the data may have been originally sampled. While an interesting generalization of the
usual KDE, this approach assumes that the data manifold M is known in advance, and that we have
access to certain geometric quantities related to this manifold such as intrinsic distances between
its points and the so-called volume density function. Thus, this Riemannian KDE cannot be used
directly in a case where the data lives on an unknown Riemannian submanifold of RN . Certain tools
from existing nonlinear dimensionality reduction methods could perhaps be utilized to estimate
the quantities needed in the estimator of [9], however, a more straightforward method that directly
estimates the density of the data as measured in the subspace is desirable.
Other related works include [13], where the authors propose a submanifold density estimation
method that uses a kernel function with a variable covariance but do not present theoretical results, [4] where the author proposes a method for doing density estimation on a Riemannian manifold by using the eigenfunctions of the Laplace-Beltrami operator, which, as in [9], assumes that
the manifold is known in advance, together with intricate geometric information pertaining to it, and
[10, 11], which discuss various issues related to statistics on a Riemannian manifold.
This paper. In this paper, we propose a direct way to estimate the density of Euclidean data that
lives on a Riemannian submanifold of $\mathbb{R}^N$ with known dimension $n < N$. We prove the pointwise
consistency of the estimator, and prove bounds on its convergence rates given in terms of the intrinsic
dimension of the submanifold the data lives in. This is an example of the avoidance of the curse of
dimensionality in the manner mentioned above, by a method whose performance depends on the
intrinsic dimensionality of the data instead of the full dimensionality of the feature space. Our
method is practical in that it works with Euclidean distances on $\mathbb{R}^N$. In particular, we do not assume
any knowledge of the quantities pertaining to the intrinsic geometry of the underlying submanifold
such as its metric tensor, geodesic distances between its points, its volume form, etc.
2 The estimator and its convergence rate
Motivation. In this paper, we are concerned with the estimation of a PDF that lives on an (unknown) $n$-dimensional Riemannian submanifold $M$ of $\mathbb{R}^N$, where $N > n$. The usual $N$-dimensional kernel density estimation would not work for this problem, since if interpreted as living on $\mathbb{R}^N$, the underlying PDF would involve a "delta function" that vanishes when one moves away from $M$, and "becomes infinite" on $M$ in order to have proper normalization. More formally, the $N$-dimensional probability measure for such an $n$-dimensional PDF on $M$ will have support only on $M$, will not be absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^N$, and will not have a probability density function on $\mathbb{R}^N$. If one attempts to use the usual, $N$-dimensional KDE for data drawn from such a probability measure, the estimator will "try to converge" to a singular PDF, one that is infinite on $M$, zero outside.
$^1$The metric tensor can be thought of as giving the "infinitesimal distance" $ds$ between two points whose coordinates differ by the infinitesimal amounts $(dy^1, \ldots, dy^N)$ as $ds^2 = \sum_{ij} g_{ij}\, dy^i dy^j$.
In order to estimate the probability density function on $M$ by using data given in $\mathbb{R}^N$, we propose a simple modification of usual KDE on $\mathbb{R}^N$, namely, to use a kernel that is normalized for $n$ dimensions instead of $N$, while still using the Euclidean distances in $\mathbb{R}^N$. The intuition behind this approach is based on three facts: 1) For small distances, an $n$-dimensional Riemannian manifold "looks like" $\mathbb{R}^n$, and densities in $\mathbb{R}^n$ should be estimated by an $n$-dimensional kernel, 2) For points of $M$ that are close enough to each other, the intrinsic distances as measured on $M$ are close to Euclidean distances as measured in $\mathbb{R}^N$, and, 3) For small bandwidths, the main contribution to the estimate at a point comes from data points that are nearby. Thus, as the number of data points increases and the bandwidth is taken to be smaller and smaller, estimating the density by using a kernel normalized for $n$ dimensions and distances as measured in $\mathbb{R}^N$ should give a result closer and closer to the correct value.
We will next give the formal definition of the estimator motivated by these considerations, and state
our theorem on its asymptotics. As in the original work of Parzen [8], the proof that the estimator
is asymptotically unbiased consists of proving that as the bandwidth converges to zero, the kernel
function becomes a "delta function". This result is also used in showing that with an appropriate
choice of vanishing rate for the bandwidth, the variance also vanishes asymptotically, hence the
estimator is pointwise consistent.
Statement of the theorem. Let $M$ be an $n$-dimensional, embedded, complete Riemannian submanifold of $\mathbb{R}^N$ ($n < N$) with an induced metric $g$ and injectivity radius $r_{inj} > 0$.$^2$ Let $d(p,q)$ be the length of a length-minimizing geodesic in $M$ between $p, q \in M$, and let $u(p,q)$ be the Euclidean (linear) distance between $p$ and $q$ as measured in $\mathbb{R}^N$. Note that $u(p,q) \le d(p,q)$. We will use the notation $u_p(q) = u(p,q)$ and $d_p(q) = d(p,q)$. We will denote the Riemannian volume measure on $M$ by $V$, and the volume form by $dV$.
Theorem 2.1. Let $f : M \to [0,\infty)$ be a probability density function defined on $M$ (so that the related probability measure is $fV$), and let $K : [0,\infty) \to [0,\infty)$ be a continuous function that vanishes outside $[0,1)$, is differentiable with a bounded derivative in $[0,1)$, and satisfies $\int_{\|z\| \le 1} K(\|z\|)\, d^n z = 1$. Assume $f$ is differentiable to second order in a neighborhood of $p \in M$, and for a sample $q_1, \ldots, q_m$ of size $m$ drawn from the density $f$, define an estimator $\hat{f}_m(p)$ of $f(p)$ as,
$$\hat{f}_m(p) = \frac{1}{m} \sum_{j=1}^{m} \frac{1}{h_m^n} K\!\left(\frac{u_p(q_j)}{h_m}\right) \qquad (3)$$
where $h_m > 0$. If $h_m$ satisfies $\lim_{m\to\infty} h_m = 0$ and $\lim_{m\to\infty} m h_m^n = \infty$, then there exist non-negative numbers $m_*$, $C_b$, and $C_V$ such that for all $m > m_*$ we have,
$$\mathrm{MSE}\big[\hat{f}_m(p)\big] = E\big[(\hat{f}_m(p) - f(p))^2\big] < C_b h_m^4 + \frac{C_V}{m h_m^n}. \qquad (4)$$
If $h_m$ is chosen to be proportional to $m^{-1/(n+4)}$, this gives $E\big[(\hat{f}_m(p) - f(p))^2\big] = O\big(\tfrac{1}{m^{4/(n+4)}}\big)$ as $m \to \infty$.
Thus, the convergence rate of the estimator is given as in [3, 9], with the dimensionality replaced
by the intrinsic dimension n of M . The proof will follow from the two lemmas below on the
convergence rates of the bias and the variance.
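A direct implementation of the estimator in Equation 3 is a one-line change from ordinary KDE; the sketch below (ours, continuing the earlier Python illustration) uses the quadratic kernel $K(t) = c_n (1 - t^2)$ supported on $[0, 1)$, where the constant $c_n = (n+2)/(2 V_n)$, with $V_n$ the volume of the unit $n$-ball, is our choice to satisfy the normalization $\int_{\|z\| \le 1} K(\|z\|)\, d^n z = 1$ and the smoothness conditions of Theorem 2.1:

import numpy as np
from math import gamma, pi

def submanifold_kde(p, Q, h, n):
    # Equation 3: kernel normalized for n dimensions, but distances u_p(q)
    # are plain Euclidean distances measured in the ambient R^N.
    m = Q.shape[0]
    t = np.linalg.norm(Q - p, axis=1) / h
    V_n = pi ** (n / 2) / gamma(n / 2 + 1)   # volume of the unit n-ball
    c_n = (n + 2) / (2 * V_n)                # normalizes K over the unit ball
    K = np.where(t < 1.0, c_n * (1.0 - t ** 2), 0.0)
    return K.sum() / (m * h ** n)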
$^2$The injectivity radius $r_{inj}$ of a Riemannian manifold is a distance such that all geodesic pieces (i.e., curves with zero intrinsic acceleration) of length less than $r_{inj}$ minimize the length between their endpoints. On a complete Riemannian manifold, there exists a distance-minimizing geodesic between any given pair of points; however, an arbitrary geodesic need not be distance minimizing. For example, any two non-antipodal points on the sphere can be connected with two geodesics with different lengths, namely, the two pieces of the great circle passing through the points. For a detailed discussion of these issues, see, e.g., [1].
3 Preliminary results
The following theorem, which is analogous to Theorem 1A in [8], tells us that up to a constant, the kernel becomes a "delta function" as the bandwidth gets smaller.
Theorem 3.1. Let $K : [0,\infty) \to [0,\infty)$ be a continuous function that vanishes outside $[0,1)$ and is differentiable with a bounded derivative in $[0,1)$, and let $\varphi : M \to \mathbb{R}$ be a function that is differentiable to second order in a neighborhood of $p \in M$. Let
$$\varphi_h(p) = \frac{1}{h^n} \int_M K\!\left(\frac{u_p(q)}{h}\right) \varphi(q)\, dV(q), \qquad (5)$$
where $h > 0$ and $dV(q)$ denotes the Riemannian volume form on $M$ at point $q$. Then, as $h \to 0$,
$$\varphi_h(p) - \varphi(p) \int_{\mathbb{R}^n} K(\|z\|)\, d^n z = O(h^2), \qquad (6)$$
where $z = (z^1, \ldots, z^n)$ denotes the Cartesian coordinates on $\mathbb{R}^n$ and $d^n z = dz^1 \ldots dz^n$ denotes the volume form on $\mathbb{R}^n$. In particular, $\lim_{h\to 0} \varphi_h(p) = \varphi(p) \int_{\mathbb{R}^n} K(\|z\|)\, d^n z$.
Before proving this theorem, we prove some results on the relation between $u_p(q)$ and $d_p(q)$.
Lemma 3.1. There exist $\delta_{u_p} > 0$ and $M_{u_p} > 0$ such that for all $q$ with $d_p(q) \le \delta_{u_p}$, we have,
$$d_p(q) \ge u_p(q) \ge d_p(q) - M_{u_p} [d_p(q)]^3. \qquad (7)$$
In particular, $\lim_{q \to p} \frac{u_p(q)}{d_p(q)} = 1$.
Proof. Let $c_{v_0}(s)$ be a geodesic in $M$ parametrized by arclength $s$, with $c(0) = p$ and initial velocity $\frac{dc_{v_0}}{ds}\big|_{s=0} = v_0$. When $s < r_{inj}$, $s$ is equal to $d_p(c_{v_0}(s))$ [7, 1]. Now let $x_{v_0}(s)$ be the representation of $c_{v_0}(s)$ in $\mathbb{R}^N$ in terms of Cartesian coordinates with the origin at $p$. We have $u_p(c_{v_0}(s)) = \|x_{v_0}(s)\|$ and $\|x'_{v_0}(s)\| = 1$, which gives$^3$ $x'_{v_0}(s) \cdot x''_{v_0}(s) = 0$. Using these we get, $\frac{d\, u_p(c_{v_0}(s))}{ds}\big|_{s=0} = 1$, and $\frac{d^2 u_p(c_{v_0}(s))}{ds^2}\big|_{s=0} = 0$. Let $M_3 \ge 0$ be an upper bound on the absolute value of the third derivative of $u_p(c_{v_0}(s))$ for all $s \le r_{inj}$ and all unit length $v_0$: $\big|\frac{d^3 u_p(c_{v_0}(s))}{ds^3}\big| \le M_3$. Taylor's theorem gives $u_p(c_{v_0}(s)) = s + R_{v_0}(s)$ where $|R_{v_0}(s)| \le M_3 \frac{s^3}{3!}$. Thus, (7) holds with $M_{u_p} = \frac{M_3}{3!}$, for all $r < r_{inj}$. For later convenience, instead of $\delta_u = r_{inj}$, we will pick $\delta_{u_p}$ as follows. The polynomial $r - M_{u_p} r^3$ is monotonically increasing in the interval $0 \le r \le 1/\sqrt{3 M_{u_p}}$. We let $\delta_{u_p} = \min\{r_{inj}, 1/\sqrt{3 M_{u_p}}\}$, so that $r - M_{u_p} r^3$ is ensured to be monotonic for $0 \le r \le \delta_{u_p}$.
Definition 3.2. For $0 \le r_1 < r_2$, let,
$$H_p(r_1, r_2) = \inf\{u_p(q) : r_1 \le d_p(q) < r_2\}, \qquad (8)$$
$$H_p(r) = H_p(r, \infty) = \inf\{u_p(q) : r \le d_p(q)\}, \qquad (9)$$
i.e., $H_p(r_1, r_2)$ is the smallest $u$-distance from $p$ among all points that have a $d$-distance between $r_1$ and $r_2$.
Since $M$ is assumed to be an embedded submanifold, we have $H_p(r) > 0$ for all $r > 0$. In the below, we will assume that all radii are smaller than $r_{inj}$; in particular, a set of the form $\{q : r_1 \le d_p(q) < r_2\}$ will be assumed to be non-empty and so, due to the completeness of $M$, to contain a point $q \in M$ such that $d_p(q) = r_1$. Note that,
$$H_p(r_1) = \min\{H_p(r_1, r_2), H_p(r_2)\}. \qquad (10)$$
Lemma 3.2. $H_p(r)$ is a non-decreasing, non-negative function, and there exist $\delta_{H_p} > 0$ and $M_{H_p} \ge 0$ such that, $r \ge H_p(r) \ge r - M_{H_p} r^3$, for all $r < \delta_{H_p}$. In particular, $\lim_{r\to 0} \frac{H_p(r)}{r} = 1$.
$^3$Primes denote differentiation with respect to $s$.
Proof. $H_p(r)$ is clearly non-decreasing, and $H_p(r) \le r$ follows from $u_p(q) \le d_p(q)$ and the fact that there exists at least one point $q$ with $d_p(q) = r$ in the set $\{q : r \le d_p(q)\}$.
Let $\delta_{H_p} = H_p(\delta_{u_p})$ where $\delta_{u_p}$ is as in the proof of Lemma 3.1, and let $r < \delta_{H_p}$. Since $r < \delta_{H_p} = H_p(\delta_{u_p}) \le \delta_{u_p}$, by Lemma 3.1 we have,
$$r \ge u_p(r) \ge r - M_{u_p} r^3, \qquad (11)$$
for some $M_{u_p} > 0$. Now, since $r$ and $r - M_{u_p} r^3$ are both monotonic for $0 \le r \le \delta_{u_p}$, we have (see figure)
$$r \ge H_p(r, \delta_{u_p}) \ge r - M_{u_p} r^3. \qquad (12)$$
In particular, $H_p(r, \delta_{u_p}) \le r < \delta_{H_p} = H_p(\delta_{u_p})$, i.e., $H_p(r, \delta_{u_p}) < H_p(\delta_{u_p})$. Using (10) this gives, $H_p(r) = H_p(r, \delta_{u_p})$. Combining this with (12), we get $r \ge H_p(r) \ge r - M_{u_p} r^3$ for all $r < \delta_{H_p}$.
Next we show that for all small enough $h$, there exists some radius $R_p(h)$ such that for all points $q$ with $d_p(q) \ge R_p(h)$, we have $u_p(q) \ge h$. $R_p(h)$ will roughly be the inverse function of $H_p(r)$.
Lemma 3.3. For any $h < H_p(r_{inj})$, let $R_p(h) = \sup\{r : H_p(r) \le h\}$. Then, $u_p(q) \ge h$ for all $q$ with $d_p(q) \ge R_p(h)$, and there exist $\delta_{R_p} > 0$ and $M_{R_p} > 0$ such that for all $h \le \delta_{R_p}$, $R_p(h)$ satisfies,
$$h \le R_p(h) \le h + M_{R_p} h^3. \qquad (13)$$
In particular, $\lim_{h\to 0} \frac{R_p(h)}{h} = 1$.
Proof. That $u_p(q) \ge h$ when $d_p(q) \ge R_p(h)$ follows from the definitions. In order to show (13), we will use Lemma 3.2. Let $\psi(r) = r - M_{H_p} r^3$, where $M_{H_p}$ is as in Lemma 3.2. Then, $\psi(r)$ is one-to-one and continuous in the interval $0 \le r \le \delta_{H_p} \le \delta_{u_p}$. Let $\phi = \psi^{-1}$ be the inverse function of $\psi$ in this interval. From the definition of $R_p(h)$ and Lemma 3.2, it follows that $h \le R_p(h) \le \phi(h)$ for all $h \le \psi(\delta_{H_p})$. Now, $\phi(0) = 0$, $\phi'(0) = 1$, $\phi''(0) = 0$, so by Taylor's theorem and the fact that the third derivative of $\phi$ is bounded in a neighborhood of $0$, there exist $\delta_g$ and $M_{R_p}$ such that $\phi(h) \le h + M_{R_p} h^3$ for all $h \le \delta_g$. Thus,
$$h \le R_p(h) \le h + M_{R_p} h^3, \qquad (14)$$
for all $h \le \delta_R$ where $\delta_R = \min\{\psi(\delta_{H_p}), \delta_g\}$.
Proof of Theorem 3.1. We will begin by proving that for small enough $h$, there is no contribution to the integral in the definition of $\varphi_h(p)$ (see (5)) from outside the coordinate patch covered by normal coordinates.$^4$
Let $h_0 > 0$ be such that $R_p(h_0) < r_{inj}$ (such an $h_0$ exists since $\lim_{h\to 0} R_p(h) = 0$). For any $h \le h_0$, all points $q$ with $d_p(q) > r_{inj}$ will satisfy $u_p(q) > h$. This means if $h$ is small enough, $K\big(\frac{u_p(q)}{h}\big) = 0$ for all points outside the injectivity radius and we can perform the integral in (5) solely in the patch of normal coordinates at $p$.
For normal coordinates $y = (y^1, \ldots, y^n)$ around the point $p$ with $y(p) = 0$, we have $d_p(q) = \|y(q)\|$ [7, 1]. With slight abuse of notation, we will write $u_p(y(q)) = u_p(q)$, $\varphi(y(q)) = \varphi(q)$ and $g(q) = g(y(q))$, where $g$ is the metric tensor of $M$.
Since $K\big(\frac{u_p(q)}{h}\big) = 0$ for all $q$ with $d_p(q) > R_p(h)$, we have,
$$\varphi_h(p) = \frac{1}{h^n} \int_{\|y\| \le R_p(h)} K\!\left(\frac{u_p(y)}{h}\right) \varphi(y)\, \sqrt{g(y)}\; dy^1 \ldots dy^n, \qquad (15)$$
$^4$Normal coordinates at a point $p$ in a Riemannian manifold are a close approximation to Cartesian coordinates, in the sense that the components of the metric have vanishing first derivatives at $p$, and $g_{ij}(p) = \delta_{ij}$ [1]. Normal coordinates can be defined in a "geodesic ball" of radius less than $r_{inj}$.
where $g$ denotes the determinant of $g$ as calculated in normal coordinates. Changing the variable of integration to $z = y/h$, we get,
$$\varphi_h(p) - \varphi(p)\!\int K(\|z\|)\, d^n z = \int_{\|z\| \le R_p(h)/h} K\!\left(\frac{u_p(zh)}{h}\right) \varphi(zh) \sqrt{g(zh)}\, d^n z \;-\; \varphi(0)\!\int_{\|z\| \le 1} K(\|z\|)\, d^n z$$
$$= \int_{\|z\| \le 1} \varphi(zh)\, K\!\left(\frac{u_p(zh)}{h}\right) \left(\sqrt{g(zh)} - 1\right) d^n z \;+\; \int_{\|z\| \le 1} \varphi(zh) \left[K\!\left(\frac{u_p(zh)}{h}\right) - K(\|z\|)\right] d^n z$$
$$+\; \int_{\|z\| \le 1} K(\|z\|)\, (\varphi(zh) - \varphi(0))\, d^n z \;+\; \int_{1 < \|z\| \le R_p(h)/h} K\!\left(\frac{u_p(zh)}{h}\right) \varphi(zh) \sqrt{g(zh)}\, d^n z.$$
Thus,
$$\left|\varphi_h(p) - \varphi(p)\!\int K(\|z\|)\, d^n z\right| \;\le \qquad (16)$$
$$\sup_{t} K(t) \cdot \sup_{\|z\| \le 1} |\varphi(zh)| \cdot \sup_{\|z\| \le 1} \left|\sqrt{g(zh)} - 1\right| \cdot \int_{\|z\| \le 1} d^n z \;+ \qquad (17)$$
$$\sup_{\|z\| \le 1} |\varphi(zh)| \cdot \sup_{\|z\| \le 1} \left|K\!\left(\frac{u_p(zh)}{h}\right) - K(\|z\|)\right| \cdot \int_{\|z\| \le 1} d^n z \;+ \qquad (18)$$
$$\left|\int_{\|z\| \le 1} K(\|z\|)\, (\varphi(zh) - \varphi(0))\, d^n z\right| \;+ \qquad (19)$$
$$\sup_{t} K(t) \cdot \sup_{1 < \|z\| \le R_p(h)/h} |\varphi(zh)| \cdot \sup_{1 < \|z\| \le R_p(h)/h} \sqrt{g(zh)} \cdot \int_{1 < \|z\| \le R_p(h)/h} d^n z. \qquad (20)$$
Letting $h \to 0$, the terms (17)-(20) approach zero at the following rates:
(17): $K(t)$ is bounded and $\varphi(y)$ is continuous at $y = 0$, so the first two terms can be bounded by constants as $h \to 0$. In normal coordinates $y$, $g_{ij}(y) = \delta_{ij} + O(\|y\|^2)$ as $\|y\| \to 0$, so, $\sup_{\|z\| \le 1} \big|\sqrt{g(zh)} - 1\big| = O(h^2)$ as $h \to 0$.
(18): Since $K$ is assumed to be differentiable with a bounded derivative in $[0,1)$, we get $K(b) - K(a) = O(b - a)$ as $b \to a$. By Lemma 3.1 we have $\big|\frac{u_p(zh)}{h} - \|z\|\big| = O(h^2)$ as $h \to 0$. Thus, $K\big(\frac{u_p(zh)}{h}\big) - K(\|z\|) = O(h^2)$ as $h \to 0$.
(19): Since $\varphi(y)$ is assumed to have partial derivatives up to second order in a neighborhood of $y(p) = 0$, for $\|z\| \le 1$, Taylor's theorem gives,
$$\varphi(zh) = \varphi(0) + h \sum_{i=1}^{n} z^i \left.\frac{\partial \varphi(y)}{\partial y^i}\right|_{y=0} + O(h^2) \qquad (21)$$
as $h \to 0$. Since $\int_{\|z\| \le 1} z\, K(\|z\|)\, d^n z = 0$, we get $\int_{\|z\| \le 1} K(\|z\|)\, (\varphi(zh) - \varphi(0))\, d^n z = O(h^2)$ as $h \to 0$.
(20): The first three terms can be bounded by constants. By Lemma 3.3, $R_p(h) = h + O(h^3)$ as $h \to 0$. A spherical shell $1 < \|z\| \le 1 + \delta$ has volume $O(\delta)$ as $\delta \to 0^+$. Thus, the volume of $1 < \|z\| \le R_p(h)/h$ is $O(R_p(h)/h - 1) = O(h^2)$ as $h \to 0$.
Thus, the sum of the terms (17)-(20) is $O(h^2)$ as $h \to 0$, as claimed in Theorem 3.1.
4 Bias, variance and mean squared error
Let $M$, $f$, $\hat{f}_m$, $K$, $p$ be as in Theorem 2.1 and assume $h_m \to 0$ as $m \to \infty$.
Lemma 4.1. $\mathrm{Bias}\big[\hat{f}_m(p)\big] = O(h_m^2)$, as $m \to \infty$.
Proof. We have $\mathrm{Bias}[\hat{f}_m(p)] = \mathrm{Bias}\big[\frac{1}{h_m^n} K\big(\frac{u_p(q)}{h_m}\big)\big]$, so recalling $\int_{\mathbb{R}^n} K(\|z\|)\, d^n z = 1$, the lemma follows from Theorem 3.1 with $\varphi$ replaced with $f$.
Lemma 4.2. If in addition to $h_m \to 0$, we have $m h_m^n \to \infty$ as $m \to \infty$, then, $\mathrm{Var}[\hat{f}_m(p)] = O\big(\frac{1}{m h_m^n}\big)$, as $m \to \infty$.
Proof.
$$\mathrm{Var}[\hat{f}_m(p)] = \frac{1}{m^2} \sum_{j=1}^{m} \mathrm{Var}\left[\frac{1}{h_m^n} K\!\left(\frac{u_p(q_j)}{h_m}\right)\right] \qquad (22)$$
$$= \frac{1}{m} \mathrm{Var}\left[\frac{1}{h_m^n} K\!\left(\frac{u_p(q)}{h_m}\right)\right] \qquad (23)$$
Now,
$$\mathrm{Var}\left[\frac{1}{h_m^n} K\!\left(\frac{u_p(q)}{h_m}\right)\right] = E\left[\frac{1}{h_m^{2n}} K^2\!\left(\frac{u_p(q)}{h_m}\right)\right] - \left(E\left[\frac{1}{h_m^n} K\!\left(\frac{u_p(q)}{h_m}\right)\right]\right)^2, \qquad (24)$$
and,
$$E\left[\frac{1}{h_m^{2n}} K^2\!\left(\frac{u_p(q)}{h_m}\right)\right] = \frac{1}{h_m^n} \int_M f(q)\, \frac{1}{h_m^n} K^2\!\left(\frac{u_p(q)}{h_m}\right) dV(q). \qquad (25)$$
By Theorem 3.1, the integral in (25) converges to $f(p) \int K^2(\|z\|)\, d^n z$, so, the right hand side of (25) is $O\big(\frac{1}{h_m^n}\big)$ as $m \to \infty$. By Lemma 4.1 we have $\big(E\big[\frac{1}{h_m^n} K\big(\frac{u_p(q)}{h_m}\big)\big]\big)^2 \to f^2(p)$. Thus, $\mathrm{Var}[\hat{f}_m(p)] = O\big(\frac{1}{m h_m^n}\big)$ as $m \to \infty$.
Proof of Theorem 2.1. Finally, since $\mathrm{MSE}\big[\hat{f}_m(p)\big] = \mathrm{Bias}^2[\hat{f}_m(p)] + \mathrm{Var}[\hat{f}_m(p)]$, the theorem follows from Lemmas 4.1 and 4.2.
5 Experiments and discussion
We have empirically tested the estimator (3) on two datasets: a unit normal distribution mapped onto a piece of a spiral in the plane, so that $n = 1$ and $N = 2$, and a uniform distribution on the unit disc $x^2 + y^2 \le 1$ mapped onto the unit hemisphere by $(x, y) \mapsto (x, y, \sqrt{1 - x^2 - y^2})$, so that $n = 2$ and $N = 3$. We picked the bandwidths to be proportional to $m^{-1/(n+4)}$ where $m$ is the number of data points. We performed leave-one-out estimates of the density on the data points, and obtained the MSE for a range of $m$'s. See Figure 1.
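Continuing the earlier sketches, the hemisphere experiment can be reproduced along the following lines (the sample size, seed, and bandwidth constant are our own working choices, not the paper's). Since the mapped uniform disc density has density $z/\pi$ with respect to surface area on the hemisphere (the area element of the graph $z = \sqrt{1 - x^2 - y^2}$ is $dx\, dy / z$), the true density needed for the MSE is available in closed form:

rng = np.random.default_rng(1)
m = 2000
P = rng.uniform(-1.0, 1.0, size=(4 * m, 2))
P = P[(P ** 2).sum(axis=1) <= 1.0][:m]      # uniform sample on the unit disc
z = np.sqrt(1.0 - (P ** 2).sum(axis=1))
Q = np.column_stack([P, z])                  # lift onto the unit hemisphere
f_true = z / np.pi                           # pushforward density w.r.t. area
h = m ** (-1.0 / (2 + 4))                    # n = 2 bandwidth rate

sq_errs = [(submanifold_kde(Q[i], np.delete(Q, i, axis=0), h, n=2)
            - f_true[i]) ** 2 for i in range(m)]
print(np.mean(sq_errs))                      # leave-one-out MSE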
6 Conclusion and future work
We have proposed a small modification of the usual KDE in order to estimate the density of data
that lives on an $n$-dimensional submanifold of $\mathbb{R}^N$, and proved that the rate of convergence of the
estimator is determined by the intrinsic dimension n. This shows that the curse of dimensionality in
KDE can be overcome for data with low intrinsic dimension. Our method assumes that the intrinsic
dimensionality n is given, so it has to be supplemented with an estimator of the dimension. We
have assumed various smoothness properties for the submanifold M , the density f , and the kernel
K. We find it likely that our estimator or slight modifications of it will be consistent under weaker
requirements. Such a relaxation of requirements would have practical consequences, since it is
unlikely that a generic data set lives on a smooth Riemannian manifold.
[Figure 1: two panels of MSE vs. number of data points (50000 to 200000); left panel titled "Mean squared error for the spiral data", right panel titled "Mean squared error for the hemisphere data".]
Figure 1: Mean squared error as a function of the number of data points for the spiral data (left) and the hemisphere data (right). In each case, we fit a curve of the form $MSE(m) = a m^b$, which gave $b = -0.80$ for the spiral and $b = -0.69$ for the hemisphere. Theorem 2.1 bounds the MSE by $C m^{-4/(n+4)}$, which gives the exponent as $-0.80$ for the spiral and $-0.67$ for the hemisphere.
References
[1] M. Berger and N. Hitchin. A panoramic view of Riemannian geometry. The Mathematical Intelligencer, 28(2):73-74, 2006.
[2] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning, pages 97-104. ACM, New York, NY, USA, 2006.
[3] T. Cacoullos. Estimation of a multivariate density. Annals of the Institute of Statistical Mathematics, 18(1):179-189, 1966.
[4] H. Hendriks. Nonparametric estimation of a probability density on a Riemannian manifold using Fourier expansions. The Annals of Statistics, 18(2):832-849, 1990.
[5] J. Jost. Riemannian geometry and geometric analysis. Springer, 2008.
[6] F. Korn, B. Pagel, and C. Faloutsos. On dimensionality and self-similarity. IEEE Transactions on Knowledge and Data Engineering, 13(1):96-111, 2001.
[7] J. Lee. Riemannian manifolds: an introduction to curvature. Springer Verlag, 1997.
[8] E. Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, pages 1065-1076, 1962.
[9] B. Pelletier. Kernel density estimation on Riemannian manifolds. Statistics and Probability Letters, 73(3):297-304, 2005.
[10] X. Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In IEEE Workshop on Nonlinear Signal and Image Processing, volume 4. Citeseer, 1999.
[11] X. Pennec. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 25(1):127-154, 2006.
[12] B. Silverman. Density estimation for statistics and data analysis. Chapman & Hall/CRC, 1986.
[13] P. Vincent and Y. Bengio. Manifold Parzen Windows. Advances in Neural Information Processing Systems, pages 849-856, 2003.
Accelerating Bayesian Structural Inference for
Non-Decomposable Gaussian Graphical Models
Baback Moghaddam
Jet Propulsion Laboratory
California Institute of Technology
[email protected]
Benjamin M. Marlin
Department of Computer Science
University of British Columbia
[email protected]
Mohammad Emtiyaz Khan
Department of Computer Science
University of British Columbia
[email protected]
Kevin P. Murphy
Department of Computer Science
University of British Columbia
[email protected]
Abstract
We make several contributions in accelerating approximate Bayesian structural
inference for non-decomposable GGMs. Our first contribution is to show how to
efficiently compute a BIC or Laplace approximation to the marginal likelihood of
non-decomposable graphs using convex methods for precision matrix estimation.
This optimization technique can be used as a fast scoring function inside standard
Stochastic Local Search (SLS) for generating posterior samples. Our second contribution is a novel framework for efficiently generating large sets of high-quality
graph topologies without performing local search. This graph proposal method,
which we call ?Neighborhood Fusion? (NF), samples candidate Markov blankets
at each node using sparse regression techniques. Our third contribution is a hybrid
method combining the complementary strengths of NF and SLS. Experimental
results in structural recovery and prediction tasks demonstrate that NF and hybrid
NF/SLS out-perform state-of-the-art local search methods, on both synthetic and
real-world datasets, when realistic computational limits are imposed.
1 Introduction
There are two main reasons to learn the structure of graphical models: knowledge discovery (to
interpret the learned topology) and density estimation (to compute log-likelihoods and make predictions). The main difficulty in graphical model structure learning is that the hypothesis space is
extremely large, containing up to $2^{d(d-1)/2}$ graphs on $d$ nodes. When the sample size $n$ is small,
there can be significant uncertainty with respect to the graph structure. It is therefore advantageous
to adopt a Bayesian approach and maintain an approximate posterior over graphs instead of using a
single "best" graph, especially since Bayesian model averaging (BMA) can improve predictions.
There has been much work on Bayesian inference for directed acyclic graphical model (DAG)
structure, mostly based on Markov chain Monte Carlo (MCMC) or stochastic local search (SLS)
[22, 19, 16, 14]. MCMC and SLS methods for DAGs exploit the important fact that the marginal
likelihood of a DAG, or an approximation such as the Bayesian Information Criterion (BIC) score,
can be computed very efficiently under standard assumptions including independent conjugate
priors, and complete data. An equally important property in the DAG setting is that the score can be
quickly updated when small local changes are made to the graph. This conveniently allows one to
move rapidly through the very large graph space of DAGs.
However, for knowledge discovery, a DAG may be an unsuitable representation for several reasons.
First, it does not allow directed cycles, which may be an unnatural restriction in certain domains.
Second, DAGs can only be identified up to Markov equivalence in the general case. In contrast,
undirected graphs (UGs) avoid these issues and may be a more natural representation for some
problems. Also, for UGs there are fast methods available for identifying the local connectivity at
each node (the node's Markov blanket). We note that while the UG and DAG representations have
different properties and enable different inference and structure learning algorithms, the distinction
between UGs and DAGs from a density estimation perspective may be less important [12].
Most prior work on Bayesian inference for Gaussian Graphical Models (GGMs) has focused on the
special case of decomposable graphs (e.g., [17, 2, 29]). The popularity of decomposable GGMs is
mostly due to the fact that one can compute the marginal likelihood in closed form using similar
assumptions to the DAG case. In addition, one can update the marginal likelihood in constant time
after single-edge moves in graph space [17]. However, the space of decomposable graphs is much
smaller than the space of general undirected graphs. For example, the number of decomposable
graphs on d nodes for d = 2, . . . , 8 is 2, 8, 61, 822, 18154, 617675, 30888596 [1, p.158]. If we
divide the number of decomposable graphs by the number of general undirected graphs, we get the
"volume" ratios: 1, 1, 0.95, 0.80, 0.55, 0.29, 0.12. This means that decomposability significantly
limits the subclass of UGs available for modeling purposes, even for small d. Several authors
have studied Bayesian inference for GGM structure in the general case using approximations to the
marginal likelihood based on Monte Carlo methods (e.g., [8, 31, 20, 3]). However, these methods
cannot scale to large graphs because of the high computational cost of Monte Carlo approximation.
In this paper, we propose several techniques to help accelerate approximate Bayesian structural
inference for non-decomposable GGMs. In Section 2, we show how to efficiently compute BIC
and Laplace approximations to the marginal likelihood p(D|G) by using recent convex optimization
methods for estimating the precision matrix of a GGM. In Section 3, we present a novel framework
for generating large sets of high-quality graphs which we call "Neighborhood Fusion" (NF). This framework is quite general in scope and can use any Markov blanket finding method to devise a set of probability distributions (proposal densities) over the local topology at each node. It then specifies rules for "fusing" these local densities (via sampling) into an approximate posterior over
whole graphs p(G|D). In Section 4, we combine the complementary strengths of NF and existing
SLS methods to obtain even higher quality posterior distributions in certain cases. In Section 5,
we present an empirical evaluation of both knowledge discovery and predictive performance of our
methods. For knowledge discovery, we measure structural recovery in terms of accuracy of finding
true edges in synthetic GGMs (with known structure). For predictive performance, we evaluate
test set log-likelihood as well as missing-data imputation on real data (with unknown structure).
We show that the proposed NF and hybrid NF/SLS methods for general graphs outperform current
approaches to GGM learning for both decomposable and general (non-decomposable) graphs.
Throughout this paper we will view the marginal likelihood p(D|G) as the key to structural inference
and as being equivalent to the graph posterior p(G|D) by adopting a flat structural prior p(G) w.l.o.g.
2 Marginal Likelihood for General Graphs
In this section we will review the G-Wishart distribution and discuss approximations to the marginal
likelihood of a non-decomposable GGM under the G-Wishart prior. Unlike the decomposable case,
here the marginal likelihood can not be found in closed form. Our main contribution is the insight
that recently proposed convex optimization methods for precision matrix estimation can be used
to efficiently find the mode of a G-Wishart distribution, which in turn allows for more efficient
computation of BIC and Laplace modal approximations to the marginal likelihood.
We begin with some notation. We define n to be the number of data cases and d to be the number of
data dimensions. We denote the ith data case by xi and a complete data set D with the n ? d matrix
X, with the corresponding scatter matrix S = X T X (we assume centered data). We use G to denote
an undirected graph, or more precisely its adjacency matrix. Graph edges are denoted by unordered
pairs (i, j) and the edge (i, j) is in the graph G if Gij = 1. The space of all positive definite matrices
++
having the same zero-pattern as G is denoted by SG
. The covariance matrix is denoted by ? and
?1
its inverse or the precision matrix by ? = ? . We also define hA, Bi = Trace(AB).
The Gaussian likelihood $p(D|\Omega)$ is expressed in terms of the data scatter matrix $S$ in Equation 1. We denote the prior distribution over precision matrices given a graph $G$ by $p(\Omega|G)$. The standard measure of model quality in the Bayesian model selection setting is the marginal likelihood $p(D|G)$, which is obtained by integrating $p(D|\Omega)\, p(\Omega|G)$ over the space $S_G^{++}$ as shown in Equation 2.
$$p(D|\Omega) = \prod_{i=1}^{n} \mathcal{N}(x_i \,|\, 0, \Omega^{-1}) \;\propto\; |\Omega|^{n/2} \exp\!\left(-\tfrac{1}{2} \langle \Omega, S \rangle\right) \qquad (1)$$
$$p(D|G) = \int_{S_G^{++}} p(D|\Omega)\, p(\Omega|G)\, d\Omega \qquad (2)$$
The G-Wishart density in Equation 3 is the Diaconis-Ylvisaker conjugate form [10] for the GGM likelihood as shown in [27]. The indicator function $I[\Omega \in S_G^{++}]$ in Equation 3 restricts the density's support to $S_G^{++}$. The G-Wishart generalizes the hyper inverse Wishart (HIW) distribution to general non-decomposable graphs. The G-Wishart normalization constant $Z$ is shown in Equation 4.
$$W(\Omega \,|\, G, \delta_0, S_0) = \frac{I[\Omega \in S_G^{++}]}{Z(G, \delta_0, S_0)}\, |\Omega|^{(\delta_0 - 2)/2} \exp\!\left(-\tfrac{1}{2} \langle \Omega, S_0 \rangle\right) \qquad (3)$$
$$Z(G, \delta_0, S_0) = \int_{S_G^{++}} |\Omega|^{(\delta_0 - 2)/2} \exp\!\left(-\tfrac{1}{2} \langle \Omega, S_0 \rangle\right) d\Omega \qquad (4)$$
$$p(D|G) = \int_{S_G^{++}} p(D|\Omega)\, W(\Omega \,|\, G, \delta_0, S_0)\, d\Omega \;\propto\; \frac{Z(G, \delta_n, S_n)}{Z(G, \delta_0, S_0)} \qquad (5)$$
Because of the conjugate prior in Equation 3, the $\Omega$ posterior has a similar form $W(\Omega \,|\, G, \delta_n, S_n)$, where $\delta_n = \delta_0 + n$ is the posterior degrees of freedom and the posterior scatter matrix $S_n = S + S_0$. The resulting marginal likelihood is then the ratio of the two normalizing terms shown in Equation 5 (which we refer to as $Z_n$ and $Z_0$ for short).
The main drawback of the G-Wishart for general graphs, compared to the HIW for decomposable
graphs, is that one cannot compute the normalization terms Zn and Z0 in closed form. As a
result, Bayesian model selection for non-decomposable GGMs relies on approximating the marginal
likelihood p(D|G). The existing literature focuses on Monte Carlo and Laplace approximations.
One strategy that makes use of Monte Carlo estimates of both Zn and Z0 is given by [3]. However,
the computation time required to find accurate estimates can be extremely high [20] (see Section 6).
An effective approximation strategy based on using a Laplace approximation to Z n and a Monte
Carlo approximation to Z0 is given in [21]. This requires finding the mode of the G-Wishart, with
which a closed-form expression for the Hessian is derived [21]. We consider a simpler method which
applies the Laplace approximation to both Zn and Z0 for greater speed, which we call full-Laplace.
Nevertheless, computing the Hessian determinant has a computational complexity of $O(E^3)$, where $E$ is the number of edges in $G$. Since $E = O(d^2)$ in the worst-case scenario, computing a full
Hessian determinant becomes infeasible for large d in all but the sparsest of graphs.
Due to the high computational cost of Monte Carlo and Laplace approximation in high dimensions,
we consider two alternative marginal likelihood approximations that are significantly more efficient.
The first alternative is to approximate Zn and Z0 by Laplace computations in which the Hessian
matrix is replaced by its diagonal (by setting off-diagonal elements to zero). We refer to this method
as the diagonal-Laplace score. The other alternative is the Bayesian Information Criterion (BIC)
score shown in Equation 6, which is another large-sample Laplace approximation
$$\mathrm{BIC}(G) = \log p(D \,|\, \hat{\Omega}_G) - \tfrac{1}{2}\, \mathrm{dof}(G) \log n, \qquad \mathrm{dof}(G) = d + \sum_{i<j} G_{ij} \qquad (6)$$
where, by analogy to [34], we define the GGM's degrees-of-freedom (dof) to be the number of free parameters in the precision matrix. For BIC we use the G-Wishart posterior mode $\hat{\Omega}_G$ as the plug-in estimate, since the MLE is undefined for $n < d$. But we use a vague and proper prior ($\delta_0 = 3$).
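As a small worked example (ours): on $d = 3$ nodes with edge set $\{(1,2), (2,3)\}$, Equation 6 gives $\mathrm{dof}(G) = 3 + 2 = 5$ free parameters (three diagonal entries of $\Omega$ plus one entry per edge), so with $n = 100$ observations the BIC penalty is $\tfrac{1}{2} \cdot 5 \cdot \log 100 \approx 11.5$.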
Therefore, all three approximations will require finding the mode of a G-Wishart (for the posterior and/or the prior). In [21] an Iterative Proportional Scaling (IPS) algorithm [30] is proposed to find the G-Wishart mode. However, IPS requires finding the maximal cliques of the graph, which is an NP-hard problem. We will now derive a much more efficient G-Wishart mode-finder using convex optimization techniques. We apply this method to find $\hat{\Omega}_G$ when computing BIC scores, as well as the prior and posterior G-Wishart modes when computing Laplace approximations to $Z_0$ and $Z_n$.
Observe that we can express the mode of any G-Wishart distribution with the optimization problem in Equation 7, where the density is parameterized by graph $G$, degree $\delta$ and the scatter matrix $S$.
$$\hat{\Omega}_G = \arg\max_{\Omega \in S_G^{++}} \log W(\Omega \,|\, G, \delta, S) = \arg\min_{\Omega \in S_G^{++}} \; -\log|\Omega| + \left\langle \Omega, \frac{S}{\delta - 2} \right\rangle \qquad (7)$$
This "COVSEL" type problem [9] is equivalent to finding the maximum likelihood precision matrix of a GGM with known structure $G$, and is a convex optimization problem. Several new methods for solving this precision estimation problem have been recently proposed, and unlike IPS they do not require computing the clique structure of the underlying graph. Hastie et al. [18] present one such method which consists of iteratively solving a series of least squares problems on the free elements of the precision matrix, which has $O\big(\sum_{i=1}^{d} \big(\sum_{j \ne i} G_{ij}\big)^3\big)$ complexity per iteration [18, p.634].
The G-Wishart mode in Equation 7 can also be found more directly with a gradient-based optimizer such as L-BFGS [6], by using the implementation convention that the objective function is $+\infty$ for a non-positive definite matrix. This technique has been used previously by Duchi et al. for the more difficult problem of $\ell_1$-penalized precision matrix estimation [13]. The gradient of the objective function is simply set to $(-\Omega^{-1} + S) \odot G$, where $\odot$ indicates element-wise multiplication. The elements of the precision matrix corresponding to absent edges in $G$ are fixed to zero, and we optimize over the remaining elements. The complexity per iteration is $O(d^3)$. In practice, initializing the above optimization with the output of a few iterations of the block coordinate descent method of [18] (Glasso with known $G$) is quite effective, as it requires fewer subsequent L-BFGS steps.
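The following is a minimal sketch of such a mode-finder (ours, in Python with NumPy/SciPy, not the authors' implementation): it solves Equation 7 over the free entries of $\Omega$ with L-BFGS, returning $+\infty$ on non-positive-definite iterates as described above.

import numpy as np
from scipy.optimize import minimize

def gwishart_mode(G, S, delta):
    # Mode of W(Omega | G, delta, S) via Equation 7 (a sketch).
    d = S.shape[0]
    Sb = S / (delta - 2.0)                      # scaled scatter matrix
    iu = np.triu_indices(d)
    free = (G[iu] == 1) | (iu[0] == iu[1])      # edges of G plus the diagonal
    coeff = np.where(iu[0] == iu[1], 1.0, 2.0)[free]

    def unpack(theta):
        v = np.zeros(len(iu[0]))
        v[free] = theta                         # absent edges stay fixed at 0
        O = np.zeros((d, d))
        O[iu] = v
        return O + np.triu(O, 1).T              # symmetrize

    def fg(theta):
        O = unpack(theta)
        try:
            L = np.linalg.cholesky(O)           # fails if O is not pos. def.
        except np.linalg.LinAlgError:
            return np.inf, np.zeros_like(theta)  # the +inf convention
        logdet = 2.0 * np.log(np.diag(L)).sum()
        grad = -np.linalg.inv(O) + Sb           # gradient of the objective
        return -logdet + (O * Sb).sum(), coeff * grad[iu][free]

    theta0 = np.eye(d)[iu][free]                # start from the identity
    res = minimize(fg, theta0, jac=True, method="L-BFGS-B")
    return unpack(res.x)

For the BIC score of Equation 6, one would call gwishart_mode(G, S + S0, delta0 + n) to obtain the posterior mode, since $S_n = S + S_0$ and $\delta_n = \delta_0 + n$; warm-starting theta0 from a few Glasso-style coordinate-descent sweeps, as the text suggests, is a straightforward refinement.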
In Section 5 we explore the speed vs. accuracy trade-off of the various marginal likelihood approximation schemes discussed above, comparing full-Laplace, diagonal-Laplace and the BIC score
functions to the marginal likelihood values obtained with the Monte Carlo method of [3].
3 Neighborhood Fusion
In this section we describe a novel framework we call "Neighborhood Fusion" (NF) for generating an
approximate posterior distribution p(G|D) over general graphs. An important advantage of working
with general graphs, instead of decomposable graphs, is that we can leverage simple and stable
methods for quickly exploring Markov blankets. One popular method for structural recovery is Glasso, which imposes an $\ell_1$ penalty on $\Omega$ [4, 15, 32]. Finding the corresponding graph takes $O(d^3)$ time per iteration for each setting of the regularization parameter $\lambda$. However, the choice of the $\lambda$ parameter is critical, and in practice we often find that no setting of this parameter leads to good recovery. A related approach, proposed in [23], uses $\ell_1$-regularized linear regression or Lasso to identify the Markov blanket (MB) of each node. These Markov blankets are then combined using intersection or union (AND/OR) to give the global graph $G$.
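To illustrate this step (our sketch; the helper names are ours, and scikit-learn's lars_path is one concrete choice for the Lasso path), candidate blankets for node $i$ are the support sets along the regularization path, and the AND rule keeps an edge only when both endpoints select each other:

import numpy as np
from sklearn.linear_model import lars_path

def mb_candidates(X, i):
    # candidate Markov blankets for node i: one support set per point
    # on the Lasso/LARS regularization path of (node i ~ all other nodes)
    y = X[:, i]
    Z = np.delete(X, i, axis=1)
    others = np.delete(np.arange(X.shape[1]), i)
    _, _, coefs = lars_path(Z, y, method="lasso")
    return [others[np.flatnonzero(c)] for c in coefs.T]

def and_combine(blankets):
    # AND rule: edge (i, j) survives only if j is in MB(i) and i is in MB(j)
    d = len(blankets)
    G = np.zeros((d, d), dtype=int)
    for i, b in enumerate(blankets):
        G[i, b] = 1
    return G * G.T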
These methods essentially produce a single "best" graph, but our main interest is in approximating
the full posterior p(G|D). Our NF framework uses a Markov blanket finding method to derive a set
of probability distributions over the local topology at each node, and specifies a rule for combining
these into an approximate posterior over graphs. The detailed steps of the generic NF algorithm are:
1. Regress each node $i$ on all others to find neighborhoods of all cardinalities $k = 0, \ldots, d-1$ using a sparse regression method. Denote the set of Markov blankets for node $i$ by $N_i$.
2. Compute the linear regression scores $s(b)$ for each Markov blanket $b$ in $N_i$, and define $p_i(b) = \exp(s(b)) / \sum_{b' \in N_i} \exp(s(b'))$ as the node's Markov blanket proposal density.
3. Independently sample a Markov blanket for each node $i$ from its proposal density $p_i(b)$, and then combine all $d$ sampled Markov blankets to assemble a single graph $G$.
4. Find $G$'s precision matrix using Equation 7 and compute the graph score as in Section 2.
5. Repeat sampling steps 3 and 4 to produce a large ensemble of posterior-weighted graphs (a sketch of steps 3-5 appears below).
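Steps 3-5 reduce to a short loop (our sketch, reusing the and_combine helper from the blanket sketch above; the graph score function is left abstract, standing in for the diagonal-Laplace or BIC score of Section 2):

def nf_sample(blankets, scores, score_fn, n_graphs, rng):
    # blankets[i]: list of candidate Markov blankets N_i for node i
    # scores[i]:   their linear-regression BIC scores s(b)
    d = len(blankets)
    out = []
    for _ in range(n_graphs):
        chosen = []
        for i in range(d):
            s = np.asarray(scores[i], dtype=float)
            p = np.exp(s - s.max())
            p /= p.sum()                         # proposal p_i(b) from step 2
            chosen.append(blankets[i][rng.choice(len(p), p=p)])
        G = and_combine(chosen)                  # step 3: fuse via AND
        out.append((G, score_fn(G)))             # step 4: score the graph
    return out                                   # step 5: weighted ensemble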
The design choices in the NF framework are the choice of a sparse linear regression method (and its
score function), the choice of a method for combining Markov blankets, and the choice of a graph
score function (for marginal likelihood). In all the results that follow we use the linear regression
BIC score induced by regressing node i on Ni , and generate whole graphs by intersecting the
Markov blankets using the AND operator. This essentially constitutes sampling from the ?ANDcensored? pseudo marginal likelihood and is therefore likely to produce good candidate MBs that
can be fused into high-quality graphs. Note that the uncertainty modeled by the MB proposal density
is critical, as it promotes efficient exploration of model space to generate a large variety of highscoring models. Indeed, the best NF-sampled graphs typically have higher scores than the pseudo
?MAP? graph obtained by simply intersecting the best MBs [23], due to the inherent noise in the
linear regression BIC scores and the possibility of over-fitting. Moreover, our MB proposals can be
?flattened? with a temperature parameter to trade-off exploration vs. fidelity of the sampled graphs,
though we generally find it unnecessary to go to such extremes and use a default temperature of one.
We next consider two further specialized instances of the NF framework using different sparse linear
regression methods. The first method uses the full Lasso/LARS regularization path and is called
L1MB (?L1 Markov Blanket?) which we adapted from the DAG-learning method of [28]. NF based
on these l1 -derived MBs we call NF-L1MB (or NF-L1 for short). In light of recent theoretical results
on the superiority of greedy forward/backward search over Lasso [33] we also use the l 0 -based
method of [24] which we call L0MB (?L0 Markov blanket?). And NF based on L0MB we will call
NF-L0MB (or NF-L0 for short). Our experimental results show that the improvement of the l 0 -based
greedy search of [24] over Lasso/LARS translates directly to obtaining improved MB proposals with
NF-L0MB compared to NF-L1MB. Similar forward/backward greedy variable selection techniques
were put to good use in the ?compositional network? DAG-to-UG method of [11], however not for
deriving proposal distributions for parents/MBs as we do here for NF.
Our overall computational scheme is quite fast by design: finding MB proposals is at most $O(d^4)$ with L1MB/L0MB (although L0MB has a smaller constant for both the forward and backward passes). Thereafter, we sample full graphs in $O(d^2)$ time (since we are sampling a discrete p.m.f. for $d$ MB candidates at each node), and computing a G-Wishart mode $\hat{\Omega}_G$ is just $O(d^3)$ per iteration.
4 Stochastic Local Search
Stochastic Local Search (SLS) can also be viewed as a mechanism for generating an approximate
posterior distribution over graphs. Like MCMC methods, SLS explores high probability regions of
graph space, but unlike MCMC it computes approximate model probabilities directly for each graph
it visits. This is sensible for large discrete hypothesis spaces like the space of UGs since the chance of
visiting the same graph multiple times is extremely small. We note that SLS represents an orthogonal
and complementary approach to structural inference relative to the NF framework presented in
Section 3. In this section we discuss SLS for both decomposable and general (non-decomposable)
GGMs. Specifically, we describe new initialization and edge-marginal updating methods for nondecomposable GGMs, and also introduce a highly effective hybrid NF/SLS method.
SLS with decomposable graphs has the advantage that its natural scoring function, the marginal
likelihood, can be computed exactly under the conjugate Hyper Inverse Wishart prior. The marginal
likelihood can also be updated efficiently when local changes are made to the underlying graph. A
state-of-the-art SLS method for decomposable GGMs is given in [29], which can be used with an
arbitrary score function over the space of general graphs. Here we consider SLS for general graphs
using the Laplace score described in Section 2. In the SLS in [29], at iteration t, an edge (i, j) from
Gt is chosen at random and flipped with probability qij . If the resulting graph is admissible and has
not been visited before, this graph becomes Gt+1 , and we evaluate its score. In the general case,
every new graph generated is admissible. In the decomposable case, only decomposable graphs are
admissible. We should note that unlike exhaustive search methods, this method avoids evaluating
the score of all O(d2 ) neighboring graphs at each iteration, and instead picks one at random.
There are two key modifications used in [29] which help this method work well in practice. First,
the marginal edge probabilities qij are updated online, so edges that have proved useful in the past
are more likely to be proposed in the future. Second, on each iteration the algorithm chooses to
perform a resampling step with probability $p_r$ or a global move with probability $p_g$. In a resampling step we set $G_{t+1}$ to $G_v$, where $v \le t$, with probability proportional to the score (or exponentiated score) of $G_v$. In a global move we sample a completely new graph (based on the edge marginals $q_{ij}$) for $G_{t+1}$. We note that a similar idea of using edge-marginals to propose moves in DAG space was suggested in [14]. In this paper, we set $p_r = 0.02$ and $p_g = 0$ (i.e., we do not use global moves).
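One SLS iteration under this scheme looks roughly as follows (our sketch; the visited-graph bookkeeping and admissibility check are simplified away):

def sls_step(G, q, score_fn, history, rng, p_resample=0.02):
    d = G.shape[0]
    if history and rng.random() < p_resample:
        # resampling step: revisit a past graph with probability
        # proportional to its exponentiated (max-shifted) score
        w = np.array([s for _, s in history])
        w = np.exp(w - w.max()); w /= w.sum()
        G = history[rng.choice(len(history), p=w)][0].copy()
    else:
        i, j = rng.choice(d, size=2, replace=False)   # pick a random pair
        if rng.random() < q[min(i, j), max(i, j)]:    # flip edge w.p. q_ij
            G = G.copy()
            G[i, j] = G[j, i] = 1 - G[i, j]
    history.append((G, score_fn(G)))
    return G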
[Figure 1: three panels -- (a) "Posterior Error": KL (bits) vs. Dimension (d) for BIC, Laplace, Diag Laplace; (b) "Score CPU Time": Time (sec) vs. Dimension (d) for BIC, Laplace, Diag Laplace, MC; (c) "MF Score Trace": Score vs. Iteration for NF-L0 (5k), NF-L0 (100), GLS-NF, GLS-T, DLS-T, C-L Tree, NF-L1 (5k).]
Figure 1: Score trade-offs: (a) average KL error of posterior approximations and (b) the average time to score
a single graph as a function of data dimensionality. (c) Results on the MF dataset: scores for various methods.
We now propose a new initialization and updating scheme for non-decomposable SLS based on a set of $k$ initial graphs $G_0^1, \ldots, G_0^k$ (with positive weights $w_0^1, \ldots, w_0^k$ defined by normalized scores) obtained from our NF graph-sampling framework. Our approach views $q_{ij}$ as a Beta random variable with prior parameters $\alpha_{ij} = \sum_{l=1}^{k} w_0^l\, G_{0,ij}^l$ and $\beta_{ij} = \sum_{l=1}^{k} w_0^l\, (1 - G_{0,ij}^l)$. We update this distribution online using $p(q_{ij} \,|\, G^{1:t}) = \mathrm{Beta}\big(\alpha_{ij} + t f_{ij}^t,\; \beta_{ij} + t(1 - f_{ij}^t)\big)$, where
$$f_{ij}^t = \frac{\sum_{l=1}^{t} G_{ij}^l\, p(D|G^l)}{\sum_{l=1}^{t} p(D|G^l)}.$$
We then flip an edge with probability $E[q_{ij}] = (\alpha_{ij} + t f_{ij}^t)/(\alpha_{ij} + \beta_{ij} + t)$.
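In code, the online update and the resulting flip probabilities are short (our sketch; graphs is a t x d x d array of visited graphs and scores their log marginal-likelihood approximations):

def flip_probabilities(alpha, beta, graphs, scores):
    # score-weighted edge frequencies f_ij^t, then the posterior means E[q_ij]
    t = len(graphs)
    w = np.exp(scores - np.max(scores))   # proportional to p(D|G^l)
    f = np.einsum("t,tij->ij", w, graphs) / w.sum()
    return (alpha + t * f) / (alpha + beta + t)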
SLS's main drawback is that, if started from the empty graph as in [29], it will necessarily take at least $E$ steps to find the highest scoring graph, where $E$ is the number of true edges. This means that it will likely require a very large number of iterations even in moderately large and dense graphs. An improved initialization strategy is to start the search from the optimal tree, which can be found in $O(d^2)$ time using the Chow-Liu algorithm [7]. An even better initialization strategy, for non-decomposable graphs, is to "seed" SLS with a batch of NF-sampled graphs for $G_0^1, \ldots, G_0^k$ and then start the search by executing a resampling step. In this way, a limited number of SLS steps can effectively explore the space around these initial high-quality graphs. We refer to this new method, where NF is used to both initialize the edge-marginals and seed the graph history, as hybrid NF/SLS.
5 Experiments
We begin our experimental analysis by first assessing the speed vs. accuracy trade-off of the different
marginal likelihood approximations in Section 2. For this evaluation we use the Monte Carlo method
of [3] as a proxy for the ground truth marginal likelihood. For data dimensions d = 6, ..., 16,
we sample 100 random, sparse precision matrices with an average edge density of 0.5. For each
sampled precision matrix $\Omega$ we generate $10d$ observations from the corresponding GGM. Using each approximation method, we score all $d(d-1)/2$ neighbors of $G$ obtained from $G$ by single edge
flips. We then compute a posterior distribution over this set of graphs by normalizing the scores (or
exponentiated scores as appropriate). We then compute the Kullback-Leibler (KL) divergence from
the Monte Carlo based posterior to each approximate posterior. We also record the time required to
score each graph. The scoring methods we use are BIC, full-Laplace and diagonal-Laplace for Z n
and Z0 . We use a G-Wishart prior with parameters ?0 = 3 and S0 = I. In Figure 1(a) we show the
average error of these posterior approximations as a function of data dimensionality d, as measured
by KL divergence. In Figure 1(b) we show the average time required to score a single graph as
a function of graph size d. As expected, full-Laplace is the most accurate and most costly of the
approximations next to Monte Carlo. Interestingly, diagonal-Laplace appears to be significantly
more accurate than BIC (for this test) and is in fact only twice as costly. Moreover, diagonalLaplace is already more than 20 times faster than Monte Carlo and full-Laplace at d = 16. On the
basis of the speed vs. accuracy trade-off seen in Figure 1(a) and Figure 1(b), we will report only the
diagonal-Laplace score in the remainder of our experiments.
We next evaluate the NF-L1MB and NF-L0MB methods described in Section 3 (note that we will
use the short labels NF-L1 and NF-L0 in the Figures), and SLS for decomposable and general graphs initialized from the optimal tree as described in Section 4 (denoted as DLS-T and GLS-T, respectively), and a L0MB-based hybrid NF/SLS method as described in Section 4 (denoted as GLS-NF).
[Figure 2 appears here: box plots comparing DLS-T, GLS-T, GLS-NF, NF-L0, and NF-L1 on three panels: (a) Diagonal-Laplace Score, (b) Test log-likelihood, (c) Imputed log-likelihood.]
Figure 2: Mutual Fund results: box plots of the (a) scores, (b) test set log-likelihoods and (c) test set imputation log-likelihoods (averaged over all possible missing 3-node triplets). The BMA performance is indicated with a circle.
We sample 5000 graphs for each of the NF methods and run each of the SLS methods for 5000 steps,
also producing 5000 graphs. The hybrid NF/SLS method is initialized with a sample of 100 NF
graphs, and then run for 5000 steps. We compute the score for each set of graphs (diagonal-Laplace
for non-decomposable and exact marginal likelihood for decomposables). We extract the 100 best
graphs by score, and produce an approximation to p(G|D) by normalizing the exponentiated scores.
We report results for individual graphs in the best 100, but our main focus is on performance statistics
under Bayesian model averaging (BMA) with approximate scores of each method. In the following
experiments we use a G-Wishart prior degree δ0 = 3 (the smallest integer yielding a proper prior) and, unless otherwise noted, a default prior scatter matrix of S0 = mean(diag(cov(X))) · I_d.
We examine the two main inferential tasks of prediction and knowledge discovery. We first measure
the predictive ability of each method by computing both test set log-likelihoods and test set imputation log-likelihoods. For this task we use the "Mutual Funds" (MF) dataset used by [29] for SLS with decomposable GGMs, with d = 59, which they split into 60 months of training data and 26 months of test data. But due to the resulting critical sampling (n ≈ d), here we use a more stable S0 = λ · Diag(XᵀX) with λ = 0.055 (a Ledoit-Wolf shrinkage). In Figure 1(c) we show a trace plot of scores for the SLS methods and best scores for the NF and tree methods. Box plots of diagonal-Laplace scores for each method on the MF data are shown in Figure 2(a). The corresponding test set log-likelihoods are shown in Figure 2(b). For the imputation experiment, we impute "missing"
triplets of variables given the values of the remaining variables. We compute the log-likelihood of
this predictive (imputed) distribution by averaging it over all 59-choose-3 = 32509 possible missing
patterns and all 26 test cases. The imputation log-likelihoods are shown in Figure 2(c). We can see
that NF-L0MB out-performs NF-L1MB on both predictive tasks (full and missing). Interestingly, on
this small data set SLS for general graphs (GLS-T) performs rather well. But our hybrid NF-L0MB
"seeding" approach for SLS (GLS-NF) has the best overall BMA performance.
In the second set of tasks, we evaluate the structural recovery of each method by measuring the
true positive and false positive rates for edge inclusion w.r.t. a ground-truth GGM. The synthetic
data sets contain d = 100 nodes, E = 300 edges and n/d ratios of 5.0 (Synth-1) and 0.5 (Synth-2).
Synth-1 is thus generously oversampled while Synth-2 is undersampled. Both synthetic GGMs were
generated by moralizing a random DAG. Figures 3(a) and 3(b) show plots of TPR vs. FPR for edge
recovery. The rates for individual graphs are shown as small grey symbols while the BMA rate is
shown with a large bold colored symbol. The results show that NF-L0MB and GLS-NF (based on
seeding GLS with 100 NF-L0MB graphs) are the best methods on both data sets. We also see that
NF-L0MB dominates NF-L1MB, while the hybrid GLS-NF dominates both GLS-T and DLS-T.
For the d = 59 MF dataset in Figure 1(c), NF-sampling 5000 graphs and doing the G-Wishart mode-fits and diagonal-Laplace scoring takes a total of 13 mins, and likewise 30 mins for the synthetic
d = 100 dataset in Figure 3. Generating and scoring 5000 graphs with non-decomposable SLS takes
37 mins on the MF dataset and 59 mins on the synthetic one. Decomposable SLS takes 31 mins on
MF and 43 mins on the synthetic. All times quoted are for Matlab code running on a 3.16 GHz PC.
[Figure 3 appears here: TPR vs. FPR scatter plots (FPR on a log scale) for DLS-T, GLS-T, GLS-NF, NF-L0, and NF-L1; panel (a) Synth-1: n/d = 5.0 (d = 100), panel (b) Synth-2: n/d = 0.5 (d = 100).]
Figure 3: True Positive vs. False Positive rates for (a) Synth-1 and (b) Synth-2 datasets for each recovery
method. The top 100 graphs are shown with a grey symbol and the bold colored symbol is the BMA graph.
6 Discussion
We offer a practical framework for fast inference in non-decomposable GGMs providing reasonable
accuracy for marginal likelihoods. While Monte Carlo methods are the "gold standard" (modulo the
usual convergence issues) they are exorbitantly costly for even moderately large d. For example,
scoring all the neighbors of a 150-node graph via SLS required over 40 days of computation in [20].
A similar size task would take less than 40 mins with our diagonal-Laplace approximation method.
As pointed out by [21] there may not always be sufficient concentration for a Laplace approximation
to Z_0 to be very accurate, which is why they use MC for this quantity. We chose Laplace for both Z_n
and Z0 solely for speed (to avoid MC altogether) and found good agreement between full-Laplace
and BIC for much larger graphs than in Figure 1(a). Our Laplace scores also roughly matched the
MC values for the Fisher Iris data in [3], selecting essentially the same top-ranked 16 graphs (see
Figure 5 in [3]). Using a diagonal instead of a full Hessian was yet another compromise for speed.
An issue that should be explored further is the sensitivity of these approximations to different priors.
We experimentally validated NF on nearly 10^4 synthetic cases ranging in size from d = 10, ..., 500,
with various edge densities and n/d ratios, with consistently good results, typified by the two test
cases shown in Figure 3. Note that the sub-par performance of NF-L1 is not a failing of NF but due
to the l1-based MBs, and the superiority of l0-based forward/backward (F/B) greedy search is not without precedent [25, 24, 33].
We note that NF can be partially justified as a pseudo marginal likelihood (PML), but whereas most
authors rely only on its maximizer [23] we exploit the full (pseudo) density. Without the AND filter,
NF-drawn MBs are sampled from a set of "consistent" full-conditionals in the sense of Besag [5],
and their max-BIC MBs are collectively the PML mode (note that here we mean the node regression
BIC, not graph BIC). Enforcing AND is a necessary domain truncation for a valid UG which alters
the mode. This symmetrized "pseudo-MAP" G is often an average-scoring one compared to the best and worst found by NF, which motivates BMA and justifies NF. We can also view NF as an overdispersed proposal density; its weighted graphs are a rough proxy for p(G|D). This approximation may
be biased but our results show it is quite useful for prediction and imputation (and seeding SLS with
high-quality graphs). Finally, while use of BIC/Laplace for hypothesis testing is often criticized, it
can still be useful for estimation [26], and nowhere in our framework are these scores being used to
select a single "best" model (whether it be a MB or a G) due to our reliance on sampling and BMA.
Acknowledgments
We like to thank the reviewers for their helpful and encouraging feedback. BMM was supported
by the Killam Trusts at UBC and KPM would like to thank NSERC and CIFAR. This work was in
part carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract
with the National Aeronautics and Space Administration.
References
[1] H. Armstrong. Bayesian Estimation of Decomposable GGMs. PhD thesis, UNSW, 2005.
[2] H. Armstrong, C. Carter, K. Wong, and R. Kohn. Bayesian covariance matrix estimation using a mixture
of decomposable graphical models. Statistics and Computing, 2008.
[3] A. Atay-Kayis and H. Massam. A Monte Carlo method for computing the marginal likelihood in nondecomposable Gaussian graphical models. Biometrika, 92, 2005.
[4] O. Banerjee, L. El Ghaoui, A. d'Aspremont, and G. Natsoulis. Convex optimization techniques for fitting
sparse Gaussian graphical models. In Intl. Conf. on Machine Learning, 2006.
[5] J. Besag. Efficiency of pseudo-likelihood estimation for simple Gaussian fields. Biometrika, 1977.
[6] R. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization.
SIAM J. of Scientific & Statistical Computing, 16(5), 1995.
[7] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans.
on Info. Theory, 14, 1968.
[8] P. Dellaportas, P. Giudici, and G. Roberts. Bayesian inference for nondecomposable graphical Gaussian
models. Sankhya, Ser. A, 65, 2003.
[9] A. Dempster. Covariance selection. Biometrics, 28(1), 1972.
[10] P. Diaconis and D. Ylvisaker. Conjugate priors for exponential families. Annals of statistics, 7(2), 1979.
[11] D. Dobra, C. Hans, B. Jones, J. Nevins, G. Yao, and M. West. Sparse graphical models for exploring gene
expression data. J. Multivariate analysis, 90, 2004.
[12] J. Domke, A. Karapurkar, and Y. Aloimonos. Who killed the directed model? In CVPR, 2008.
[13] J. Duchi, S. Gould, and D. Koller. Projected subgradients for learning sparse Gaussians. In UAI, 2008.
[14] D. Eaton and K. Murphy. Bayesian structure learning using DP and MCMC. In UAI, 2007.
[15] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation in Glasso. Biostats, 2007.
[16] N. Friedman and D. Koller. Being Bayesian about network structure: A Bayesian approach to structure
discovery in Bayesian networks. Machine Learning, 50, 2003.
[17] P. Giudici and P. Green. Decomposable graphical Gaussian model determination. Biometrika, 1999.
[18] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2009.
[19] D. Heckerman, D. Geiger, and M. Chickering. Learning Bayesian networks: the combination of knowledge and statistical data. Machine Learning, 20(3), 1995.
[20] B. Jones, C. Carvalho, A. Dobra, C. Hans, C. Carter, and M. West. Experiments in stochastic computation
for high-dimensional graphical models. Statistical Science, 20, 2005.
[21] A. Lenkoski and A. Dobra. Bayesian structural learning and estimation in Gaussian graphical models.
Technical Report 545, Department of Statistics, University of Washington, 2008.
[22] D. Madigan and A. Raftery. Model selection and accounting for model uncertainty in graphical models
using Occam's window. J. of the Am. Stat. Assoc., 89, 1994.
[23] N. Meinshausen and P. Buhlmann. High dimensional graphs and variable selection with the Lasso. The
Annals of Statistics, 2006.
[24] B. Moghaddam, A. Gruber, Y. Weiss, and S. Avidan. Sparse regression as a sparse eigenvalue problem.
In Information Theory & Applications Workshop (ITA?08), 2008.
[25] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: Exact & greedy algorithms.
In NIPS, 2006.
[26] A. Raftery. Bayesian model selection in social research. Sociological Methodology, 25, 1995.
[27] A. Roverato. Hyper inverse Wishart distribution for non-decomposable graphs and its application to
Bayesian inference for Gaussian graphical models. Scand. J. Statistics, 29, 2002.
[28] M. Schmidt, A. Niculescu-Mizil, and K. Murphy. Learning graphical model structure using l1-regularization paths. In AAAI, 2007.
[29] J. Scott and C. Carvalho. Feature-inclusion stochastic search for Gaussian graphical models. J. of
Computational and Graphical Statistics, 17(4), 2008.
[30] T. Speed and H. Kiiveri. Gaussian Markov distributions over finite graphs. Annals of Statistics, 1986.
[31] F. Wong, C. Carter, and R. Kohn. Efficient estimation of covariance selection models. Biometrika, 2003.
[32] M. Yuan and Yi Lin. Model selection and estimation in the GGM. Biometrika, 94(1), 2007.
[33] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning. In NIPS, 2008.
[34] H. Zou, T. Hastie, and R. Tibshirani. On the ?degrees of freedom? of Lasso. Annals of Statistics, 2007.
Explaining human multiple object tracking as
resource-constrained approximate inference in a
dynamic probabilistic model
Edward Vul, Michael C. Frank, and Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02138
{evul, mcfrank, jbt}@mit.edu
George Alvarez
Department of Psychology
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
Multiple object tracking is a task commonly used to investigate the architecture
of human visual attention. Human participants show a distinctive pattern of successes and failures in tracking experiments that is often attributed to limits on an
object system, a tracking module, or other specialized cognitive structures. Here
we use a computational analysis of the task of object tracking to ask which human
failures arise from cognitive limitations and which are consequences of inevitable
perceptual uncertainty in the tracking task. We find that many human performance phenomena, measured through novel behavioral experiments, are naturally
produced by the operation of our ideal observer model (a Rao-Blackwellized particle filter). The tradeoff between the speed and number of objects being tracked,
however, can only arise from the allocation of a flexible cognitive resource, which
can be formalized as either memory or attention.
1 Introduction
Since William James first described the phenomenology of attention [11], psychologists have been
struggling to specify the cognitive architecture of this process, how it is limited, and how it helps
information processing. The study of visual attention specifically has benefited from rich, simple
paradigms, and of these multiple object tracking (MOT) [16] has recently gained substantial popularity. In a typical MOT task (Figure 1), subjects see a number of objects, typically colorless circles,
moving onscreen. Some subset of the objects are marked as targets before the trial begins, but during the trial all objects turn to a uniform color and move haphazardly for several seconds. The task
is to keep track of which objects were marked as targets at the start of the trial so that they can be
identified at the end of the trial when the objects stop moving.
The pattern of results from MOT experiments is complicated. Participants can only track a finite
number of objects [16], but more objects can be tracked when they move slower [1], suggesting a
limit on attentional speed. If objects are moved far apart in the visual field, however, they can be
tracked at high speeds, suggesting that spatial crowding also limits tracking [9]. When tracking,
participants seem to maintain information about the velocity of objects [19] and this information is
sometimes helpful in tracking [8]. More frequently, however, velocity is not used to track, suggesting
limitations on the kinds of information available to the tracking system [13]. Finally, although
participants can track objects using features like color and orientation [3], some features seem to
hurt tracking [15], and tracking is primarily considered to be a spatial phenomenon. These results
and others have left researchers puzzled: What limits tracking performance?
Figure 1: Left: A typical multiple object tracking experiment. Right: The generative model underlying our
probabilistic tracker (see text for details).
Proposed limitations on MOT performance may be characterized along the dimensions of discreteness and flexibility. A proposal positing a fixed number of slots (each holding one object) describes a
discrete limitation, while proposals positing limits on attention speed or resolution are more continuous. Attention and working memory are canonical examples of flexible limitations: Based on the
task, we decide where to attend and what to remember. Such cognitive limitations may be framed
either as a discrete number of slots or as a continuous resource. In contrast, visual acuity and noise
in velocity perception are low-level, task-independent limitations: Regardless of the task we are doing, the resolution of our retina is limited and our motion-discrimination thresholds are stable. Such
perceptual limitations tend only to be continuous.
We aim to determine which MOT effects reflect perceptual, task-independent uncertainty, and which
reflect flexible, cognitive limitations. Our approach is to describe the minimal computations that an
ideal observer must undertake to track objects and combine available information. To the extent
that an effect is not naturally explained at the computational level given only perceptual sources of
uncertainty, it is more likely to reflect flexible cognitive limitations.
We propose that humans track objects in a manner consistent with the Bayesian multi-target tracking
framework common in computer vision [10, 18]. We implement a variant of this tracking model
using Rao-Blackwellized particle filtering and show how it can be easily adapted for a wide range
of MOT experiments. This unifying model allows us to design novel experiments that interpolate
between seemingly disparate phenomena. We argue that, since the effects of speed, spacing, and
features arise naturally in an ideal observer with no limits on attention, memory, or number of objects
that can be tracked, these phenomena can be explained by optimal object tracking given low-level,
perceptual sources of uncertainty. We identify a subset of MOT phenomena that must reflect flexible
cognitive resources, however: effects that manipulate the number of objects that can be tracked. To
account for tradeoffs between object speed and number, a task-dependent resource constraint must
be added to our model. This constraint can be interpreted as either limited attentional resolution or
limited short term memory.
2 Optimal multiple object tracking
To track objects in a typical MOT experiment (Figure 1), at each point in time the observer must
determine which of many observed objects corresponds to which of the objects that were present
in the display in the last frame. Here we will formalize this procedure using a classical tracking
algorithm in computer vision [10, 18].
2.1 Dynamics
Object tracking requires some assumptions about how objects evolve over time. Since there is no
consensus on how to generate object tracking displays in the visual attention literature, we will
assume simple linear dynamics, which can approximate prior experimental manipulations. Specifically, we assume that the true state of the world St contains information about each object being
tracked (i): to start we consider objects defined by position (x_t(i)) and velocity (v_t(i)), but we will
later consider tracking objects through more complicated feature-spaces. Although we refer to position and velocity, the state actually contains two position and two velocity dimensions: one of each
for x and y.
S_t evolves according to linear dynamics with noise. Position and velocity for x and y evolve independently according to an Ornstein-Uhlenbeck (mean-reverting) process, which can be thought of
as Brownian motion on a spring, and can be most clearly spelled out as a series of equations:
\[
x_t = x_{t-1} + v_t, \qquad v_t = \lambda v_{t-1} - k\,x_{t-1} + w_t, \qquad w_t \sim N(0, \sigma_w) \tag{1}
\]
where x and v are the position and velocity at time t; λ is an inertia parameter constrained to be between 0 and 1; k is a spring constant which produces the mean-reverting properties of the dynamics; and w_t is random acceleration noise added at each time point, which is distributed as a zero-mean Gaussian with standard deviation σ_w.
In two dimensions, this stochastic process describes a randomly moving cloud of objects; the spring
constant assures that the objects will not drift off to infinity, and the friction parameter assures that
they will not accelerate to infinity. Within the range of parameters we consider, this process converges to a stable distribution of positions and velocities both of which will be normally distributed
around zero. We can solve for the standard deviations for position (σ_x) and velocity (σ_v), by assuming that the expected values of σ_x, σ_v and their covariance will not change through an update step; thus obtaining:
\[
\sigma_x = \sqrt{\frac{(1+\lambda)\,\sigma_w^2}{k\,(1-\lambda)\,(2\lambda - k + 2)}}, \quad \text{and} \quad \sigma_v = \sqrt{\frac{2\,\sigma_w^2}{(1-\lambda)\,(2\lambda - k + 2)}}, \tag{2}
\]
respectively. Because these terms are familiar in the human multiple object tracking literature, for the rest of this paper we will describe the dynamics in terms of the spatial extent of the cloud of moving dots (σ_x), the standard deviation of the velocity distribution (σ_v), and the inertia parameter (λ; solving for k and σ_w to generate dynamics and track objects).
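A minimal simulation sketch of these dynamics (our own Python illustration with arbitrary parameter values; `stationary_stds` implements the closed forms of Eq. 2 as reconstructed above):

```python
import numpy as np

def simulate_dynamics(lam=0.9, k=0.01, sigma_w=0.05, T=100_000, seed=0):
    """Simulate the 1-D mean-reverting dynamics of Eq. (1):
    v_t = lam*v_{t-1} - k*x_{t-1} + w_t;  x_t = x_{t-1} + v_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    v = np.zeros(T)
    for t in range(1, T):
        v[t] = lam * v[t - 1] - k * x[t - 1] + sigma_w * rng.standard_normal()
        x[t] = x[t - 1] + v[t]
    return x, v

def stationary_stds(lam, k, sigma_w):
    """Closed-form stationary standard deviations, matching Eq. (2) above."""
    denom = (1.0 - lam) * (2.0 * lam - k + 2.0)
    sigma_x = np.sqrt((1.0 + lam) * sigma_w**2 / (k * denom))
    sigma_v = np.sqrt(2.0 * sigma_w**2 / denom)
    return sigma_x, sigma_v

x, v = simulate_dynamics()
print(np.std(x), np.std(v))              # empirical stationary spread
print(stationary_stds(0.9, 0.01, 0.05))  # analytic values from Eq. (2)
```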
2.2 Probabilistic model
The goal of an object tracking model is to track the set of n objects in S over a fixed period from
t_0 to t_m. For our model, we assume observations (m_t) at each time t are noisy measurements of the true state of the world at that time (S_t). In other words, our tracking model is a stripped-down simplification of tracking models commonly used in computer vision because we do not track from noisy images, but instead, from extracted position and velocity estimates. The observer must estimate S_t based on the current and previous measurements, thus obtaining Ŝ_t. However, this task is complicated by the fact that the observer obtains an unlabeled bag of observations (m_t), and does not know which observations correspond to which objects in the previous state estimate Ŝ_{t-1}. Thus, the observer must not only estimate S_t, but must also determine the data assignment of observations to objects, which can be described by a permutation vector γ_t.
Since we assume independent linear dynamics for each individual object, then conditioned on γ, we can track each individual object via a Kalman filter. That is, what is a series of unlabeled bags of observations when data assignments were unknown becomes a set of individuated time-series (one for each object) in which each point in time contains only a single observation when conditioned on the data assignment. The Kalman filter will be updated via transition matrix A, according to S_t = A S_{t-1} + W_t, and state perturbations W are distributed with covariance Q (A and Q can be derived straightforwardly from the dynamics in Eq. 1; see Supplementary Materials).
Inference about both the state estimate and the data assignment can proceed by predicting the current location for each object, which will be a multivariate normal distribution with mean predicted state Ŝ_{t|t-1} = A Ŝ_{t-1} and predicted estimate covariance G_{t|t-1} = A G_{t-1} A' + Q. From these predictions, we can define the probability of a particular data assignment permutation vector as:
\[
P(\gamma_t \mid S_t, G_t, M_t) = \prod_i P(\gamma_t(i) \mid \hat{S}_{t|t-1}(i), G_{t|t-1}(i), M_t(i)), \quad \text{where} \tag{3}
\]
\[
P(\gamma_t(i) \mid \hat{S}_{t|t-1}(i), G_{t|t-1}(i)) = N\big(m_t(\gamma(i));\ \hat{S}_{t|t-1}(i),\ G_{t|t-1}(i) + M_t(\gamma(i))\big)
\]
where the posterior probability of a particular γ value is determined by the Gaussian probability density, and M_t(j) is the covariance of measurement noise for m_t(j). Because an observation can
arise from only one object, mutual exclusivity is built into this conditional probability distribution. This complication makes analytical solutions impossible, and the data assignment vector, γ, must be sampled. However, given an estimate of γ, an estimate of the current state of the object is given by the Kalman state update ([12]; see Supplementary Materials).
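A sketch of the per-object prediction, assignment likelihood (Eq. 3), and state update (our own illustration, not the authors' code; we assume an observation matrix H, which for this task can simply be the identity, since positions and velocities are observed directly):

```python
import numpy as np
from scipy.stats import multivariate_normal

def kalman_predict(A, Q, s_prev, G_prev):
    """Prediction step: S_hat_{t|t-1} = A s_{t-1}, G_{t|t-1} = A G_{t-1} A' + Q."""
    return A @ s_prev, A @ G_prev @ A.T + Q

def assignment_log_lik(m, M, s_pred, G_pred, H):
    """Log of the Gaussian term in Eq. (3) for pairing observation m
    (measurement covariance M) with one object's prediction."""
    return multivariate_normal.logpdf(m, mean=H @ s_pred, cov=H @ G_pred @ H.T + M)

def kalman_update(m, M, s_pred, G_pred, H):
    """Standard Kalman state update, applied once gamma assigns m to this object."""
    S = H @ G_pred @ H.T + M                 # innovation covariance
    K = G_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    s_new = s_pred + K @ (m - H @ s_pred)
    G_new = (np.eye(len(s_pred)) - K @ H) @ G_pred
    return s_new, G_new

# with positions and velocities observed directly, H can be the identity:
# H = np.eye(4)
```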
2.3 Inference
To infer the state of the tracking model described above, we must sample the data-association vector, γ, and then the rest of the tracking may proceed analytically. Thus, we implement a Rao-Blackwellized particle filter [6]: each particle corresponds to one sampled γ vector and contains the analytically computed state estimates for each of the objects, conditioned on that sampled γ vector. Thus, taken together, the particles used for tracking (in our case we use 50, but see Section 3.4 for discussion) approximate the joint probability distribution over γ and S.
In practice, we sample γ with the following iterative procedure. First, we sample each component of γ independently of all other γ components (as in PMHT [18]). Then, if the resulting γ vector contains conflicts that violate the mutual exclusivity of data assignments, a subset of γ is resampled. If this resampling procedure fails to come up with an assignment vector that meets the mutual exclusivity constraint, we compute the combinatoric expansion of the permutation of the conflicted subset of γ and sample assignments within that subset from the combinatoric space. This procedure is very fast when tracking is easy, but can slow down when tracking is hard and the combinatoric expansion is necessary.
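A schematic sketch of this sampling procedure (our own illustration; details such as the number of resampling attempts, max_tries, are our choices and are not specified by the text):

```python
import numpy as np
from itertools import permutations

def sample_assignment(log_lik, rng, max_tries=10):
    """Sample a data-assignment vector gamma from an (n x n) matrix of
    per-object/per-observation assignment log-likelihoods (Eq. 3),
    enforcing mutual exclusivity as described in the text."""
    n = log_lik.shape[0]
    probs = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # 1) sample each component independently (as in PMHT)
    gamma = np.array([rng.choice(n, p=probs[i]) for i in range(n)])
    # 2) resample any conflicted subset a few times
    for _ in range(max_tries):
        vals, counts = np.unique(gamma, return_counts=True)
        conflicted = np.where(np.isin(gamma, vals[counts > 1]))[0]
        if conflicted.size == 0:
            return gamma
        for i in conflicted:
            gamma[i] = rng.choice(n, p=probs[i])
    # 3) fall back: sample the conflicted subset from its combinatoric expansion
    vals, counts = np.unique(gamma, return_counts=True)
    conflicted = np.where(np.isin(gamma, vals[counts > 1]))[0]
    if conflicted.size == 0:
        return gamma
    taken = set(gamma[np.setdiff1d(np.arange(n), conflicted)])
    free = sorted(set(range(n)) - taken)
    perms = list(permutations(free))
    w = np.array([sum(log_lik[i, p] for i, p in zip(conflicted, pm)) for pm in perms])
    w = np.exp(w - w.max())
    w /= w.sum()
    gamma[conflicted] = perms[rng.choice(len(perms), p=w)]
    return gamma

# usage: gamma = sample_assignment(log_lik, np.random.default_rng(0))
```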
2.4 Perceptual uncertainty
In order to determine the limits on optimal tracking in our model, we must know what information
human observers have access to. We assume that observers know the summary statistics of the
cloud of moving dots (their spatial extent, given by σ_x, and their velocity distribution, σ_v). We also start with the assumption that they know the inertia parameter (λ; however, this assumption will be questioned in Section 3.2). Given a perfect measurement of σ_x, σ_v, and λ, observers will thus know the dynamics by which the objects evolve.
We must also specify the low-level, task-independent noise for human observers. We assume that noise in observing the positions of objects (σ_mx) is given by previously published eccentricity scaling parameters, σ_mx(x) = c(1 + 0.42x) (from [5]), where c is uncertainty in position. We use c = 0.08 (standard deviation in degrees visual angle) throughout this paper. We also assume that observations of speed are corrupted by Weber-scaled noise with some irreducible uncertainty (a): σ_mv(v) = a + bv, setting a = 0.01 and b = 0.05 (b is the Weber fraction as measured in [17]).
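These two noise models are simple enough to state directly (our own sketch; the parameter defaults are the values quoted above):

```python
def sigma_mx(x, c=0.08):
    """Eccentricity-scaled positional noise (std. dev., degrees visual angle)
    at eccentricity x: sigma_mx(x) = c*(1 + 0.42*x), from [5]."""
    return c * (1.0 + 0.42 * x)

def sigma_mv(v, a=0.01, b=0.05):
    """Weber-scaled speed noise with irreducible uncertainty a:
    sigma_mv(v) = a + b*v, with b the Weber fraction from [17]."""
    return a + b * v
```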
3 Results
3.1 Tracking through space
When objects move faster, tracking them is harder [1], suggesting to researchers that an attentional
speed limit may be limiting tracking. However, when objects cover a wider area of space (when they
move on a whole field display), they can be tracked more easily at a given speed, suggesting that
crowding rather than speed is the limiting factor [9].
Both of these effects are predicted by our model: both the speed and spatial separation of objects alter
the uncertainty inherent in the tracking task. When objects move faster (greater σ_v), predictions about where objects will be on the next time-step have greater uncertainty: the covariance of the predicted state (G_{t|t-1}) has greater entropy, and inference about which observation arose from which object (γ) is less certain and more prone to errors. Additionally, even at a given speed and inertia, when the spatial extent (σ_x) is smaller, objects are closer together. Even given a fixed uncertainty
about where in space an object will end up, the odds of another object appearing therein is greater,
again limiting our ability to infer ?. Thus, both increasing velocity variance and decreasing spatial
variance will make tracking harder, and to achieve a particular level of performance the two must
trade off.
Figure 2: Top: Stimuli and data from [9]: when objects are tracked over the whole visual field, they can move at greater speed to achieve a particular level of accuracy. Bottom-Left: Our own experimental data in which subjects set a "comfortable" spacing for tracking 3 of 6 objects at a particular speed. Bottom-Middle: Model accuracy for tracking 3 of 6 objects as a function of speed and spacing. Bottom-Right: Model "settings": (85% accuracy) threshold spacing for a particular speed. See text for details.
We show the speed-space tradeoff in both people and our ideal tracking model. We asked 10 human
observers to track 3 of 6 objects moving according to the dynamics described earlier. Their goal was
to adjust the difficulty of the tracking task so that they could track the objects for 5 seconds. We
told them that sometimes tracking would be too hard and sometimes too easy, and they could adjust
the difficulty by hitting one button to make the task easier and another button to make it harder.1
Making the task easier or harder amounted to moving the objects farther apart or closer together by
adjusting ?x of the dynamics, while the speed (?v ) stayed constant. We parametrically varied ?v
between 0.01 and 0.4, and could thus obtain an iso-difficulty curve for people making settings of ?x
as a function of ?v (2).
To elicit predictions from our model on this task we simulated 5 second trials where the model had
to track 3 of 6 objects, and measured accuracy across 15 spacing intervals (?x between 0.5 and 4.0
degrees visual angle), crossed with 11 speeds (?v between 0.01 and 0.4). At each point in this speedspace grid, we simulated 250 trials, to measure mean tracking accuracy for the model. The resulting
accuracy surface is shown in Figure 2 ? an apparent tradeoff can be seen, when objects move faster,
they must be farther apart to achieve the same level of accuracy as when they move slower.
To make the model generate thresholds of ?x for a particular ?v , as we had human subjects do,
we fit psychometric functions to each cross-section through the accuracy surface, and used the psychometric function to predict settings that would achieve a particular level of accuracy (one such
psychometric function is shown in red on the surface in Figure2).2 The plot in Figure 2 shows the
model setting for the 0.85 accuracy mark; the upper and lower error bounds represent the settings to
achieve an accuracy of 0.8 and 0.9, respectively (in subsequent plots we show only the 0.85 threshold for simplicity). As in the human performance, there is a continuous tradeoff: when objects are
faster, spacing must be wider to achieve the same level of difficulty.
1. The correlation of this method with participants' objective tracking performance was validated by [1].
2. We used the Weibull cumulative density as our psychometric function, p = 1 − exp(−(x/x_crit)^s), where x is the stimulus dimension, which covaries positively with performance (either σ_x or 1/σ_v), x_crit is the location term, and s is the scale, or slope, parameter. We obtained the MAP estimate of both parameters of the Weibull density function, and predicted the model's 85% threshold (blue plane in Figure 2) from the inverse of the psychometric function: x = x_crit(−ln(1 − p))^{1/s}.
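As an illustration of the threshold-extraction procedure in this footnote (our own sketch; we use a simple least-squares fit in place of the MAP fit described above, and the data here are synthetic illustrative values, not values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(x, x_crit, s):
    """Weibull psychometric function: p = 1 - exp(-(x/x_crit)**s)."""
    return 1.0 - np.exp(-(x / x_crit) ** s)

def threshold(p, x_crit, s):
    """Inverse of the psychometric function: the stimulus value x at accuracy p."""
    return x_crit * (-np.log(1.0 - p)) ** (1.0 / s)

# illustrative fit to a synthetic accuracy-vs-spacing cross-section
spacing = np.linspace(0.5, 4.0, 15)
rng = np.random.default_rng(0)
accuracy = weibull(spacing, 2.0, 3.0) + 0.02 * rng.standard_normal(15)
(x_crit, s), _ = curve_fit(weibull, spacing, accuracy, p0=[2.0, 2.0])
print(threshold(0.85, x_crit, s))   # 85%-accuracy threshold spacing
```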
Figure 3: Left: Human speed-space tradeoff settings do not vary for different physical inertias. Middle panels:
This is the case for the ideal model with no knowledge of inertia, but not so for the ideal model with perfect
knowledge of inertia. Right: This may be the case because it is safer to assume a lower inertia: tracking is
worse if inertia is assumed to be higher than it is (red) than vice versa (green).
3.2 Inertia
It is disputed whether human observers use velocity to track [13]. Nonetheless, it is clear that adults, and even babies, know something about object velocity [19]. The model we propose can reconcile these conflicting findings.
In our model, knowing object velocity means having an accurate σ_v term for the object: an estimate of how much distance it might cover in a particular time step. Using velocity trajectories to make predictions about future states also requires that people know the inertia term. Thus, the degree to which trajectories are used to track is a question about the inertia parameter (λ) that best matches human performance. Thus far we have assumed that people know λ perfectly and use it to predict future states, but this need not be the case. Indeed, while the two other parameters of the dynamics (the spatial extent, σ_x, and the velocity distribution, σ_v) may be estimated quickly and efficiently from a brief observation of the tracking display, inertia is more difficult to estimate. Thus, observers may be more uncertain about the inertia, and may be more likely to guess it incorrectly. (Under our model, a guess of λ = 0 corresponds to tracking without any velocity information.)
We ran an experiment to assess what inertia parameter best fits human observers. We asked subjects
to set iso-difficulty contours as a function of the underlying inertia (λ) parameter, by using the same
difficulty-setting procedure described earlier. An ideal observer who knows the inertia perfectly will
greatly benefit from displays with high inertia in which uncertainty will be low, and will be able
to track with the same level of accuracy at greater speeds given a particular spacing. However, if
inertia is incorrectly assumed to be zero, high- and low-inertia iso-difficulty contours will be quite
similar (Figure 3). We find that in human observers, iso-difficulty contours for λ = 0.7, λ = 0.8, and λ = 0.9 are remarkably similar, consistent with observers assuming a single, low inertia term.
Although these results corroborate previous findings that human observers do not seem to use trajectories to track, there is evidence that sometimes people do use trajectories. These variations in observers' assumptions about inertia may be attributable to two factors. First, most MOT experiments include rather sudden changes in velocity, from objects bouncing off the walls or simply as a function of their underlying dynamics. Second, under uncertainty about the inertia underlying a particular display, an observer is better off underestimating rather than overestimating. Figure 3 shows the decrement in performance as a function of the mismatch between the observers' assumed inertia and that of the tracking display.
3.3 Tracking through feature space
In addition to tracking through space, observers can also track objects through feature domains. For
example, experimental participants can track two spatially superimposed gratings based on their
slowly varying colors, orientations or spatial frequencies [3].
We can modify our model to track in feature space by adding new dimensions corresponding to the
features being tracked. Linear feature dimensions like the log of spatial frequency can be treated
exactly like position and velocity. Circular features like hue angle and orientation require a slight
Figure 4: Left: When object color drifts more slowly over time (lower σ_c), people can track objects more effectively. Right: Our tracking model does so as well (observation noise for color, σ_mc, in the model was set to 0.02π).
modification: we pre-process the state estimates and observations via a modulus operation to preserve their circular relationship under the linear Kalman update. With this modification, the linear Kalman state update can operate on circular variables, and our basic tracking model can track colored objects with a high level of accuracy when they are superimposed (σ_x = σ_v = 0, Figure 4).
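A minimal sketch of this modulus pre-processing for a circular feature such as hue angle (our own illustration; we assume angles in radians):

```python
import numpy as np

def wrap_angle(theta):
    """Map an angle in radians into [-pi, pi)."""
    return np.mod(theta + np.pi, 2.0 * np.pi) - np.pi

def circular_innovation(observed_hue, predicted_hue):
    """Signed shortest angular difference, used in place of the raw residual
    so the otherwise-linear Kalman update respects circularity."""
    return wrap_angle(observed_hue - predicted_hue)

# e.g. an observation at 0.1 rad against a prediction at 6.2 rad yields a
# small positive innovation rather than a large negative one:
print(circular_innovation(0.1, 6.2))   # ~0.183, not -6.1
```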
We additionally tested the novel prediction from our model that human observers can combine the information available from space and features for tracking. Nine human observers made iso-difficulty
settings as described above; however, this time each object had a color and we varied the color drift
rate (σ_c) on hue angle. Figure 4 shows subjects' settings of σ_x as a function of σ_v and σ_c. When
color changes slowly, observers can track objects in a smaller space at a given velocity. Figure 4
also shows that the pattern of thresholds from the model in the same task match those of the experimental participants. Thus, not only can human observers track objects in feature space, they
can combine both spatial location and featural information, and additional information in the feature
domain allows people to track successfully with less spatial information, as argued by [7].
3.4 Cognitive limitations
Thus far we have shown that many human failures in multiple object tracking do not reflect cognitive
limitations on tracking, but are instead a consequence of the structure of the task and the limits on
available perceptual information. However, a limit on the number of objects that may be tracked
[16] cannot be accounted for in this way. Observers can more easily track 4 of 16 objects at a higher
speed than 8 of 16 objects (Figure 5), even though the stimulus presentation is identical in both cases
[1]. Thus, this limitation must be a consequence of uncertainty that may be modulated by task: a
flexible resource [2].
Within our model, there are two plausible alternatives for what such a limited resource may be:
visual attention, which improves the fidelity of measurements; or memory, which enables more
or less noiseless propagation of state estimates through time3 . In both cases, when more objects
are tracked, less of the resource is available for each object, resulting in an increase of noise and
uncertainty. At a superficial level, both memory and attention resources amount to a limited amount
of gain to be used to reduce noise. Given the linear Kalman filtering computation we have proposed
as underlying tracking, equal magnitude noise in either will have the same effects. Thus, to avoid the
complexities inherent in allocating attention to space, we will consider memory limitations, but this
resource limitation can be thought of as "attention gain" as well (though some of our work suggests
that memory may be a more appropriate interpretation).
We must decide on a linking function between the covariance U of the memory noise, and the
number of objects tracked. It is natural to propose that covariance scales positively with the number
of objects tracked; that is, U for n objects would be equal to U_n = U_1 · n. This expression captures the idea that task-modulated noise should follow the σ ∝ √n rule, as would be the case if the state
for a given object were stored or measured with a finite number of samples. With more samples,
3 One might suppose that limiting the number of particles used for tracking as in [4] and [14], might be
a likely resource capacity; however, in object tracking, having more particles produces a benefit only insofar
as future observations might disambiguate previous inferences. In multiple object tracking with uniform dots
(as is the case in most human experiments) once objects have been mis-associated, no future observations can
provide evidence of a mistake having been made in the past; and as such, having additional particles to keep
track of low-probability data associations carries no benefit.
Figure 5: Left: When more objects are tracked (out of 16) they must move at a slower speed to reach a particular level of accuracy [1]. Right: Our model exhibits this effect only if task-dependent uncertainty is introduced (see text).
precision would increase; however, because the number of samples available is fixed at c, the number
of samples per object would be c/n, giving rise to the scaling rule described above.
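A minimal sketch of this linking function (our own illustration; the budget c below is an arbitrary example value, not a value from the paper):

```python
import numpy as np

def memory_noise_cov(U1, n):
    """Task-dependent memory noise: U_n = U_1 * n, so the per-object noise
    standard deviation grows as sqrt(n) when n objects share the resource."""
    return np.asarray(U1) * n

# with a fixed budget of c samples shared across n tracked objects,
# each object is represented by c/n samples:
c = 64
for n in (1, 2, 4, 8):
    print(n, c / n, np.sqrt(n))  # targets, samples per object, relative noise std
```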
In Figure 5 we add such a noise term to our model and measure performance (threshold speed, σ_v, for a given number of targets n_t, when spacing is fixed, σ_x = 4, and the total number of objects is also fixed, n = 16). The characteristic tradeoff between the number of targets and the
speed with which they may be tracked is clearly evident. Thus, while many results in MOT arise
as consequences of the information available for the computational task, the speed-number tradeoff
seems to be the result of a flexibly-allocated resource such as memory or attention.
4 Conclusions
We investigated what limitations are responsible for human failures in multiple object tracking tasks.
Are such limitations discrete (like a fixed number of objects) or continuous (like memory)? Are they
flexible with task (cognitive resources such as memory and attention), or are they task-independent
(like perceptual noise)?
We modified a Bayes-optimal tracking solution for typical MOT experiments and implemented this
solution using a Rao-Blackwellized particle filter. Using novel behavioral experiments inspired by
the model, we showed that this ideal observer exhibits many of the classic phenomena in multiple
object tracking given only perceptual uncertainty (a continuous, task-independent source of limitation). Just as for human observers, tracking in our model is harder when objects move faster or are
closer together; inertia information is available, but may not be used; and objects can be tracked
in features as well as space. However, effects of the number of objects tracked do not arise from
perceptual uncertainty alone. To account for the tradeoff between the number of objects tracked and
their speed, a task-dependent resource must be introduced; we introduce this resource as a memory
constraint, but it may well be attentional gain.
Although the dichotomy of flexible, cognitive resources and task-independent, low-level uncertainty
is a convenient distinction to start our analysis, it is misleading. When engaging in any real world
task this distinction is blurred: people will use whatever resources they have to facilitate performance; even perceptual uncertainty as basic as the resolution of the retina becomes a flexible resource when people are allowed to move their eyes (they were not allowed to do so in our experiments). Connecting resource limitations measured in controlled experiments to human performance
in the real world requires that we address not only what the structure of the task may be, but also how
human agents allocate resources to accomplish this task. Here we have shown that a computational
model of the multiple object tracking task can unify a large set of experimental findings on human
object tracking, and most importantly, determine how these experimental findings map onto cognitive limitations. Because our findings implicate a flexible cognitive resource, the next necessary
step is to investigate how people allocate such a resource, and this question will be pursued in future
work.
Acknowledgments: This work was supported by ONR MURI: Complex Learning and Skill Transfer with Video Games N00014-07-1-0937 (PI: Daphne Bavelier); NDSEG fellowship to EV and
NSF DRMS Dissertation grant to EV.
References
[1] G. Alvarez and S. Franconeri. How many objects can you attentively track? Evidence for a resource-limited tracking mechanism. Journal of Vision, 7(13):1-10, 2007.
[2] P. Bays and M. Husain. Dynamic shifts of limited working memory resources in human vision. Science, 321(5890):851, 2008.
[3] E. Blaser, Z. Pylyshyn, and A. Holcombe. Tracking an object through feature space. Nature, 408(6809):196-199, 2000.
[4] S. Brown and M. Steyvers. Detecting and predicting changes. Cognitive Psychology, 58:49-67, 2008.
[5] M. Carrasco and K. Frieder. Cortical magnification neutralizes the eccentricity effect in visual search. Vision Research, 37(1):63-82, 1997.
[6] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proceedings of Uncertainty in AI, volume 00, 2000.
[7] J. Feldman and P. Tremoulet. Individuation of visual objects over time. Cognition, 99:131-165, 2006.
[8] D. E. Fencsik, J. Urrea, S. S. Place, J. M. Wolfe, and T. S. Horowitz. Velocity cues improve visual search and multiple object tracking. Visual Cognition, 14:92-95, 2006.
[9] S. Franconeri, J. Lin, Z. Pylyshyn, B. Fisher, and J. Enns. Evidence against a speed limit in multiple object tracking. Psychonomic Bulletin & Review, 15:802-808, 2008.
[10] F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, and P. Nordlund. Particle filters for positioning, navigation, and tracking. In IEEE Transactions on Signal Processing, volume 50, 2002.
[11] W. James. The Principles of Psychology. Harvard University Press, Cambridge, 1890.
[12] R. Kalman. A new approach to linear filtering and prediction problems. J. of Basic Engineering, 82D:35-45, 1960.
[13] B. P. Keane and Z. W. Pylyshyn. Is motion extrapolation employed in multiple object tracking? Tracking as a low-level non-predictive function. Cognitive Psychology, 52:346-368, 2006.
[14] R. Levy, F. Reali, and T. Griffiths. Modeling the effects of memory on human online sentence processing with particle filters. In Advances in Neural Information Processing Systems, volume 21, 2009.
[15] T. Makovski and Y. Jiang. Feature binding in attentive tracking of distinct objects. Visual Cognition, 17:180-194, 2009.
[16] Z. W. Pylyshyn and R. W. Storm. Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision, 3:179-197, 1988.
[17] R. Snowden and O. Braddick. The temporal integration and resolution of velocity signals. Vision Research, 31(5):907-914, 1991.
[18] R. Streit and T. Luginbuhl. Probabilistic multi-hypothesis tracking. Technical Report 10428, NUWC, Newport, Rhode Island, USA, 1995.
[19] S. P. Tripathy and B. T. Barrett. Severe loss of positional information when detecting deviations in multiple trajectories. Journal of Vision, 4(12):1020-1043, 2004.
inertia:25 commonly:2 made:2 far:3 transaction:1 approximate:3 obtains:1 skill:1 makovski:1 keep:2 doucet:1 assumed:4 continuous:6 iterative:1 un:1 search:2 bay:1 additionally:2 disambiguate:1 nature:1 transfer:1 superficial:1 obtaining:2 expansion:2 investigated:1 complex:1 domain:2 decrement:1 whole:2 noise:15 arise:6 reconcile:1 allowed:2 evul:1 positively:2 benefited:1 psychometric:5 slow:1 attributable:1 precision:1 fails:1 position:11 perceptual:11 levy:1 down:2 xt:3 barrett:1 evidence:5 adding:1 effectively:1 gained:1 agt:1 magnitude:1 conditioned:3 easier:2 entropy:1 simply:1 likely:3 visual:14 positional:1 hitting:1 tracking:77 binding:1 corresponds:3 extracted:1 ma:2 slot:2 conditional:1 marked:2 goal:2 presentation:1 acceleration:1 fisher:1 change:4 hard:2 safer:1 specifically:2 typical:4 determined:1 wt:4 total:1 amounted:1 experimental:6 people:10 mark:1 modulated:2 tested:1 phenomenon:6 |
3,122 | 3,829 | A Gaussian Tree Approximation for Integer
Least-Squares
Jacob Goldberger
School of Engineering
Bar-Ilan University
[email protected]
Amir Leshem
School of Engineering
Bar-Ilan University
[email protected]
Abstract
This paper proposes a new algorithm for the linear least squares problem where
the unknown variables are constrained to be in a finite set. The factor graph that
corresponds to this problem is very loopy; in fact, it is a complete graph. Hence,
applying the Belief Propagation (BP) algorithm yields very poor results. The algorithm described here is based on an optimal tree approximation of the Gaussian
density of the unconstrained linear system. It is shown that even though the approximation is not directly applied to the exact discrete distribution, applying the
BP algorithm to the modified factor graph outperforms current methods in terms
of both performance and complexity. The improved performance of the proposed
algorithm is demonstrated on the problem of MIMO detection.
1 Introduction
Finding the linear least squares fit to data is a well-known problem, with applications in almost every field of science. When there are no restrictions on the variables, the problem has a closed form
solution. In many cases, a-priori knowledge on the values of the variables is available. One example
is the existence of priors, which leads to Bayesian estimators. Another example of great interest
in many applications is when the variables are constrained to a discrete finite set. This problem
has many diverse applications such as decoding of multi-input-multi-output (MIMO) digital communication systems, GPS system ambiguity resolution [15] and many lattice problems in computer
science, such as finding the closest vector in a lattice to a given point in $\mathbb{R}^n$ [1], and vector subset
sum problems which have applications in cryptography [11]. In contrast to the continuous linear
least squares problem, this problem is known to be NP hard.
This paper concentrates on the MIMO application. It should be noted, however, that the proposed
method is general and can be applied to any integer linear least-squares problem. A multiple-input multiple-output (MIMO) system is a communication system with $n$ transmit antennas and $m$ receive antennas. The tap gain from transmit antenna $i$ to receive antenna $j$ is denoted by $H_{ij}$. In each use of the MIMO channel a vector $x = (x_1, \ldots, x_n)^\top$ is independently selected from a finite set of points $\mathcal{A}$ according to the data to be transmitted, so that $x \in \mathcal{A}^n$. A standard example of a finite set $\mathcal{A}$ in MIMO communication is $\mathcal{A} = \{-1, 1\}$ or more generally $\mathcal{A} = \{\pm 1, \pm 3, \ldots, \pm(2k+1)\}$. The received vector $y$ is given by:
$$y = Hx + \varepsilon \qquad (1)$$
The vector $\varepsilon$ is an additive noise in which the noise components are assumed to be zero mean, statistically independent Gaussians with a known variance $\sigma^2 I$. The $m \times n$ matrix $H$ is assumed
to be known. (In the MIMO application we further assume that H comprises iid elements drawn
from a normal distribution of unit variance.) The MIMO detection problem consists of finding the
unknown transmitted vector x given H and y. The task, therefore, boils down to solving a linear
system in which the unknowns are constrained to a discrete finite set. Since the noise $\varepsilon$ is assumed
to be additive Gaussian, the optimal maximum likelihood (ML) solution is:
$$\hat{x} = \arg\min_{x \in \mathcal{A}^n} \|Hx - y\|^2 \qquad (2)$$
However, going over all the $|\mathcal{A}|^n$ vectors is unfeasible when either $n$ or $|\mathcal{A}|$ are large.
A simple sub-optimal solution is based on a linear decision that ignores the finite set constraint:
$$z = (H^\top H)^{-1} H^\top y \qquad (3)$$
and then, neglecting the correlation between the symbols, finding the closest point in A for each
symbol independently:
$$\hat{x}_i = \arg\min_{a \in \mathcal{A}} |z_i - a| \qquad (4)$$
This scheme performs poorly due to its inability to handle ill-conditioned realizations of the matrix
H. Somewhat better performance can be obtained by using a minimum mean square error (MMSE)
Bayesian estimation on the continuous linear system. Let e be the variance of a uniform distribution
over the members of $\mathcal{A}$. We can partially incorporate the information that $x \in \mathcal{A}^n$ by using the prior
Gaussian distribution $x \sim N(0, eI)$. The MMSE estimation becomes:
$$E(x|y) = \left(H^\top H + \frac{\sigma^2}{e} I\right)^{-1} H^\top y \qquad (5)$$
and then the finite-set solution is obtained by finding the closest lattice point in each component independently. A vast improvement over the linear approaches described above can be achieved by using sequential decoding:
- Apply MMSE (5) and choose the most reliable symbol, i.e. the symbol that corresponds to the column with the minimal norm of the matrix:
$$\left(H^\top H + \frac{\sigma^2}{e} I\right)^{-1} H^\top$$
- Make a discrete symbol decision for the most reliable symbol $\hat{x}_i$. Eliminate the detected symbol: $\sum_{j \neq i} h_j x_j = y - h_i \hat{x}_i$ ($h_j$ is the $j$-th column of $H$) to obtain a new smaller linear system. Go to the first step to detect the next symbol.
This algorithm, known as MMSE-SIC [5], has the best performance for this family of linear-based
algorithms but the price is higher complexity. These linear type algorithms can also easily provide
probabilistic (soft-decision) estimates for each symbol. However, there is still a significant gap
between the detection performance of the MMSE-SIC algorithm and the performance of the optimal
ML detector.
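For concreteness, the three linear-type detectors above can be sketched in a few lines of numpy. This is an illustrative reimplementation under the notation of Eqs. (3)-(5); function and variable names are ours, not from the paper, and this is not the evaluation code of Section 4.

import numpy as np

def nearest_symbol(z, alphabet):
    # round each entry of z to the closest member of the finite set A (Eq. 4)
    A = np.asarray(alphabet, dtype=float)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    return A[np.argmin(np.abs(z[:, None] - A[None, :]), axis=1)]

def zf_detect(H, y, alphabet):
    z = np.linalg.lstsq(H, y, rcond=None)[0]    # z = (H^T H)^{-1} H^T y, Eq. (3)
    return nearest_symbol(z, alphabet)

def mmse_detect(H, y, alphabet, sigma2):
    e = np.mean(np.asarray(alphabet, dtype=float) ** 2)  # prior variance (zero-mean alphabets)
    n = H.shape[1]
    z = np.linalg.solve(H.T @ H + (sigma2 / e) * np.eye(n), H.T @ y)  # Eq. (5)
    return nearest_symbol(z, alphabet)

def mmse_sic_detect(H, y, alphabet, sigma2):
    # MMSE-SIC: detect the most reliable symbol, cancel it, repeat on the smaller system
    e = np.mean(np.asarray(alphabet, dtype=float) ** 2)
    n = H.shape[1]
    H, y = H.copy(), y.astype(float).copy()
    remaining, x_hat = list(range(n)), np.zeros(n)
    while remaining:
        G = np.linalg.solve(H.T @ H + (sigma2 / e) * np.eye(H.shape[1]), H.T)
        k = int(np.argmin(np.sum(G ** 2, axis=1)))  # most reliable: minimal row norm
        xk = nearest_symbol(G[k] @ y, alphabet)[0]
        x_hat[remaining.pop(k)] = xk
        y = y - H[:, k] * xk
        H = np.delete(H, k, axis=1)
    return x_hat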
Many alternative structures have been proposed to approach ML detection performance. For example, sphere decoding algorithm (an efficient way to go over all the possible solutions) [7], approaches
using the sequential Monte Carlo framework [3] and methods based on semidefinite relaxation [17]
have been implemented. Although the detection schemes listed above reduce computational complexity compared to the exhaustive search of ML solution, sphere decoding is still exponential in the
average case [9] and the semidefinite relaxation is a high-degree polynomial. Thus, there is still a
need for low complexity detection algorithms that can achieve good performance.
This study attempts to solve the integer least-squares problem using the Belief Propagation (BP)
paradigm. It is well-known (see e.g. [14]) that a straightforward implementation of the BP algorithm
to the MIMO detection problem yields very poor results since there are a large number of short
cycles in the underlying factor graph. In this study we introduce a novel approach to utilize the BP
paradigm for MIMO detection. The proposed variant of the BP algorithm is both computationally
efficient and achieves near optimal results.
2 The Loopy Belief Propagation Approach
Given the constrained linear system $y = Hx + \varepsilon$, and a uniform prior distribution on $x$, the posterior probability function of the discrete random vector $x$ given $y$ is:
$$p(x|y) \propto \exp\left(-\frac{1}{2\sigma^2} \|Hx - y\|^2\right), \qquad x \in \mathcal{A}^n \qquad (6)$$
The notation $\propto$ stands for equality up to a normalization constant. Observing that $\|Hx - y\|^2$ is a quadratic expression, it can be easily verified that $p(x|y)$ is factorized into a product of two- and single-variable potentials:
$$p(x_1, \ldots, x_n|y) \propto \prod_i \psi_i(x_i) \prod_{i<j} \psi_{ij}(x_i, x_j) \qquad (7)$$
such that
$$\psi_i(x_i) = \exp\left(\frac{1}{\sigma^2}\, y^\top h_i\, x_i\right), \qquad \psi_{ij}(x_i, x_j) = \exp\left(-\frac{1}{\sigma^2}\, h_i^\top h_j\, x_i x_j\right) \qquad (8)$$
where $h_i$ is the $i$-th column of the matrix $H$. Since the obtained factors are simply a function of
pairs, we obtain a Markov Random Field (MRF) representation [18]. In the MIMO application
the (known) matrix H is randomly selected and therefore, the MRF graph is usually a completely
connected graph.
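As a small illustration, the potentials of (7)-(8) can be tabulated over the alphabet with a few array operations. The sketch below follows the reconstruction of Eq. (8) given above (up to constants absorbed in normalization); names are illustrative.

import numpy as np

def mrf_potentials(H, y, sigma2, alphabet):
    A = np.asarray(alphabet, dtype=float)
    n = H.shape[1]
    G = H.T @ H                                # Gram matrix, G[i, j] = h_i^T h_j
    b = H.T @ y                                # b[i] = y^T h_i
    psi_i = np.exp(np.outer(b, A) / sigma2)    # psi_i[i, a] = exp(y^T h_i a / sigma^2)
    psi_ij = {(i, j): np.exp(-G[i, j] * np.outer(A, A) / sigma2)
              for i in range(n) for j in range(i + 1, n)}   # |A| x |A| tables
    return psi_i, psi_ij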
In a loop-free MRF graph the max-product variant of the BP algorithm always converges to the most
likely configuration (which corresponds to ML decoding in our case). For loop-free graphs, BP is
essentially a distributed variant of dynamic programming. The BP message update equations only
involve passing messages between neighboring nodes. Computationally, it is thus straightforward
to apply the same local message updates in graphs with cycles. In most such models, however,
this loopy BP algorithm will not compute exact marginal distributions; hence, there is almost no
theoretical justification for applying the BP algorithm. (One exception is that, for Gaussian graphs,
if BP converges, then the means are correct [16]). However, the BP algorithm applied to loopy
graphs has been found to have outstanding empirical success in many applications, e.g., in decoding
LDPC codes [6]. The performance of BP in this application may be attributed to the sparsity of the
graphs. The cycles in the graph are long, hence the graph have tree-like properties, so that messages
are approximately independent and inference may be performed as though the graph was loop-free.
The BP algorithm has also been used successfully in image processing and computer vision (e.g.
[4]) where the image is represented using a grid-structured MRF that is based on local connections
between neighboring nodes.
However, when the graph is not sparse, and is not based on local grid connections, loopy BP almost
always fails to converge. Unlike the sparse graphs of LDPC codes, or grid graphs in computer vision
applications, the MRF graphs of MIMO channels are completely connected graphs and therefore
the associated detection performance is poor. This has prevented the BP from being an asset for
the MIMO problem. Fig. 1 shows an example of a MIMO real-valued system based on an $8 \times 8$ matrix and $\mathcal{A} = \{-1, 1\}$ (see the experiment section for a detailed description of the simulation set-up). As can be seen in Fig. 1, the BP decoder based on the MRF representation (7) has very poor
results. Standard techniques to stabilize the BP iterations such as damping the message updates do
not help here. Even applying more advanced versions of BP (e.g. Generalized BP and Expectation
Propagation) to inference problems on complete MRF graphs yields poor results [12]. The problem
here is not in the optimization method but in the cost function, which needs to be modified to yield a good approximate solution.
There have been several recent attempts to apply BP to the MIMO detection problem with good
results (e.g. [8, 10]). However in the methods proposed in [8] and [10] the factorization of the
probability function is done in such a way that each factor corresponds to a single linear equation.
This leads to a partition of the probability function into factors each of which is a function of all
the unknown variables. This leads to exponential computational complexity in computing the BP
messages. Shental et al. [14] analyzed the case where the matrix H is relatively sparse (and has
a grid structure). They showed that even under this restricted assumption the BP still does not
perform well. As an alternative method they proposed the generalized belief propagation (GBP)
algorithm that does work well on the sparse matrix if the algorithm regions are carefully chosen.
There are situations where the sparsity assumption makes sense (e.g. 2D intersymbol interference
(ISI) channels). However, in the MIMO channel model we assume that the channel matrix elements
are iid and Gaussian; hence we cannot assume that the channel matrix H is sparse.
[Figure 1 shows SER versus $E_s/N_0$ curves for the BP, MMSE, MMSE-SIC and ML detectors.]
Figure 1: Decoding results for an $8 \times 8$ system, $\mathcal{A} = \{-1, 1\}$.
3 The Tree Approximation of the Gaussian Density
Our approach is based on an approximation of the exact probability function:
$$p(x_1, \ldots, x_n|y) \propto \exp\left(-\frac{1}{2\sigma^2} \|Hx - y\|^2\right), \qquad x \in \mathcal{A}^n \qquad (9)$$
that enables a successful implementation of the Belief Propagation paradigm. Since the BP algorithm is optimal on loop-free factor graphs (trees) a reasonable approach is finding an optimal
tree approximation of the exact distribution (9). Chow and Liu [2] proposed a method to find a
tree approximation of a given distribution that has the minimum Kullback-Leibler distance to the
actual distribution. They showed that the optimal tree can be learned efficiently via a maximum
spanning tree whose edge weights correspond to the mutual information between the two variables
corresponding to the edges endpoints. The problem is that the Chow-Liu algorithm is based on the
(2-dimensional) marginal distributions. However, finding the marginal distribution of the probability
function (9) is, unfortunately, NP hard and it is (equivalent to) our final target.
To overcome this obstacle, our approach is based on applying the Chow-Liu algorithm on the distribution corresponding to the unconstrained linear system. This distribution is Gaussian and therefore
it is straightforward in this case to compute the (2-dimensional) marginal distributions. Given the
Gaussian tree approximation, the next step of our approach is to apply the finite-set constraint and
utilize the Gaussian tree distribution to form a discrete loop free approximation of p(x|y) which can
be efficiently globally maximized using the BP algorithm. To motivate this approach we first apply
a simplified version to derive the linear solution (4) described in Section 2.
Let $z(y) = (H^\top H)^{-1} H^\top y$ be the least-squares estimator (3) and $C = \sigma^2 (H^\top H)^{-1}$ its variance. It can be easily verified that $p(x|y)$ (9) can be written as:
$$p(x|y) \propto f(x; z, C) = \exp\left(-\frac{1}{2}(z - x)^\top C^{-1}(z - x)\right) \qquad (10)$$
where f (x; z, C) is a Gaussian density with mean z and covariance matrix C (to simplify notation
we ignore hereafter the constant coefficient of the Gaussian densities). Now, instead of marginalizing
the true distribution p(x|y), which is an NP hard problem, we approximate it by the product of the
marginals of the Gaussian density f (x; z, C):
$$f(x; z, C) \approx \prod_i f(x_i; z_i, C_{ii}) = \prod_i \exp\left(-\frac{(z_i - x_i)^2}{2 C_{ii}}\right) \qquad (11)$$
From the Gaussian approximation (11) we can extract a discrete approximation:
$$\hat{p}(x_i = a|y) \propto f(x_i = a; z_i, C_{ii}) = \exp\left(-\frac{(z_i - a)^2}{2 C_{ii}}\right), \qquad a \in \mathcal{A} \qquad (12)$$
Input: A constrained linear LS problem $Hx + \varepsilon = y$, a noise level $\sigma^2$ and a finite symbol set $\mathcal{A}$.
Goal: Find (an approximation to) $\arg\min_{x \in \mathcal{A}^n} \|Hx - y\|^2$.
Algorithm:
- Compute $z = (H^\top H + \frac{\sigma^2}{e} I)^{-1} H^\top y$ and $C = \sigma^2 (H^\top H + \frac{\sigma^2}{e} I)^{-1}$.
- Denote:
$$f(x_i; z, C) = \exp\left(-\frac{1}{2} \frac{(x_i - z_i)^2}{C_{ii}}\right)$$
$$f(x_i|x_j; z, C) = \exp\left(-\frac{1}{2} \frac{\big((x_i - z_i) - \frac{C_{ij}}{C_{jj}}(x_j - z_j)\big)^2}{C_{ii} - C_{ij}^2/C_{jj}}\right)$$
- Compute the maximum spanning tree of the $n$-node graph where the weight of the $i$-$j$ edge is the square of the correlation coefficient $\rho_{ij}^2 = C_{ij}^2/(C_{ii} C_{jj})$. Assume the tree is rooted at node 1 and denote the parent of node $i$ by $p(i)$.
- Apply BP on the loop-free MRF
$$\hat{p}(x_1, \ldots, x_n|y) \propto f(x_1; z, C) \prod_{i=2}^{n} f(x_i|x_{p(i)}; z, C), \qquad x_1, \ldots, x_n \in \mathcal{A}$$
to find the (approximation to the) most likely configuration.

Figure 2: The Gaussian Tree Approximation (GTA) Algorithm.
Taking the most likely symbol we obtain the sub-optimal linear solution (4).
Motivated by the simple product-of-marginals approximation described above, we suggest approximating the discrete distribution p(x|y) via a tree-based approximation of the Gaussian distribution
f (x; z, C). Although the Chow-Liu algorithm was originally stated for discrete distributions, one
can easily verify that it also applies for the Gaussian case. Let
$$I(x_i; x_j) = \log C_{ii} + \log C_{jj} - \log \begin{vmatrix} C_{ii} & C_{ij} \\ C_{ji} & C_{jj} \end{vmatrix} = -\log\left(1 - \rho_{ij}^2\right)$$
be the mutual information of $x_i$ and $x_j$ based on the Gaussian distribution $f(x; z, C)$, where $\rho_{ij}$ is the correlation coefficient between $x_i$ and $x_j$. Let $\hat{f}(x)$ be the optimal Chow-Liu tree approximation of $f(x; z, C)$. We can assume, without loss of generality, that $\hat{f}(x)$ is rooted at $x_1$. $\hat{f}(x)$ is a loop-free Gaussian distribution on $x_1, \ldots, x_n$, i.e.
$$\hat{f}(x) = f(x_1; z, C) \prod_{i=2}^{n} f(x_i|x_{p(i)}; z, C), \qquad x \in \mathbb{R}^n \qquad (13)$$
where $p(i)$ is the parent of the $i$-th node in the optimal tree. The Chow-Liu algorithm guarantees that $\hat{f}(x)$ is the optimal Gaussian tree approximation of $f(x; z, C)$ in the sense that the KL divergence $D(f\,\|\,\hat{f})$ is minimal among all the Gauss-Markov distributions on $\mathbb{R}^n$. We note in passing that applying a monotonic function on the graph weights does not change the topology of the optimal tree. Hence to find the optimal tree we can use the weights $\rho_{ij}^2$ instead of $-\log(1 - \rho_{ij}^2)$. The optimal tree, therefore, is the one that maximizes the sum of the square correlation coefficients between adjacent nodes.
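In practice this step is a standard maximum spanning tree computation. A sketch using scipy (we negate the weights because the library routine computes a minimum spanning tree; it also drops zero-weight edges, which is harmless here since the correlations are generically nonzero):

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(C):
    d = np.sqrt(np.diag(C))
    rho2 = (C / np.outer(d, d)) ** 2        # squared correlation coefficients
    np.fill_diagonal(rho2, 0.0)
    mst = minimum_spanning_tree(-rho2)      # maximum spanning tree via negation
    i, j = mst.nonzero()
    return list(zip(i.tolist(), j.tolist()))   # the n-1 edges of the optimal tree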
Our approximation approach is, therefore, based on replacing the true distribution $p(x|y)$ with the following approximation:
$$\hat{p}(x_1, \ldots, x_n|y) \propto \hat{f}(x) = f(x_1; z, C) \prod_{i=2}^{n} f(x_i|x_{p(i)}; z, C), \qquad x \in \mathcal{A}^n \qquad (14)$$
The probability function $\hat{p}(x|y)$ is a loop-free factor graph. Hence the BP algorithm can be applied to find its most likely configuration. An optimal BP schedule requires passing a message once for each direction of each edge. The BP messages are first sent from leaf variables to the root and then back to the leaves. We demonstrate empirically in the experiment section that the optimal solution of $\hat{p}(x|y)$ is indeed nearly optimal for $p(x|y)$.
The MMSE Bayesian approach (5) is known to be better than the linear based solution (4). In a similar way we can consider a Bayesian version of the proposed Gaussian tree approximation. We can partially incorporate the information that $x \in \mathcal{A}^n$ by using the prior Gaussian distribution $x \sim N(0, eI)$ such that $e = \frac{1}{|\mathcal{A}|} \sum_{a \in \mathcal{A}} a^2$. This yields the posterior Gaussian distribution:
$$f(x|y) \propto \exp\left(-\frac{1}{2e}\|x\|^2 - \frac{1}{2\sigma^2}\|Hx - y\|^2\right) \propto \exp\left(-\frac{1}{2\sigma^2}\big(x - E(x|y)\big)^\top \left(H^\top H + \frac{\sigma^2}{e} I\right)\big(x - E(x|y)\big)\right) \qquad (15)$$
such that $E(x|y) = (H^\top H + \frac{\sigma^2}{e} I)^{-1} H^\top y$. We can apply the Chow-Liu tree approximation on the Gaussian distribution (15) to obtain a "Bayesian" Gaussian tree approximation for $p(x|y)$. One can expect that this yields a better approximation of the discrete distribution $p(x|y)$ than the tree distribution that is based on the unconstrained distribution $f(x; z, C)$, since it partially includes the finite-set constraint. We show in Section 4 that the Bayesian version indeed yields better results.
To summarize, our solution to the constrained least squares problem is based on applying BP on a Gaussian tree approximation of the Bayesian version of the continuous least-squares case. We dub this method "The Gaussian-Tree-Approximation (GTA) Algorithm". The GTA algorithm is summarized in Fig. 2. We next compute the complexity of the GTA algorithm. The complexity of computing the covariance matrix $(H^\top H + \frac{\sigma^2}{e} I)^{-1}$ is $O(n^3)$, the complexity of the Chow-Liu algorithm (based on Prim's algorithm for finding the minimum spanning tree) is $O(n^2)$ and the complexity of the BP algorithm is $O(|\mathcal{A}|^2 n)$.
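A minimal max-product pass over the resulting tree can be sketched as follows. The rooting and the two-sweep schedule are one reasonable realization of the BP step of Figure 2, not the authors' implementation; the parent array and the child-first ordering can be derived from the edges returned by the previous sketch.

import numpy as np

def gta_decode(z, C, parent, order, alphabet):
    # parent[i]: parent of node i (-1 at the root); order: children before parents, root last
    A = np.asarray(alphabet, dtype=float)
    n = len(z)
    def log_cond(i, p):      # log f(x_i | x_p) of Eq. (13), on an |A| x |A| grid
        mu = z[i] + C[i, p] / C[p, p] * (A - z[p])   # one conditional mean per parent value
        var = C[i, i] - C[i, p] ** 2 / C[p, p]
        return -(A[:, None] - mu[None, :]) ** 2 / (2 * var)
    msg = np.zeros((n, len(A)))           # accumulated child-to-parent messages
    best = {}
    for i in order:                        # upward sweep (leaves to root)
        p = parent[i]
        if p < 0:
            continue
        table = log_cond(i, p) + msg[i][:, None]
        best[i] = np.argmax(table, axis=0)  # best x_i for each value of x_p
        msg[p] += np.max(table, axis=0)
    root = order[-1]
    x_idx = np.zeros(n, dtype=int)
    x_idx[root] = np.argmax(-(A - z[root]) ** 2 / (2 * C[root, root]) + msg[root])
    for i in reversed(order[:-1]):          # downward sweep: back-track the argmax
        x_idx[i] = best[i][x_idx[parent[i]]]
    return A[x_idx]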
4 Experimental Results
In this section we provide simulation results for the GTA algorithm over various MIMO systems. We assume a frame length of 100, i.e. the channel matrix $H$ is constant for 100 channel uses. The channel matrix comprised iid elements drawn from a zero-mean normal distribution of unit variance. We used $10^4$ realizations of the channel matrix. This resulted in $10^6$ vector messages. The performance of the proposed algorithm is shown as a function of the variance of the additive noise $\sigma^2$. The signal-to-noise ratio (SNR) is defined as $10 \log_{10}(E_s/N_0)$ where $E_s/N_0 = \frac{ne}{\sigma^2}$ ($n$ is the number of variables, $\sigma^2$ is the variance of the Gaussian additive noise, and $e$ is the variance of the uniform distribution over the discrete set $\mathcal{A}$).
Fig. 3 shows the symbol error rate (SER) versus SNR for a $10 \times 10$, $|\mathcal{A}| = 8$, MIMO system and for a $20 \times 20$, $|\mathcal{A}| = 4$, MIMO system. Note that the algorithm was applied in Fig. 3 to a real-world practical application (MIMO communication) using real-world parameters. Unlike other areas (e.g. computer vision, bioinformatics) here the real-world performance analysis is based on extensive simulations of the communication channel. Note that a $20 \times 20$ fully connected MRF is not a small problem and, unlike the Potts model that is defined on a grid MRF, the BP and its variants do not work here. The performance of the GTA method was compared to the MMSE and the MMSE-SIC algorithms (see Section 2). The GTA algorithm differs from these algorithms in two ways. The first is a Markovian approximation of $f(x; z, C)$ instead of a product of independent densities. The second aspect is utilizing the optimal tree. To clarify the contribution of each component we modified the GTA algorithm by replacing the Chow-Liu optimal tree with the chain $1 - 2 - 3 - \cdots - n$. We call this method the "Line-Tree". As can be seen from Fig. 3, using the optimal tree is crucial
to obtain improved results. Fig. 3b also shows results of the non-Bayesian variant of the GTA
algorithm. As can be seen, the Bayesian version yields better results. In Fig. 3a the two versions
yield the same results. It can be seen that the performance of the GTA algorithm is significantly
better than the MMSE-SIC (and its computational complexity is much smaller).
[Figure 3 shows SER versus $E_s/N_0$ for the MMSE, Line-Tree, MMSE-SIC, (non-Bayesian) GTA and ML detectors; panel (a): $10 \times 10$, $|\mathcal{A}| = 8$; panel (b): $20 \times 20$, $|\mathcal{A}| = 4$.]
Figure 3: Comparative results of MMSE, MMSE-SIC and variants of the GTA.
[Figure 4 shows SER as a function of the matrix size $n$ for MMSE, MMSE-SIC and the GTA with max-product (GTA-MP) and sum-product (GTA-SP) decoding; panel (a): noise level $\sigma^2 = 2.5$; panel (b): noise level $\sigma^2 = 0.25$.]
Figure 4: Comparative results of MMSE, MMSE-SIC and the GTA approximation followed by the sum-product and max-product variants of the BP algorithm. The alphabet size is $|\mathcal{A}| = 8$ and the results are shown as a function of the matrix size $n \times n$.
Fig. 4 depicts comparative performance results as a function of $n$, the size of the linear system. The alphabet size in all the experiments was $|\mathcal{A}| = 8$ and, as in Fig. 3, each experiment was repeated $10^4 \times 10^2$ times. The performance of the GTA method was compared to the MMSE and the MMSE-SIC algorithms (see Section 2). In Fig. 4a the noise variance was set to $\sigma^2 = 2.5$ and in Fig. 4b to $\sigma^2 = 0.25$. In all cases the GTA was found to be better than the MMSE-SIC. The GTA algorithm is based on an optimal Gaussian tree approximation followed by a BP algorithm. There are two variants of the BP, namely the max-product (MP) and the sum-product (SP). Since the performance is measured in symbol error-rate and not frame error-rate, the SP should yield improved results. Note that if the exact distribution were loop-free then SP would obviously be the optimal method when the error is measured in number of symbols. However, since the BP is applied to an approximated distribution, the superiority of the SP is not straightforward. When the noise level is relatively high the sum-product version is better than the max-product. When the noise level is lower there is no significant difference between the two BP variants. Note that from an algorithmic point of view, the MP, unlike the SP, can be easily computed in the log domain.
5 Conclusion
Solving integer linear least squares problems is an important issue in many fields. We proposed a novel technique based on the principle of a tree approximation of the Gaussian distribution that corresponds to the continuous linear problem. The proposed method improves performance compared to all other polynomial algorithms for solving the problem, as demonstrated in simulations. As far as we know this is the first successful attempt to apply the BP paradigm to a completely connected MRF. A main concept in the GTA model is the interplay between discrete and Gaussian models. Such hybrid ideas can also be considered for discrete inference problems other than least-squares. One example is the work of Opper and Winther, who applied an iterative algorithm using a model which is seen as discrete and Gaussian in turn to address Ising model problems [13]. Although the focus of this paper is on an approach based on tree approximation, more complicated approximations such as multi-parent trees have the potential to improve performance and can potentially provide a smooth performance-complexity trade-off. Although the proposed method yields improved results, the tree approximation we applied may not be the best one (finding the best tree for the integer constrained linear problem is NP hard). It is left for future research to search for a better discrete tree approximation for the constrained linear least squares problem.
References
[1] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger. Closest point search in lattices. IEEE Transactions on Information Theory, 48(8):2201–2214, 2002.
[2] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. on Info. Theory, pages 462–467, 1968.
[3] B. Dong, X. Wang, and A. Doucet. A new class of soft MIMO demodulation algorithms. IEEE Trans. Signal Processing, pages 2752–2763, 2003.
[4] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient belief propagation for early vision. International Journal of Computer Vision, pages 41–54, 2006.
[5] G. J. Foschini. Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas. Bell Labs Technical Journal, 1(2):41–59, 1996.
[6] R. G. Gallager. Low density parity check codes. IRE Trans. Inform. Theory, pages 21–28, 1962.
[7] B. M. Hochwald and S. ten Brink. Achieving near-capacity on a multiple antenna channel. IEEE Trans. Commun., pages 389–399, 2003.
[8] J. Hu and T. M. Duman. Graph-based detector for BLAST architecture. IEEE Communications Society ICC, 2007.
[9] J. Jalden and B. Ottersten. An exponential lower bound on the expected complexity of sphere decoding. IEEE Intl. Conf. Acoustics, Speech, Signal Processing, 2004.
[10] M. Kaynak, T. Duman, and E. Kurtas. Belief propagation over SISO/MIMO frequency selective fading channels. IEEE Transactions on Wireless Communications, pages 2001–2005, 2007.
[11] J. C. Lagarias and A. M. Odlyzko. Solving low-density subset sum problems. J. ACM, 32(1):229–246, 1985.
[12] T. Minka and Y. Qi. Tree-structured approximations by expectation propagation. Advances in Neural Information Processing Systems, 2004.
[13] M. Opper and O. Winther. Expectation consistent approximate inference. Journal of Machine Learning Research, pages 2177–2204, 2005.
[14] O. Shental, N. Shental, S. Shamai (Shitz), I. Kanter, A. J. Weiss, and Y. Weiss. Discrete-input two-dimensional Gaussian channels with memory: Estimation and information rates via graphical models and statistical mechanics. Information Theory, IEEE Transactions on, pages 1500–1513, 2008.
[15] P. J. G. Teunissen. Success probability of integer GPS ambiguity rounding and bootstrapping. Journal of Geodesy, 72:606–612, 1998.
[16] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, pages 2173–2200, 2001.
[17] A. Wiesel, Y. C. Eldar, and S. Shamai. Semidefinite relaxation for detection of 16-QAM signaling in MIMO channels. IEEE Signal Processing Letters, 2005.
[18] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. IJCAI, 2001.
3,123 | 383 | Back Propagation Implementation on the
Adaptive Solutions CNAPS Neurocomputer Chip
Hal McCartor
Adaptive Solutions Inc.
1400 N.W. Compton Drive
Suite 340
Beaverton, OR 97006
Abstract
The Adaptive Solutions CNAPS architecture chip is a general purpose
neurocomputer chip. It has 64 processors, each with 4 K bytes of local
memory, running at 25 megahertz. It is capable of implementing most
current neural network algorithms with on chip learning. This paper discusses the implementation of the Back Propagation algorithm on an array
of these chips and shows performance figures from a clock accurate hardware simulator. An eight chip configuration on one board can update 2.3
billion connections per second in learning mode and process 9.6 billion
connections per second in feed forward mode.
1 Introduction
The huge computational requirements of neural networks and their natural parallelism have led to a number of interesting hardware innovations for executing such
networks. Most investigators have created large parallel computers or special purpose chips limited to a small subset of algorithms. The Adaptive Solutions CNAPS
architecture describes a general-purpose 64-processor chip which supports on chip
learning and is capable of implementing most current algorithms. Implementation
of the popular Back Propagation (BP) algorithm will demonstrate the speed and
versatility of this new chip.
2 The Hardware Resources
The Adaptive Solutions CNAPS architecture is embodied in a single chip digital
neurocomputer with 64 processors running at 25 megahertz. All processors receive
the same instruction which they conditionally execute. Multiplication and addition
are performed in parallel allowing 1.6 billion inner product steps per second per
chip . Each processor has a 32-bit adder, 9-bit by 16-bit multiplier (16 by 16 in two
clock cycles), shifter, logic unit, 32 16-bit registers, and 4096 bytes of local memory.
Input and output are accomplished over 8-bit input and output buses common
to all processors. The output bus is tied to the input bus so that output of one
processor can be broadcast to all others. When multiple chips are used, they appear
to the user as one chip with more processors. Special circuits support finding the
maximum of values held in each processor and conserving weight space for sparsely
connected networks. An accompanying sequencer chip controls instruction flow,
input and output.
3 The Back Propagation Algorithm Implementation
Three critical issues must be addressed in the parallel implementation of BP on efficient hardware. These are the availability of weight values for back propagating the
error, the scaling and precision of computations, and the efficient implementation
of the output transfer function.
BP requires weight values at different nodes during the feed forward and back
propagation phases of computation. This problem is solved by having a second set
of weights which is the transpose of the output layer weights. These are located on
hidden node processors. The two matrices are updated identically. The input to the
hidden layer weight matrix is not used for error propagation and is not duplicated.
BP implementations typically use 32-bit floating point math. This largely eliminates
scaling, precision and dynamic range issues. Efficient hardware implementation
dictates integer arithmetic units with precision no greater than required. Baker
[Bak90] has shown 16-bit integer weights are sufficient for BP training and much
lower values adequate for use after training.
With fixed point integer math, the position of the binary point must be chosen. In
this implementation weights are 16 bits and use 12 bits to the right of the binary
point and four to the left including a sign bit. They range from -8 to +8. Input
and output are represented as 8-bit unsigned integers with binary point at the left.
The learning rate is represented as an 8-bit integer with two bits to the left of the
binary point and values ranging from .016 to 3.98. Error is represented as 8 bit
signed integers at the output layer and with the same representation as the weights
at the hidden layer.
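As an illustration, the following toy quantizer mimics the stated bit layouts (this is our reading of the text, not Adaptive Solutions code):

import numpy as np

def to_fixed(x, total_bits, frac_bits, signed=True):
    # return the integer code; the represented real value is code / 2**frac_bits
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1)) if signed else 0
    hi = (1 << (total_bits - 1)) - 1 if signed else (1 << total_bits) - 1
    return int(np.clip(round(x * scale), lo, hi))

w_q = to_fixed(0.7321, 16, 12)              # weight: 16 bits, 12 fractional, range [-8, 8)
x_q = to_fixed(0.5, 8, 8, signed=False)     # activation: unsigned 8-bit fraction in [0, 1)
print(w_q, x_q, (w_q * x_q) / (1 << 20))    # the product carries 20 fractional bits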
This data representation has been used to train benchmark BP applications with
results comparable to the floating point versions [HB91].
The BP sigmoid output function is implemented as an 8-bit by 256 lookup table.
During the forward pass input values are broadcast to all processors from off chip
via the input bus or from hidden nodes via the output bus to the input bus. During
the backward error propagation, error values are broadcast from the output nodes
to hidden nodes.
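A table of this kind can be built once and indexed at run time; a sketch follows. The input range mapped onto the 256 entries (here, net inputs clipped to [-8, 8)) is our assumption, since the paper does not give the exact indexing scheme.

import numpy as np

def build_sigmoid_lut(lo=-8.0, hi=8.0, entries=256):
    x = np.linspace(lo, hi, entries, endpoint=False)
    return np.round(255.0 / (1.0 + np.exp(-x))).astype(np.uint8)   # 8-bit outputs

LUT = build_sigmoid_lut()

def sigmoid_lut(net, lo=-8.0, hi=8.0):
    idx = np.clip(((np.asarray(net) - lo) / (hi - lo) * 256).astype(int), 0, 255)
    return LUT[idx]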
The typical BP network has two computational layers, the hidden and output layers.
They can be assigned to the same or different processor nodes (PN s) depending on
available memory for weights. PNs used for the hidden layer contain the transpose
weights of the output layer for back propagating error. If momentum or periodic
weight update are used, additional storage space is allocated with each weight.
In this implementation BP can be mapped to any set of contiguous processors
allowing multiple networks in CNAPS memory simultaneously. Thus, the output
of one algorithm can be directly used as input to another. For instance, in speech
recognition, a Fourier transform performed on the PN array could be input to a
series of matched BP networks whose hidden layers run concurrently. Their output
could be directed to an LVQ2 network for final classification. This can all be
accomplished without any intermediate results leaving the chip array.
4 Results
BP networks have been successfully run on a hardware clock accurate simulator
which gives the following timing results. In this example an eight-chip implementation (512 processors) was used. The network had 1900 inputs, 500 hidden nodes
and 12 outputs. Weights were updated after each input and no momentum was
used. The following calculations show BP performance:
TRAINING PHASE
Overhead clock cycles per input vector = 360
Cycles per input vector element = 4
Cycles per hidden node = 4
Cycles per output node = 7
Cycles per vector = 360 + (1900*4) + (500*4) + (12*7) = 10,044
Vectors per second = 25,000,000 / 10,044 = 2,489
Total forward weights = (1900*500) + (500*12) = 956,000
Weight updates per second = 956,000 * 2,489 = 2,379,484,000

FEED FORWARD ONLY
Overhead cycles per input vector = 59
Cycles per input vector element = 1
Cycles per hidden node = 1
Cycles per output node = 1 (for output of data)
Cycles per vector = 59 + 1900 + 500 + 12 = 2,471
Vectors per second = 25,000,000 / 2,471 = 10,117
Connections per second = 956,000 * 10,117 = 9,671,852,000
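The same arithmetic, written as a small script so that other network shapes can be plugged in (the constants are taken directly from the text):

CLOCK_HZ = 25_000_000
inputs, hidden, outputs = 1900, 500, 12
weights = inputs * hidden + hidden * outputs                 # 956,000 forward weights

train_cycles = 360 + 4 * inputs + 4 * hidden + 7 * outputs   # 10,044 cycles per vector
train_vps = CLOCK_HZ // train_cycles                         # 2,489 vectors per second
print("weight updates/s:", weights * train_vps)              # 2,379,484,000

ff_cycles = 59 + inputs + hidden + outputs                   # 2,471 cycles per vector
ff_vps = CLOCK_HZ // ff_cycles                               # 10,117 vectors per second
print("connections/s:", weights * ff_vps)                    # 9,671,852,000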
5 Comparative Performance
An array of eight Adaptive Solutions CNAPS chips would execute the preceding BP network at 2.3 billion training weight updates per second or 9.6 billion feed forward connections per second. These results can be compared with the results on other computers shown in Table 1.
computers shown in table 1.
MACHINE
SUN 3 lD88j
SAle SIGMA-llD88j
WARP [PGTK88]
CRAY 2 lPGTK88J
CRAY X-MP lD88J
CM-2 (65,536) [ZMMW90]
GF-1l1566) lWZ89j
8 ADAPTIVE CN APS chips
MCUPS
.034
MCPS
0.25
8
17
7
40
901
2,379
50
182
9,671
WTS
fp
fp
fp
fp
fp
fp
fp
16 bit int
Table 1. Comparison of BP performance for various computers and 8 Adaptive
Solutions CNAPS chips on one board. MCUPS is Millions of BP connection updates
per second in training mode. MCPS is millions of connections processed per second
in feed forward mode. WTS is representation used for weights.
6 Summary
The Adaptive Solutions CNAPS chip is a very fast general purpose digital neurocomputer chip. It is capable of executing the Back Propagation algorithm quite
efficiently. An 8 chip configuration can train 2.3 billion connections per second and
evaluate 9.6 billion BP feed forward connections per second.
References
[Bak90] T Baker. Implementation limits for artificial neural networks. Master's
thesis, Oregon Graduate Institute, 1990.
[D88] DARPA Neural Network Study. pp. 309–310, AFCEA International Press, Fairfax, Virginia, 1988.
[HB91] J. Holt and T. Baker. Back Propagation Simulations using Limited Precision
Calculations. Submitted to IJCNN, Seattle WA 1991.
[RM86] D. Rummelhart, J. McClelland. Parallel Distributed Processing. (1986)
MIT Press, Cambridge, MA.
[WZ89] M. Witbrock and M. Zagha. An Implementation of Back-Propagation Learning on GF11, a Large SIMD Parallel Computer. 1989. Tech report CMU-CS-89-208, Carnegie Mellon University.
[ZMMW90] X. Zhang, M. Mckenna, J Misirov, D Waltz. An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2 (1990)
in Adv. in Neural Information Processing Systems 2. Ed. D. Touretzky. Morgan
Kaufmann, San Mateo, CA.
3,124 | 3,830 | Free energy score-space
Alessandro Perina1,3 , Marco Cristani1,2 , Umberto Castellani1
Vittorio Murino1,2 and Nebojsa Jojic3
{alessandro.perina, marco.cristani, umberto.castellani, vittorio.murino}@univr.it
[email protected]
1
Department of Computer Science, University of Verona, Italy
2
IIT, Italian Institute of Technology, Genova, Italy
3
Microsoft Research, Redmond, WA
Abstract
A score function induced by a generative model of the data can provide a feature vector of a fixed dimension for each data sample. Data samples themselves
may be of differing lengths (e.g., speech segments, or other sequence data), but
as a score function is based on the properties of the data generation process, it
produces a fixed-length vector in a highly informative space, typically referred to
as a ?score space?. Discriminative classifiers have been shown to achieve higher
performance in appropriately chosen score spaces than is achievable by either the
corresponding generative likelihood-based classifiers, or the discriminative classifiers using standard feature extractors. In this paper, we present a novel score
space that exploits the free energy associated with a generative model. The resulting free energy score space (FESS) takes into account latent structure of the data
at various levels, and can be trivially shown to lead to classification performance
that at least matches the performance of the free energy classifier based on the
same generative model, and the same factorization of the posterior. We also show
that in several typical vision and computational biology applications the classifiers
optimized in FESS outperform the corresponding pure generative approaches, as
well as a number of previous approaches to combining discriminating and generative models.
1 Introduction
The complementary nature of discriminative and generative approaches to machine learning [20] has
motivated lots of research on the ways in which these can be combined [5, 12, 15, 18, 9, 24, 27]. One
recipe for such integration uses "generative score-spaces." Using the notation of [24], such spaces can be built from data by considering, for each observed sequence $x = (x_1, \ldots, x_k, \ldots, x_K)$ of observations $x_k \in \mathbb{R}^d$, $k = 1, \ldots, K$, a family of generative models $\mathcal{P} = \{P(x|\theta_i)\}$ parameterized by $\theta_i$.
The observed sequence $x$ is mapped to the fixed-length score vector $\varphi_f^{\hat{F}}(x)$,
$$\varphi_f^{\hat{F}}(x) = \hat{F}\, f\big(\{P_i(x|\theta_i)\}\big) \qquad (1)$$
where $f$ is a function of the set of probability densities under the different models, and $\hat{F}$ is some operator applied to it. For instance, in the case of the Fisher score [9], $f$ is the log likelihood, and the operator $\hat{F}$ produces the first order derivatives with respect to parameters, whereas in [24] other derivatives are also included. Another example is the TOP kernel [27] for which the function $f$ is the posterior log-odds and $\hat{F}$ is the gradient operator.
In these cases, the generative score-space approaches help to distill the relationship between a model parameter $\theta_i$ and the particular data sample. After the mapping, a score-space metric must be defined in order to employ discriminative approaches.
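As a toy illustration of such a mapping, the Fisher score of a one-dimensional Gaussian mixture with respect to its component means turns a sequence of any length into a fixed-length vector. The model choice and names here are ours, for illustration only.

import numpy as np

def fisher_score_gmm(x, means, var=1.0):
    # phi(x) = d/d mu_i log P(x | theta), one entry per component mean
    x = np.asarray(x, dtype=float)
    logp = -(x[:, None] - means[None, :]) ** 2 / (2 * var)   # [K samples, n components]
    post = np.exp(logp - logp.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)                  # responsibilities (uniform prior)
    return (post * (x[:, None] - means[None, :]) / var).sum(axis=0)

print(fisher_score_gmm([0.1, 1.9, 2.2], means=np.array([0.0, 2.0])))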
A number of nice properties for these mappings, and especially for Fisher score, can be derived
under the assumption that the test data indeed follows the generative model used for the score
computation. However, the generative score spaces build upon the choice of one (or few) out
of many possible generative models, as well as the parameters fit to a limited amount of data.
In practice, these models can therefore suffer from improper parametrization of the probability
density function, local minima, over-fitting and under-training problems. Consider, for instance,
the situation where the assumed model over high dimensional data is a mixture of n diagonal
Gaussians with a given small and fixed variance, and a uniform prior over the components. The
only free parameters are therefore the Gaussian centers, and let us assume that training data is best
captured with these centers all lying on (or close to) a hypersphere with a radius sufficiently larger
than the Gaussians? deviation. An especially surprising and inconvenient outlier in this case would
be a test data point that falls close to the center of the hypersphere, as the derivatives of its log
likelihood with respect to these parameters (Gaussian centers) evaluated at the estimate could be
very low when the number of components n in the mixture is large, because the derivatives are
scaled by the uniform posterior 1/n. But, this makes such a test point insufficiently distinguishable
from the test points that actually satisfy the model perfectly by falling directly into one of the
Gaussian centers. If the model parameters are extended to include the prior distribution over mixture
components, then derivatives with respect to these parameters would help disambiguate these points.
In this paper, we propose a novel score space which focuses on how well the data point fits different
parts of the generative model, rather than on derivatives with respect to the model parameters. We
start with the variational free energy as a lower bound on the negative log-likelihood of the data, as
this affords us with two advantages. First of all, the variational free energy can be computed for an
arbitrary structure of the posterior distribution, allowing us to deal with generative models with many
latent variables and complex structure without compromising tractability, as was previously done
for inference in generative models. Second, a variational approximation of the posterior typically
provides an additive decomposition of the free energy, providing many terms that can be used as
features. These terms/features are divided into two categories: the ?entropy set? of terms that express
uncertainty in the posterior distribution, and the ?cross-entropy set? describing the quality of the fit
of the data to different parts of the model according to the posterior distribution.
We find the resulting score space to be highly informative for discriminative learning. In particular, we tested our approach on three computational biology problems (promoter recognition, exons/introns classification, and homology detection), as well as vision problems (scene/object recognition). The results compare favorably with the state-of-the-art from recent literature.
The rest of the paper is organized as follows. The next section describes the proposed framework in
more detail. In Sec. 3, we show that the proposed generative score space leads to better classification
performances than the related generative counterpart. Some simple extensions are described in Sec.
4, and used in the experiments in Sec. 5.
2 FESS: Free Energy Score Space
A generative model defines the distribution $P(h, x|\theta) = \prod_{t=1}^{T} P(h^{(t)}, x^{(t)}|\theta)$ over a set of observations $x = \{x^{(t)}\}_{t=1}^{T}$, each with associated hidden variables $h^{(t)}$, for a given set of model parameters $\theta$ shared across all observations. In addition, to model the posterior distribution $P(h|x)$, we also define a family of distributions $\mathcal{Q}$ from which we need to select a variational distribution $Q(h)$ that best fits the model and the data. Assuming i.i.d. data, the family $\mathcal{Q}$ can be simplified to include only distributions of the form $Q(h) = \prod_{t=1}^{T} q(h^{(t)})$. The free energy [19, 11] is a function of the data, parameters of the posterior $Q(h)$, and the parameters of the model $P$, defined as
$$F_Q = KL\big(Q,\, P(h|x, \theta)\big) - \log P(x|\theta) = \sum_h Q(h) \log \frac{Q(h)}{P(h, x|\theta)} \qquad (2)$$
The free energy bounds the log likelihood, $F_Q \geq -\log P(x)$, and the equality is attained only if $Q$ is expressive enough to capture the true posterior distribution, as the free energy is minimized when $Q(h) = P(h|x)$. Constraining $Q$ to belong to a simplified family of distributions $\mathcal{Q}$, however, provides computational advantages for dealing with intractable models $P$. Examples of distribution families used for approximation are the fully-factorized mean field form [13], or the structured variational approximation [7], where some dependencies among the hidden variables are kept.
Minimization of $F_Q$ as a proxy for negative log likelihood is usually achieved by alternating optimization with respect to $Q$ and $\theta$, a special case of which (when $Q$ is fully expressive) is the EM
computational complexity. For some models, accurate inference of some of the latent variables may
require excessive computation even though the results of the inference can be correctly reinterpreted
by studying the posterior Q from a simpler family and observing the symmetries of the model, or
by reparametrizing the model (see for example [1]). In what follows, we will develop a technique
that uses the parts of the free energy to infer the mapping of the data to a class variable with an
increased accuracy despite possible imperfections of the data fit, whether this imperfection is due to
the approximations and errors in the model or the posterior.
Having obtained an estimate of the parameters $\hat\theta$ that fits the given i.i.d. data, we can rearrange the free energy (Eq. 2) as
$$F_Q = \sum_t F_Q^t, \quad \text{with} \quad F_Q^t = \sum_{h^{(t)}} q(h^{(t)}|\hat\theta) \log q(h^{(t)}|\hat\theta) \;-\; \sum_{h^{(t)}} q(h^{(t)}|\hat\theta) \log P(h^{(t)}, x^{(t)}|\hat\theta) \qquad (3)$$
The second term in the equation above is the cross-entropy term; it quantifies how well the data
point fits the model, assuming that the hidden variables follow the estimated posterior distribution. This
posterior distribution is fit to minimize the free energy; the first term in (3) is the entropy and quantifies
the uncertainty in this fit.
If Q and P factorize, then each of these two terms further breaks into a sum of individual terms,
each quantifying the aspects of the fit of the data point with respect to different parts of the model.
For example, if the generative model is described by a Bayesian network, the joint distribution can
be written as $P(v^{(t)}) = \prod_n P(v_n^{(t)} | \mathrm{PA}_n)$, where $v^{(t)} = \{x^{(t)}, h^{(t)}\}$ denotes the set of all variables
(hidden or visible) and $\mathrm{PA}_n$ are the parents of the $n$-th of these variables, $v_n^{(t)}$.
The cross-entropy term in the equation above further decomposes into
$$-\sum_{[v_1]} q(v_1^{(t)} \cup \mathrm{PA}_1 | \hat\theta) \log P(v_1^{(t)} | \mathrm{PA}_1, \hat\theta) \;-\; \cdots \;-\; \sum_{[v_N]} q(v_N^{(t)} \cup \mathrm{PA}_N | \hat\theta) \log P(v_N^{(t)} | \mathrm{PA}_N, \hat\theta) \qquad (4)$$
For each discrete hidden variable $v_n^{(t)}$, the appropriate terms above can be further broken down into
individual terms in the summation over the $D_n$ possible configurations of the variable, e.g.,
$$-q(v_n^{(t)}{=}1 \cup \mathrm{PA}_n|\hat\theta) \log P(v_n^{(t)}{=}1|\mathrm{PA}_n,\hat\theta) \;-\; \cdots \;-\; q(v_n^{(t)}{=}D_n \cup \mathrm{PA}_n|\hat\theta) \log P(v_n^{(t)}{=}D_n|\mathrm{PA}_n,\hat\theta) \qquad (5)$$
In a similar fashion, the entropy term can also be decomposed further into a sum of terms as dictated
by the factorization of the family $\mathcal{Q}$. Therefore, the free energy for a single sample $t$ can be expressed
as the sum
$$F_Q^t = \sum_i f_{i,t}^{\hat\theta} \qquad (6)$$
where all the free energy pieces $f_{i,t}^{\hat\theta}$ derive from the finest decomposition (5) or (4).
The terms $f_{i,t}^{\hat\theta}$ describe how the data point fits possible configurations of the hidden variables in
different parts of the model. Such information can be encapsulated in a score space that we call free
energy score space, or simply FESS.
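To make the construction concrete, the following sketch (our illustration, not code from the paper; the function name and the toy mixture-of-Gaussians model are hypothetical) extracts the entropy-set and cross-entropy-set terms of (3) for a one-dimensional mixture model, where the hidden variable is the mixture component and the exact posterior is tractable:

```python
import numpy as np
from scipy.stats import norm

def fess_features(x, weights, means, stds):
    """Per-sample FESS features for a 1-D mixture of Gaussians.

    The hidden variable h is the mixture component. For one sample x we
    compute the exact posterior q(h) = P(h | x, theta) and return the
    individual free-energy terms of Eq. (6):
      entropy set:        q(h) * log q(h)            (one term per component)
      cross-entropy set: -q(h) * log P(h, x | theta) (one term per component)
    Their sum over components is the per-sample free energy F_Q^t.
    """
    # log joint log P(h=k, x | theta) for every component k
    log_joint = np.log(weights) + norm.logpdf(x, means, stds)
    # exact posterior via normalization (log-sum-exp for stability)
    log_post = log_joint - np.logaddexp.reduce(log_joint)
    q = np.exp(log_post)
    entropy_terms = q * log_post          # summing these gives -H[q]
    cross_terms = -q * log_joint          # summing these gives the cross-entropy
    return np.concatenate([entropy_terms, cross_terms])

# toy usage: features under a 2-component model
phi = fess_features(0.3, weights=np.array([0.5, 0.5]),
                    means=np.array([-1.0, 1.0]), stds=np.array([1.0, 1.0]))
print(phi, "free energy F_Q^t =", phi.sum())
```

Because the posterior here is exact, the feature sum equals the negative log-likelihood of the sample, consistent with the discussion above.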
For example, in the case of a binary classification problem, given the generative models for the two
classes, we can define $F_{(\mathcal{Q},\hat\theta)}(x^{(t)})$ as the mapping of $x^{(t)}$ to a vector of scores $f$ with respect to a
particular model with its estimated parameters and a particular choice of the posterior family $\mathcal{Q}$ for
each of the classes, and then concatenate the scores. Therefore, using the notation from [24], the free
energy score operator $\phi_{\hat F}^{FESS}(x^{(t)})$ is defined as
$$\phi_{\hat F}^{FESS}: x^{(t)} \rightarrow \left[ F_{(\mathcal{Q}_1,\hat\theta_1)}(x^{(t)});\; F_{(\mathcal{Q}_2,\hat\theta_2)}(x^{(t)}) \right] \quad \text{where} \quad F_{(\mathcal{Q}_c,\hat\theta_c)} = [\ldots, f_{i,t}^{\hat\theta_c}, \ldots]^T,\; c = 1,2 \qquad (7)$$
If the posterior families are fully expressive, then the MAP estimate based on the generative models
for the two classes can be obtained from this mapping by simply summing the appropriate terms to
obtain the log likelihood difference, as the free energy equals the negative log likelihood.
However, the mapping also allows for the parts of the model fit to play uneven roles in classification
after an additional step of discriminative training. In this case the data points do not have to fit either
model well in order to be correctly classified. Furthermore, even in the extreme case where one
model provides a higher likelihood than the other for the data from both classes (e.g., because the
models are not nested, and likelihoods cannot be directly compared), the mapping may still provide
an abstraction from which another step of discriminative training can benefit. The additional step
of training a discriminative model allows for mining the similarities among the data points in terms
of the path through different hidden variables that has to be followed in their generation. These
similarities may be informative even if the generative process is imperfect.
Obviously, (7) can be generalized to include multiple models (or the use of a single model) and/or
multiple posterior approximations, either for two-class or multi-class classification problems.
3 Free energy score space classification dominates the MAP classification
We use here the terminology introduced in [27], under which FESS would be considered a model-dependent feature extractor, as different generative models lead to different feature vectors [25].
The family of feature extractors $\phi_{\hat F}: \mathcal{X} \rightarrow \mathbb{R}^d$ maps the input data $x \in \mathcal{X}$ into a space of fixed
dimension derived from a plug-in estimate $\hat\theta$, in our case the generative model with parameters $\hat\theta$
from which the features are extracted.
Given some observations $x$ and the corresponding class labels $y \in \{-1, +1\}$ following the joint
probability $P(x, y|\theta^*)$, a generative model can be trained to provide an estimate $\hat\theta \neq \theta^*$, where $\theta^*$
are the true parameters. As most kernels (e.g. Fisher and TOP) are commonly used in combination
with linear classifiers such as linear SVMs, [27] proposes as a starting point for evaluating the
performance of a feature extractor the classification error of a linear classifier $w^T \cdot \phi_{\hat F}(x) + b$ in
the feature space $\mathbb{R}^d$, where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$. Assuming that $w$ and $b$ are chosen by an optimal
learning algorithm on a sufficiently large training dataset, and that the test set follows the same
distribution with parameter $\theta^*$, the classification error $R(\phi_{\hat F})$ can be shown to tend to
$$R(\phi_{\hat F}) = \min_{w,b}\; E_{x,y}\, \Theta\!\left[-y\left(w^T \cdot \phi_{\hat F}(x) + b\right)\right] \qquad (8)$$
where $\Theta[a]$ is an indicator function which is 1 when $a > 0$ and 0 otherwise, and $E_{x,y}$ denotes the
expectation with respect to the true distribution $P(x, y|\theta^*)$.
The Fisher kernel (FK) classifier can perform at least as well as its plug-in estimate if the parameters
of a linear classifier are properly determined [9, 27],
$$R(\phi_{\hat F}^{FK}) \le E_{x,y}\, \Theta\!\left[-y\left(P(y = +1|x, \hat\theta) - \tfrac{1}{2}\right)\right] = R(\hat\theta) \qquad (9)$$
where $\hat\theta$ represents the generative model used as plug-in estimate.
This property also trivially holds for our method, where $\phi_{\hat F}(x^{(t)}) = \phi_{\hat F}^{FESS}(x^{(t)})$, because the free
energy can be expressed as a linear combination of the elements of $\phi$.
In fact, the minimum free energy test (and the maximum likelihood rule when $\mathcal{Q}$ is fully expressive)
can be defined on $\phi$ derived from the generative models with parameters $\hat\theta_{+1}$ for one class and $\hat\theta_{-1}$
for another as
$$\hat y = \arg\min_{y \in \{+1,-1\}} F^t_{(\mathcal{Q},\hat\theta_{y})} = \mathrm{sign}\!\left[ \mathbf{1}^T F_{(\mathcal{Q},\hat\theta_{-1})}(x^{(t)}) - \mathbf{1}^T F_{(\mathcal{Q},\hat\theta_{+1})}(x^{(t)}) \right] \qquad (10)$$
The extension to multiclass classification is straightforward. When the family $\mathcal{Q}$ is expressive
enough to capture the true posterior distribution, the free energy reduces to the negative log likelihood,
and the free energy test reduces to ML classification. In other cases, likelihood computation is
intractable, and the free energy test is used instead of the likelihood ratio test. It is straightforward
to prove that a kernel classifier that works in FESS is asymptotically at least as good as the MAP
labelling based on the generative models for the two classes since generative classification is a
special case of our framework.
Lemma 3.1 For $\phi_{\hat F}^{FESS}(x^{(t)})$ derived as above, with its first $M_1$ elements being the components of
the free energy for one model and the remaining $M_2$ for the second, a linear classifier employing
$\phi_{\hat F}^{FESS}$ will, asymptotically (with enough data), provide a classification error which is at least as low
as the error $R_{\mathcal{Q}}(\hat\theta)$ achieved using the free energy test above:
$$R(\phi_{\hat F}^{FESS}) \le E_{x,y}\, \Theta\!\left[-y\left(P(y = +1|x, \hat\theta) - \tfrac{1}{2}\right)\right] = R_{\mathcal{Q}}(\hat\theta)$$
Proof
$$R(\phi_{\hat F}^{FESS}) = \min_{w,b}\; E_{x,y}\, \Theta\!\left[-y\left(w^T \cdot \phi_{\hat F}^{FESS}(x) + b\right)\right] \le E_{x,y}\, \Theta\!\left[-y\left(w_g^T \cdot \phi_{\hat F}^{FESS}(x) + b_g\right)\right] = R_{\mathcal{Q}}(\hat\theta)$$
$$\text{for} \quad w_g = [\underbrace{+1, \cdots, +1}_{M_1 \text{ times}},\; \underbrace{-1, \cdots, -1}_{M_2 \text{ times}}]^T, \quad b_g = 0 \qquad (11)$$
Furthermore, when the family $\mathcal{Q}$ is expressive enough to capture the true posterior distribution, the
free energy test is equivalent to maximum likelihood (ML) classification, $R_{\mathcal{Q}}(\hat\theta) = R(\hat\theta)$. The
dominance of the Fisher and TOP kernels [9, 27] over their plug-in holds for FESS too, and the same
plug-in (the likelihood under a generative model) may be used when this is tractable. However, if
the computation of the likelihood (and of the kernels derived from it) is intractable, then the free energy
test, as well as the kernel methods based on FESS that outperform this test, can still be used
with an appropriate family of variational distributions $\mathcal{Q}$.
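The construction in the proof can be illustrated numerically. In the following sketch (a toy two-class setup of our own, with hypothetical class models, and assuming the scores of the first class occupy the first half of the vector), the fixed weight vector $w_g$ reproduces the free-energy test of (10) on the concatenated score vector of (7), so a learned linear classifier can only match or improve on it:

```python
import numpy as np
from scipy.stats import norm

def class_scores(x, w, mu, sd):
    """FESS vector for one class model (1-D Gaussian mixture, exact posterior)."""
    lj = np.log(w) + norm.logpdf(x, mu, sd)
    lp = lj - np.logaddexp.reduce(lj)
    q = np.exp(lp)
    return np.concatenate([q * lp, -q * lj])  # entropy and cross-entropy terms

# hypothetical class-conditional models
w1, mu1, sd1 = np.array([.5, .5]), np.array([-2., 0.]), np.array([1., 1.])
w2, mu2, sd2 = np.array([.5, .5]), np.array([0., 2.]), np.array([1., 1.])

x = 0.7
phi = np.concatenate([class_scores(x, w1, mu1, sd1),
                      class_scores(x, w2, mu2, sd2)])  # Eq. (7)
M = len(phi) // 2
w_g = np.concatenate([np.ones(M), -np.ones(M)])        # weights from the proof
# w_g . phi equals F1 - F2, the difference of the two free energies
decision = np.sign(w_g @ phi)  # +1 picks class 2 (lower free energy), -1 class 1
print(decision)
```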
4 Controlling the length of the feature vector
In some generative models, especially sequence models, the number of hidden variables may change
from one data point to the next. In speech processing, for instance, hidden Markov models (HMMs)
[23] may have to model utterances $x_1^{(t)}, \ldots, x_{K(t)}^{(t)}$ of different sequence lengths $K(t)$. As each
element in the sequence has an associated hidden variable, the hidden state sequences $s_1^{(t)}, \ldots, s_{K(t)}^{(t)}$
are also of variable length. The parameters $\theta$ of this model include the prior state distribution $\pi$,
the state transition probability matrix $A = \{a_{ij}\}$, and the emission probabilities $B = \{b_{iv}\}$. Exact
inference is tractable in HMMs, so we can use the exact posterior (EX) distribution to formulate
the free energy; free energy minimization is then equivalent to the usual Baum-Welch training
algorithm [17] and $F_{EX} = -\log P(x)$. The free energy of each sample $x^{(t)}$ is
$$F_{EX}^t = \sum_{[s]} q(s_1^{(t)}) \log q(s_1^{(t)}) + \sum_{k=1}^{K(t)-1} \sum_{[s]} q(s_k^{(t)}, s_{k+1}^{(t)}) \log q(s_k^{(t)}, s_{k+1}^{(t)}) - \sum_{[s]} q(s_1^{(t)}) \log \pi_{s_1^{(t)}} - \sum_{k=1}^{K(t)-1} \sum_{[s]} q(s_k^{(t)}, s_{k+1}^{(t)}) \log a_{\{s_k^{(t)}, s_{k+1}^{(t)}\}} - \sum_{k=1}^{K(t)} \sum_{[s]} q(s_k^{(t)}) \log b_{\{s_k^{(t)}, x_k^{(t)}\}} \qquad (12)$$
Depending on how this is broken into terms $f_i$, we could get feature vectors whose dimension depends on the length $K(t)$ of the sample. To solve this problem, we first note that a standard approach
to dealing with utterances of different lengths is to normalize the likelihood by the sequence length,
and this approach is also used for defining other score spaces. If, before the application of the score
operator, we simply evaluate the sums over $k$ in the free energy and divide each by $K(t)$, we obtain
a fixed number of terms independent of the sequence length. This results in a length-normalized
score space, nFESS, in which the granularity of the decomposition of the free energy is dramatically
reduced.
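A minimal sketch of the nFESS computation for one HMM sample follows (our illustration; the function name and interface are hypothetical). It assumes the exact posterior marginals, often called gamma and xi, have already been obtained from any forward-backward implementation:

```python
import numpy as np

def nfess_hmm(gamma, xi, log_pi, log_A, log_B, obs):
    """Length-normalized free-energy features (nFESS) for one HMM sample, Eq. (12).

    gamma : (K, S) posterior marginals q(s_k)          (from forward-backward)
    xi    : (K-1, S, S) pairwise posteriors q(s_k, s_{k+1})
    log_pi: (S,) log initial state distribution
    log_A : (S, S) log transition matrix
    log_B : (S, V) log emission matrix
    obs   : (K,) observed symbol indices
    Each of the five sums over k in Eq. (12) is evaluated and divided by the
    sequence length K, giving a fixed-length feature vector.
    """
    K = len(obs)
    eps = 1e-12                                 # guard log(0) in entropy terms
    ent_start = np.sum(gamma[0] * np.log(gamma[0] + eps))
    ent_pairs = np.sum(xi * np.log(xi + eps))
    ce_start = -np.sum(gamma[0] * log_pi)
    ce_trans = -np.sum(xi * log_A[None, :, :])
    ce_emit = -np.sum(gamma * log_B[:, obs].T)
    return np.array([ent_start, ent_pairs, ce_start, ce_trans, ce_emit]) / K

# toy usage with a 2-state HMM and a length-3 sequence of binary symbols
S, V, K = 2, 2, 3
gamma = np.full((K, S), 0.5)
xi = np.full((K - 1, S, S), 0.25)
log_pi = np.log(np.array([0.5, 0.5]))
log_A = np.log(np.full((S, S), 0.5))
log_B = np.log(np.full((S, V), 0.5))
print(nfess_hmm(gamma, xi, log_pi, log_A, log_B, np.array([0, 1, 0])))
```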
Figure 1: A) SVM error rates for nFESS and probability product kernels [10] using Markov models
(we reported only their best result) and hidden Markov models as plug-ins. T represents the parameters used in the kernel of [10], and K is the order of the Markov chain. The results are arranged
along the x axis by the regularization constant used in SVM training. B) Comparison with results
obtained using FK and TK score spaces. C) Comparison of the five homology detection methods
in Experiment 3. Y axis represents the total number of families for which a given method exceeds a
median RFP score on the X axis.
In general, even for fixed-length data points and arbitrary generative models, we do not need to create large feature vectors corresponding to the finest level of granularity described in (5), or for that
matter the slightly coarser level of granularity in (4). Some of the terms in these equations can be
grouped and summed up to yield shorter feature vectors, if this is warranted by the application.
The longer the feature vector, the finer is the level of detail with which the generative process for the
data sample is represented, but more data is needed for the training of the discriminative classifier.
Domain knowledge can often be used to reduce the complexity of the representation by summing
appropriate terms without sacrificing the amount of useful information packed in the feature vectors.
Such control of the feature vector length does not negate the previously discussed advantages of the
classification in the free energy score space compared with the straightforward application of free
energy, likelihood, or in case of sequence models, length-normalized likelihood tests.
5 Experiments
We evaluated our approach on four standard datasets and compared its performance with the classification results provided by the datasets' creators, with those estimated using the plug-in estimate $\hat\theta$,
and with those obtained using the Fisher (FK) and TOP (TK) kernels [9, 27] derived from the plug-ins.
Support vector machines (SVMs) with RBF kernel were used as discriminative classifiers in all the
score spaces, as this technique was previously identified as most potent for dealing with variable-length sequences [25]. As plug-ins, or generative models/likelihoods $\theta$, for the three score spaces
we compare across experiments, we used hidden Markov models (HMMs)[23] in Experiments 1-3
and latent Dirichlet allocation (LDA)[4] in Experiment 4. For each experiment, comparisons are
based on the same validation procedure used in the appropriate original papers that introduced the
datasets. For both FK and FESS, in each experiment we trained a single generative model (HMM
or LDA, depending on the experiment). For all HMM models, the length-normalization with associated summation over the sequence as described in the previous section was used in the construction
of the free energy score space. The model complexity, e.g., the number of states for the HMM were
chosen by cross-validation on the training set.
Experiment 1: E. coli promoter gene sequences. The first analyzed dataset consists of the E.
coli promoter gene sequences (DNA) with associated imperfect domain theory [26]. The standard
task on this dataset is to recognize promoters in strings of nucleotides (A, G, T, or C). A promoter
is a genetic region which facilitates the transcription of gene located nearby. The input features
are 57 sequential DNA nucleotides. Results, obtained using leave-one-out (LOO) validation, are
reported in Table 1 and illustrate that FESS represents the fixed-size genetic sequences well, leading
to superior performance over the other score spaces as well as over the plug-in $\hat\theta_{HMM}$.
E. coli   | $\hat\theta_{HMM}$ | FESS   | nFESS  | FK     | TK
Accuracy  | 67.34%             | 94.33% | 85.80% | 79.20% | 85.30%
Table 1: Promoter classification results.
Experiment 2: Introns/Exons classification in the HS3D data set. The HS3D data set¹ [10] contains labelled intron and exon sequences of nucleotides. The task here is to distinguish between
the two types of gene sequences, both of which can vary in length (from dozens of nucleotides to tens of
thousands of nucleotides). For the sake of comparison, we adopted the same experimental setting
as [10]. In Fig. 1-A (top right), we report the results obtained in [10] (overall error rate, OER,
7.5%), the results obtained using the HMM model ($\hat\theta_{HMM}$, OER 27.59%), together with the results
obtained by our method (OER 6.12%). In Fig. 1-B (bottom right), we also compare our method
with the FK (OER 10.06%) and TK (OER 12.82%) kernels.
Experiment 3: Homology detection in SCOP 1.53. We tested the ability of FESS to classify
protein domains into superfamilies in the Structural Classification of Proteins (SCOP)² version 1.53.
The sequences in the database were selected from the Astral database, based on an E-value threshold of $10^{-25}$ for removing similar sequences. In the end, 4352 distinct sequences were
grouped into families and superfamilies. For each family, the protein domains within the family are
considered positive test examples, and the protein domains outside the family, but within the same
superfamily, are taken as positive training examples. The data set yields 54 families containing at
least 10 family members (positive test) and 5 superfamily members outside of the family (positive
train) for a total of 54 One-Vs-All problems. The experimental setup is similar to that used in [8],
except for one important difference: in the current experiments, the positive training sets do not
include additional protein sequences extracted from a large, unlabelled database. Therefore, the
recognition tasks performed here are more difficult than those in [8]. In order to measure the quality
of the ranking, we used the median RFP score [8] which is the fraction of negative test sequences
that score as high as or better than the median-scoring positive sequence. We used SVM decision
values as the score. We find that FESS outperforms task-specific algorithms (PSI-BLAST [2] and SAM
[14]) as well as the Fisher score (FK, [8]) with statistical significance (p-values of 5.1e-9, 8.3e-7 and
1.1e-5, respectively). There is no statistically significant difference between our FESS results and those based on
FPS [3]. In particular, the poor performance of [8] is explained by the under-training of HMMs [6].
The FESS representation proved to be much less sensitive to the training problems. We repeated
the test using two different choices of Q: the approximate mean field factorization and the exact
posterior (FESS-MF and FESS-EX, respectively, in Fig.1-C). Interestingly, the performance was
also robust with respect to these choices.
Experiment 4: Scene/object recognition. Our final set of experiments used the data from the
Graz dataset³, as well as the dataset proposed in [21]. In both tests, we used latent Dirichlet allocation (LDA) [4] as the generative model. The free energy for LDA is derived in [4]. To serve as
words in the model, we extracted SIFT features from 16x16 pixel windows computed over a grid
with spacing of 8 pixels. These features were mapped to 175 codewords (W = 175). We varied the
number of topics to explore the effectiveness of different techniques.
The Graz dataset has two object classes, bikes (373 images) and persons (460 images), in addition to a
background class (270 images)⁴. The range of scales and poses at which exemplars are presented
is highly diverse, e.g., a ?person? image may show a pedestrian at a certain distance, a side view
of a complete body, or just a closeup of a head. We performed two-class detection (object vs.
background) using an experimental setup consistent with [16, 22]. We generated ROC curves by
thresholding raw SVM output, and report here the ROC equal error rate averaged over ten runs. The
results are shown in Table 2. The standard deviation of the classification rate is quite high as the
images in the database have very different complexities, and the performance for any single run is
¹ www.sci.unisannio.it/docenti/rampone
² http://scop.mrc-lmb.cam.ac.uk/scop/
³ http://www.emt.tugraz.at/~pinz/data/GRAZ_02/
⁴ The car class is ignored, as in [16].
Graz dataset    | FESS Z=15   | FESS Z=30   | FESS Z=45   | [16]        | [22]
Bikes           | 86.1% (1.8) | 86.5% (2.0) | 89.1% (2.3) | 86.3% (2.5) | 86.5%
People          | 83.1% (3.1) | 82.9% (2.8) | 84.4% (2.0) | 82.3% (3.1) | 80.8%

Scenes dataset  | $\hat\theta_{LDA}$ | FESS   | FK     | [21]   | [16]
Natural         | 63.93%             | 95.21% | 90.10% | 89.00% | 84.51%
Artificial      | 67.21%             | 94.38% | 90.32% | 89.00% | 89.43%
Table 2: Classification rates for object/scene recognition tasks. The deviation is shown in brackets.
Our approach tends to be robust to the choice of the number Z of topics, and so in scene recognition
experiments, we report only the result for Z=40.
highly dependent on the composition of the training set.
We also tested our approach on the scene recognition task using the datasets of [21], composed of
two (Natural and Artificial scenes) datasets, each with 4 different classes. The results are reported in
Table 2 where for the first time we employed Fisher-LDA in a vision application. Although this new
technique outperformed state of the art, once again, FESS outperforms both this result and other
state-of-the-art discriminative methods [21, 16].
6 Conclusions
In this paper, we present a novel generative score space, FESS, exploiting variational free energy
terms as features. The additive free energy terms arise naturally as a consequence of the factorization
of the model P and the posterior Q. We show that the use of these terms as features in discriminative
classification leads to more robust results than the use of the Fisher scores, which are based on the
derivatives of the log likelihood of the data with respect to the model parameters. As was previously
observed, we find that the Fisher score space suffers from the so called ?wrap-around? problem,
where very different data points may map to the same derivative, an example of which was discussed in the introduction. The free energy terms, on the other hand, quantify the data fit in different
parts of the model, and seem to be informative even when the model is imperfect. This indicates
that the re-scaling of these terms, which the subsequent discriminative training provides, leads to
improved modelling of the data in some way. Scaling a term in the free energy composition, e.g.,
the term $\sum_h q(h) \log p(x|h)$, by a constant $w$ is equivalent to raising the appropriate conditional
distribution to the power w. This is indeed reminiscent of some previous approaches to correcting
generative modelling problems. In speech applications, for example, it is a standard practice to raise
the observation likelihood in HMMs to a power less than 1, before inference is performed on the
test sample, as the acoustic signal would otherwise overwhelm the hidden process modelling the
language constraints [28]. This problem arises from the approximations in the acoustic model. For
instance, a high-dimensional acoustic observation is often modelled as following a diagonal Gaussian distribution, thus assuming independent noise in the elements of the signal, even though the
true acoustics of speech is far more constrained. This results in over-accounting for the variation in
the observed acoustic signal, and to correct for this in practice, the log probability of the observation
given the hidden variable is scaled down. The technique described here proposes a way to automatically infer the best scaling, but it also goes a step further in allowing for such corrections at all levels
of the model hierarchy, and even for specific configurations of hidden variables. Furthermore, the
use of kernel methods provides for nonlinear corrections, as well. This extremely simple technique
was shown here to work remarkably well, outperforming previous score space approaches as well
as the state of the art in multiple applications.
It is possible to extend the ideas here to other types of model/data energy. For example, the free
energy approximated in different ways is used in [1] to construct various inference algorithms for
a single scene parsing task. It may also be effective, for example, to use the terms in the Bethe
free energy linked to different belief propagation messages to construct the feature vectors. Finally,
although we find that FESS outperforms the previously studied score spaces that depend on the
derivatives, i.e., where $\hat F$ is a derivative with respect to $\theta$, the use of this derivative in (7) is, of
course, possible. This allows for the construction of kernels similar to FK and TK, but derived
from intractable generative models as we show in Experiment 4 (FK in Table 2) on latent Dirichlet
allocation.
Acknowledgements
We acknowledge financial support from the FET programme within the EU FP7, under the SIMBAD
project (contract 213250).
References
[1] B. Frey and N. Jojic. A comparison of algorithms for inference and learning in probabilistic graphical models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:1392–1416, 2005.
[2] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. Basic local alignment search tool. J. Mol. Biol., 215(3):403–410, October 1990.
[3] T. L. Bailey and W. N. Grundy. Classifying proteins by family using the product of correlated p-values. In Proceedings of the Third Annual International Conference on Computational Molecular Biology, pages 10–14. ACM, 1999.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
[5] G. Bouchard and B. Triggs. The tradeoff between generative and discriminative classifiers. In IASC International Symposium on Computational Statistics, pages 721–728, Prague, August 2004.
[6] K. Tsuda, B. Schölkopf, and J. Vert. Kernel Methods in Computational Biology. The MIT Press, 2004.
[7] Z. Ghahramani. On structured variational approximations. Technical Report CRG-TR-97-1, 1997.
[8] T. Jaakkola, M. Diekhaus, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies. In 7th Intell. Sys. Mol. Biol., pages 149–158, 1999.
[9] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In NIPS, 1998.
[10] T. Jebara, R. Kondor, A. Howard, K. Bennett, and N. Cesa-Bianchi. Probability product kernels. Journal of Machine Learning Research, 5:819–844, 2004.
[11] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[12] S. Kapadia. Discriminative Training of Hidden Markov Models. PhD thesis, 1998.
[13] H. Kappen and W. Wiegerinck. Mean field theory for graphical models, 2001.
[14] K. Karplus, C. Barrett, and R. Hughey. Hidden Markov models for detecting remote protein homologies. Bioinformatics, 14:846–856, 1999.
[15] J. A. Lasserre, C. M. Bishop, and T. P. Minka. Principled hybrids of generative and discriminative models. In CVPR, pages 87–94, 2006.
[16] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. CVPR, 2:2169–2178, 2006.
[17] D. MacKay. Ensemble learning for hidden Markov models, 1997. Unpublished, Department of Physics, University of Cambridge.
[18] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In Proceedings of the 21st National Conference on Artificial Intelligence, pages 433–439, 2006.
[19] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. Pages 355–368, 1999.
[20] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, NIPS. MIT Press, Cambridge, MA, 2002.
[21] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42:145–175, 2001.
[22] A. Opelt, M. Fussenegger, A. Pinz, and P. Auer. Weak hypotheses and boosting for generic object detection and recognition. In ECCV, volume 2, pages 71–84, 2004.
[23] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[24] N. Smith and M. Gales. Speech recognition using SVMs. In NIPS, pages 1197–1204. MIT Press, 2002.
[25] N. Smith and M. Gales. Using SVMs to classify variable length speech patterns. Technical Report CUED/F-INFENG/TR.412, University of Cambridge, UK, 2002.
[26] G. G. Towell, J. W. Shavlik, and M. O. Noordewier. Refinement of approximate domain theories by knowledge-based neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 861–866, 1990.
[27] K. Tsuda, M. Kawanabe, G. Rätsch, S. Sonnenburg, and K. R. Müller. A new discriminative kernel from probabilistic models. Neural Computation, 14(10):2397–2414, 2002.
[28] L. Deng and D. O'Shaughnessy. Speech Processing: A Dynamic and Optimization-Oriented Approach. Marcel Dekker Inc., June 2003.
population patterns and continuous signals
Sebastian Gerwinn
Philipp Berens
Matthias Bethge
MPI for Biological Cybernetics
and University of T?ubingen
Computational Vision and Neuroscience
Spemannstrasse 41, 72076 T?ubingen, Germany
{firstname.surname}@tuebingen.mpg.de
Abstract
Second-order maximum-entropy models have recently gained much interest for
describing the statistics of binary spike trains. Here, we extend this approach to
take continuous stimuli into account as well. By constraining the joint secondorder statistics, we obtain a joint Gaussian-Boltzmann distribution of continuous
stimuli and binary neural firing patterns, for which we also compute marginal
and conditional distributions. This model has the same computational complexity as pure binary models and fitting it to data is a convex problem. We show
that the model can be seen as an extension to the classical spike-triggered average/covariance analysis and can be used as a non-linear method for extracting
features which a neural population is sensitive to. Further, by calculating the posterior distribution of stimuli given an observed neural response, the model can be
used to decode stimuli and yields a natural spike-train metric. Therefore, extending the framework of maximum-entropy models to continuous variables allows us
to gain novel insights into the relationship between the firing patterns of neural
ensembles and the stimuli they are processing.
1 Introduction
Recent technical advances in systems neuroscience allow us to monitor the activity of increasingly
large neural ensembles simultaneously (e.g. [5, 21]). To understand how such ensembles process
sensory information and perform the complex computations underlying successful behavior requires
not only collecting massive amounts of data, but also the use of suitable statistical models for data
analysis. What degree of precision should be incorporated into such a model involves a trade-off between the question of interest and mathematical tractability: complex multi-compartmental
models [8] allow inference concerning the underlying biophysical processes, but their applicability
to neural populations is limited. The generalized linear model [15] on the other hand is tractable
even for large ensembles and provides a phenomenological description of the data.
Recently, several groups have used binary maximum entropy models incorporating pairwise correlations to model neural activity in large populations of neurons on short time scales [19, 22, 7, 25].
These models have two important features: (1) Since they only require measuring the mean activity
of individual neurons and correlations in pairs of neurons, they can be estimated from moderate
amounts of data. (2) They seem to capture the essential structure of neural population activity at
these timescales even in networks of up to a hundred neurons [21]. Although the generality of these
findings have been subject to debate [3, 18], pairwise maximum-entropy and related models [12] are
an important tool for the description of neural population activity [23, 17].
To find features to which a neuron is sensitive spike-triggered average and spike-triggered covariance
are commonly used techniques [20, 16]. They correspond to fitting a Gaussian distribution to the
spike-triggered ensemble. If one has access to multi-neuron recordings, a straightforward extension
of this approach is to fit a different Gaussian distribution to each binary population pattern. In
statistics, the corresponding model is known as the location model [14, 10, 9]. To estimate this
model, one has to observe sufficient amounts of data for each population pattern. As the number of
possible binary patterns grows exponentially with the number of neurons, it is desirable to include
regularization constraints in order to make parameter estimation tractable.
Here, we extend the framework of pairwise maximum entropy modeling to a joint model for binary
and continuous variables. This allows us to analyze the functional connection structure in a neural
population at the same time as its relationship with further continuous signals of interest. In particular, this approach makes it possible to include a stimulus as a continuous variable into the framework
of maximum-entropy modeling. In this way, we can study the stimulus dependence of binary neural
population activity in a regularized framework in a rigorous way. In particular, we can use it to extract non-linear features in the stimulus that a population of neurons is sensitive to, while taking the
binary nature of spike trains into account. We discuss the relationship of the obtained features with
classical approaches such as spike-triggered average (STA) and spike-triggered covariance (STC).
In addition, we show how the model can be used to perform spike-by-spike decoding and yields a
natural spike-train metric [24, 2]. We start with a derivation of the model and a discussion of its
features.
2 Model
In this section we derive the maximum-entropy model for joint continuous and binary data with
second-order constraints and describe its basic properties. We write continuous variables x and
binary variables $b$. Having observed the joint mean $\mu$ and joint covariance $C$, we want to find
a distribution $p_{ME}$ which achieves the maximal entropy among all distributions with these observed
moments. Since we model continuous and binary variables jointly, we define entropy to be a mixed
discrete and differential entropy:
$$H[p] = -\sum_b \int p(x, b) \log p(x, b)\, dx$$
Formally, we require $p_{ME}$ to satisfy the following constraints:
$$E[x] = \mu_x, \qquad E[b] = \mu_b, \qquad E[bb^\top] = C_{bb} + \mu_b\mu_b^\top, \qquad E[xx^\top] = C_{xx} + \mu_x\mu_x^\top,$$
$$E[xb^\top] = C_{xb} + \mu_x\mu_b^\top, \qquad E[bx^\top] = C_{bx} + \mu_b\mu_x^\top = \left(C_{xb} + \mu_x\mu_b^\top\right)^\top \qquad (1)$$
where the expectations are taken over $p_{ME}$. $C_{xx}$, $C_{xb}$ and $C_{bb}$ are the blocks of the observed covariance
matrix corresponding to the respective subsets of variables. This problem can be solved analytically
using the Lagrange formalism, which leads to a maximum entropy distribution of Boltzmann type:
$$p_{ME}(x, b|\Lambda, \lambda) = \frac{1}{Z(\Lambda, \lambda)} \exp\left(Q(x, b|\Lambda, \lambda)\right)$$
$$Q(x, b|\Lambda, \lambda) = \frac{1}{2} \begin{pmatrix} x \\ b \end{pmatrix}^{\!\top} \Lambda \begin{pmatrix} x \\ b \end{pmatrix} + \lambda^\top \begin{pmatrix} x \\ b \end{pmatrix} \qquad (2)$$
$$Z(\Lambda, \lambda) = \sum_b \int \exp\left(Q(x, b|\Lambda, \lambda)\right) dx,$$
where $\Lambda$ and $\lambda$ are chosen such that the resulting distribution fulfills the constraints in equation
(1), as we discuss below. Before we compute marginal and conditional distributions in this model,
we explore its basic properties. First, we note that the joint distribution can be factorized in the
following way:
$$p_{ME}(x, b|\Lambda, \lambda) = p_{ME}(x|b, \Lambda, \lambda)\; p_{ME}(b|\Lambda, \lambda) \qquad (3)$$
The conditional density $p_{ME}(x|b, \Lambda, \lambda)$ is a Normal distribution, given by:
$$p_{ME}(x|b, \Lambda, \lambda) \propto \exp\left(\frac{1}{2} x^\top \Lambda_{xx} x + x^\top(\lambda_x + \Lambda_{xb} b)\right) \equiv \mathcal{N}\left(x \,\middle|\, \mu_{x|b}, \Sigma\right), \quad \text{with } \mu_{x|b} = \Sigma\,(\lambda_x + \Lambda_{xb} b),\;\; \Sigma = (-\Lambda_{xx})^{-1} \qquad (4)$$
Here, $\Lambda_{xx}$, $\Lambda_{xb}$, $\Lambda_{bx}$ and $\lambda_x$ are the blocks of $\Lambda$ and $\lambda$ which correspond to $x$ and $b$, respectively. While
the mean of this Normal distribution depends on $b$, the covariance matrix is independent of the
specific binary state. The marginal probability $p_{ME}(b|\Lambda, \lambda)$ is given by:
$$Z(\Lambda, \lambda)\, p_{ME}(b|\Lambda, \lambda) = \exp\left(\frac{1}{2} b^\top \Lambda_{bb} b + b^\top \lambda_b\right) \int \exp\left(\frac{1}{2} x^\top \Lambda_{xx} x + x^\top(\lambda_x + \Lambda_{xb} b)\right) dx$$
$$= (2\pi)^{\frac{n}{2}} \,|{-\Lambda_{xx}}|^{-\frac{1}{2}} \exp\Big( \frac{1}{2} b^\top \left(\Lambda_{bb} + \Lambda_{xb}^\top (-\Lambda_{xx})^{-1} \Lambda_{xb}\right) b + b^\top \left(\lambda_b + \Lambda_{xb}^\top (-\Lambda_{xx})^{-1} \lambda_x\right) + \frac{1}{2} \lambda_x^\top (-\Lambda_{xx})^{-1} \lambda_x \Big) \qquad (5)$$
To evaluate the maximum entropy distribution, we need to compute the partition function, which
follows from the previous equation by summing over $b$:
$$Z(\Lambda, \lambda) = (2\pi)^{\frac{n}{2}} \,|{-\Lambda_{xx}}|^{-\frac{1}{2}} \sum_b \exp\Big( \frac{1}{2} b^\top \left(\Lambda_{bb} + \Lambda_{xb}^\top (-\Lambda_{xx})^{-1} \Lambda_{xb}\right) b + b^\top \left(\lambda_b + \Lambda_{xb}^\top (-\Lambda_{xx})^{-1} \lambda_x\right) + \frac{1}{2} \lambda_x^\top (-\Lambda_{xx})^{-1} \lambda_x \Big) \qquad (6)$$
Next, we compute the marginal distribution with respect to $x$. From equations (5) and (4), we find
that $p_{ME}(x|\Lambda, \lambda)$ is a mixture of Gaussians, where each Gaussian of equation (4) is weighted by the
corresponding $p_{ME}(b|\Lambda, \lambda)$. While all mixture components have the same covariance, the different
weighting terms affect each component's influence on the marginal covariance of $x$. Finally, we also
compute the conditional density $p_{ME}(b|x, \Lambda, \lambda)$, which is given by:
$$p_{ME}(b|x, \Lambda, \lambda) = \frac{1}{Z'} \exp\left(\frac{1}{2} b^\top \Lambda_{bb} b + b^\top(\lambda_b + \Lambda_{bx} x)\right)$$
$$Z' = \sum_b \exp\left(\frac{1}{2} b^\top \Lambda_{bb} b + b^\top(\lambda_b + \Lambda_{bx} x)\right) \qquad (7)$$
Note that the distribution of the binary variables given the continuous variables is again of Boltzmann type.
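The formulas above can be turned into a small, explicit implementation for populations small enough to enumerate all $2^{n}$ binary patterns. The following sketch (our illustration; the class and method names are hypothetical, and $\Lambda$ must be chosen with a negative definite $\Lambda_{xx}$) evaluates equations (4), (5) and (7):

```python
import itertools
import numpy as np

class JointMaxEnt:
    """Joint Gaussian-Boltzmann model p(x, b) of Eq. (2), for small n_b.

    Lam is the full (n_x+n_b) x (n_x+n_b) symmetric matrix Lambda and
    lam the corresponding vector lambda; the x-block comes first.
    """
    def __init__(self, Lam, lam, n_x):
        self.Lxx = Lam[:n_x, :n_x]
        self.Lxb = Lam[:n_x, n_x:]
        self.Lbb = Lam[n_x:, n_x:]
        self.lx = lam[:n_x]
        self.lb = lam[n_x:]
        self.Sigma = np.linalg.inv(-self.Lxx)          # covariance of Eq. (4)
        self.patterns = np.array(list(itertools.product([0, 1],
                                 repeat=Lam.shape[0] - n_x)), dtype=float)

    def mu_given_b(self, b):
        # conditional mean of Eq. (4)
        return self.Sigma @ (self.lx + self.Lxb @ b)

    def log_pb_unnorm(self, b):
        # log of Eq. (5) up to constants, after integrating x out analytically
        A = self.Lbb + self.Lxb.T @ self.Sigma @ self.Lxb
        c = self.lb + self.Lxb.T @ self.Sigma @ self.lx
        return 0.5 * b @ A @ b + b @ c

    def pb(self):
        # marginal over all binary patterns (Eqs. (5), (6))
        logp = np.array([self.log_pb_unnorm(b) for b in self.patterns])
        logp -= np.logaddexp.reduce(logp)
        return np.exp(logp)

    def pb_given_x(self, x):
        # conditional Boltzmann distribution of Eq. (7); Lambda_bx = Lambda_xb^T
        logp = np.array([0.5 * b @ self.Lbb @ b + b @ (self.lb + self.Lxb.T @ x)
                         for b in self.patterns])
        logp -= np.logaddexp.reduce(logp)
        return np.exp(logp)

# toy instance: 1 continuous, 2 binary variables
Lam = np.array([[-1.0, 0.4, 0.4],
                [ 0.4, -0.5, 0.2],
                [ 0.4, 0.2, -0.5]])
model = JointMaxEnt(Lam, np.zeros(3), n_x=1)
print(model.pb(), model.mu_given_b(np.array([1.0, 0.0])))
```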
Parameter fitting. To find suitable parameters for given data, we employ a maximum likelihood
approach [1, 11], where we find the optimal parameters via gradient ascent on the log-likelihood:
$$l(\Lambda, \lambda) = \log p\left(\{x^{(n)}, b^{(n)}\}_{n=1}^N \,\middle|\, \Lambda, \lambda\right) = \sum_n Q(x^{(n)}, b^{(n)}|\Lambda, \lambda) - N \log Z(\Lambda, \lambda) \qquad (8)$$
$$\partial_\Lambda\, l = \frac{N}{2}\left( \left\langle \begin{pmatrix} x \\ b \end{pmatrix} \begin{pmatrix} x \\ b \end{pmatrix}^{\!\top} \right\rangle_{\text{data}} - \left\langle \begin{pmatrix} x \\ b \end{pmatrix} \begin{pmatrix} x \\ b \end{pmatrix}^{\!\top} \right\rangle_{p_{ME}} \right), \qquad \partial_\lambda\, l = N\left( \left\langle \begin{pmatrix} x \\ b \end{pmatrix} \right\rangle_{\text{data}} - \left\langle \begin{pmatrix} x \\ b \end{pmatrix} \right\rangle_{p_{ME}} \right)$$
To calculate the moments over the model distribution $p_{ME}$ we make use of the above factorization:
$$\left\langle xx^\top \right\rangle = \left\langle \left\langle xx^\top \middle| b \right\rangle \right\rangle_b = (-\Lambda_{xx})^{-1} + \left\langle \mu_{x|b}\, \mu_{x|b}^\top \right\rangle_b$$
$$\left\langle x \right\rangle = \left\langle \mu_{x|b} \right\rangle_b, \qquad \left\langle xb^\top \right\rangle = \left\langle \mu_{x|b}\, b^\top \right\rangle_b = \left\langle bx^\top \right\rangle^\top \qquad (9)$$
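A sketch of the resulting fitting loop follows (our simplification: a fixed step size, no line search, and no check that $\Lambda_{xx}$ stays negative definite; the function names are hypothetical). It matches the data moments by enumerating the binary patterns as in equation (9):

```python
import itertools
import numpy as np

def model_moments(Lam, lam, n_x):
    """Model expectations <z>, <zz^T> for z = (x; b) via Eq. (9),
    enumerating all binary patterns (feasible for small populations)."""
    n_b = Lam.shape[0] - n_x
    Lxx, Lxb, Lbb = Lam[:n_x, :n_x], Lam[:n_x, n_x:], Lam[n_x:, n_x:]
    lx, lb = lam[:n_x], lam[n_x:]
    Sigma = np.linalg.inv(-Lxx)
    A = Lbb + Lxb.T @ Sigma @ Lxb              # effective binary parameters, Eq. (5)
    c = lb + Lxb.T @ Sigma @ lx
    B = np.array(list(itertools.product([0, 1], repeat=n_b)), dtype=float)
    logp = 0.5 * np.einsum('ki,ij,kj->k', B, A, B) + B @ c
    p = np.exp(logp - np.logaddexp.reduce(logp))
    mu_xb = (Sigma @ (lx[:, None] + Lxb @ B.T)).T      # mu_{x|b} per pattern
    Ez = np.concatenate([p @ mu_xb, p @ B])
    Exx = Sigma + np.einsum('k,ki,kj->ij', p, mu_xb, mu_xb)
    Exb = np.einsum('k,ki,kj->ij', p, mu_xb, B)
    Ebb = np.einsum('k,ki,kj->ij', p, B, B)
    Ezz = np.block([[Exx, Exb], [Exb.T, Ebb]])
    return Ez, Ezz

def fit(data_Ez, data_Ezz, n_x, lr=0.1, iters=2000):
    """Gradient ascent on the log-likelihood, Eq. (8)."""
    d = len(data_Ez)
    Lam = -np.eye(d)            # start with a negative definite Lambda_xx
    lam = np.zeros(d)
    for _ in range(iters):
        Ez, Ezz = model_moments(Lam, lam, n_x)
        Lam += lr * 0.5 * (data_Ezz - Ezz)    # gradient w.r.t. Lambda
        lam += lr * (data_Ez - Ez)            # gradient w.r.t. lambda
    return Lam, lam
```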
Figure 1: Illustration of different parameter settings. A: independent binary and continuous variables; B: correlations (0.4) between variables; C: changing the mean of the binary variable (here:
0.7) corresponds to changing the weightings of the Gaussians, correlations are 0.4. Blue lines indicate
p(x|b = 1) and green ones p(x|b = 0).
Hence, the only average we actually need to evaluate numerically is the one over the binary variables.
Unfortunately, we cannot directly set the parameters for the continuous part, as they depend on the
ones for the binary part. However, since the above equations can be evaluated analytically, the
difficult part is finding the parameters for the binary variables. In particular, if the number of binary
variables is large, calculating the partition function can become infeasible. To some extent, this can
be remedied by the use of specialized Monte-Carlo algorithms [4].
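One such remedy is a Gibbs sampler on the effective Boltzmann distribution over $b$ obtained in equation (5). The following is a minimal sketch of our own, not the specific algorithm of [4]:

```python
import numpy as np

def gibbs_binary_moments(A, c, n_samples=5000, burn_in=500, rng=None):
    """Estimate E[b] and E[bb^T] under p(b) ~ exp(0.5 b'Ab + c'b) by Gibbs
    sampling; A (symmetric) and c are the effective parameters of the
    binary marginal in Eq. (5)."""
    rng = np.random.default_rng(rng)
    n = len(c)
    b = rng.integers(0, 2, size=n).astype(float)
    mean, second = np.zeros(n), np.zeros((n, n))
    for it in range(n_samples + burn_in):
        for i in range(n):
            # energy difference between b_i = 1 and b_i = 0
            delta = 0.5 * A[i, i] + A[i].dot(b) - A[i, i] * b[i] + c[i]
            b[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-delta)))
        if it >= burn_in:
            mean += b
            second += np.outer(b, b)
    return mean / n_samples, second / n_samples
```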
2.1 Example
In order to gain intuition into the properties of the model, we illustrate it in a simple one-dimensional
case. From equation (4) for the conditional mean of the continuous variables, we expect the distance
between the conditional means ?x|b to increase with increasing correlation between continuous and
binary variables increases. We see that this is indeed the case: While the conditional Gaussians
p(x|b = 1) and p(x|b = 0) are identical if x and b are uncorrelated (figure 1 A), a correlation
between x and b shifts them away from the unconditional mean (figure 1 B). Also, the weight
assigned to each of the two Gaussians can be changed. While in figures 1 A and 1 B b has a symmetric
mean of 0.5, a non-symmetric mean leads to an asymmetry in the weighting of each Gaussian
illustrated in figure 1 C.
2.2 Comparison with other models for the joint modeling of binary and continuous data
There are two models in the literature which model the joint distribution of continuous and binary
variables, which we will list in the following and compare them to the model derived in this paper.
Location model The location model (LM) [14, 10, 9] also uses the same factorization as above
p(x, b) = p(x|b)p(b). However, the distribution for the binary variables p(b) is not of Boltzmann
type but a general multinomial distribution and therefore has more degrees of freedom. The conditional distribution p(x|b) is assumed to be Gaussian with moments $(\mu_b, \Sigma_b)$, which can both
depend on the conditional state b. Thus, fitting the LM usually requires much more data, in order to estimate
the moments for every possible binary state. The location model can also be seen as a maximum entropy model in the sense that it is the distribution with maximal entropy among all distributions with
the conditional moments. As fitting this model in its general form is prone to overfitting, various ad
hoc constraints have been proposed; see [9] for details.
Partially dichotomized Gaussian model Another simple possibility to obtain a joint distribution
of continuous and binary variables is to take multivariate (latent) Gaussian distribution for all variables and then dichotomize those components which should represent the binary variables. Thus, a
binary variable bi is set to 1 if the underlying Gaussian variables is greater than 0 and it is set to 0 if
the Gaussian variable is smaller than 0. This model is known as the partially dichotomized Gaussian
(PDG) [6]. Importantly the marginal distribution over the continuous variables is always Gaussian
and not a mixture as in our model. The reason for this is that all marginals of a Gaussian distribution
are again Gaussian.
Figure 2: Illustration of the binary encoding with box-type tuning curves. A: the marginal
distribution over stimuli. The true underlying stimulus distribution is a uniform distribution over the
interval $(-0.5, 0.5)$ and is plotted in shaded gray. The mixture-of-Gaussians approximation of the
MaxEnt model is plotted in black. Each neuron has a tuning curve consisting of a superposition of
box functions. B: the tuning curve of the first neuron. This is equivalent to the conditional
distribution when conditioning on the first bit, which indicates whether the stimulus is in the right part of
the interval. The tuning curve is a superposition of 5 box functions. The true tuning curve is plotted
in shaded gray, whereas the MaxEnt approximation is plotted in black. C: the tuning curve
of the neuron with index 2. D: covariance between continuous and binary variables as a function of
the index of the binary variables. This is the same as the STA for each neuron (see also equation
(10)). E: the conditional distribution when conditioning on both variables (0, 2) being one.
This corresponds to the product of the tuning curves.
3 Applications
3.1 Spike triggering and feature extraction
Spike triggering is a common technique for finding features which a single neuron is sensitive
to. The presented model can be seen as an extension in the following sense. Suppose that we
have observed samples $(x^n, b^n)$ from a population responding to a stimulus. The spike-triggered
average (STA) for a neuron $i$ is then defined as
$$\mathrm{STA}_i = \frac{\sum_n x^n b_i^n}{\sum_n b_i^n} = \frac{E[x\, b_i]}{r_i} \qquad (10)$$
where $r_i = \frac{\sum_n b_i^n}{N} = p(b_i = 1)$ is the firing rate of the $i$-th neuron, i.e. the fraction of ones within the
sample. Note that the moment $E[x\, b_i]$ is one of the constraints we require for the maximum entropy
model, and therefore the STA is included in the model.
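The following snippet (with hypothetical sample arrays) illustrates that the STA of equation (10) is just a rescaling of the constrained moment $E[xb^\top]$:

```python
import numpy as np

# X: (N, d) stimuli, B: (N, n) binary responses (hypothetical sample arrays)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
B = (rng.random((1000, 3)) < 0.2).astype(float)

rates = B.mean(axis=0)                 # r_i = p(b_i = 1)
STA = (X.T @ B) / B.sum(axis=0)        # Eq. (10): sum_n x^n b_i^n / sum_n b_i^n
# equivalently E[x b_i] / r_i, a rescaling of the constrained moment E[x b^T]
assert np.allclose(STA, (X.T @ B / len(X)) / rates)
```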
In addition, the model also has similarities to spike-triggered covariance (STC) [20, 16]. STC denotes the distribution or, more precisely, the covariance of the stimuli that evoked a spiking response.
Usually, this covariance is then compared to the total covariance over the entire stimulus distribution.
In the joint maximum-entropy model, we have access to a similar distribution, namely the conditional distribution $p(x|b_i = 1)$, which is a compact description of the spike-triggered distribution.
Note that $p(x|b_i = 1)$ can be highly non-Gaussian, as all neurons $j \neq i$ are marginalized out; this is
why the current model is an extension of spike triggering.
Figure 3: Illustration of a spike-by-spike decoding scheme. The MaxEnt model was fit to data
from two deterministic integrate-and-fire models and can then be used for decoding spikes generated
by the two independent deterministic models. The two green arrows correspond to the weights of a
two-pixel receptive field for each of the two neurons. The two-dimensional stimulus was drawn from
two independent Gamma distributions. The resulting spike trains were discretized into 5 time bins,
each 200 ms long. The spike train elicited by a particular stimulus (cross) is decoded. A) the marginal
distribution of the continuous variables. B) the posterior when conditioning on the first temporal half
of the response to that stimulus. C) the conditional distribution when conditioning on the full observed
binary pattern.
Additionally, we can also trigger or condition not on a single neuron but on any response pattern $B_S$ of a sub-population $S$. The resulting
$p(x|B_S)$ with $B_S = \{b : b_i = B_i,\; i \in S\}$ is then also a mixture of Gaussians, with $2^n$ components, where $n$ is the number of unspecified neurons $j \notin S$. As illustrated above (see figure 1 B),
correlations between neurons and stimuli lead to a separation of the individual Gaussians. Hence,
stimulus correlations of other neurons $j \neq i$ in the distribution $p(x, b_{j \neq i} | b_i = 1)$ would have the
same effect on the spike-triggered distribution of neuron $i$. Correlations within this distribution also
imply that there are correlations between neuron $j$ and neuron $i$. Thus, stimulus as well as noise
correlations cause deviations of the conditional $p(x|B_S)$ from a single Gaussian. Therefore, the
full conditional distribution $p(x|B_S)$ in general contains more information about the features which
trigger this sub-population to evoke the specified response pattern than the conditional mean, i.e.
the STA.
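A sketch of this construction (hypothetical names, small populations only) builds the mixture-of-Gaussians representation of $p(x|B_S)$ by enumerating the unspecified neurons:

```python
import itertools
import numpy as np

def conditional_mixture(Lam, lam, n_x, clamp):
    """Mixture-of-Gaussians representation of p(x | B_S), via Eqs. (4)-(5).

    clamp: dict {neuron index: 0 or 1} fixing the sub-population S.
    Returns mixture weights, conditional means and the shared covariance."""
    n_b = Lam.shape[0] - n_x
    Lxx, Lxb, Lbb = Lam[:n_x, :n_x], Lam[:n_x, n_x:], Lam[n_x:, n_x:]
    lx, lb = lam[:n_x], lam[n_x:]
    Sigma = np.linalg.inv(-Lxx)
    A = Lbb + Lxb.T @ Sigma @ Lxb
    c = lb + Lxb.T @ Sigma @ lx
    free = [i for i in range(n_b) if i not in clamp]
    pats = []
    for bits in itertools.product([0, 1], repeat=len(free)):
        b = np.zeros(n_b)
        for i, v in clamp.items():
            b[i] = v
        b[free] = bits
        pats.append(b)
    pats = np.array(pats)
    logw = 0.5 * np.einsum('ki,ij,kj->k', pats, A, pats) + pats @ c
    w = np.exp(logw - np.logaddexp.reduce(logw))      # mixture weights p(b | B_S)
    means = (Sigma @ (lx[:, None] + Lxb @ pats.T)).T  # mu_{x|b} per pattern
    return w, means, Sigma
```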
We demonstrate the capabilities of this approach by considering the following encoding. As stimulus, we consider one continuous real-valued variable that is drawn uniformly from the interval
$[-0.5, 0.5]$. It is mapped to a binary population response in the following way. Each neuron $i$ has a
square-wave tuning function
$$b_i(x) = \Theta\left(\sin(2^{(i+1)} \pi x)\right)$$
where $\Theta$ is the Heaviside function. In this way, the response of a neuron is set to 1 if its tuning function is positive and 0 otherwise. The first (index 0) neuron distinguishes the left and the right
part of the entire interval. The $(i+1)$-st neuron distinguishes subsequently left from right in the sub-intervals of the $i$-th neuron. That is, the response of the second neuron is always 1 if the stimulus is
in the right part of the intervals $[-0.5, 0]$ and $[0, 0.5]$. These tuning curves can also be thought of as
a mapping into a non-linear feature space in which the neuron again acts linearly. Although the data-generation process is not contained in our model class, we were able to extract the tuning curves,
as shown in figure 2. Note that for this example neither the STA nor the STC analysis alone would
provide any insight into the feature selectivity of the neurons, in particular for the neurons which
have multi-modal tuning curves (the ones with higher indexes in the above example). However, the
tuning curves could be reconstructed with any kind of density estimation, given the STA.
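A minimal sketch of this encoder follows; note that the sign convention of the reconstructed tuning function is our assumption. It also illustrates how the moments constrained in (1) are obtained from samples:

```python
import numpy as np

def encode(x, n_neurons):
    """Hierarchical square-wave population code for x in [-0.5, 0.5]:
    neuron i fires iff sin(2^(i+1) * pi * x) > 0 (sign convention assumed)."""
    i = np.arange(n_neurons)
    return (np.sin(2.0 ** (i + 1) * np.pi * x) > 0).astype(float)

# sample the joint statistics this encoder induces
rng = np.random.default_rng(1)
xs = rng.uniform(-0.5, 0.5, size=10000)
bs = np.array([encode(x, 5) for x in xs])
joint = np.concatenate([xs[:, None], bs], axis=1)
mu, C = joint.mean(axis=0), np.cov(joint.T)   # the moments constrained in (1)
```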
3.2 Spike-by-spike decoding
Since we have a simple expression for the conditional distribution $p(x|b)$ (see equation (4)),
we can use the model to analyze the decoding performance of a neural population. To illustrate
this, we sampled spike trains from two leaky integrate-and-fire neurons for 1 second and discretized
the resulting spike trains into 5 bins of 200 ms length each. In each trial, we used a constant two-dimensional stimulus, which was drawn from two independent Gamma distributions.
Figure 4: Illustration of the conditional probability p(b|x) for the example in figure 3. In A,
for every binary pattern the corresponding probability is plotted for the given stimulus from figure
3, where the brightness of each square indicates its probability. For the given stimulus, the actual
response pattern used for figure 3 is marked with a circle. Each pattern b is split into two halves by
the contributions of the two neurons (32 possible patterns for each neuron); response patterns of
the first neuron are shown on the x-axis, response patterns of the second neuron on the y-axis.
In B we plot, for each pattern b, its probability under the two conditional distributions $p(b|x^*)$
and $p(b|x')$ against each other, with $x^* = (0.85, 0.72)$ and $x' = (1.5, 1.5)$.
parameter = 3 and scale parameter = 0 3. For each LIF neuron, this two dimensional stimulus
was then projected onto the one-dimensional subspace spanned by its receptive field and used as
input current. Hence, there are 10 binary variables, 5 for each spike-train of the neurons and 2
continuous variables for the stimulus to be modeled. We draw 5 ? 106 samples, calculated the second
order moments of the joint stimulus and response vectors and fitted our maximum entropy model
to these moments. The obtained distribution is shown in figure 3. In 3 A, we show the marginal
distribution of the stimuli, which is a mixture of 2¹⁰ Gaussians. The receptive fields of the two
neurons are indicated by green arrows. To illustrate the decoding process, we sampled a stimulus
and corresponding response r, from which we try to reconstruct the stimulus. In 3 B, we show
the conditional distribution when conditioning on the first half of the response. Finally in 3 C, the
complete posterior is shown when conditioned on the full response. From A to C, the posterior is more and more concentrated around the true stimulus. Although there is no neural noise in the encoding
model, the reconstruction is not perfect. This is due to the regularization properties of the maximum
entropy approach.
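The decoding step is straightforward to sketch in code, assuming we already have the fitted model in its mixture-of-Gaussians form: a log-weight log p(b) and a conditional mean for every binary pattern b, with a covariance shared across patterns. All names below are ours:

import numpy as np

def posterior_mixture(log_pb, mu_xb, observed):
    """Posterior over the stimulus x after observing a prefix of the
    binary response.

    log_pb   : dict mapping each pattern (a tuple of 0/1) to log p(b)
    mu_xb    : dict mapping each pattern to the conditional mean of x
    observed : tuple holding the observed prefix of the response

    Conditioning on part of the response keeps every mixture component
    whose pattern is consistent with it; conditioning on the full
    response leaves a single Gaussian."""
    keep = [b for b in log_pb if b[:len(observed)] == observed]
    logw = np.array([log_pb[b] for b in keep])
    w = np.exp(logw - logw.max())
    return w / w.sum(), [mu_xb[b] for b in keep]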
3.3 Stimulus dependence of firing patterns
While previous studies on the structure of neuronal firing patterns in the retina have compared how well second-order maximum entropy models fit the empirically observed distributions under different stimulation conditions [19, 22], the stimulus has never been explicitly taken into account in the model. In the proposed framework, we have access to p(b | x), so we can explicitly study how the pattern distribution of a neural population depends on the stimulus. We illustrate this by continuing the example of figure 3. First, we show how the individual firing probabilities depend on x (figure 4 A). Note that although the encoding process for the previous example was noiseless, that is, for every given stimulus there is only one response pattern, the conditional distribution p(b | x) is not a delta-function, but dispersed around the expected response. This is due to the second order approximation to the encoding model. Further, it turns out that a spike in the bin immediately after a spike is very unlikely under the model, which captures the property of the leaky integrator. Also, we compare how p(b | x) changes for different values of x. This is illustrated in figure 4 B.
3.4 Spike train metric
Oftentimes, it is desirable to measure distances between spike trains [24]. One problem, however, is
that not every spike might be of equal importance. That is, if a spike train differs only in one spike, it
might nevertheless represent a completely different stimulus. Therefore, Ahmadian [2] suggested to
measure the distance between spike trains as the difference of the stimuli when reconstructed based on the one or the other spike train. If the population is noisy, we want to measure the difference
of reconstructed stimuli on average. To this end, we need access to the posterior distribution, when
conditioning on a particular spike train or binary pattern. Using the maximum entropy model, we
can define the following spike metric:

$$d(b^1, b^2) = D_{KL}\left(p_{ME}(x \mid b^1) \,\|\, p_{ME}(x \mid b^2)\right) = \frac{1}{2}\left(\mu_{x|b^1} - \mu_{x|b^2}\right)^{\top} \Sigma_{xx}^{-1}\left(\mu_{x|b^1} - \mu_{x|b^2}\right) \qquad (11)$$
Here, D_{KL} denotes the Kullback-Leibler divergence between the posterior densities. Equation 11 is symmetric in b; however, in order to get a symmetric expression for other types of posterior
distributions, the Jensen-Shannon divergence might be used instead. As an example we consider
the induced metrics for the encoding model of figure 2. The metric induced by the square-wave
tuning functions of section 3.1 is relatively simple. When conditioning on a particular population
response, the conditional distribution p(x|b) is always a Gaussian with approximately the width of
the smallest wavelength. Flipping a neuron's response within this pattern corresponds to shifting the
conditional distribution. Suppose we have observed a population response consisting of only ones.
This results in a Gaussian posterior distribution with mean in the middle of the rightmost interval (0.5 - 1/1024, 0.5). Now flipping the response of the "low-frequency" neuron, that is, the one shown in figure 2 B, shifts the mean of the posterior to the middle of the sub-interval (-1/1024, 0). Whereas flipping the "high-frequency" neuron, the one which indicates left or right within the smallest possible sub-interval, corresponds to shifting the mean just by the amount of this smallest interval to the left. Flipping the response of single neurons within this population can thus result in posterior distributions which look quite different in terms of the Kullback-Leibler divergence. In particular, there is an
ordering in terms of the frequency of the neurons with respect to the proposed metric.
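Since both posteriors are Gaussians sharing one covariance, Equation 11 reduces to a quadratic form in the difference of posterior means. A minimal sketch (names ours):

import numpy as np

def spike_metric(mu_b1, mu_b2, sigma_inv):
    """Equation (11): KL divergence between the Gaussian posteriors
    p_ME(x | b1) and p_ME(x | b2), which share the covariance matrix."""
    d = np.asarray(mu_b1) - np.asarray(mu_b2)
    return 0.5 * d @ sigma_inv @ d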
4 Conclusion
We have presented a maximum-entropy model based on the joint second order statistics of continuous valued variables and binary neural responses. This allows us to extend the maximum-entropy
approach [19] for analyzing neural data to incorporate other variables of interest such as continuous
valued stimuli. Alternatively, additional neurophysiological signals such as local field potentials
[13] can be taken into account to study their relation with the joint firing patterns of local neural
ensembles. We have demonstrated four applications of this approach: (1) It allows us to extract the
features a (sub-)population of neurons is sensitive to, (2) we can use it for spike-by-spike decoding,
(3) we can assess the impact of stimuli on the distribution of population patterns and (4) it yields a
natural spike-train metric.
We have shown that the joint maximum-entropy model can be learned in a convex fashion, although
high-dimensional binary patterns might require the use of efficient sampling techniques. Because
of the maximum-entropy approach the resulting distribution is well regularized and does not require
any ad-hoc restrictions or regularity assumptions as have been proposed for related models [9].
Analogous to a Boltzmann machine with hidden variables, it is possible to further add hidden binary
nodes to the model. This allows us to take higher-order correlations into account as well, although
we stay essentially in the second-order framework. Fortunately, the learning scheme for fitting the
modified model to observed data remains almost unchanged: The only difference is that the moments
have to be averaged over the non-observed binary variables as well. In this way, the model can also
be used as a clustering algorithm if we marginalize over all binary variables. The resulting mixture-of-Gaussians model will consist of 2^N components, where N is the number of hidden binary variables.
Unfortunately, convexity cannot be guaranteed if the model contains hidden nodes. In a similar
fashion, we could also add hidden continuous variables, for example to model unobserved common
inputs. In contrast to hidden binary nodes, this does not lead to an increased model complexity:
averaging over hidden continuous variables corresponds to integrating out each Gaussian within the
mixture, which results in another Gaussian. Also the restriction that all covariance matrices in the
mixture need to be the same still holds, because each Gaussian is integrated in the same way.
Acknowledgments We would like to thank J. Macke and J. Cotton for discussions and feedback on the
manuscript. This work is supported by the German Ministry of Education, Science, Research and Technology
through the Bernstein award to MB (BMBF; FKZ: 01GQ0601), the Werner-Reichardt Centre for Integrative Neuroscience Tübingen, and the Max Planck Society.
8
References
[1] D.H. Ackley, G.E. Hinton, and T.J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[2] Y. Ahmadian, J. Pillow, J. Shlens, E. Simoncelli, E.J. Chichilnisky, and L. Paninski. A decoder-based spike train metric for analyzing the neural code in the retina. In Frontiers in Systems Neuroscience. Conference Abstract: Computational and systems neuroscience, 2009.
[3] M. Bethge and P. Berens. Near-maximum entropy models for binary neural representations of natural images. In Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, volume 20, pages 97-104, Cambridge, MA, 2008. MIT Press.
[4] Tamara Broderick, Miroslav Dudik, Gasper Tkacik, Robert E. Schapire, and William Bialek. Faster solutions of the inverse pairwise Ising problem. arXiv, q-bio.QM:0712.2437, Dec 2007.
[5] G. Buzsaki. Large-scale recording of neuronal ensembles. Nature Neuroscience, 7(5):446-451, 2004.
[6] D.R. Cox and Nanny Wermuth. Likelihood factorizations for mixed discrete and continuous variables. Scandinavian Journal of Statistics, 26(2):209-220, June 1999.
[7] A. Tang et al. A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci., 28(2):505-518, 2008.
[8] Q.J.M. Huys, M.B. Ahrens, and L. Paninski. Efficient estimation of detailed single-neuron models. Journal of Neurophysiology, 96(2):872, 2006.
[9] W. Krzanowski. The location model for mixtures of categorical and continuous variables. Journal of Classification, 10(1):25-49, 1993.
[10] S.L. Lauritzen and N. Wermuth. Graphical models for associations between variables, some of which are qualitative and some quantitative. The Annals of Statistics, 17(1):31-57, March 1989.
[11] D.J.C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge U. Press, 2003.
[12] J.H. Macke, P. Berens, A.S. Ecker, A.S. Tolias, and M. Bethge. Generating spike trains with specified correlation coefficients. Neural Computation, 21(2):1-27, 2009.
[13] Marcelo A. Montemurro, Malte J. Rasch, Yusuke Murayama, Nikos K. Logothetis, and Stefano Panzeri. Phase-of-firing coding of natural visual stimuli in primary visual cortex. Current Biology, 18:375-380, March 2008.
[14] I. Olkin and R.F. Tate. Multivariate correlation models with mixed discrete and continuous variables. The Annals of Mathematical Statistics, 32(2):448-465, June 1961.
[15] J.W. Pillow, J. Shlens, L. Paninski, A. Sher, A.M. Litke, E.J. Chichilnisky, and E.P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995-999, 2008.
[16] J.W. Pillow and E.P. Simoncelli. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6(4):414-428, 2006.
[17] Y. Roudi, E. Aurell, and J.A. Hertz. Statistical physics of pairwise probability models. Frontiers in Computational Neuroscience, 2009.
[18] Yasser Roudi, Sheila Nirenberg, and Peter E. Latham. Pairwise maximum entropy models for studying large biological systems: when they can work and when they can't. PLoS Comput Biol, 5(5), 2009.
[19] Elad Schneidman, Michael J. Berry, Ronen Segev, and William Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007-1012, April 2006.
[20] O. Schwartz, E.J. Chichilnisky, and E.P. Simoncelli. Characterizing neural gain control using spike-triggered covariance. In Advances in Neural Information Processing Systems 14: Proceedings of the 2002 [sic] Conference, page 269. MIT Press, 2002.
[21] J. Shlens, G.D. Field, J.L. Gauthier, M. Greschner, A. Sher, A.M. Litke, and E.J. Chichilnisky. The structure of large-scale synchronized firing in primate retina. Journal of Neuroscience, 29(15):5022, 2009.
[22] Jonathon Shlens, Greg D. Field, Jeffrey L. Gauthier, Matthew I. Grivich, Dumitru Petrusca, Alexander Sher, Alan M. Litke, and E.J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. J. Neurosci., 26(32):8254-8266, August 2006.
[23] Jonathon Shlens, Fred Rieke, and E.J. Chichilnisky. Synchronized firing in the retina. Current Opinion in Neurobiology, 18(4):396-402, August 2008.
[24] J.D. Victor and K.P. Purpura. Metric-space analysis of spike trains: theory, algorithms and application. Network: Computation in Neural Systems, 8(2):127-164, 1997.
[25] Shan Yu, Debin Huang, Wolf Singer, and Danko Nikolic. A small world of neuronal synchrony. Cereb. Cortex, 18(12):2891-2901, April 2008.
Training Factor Graphs with Reinforcement
Learning for Efficient MAP Inference
Michael Wick, Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
{mwick,khash,sameer,mccallum}@cs.umass.edu
Abstract
Large, relational factor graphs with structure defined by first-order logic or other
languages give rise to notoriously difficult inference problems. Because unrolling
the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima: the system must transition through
lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by
leveraging reinforcement learning (RL). Rather than setting parameters to maximize the likelihood of the training data, parameters of the factor graph are treated
as a log-linear function approximator and learned with methods of temporal difference (TD); MAP inference is performed by executing the resulting policy on
held out test data. Our method allows efficient gradient updates since only factors
in the neighborhood of variables affected by an action need to be computed; we
bypass the need to compute marginals entirely. Our method yields dramatic empirical success, producing new state-of-the-art results on a complex joint model
of ontology alignment, with a 48% reduction in error over state-of-the-art in that
domain.
1 Introduction
Factor graphs are a widely used representation for modeling complex dependencies amongst hidden
variables in structured prediction problems. There are two common inference problems: learning
(setting model parameters) and decoding (maximum a posteriori (MAP) inference). MAP inference
is the problem of finding the most probable setting to the graph?s hidden variables conditioned on
some observed variables.
For certain types of graphs, such as chains and trees, exact inference and learning is polynomial time
[1, 2, 3]. Unfortunately, many interesting problems require more complicated structure rendering
exact inference intractable [4, 5, 6, 7]. In such cases we must rely on approximate techniques; in
particular, stochastic methods such as Markov chain Monte Carlo (e.g., Metropolis-Hastings) have
been applied to problems such as MAP inference in these graphs [8, 9, 10, 11, 6]. However, for
many real-world structured prediction tasks, MCMC (and other local stochastic methods) are likely
to struggle as they transition through lower-scoring regions of the configuration space.
For example, consider the structured prediction task of clustering where the MAP inference problem
is to group data points into equivalence classes according to some model. Assume for a moment that
Figure 1: The figure on the left shows the sequence of states along an optimal path beginning at
a single-cluster configuration and ending at the MAP configuration (F1 scores for each state are
shown). The figure on the right plots the F1 scores along the optimal path to the goal for the case
where the MAP clustering has forty instances (twenty per cluster) instead of 5.
this model is perfect and exactly reflects the pairwise F1 score. Even in these ideal conditions
MCMC must make many downhill jumps to reach the MAP configuration. For example, Figure 1
shows the F1 scores of each state along the optimal path to the MAP clustering (assuming each
MCMC jump can reposition one data point at a time). We can see that several consecutive downhill
transitions must be realized before model-scores begin to improve.
Taking into account the above discussion with an emphasis on the delayed feedback nature of the
MAP inference problem immediately inspires us to employ reinforcement learning (RL) [12]. RL
is a framework for solving the sequential decision making problem with delayed reward. There has
been an extensive study of this problem in many areas of machine learning, planning, and robotics.
Our approach is to directly learn the parameters of the log-linear factor graph with reinforcement
learning during a training phase; MAP inference is performed by executing the policy. Because we
develop the reward-structure to assign the most mass to the goal configuration, the parameters of the
model can also be interpreted as a regularized version of maximum likelihood that is smoothed over
neighboring states in the proposal manifold.
The rest of this document is organized as follows: in §2 we briefly overview background material. In §3 we describe the details of our algorithm and discuss a number of ideas for coping with the combinatorial complexity in both state and action spaces. In §4.3 we present our empirical results, and finally in §6 we conclude and lay out a number of ideas for future work.
2 Preliminaries

2.1 Factor Graphs
A factor graph is an undirected bipartite graphical representation of a probability distribution with
random variables and factors as nodes. Let X be a set of observed variables and Y be a set of
hidden variables. The factor graph expresses the conditional probability of Y = y given X = x
discriminatively:
$$P(y \mid x) = \frac{1}{Z_X} \prod_{\psi_i \in \Psi} \psi_i(x, y^i) = \frac{1}{Z_X} \exp\left(\sum_k \theta_k \phi_k(x, y^k)\right) \qquad (1)$$
where Z_X is an input-dependent normalizing constant ensuring that the distribution sums to one, Ψ is the set of factors, and ψ(x, y^i) are factors over the observed variables x and a set of hidden variables y^i that are the neighbors of the factor (we use a superscript to denote a set). Factors are log-linear combinations of features φ(x, y^i) and parameters θ = {θ_j}. The problem of learning is to find a setting of the parameters θ that explains the data. For example, maximum likelihood sets the parameters so that the model's feature expectations match the data's expectations.
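As a sketch of how Equation 1 is used downstream, the unnormalized log-score of an assignment is just a sum of inner products over the instantiated factors; the intractable Z_X is never needed for the score differences used later (names ours):

def log_score(theta, factor_features):
    """Unnormalized log P(y | x) of Equation (1): the sum over factors
    of theta . phi(x, y^i). `factor_features` is an iterable of numpy
    feature vectors, one per factor touching the assignment y."""
    return sum(float(theta @ phi) for phi in factor_features)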
2
2.2 Reinforcement Learning
Most of the discussion here is based on [12]. Reinforcement learning (RL) refers to a class of problems in which an agent interacts with the environment and the objective is to learn a course of actions that optimizes a long-term measure of a delayed reward signal. The most popular realization of RL has been in the context of Markov decision processes (MDPs).

An MDP is the tuple M = ⟨S, A, R, P⟩, where S is the set of states, A is the set of actions, R : S × A × S → ℝ is the reward function, i.e. R(s, a, s′) is the expected reward when action a is taken in state s and the system transitions to state s′, and P : S × A × S → [0, 1] is the transition probability function, i.e. P^a(s, s′) is the probability of reaching state s′ if action a is taken in state s.

A stochastic policy π is defined as π : S × A → [0, 1] such that Σ_a π(a|s) = 1, where π(s, a) is the probability of choosing action a (as the next action) when in state s. Following a policy on an MDP results in an expected discounted reward R_t^π accumulated over the course of the run, where R_t^π = Σ_{k=0}^{T} γ^k r_{t+k+1}. An optimal policy π* is a policy that maximizes this reward.
Given a Q-function (Q : S × A → ℝ) that represents the expected discounted reward for taking action a in state s, the optimal policy π* can be found by locally maximizing Q at each step. Methods of temporal difference (TD) [13] can be used to learn the optimal policy in MDPs, and even have convergence guarantees when the Q-function is in tabular form. However, in practice, tabular representations do not scale to large or continuous domains; a problem that function approximation techniques address [12]. Although the convergence properties of these approaches have not yet been established, the methods have been applied successfully to many problems [14, 15, 16, 17].
When linear function approximation is used, the state-action pair ⟨s, a⟩ is represented by a feature vector φ(s, a) and the Q value is represented using a vector of parameters θ, i.e.

$$Q(s, a) = \sum_{\phi_k \in \phi(s,a)} \theta_k \phi_k \qquad (2)$$
Instead of updating the Q values directly, the updates are made to the parameters θ:

$$\theta \leftarrow \theta + \alpha\left(r_{t+1} - Q(s_t, a_t) + \gamma \max_{a} Q(s_{t+1}, a)\right)\phi(s_t, a_t) \qquad (3)$$

Notice the similarity between the linear function approximator (Equation 2) and the log-linear factors (right-hand side of Equation 1); namely, the approximator has the same form as the unnormalized log probabilities of the distribution. This enables us to share the parameters θ from Equation 1.
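For concreteness, one update of Equation 3 can be written as follows (a sketch; names ours):

def q_learning_step(theta, phi_sa, reward, q_next_max, alpha, gamma):
    """One Watkins Q-learning update with linear function approximation
    (Equation 3). phi_sa is the numpy feature vector of (s_t, a_t);
    q_next_max is max_a Q(s_{t+1}, a) under the current theta."""
    td_error = reward - float(theta @ phi_sa) + gamma * q_next_max
    return theta + alpha * td_error * phi_sa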
3 Our Approach
In our RL treatment of learning factor graphs, each state in the system represents a complete assignment to the hidden variables Y = y. Given a particular state, an action modifies the setting to a subset of the hidden variables; therefore, an action can also be defined as a setting to all the hidden variables Y = y′. However, in order to cope with the complexity of the action space, we introduce a proposer (as in Metropolis-Hastings) B : Y → Y that constrains the space by limiting the number of possible actions from each state. The reward function R can be defined as the residual performance improvement when the system transitions from a current state y to a neighboring state y′ on the manifold induced by B. In our approach, we use a performance measure based on the ground truth labels (for example, F1, accuracy, or normalized mutual information) as the reward. These rewards ensure that the ground truth configuration is the goal.
3.1 Model
Recall that an MDP is defined as M = ⟨S, A, R, P⟩ with a set of states S, set of actions A, reward function R and transition probability function P; we can now reformulate MAP inference and learning in factor graphs as follows:
• States: we require the state space to encompass the entire feasible region of the factor graph. Therefore, a natural definition for a state is a complete assignment to the hidden variables Y = y, and the state space itself is defined as the set S = {y | y ∈ DOM(Y)}, where DOM(Y) is the domain space of Y; we omit the fixed observables x for clarity, since only y is required to uniquely identify a state. Note that unless the hidden variables are highly constrained, the feasible region will be combinatorial in |Y|; we discuss how to cope with this in the following sections.
• Actions: Given a state s (e.g., an assignment of the Y variables), an action may be defined as a constrained set of modifications to a subset of the hidden variable assignments. We constrain the action space to a manageable size by using a proposer, or a behavior policy from which actions are sampled. A proposer defines the set of reachable states by describing the distribution over neighboring states s′ given a state s. In the context of the action space of an MDP, the proposer can be viewed in two ways. First, each possible neighbor state s′ can be considered the result of an action a, leading to a large number of deterministic actions. Second, it can be regarded as a single highly stochastic action, whose next state s′ is a sample from the distribution given by the proposer. Both of these views are equivalent; the former view is used for notational simplicity.
• Reward Function: The reward function is designed so that the policy learned through delayed reward reaches the MAP configuration. Rewards are shaped to facilitate efficient learning in this combinatorial space. Let F be some performance metric (for example, for information extraction tasks, it could be the F1 score based on the ground truth labels).

The reward function used is the residual improvement in the performance metric F when the system transitions between states s and s′:

$$R(s, s') = F(s') - F(s) \qquad (4)$$

This reward can be viewed as learning to minimize the geodesic distance between a current state and the MAP configuration on the proposal manifold. Alternatively, we could define a Euclidean reward as F(s*) − F(s′), where s* is the ground truth. We choose an F such that the ground truth scores the highest, that is, s* = arg max_s F(s).
• Transition Probability Function: Recall that the actions in our system are samples generated from a proposer B, and that each action uniquely identifies a next state in the system. The function that returns this next state deterministically is called simulate(s, a). Thus, given the state s and the action a, the next state s′ has probability P^a(s, s′) = 1 if s′ = simulate(s, a), and 0 otherwise.
3.2 Efficient Q Value Computations
We use linear function approximation to obtain Q values over the state/action space. That is, Q(s, a) = θ · φ(s, a), where φ(s, a) are features over the state-action pair (s, a). We show below how Q values can be derived from the factor graph (Equation 1) in a manner that enables efficient computations.
As mentioned previously, a state is an assignment to the hidden variables Y = y and an action is another assignment to the hidden variables Y = y′ (that results from changing the values of a subset of the variables ΔY ⊆ Y). Let Δy be the setting to those variables in y and Δy′ be the new setting to those variables in y′. For each assignment, the factor graph can compute the conditional probability p(y | x). Then, the residual log-probability resulting from taking action a in state y and reaching y′ is log(p(y′ | x)) − log(p(y | x)). Plugging in the model from Equation 1 and performing some algebraic manipulation so that redundant factors cancel yields:

$$\theta \cdot \left( \sum_{y'^i \in \Delta y'} \phi(x, y'^i) - \sum_{y^i \in \Delta y} \phi(x, y^i) \right) \qquad (5)$$
where the partition function Z_X and factors outside the neighborhood of Δy cancel. In practice an action will modify a small subset of the variables, so this computation is extremely efficient. We are now justified in using Equation 5 (derived from the model) to compute the inner product (θ · φ(s, a)) from Equation 2.
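A sketch of this residual computation: only factors whose neighborhoods intersect the changed variables Δy are evaluated, so the cost of scoring an action does not depend on the size of the full graph (names ours):

def residual_score(theta, new_factor_features, old_factor_features):
    """Equation (5): the change in unnormalized log-probability for an
    action that re-assigns the variables in Delta-y. Only factors that
    touch the changed variables appear; Z_X and all others cancel."""
    new = sum(float(theta @ phi) for phi in new_factor_features)
    old = sum(float(theta @ phi) for phi in old_factor_features)
    return new - old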
3.3 Algorithm
Now that we have defined MAP inference in a factor graph as an MDP, we can apply a wide variety of RL algorithms to learn the model's parameters. In particular, we build upon Watkins's Q(λ) [18, 19], a temporal difference learning algorithm [13]; we augment it with function approximation as described in the previous section. Our RL learning method for factor graphs is shown in Algorithm 1.
Algorithm 1: Modified Watkins's Q(λ) for Factor Graphs

Input: performance metric F, proposer B
Initialize θ and e⃗ = 0
repeat {For every episode}
    s ← random initial configuration
    Sample n actions a ∼ B(s); collect action samples in A_B(s)
    for samples a ∈ A_B(s) do
        s′ ← simulate(s, a)
        φ(s, s′) ← set of features between s, s′
        Q(s, a) ← θ · φ(s, s′)    {Equation 5}
    end for
    repeat {For every step of the episode}
        if with probability (1 − ε) then
            a ← arg max_{a′} Q(s, a′)
            e⃗ ← γλ e⃗
        else
            Sample a random action a ∼ B(s)
            e⃗ ← 0
        end if
        s′ ← simulate(s, a)
        ∀φ_i ∈ φ(s, s′): e(i) ← e(i) + φ_i    {Accumulate eligibility traces}
        Observe reward r = F(s′) − F(s)    {Equation 4}
        δ ← r − Q(s, a)
        Sample n actions a ∼ B(s′); collect action samples in A_B(s′)
        for samples a ∈ A_B(s′) do
            s″ ← simulate(s′, a)
            φ(s′, s″) ← set of features between s′, s″
            Q(s′, a) ← θ · φ(s′, s″)
        end for
        a ← arg max_{a′} Q(s′, a′)
        δ ← δ + γ Q(s′, a)    {Equation 3 with eligibility traces}
        θ ← θ + α δ e⃗
        s ← s′
    until end of episode
until end of training
At the beginning of each episode, the factor graph is initialized to a random initial state s (by assigning Y = y₀). Then, during each step of the episode, the maximum action is obtained by repeatedly sampling from the proposal distribution (s′ = simulate(s, a)). The system transitions to the greedy state s′ with high probability (1 − ε), or transitions to a random state instead. We also include eligibility traces that have been modified to handle function approximation [12].
Once learning has completed on a training set, MAP inference can be evaluated on test data by
executing the resulting policy. Because Q-values encode both the reward and value together, policy
execution can be performed by choosing the action that maximizes the Q-function at each state.
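A sketch of executing the learned policy for MAP inference; the step and proposal counts are illustrative defaults (section 4.2 reports 200 steps per episode during training), and all names are ours:

def map_inference(s0, propose, q_value, n_steps=200, n_proposals=50):
    """Greedily execute the learned policy: at every state, sample
    candidate next states from the proposer B and move to the one with
    the highest Q-value, theta . phi(s, s')."""
    s = s0
    for _ in range(n_steps):
        candidates = [propose(s) for _ in range(n_proposals)]
        s = max(candidates, key=lambda nxt: q_value(s, nxt))
    return s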
4 Experiments
We evaluate our approach by training a factor graph for solving the ontology alignment problem.
Ontology alignment is the problem of mapping concepts from one ontology to semantically equivalent concepts from another ontology; our treatment of the problem involves learning a first-order
probabilistic model that clusters concepts into semantically equivalent sets. For our experiments,
we use the dataset provided by the Illinois Semantic Integration Archive (ISIA)¹. There are two
ontology mappings: one between two course catalog hierarchies, and another between two company
profile hierarchies. Each ontology is organized as a taxonomy tree. The course catalog contains 104
concepts and 4360 data records while the company profile domain contains 219 concepts and 23139
records. For our experiments we perform two-fold cross validation with even splits.
The conditional random field we use to model the problem factors into binary decisions over sets
of concepts, where the binary variable is one if all concepts in the set map to each other, and zero
otherwise. Each of these hidden variables neighbors a factor that also examines the observed concept
data. Since there are variables and factors for each hypothetical cluster, the size of the CRF is
combinatorial in the number of concepts in the ontology, and it cannot be fully instantiated even for small amounts of data. Therefore, we believe that this is a good dataset to demonstrate the scalability of the approach.
4.1 Features
The features used to represent the ontology alignment problem are described here. We choose
to encode our features in first order logic, aggregating and quantifying pairwise comparisons of
concepts over entire sets. These features are described in more detail in our technical report [17].
The pairwise feature extractors are the following:
• TFIDF cosine similarity between concept-names of ci and cj
• TFIDF cosine similarity between data-records that instantiate ci and cj
• TFIDF similarity of the children of ci and cj
• Lexical features for each string in the concept name
• True if there is a substring overlap between ci and cj
• True if both concepts are the same level in the tree
The above pairwise features are used as a basis for features over entire sets with the following first
order quantifiers and aggregators:
• ∀: universal first-order logic quantifier
• ∃: existential quantifier
• Average: conditional mean over a cluster
• Max: maximum value obtained for a cluster
• Min: minimum value obtained for a cluster
• Bias: conditional bias, counts the number of pairs where a pairwise feature could potentially fire.
The real-valued aggregators (min, max, average) are also quantized into bins of various sizes, corresponding to the number of bins = {2, 4, 20, 100}. Note that our first-order features must be computed
on-the-fly since the model is too large to be grounded in advance.
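A sketch of lifting a real-valued pairwise feature to the set level with these aggregators; the 0.5 threshold used to turn similarities into predicates for the ∀/∃ quantifiers is our assumption:

import numpy as np
from itertools import combinations

def cluster_features(concepts, pairwise):
    """Aggregate a pairwise similarity (a function of two concepts with
    values in [0, 1]) into set-level features: forall, exists, average,
    max, min, and the pair-count bias described above."""
    vals = [pairwise(a, b) for a, b in combinations(concepts, 2)]
    if not vals:  # singleton cluster: no pairs to aggregate
        return dict.fromkeys(("forall", "exists", "avg", "max", "min", "bias"), 0.0)
    return {
        "forall": float(all(v > 0.5 for v in vals)),
        "exists": float(any(v > 0.5 for v in vals)),
        "avg": float(np.mean(vals)),
        "max": max(vals),
        "min": min(vals),
        "bias": float(len(vals)),
    }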
                 Course Catalog               Company Profile
          F1     Precision  Recall     F1     Precision  Recall
RL        94.3   96.1       92.6       84.5   84.5       84.5
MH-CD1    76.9   78.0       57.0       64.7   64.7       64.7
MH-SR     92.0   88.9       76.3       81.5   88.0       75.9
GA-PW     89.9   100        81.5       81.5   88.0       75.9
GLUE      80     80         80         80     80         80

Table 1: Pairwise-matching precision, recall and F1 on the course catalog and company profile datasets.
4.2 Systems
In this section we evaluate the performance of our reinforcement learning approach to MAP inference and compare it to current stochastic and greedy alternatives. In particular, we compare piecewise
[20], contrastive divergence [21], and SampleRank [22, 11, 23]; these are described in more detail
below.
¹ http://pages.cs.wisc.edu/~anhai/wisc-si-archive/
• Piecewise (GA-PW): the CRF parameters are learned by training independent logistic regression classifiers in a piecewise fashion. Inference is performed by greedy agglomerative clustering.

• Contrastive Divergence (MH-CD1) with Metropolis-Hastings: the system is trained with contrastive divergence and allowed to wander one step from the ground-truth configuration. Once the parameters are learned, MAP inference is performed using Metropolis-Hastings (with a proposal distribution that modifies a single variable at a time).

• SampleRank with Metropolis-Hastings (MH-SR): this system is the same as above, but trains the CRF using SampleRank rather than CD1. MAP inference is performed with Metropolis-Hastings using a proposal distribution that modifies a single variable at a time (the same proposer as in MH-CD1).

• Reinforcement Learning (RL): this is the system introduced in this paper, which trains the CRF with delayed reward using Q(λ) to learn state-action returns. The actions are derived from the same proposal distribution as used by our Metropolis-Hastings systems (MH-CD1, MH-SR), modifying a single variable at a time; however, it is exhaustively applied to find the maximum action. We set the RL parameters as follows: α = 0.00001, γ = 0.9, λ = 0.9.

• GLUE: in order to compare with a well-known system on this dataset, we choose GLUE [24].
In these experiments contrastive divergence and SampleRank were run for 10,000 samples each,
while reinforcement learning was run for twenty episodes and 200 steps per episode. CD1 and
SampleRank were run for more steps to compensate for only observing a single action at each step
(recall RL computes the action with the maximum value at each step by observing a large number
of samples).
4.3 Results
In Table 1 we compare F1 (pairwise-matching) scores of the various systems on the course catalog
and company profile datasets. We also compare to the well known system, GLUE [24]. SampleRank (MH-SR), contrastive divergence (MH-CD1) and reinforcement learning (RL) underwent
ten training episodes initialized from random configurations; during MAP inference we initialized
the systems to the state predicted by greedy agglomerative clustering. Both SampleRank and reinforcement learning were able to achieve higher scores than greedy; however, reinforcement learning
outperformed all systems with an error reduction of 75.3% over contrastive divergence, 28% over
SampleRank, 71% over GLUE and 48% over the previous state of the art (greedy agglomerative inference on a conditional random field). Reinforcement learning also reduces error over each system
on the company profile dataset.
After observing the improvements obtained by reinforcement learning, we wished to test how robust
the method was at recovering from the local optima problem described in the introduction. To gain
more insight, we designed a separate experiment to compare Metropolis-Hastings inference (trained
with SampleRank) and reinforcement learning more carefully.
In the second experiment we evaluate our approach under more difficult conditions. In particular,
the MAP inference procedures are initialized to random clusterings (in regions riddled with the
type of local optima discussed in the introduction). We then compare greedy MAP inference on a
model whose parameters were learned with RL, to Metropolis-Hastings on a model with parameters
learned from SampleRank. More specifically, we generate a set of ten random configurations from
the test corpus and run both algorithms, averaging the results over the ten runs. The first two rows
of Table 2 summarizes this experiment. Even though reinforcement learning?s policy requires it
to be greedy with respect to the q-function, we observe that it is able to better escape the random
initial configuration than the Metropolis-Hastings method. This is demonstrated in the first rows
of Table 2. Although both systems perform worse under these conditions than in the previous experiment, reinforcement learning does much better in this situation, indicating that the
q-function learned is fairly robust and capable of generalizing to random regions of the space.
After observing Metropolis-Hastings's tendency to get stuck in regions of lower score than reinforcement learning, we test RL to see if it would fall victim to these same optima. In the last two rows
of Table 2 we record the results of re-running both reinforcement learning and Metropolis-Hastings
(on the SampleRank model) from the configurations Metropolis-Hastings became stuck. We notice
that RL is able to climb out of these optima and achieve a score comparable to our first experiment.
MH is also able to progress out of the optima, demonstrating that the stochastic method is capable
of escaping optima, but perhaps not as quickly on this particular problem.
                  F1     Precision  Recall
RL on random      86.4   87.2       85.6
MH-SR on random   81.1   82.9       79.3
RL on MH-SR       93.0   94.6       91.5
MH-SR on MH-SR    84.3   87.3       81.5
Table 2: Average pairwise-matching precision, recall and F1 over ten random initialization points,
and on the output of MH-SR after 10,000 inference steps.
5 Related Work
The expanded version of this work is our technical report [17], which provides additional detail and
motivation. Our approach is similar in spirit to Zhang and Dietterich who propose a reinforcement
learning framework for solving combinatorial optimization problems [25]. Similar to this approach,
we also rely on generalization techniques in RL in order to directly approximate a policy over unseen test domains. However, our formulation provides a framework that explicitly targets the MAP
problem in large factor graphs and takes advantage of the log-linear representation of such models
in order to employ a well-studied class of generalization techniques in RL known as linear function approximation. Learning generalizable function approximators has also been studied for efficiently
guiding standard search algorithms through experience [26].
There are a number of approaches for learning parameters that specifically target the problem of
MAP inference. For example, the frameworks of LASO [27] and SEARN [28] formulate MAP
in the context of search optimization, where a cost function is learned to score partial (incomplete)
configurations that lead to a goal state. In this framework, actions incrementally construct a solution,
rather than explore the solution space itself. As shown in [28] these frameworks have connections to
learning policies in reinforcement learning. However, the policies are learned over incomplete configurations. In contrast, we formulate parameter learning in factor graphs as an MDP over the space
of complete configurations from which a variety of RL methods can be used to set the parameters.
Another approach that targets the problem of MAP inference is SampleRank [11, 23], which computes atomic gradient updates from jumps in the local search space. This method has the advantage
of learning over the space of complete configurations, but ignores the issue of delayed reward.
6 Conclusions and Future Work
We proposed an approach for solving the MAP inference problem in large factor graphs by using
reinforcement learning to train model parameters. RL allows us to evaluate jumps in the configuration space based on a value function that optimizes the long term improvement in model scores.
Hence ? unlike most search optimization approaches ? the system is able to move out of local optima
while aiming for the MAP configuration. Benefitting from the log-linear nature of factor graphs such as CRFs, we are also able to employ well-studied RL linear function approximation techniques for learning generalizable value functions that are able to provide value estimates on the test set. Our
experiments over a real world domain shows impressive error reduction when compared to the other
approaches. Future work should investigate additional RL paradigms for training models such as
actor-critic.
Acknowledgments
This work was supported in part by the CIIR; SRI #27-001338 and ARFL #FA8750-09-C-0181,
CIA, NSA and NSF #IIS-0326249; Army #W911NF-07-1-0216 and UPenn subaward #103-548106;
and UPenn NSF #IS-0803847. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.
References
[1] Andrew McCallum, Dayne Freitag, and Fernando Pereira. Maximum entropy Markov models for information extraction and segmentation. In International Conference on Machine Learning (ICML), 2000.
[2] John D. Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning (ICML), 2001.
[3] Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In NIPS, 2003.
[4] Ryan McDonald and Fernando Pereira. Online learning of approximate dependency parsing algorithms. In European Chapter of the Association for Computational Linguistics (EACL), pages 81-88, 2006.
[5] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62, 2006.
[6] Brian Milch, Bhaskara Marthi, and Stuart Russell. BLOG: Relational Modeling with Unknown Objects. PhD thesis, University of California, Berkeley, 2006.
[7] Andrew McCallum, Khashayar Rohanimanesh, Michael Wick, Karl Schultz, and Sameer Singh. FACTORIE: Efficient probabilistic programming via imperative declarations of structure, inference and learning. In Neural Information Processing Systems (NIPS) Workshop on Probabilistic Programming, Vancouver, BC, Canada, 2008.
[8] Aria Haghighi and Dan Klein. Unsupervised coreference resolution in a nonparametric Bayesian model. In Association for Computational Linguistics (ACL), 2007.
[9] Hanna Pasula, Bhaskara Marthi, Brian Milch, Stuart Russell, and Ilya Shpitser. Identity uncertainty and citation matching. In Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[10] Sonia Jain and Radford M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158-182, 2004.
[11] Aron Culotta. Learning and inference in weighted logic with application to natural language processing. PhD thesis, University of Massachusetts, May 2008.
[12] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, March 1998.
[13] Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, pages 9-44, 1988.
[14] Robert H. Crites and Andrew G. Barto. Improving elevator performance using reinforcement learning. In Advances in Neural Information Processing Systems 8, pages 1017-1023. MIT Press, 1996.
[15] Wei Zhang and Thomas G. Dietterich. Solving combinatorial optimization tasks by reinforcement learning: A general methodology applied to resource-constrained scheduling. Journal of Artificial Intelligence Research, 1, 2000.
[16] Gerald Tesauro. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58-68, 1995.
[17] Khashayar Rohanimanesh, Michael Wick, Sameer Singh, and Andrew McCallum. Reinforcement learning for MAP inference in large factor graphs. Technical Report #UM-CS-2008-040, University of Massachusetts, Amherst, 2008.
[18] Christopher J. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, 1989.
[19] Christopher J. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3):279-292, May 1992.
[20] Andrew McCallum and Charles Sutton. Piecewise training with parameter independence diagrams: Comparing globally- and locally-trained linear-chain CRFs. In NIPS Workshop on Learning with Structured Outputs, 2004.
[21] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[22] Culotta. First. In International Joint Conference on Artificial Intelligence, 2007.
[23] Khashayar Rohanimanesh, Michael Wick, and Andrew McCallum. Inference and learning in large factor graphs with adaptive proposal distributions. Technical Report #UM-CS-2009-028, University of Massachusetts, Amherst, 2009.
[24] AnHai Doan, Jayant Madhavan, Pedro Domingos, and Alon Y. Halevy. Learning to map between ontologies on the semantic web. In WWW, page 662, 2002.
[25] Wei Zhang and Thomas G. Dietterich. A reinforcement learning approach to job-shop scheduling. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1114-1120, 1995.
[26] Justin Boyan and Andrew W. Moore. Learning evaluation functions to improve optimization by local search. J. Mach. Learn. Res., 1:77-112, 2001.
[27] Hal Daumé III and Daniel Marcu. Learning as search optimization: approximate large margin methods for structured prediction. In International Conference on Machine Learning (ICML), 2005.
[28] Hal Daumé III, John Langford, and Daniel Marcu. Search-based structured prediction. Machine Learning, 2009.
An Infinite Factor Model Hierarchy
Via a Noisy-Or Mechanism
Aaron C. Courville, Douglas Eck and Yoshua Bengio
Department of Computer Science and Operations Research
University of Montréal
Montréal, Québec, Canada
{courvila,eckdoug,bengioy}@iro.umontreal.ca
Abstract
The Indian Buffet Process is a Bayesian nonparametric approach that models objects as arising from an infinite number of latent factors. Here we extend the latent
factor model framework to two or more unbounded layers of latent factors. From a
generative perspective, each layer defines a conditional factorial prior distribution
over the binary latent variables of the layer below via a noisy-or mechanism. We
explore the properties of the model with two empirical studies, one digit recognition task and one music tag data experiment.
1 Introduction
The Indian Buffet Process (IBP) [5] is a Bayesian nonparametric approach that models objects as
arising from an unbounded number of latent features. One of the main motivations for the IBP
is the desire for a factorial representation of data, with each element of the data vector modelled
independently, i.e. as a collection of factors rather than as monolithic wholes as assumed by other
modeling paradigms such as mixture models. Consider music tag data collected through the internet
service provider Last.fm. Users of the service label songs and artists with descriptive tags that
collectively form a representation of an artist or song. These tags can then be used to organize
playlists around certain themes, such as music from the '80s. The top 8 tags for the popular band
R ADIOHEAD are: alternative, rock, alternative rock, indie, electronic, britpop, british, and indie
rock. The tags point to various facets of the band, for example that they are based in Britain, that
they make use of electronic music and that their style of music is alternative and/or rock. These
facets or features are not mutually exclusive properties but represent some set of distinct aspects of
the band.
Modeling such data with an IBP allows us to capture the latent factors that give rise to the tags,
including inferring the number of factors characterizing the data. However the IBP assumes these
latent features are independent across object instances. Yet in many situations, a more compact
and/or accurate description of the data could be obtained if we were prepared to consider dependencies between latent factors. Despite there being a wealth of distinct factors that collectively describe
an artist, it is clear that the co-occurrence of some features is more likely than others. For example,
factors associated with the tag alternative are more likely to co-occur with those associated with the
tag indie than those associated with tag classical.
The main contribution of this work is to present a method for extending infinite latent factor models to two or more unbounded layers of factors, with upper-layer factors defining a factorial prior
distribution over the binary factors of the layer below. In this framework, the upper-layer factors
express correlations between lower-layer factors via a noisy-or mechanism. Thus our model may
be interpreted as a Bayesian nonparametric version of the noisy-or network [6, 8]. In specifying the
model and inference scheme, we make use of the recent stick-breaking construction of the IBP [10].
For simplicity of presentation, we focus on a two-layer hierarchy, though the method extends readily
to higher-order cases. We show how the complete model is amenable to efficient inference via a
Gibbs sampling procedure and compare performance of our hierarchical method with the standard
IBP construction on both a digit modeling task and a music genre-tagging task.
2 Latent Factor Modeling
Consider a set of N objects or exemplars: x_{1:N} = [x_1, x_2, ..., x_N]. We model the nth object with the distribution x_n | z_{n,1:K}, θ ~ F(z_{n,1:K}, θ_{1:K}), with model parameters θ_{1:K} = [θ_k]_{k=1}^{K} (where θ_k ~ H indep. ∀k) and feature variables z_{n,1:K} = [z_{nk}]_{k=1}^{K}, which we take to be binary: z_{nk} ∈ {0, 1}. We denote the presence of feature k in example n as z_{nk} = 1 and its absence as z_{nk} = 0. Features present in an object are said to be active, while absent features are inactive. Collectively, the features form a typically sparse binary N × K feature matrix, which we denote as z_{1:N,1:K}, or simply Z. For each feature k, let π_k be the prior probability that the feature is active. The collection of K probabilities π_{1:K} is assumed to be mutually independent and distributed according to a Beta(α/K, 1) prior. Summarizing the full model, we have (indep. ∀n, k):

$$x_n \mid z_{n,1:K}, \theta \sim F(z_{n,1:K}, \theta), \qquad z_{nk} \mid \pi_k \sim \text{Bernoulli}(\pi_k), \qquad \pi_k \mid \alpha \sim \text{Beta}\!\left(\frac{\alpha}{K}, 1\right)$$
According to the standard development of the IBP, we can marginalize over the variables π_{1:K} and take the limit K → ∞ to recover a distribution over an unbounded binary feature matrix Z. In the development of the inference scheme for our hierarchical model, we make use of an alternative characterization of the IBP: the IBP stick-breaking construction [10]. As with the stick-breaking construction of the Dirichlet process (DP), the IBP stick-breaking construction provides a direct characterization of the random latent feature probabilities via an unbounded sequence. Consider once again the finite latent factor model described above. Letting K → ∞, Z now possesses an unbounded number of columns with a corresponding unbounded set of random probabilities [π_1, π_2, ...]. Re-arranged in decreasing order, π_{(1)} > π_{(2)} > ..., these factor probabilities can be expressed recursively as

$$\pi_{(k)} = U_{(k)} \pi_{(k-1)} = \prod_{l=1}^{k} U_{(l)}, \qquad U_{(k)} \stackrel{i.i.d.}{\sim} \text{Beta}(\alpha, 1).$$
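The recursion is simple to simulate. Below is a minimal numpy sketch (not from the paper; the truncation level K_max and the value of α are purely illustrative) that draws a decreasing sequence of feature probabilities and a binary feature matrix from them:

```python
import numpy as np

def ibp_stick_breaking(alpha, K_max, rng=np.random.default_rng(0)):
    """Draw decreasing feature probabilities pi_(1) > pi_(2) > ... via the
    IBP stick-breaking recursion: pi_(k) = U_(k) * pi_(k-1), U_(k) ~ Beta(alpha, 1)."""
    U = rng.beta(alpha, 1.0, size=K_max)
    return np.cumprod(U)                 # pi_(k) = prod_{l <= k} U_(l)

def sample_Z(pi, N, rng=np.random.default_rng(1)):
    """Sample an N x K binary feature matrix given feature probabilities pi."""
    return (rng.random((N, len(pi))) < pi).astype(int)

pi = ibp_stick_breaking(alpha=2.0, K_max=50)
Z = sample_Z(pi, N=10)
```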
3 A Hierarchy of Latent Features Via a Noisy-OR Mechanism
In this section we extend the infinite latent features framework to incorporate interactions between
multiple layers of unbounded features. We begin by defining a finite version of the model before
considering the limiting process. We consider here the simplest hierarchical latent factor model
consisting of two layers of binary latent features: an upper-layer binary latent feature matrix Y with
elements ynj , and a lower-layer binary latent feature matrix Z with elements znk . The probability
distribution over the elements y_nj is defined as previously, in the limit construction of the IBP: y_nj | ν_j ~ Bernoulli(ν_j), with ν_j | α_y ~ Beta(α_y/J, 1). The lower binary variables z_nk are also defined as Bernoulli-distributed random quantities:

$$z_{nk} \mid y_{n,:}, V_{:,k} \sim \text{Bernoulli}\Big(1 - \prod_j (1 - y_{nj} V_{jk})\Big) \qquad \text{indep. } \forall n, k. \qquad (1)$$
However, here the probability that z_nk = 1 is a function of the upper binary variables y_{n,:} and the kth column of the weight matrix V, with probabilities V_jk ∈ [0, 1] connecting y_nj to z_nk. The crux of the model is how y_nj interacts with z_nk via the noisy-or mechanism defined in Eq. (1). The binary y_nj modulates the involvement of the V_jk terms in the product, which in turn modulates P(z_nk = 1 | y_{n,:}, V_{:,k}). The noisy-or mechanism interacts positively, in the sense that changing an element y_nj from inactive to active can only increase P(z_nk = 1 | y_{n,:}, V_{:,k}), or leave it unchanged in the case where V_jk = 0. We interpret the active y_{n,:} as possible causes of the activation of the individual z_nk, ∀k. Through the weight matrix V, every element of Y_{n,1:J} is connected to every element of Z_{n,1:K}; thus V is a random matrix of size J × K. In the case of finite J and K, an obvious choice of prior for V is V_jk ~ Beta(a, b) i.i.d. ∀j, k. However, looking ahead to the case where J → ∞ and K → ∞, the prior over V will require some additional structure.
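To make the mechanism concrete, here is a small numpy helper (an illustrative sketch, not code from the paper) that evaluates the noisy-or activation probabilities of Eq. (1) for a batch of objects; the example values of y and V are arbitrary:

```python
import numpy as np

def noisy_or_prob(y, V):
    """P(z_nk = 1 | y_n, V) = 1 - prod_j (1 - y_nj * V_jk).

    y : (N, J) binary upper-layer features
    V : (J, K) weights in [0, 1]
    returns (N, K) activation probabilities for the lower layer."""
    # product over j taken in the log domain for numerical safety when J is large
    log_fail = np.log1p(-(y[:, :, None] * V[None, :, :]))   # (N, J, K)
    return 1.0 - np.exp(log_fail.sum(axis=1))

y = np.array([[1, 0, 1]])
V = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(noisy_or_prob(y, V))   # P(z_k = 1) for the single object: [[0.92, 0.82]]
```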
Recently, [11] introduced the Hierarchical Beta Process (HBP) and elucidated the relationship between this and the Indian Buffet Process. We use a variant of the HBP to define a prior over V :
$$\pi_k \sim \text{Beta}(\alpha_z/K, 1), \qquad V_{jk} \mid \pi_k \sim \text{Beta}\big(c\pi_k,\; c(1 - \pi_k) + 1\big) \qquad \text{indep. } \forall k, j, \qquad (2)$$
[Figure 1: graphical model (left) and stick-breaking summary (right). The right panel reads: ν_j = ∏_{l=1}^{j} Q_l with Q_l ~ i.i.d. Beta(α_y, 1); π_k = ∏_{l=1}^{k} R_l with R_l ~ i.i.d. Beta(α_z, 1); V_jk ~ Beta(cπ_k, c(1 − π_k) + 1); y_nj ~ Bern(ν_j); z_nk ~ Bern(1 − ∏_j (1 − y_nj V_jk)).]
Figure 1: Left: A graphical representation of the 2-layer hierarchy of infinite binary factor models. Right:
Summary of the hierarchical infinite noisy-or factor model in the stick-breaking parametrization.
where each column of V (indexed by k) is constrained to share a common prior. Structuring the prior this way allows us to maintain a well-behaved prior over the Z matrix as we let K → ∞, grouping the values of V_jk across j while E[π_k] → 0. However, beyond the region of very small π_k (0 < π_k << 1), we would like the weights V_jk to vary more independently. Thus we modify the model of [11] to include the +1 term in the prior over V_jk (in Eq. (2)) and we limit c ≥ 1. Fig. 1 shows a graphical representation of the complete 2-layer hierarchical noisy-or factor model, as J → ∞ and K → ∞.
Finally, we augment the model with an additional random matrix A with multinomial elements A_nk, assigning each instance of z_nk = 1 to the index j of the active upper-layer unit y_nj responsible for causing the event. The probability that A_nk = j is defined via a familiar stick-breaking scheme. By enforcing an (arbitrary) ordering over the indices j = 1, ..., J, we can view the noisy-or mechanism defined in Eq. (1) as specifying, for each z_nk, an ordered series of binary trials (i.e., coin flips). For each z_nk, we proceed through the ordered set of elements {V_jk, y_nj}_{j=1,2,...}, performing random trials. With probability y_{nj'} V_{j'k}, trial j' is deemed a "success" and we set z_nk = 1, A_nk = j', and no further trials are conducted for {n, k, j > j'}. Conversely, with probability (1 − y_{nj'} V_{j'k}) the trial is deemed a "failure" and we move on to trial j' + 1. Since all trials j associated with inactive upper-layer features are failures with probability one (because y_nj = 0), we need only consider the trials for which y_nj = 1. If, for a given z_nk, all trials j for which y_nj = 1 (active) are failures, then we set z_nk = 0 with probability one. The probability associated with the event z_nk = 0 is therefore given by the product of the failure probabilities for each of the J trials, P(z_nk = 0 | y_{n,:}, V_{:,k}) = ∏_{j=1}^{J} (1 − y_nj V_jk), and with P(z_nk = 1 | y_{n,:}, V_{:,k}) = 1 − P(z_nk = 0 | y_{n,:}, V_{:,k}) we arrive at the noisy-or mechanism given in Eq. (1). This process is similar to the sampling process associated with the Dirichlet process stick-breaking construction [7]. Indeed, the process described above specifies a stick-breaking construction of a generalized Dirichlet distribution [1] over the multinomial probabilities corresponding to the A_nk. The generalized Dirichlet distribution defined in this way has the important property that it is conjugate to multinomial sampling. A sketch of this trial process is given below.
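A direct simulation of the trials makes the equivalence easy to check numerically. The sketch below is illustrative (the sentinel value -1 stands in for the paper's "no cause" outcome ∅): it runs the ordered coin flips for a single (n, k) pair and empirically recovers the noisy-or probability.

```python
import numpy as np

def sample_z_and_A(y_n, V_col, rng):
    """Run the ordered trials for one (n, k) pair: trial j succeeds with
    probability y_nj * V_jk; the first success sets z_nk = 1, A_nk = j.
    Returns (z_nk, A_nk), with A_nk = -1 meaning 'no cause' (z_nk = 0)."""
    for j in range(len(y_n)):
        if y_n[j] == 1 and rng.random() < V_col[j]:
            return 1, j
    return 0, -1

rng = np.random.default_rng(0)
y_n = np.array([1, 0, 1])
V_col = np.array([0.6, 0.5, 0.7])
# The empirical mean should match 1 - (1 - 0.6)(1 - 0.7) = 0.88
draws = [sample_z_and_A(y_n, V_col, rng)[0] for _ in range(20000)]
print(np.mean(draws))
```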
With the generative process specified as above, we can define the posterior distribution over the weights V given the assignment matrix A and the latent feature matrix Y. Let M_jk = Σ_{n=1}^{N} I(A_nk = j) be the number of times that the jth trial was a success for z_{:,k} (i.e., the number of times y_nj caused the activation of z_nk), and let N_jk = Σ_{n=1}^{N} y_nj I(A_nk > j), that is, the number of times that the jth trial was a failure for z_nk despite y_nj being active. Finally, let us also denote the number of times y_{:,j} is active: N_j = Σ_{n=1}^{N} y_nj. Given these quantities, the posterior distributions for the model parameters ν_j and V_jk are given by:

$$\nu_j \mid Y \sim \text{Beta}(\alpha_y/J + N_j,\; 1 + N - N_j) \qquad (3)$$
$$V_{jk} \mid Y, A \sim \text{Beta}\big(c\pi_k + M_{jk},\; c(1 - \pi_k) + N_{jk} + 1\big) \qquad (4)$$
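Given the count statistics M_jk, N_jk and N_j, both conjugate updates reduce to Beta draws. The following numpy sketch is illustrative (it assumes A stores -1 for the "no cause" outcome) and assembles the counts before resampling ν and V per Eqs. (3)-(4):

```python
import numpy as np

def sample_posteriors(Y, A, alpha_y, c, pi, rng):
    """Resample nu_j and V_jk from Eqs. (3)-(4), given Y (N x J binary) and
    assignments A (N x K, entries in {-1, 0, ..., J-1}; -1 means no cause)."""
    N, J = Y.shape
    K = A.shape[1]
    Nj = Y.sum(axis=0)                                    # times feature j is active
    M = np.zeros((J, K))
    Nfail = np.zeros((J, K))
    for j in range(J):
        M[j] = (A == j).sum(axis=0)                       # successes of trial j
        # failures: y_nj active but the assigned cause came at a later trial
        Nfail[j] = ((A > j) & (Y[:, j] == 1)[:, None]).sum(axis=0)
    nu = rng.beta(alpha_y / J + Nj, 1.0 + N - Nj)            # Eq. (3)
    V = rng.beta(c * pi + M, c * (1.0 - pi) + Nfail + 1.0)   # Eq. (4); pi broadcasts over rows
    return nu, V
```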
These conjugate relationships are exploited in the Gibbs sampling procedure described in Sect. 4. By integrating out V_jk, we can recover (up to a constant) the posterior distribution over π_k:

$$p(\pi_k \mid A_{:,k}) \;\propto\; \pi_k^{\alpha_z/K - 1} \prod_{j=1}^{J} \frac{\Gamma(c\pi_k + M_{jk})}{\Gamma(c\pi_k)} \cdot \frac{\Gamma(c(1-\pi_k) + N_{jk} + 1)}{\Gamma(c(1-\pi_k) + 1)} \qquad (5)$$
One property of the marginal likelihood is that wholly inactive elements of Y, which we denote as y_{:,j'} = 0, do not impact the likelihood, as N_{j',k} = 0 and M_{j',k} = 0. This becomes particularly important as we let J → ∞.
Having defined the finite model, it remains to take the limit as both K → ∞ and J → ∞. Taking the limit J → ∞ is relatively straightforward, as the upper-layer factor model naturally tends to an IBP: Y ~ IBP, and its involvement in the remainder of the model is limited to the set of active elements of Y, which remains finite for finite datasets. In taking K → ∞, the distribution over the unbounded π_k converges to that of the IBP, while the conditional distributions over the noisy-or weights V_jk remain simple beta distributions given the corresponding π_k (as in Eq. (4)).
4 Inference
In this section, we describe an inference strategy to draw samples from the model posterior. The
algorithm is based jointly on the blocked Gibbs sampling strategy for truncated Dirichlet distributions [7] and on the IBP semi-ordered slice sampler [10], which we employ at each layer of the
hierarchy. Because both algorithms are based on the strategy of directly sampling an instantiation
of the model parameters, their use together permits us to define an efficient extended blocked Gibbs
sampler over the entire model without approximation.
To facilitate our description of the semi-ordered slice sampler, we separate ν_{1:∞} into two subsets, ν^+_{1:J+} and ν^o_{1:∞}, where ν^+_{1:J+} are the probabilities associated with the set of J+ active upper-layer factors Y+ (those that appear at least once in the dataset, i.e., ∃i : y_{ij'} = 1, 1 ≤ j' ≤ J+) and ν^o_{1:∞} are associated with the unbounded set of inactive features Y^o (those not appearing in the dataset). Similarly, we separate π_{1:∞} into π^+_{1:K+} and π^o_{1:∞}, and Z into corresponding active Z+ and inactive Z^o, where K+ is the number of active lower-layer factors.
4.1 Semi-ordered slice sampling of the upper-layer IBP

The IBP semi-ordered slice sampler maintains an unordered set of active y^+_{1:N,1:J+} with corresponding ν^+_{1:J+} and V_{1:J+,1:K}, while exploiting the IBP stick-breaking construction to sample from the distribution of ordered inactive features, up to an adaptively chosen truncation level controlled by an auxiliary slice variable s_y.
an auxiliary slice variable sy .
Sample sy . The uniformly distributed auxiliary slice variables, sy controls the truncation level of
the upper-layer IBP, where ?? is defined as the smallest probability ? corresponding to an active
feature:
&
'
+
sy | Y, ?1:? ? Uniform(0, ?? ),
?? = min 1, min
?
(6)
j" .
"
+
1?j ?J
As discussed in [10], the joint distribution is given by p(sy , ?1:? , Y ) = p(Y, ?1:? ) ? p(sy |
Y, ?1:? ), where marginalizing over sy preserves the original distribution over Y and ?1:? . However, given sy , the conditional distribution p(ynj " = 1 | Z, sy , ?1:? ) = 0 for all n, j $ such that
?j " < sy . This is the crux of the slice sampling approach: Each sample sy adaptively truncates the
model, with ?1:J > sy . Yet by marginalizing over sy , we can recover samples from the original
non-truncated distribution p(Y, ?1:? ) without approximation.
Sample ν^o_{1:J^o}. For the inactive features, we use adaptive rejection sampling (ARS) [4] to sequentially draw an ordered set of J^o posterior feature probabilities from the distribution

$$p(\nu^o_j \mid \nu^o_{j-1}, y_{:,\geq j} = 0) \;\propto\; \exp\Big( \alpha_y \sum_{n=1}^{N} \tfrac{1}{n} (1 - \nu^o_j)^n \Big)\, (\nu^o_j)^{\alpha_y - 1} (1 - \nu^o_j)^N \, I(0 \leq \nu^o_j \leq \nu^o_{j-1}),$$

until ν^o_{J^o+1} < s_y. The above expression arises from using the IBP stick-breaking construction to marginalize over the inactive elements of ν [10]. For each of the J^o inactive features drawn, the corresponding features y^o_{1:N,1:J^o} are initialized to zero and the corresponding weights V^o_{1:J^o,1:K} are sampled from their prior in Eq. (2). With the probabilities for both the active and a truncated set of inactive features sampled, the set of features is re-integrated into a set of J = J+ + J^o features Y = [y^+_{1:N,1:J+}, y^o_{1:N,1:J^o}] with probabilities ν_{1:J} = [ν^+_{1:J+}, ν^o_{1:J^o}] and corresponding weights V = [(V^+_{1:J+,1:K})^T, (V^o_{1:J^o,1:K})^T]^T.
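In place of a full ARS implementation, this one-dimensional density can also be sampled to good accuracy by inverse-CDF on a dense grid, since it is bounded above by ν^o_{j−1}. A hypothetical sketch (the grid size is arbitrary):

```python
import numpy as np

def sample_inactive_nu(nu_prev, alpha_y, N, rng, grid=2048):
    """Draw one nu^o_j from the semi-ordered slice-sampler density
    p(nu) ~ exp(alpha_y * sum_{n=1..N} (1/n)(1-nu)^n) * nu^(alpha_y-1) * (1-nu)^N
    on (0, nu_prev], using grid-based inverse-CDF instead of ARS."""
    nu = np.linspace(1e-6, nu_prev, grid)
    n = np.arange(1, N + 1)
    tail = alpha_y * ((1.0 - nu[:, None]) ** n / n).sum(axis=1)
    logp = tail + (alpha_y - 1.0) * np.log(nu) + N * np.log1p(-nu)
    p = np.exp(logp - logp.max())          # unnormalized, stabilized
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return nu[np.searchsorted(cdf, rng.random())]

print(sample_inactive_nu(0.4, alpha_y=2.0, N=100, rng=np.random.default_rng(0)))
```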
Sample Y. Given the upper-layer feature probabilities ν_{1:J}, weight matrix V, and the lower-layer binary feature values z_nk, we update each y_nj as follows:

$$p(y_{nj} = 1 \mid \nu_j, z_{n,:}, \nu^*) \;\propto\; \frac{\nu_j}{\nu^*} \prod_{k=1}^{K} p(z_{nk} \mid y_{nj} = 1, y_{n,-j}, V_{:,k}) \qquad (7)$$

The denominator ν* is subject to change if changing y_nj induces a change in ν* (as defined in Eq. (6)); y_{n,−j} represents all elements of y_{n,1:J} except y_nj. The conditional probability of the lower-layer binary variables is given by p(z_nk = 1 | y_{n,:}, V_{:,k}) = 1 − ∏_j (1 − y_nj V_jk).
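In code, the update for a single y_nj amounts to comparing the Bernoulli likelihood of the nth row of Z under y_nj = 1 versus y_nj = 0. The sketch below is a simplification of Eq. (7): the ν*/slice bookkeeping is folded into plain Bernoulli(ν_j) prior odds, so it should be read as illustrative rather than as the full sampler.

```python
import numpy as np

def gibbs_update_y(n, j, Y, Z, V, nu, rng):
    """Resample y_nj in place by comparing row likelihoods under y_nj = 1 vs. 0."""
    logp = np.empty(2)
    for val in (0, 1):
        Y[n, j] = val
        p_on = 1.0 - np.prod(1.0 - Y[n][:, None] * V, axis=0)  # (K,) noisy-or probs
        lik = np.where(Z[n] == 1, p_on, 1.0 - p_on)
        prior = nu[j] if val == 1 else 1.0 - nu[j]
        logp[val] = np.log(prior + 1e-300) + np.log(lik + 1e-300).sum()
    p1 = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))
    Y[n, j] = int(rng.random() < p1)
```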
Sample ν^+_{1:J+}. Once again we separate Y and ν_{1:∞} into a set of active features Y+ with probabilities ν^+_{1:J+}, and a set of inactive features Y^o with ν^o_{1:∞}. The inactive set is discarded, while the active set ν^+_{1:J+} is resampled from the posterior distribution ν^+_j | y_{:,j} ~ Beta(N_j, 1 + N − N_j). At this point we also separate the lower-layer factors into an active set of K+ factors Z+ with corresponding π^+_{1:K+}, V^+_{1:J+,1:K+} and data likelihood parameters θ+; and a discarded inactive set.
4.2 Semi-ordered slice sampling of the lower-layer factor model

Sampling the variables of the lower-layer IFM proceeds analogously to the upper-layer IBP. However, the presence of the hierarchical relationship between the π_k and the V_{:,k} (as defined in Eqs. (3) and (4)) does require some additional attention. We proceed by making use of the marginal distribution over the assignment probabilities to define a second auxiliary slice variable, s_z.

Sample s_z. The auxiliary slice variable is sampled according to the following, where π* is defined as the smallest probability corresponding to an active feature:

$$s_z \mid Z, \pi_{1:\infty} \sim \text{Uniform}(0, \pi^*), \qquad \pi^* = \min\Big(1,\; \min_{1 \leq k' \leq K^+} \pi^+_{k'}\Big).$$

Sample π^o_{1:K^o}. Given s_z and Y, the random probabilities over the inactive lower-layer binary features, π^o_{1:∞}, are sampled sequentially to draw a set of K^o feature probabilities, until π^o_{K^o+1} < s_z. The samples are drawn according to the distribution:
$$p(\pi^o_k \mid \pi^o_{k-1}, Y^+, z_{:,\geq k} = 0) \;\propto\; I(0 \leq \pi^o_k \leq \pi^o_{k-1})\, (\pi^o_k)^{\alpha_z - 1} \prod_{j=1}^{J} \frac{\Gamma(c)}{\Gamma(c + N_j)} \prod_{j=1}^{J} \frac{\Gamma(c(1-\pi^o_k) + N_j)}{\Gamma(c(1-\pi^o_k))} \exp\Bigg( \alpha_z \sum_{i=0}^{N_1 + \cdots + N_J} w_i\, c^i \sum_{l=1}^{i} \frac{1}{l} (1 - \pi^o_k)^l \Bigg) \qquad (8)$$
Eq. (8) arises from the stick-breaking construction of the IBP and from the expression for P(z_{:,>k} = 0 | π^o_k, Y+) derived in the supplementary material [2]. Here we simply note that the w_i are weights derived from the expansion of a product of terms involving unsigned Stirling numbers of the first kind. The distribution over the ordered inactive features is log-concave in log π_k, and is therefore amenable to efficient sampling via adaptive rejection sampling (as was done in sampling ν^o_{1:J^o}). Each of the K^o inactive features is initialized to zero for every data object, Z^o = 0, while the corresponding V^o and likelihood parameters θ^o are drawn from their priors. Once the π_{1:K^o} are drawn, both the active and inactive features of the lower layer are re-integrated into the set of K = K+ + K^o features Z = [Z+, Z^o] with probabilities π_{1:K} = [π^+_{1:K+}, π^o_{1:K^o}], corresponding weight matrix V = [V^+_{1:J+,1:K+}, V^o_{1:J+,1:K^o}], and parameters θ = [θ+, θ^o].
Sample Z. Given Y+ and V, we use Eq. (1) to specify the prior over z_{1:N,1:K}. Then, conditional on this prior, the data X and parameters θ, we sample sequentially for each z_nk:

$$p(z_{nk} = 1 \mid y^+_{n,:}, V_{:,k}, z_{n,-k}, \theta, \pi^*) \;\propto\; \Big(1 - \prod_{j=1}^{J^+} (1 - y^+_{nj} V_{jk})\Big)\, f(x_n \mid z_{n,:}, \theta),$$

where f(x_n | z_{n,:}, θ) is the likelihood function for the nth data object.
Sample A. Given z_nk, y^+_{n,:} and V_{:,k}, we draw the multinomial variable A_nk to assign responsibility, in the event z_nk = 1, to one of the upper-layer features y^+_nj:

$$p(A_{nk} = j \mid z_{nk} = 1, y^+_{n,:}, V_{:,k}) = V_{jk} \prod_{i=1}^{j-1} (1 - y^+_{ni} V_{ik}), \qquad (9)$$

and if y^+_{nj''} = 0 for all j'' > j', then p(A_{nk} = j' | z_nk = 1, y^+_{n,:}, V_{:,k}) = ∏_{i=1}^{j'−1} (1 − y^+_{ni} V_{ik}) to ensure normalization of the distribution. If z_nk = 0, then P(A_nk = ∅) = 1.
Sample V and π^+_{1:K+}. Conditional on Y+, Z and A, the weights V are resampled from Eq. (4), following the blocked Gibbs sampling procedure of [7]. Given the assignments A, the posterior of π^+_k is given (up to a constant) by Eq. (5). This distribution is log-concave in π^+_k; therefore we can once again use ARS to draw samples of the posterior of π^+_k, 1 ≤ k ≤ K+.
5 Experiments
In this section, we present two experiments to highlight the properties and capabilities of our hierarchical infinite factor model. Our goal is to assess, in these two cases, the impact of including an
additional modeling layer. To this end, and in each experiment, we compare our hierarchical model
to the equivalent IBP model. In each case, hyperparameters are specified with respect to the IBP (using cross-validation by evaluating the likelihood of a holdout set) and held fixed for the hierarchical
factor model. Finally, all hyperparameters of the hierarchical model that were not marginalized out were held constant over all experiments, in particular c = 1 and α_z = 1.
5.1 Experiment I: Digits

In this experiment we took examples of images of hand-written digits from the MNIST dataset. Following [10], the dataset consisted of 1000 examples of images of the digit 3, where the handwritten digit images are first preprocessed by projecting onto the first 64 PCA components. To model MNIST digits, we augment both the IBP and the hierarchical model with a matrix G of the same size as Z and with i.i.d. zero-mean, unit-variance elements. Each data object x_n is modeled as x_n | Z, G, θ, σ_x² ~ N((z_{n,:} ⊙ g_{n,:})θ, σ_x² I), where ⊙ denotes the Hadamard (element-wise) product.
The inclusion of G introduces an additional step to our Gibbs sampling procedure; however, the rest of the hierarchical infinite factor model is as described in Sect. 3. In order to assess the success
of our hierarchical IFM in capturing higher-order factors present in the MNIST data, we consider
a de-noising task. Random noise (std=0.5) was added to a post-processed test set and the models
were evaluated on their ability to recover the noise-free version of a set of 500 examples not used in
training. Fig. 2 (a) presents a comparison of the log likelihood of the (noise-free) test-set for both the
hierarchical model and the IBP model. The figure shows that the 2-layer noisy-or model gives significantly more likelihood to the pre-corrupted data than the IBP, indicating that the noisy-or model
was able to learn useful higher-order structure from MNIST data. One of the potential benefits of
the style of model we propose here is that there is the opportunity for latent factors at one layer to
share features at a lower layer. Fig. 2 illustrates the conditional mode of the random weight matrix
V (conditional on a sample of the other variables) and shows that there is significant sharing of low-level features by the higher-layer factors. Fig. 2 (d)-(e) compare the features (sampled rows of the
? matrix) learned by both the IBP and by the hierarchical noisy-or factor model. Interestingly, the
sampled features learned in the hierarchical model appear to be slightly more spatially localized and
sparse. Fig. 2 (f)-(i) illustrates some of the marginals that arise from the Gibbs sampling inference
process. Interestingly, the IBP model infers a greater number of latent factors than did the 2-layer
noisy-or model (at the first layer). However, the distribution over factors active for each data object is nearly identical. This suggests the possibility that the IBP is maintaining specialized factors that possibly represent a superposition of frequently co-occurring factors that the noisy-or model has captured more compactly.

[Figure 2: nine panels (a)-(i); see caption below.]

Figure 2: (a) The log likelihood of a de-noised testset. Corrupted (with 0.5-std Gaussian noise) versions of test examples were provided to the factor models, and the likelihood of the noise-free testset was evaluated for both an IBP-based model as well as for the 2-layer noisy-or model. The two-layer model showed a substantial improvement in log likelihood. (b) Reconstruction of noisy examples. The top row shows the original values for a collection of digits. The second row shows their corrupted versions, while the third and fourth rows show the reconstructions for the IBP-based model and the 2-layer noisy-or respectively. (c) A subset of the V matrix. The rows of V are indexed by j while the columns of V are indexed by k. The vertical striping pattern is evidence of significant sharing of lower-layer features among the upper-layer factors. (d)-(e) The most frequent 64 features (rows of the θ matrix) for (d) the IBP and for (e) the 2-layer infinite noisy-or factor model. (f) A comparison of the distributions of the number of active elements between the IBP and the noisy-or model. (g) A comparison of the number of active (lower-layer) factors possessed by an object between the IBP and the hierarchical model. (h) The distribution of upper-layer active factors and (i) the number of active factors found in an object.
5.2 Experiment II: Music Tags
Returning to our motivating example from the introduction, we extracted tags and tag frequencies
from the social music website Last.fm using the Audioscrobbler web service. The data is in the form
of counts1 of tag assignment for each artist. Our goal in modeling this data is to reduce this often
noisy collection of tags to a sparse representation for each artist. We will adopt a different approach
to the standard Latent Dirichlet Allocation (LDA) document processing strategy of modeling the
document ? or in this case tag collection ? as having been generated from a mixture of tag multinomials. We wish to distinguish between an artist that everyone agrees is both country and rock versus
an artist that people are divided whether they are rock or country.
To this end, we can again make use of the conjugate noisy-or model to model the count data in the form of binomial probabilities; i.e., to the model defined in Sect. 3 we add random weights W_kt ~ Beta(a, b) i.i.d. ∀k, t, connecting Z to the data X via the distribution X_nt ~ Binomial(1 − ∏_k (1 − z_nk W_kt), C), where C is the limit on the number of possible counts achievable. This would correspond to the number of people who ever contributed a tag to that artist; in the case of the Last.fm data, C = 100. Maintaining conjugacy over W requires us to add an assignment parameter B_nt, whose role is analogous to A_nk. With the model thus specified, we present a dataset of 1000 artists with a vocabulary size of 100 tags, representing a total of 312134 counts. Fig. 3 shows the result of running the Gibbs sampler for 10000 iterations. As the figure shows, both layers are quite sparse. Generally, most of the features learned in the first layer are dominated by one to three tags. Most features at the second layer cover a broader range of tags. The two most probable factors to emerge at the upper layer are associated with the tags (in order of probability):

1. electronic, electronica, chillout, ambient, experimental
2. pop, rock, 80s, dance, 90s

¹ The publicly available data is normalized to maximum value 100.

[Figure 3: four histogram panels (a)-(d); see caption below.]

Figure 3: The distribution of active features for the noisy-or model at the (a) lower layer and (c) the upper layer. The distribution over active features per data object for the (b) upper layer and (d) lower layer.
The ability of the 2-layer noisy-or model to capture higher-order structure in the tag data was again
assessed through a comparison to the standard IBP using the noisy-or observation model above. The model was also compared against a more standard latent factor model, with the latent representation ψ_nk modeling the data through a generalized linear model, X_nt ~ Binomial(Logistic(ψ_{n,:} O_{:,t}), C), where Logistic(·) is the logistic sigmoid link function and the latent representations ψ_nk ~ N(0, σ_ψ) are normally distributed. In this case, inference is performed via a Metropolis-Hastings MCMC method that mixes readily. The test data was missing 90% of the tags, and the models were evaluated by their success in imputing the missing data from the 10% that remained. Here again, the 2-layer noisy-or model achieved superior performance, as measured by the marginal log likelihood on a holdout set of 600 artist-tag collections. Interestingly, both sparse models (the IBP and the noisy-or model) dramatically outperformed the generalized latent linear model.
Method                                     | NLL
-------------------------------------------|---------------------------
Gen. latent linear model (Best Dim = 30)   | 8.7781e05 ± 0.02e05
IBP                                        | 5.638e05 ± 0.001e05
2-Layer Noisy-Or IFM                       | 5.542e05 ± 0.001e05

6 Discussion
We have defined a noisy-or mechanism that allows one infinite factor model to act as a prior for
another infinite factor model. The model permits high-order structure to be captured in a factor
model framework while maintaining an efficient sampling algorithm. The model presented here is
similar in spirit to the hierarchical Beta process [11], in the sense that both models define a hierarchy
of unbounded latent factor models. However, while the hierarchical Beta process can be seen as a
way to group objects in the data-set with similar features, our model provides a way to group features
that frequently co-occur in the data-set. It is perhaps more similar in spirit to the work of [9] who
also sought a means of associating latent factors in an IBP; however, their work does not act directly on the unbounded binary factors as ours does. Recently, the question of how to define a hierarchical factor model to induce correlations between lower-layer factors was addressed by [3] with their IBP-IBP model. However, unlike our model, where dependencies are induced by the upper-layer factors via a noisy-or mechanism, the IBP-IBP model captures correlations via an AND construct through
the interaction of binary factors.
Acknowledgments
The authors acknowledge the support of NSERC and the Canada Research Chairs program. We also
thank Last.fm for making the tag data publicly available and Paul Lamere for his help in processing
the tag data.
References

[1] Robert J. Connor and James E. Mosimann. Concepts of independence for proportions with a generalization of the Dirichlet distribution. Journal of the American Statistical Association, 64(325):194-206, 1969.
[2] Aaron C. Courville, Douglas Eck, and Yoshua Bengio. An infinite factor model hierarchy via a noisy-or mechanism: Supplemental material. Supplement to the NIPS paper.
[3] Finale Doshi-Velez and Zoubin Ghahramani. Correlated nonparametric latent feature models. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[4] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41(2):337-348, 1992.
[5] Tom Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems 18, Cambridge, MA, 2006. MIT Press.
[6] Max Henrion. Practical issues in constructing a Bayes' belief network. In Proceedings of the Third Annual Conference on Uncertainty in Artificial Intelligence (UAI-87), pages 132-139, New York, NY, 1987. Elsevier Science.
[7] Hemant Ishwaran and Lancelot F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161-173, 2001.
[8] Michael Kearns and Yishay Mansour. Exact inference of hidden structure from sample data in noisy-or networks. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pages 304-310, 1998.
[9] Piyush Rai and Hal Daumé III. The infinite hierarchical factor regression model. In Daphne Koller, Dale Schuurmans, Yoshua Bengio, and Léon Bottou, editors, Advances in Neural Information Processing Systems 21, 2009.
[10] Yee Whye Teh, Dilan Görür, and Zoubin Ghahramani. Stick-breaking construction for the Indian buffet process. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), 2007.
[11] Romain Thibaux and Michael I. Jordan. Hierarchical beta process and the Indian buffet process. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), 2007.
The "tree-dependent components" of natural scenes
are edge filters
Daniel Zoran
Interdisciplinary Center for Neural Computation
Hebrew University of Jerusalem
[email protected]
Yair Weiss
School of Computer Science
Hebrew University of Jerusalem
[email protected]
Abstract
We propose a new model for natural image statistics. Instead of minimizing dependency between components of natural images, we maximize a simple form of
dependency in the form of tree-dependencies. By learning filters and tree structures which are best suited for natural images we observe that the resulting filters
are edge filters, similar to the famous ICA on natural images results. Calculating
the likelihood of an image patch using our model requires estimating the squared
output of pairs of filters connected in the tree. We observe that after learning,
these pairs of filters are predominantly of similar orientations but different phases,
so their joint energy resembles models of complex cells.
1 Introduction
Many models of natural image statistics have been proposed in recent years [1, 2, 3, 4]. A common
goal of many of these models is finding a representation in which components or sub-components
of the image are made as independent or as sparse as possible [5, 6, 2]. This has been found to be a
difficult goal, as natural images have a highly intricate structure and removing dependencies between
components is hard [7]. In this work we take a different approach: instead of minimizing dependence between components, we try to maximize a simple form of dependence, namely tree dependence.
It would be useful to place this model in context of previous works about natural image statistics.
Many earlier models are described by the marginal statistics solely, obtaining a factorial form of the
likelihood:
$$p(x) = \prod_i p_i(x_i) \qquad (1)$$
The most notable model of this approach is Independent Component Analysis (ICA), where one
seeks to find a linear transformation which maximizes independence between components (thus fitting well with the aforementioned factorization). This model has been applied to many scenarios,
and proved to be one of the great successes of natural image statistics modeling with the emergence
of edge filters [5]. This approach has two problems. The first is that dependencies between components are still very strong, even with the learned transformation seeking to remove them. Second, it has been shown that ICA achieves, after the learned transformation, only marginal gains when measured quantitatively against simpler methods like PCA [7] in terms of redundancy reduction. A
different approach was taken recently in the form of radial Gaussianization [8], in which components which are distributed in a radially symmetric manner are made independent by transforming
them non-linearly into a radial Gaussian, and thus, independent from one another.
A more elaborate approach, related to ICA, is Independent Subspace Component Analysis or ISA.
In this model, one looks for independent subspaces of the data, while allowing the sub-components
of each subspace to be dependent:

$$p(x) = \prod_k p_k(x_{i \in K_k}) \qquad (2)$$

Figure 1: Our model with respect to marginal models such as ICA (a) and ISA-like models (b). Our model, being a tree-based model (c), allows components to belong to more than one subspace, and the subspaces are not required to be independent.
This model has been applied to natural images as well and has been shown to produce the emergence
of phase invariant edge detectors, akin to complex cells in V1 [2].
Independent models have several shortcomings, but by far the most notable one is the fact that the resulting components are, in fact, highly dependent. First, dependency between the responses of ICA filters has been reported many times [2, 7]. Also, dependencies between ISA components have been observed [9]. Given these robust dependencies between filter outputs, it is somewhat peculiar that in order to get simple cell properties one needs to assume independence. In this work we ask whether it is possible to obtain V1-like filters in a model that assumes dependence.
In our model we assume the filter distribution can be described by a tree graphical model [10] (see
Figure 1). Degenerate cases of tree graphical models include ICA (in which no edges are present)
and ISA (in which edges are only present within a subspace). But in its non-degenerate form, our
model assumes any two filter outputs may be dependent. We allow components to belong to more
than one subspace, and as a result, do not require independence between them.
2 Model and learning

Our model is comprised of three main components. Given a set of patches, we look for the parameters which maximize the likelihood of a whitened natural image patch z:

$$p(z; W, \beta, T) = p(y_1) \prod_{i=1}^{N} p(y_i \mid y_{pa_i}; \beta) \qquad (3)$$

where y = Wz, T is the tree structure, pa_i denotes the parent of node i, and β is a parameter of the density model (see below for details). The three components we are trying to learn are:
1. The filter matrix W, where every row defines one of the filters. The response of these
filters is assumed to be tree-dependent. We assume that W is orthogonal (and is a rotation
of a whitening transform).
2. The tree structure T which specifies which components are dependent on each other.
3. The probability density function for connected nodes in the tree, which specify the exact
form of dependency between nodes.
All three together describe a complete model for whitened natural image patches, allowing likelihood estimation and exact inference [11].
We perform the learning in an iterative manner: we start by learning the tree structure and density
model from the entire data set, then, keeping the structure and density constant, we learn the filters
via gradient ascent in mini-batches. Going back to the tree structure we repeat the process many
times iteratively. It is important to note that both the filter set and tree structure are learned from the
data, and are continuously updated during learning. In the following sections we will provide details
on the specifics of each part of the model.
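For concreteness, the tree likelihood of Eq. (3) can be evaluated in a few lines once the tree is represented as a parent array. The sketch below is illustrative (not from the paper); `log_pair` and `log_marg` stand for the edge and marginal log densities defined later in Section 2.2, and -1 marks the arbitrarily chosen root:

```python
import numpy as np

def tree_log_likelihood(y, parent, log_pair, log_marg):
    """Eq. (3): log p(y) = log p(y_root) + sum_i log p(y_i | y_pa_i), using
    log p(y_i | y_pa_i) = log_pair(y_i, y_pa_i) - log_marg(y_pa_i)."""
    root = int(np.where(parent == -1)[0][0])
    ll = log_marg(y[root])
    for i in range(len(y)):
        if i != root:
            ll += log_pair(y[i], y[parent[i]]) - log_marg(y[parent[i]])
    return ll

# toy usage with placeholder standard-normal densities:
lp = lambda a, b: -0.5 * (a * a + b * b) - np.log(2 * np.pi)
lm = lambda a: -0.5 * a * a - 0.5 * np.log(2 * np.pi)
print(tree_log_likelihood(np.array([0.1, -0.3, 0.7]), np.array([-1, 0, 0]), lp, lm))
```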
[Figure 2: six density plots over (x1, x2) for β = 0, 0.5, 1; see caption below.]
Figure 2: Shape of the conditional (left three plots) and joint (right three plots) density model in log scale for several values of β, from dependence to independence.
2.1 Learning tree structure
In their seminal paper, Chow and Liu showed how to learn the optimal tree structure approximation
for a multidimensional probability density function [12]. This algorithm is easy to apply to this
scenario, and requires just a few simple steps. First, given the current estimate for the filter matrix
W, we calculate the response of each of the filters with all the patches in the data set. Using these
responses, we calculate the mutual information between each pair of filters (nodes) to obtain a fully
connected weighted graph. The final step is to find a maximal spanning tree over this graph. The
resulting unrooted tree is the optimal tree approximation of the joint distribution function over all
nodes. We will note that the tree is unrooted, and the root can be chosen arbitrarily - this means that
there is no node, or filter, that is more important than the others - the direction in the tree graph is
arbitrary as long as it is chosen in a consistent way.
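A hypothetical implementation of this procedure, using histogram estimates of mutual information and SciPy's spanning-tree routine (negating the weights turns the maximum spanning tree into a minimum one), might look as follows; it assumes all pairwise MI estimates are strictly positive:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(Y, bins=16):
    """Y: (num_patches, num_filters) filter responses. Returns a parent
    array for the maximum-MI spanning tree (-1 marks the arbitrary root)."""
    N, D = Y.shape
    MI = np.zeros((D, D))
    for i in range(D):
        for j in range(i + 1, D):
            pxy, _, _ = np.histogram2d(Y[:, i], Y[:, j], bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
            nz = pxy > 0
            MI[i, j] = MI[j, i] = (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
    T = minimum_spanning_tree(-MI).toarray()   # max spanning tree via negation
    adj = (T != 0) | (T.T != 0)
    parent = -np.ones(D, dtype=int)
    stack, seen = [0], {0}                     # root the unrooted tree at node 0
    while stack:
        u = stack.pop()
        for v in np.nonzero(adj[u])[0]:
            if v not in seen:
                parent[v] = u
                seen.add(v)
                stack.append(v)
    return parent
```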
2.2 Joint probability density functions
Gabor filter responses on natural images exhibit highly kurtotic marginal distributions, with heavy
tails and sharp peaks [13, 3, 14]. Joint pairwise distributions also exhibit this same shape, with
varying degrees of dependency between the components [13, 2]. The density model we use allows
us to capture both the highly kurtotic nature of the distributions, while still allowing varying degrees
of dependence using a mixing variable. We use a mix of two forms of finite, zero mean Gaussian
Scale Mixtures (GSM). In one, the components are assumed to be independent of each other and in
the other, they are assumed to be spherically distributed. The mixing variable linearly interpolates
between the two, allowing us to capture the whole range of dependencies:
$$p(x_1, x_2; \beta) = \beta \, p_{dep}(x_1, x_2) + (1 - \beta) \, p_{ind}(x_1, x_2) \qquad (4)$$
When β = 1 the two components are dependent (unless p is Gaussian), whereas when β = 0 the two components are independent. For the density functions themselves, we use a finite GSM. The dependent case is a scale mixture of bivariate Gaussians:

$$p_{dep}(x_1, x_2) = \sum_k \pi_k \, N(x_1, x_2;\, \sigma_k^2 I) \qquad (5)$$

while the independent case is a product of two independent univariate Gaussians:

$$p_{ind}(x_1, x_2) = \Big( \sum_k \pi_k N(x_1; \sigma_k^2) \Big) \Big( \sum_k \pi_k N(x_2; \sigma_k^2) \Big) \qquad (6)$$
Estimating the parameters π_k and σ_k² for the GSM is done directly from the data using Expectation Maximization. These parameters are the same for all edges and are estimated only once, on the first iteration. See Figure 2 for a visualization of the conditional distribution functions for varying values of β. We will note that the marginal distributions for the two types of joint distributions above are the same. The mixing parameter β is also estimated using EM, but this is done for each edge in the tree separately, thus allowing our model to theoretically capture the fully independent case (ICA) and other degenerate models such as ISA.
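The resulting pairwise density is cheap to evaluate. Below is an illustrative numpy/scipy sketch of Eq. (4) with hypothetical GSM weights and scales; the small additive constant only guards the logarithm:

```python
import numpy as np
from scipy.stats import norm

def log_pair_density(x1, x2, beta, w, sigma):
    """log p(x1, x2; beta), Eq. (4): a beta-weighted mix of a circular
    bivariate GSM (Eq. (5)) and a product of univariate GSMs (Eq. (6)).
    w, sigma: mixture weights and scales of the finite GSM, shape (K,)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    r2 = (x1 ** 2 + x2 ** 2)[..., None]
    p_dep = (w * np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)).sum(-1)
    p1 = (w * norm.pdf(x1[..., None], scale=sigma)).sum(-1)
    p2 = (w * norm.pdf(x2[..., None], scale=sigma)).sum(-1)
    return np.log(beta * p_dep + (1 - beta) * p1 * p2 + 1e-300)

w = np.array([0.6, 0.3, 0.1])
sigma = np.array([0.3, 1.0, 3.0])
print(log_pair_density(0.5, -0.2, beta=0.7, w=w, sigma=sigma))
```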
2.3 Learning tree-dependent components
Given the current tree structure and density model, we can now learn the matrix W via gradient ascent on the log likelihood of the model. All learning is performed on whitened, dimensionally reduced patches. This means that W is an N × N rotation (orthonormal) matrix, where N is the number of dimensions after dimensionality reduction (see details below). Given an image patch z, we multiply it by W to get the response vector y:

$$y = Wz \qquad (7)$$

Now we can calculate the log likelihood of the given patch using the tree model (which we assume is constant at the moment):

$$\log p(y) = \log p(y_{root}) + \sum_{i=1}^{N} \log p(y_i \mid y_{pa_i}) \qquad (8)$$

where pa_i denotes the parent of node i. Now, taking the derivative w.r.t. the r-th row of W:

$$\frac{\partial \log p(y)}{\partial W_r} = \frac{\partial \log p(y)}{\partial y_r}\, z^T \qquad (9)$$

where z is the whitened natural image patch. Finally, we can calculate the derivative of the log likelihood with respect to the r-th element of y:

$$\frac{\partial \log p(y)}{\partial y_r} = \frac{\partial \log p(y_{pa_r}, y_r)}{\partial y_r} + \sum_{c \in C(r)} \left( \frac{\partial \log p(y_r, y_c)}{\partial y_r} - \frac{\partial \log p(y_r)}{\partial y_r} \right) \qquad (10)$$

where C(r) denotes the children of node r. In summary, the gradient ascent rule for updating the rotation matrix W is given by:

$$W_r^{t+1} = W_r^t + \eta\, \frac{\partial \log p(y)}{\partial y_r}\, z^T \qquad (11)$$

where η is the learning rate constant. After each update, the rows of W are orthonormalized.
This gradient ascent rule is applied for several hundreds of patches (see details below), after which
the tree structure is learned again as described in Section 2.1, using the new filter matrix W, repeating this process for many iterations.
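One mini-batch step of this procedure can be sketched as follows (illustrative, not the authors' code; `grad_y_fn` stands for the assembled tree derivative of Eq. (10), and the -tanh score in the usage line is only a placeholder for it):

```python
import numpy as np

def update_W(W, Z_batch, grad_y_fn, eta=0.1):
    """One mini-batch step of Eq. (11): W_r <- W_r + eta * (dlogp/dy_r) z^T,
    with the summed gradient normalized to unit norm and the rows of W
    re-orthonormalized afterwards (here via SVD rather than Gram-Schmidt)."""
    G = np.zeros_like(W)
    for z in Z_batch:                      # z: one whitened patch, shape (N,)
        y = W @ z
        G += np.outer(grad_y_fn(y), z)     # Eq. (9): row r gets (dlogp/dy_r) z^T
    G /= np.linalg.norm(G)
    U, _, Vt = np.linalg.svd(W + eta * G)  # project back onto the rotation manifold
    return U @ Vt

rng = np.random.default_rng(0)
W0 = np.linalg.qr(rng.normal(size=(8, 8)))[0]
W1 = update_W(W0, rng.normal(size=(10, 8)), lambda y: -np.tanh(y))
```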
3 Results and analysis

3.1 Validation
Before running the full algorithm on natural image data, we wanted to validate that it does produce
sensible results with simple synthetic data. We generated noise from four different models, one is
1/f independent Gaussian noise with 8 Discrete Cosine Transform (DCT) filters, the second is a
simple ICA model with 8 DCT filters, and highly kurtotic marginals. The third was a simple ISA
model - 4 subspaces, each with two filters from the DCT filter set. Distribution within the subspace
was a circular, highly kurtotic GSM, and the subspaces were sampled independently. Finally, we
generated data from a simple synthetic tree of DCT filters, using the same joint distributions as for
the ISA model. These four synthetic random data sets were given to the algorithm - results can
be seen in Figure 3 for the ICA, ISA and tree samples. In all cases the model learned the filters
and distribution correctly, reproducing both the filters (up to rotations within the subspace in ISA)
and the dependency structure between the different filters. In the case of 1/f Gaussian noise, any
whitening transformation is equally likely and any value of β is equally likely. Thus in this case,
the algorithm cannot find the tree or the filters.
3.2 Learning from natural image patches
We then ran experiments with a set of natural images [9]¹. These images contain natural scenes such as mountains, fields and lakes. The data set was 50,000 patches, each 16 × 16 pixels large. The patches' DC was removed and they were then whitened using PCA. Dimension was reduced from 256 to 128 dimensions. The GSM for the density model had 16 components. Several initial conditions for the matrix W were tried out (random rotations, identity), but this had little effect on results. Mini-batches of 10 patches each were used for the gradient ascent: the gradient of 10 patches was summed, and then normalized to have unit norm. The learning rate constant η was set to 0.1. Tree structure learning and estimation of the mixing variable β were done every 500 mini-batches. All in all, 50 iterations were done over the data set.

¹ Available at http://www.cis.hut.fi/projects/ica/imageica/

Figure 3: Validation of the algorithm. Noise was generated from three models: top row is ICA, middle row is ISA and bottom row is a tree model. Samples were then given to the algorithm. On the right are the resulting learned tree models. Presented are the learned filters, tree model (with white edges meaning β = 0, black meaning β = 1 and grays intermediate values) and an example of a marginal histogram for one of the filters. It can be seen that in all cases all parts of the model were correctly learned. Filters in the ISA case were learned up to rotation within the subspace, and all filters were learned up to sign. β values for the ICA case were always below 0.1, as were the values of β between subspaces in ISA.
3.3 Filters and tree structure
Figures 4 and 5 show the learned filters (WQ where Q is the whitening matrix) and tree structure
(T) learned from natural images. Unlike the ISA toy data in Figure 3, here a full tree was learned and β is approximately one for all edges. The GSM that was learned for the marginals was highly
kurtotic.
It can be seen that the resulting filters are edge filters at varying scales, positions and orientations. This
is similar to the result one gets when applying ICA to natural images [5, 15]. More interesting is
Figure 4: Left: Filter set learned from 16 × 16 natural image patches. Filters are ordered by PCA eigenvalues, largest to smallest. Resulting filters are edge filters having different orientations, positions, frequencies and phases. Right: The "feature" set learned, that is, columns of the pseudo-inverse of the filter set.
Figure 5: The learned tree graph structure and feature set. It can be seen that neighboring features
on the graph have similar orientation, position and frequency. See Figure 4 for a better view of the
feature details, and see text for full detail and analysis. Note that the figure is rotated CW.
[Figure 6: four scatter plots of child vs. parent optimal orientation, frequency, phase and y-position; see caption below.]
Figure 6: Correlation of optimal parameters in neighboring nodes in the tree graph. Orientation,
frequency and position are highly correlated, while phase seems to be entirely uncorrelated. This
property of correlation in frequency and orientation, while having no correlation in phase, is related
to the ubiquitous energy model of complex cells in V1. See text for further details.
Figure 7: Left: Comparison of log likelihood values of our model with PCA, ICA and ISA. Our
model gives the highest likelihood. Right: Samples taken at random from ICA, ISA and our model.
Samples from our model appear to contain more long-range structure.
the tree graph structure learned along with the filters which is shown in Figure 5. It can be seen that
neighboring filters (nodes) in the tree tend to have similar position, frequency and orientation. Figure
6 shows the correlation of optimal frequency, orientation and position for neighboring filters in the
tree - it is obvious that all three are highly correlated. Also apparent in this figure is the fact that
the optimal phase for neighboring filters has no significant correlation. It has been suggested that
filters which have the same orientation, frequency and position with different phase can be related
to complex cells in V1 [2, 16].
3.4
Comparison to other models
Since our model is a generalization of both ICA and ISA we use it to learn both models. In order to
learn ICA we used the exact same data set, but the tree had no edges and was not learned from the
data (alternatively, we could have just set β = 0). For ISA we used a forest architecture of 2-node
trees, setting β = 1 for all edges (which means a spherically symmetric distribution); no tree structure
was learned. Both models produce edge filters similar to what we learn (and to those in [5, 15, 6]).
The ISA model produces neighboring nodes with similar frequency and orientation, but different
phase, as was reported in [2]. We also compare to a simple PCA whitening transform, using the
same whitening transform and marginals as in the ICA case, but setting W = I.
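A minimal sketch of how these two baselines are encoded as special cases of the tree model, assuming a parent-pointer representation of the tree; all names are illustrative.

```python
import numpy as np

def baseline_config(d, model):
    """Return (parent, beta) encoding the dependency structure for the
    baselines in Section 3.4. parent[i] = -1 marks a root (no parent).

    'ica': no edges at all, so the joint factorizes over filters.
    'isa': a forest of 2-node trees with beta = 1 on each edge, i.e. a
           spherically symmetric density inside each 2-d subspace.
    """
    parent = -np.ones(d, dtype=int)
    beta = np.zeros(d)
    if model == "isa":
        for i in range(1, d, 2):      # pair node 2k+1 with node 2k
            parent[i] = i - 1
            beta[i] = 1.0
    elif model != "ica":
        raise ValueError(model)
    return parent, beta

parent, beta = baseline_config(8, "isa")
print(parent, beta)
```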
We compare the likelihood each model gives for a test set of natural image patches, different from
the one that was used in training. There were 50,000 patches in the test set, and we calculate the
mean log likelihood over the entire set. The table in Figure 7 shows the result - as can be seen, our
model performs better in likelihood terms than both ICA and ISA.
Using a tree model, as opposed to more complex graphical models, allows for easy sampling from
the model. Figure 7 shows 20 random samples taken from our tree model along with samples from
the ICA and ISA models. Note the elongated structures (e.g. in the bottom left sample) in the
samples from the tree model, and compare to patches sampled from the ICA and ISA models.
[Figure 8, right panels: response curves of a learned "complex cell" as a function of Orientation, Frequency and Phase.]
Figure 8: Left: Interpretation of the model. Given a patch, the response of all edge filters is computed
("simple cells"), then at each edge, the corresponding nodes are squared and summed to produce the
response of the "complex cell" this edge represents. Both the response of complex cells and simple
cells is summed to produce the likelihood of the patch. Right: Response of a "complex cell" in our
model to changing phase, frequency and orientation. Response in the y-axis is the sum of squares
of the two filters in this "complex cell". Note that while the cell is selective to orientation and
frequency, it is rather invariant to phase.
3.5
Tree models and complex cells
One way to interpret the model is looking at the likelihood of a given patch under this model. For
the case of β = 1, substituting Equation 4 into Equation 3 yields:
$$\log L(z) = \sum_i \phi\!\left(\sqrt{y_i^2 + y_{pa_i}^2}\right) - \phi\left(|y_{pa_i}|\right) \qquad (12)$$
where $\phi(x) = \log \sum_k \pi_k N(x; \sigma_k^2)$. This form of likelihood has an interesting similarity to models of complex cells in V1 [2, 4]. In Figure 8 we draw a simple two-layer network that computes
the likelihood. The first layer applies linear filters ("simple cells") to the image patch, while the second layer sums the squared outputs of similarly oriented filters from the first layer, having different
phases, which are connected in the tree ("complex cells"). Output is also dependent on the actual
response of the "simple cell" layer. The likelihood here is maximized when both the response of the
parent filter y_{pa_i} and the child y_i is zero, but, given that one filter has responded with a non-zero
value, the likelihood is maximized when the other filter also fires (see the conditional density in
Figure 2). Figure 8 also shows an example of the phase invariance which is present in the learned
"complex cell" (energy of a pair of learned filters connected in the tree) - it seems that the summed squared
response of the shown pair of nodes is relatively invariant to the phase of the stimulus, while it is
selective to both frequency and orientation - the hallmark of "complex cells". Quantifying this result with the AC/DC ratio, as is common [17], we find that around 60% of the edges have an
AC/DC ratio which is smaller than one - meaning they would be classified as complex cells using
standard methods [17].
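To make Equation 12 concrete, the following sketch evaluates the β = 1 log-likelihood of a vector of filter responses under a zero-mean GSM; the handling of root nodes (which contribute their marginal φ(|y_i|)) is an assumption made for illustration, not the authors' implementation.

```python
import numpy as np

def phi(x, weights, sigma):
    """phi(x) = log sum_k pi_k N(x; 0, sigma_k^2), the GSM log-marginal."""
    x = np.atleast_1d(x)[:, None]
    dens = weights * np.exp(-0.5 * (x / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return np.log(dens.sum(axis=1))

def tree_loglik(y, parent, weights, sigma):
    """Equation (12) for beta = 1:
    sum_i phi(sqrt(y_i^2 + y_pa_i^2)) - phi(|y_pa_i|).
    Roots (parent[i] < 0) contribute their marginal phi(|y_i|) (assumed)."""
    ll = 0.0
    for i, p in enumerate(parent):
        if p < 0:
            ll += phi(abs(y[i]), weights, sigma)[0]
        else:
            energy = np.hypot(y[i], y[p])          # "complex cell" energy
            ll += phi(energy, weights, sigma)[0] - phi(abs(y[p]), weights, sigma)[0]
    return ll

weights = np.array([0.7, 0.3]); sigma = np.array([0.5, 2.0])
y = np.array([0.1, -1.3, 0.4, 0.0])
print(tree_loglik(y, parent=np.array([-1, 0, 1, 2]), weights=weights, sigma=sigma))
```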
4
Discussion
We have proposed a new model for natural image statistics which, instead of minimizing dependency
between components, maximizes a simple form of dependency - tree dependency. This model is a
generalization of both ICA and ISA. We suggest a method to learn such a model, including the tree
structure, filter set and density model. When applied to natural image data, our model learns edge
filters similar to those learned with ICA or ISA. The ordering in the tree, however, is interesting neighboring filters in the tree tend to have similar orientation, position and frequency, but different
phase. This decorrelation of phase, in conjunction with correlations in frequency and orientation are
the hallmark of energy models for complex cells in V1.
Future work will include applications of the model to several image processing scenarios. We have
started experimenting with application of this model to image denoising by using belief propagation
for inference, and results are promising.
Acknowledgments
This work has been supported by the AMN foundation and the ISF. The authors wish to thank the
anonymous reviewers for their helpful comments.
References
[1] Y. Weiss and W. Freeman, "What makes a good model of natural images?" Computer Vision
and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pp. 1–8, June 2007.
[2] A. Hyvärinen and P. Hoyer, "Emergence of phase- and shift-invariant features by decomposition
of natural images into independent feature subspaces," Neural Computation, vol. 12, no. 7, pp.
1705–1720, 2000.
[3] A. Srivastava, A. B. Lee, E. P. Simoncelli, and S.-C. Zhu, "On advances in statistical modeling
of natural images," J. Math. Imaging Vis., vol. 18, no. 1, pp. 17–33, 2003.
[4] Y. Karklin and M. Lewicki, "Emergence of complex cell properties by learning to generalize
in natural scenes," Nature, November 2008.
[5] A. J. Bell and T. J. Sejnowski, "The independent components of natural scenes are edge filters,"
Vision Research, vol. 37, pp. 3327–3338, 1997.
[6] B. Olshausen et al., "Emergence of simple-cell receptive field properties by learning a sparse
code for natural images," Nature, vol. 381, no. 6583, pp. 607–609, 1996.
[7] M. Bethge, "Factorial coding of natural images: how effective are linear models in removing
higher-order dependencies?" J. Opt. Soc. Am. A, vol. 23, no. 6, pp. 1253–1268, June 2006.
[8] S. Lyu and E. P. Simoncelli, "Nonlinear extraction of 'independent components' of natural
images using radial Gaussianization," Neural Computation, vol. 21, no. 6, pp. 1485–1519, Jun
2009.
[9] A. Hyvärinen, P. Hoyer, and M. Inki, "Topographic independent component analysis: Visualizing the dependence structure," in Proc. 2nd Int. Workshop on Independent Component Analysis
and Blind Signal Separation (ICA2000), Espoo, Finland. Citeseer, 2000, pp. 591–596.
[10] F. Bach and M. Jordan, "Beyond independent components: trees and clusters," The Journal of
Machine Learning Research, vol. 4, pp. 1205–1233, 2003.
[11] J. Yedidia, W. Freeman, and Y. Weiss, "Understanding belief propagation and its generalizations," Exploring artificial intelligence in the new millennium, pp. 239–269, 2003.
[12] C. Chow and C. Liu, "Approximating discrete probability distributions with dependence trees,"
IEEE Transactions on Information Theory, vol. 14, no. 3, pp. 462–467, 1968.
[13] E. Simoncelli, "Bayesian denoising of visual images in the wavelet domain," Lecture Notes
in Statistics, Springer-Verlag, New York, pp. 291–308, 1999.
[14] A. Levin, A. Zomet, and Y. Weiss, "Learning to perceive transparency from the statistics of
natural scenes," Advances in Neural Information Processing Systems, pp. 1271–1278, 2003.
[15] J. van Hateren, "Independent component filters of natural images compared with simple cells
in primary visual cortex," Proceedings of the Royal Society B: Biological Sciences, vol. 265,
no. 1394, pp. 359–366, 1998.
[16] C. Zetzsche, E. Barth, and B. Wegmann, "The importance of intrinsically two-dimensional
image features in biological vision and picture coding," in Digital images and human vision.
MIT Press, 1993, p. 138.
[17] K. Kording, C. Kayser, W. Einhauser, and P. Konig, "How are complex cell properties adapted
to the statistics of natural stimuli?" Journal of Neurophysiology, vol. 91, no. 1, pp. 206–212,
2004.
Decoupling Sparsity and Smoothness in the
Discrete Hierarchical Dirichlet Process
Chong Wang
Computer Science Department
Princeton University
David M. Blei
Computer Science Department
Princeton University
[email protected]
[email protected]
Abstract
We present a nonparametric hierarchical Bayesian model of document collections
that decouples sparsity and smoothness in the component distributions (i.e., the
"topics"). In the sparse topic model (sparseTM), each topic is represented by a
bank of selector variables that determine which terms appear in the topic. Thus
each topic is associated with a subset of the vocabulary, and topic smoothness is
modeled on this subset. We develop an efficient Gibbs sampler for the sparseTM
that includes a general-purpose method for sampling from a Dirichlet mixture
with a combinatorial number of components. We demonstrate the sparseTM on
four real-world datasets. Compared to traditional approaches, the empirical results
show that sparseTMs give better predictive performance with simpler inferred
models.
1
Introduction
The hierarchical Dirichlet process (HDP) [1] has emerged as a powerful model for the unsupervised
analysis of text. The HDP models documents as distributions over a collection of latent components,
which are often called "topics" [2, 3]. Each word is assigned to a topic, and is drawn from a distribution
over terms associated with that topic. The per-document distributions over topics represent systematic
regularities of word use among the documents; the per-topic distributions over terms encode the
randomness inherent in observations from the topics. The number of topics is unbounded.
Given a corpus of documents, analysis proceeds by approximating the posterior of the topics and topic
proportions. This posterior bundles the two types of regularity. It is a probabilistic decomposition
of the corpus into its systematic components, i.e., the distributions over topics associated with
each document, and a representation of our uncertainty surrounding observations from each of
those components, i.e., the topic distributions themselves. With this perspective, it is important to
investigate how prior assumptions behind the HDP affect our inferences of these regularities.
In the HDP for document modeling, the topics are typically assumed drawn from an exchangeable
Dirichlet, a Dirichlet for which the components of the vector parameter are equal to the same scalar
parameter. As this scalar parameter approaches zero, it affects the Dirichlet in two ways. First,
the resulting draws of random distributions will place their mass on only a few terms. That is, the
resulting topics will be sparse. Second, given observations from such a Dirichlet, a small scalar
parameter encodes increased confidence in the estimate from the observed counts. As the parameter
approaches zero, the expectation of each per-term probability becomes closer to its empirical estimate.
Thus, the expected distribution over terms becomes less smooth. The single scalar Dirichlet parameter
affects both the sparsity of the topics and smoothness of the word probabilities within them.
When employing the exchangeable Dirichlet in an HDP, these distinct properties of the prior have
consequences for both the global and local regularities captured by the model. Globally, posterior
inference will prefer more topics because more sparse topics are needed to account for the observed
1
words of the collection. Locally, the per-topic distribution over terms will be less smooth; the
posterior distribution has more confidence in its assessment of the per-topic word probabilities, and
this results in less smooth document-specific predictive distributions.
The goal of this work is to decouple sparsity and smoothness in the HDP. With the sparse topic model
(sparseTM), we can fit sparse topics with more smoothing. Rather than placing a prior for the entire
vocabulary, we introduce a Bernoulli variable for each term and each topic to determine whether
or not the term appears in the topic. Conditioned on these variables, each topic is represented by a
multinomial distribution over its subset of the vocabulary, a sparse representation.
This prior smoothes only the relevant terms and thus the smoothness and sparsity are controlled
through different hyper-parameters. As we will demonstrate, sparseTMs give better predictive
performance with simpler models than traditional approaches.
2
Sparse Topic Models
Sparse topic models (sparseTMs) aim to separately control the number of terms in a topic, i.e.,
sparsity, and the probabilities of those words, i.e., smoothness. Recall that a topic is a pattern of word
use, represented as a distribution over the fixed vocabulary of the collection. In order to decouple
smoothness and sparsity, we define a topic on a random subset of the vocabulary (giving sparsity),
and then model uncertainty of the probabilities on that subset (giving smoothness). For each topic, we
introduce a Bernoulli variable for each term in the vocabulary that decides whether the term appears
in the topic. Similar ideas of using Bernoulli variables to represent "on" and "off" have been seen
in several other models, such as the noisy-OR model [4] and aspect Bernoulli model [5]. We can
view this approach as a particular "spike and slab" prior [6] over Dirichlet distributions. The "spike"
chooses the terms for the topic; the "slab" only smoothes those terms selected by the spike.
Assume the size of the vocabulary is V. A Dirichlet distribution over the topic is defined on a
(V − 1)-simplex, i.e.,
$$\phi \sim \mathrm{Dirichlet}(\lambda \mathbf{1}), \qquad (1)$$
where $\mathbf{1}$ is a V-length vector of 1s. In a sparseTM, the idea of imposing sparsity is to use Bernoulli
variables to restrict the size of the simplex over which the Dirichlet distribution is defined. Let b
be a V-length binary vector composed of V Bernoulli variables. Thus b specifies a smaller simplex
through the "on"s of its elements. The Dirichlet distribution over the restricted simplex is
$$\phi \sim \mathrm{Dirichlet}(\lambda b), \qquad (2)$$
which is a degenerate Dirichlet distribution over the sub-simplex specified by b. In [7], Friedman and
Singer use this type of distribution for language modeling.
Now we introduce the generative process of the sparseTM. The sparseTM is built on the hierarchical
Dirichlet process for text, which we shorthand HDP-LDA. 1 In the Bayesian nonparametric setting
the number of topics is not specified in advance or found by model comparison. Rather, it is inferred
through posterior inference. The sparseTM assumes the following generative process:
1. For each topic k ∈ {1, 2, . . .}, draw term selection proportion π_k ∼ Beta(r, s).
(a) For each term v, 1 ≤ v ≤ V, draw term selector b_{kv} ∼ Bernoulli(π_k).
(b) Let b_{k,V+1} = 1[Σ_{v=1}^{V} b_{kv} = 0] and b_k = [b_{kv}]_{v=1}^{V+1}.
Draw topic distribution φ_k ∼ Dirichlet(λ b_k).
2. Draw stick lengths β ∼ GEM(γ), which are the global topic proportions.
3. For document d:
(a) Draw per-document topic proportions θ_d ∼ DP(α, β).
(b) For the ith word:
i. Draw topic assignment z_{di} ∼ Mult(θ_d).
ii. Draw word w_{di} ∼ Mult(φ_{z_{di}}).
Figure 1 illustrates the sparseTM as a graphical model.
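A sketch of the generative process under a truncated stick-breaking approximation with K topics; the truncation and all names are assumptions made for illustration.

```python
import numpy as np

def simulate_sparsetm(D=5, N=50, V=30, K=10, r=1.0, s=1.0, lam=0.5,
                      gamma=1.0, alpha=1.0, seed=0):
    """Forward-simulate the sparseTM with a K-topic stick-breaking truncation.
    Returns a list of (z, w) arrays, one pair per document."""
    rng = np.random.default_rng(seed)
    # Step 1: selector proportions, selectors, and sparse topics.
    pi = rng.beta(r, s, size=K)                        # pi_k ~ Beta(r, s)
    b = rng.random((K, V)) < pi[:, None]               # b_kv ~ Bernoulli(pi_k)
    b = np.hstack([b, ~b.any(axis=1, keepdims=True)])  # b_{V+1} guards empty rows
    phi = np.zeros((K, V + 1))
    for k in range(K):
        on = np.flatnonzero(b[k])
        phi[k, on] = rng.dirichlet(lam * np.ones(on.size))  # Dirichlet(lam b_k)
    # Step 2: truncated GEM(gamma) stick lengths (global topic proportions).
    v = rng.beta(1.0, gamma, size=K)
    beta = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    beta /= beta.sum()                                 # renormalize the truncation
    # Step 3: documents; with a finite base measure, DP(alpha, beta) draws
    # reduce to Dirichlet(alpha * beta).
    docs = []
    for _ in range(D):
        theta = rng.dirichlet(alpha * beta)
        z = rng.choice(K, size=N, p=theta)
        w = np.array([rng.choice(V + 1, p=phi[k]) for k in z])
        docs.append((z, w))
    return docs
```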
1
This acronym comes from the fact that the HDP for text is akin to a nonparametric Bayesian version of
latent Dirichlet allocation (LDA).
Figure 1: A graphical model representation for sparseTMs.
The distinguishing feature of the sparseTM is step 1, which generates the latent topics in such a
way that decouples sparsity and smoothness. For each topic k there is a corresponding Beta random
variable π_k and a set of Bernoulli variables b_{kv}, one for each term in the vocabulary. Define the
sparsity of the topic as
$$\mathrm{sparsity}_k \triangleq 1 - \sum_{v=1}^{V} b_{kv}/V. \qquad (3)$$
This is the proportion of zeros in its bank of Bernoulli random variables. Conditioned on the Bernoulli
parameter π_k, the expectation of the sparsity is
$$\mathbb{E}[\mathrm{sparsity}_k \mid \pi_k] = 1 - \pi_k. \qquad (4)$$
The conditional distribution of the topic φ_k given the vocabulary subset b_k is Dirichlet(λ b_k). Thus,
topic k is represented by those terms with non-zero b_{kv}s, and the smoothing is only enforced over
these terms through hyperparameter λ. Sparsity, which is determined by the pattern of ones in b_k, is
controlled by the Bernoulli parameter. Smoothing and sparsity are decoupled.
One nuance is that we introduce b_{V+1} = 1[Σ_{v=1}^{V} b_{kv} = 0]. The reason is that when b_{k,1:V} = 0,
Dirichlet(λ b_{k,1:V}) is not well defined. The term b_{V+1} extends the vocabulary to V + 1 terms, where
the (V + 1)th term never appears in the documents. Thus, Dirichlet(λ b_{k,1:V+1}) is always well defined.
We next compute the marginal distribution of φ_k, after integrating out the Bernoullis b_k and their
parameter π_k:
$$p(\phi_k \mid \lambda, r, s) = \int d\pi_k\, p(\phi_k \mid \lambda, \pi_k)\, p(\pi_k \mid r, s) = \sum_{b_k} p(\phi_k \mid \lambda, b_k) \int d\pi_k\, p(b_k \mid \pi_k)\, p(\pi_k \mid r, s).$$
We see that p(φ_k | λ, r, s) and p(φ_k | λ, π_k) are mixtures of Dirichlet distributions, where the mixture
components are defined over simplices of different dimensions. In total, there are 2^V components;
each configuration of Bernoulli variables b_k specifies one particular component. In posterior inference
we will need to sample from this distribution. Sampling from such a mixture is difficult in general,
due to the combinatorial sum. In the supplement, we present an efficient procedure to overcome this
issue. This is the central computational challenge for the sparseTM.
Steps 2 and 3 mimic the generative process of HDP-LDA [1]. The stick lengths β come from a
Griffiths, Engen, and McCloskey (GEM) distribution [8], which is drawn using the stick-breaking
construction [9],
$$\beta_k' \sim \mathrm{Beta}(1, \gamma), \qquad \beta_k = \beta_k' \prod_{j=1}^{k-1} (1 - \beta_j'), \quad k \in \{1, 2, \ldots\}.$$
Note that $\sum_k \beta_k = 1$ almost surely. The stick lengths are used as a base measure in the Dirichlet
process prior on the per-document topic proportions, θ_d ∼ DP(α, β). Finally, the generative process
for the topic assignments z and observed words w is straightforward.
3
Approximate posterior inference using collapsed Gibbs sampling
Since the posterior inference is intractable in sparseTMs, we turn to a collapsed Gibbs sampling
algorithm for posterior inference. In order to do so, we integrate out topic proportions θ, topic distributions φ and term selectors b analytically. The latent variables needed by the sampling algorithm
are stick lengths β, Bernoulli parameter π and topic assignment z. We fix the hyperparameter s
equal to 1.
To sample β and topic assignments z, we use the direct-assignment method, which is based on an
analogy to the Chinese restaurant franchise (CRF) [1]. To apply direct-assignment sampling, an
auxiliary table count random variable m is introduced. In the CRF setting, we use the following
notation. The number of customers in restaurant d (document) eating dish k (topic) is denoted n_{dk},
and n_{d·} denotes the number of customers in restaurant d. The number of tables in restaurant d serving
dish k is denoted m_{dk}, m_{d·} denotes the number of tables in restaurant d, m_{·k} denotes the number
of tables serving dish k, and m_{··} denotes the total number of tables occupied. (Marginal counts are
represented with dots.) Let K be the current number of topics. The function n_k^{(v)} denotes the number
of times that term v has been assigned to topic k, while n_k^{(·)} denotes the number of times that all the
terms have been assigned to topic k. Index u is used to indicate the new topic in the sampling process.
Note that direct-assignment sampling of β and z is conditioned on π.
The crux for sampling stick lengths β and topic assignments z (conditioned on π) is to compute the
conditional density of w_{di} under topic component k given all data items except w_{di}:
$$f_k^{-w_{di}}(w_{di} = v \mid \pi_k) \triangleq p(w_{di} = v \mid \{w_{d'i'}, z_{d'i'} : z_{d'i'} = k,\ d'i' \neq di\}, \pi_k). \qquad (5)$$
The derivation of equations for computing this conditional density is detailed in the supplement.2 We
summarize our findings as follows. Let $\mathcal{V} \triangleq \{1, \ldots, V\}$ be the set of vocabulary terms, let $B_k \triangleq \{v : n_{k,-di}^{(v)} > 0,\ v \in \mathcal{V}\}$ be the set of terms that have word assignments in topic k after excluding w_{di},
and let |B_k| be its cardinality. Let's assume that B_k is not an empty set.3 We have the following,
$$f_k^{-w_{di}}(w_{di} = v \mid \pi_k) \propto \begin{cases} \big(n_{k,-di}^{(v)} + \lambda\big)\, \mathbb{E}\big[g_{B_k}(X) \mid \pi_k\big] & \text{if } v \in B_k, \\ \lambda\, \pi_k\, \mathbb{E}\big[g_{\bar{B}_k}(\bar{X}) \mid \pi_k\big] & \text{otherwise,} \end{cases} \qquad (6)$$
where
$$g_{B_k}(x) = \frac{\Gamma\big((|B_k| + x)\lambda\big)}{\Gamma\big(n_{k,-di}^{(\cdot)} + 1 + (|B_k| + x)\lambda\big)}, \qquad X \mid \pi_k \sim \mathrm{Binomial}(V - |B_k|, \pi_k), \qquad \bar{X} \mid \pi_k \sim \mathrm{Binomial}(V - |\bar{B}_k|, \pi_k), \qquad (7)$$
and where $\bar{B}_k = B_k \cup \{v\}$. Further note that $\Gamma(\cdot)$ is the Gamma function and $n_{k,-di}^{(v)}$ describes the
corresponding count excluding word w_{di}. In the supplement, we also show that $\mathbb{E}[g_{B_k}(X) \mid \pi_k] > \pi_k\, \mathbb{E}[g_{\bar{B}_k}(\bar{X}) \mid \pi_k]$. The central difference between the algorithms for HDP-LDA and the sparseTM is
the conditional probability in Equation 6, which depends on the selector variables and selector proportions.
We now describe how we sample stick lengths β and topic assignments z. This is similar to the
sampling procedure for HDP-LDA [1].
Sampling stick lengths β. Although β is an infinite-length vector, the number of topics K is
finite at every point in the sampling process. Sampling β can be replaced by sampling β ≜ [β_1, . . . , β_K, β_u] [1]. That is,
$$\beta \mid m \sim \mathrm{Dirichlet}(m_{\cdot 1}, \ldots, m_{\cdot K}, \gamma). \qquad (8)$$
Sampling topic assignments z. This is similar to the sampling approach for HDP-LDA [1] as well.
Using the conditional density f defined in Equations 5 and 6, we have
$$p(z_{di} = k \mid z_{-di}, m, \beta, \pi) \propto \begin{cases} (n_{dk,-di} + \alpha \beta_k)\, f_k^{-w_{di}}(w_{di} \mid \pi_k) & \text{if } k \text{ previously used,} \\ \alpha \beta_u\, f_u^{-w_{di}}(w_{di} \mid \pi_u) & k = u. \end{cases} \qquad (9)$$
If a new topic k_{new} is sampled, then sample ν ∼ Beta(1, γ), and let β_{k_{new}} = ν β_u and β_u^{new} = (1 − ν) β_u.
2
Note we integrate out φ_k and b_k. Another sampling strategy is to sample b (by integrating out π) and the
Gibbs sampler is much easier to derive. However, conditioned on b, sampling z will be constrained to a smaller
set of topics (specified by the values of b), which slows down convergence of the sampler.
3
In the supplement, we show that if B_k is an empty set, the result is trivial.
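The sketch below evaluates the unnormalized conditional density of Equations 6-7, taking the binomial expectations exactly by summing over their support; variable names are illustrative.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import binom

def log_g(x, B_size, n_dot, lam):
    # log g_B(x) = log Gamma((|B|+x) lam) - log Gamma(n_dot + 1 + (|B|+x) lam)
    return gammaln((B_size + x) * lam) - gammaln(n_dot + 1.0 + (B_size + x) * lam)

def expected_g(B_size, n_dot, lam, V, pi_k):
    # E[g_B(X) | pi_k] with X ~ Binomial(V - |B|, pi_k), summed over the support.
    xs = np.arange(V - B_size + 1)
    return np.sum(binom.pmf(xs, V - B_size, pi_k) *
                  np.exp(log_g(xs, B_size, n_dot, lam)))

def f_unnormalized(v, counts, lam, pi_k):
    """Unnormalized f_k^{-w_di}(w_di = v | pi_k) of Eq. (6).
    counts[v] holds n_{k,-di}^{(v)} for the V vocabulary terms."""
    V = counts.size
    B_size = int(np.count_nonzero(counts))     # |B_k|
    n_dot = counts.sum()                       # n_{k,-di}^{(.)}
    if counts[v] > 0:                          # v already in B_k
        return (counts[v] + lam) * expected_g(B_size, n_dot, lam, V, pi_k)
    # otherwise B_k is extended by v, so its cardinality grows by one
    return lam * pi_k * expected_g(B_size + 1, n_dot, lam, V, pi_k)
```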
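Assuming the conditional densities f have already been evaluated as above, a single Gibbs draw from Equation 9, including the stick split for a new topic, might look as follows (hypothetical names).

```python
import numpy as np

def sample_z(ndk, beta, f_vals, f_new, alpha, gamma, rng):
    """One draw of z_di from Eq. (9). ndk holds n_{dk,-di} for the K current
    topics; beta = [beta_1, ..., beta_K, beta_u]; f_vals[k] is the density
    f_k^{-w_di}(w_di | pi_k), and f_new the density under a brand-new topic.
    Returns (k, beta), where k == K means a new topic was opened."""
    K = ndk.size
    p = np.empty(K + 1)
    p[:K] = (ndk + alpha * beta[:K]) * f_vals      # previously used topics
    p[K] = alpha * beta[K] * f_new                 # k = u (new topic)
    k = rng.choice(K + 1, p=p / p.sum())
    if k == K:                                     # split the leftover stick mass
        nu = rng.beta(1.0, gamma)
        beta = np.concatenate([beta[:K], [nu * beta[K], (1.0 - nu) * beta[K]]])
    return k, beta
```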
Sampling Bernoulli parameter π. To sample π_k, we use b_k as an auxiliary variable. Note that b_k
was integrated out earlier. Recall B_k is the set of terms that have word assignments in topic k. (This
time, we don't need to exclude certain words since we are sampling π.) Let A_k = {v : b_{kv} = 1, v ∈ 𝒱} be the set of the indices of b_k that are "on". The joint conditional distribution of π_k and b_k is
$$\begin{aligned} p(\pi_k, b_k \mid \text{rest}) &\propto p(b_k \mid \pi_k)\, p(\pi_k \mid r)\, p(\{w_{di} : z_{di} = k\} \mid b_k, \{z_{di} : z_{di} = k\}) \\ &= p(b_k \mid \pi_k)\, p(\pi_k \mid r) \int d\phi_k\, p(\{w_{di} : z_{di} = k\} \mid \phi_k, \{z_{di} : z_{di} = k\})\, p(\phi_k \mid b_k) \\ &= p(b_k \mid \pi_k)\, p(\pi_k \mid r)\, \frac{\mathbf{1}_{B_k \subseteq A_k}\, \Gamma(|A_k|\lambda) \prod_{v \in A_k} \Gamma(n_k^{(v)} + \lambda)}{\Gamma^{|A_k|}(\lambda)\, \Gamma(n_k^{(\cdot)} + |A_k|\lambda)} \\ &= p(b_k \mid \pi_k)\, p(\pi_k \mid r)\, \frac{\mathbf{1}_{B_k \subseteq A_k}\, \Gamma(|A_k|\lambda) \prod_{v \in B_k} \Gamma(n_k^{(v)} + \lambda)}{\Gamma^{|B_k|}(\lambda)\, \Gamma(n_k^{(\cdot)} + |A_k|\lambda)} \\ &\propto \prod_v p(b_{kv} \mid \pi_k)\, p(\pi_k \mid r)\, \frac{\mathbf{1}_{B_k \subseteq A_k}\, \Gamma(|A_k|\lambda)}{\Gamma(n_k^{(\cdot)} + |A_k|\lambda)}, \end{aligned} \qquad (10)$$
where $\mathbf{1}_{B_k \subseteq A_k}$ is an indicator function and $|A_k| = \sum_v b_{kv}$. This follows because if A_k is not a
superset of B_k, there must be a term, say v, in B_k but not in A_k, causing φ_{kv} = 0 a.s., and then
p({w_{di} : (d, i) ∈ Z_k} | φ_k, {z_{di} : (d, i) ∈ Z_k}) = 0 a.s. Using this joint conditional distribution,4
we iteratively sample b_k conditioned on π_k and π_k conditioned on b_k to ultimately obtain a sample
from π_k.
Others. Sampling the table counts m is exactly the same as for the HDP [1], so we omit the details
here. In addition, we can sample the hyper-parameters α, γ and λ. For the concentration parameters
α and γ in both HDP-LDA and sparseTMs, we use previously developed approaches with Gamma
priors [1, 10]. For the Dirichlet hyper-parameter λ, we use Metropolis-Hastings.
Finally, with any single sample we can estimate the topic distributions φ from the values of the topic
assignments z and term selectors b by
$$\hat{\phi}_{k,v} = \frac{n_k^{(v)} + b_{k,v}\lambda}{n_k^{(\cdot)} + \sum_v b_{kv}\lambda}, \qquad (11)$$
where we can smooth only those terms that are chosen to be in the topics. Note that we can obtain the
samples of b when sampling the Bernoulli parameter π.
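Equation 11 reduces to a one-line computation given the count matrix and a sample of the selectors; a minimal sketch:

```python
import numpy as np

def estimate_topics(counts, b, lam):
    """Eq. (11): phi_hat[k, v] = (n_k^(v) + b[k, v] lam) / (n_k^(.) + sum_v b[k, v] lam).
    counts and b are (K, V) arrays of word-assignment counts and 0/1 selectors."""
    den = counts.sum(axis=1, keepdims=True) + b.sum(axis=1, keepdims=True) * lam
    return (counts + b * lam) / den
```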
4
Experiments
In this section, we studied the performance of the sparseTM on four datasets and demonstrated how
the sparseTM decouples the smoothness and sparsity in the HDP.5 We placed Gamma(1, 1) priors over
the hyper-parameters α and γ. The sparsity proportion prior was a uniform Beta, i.e., r = s = 1.
For the hyper-parameter λ, we used a Metropolis-Hastings sampling method with a symmetric Gaussian
proposal with variance 1.0. A disadvantage of the sparseTM is that its running speed is about 4-5 times
slower than HDP-LDA.
4.1
Datasets
The four datasets we use in the experiments are:
1. The arXiv data set contains 2500 (randomly sampled) online research abstracts
(http://arxiv.org). It has 2873 unique terms, around 128K observed words and an average of 36 unique terms per document.
4
In our experiments, we used the algorithm described in the main text to sample π. We note that an improved
algorithm might be achieved by modeling the joint conditional distribution of $\pi_k$ and $\sum_v b_{kv}$ instead, i.e.,
$p(\pi_k, \sum_v b_{kv} \mid \text{rest})$, since sampling $\pi_k$ only depends on $\sum_v b_{kv}$.
5
Other experiments, which we don't report here, also showed that the finite version of sparseTM outperforms
LDA with the same number of topics.
2. The Nematode Biology data set contains 2500 (randomly sampled) research abstracts
(http://elegans.swmed.edu/wli/cgcbib). It has 2944 unique terms, around 179K observed
words and an average of 52 unique terms per document.
3. The NIPS data set contains the NIPS articles published between 1988-1999
(http://www.cs.utoronto.ca/~sroweis/nips). It has 5005 unique terms and around 403K
observed words. We randomly sample 20% of the words for each paper and this leads to an
average of 150 unique terms per document.
4. The Conf.
abstracts set data contains abstracts (including papers and posters)
from six international conferences: CIKM, ICML, KDD, NIPS, SIGIR and WWW
(http://www.cs.princeton.edu/~chongw/data/6conf.tgz). It has 3733 unique terms, around
173K observed words and an average of 46 unique terms per document. The data are from
2005-2008.
For all data, stop words and words occurring fewer than 10 times were removed.
4.2
Performance evaluation and model examinations
We studied the predictive performance of the sparseTM compared to HDP-LDA. On the training
documents our Gibbs sampler uses the first 2000 steps as burn-in, and we record the following 100
samples as samples from the posterior. Conditioned on these samples, we run the Gibbs sampler for
test documents to estimate the predictive quantities of interest. We use 5-fold cross validation.
We study two predictive quantities. First, we examine overall predictive power with the predictive
perplexity of the test set given the training set. (This is a metric from the natural language literature.)
The predictive perplexity is
$$\mathrm{perplexity}_{\mathrm{pw}} = \exp\left\{ -\frac{\sum_{d \in D_{\mathrm{test}}} \log p(\mathbf{w}_d \mid D_{\mathrm{train}})}{\sum_{d \in D_{\mathrm{test}}} N_d} \right\}.$$
Lower perplexity is better.
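For concreteness, the per-word predictive perplexity above amounts to the following computation (illustrative names):

```python
import numpy as np

def predictive_perplexity(log_probs, doc_lengths):
    """log_probs[d] = log p(w_d | D_train); doc_lengths[d] = N_d."""
    return float(np.exp(-np.sum(log_probs) / np.sum(doc_lengths)))
```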
Second, we compute model complexity. Nonparametric Bayesian methods are often used to sidestep
model selection and integrate over all instances (and all complexities) of a model at hand (e.g., the
number of clusters). The model, though hidden and random, still lurks in the background. Here
we study its posterior distribution with the desideratum that between two equally good predictive
distributions, a simpler model (or a posterior peaked at a simpler model) is preferred.
To capture model complexity we first define the complexity of a topic. Recall that each Gibbs sample
contains a topic assignment z for every observed word in the corpus (see Equation 9). The topic
complexity is the number of unique terms that have at least one word assigned to the topic. This can
be expressed as a sum of indicators,
$$\mathrm{complexity}_k = \sum_d \mathbf{1}\left[\left(\sum_n \mathbf{1}[z_{d,n} = k]\right) > 0\right],$$
where recall that z_{d,n} is the topic assignment for the nth word in document d. Note that a topic with
no words assigned to it has complexity zero. For a particular Gibbs sample, the model complexity
is the sum of the topic complexities and the number of topics. Loosely, this is the number of free
parameters in the "model" that the nonparametric Bayesian method has selected, which is
$$\mathrm{complexity} = \#\mathrm{topics} + \sum_k \mathrm{complexity}_k. \qquad (12)$$
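A sketch of Equation 12 for a single Gibbs sample, implementing the indicator sum displayed above; names are illustrative.

```python
import numpy as np

def model_complexity(z_docs, K):
    """Eq. (12) for one Gibbs sample: #topics plus the sum of the per-topic
    indicator counts displayed above (a topic with no words assigned to it
    contributes zero). z_docs is a list of per-document assignment arrays."""
    comp_k = np.zeros(K, dtype=int)
    for z_d in z_docs:
        comp_k += np.bincount(z_d, minlength=K) > 0  # 1[(sum_n 1[z_dn = k]) > 0]
    n_topics = int(np.count_nonzero(comp_k))
    return n_topics + int(comp_k.sum())
```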
We performed posterior inference with the sparseTM and HDP-LDA, computing predictive perplexity
and average model complexity with 5-fold cross validation. Figure 2 illustrates the results.
Perplexity versus Complexity. Figure 2 (first row) shows the model complexity versus predictive
perplexity for each fold: Red circles represent sparseTM, blue squares represent HDP-LDA, and the
dashed line connecting a red circle and blue square indicates that the two are from the same fold.
These results show that the sparseTM achieves better perplexity than HDP-LDA, and at simpler
models. (To see this, notice that all the connecting lines going from HDP-LDA to sparseTM point
down and to the left.)
[Figure 2 panels: for each dataset (arXiv, Nematode Biology, NIPS, Conf. abstracts), scatter plots of model complexity vs. predictive perplexity, and box plots of the hyperparameter λ, the number of topics (#topics), and the number of terms per topic for HDP-LDA and STM.]
Figure 2: Experimental results for sparseTM (shortened as STM in this figure) and HDP-LDA on four
datasets. First row. The scatter plots of model complexity versus predictive perplexity for 5-fold
cross validation: Red circles represent the results from sparseTM, blue squares represent the results
from HDP-LDA and the dashed lines connect results from the same fold. Second row. Box plots of
the hyperparameter λ values. Third row. Box plots of the number of topics. Fourth row. Box plots
of the number of terms per topic.
Hyperparameter λ, number of topics and number of terms per topic. Figure 2 (from the second
to fourth rows) shows the Dirichlet parameter λ and posterior number of topics for HDP-LDA and
the sparseTM. HDP-LDA tends to have a very small λ in order to attain a reasonable number of topics,
but this leads to less smooth distributions. In contrast, the sparseTM allows a larger λ and selects more
smoothing, even with a smaller number of topics. The numbers of terms per topic for the two models
don't have a consistent trend, but they don't differ too much either.
Example topics. For the NIPS data set, we provide some example topics (with top 15 terms)
discovered by HDP-LDA and the sparseTM in Table 1. Incidentally, we found that HDP-LDA seems to
produce more noisy topics, such as those shown in Table 2.
sparseTM: support, vector, svm, kernel, machines, margin, training, vapnik, solution, examples, space, sv, note, kernels, svms
HDP-LDA: svm, vector, support, machines, kernel, svms, decision, http, digit, machine, diagonal, regression, sparse, optimization, misclassification

sparseTM: belief, networks, inference, lower, bound, variational, jordan, graphical, exact, field, probabilistic, approximate, conditional, variables, models
HDP-LDA: variational, networks, jordan, parameters, inference, bound, belief, distributions, approximation, lower, methods, quadratic, field, distribution, intractable

Table 1: Similar topics discovered.
Example "noise topics":
Topic 1: epsilon, stream, inferred, development, behaviour, motor, corner, carried, applications, mixture, served, specification, modest, vertical, matter
Topic 2: resulting, mation, direct, transfer, depicted, global, submitted, inter, applicable, replicated, refers, searching, operates, tension, class

Table 2: "Noise" topics in HDP-LDA.
5
Discussion
These results illuminate the issue with a single parameter controlling both sparsity and smoothing. In
the Gibbs sampler, if the HDP-LDA posterior requires more topics to explain the data, it will reduce
the value of λ to accommodate the increased (necessary) sparseness. This smaller λ, however,
leads to less smooth topics that are less robust to "noise", i.e., infrequent words that might populate
a topic. The process is circular: to explain the noisy words, the Gibbs sampler might invoke new
topics still, thereby further reducing the hyperparameter. As a result of this interplay, HDP-LDA
settles on more topics and a smaller λ. Ultimately, the fit to held-out data suffers.
For the sparseTM, however, more topics can be used to explain the data by using the sparsity control
gained from the "spike" component of the prior. The hyperparameter λ is controlled separately. Thus
the smoothing effect is retained, and held-out performance is better.
Acknowledgements. We thank anonymous reviewers for insightful suggestions. David M. Blei is
supported by ONR 175-6343, NSF CAREER 0745520, and grants from Google and Microsoft.
References
[1] Teh, Y. W., M. I. Jordan, M. J. Beal, et al. Hierarchical Dirichlet processes. Journal of the American
Statistical Association, 101(476):1566–1581, 2006.
[2] Blei, D., A. Ng, M. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
[3] Griffiths, T., M. Steyvers. Probabilistic topic models. In Latent Semantic Analysis: A Road to Meaning.
2006.
[4] Saund, E. A multiple cause mixture model for unsupervised learning. Neural Comput., 7(1):51–71, 1995.
[5] Kabán, A., E. Bingham, T. Hirsimäki. Learning to read between the lines: The aspect Bernoulli model. In
SDM. 2004.
[6] Ishwaran, H., J. S. Rao. Spike and slab variable selection: Frequentist and Bayesian strategies. The Annals
of Statistics, 33(2):730–773, 2005.
[7] Friedman, N., Y. Singer. Efficient Bayesian parameter estimation in large discrete domains. In NIPS. 1999.
[8] Pitman, J. Poisson–Dirichlet and GEM invariant distributions for split-and-merge transformations of an
interval partition. Comb. Probab. Comput., 11(5):501–514, 2002.
[9] Sethuraman, J. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[10] Escobar, M. D., M. West. Bayesian density estimation and inference using mixtures. Journal of the
American Statistical Association, 90:577–588, 1995.
Sparsistent Learning of Varying-coefficient Models
with Structural Changes
Mladen Kolar, Le Song and Eric P. Xing∗
School of Computer Science, Carnegie Mellon University
{mkolar,lesong,epxing}@cs.cmu.edu
Abstract
To estimate the changing structure of a varying-coefficient varying-structure
(VCVS) model remains an important and open problem in dynamic system modelling, which includes learning trajectories of stock prices, or uncovering the
topology of an evolving gene network. In this paper, we investigate sparsistent
learning of a sub-family of this model ? piecewise constant VCVS models. We
analyze two main issues in this problem: inferring time points where structural
changes occur and estimating model structure (i.e., model selection) on each of
the constant segments. We propose a two-stage adaptive procedure, which first
identifies jump points of structural changes and then identifies relevant covariates
to a response on each of the segments. We provide an asymptotic analysis of
the procedure, showing that with the increasing sample size, number of structural
changes, and number of variables, the true model can be consistently selected. We
demonstrate the performance of the method on synthetic data and apply it to the
brain computer interface dataset. We also consider how this applies to structure
estimation of time-varying probabilistic graphical models.
1
Introduction
Consider the following regression model:
$$Y_i = X_i^\top \beta(t_i) + \epsilon_i, \qquad i = 1, \ldots, n, \qquad (1)$$
where the design variables $X_i \in \mathbb{R}^p$ are i.i.d. zero-mean random variables sampled at some conditions indexed by i = 1, . . . , n, such as the prices of a set of stocks at time i, or the signals from
some sensors deployed at location i; the noise $\epsilon_1, \ldots, \epsilon_n$ are i.i.d. Gaussian variables with variance
$\sigma^2$ independent of the design variables; and $\beta(t_i) = (\beta_1(t_i), \ldots, \beta_p(t_i))^\top : [0, 1] \mapsto \mathbb{R}^p$ is a vector
of unknown coefficient functions. Since the coefficient vector is a function of the conditions rather
than a constant, such a model is called a varying-coefficient model [12]. Varying-coefficient models
are a non-parametric extension to the linear regression models, which unlike other non-parametric
models, assume that there is a linear relationship (generalizable to log-linear relationship) between
the feature variables and the output variable, albeit a changing one. The model given in Eq. (1) has
the flexibility of a non-parametric model and the interpretability of an ordinary linear regression.
Varying-coefficient models were popularized in the work of [9] and [16]. Since then, they have been
applied to a variety of domains, including multidimensional regression, longitudinal and functional
data analysis, and modeling problems in econometrics and finance, to model and predict time- or
space- varying response to multidimensional inputs (see e.g. [12] for an overview.) One can easily
imagine a more general form of such a model applicable to these domains, where both the coefficient
value and the model structure change with values of other variables. We refer to this class of models
as varying-coefficient varying-structure (VCVS) models. The more challenging problem of structure
recovery (or model selection) under VCVS has started to catch attention very recently [1, 24].
∗
LS is supported by a Ray and Stephenie Lane Research Fellowship. EPX is supported by grant ONR
N000140910758, NSF DBI-0640543, NSF DBI-0546594, NSF IIS-0713379 and an Alfred P. Sloan Research
Fellowship. We also thank Zaïd Harchaoui for useful discussions.
[Figure 1 panels (a)-(d): piecewise-constant coefficient functions β1(t), β2(t), . . . , βp(t) plotted against time t (i/n) on [0, 1], together with a schematic of the covariates feeding into the response Y.]
Figure 1: (a) Illustration of a VCVS as varying functions of time. The interval [0, 1] is partitioned into
{0, 0.25, 0.4, 0.7, 1}, which defines blocks on which the coefficient functions are constant. At different blocks
only covariates with non-zero coefficients affect the response, e.g. on the interval B2 = (0.25, 0.4) covariates
X2 and Xp do not affect the response. (b) Schematic representation of the covariates affecting the response during
the second block in panel (a), which is reminiscent of neighborhood selection in graph structure learning. (c)
and (d) Application of VCVS for graph structure estimation (see Section 7) of non-piecewise constant evolving
graphs. Coefficients defining neighborhoods of different nodes can change on different partitions.
In this paper, we analyze VCVS as functions of time, and the main goal is to estimate the dynamic
structure and jump points of the unknown vector function β(t). To be more specific, we consider the
case where the function β(t) is time-varying, but piecewise constant (see Fig. 1), i.e., there exists
a partition $T = \{T_1 = 0 < T_2 < \ldots < T_B = 1\}$, $1 < B \leq n$, of the time interval (scaled to)
[0, 1], such that $\beta(t) = \beta^j$, $t \in [T_{j-1}, T_j)$ for some constant vectors $\beta^j \in \mathbb{R}^p$, $j = 1, \ldots, B$. We
refer to the points $T_1, \ldots, T_B$ as jump points. Furthermore, we assume that at each time point $t_i$ only
a few covariates affect the response, i.e., the vector $\beta(t_i)$ is sparse. A good estimation procedure
would be able to identify the correct partition of the interval [0, 1] so that within each segment the
coefficient function is constant. In addition, the procedure can identify active coefficients and their
values within each segment, i.e., the time-varying structure of the model.
is particularly important in applications where one needs to uncover dynamic relational information
or model structures from time series data. For example, one may want to infer at chosen time points
the (changing) set of stocks that are predictive of a particular stock one has been holding from a
time series of all stock prices; or to understand the evolving circuitry of gene regulation at different
growth stages of an organism that determines the activity of a target gene based on other regulative
genes, based on time series of microarray data. Another important problem is to identify structural
changes in fields such as signal processing, EEG segmentation and analysis of seismic signals. In
all these problems, the goal is not to estimate the optimum value of β(t) for predicting Y, but to
consistently uncover the zero and non-zero patterns in β(t) at time points of interest that reveal the
changing structure of the model. In this paper, we provide a new algorithm to achieve this goal, and
a theoretical analysis that proves the asymptotic consistency of our algorithm.
Our problem is remotely related to, but very different from, earlier works on linear regression models
with structural changes [4], and the problem of change-point detection (e.g. [19]), which can also
be analyzed in the framework of varying-coefficient models. A number of existing methods are
available to identify only one structural change in the data; in order to identify multiple changes
these methods can be applied sequentially on smaller intervals that are assumed to harbor only one
change [14]. Another common approach is to assume that there are K changes and use Dynamic
Programming to estimate them [4]. In this paper, we propose and analyze a penalized least squares
approach, which automatically adapts to the unknown number of structural changes present in the
data and performs the variable selection on each of the constant regions.
2
Preliminaries
For a varying-coefficient regression model described in Eq. (1) with structural changes, a reasonable estimator of the time-varying structure can be obtained by minimizing the so-called TESLA
(temporally smoothed L1 -regularized regression) loss proposed in [1]: (for simplicity we suppress
the sample-size notation n in the regularization constants $\lambda^n = \{\lambda_1^n, \lambda_2^n\}$, but it should be clear that
their values depend on n)
$$\hat{\beta}(t_1; \lambda), \ldots, \hat{\beta}(t_n; \lambda) = \arg\min_{\beta} \sum_{i=1}^{n} \left(Y_i - X_i^\top \beta(t_i)\right)^2 + 2\lambda_1 \sum_{i=1}^{n} \|\beta(t_i)\|_1 + 2\lambda_2 \sum_{k=1}^{p} \|\beta_k\|_{\mathrm{TV}}, \qquad (2)$$
where $\|\cdot\|_1$ denotes the $\ell_1$ norm, and $\|\cdot\|_{\mathrm{TV}}$ denotes a total variation norm: $\|\beta_k\|_{\mathrm{TV}} = \sum_{i=2}^{n} |\beta_k(t_i) - \beta_k(t_{i-1})|$. From the analysis of [20], it is known that each component function
$\beta_k$ can be chosen as a piecewise constant and right-continuous function, i.e., $\beta_k$ is a spline function,
with potential jump points at observation times $t_i$, $i = 1, \ldots, n$. In this particular case, the total
variation penalty defined above allows us to conceptualize $\beta_k$ as a vector in $\mathbb{R}^n$, whose components
$\beta_{k,i} \equiv \beta_k(t_i)$ correspond to function values at $t_i$, $i = 1, \ldots, n$, but not as a function $[0, 1] \mapsto \mathbb{R}$. We
continue to use the vector representation through the rest of the paper as it will simplify the notation.
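Under the vector representation, the TESLA objective of Eq. (2) can be evaluated directly; a minimal sketch, assuming β is stored as an n × p array:

```python
import numpy as np

def tesla_loss(beta, X, Y, lam1, lam2):
    """Eq. (2): squared error, plus an l1 penalty at every time point, plus a
    total variation penalty on each coefficient path.
    beta: (n, p) array, row i holding beta(t_i); X: (n, p); Y: (n,)."""
    fit = np.sum((Y - np.einsum("ij,ij->i", X, beta)) ** 2)
    l1 = 2.0 * lam1 * np.abs(beta).sum()
    tv = 2.0 * lam2 * np.abs(np.diff(beta, axis=0)).sum()  # sum_k ||beta_k||_TV
    return fit + l1 + tv
```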
The estimation problem defined in Eq. (2) has a few appealing properties. The objective function on
the right-hand side is convex and there exists a solution $\hat{\beta}$, which can be found efficiently using a
standard convex optimization package. Furthermore, the penalty terms in Eq. (2) are constructed in
a way to perform model selection. Observe that the $\ell_1$ penalty encourages sparsity of the signal at each
time point and enables a selection over the relevant coefficients; whereas the total variation penalty
is used to partition the interval [0, 1] so that $\hat{\beta}_k$ is constant within each segment. However, there are
also some drawbacks of the procedure, as shown in Lemma 1 below.
Let us start with some notational clarifications. Let $X$ denote the design matrix; the input observation
$X_i$ at time $i$ corresponds to the $i$-th row of $X$. For simplicity, we assume throughout the paper
that $X$ is normalized to have unit-length columns, i.e., each dimension has unit Euclidean norm.
Let $B_j$, $j = 1, \ldots, B$, denote the set of time points that fall into the interval $[T_{j-1}, T_j)$; when the
meaning is clear from the context, we also use $B_j$ as a shorthand for this interval. For example, $X_{B_j}$
and $Y_{B_j}$ represent the submatrix of $X$ and the subvector of $Y$, respectively, that include only the elements
corresponding to time points within the interval $B_j$. For a given solution $\hat\beta$ to Eq. (2), there exist a
block partition $\hat{\mathcal{T}} = \{\hat T_1, \ldots, \hat T_{\hat B}\}$ of $[0,1]$ (possibly a trivial one) and unique vectors $\hat\gamma_j \in \mathbb{R}^p$, $j = 1, \ldots, \hat B$, such that $\hat\beta_{k,i} = \hat\gamma_{j,k}$ for $t_i \in \hat B_j$. The set of relevant covariates during the interval $B_j$, i.e., the
support of the vector $\gamma_j$, is denoted $S_{B_j} = \{k \mid \gamma_{j,k} \neq 0\}$. Likewise, we define $\hat S_{\hat B_j}$ over $\hat\gamma_j$.
By construction, no two consecutive vectors $\hat\gamma_j$ and $\hat\gamma_{j+1}$ are identical. Note that both the number of
partitions $\hat B = |\hat{\mathcal{T}}|$ and the elements of the partition $\hat{\mathcal{T}}$ are random quantities. The following
lemma characterizes the vectors $\hat\gamma_j$ using the subgradient equation of Eq. (2).
Lemma 1 Let $\hat\gamma_j$ and $\hat B_j$, $j = 1, \ldots, \hat B$, be the vectors and segments obtained from a minimizer of
Eq. (2). Then each $\hat\gamma_j$ can be found as a solution to the subgradient equation
$$X_{\hat B_j}^\top X_{\hat B_j}\hat\gamma_j - X_{\hat B_j}^\top Y_{\hat B_j} + \lambda_1 |\hat B_j|\,\hat s_j^{(1)} + \lambda_2\,\hat s_j^{(TV)} = 0, \qquad (3)$$
where
$$\hat s_j^{(1)} \in \partial\|\hat\gamma_j\|_1 = \mathrm{sign}(\hat\gamma_j), \qquad (4)$$
with the convention $\mathrm{sign}(0) \in [-1, 1]$, and $\hat s_j^{(TV)} \in \mathbb{R}^p$ is such that
$$\hat s_{1,k}^{(TV)} = \begin{cases} -1 & \text{if } \hat\gamma_{2,k} - \hat\gamma_{1,k} > 0\\ 1 & \text{if } \hat\gamma_{2,k} - \hat\gamma_{1,k} < 0 \end{cases}, \qquad \hat s_{\hat B,k}^{(TV)} = \begin{cases} 1 & \text{if } \hat\gamma_{\hat B,k} - \hat\gamma_{\hat B-1,k} > 0\\ -1 & \text{if } \hat\gamma_{\hat B,k} - \hat\gamma_{\hat B-1,k} < 0 \end{cases} \qquad (5)$$
and, for $1 < j < \hat B$,
$$\hat s_{j,k}^{(TV)} = \begin{cases} 2 & \text{if } \hat\gamma_{j+1,k} - \hat\gamma_{j,k} < 0 \text{ and } \hat\gamma_{j,k} - \hat\gamma_{j-1,k} > 0\\ -2 & \text{if } \hat\gamma_{j+1,k} - \hat\gamma_{j,k} > 0 \text{ and } \hat\gamma_{j,k} - \hat\gamma_{j-1,k} < 0\\ 0 & \text{if } \mathrm{sign}(\hat\gamma_{j,k} - \hat\gamma_{j-1,k})\,\mathrm{sign}(\hat\gamma_{j+1,k} - \hat\gamma_{j,k}) = 1. \end{cases} \qquad (6)$$
Lemma 1 does not provide a practical way to compute the estimator, but it does characterize a solution.
From Eq. (3) we can see that the coefficients in each of the estimated blocks are biased by two terms,
coming from the $\ell_1$ and $\|\cdot\|_{TV}$ penalties. The larger the estimated segments, the smaller the relative
influence of the bias from the total variation penalty, while the magnitude of the bias introduced by the $\ell_1$
penalty is uniform across different segments. The additional bias coming from the total variation
penalty was also noted in the problem of signal denoising [23]. In the next section, we introduce a
two-step procedure that alleviates this effect.
3 A two-step procedure for estimating time-varying structures

In this section, we propose a new algorithm for estimating the time-varying structure of the varying-coefficient model in Eq. (1) which does not suffer from the bias introduced by minimizing the
objective in Eq. (2). The algorithm is a two-step procedure, summarized as follows:
1. Estimate the block partition $\hat{\mathcal{T}}$, on which the coefficient vector is constant within each
block. This can be obtained by minimizing the following objective:
$$\sum_{i=1}^n \left(Y_i - X_i^\top\beta(t_i)\right)^2 + 2\lambda_2\sum_{k=1}^p \|\beta_k\|_{TV}, \qquad (7)$$
which we refer to as a temporal difference (TD) regression, for reasons that will become clear
shortly. We will apply a TD-transformation to Eq. (7) to turn it into an $\ell_1$-regularized
regression problem, and solve it using the randomized Lasso. Details of the algorithm, and
of how to extract $\hat{\mathcal{T}}$ from the TD-estimate, will be given shortly.
2. For each block of the partition, $\hat B_j$, $1 \le j \le \hat B$, estimate $\hat\gamma_j$ by minimizing the Lasso
objective within the block:
$$\hat\gamma_j = \arg\min_{\gamma\in\mathbb{R}^p} \sum_{t_i\in\hat B_j} \left(Y_i - X_i^\top\gamma\right)^2 + 2\lambda_1\|\gamma\|_1. \qquad (8)$$
We name this procedure TDB-Lasso (or TDBL), after the two steps (TD randomized Lasso, and
Lasso within Blocks) given above. The advantage of the TDB-Lasso over a minimizer of
Eq. (2) comes from decoupling the interactions between the $\ell_1$ and TV penalties (note that the two
procedures result in different estimates). We now discuss step 1 in detail; step 2 is straightforward
using a standard Lasso toolbox.
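A high-level sketch of the two steps follows; the jump-point routine is passed in as a callable (it is detailed in the remainder of this section), the within-block step uses scikit-learn's Lasso, and the `alpha` scaling that matches Eq. (8) to scikit-learn's objective is an assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

def tdb_lasso(Y, X, estimate_jump_points, lam1):
    """Two-step TDB-Lasso: (1) estimate jump points, (2) Lasso within each block."""
    n, p = X.shape
    jumps = sorted(estimate_jump_points(Y, X))   # step 1: indices where blocks start
    bounds = [0] + list(jumps) + [n]
    beta = np.zeros((n, p))
    for lo, hi in zip(bounds[:-1], bounds[1:]):  # step 2: Lasso on each block
        model = Lasso(alpha=lam1 / (hi - lo), fit_intercept=False)
        model.fit(X[lo:hi], Y[lo:hi])
        beta[lo:hi] = model.coef_                # coefficients constant within the block
    return beta
```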
To obtain a consistent estimate of $\hat{\mathcal{T}}$ from the TD-regression in Eq. (7), we can transform Eq. (7)
into an equivalent $\ell_1$-penalized regression problem, which allows us to cast the $\hat{\mathcal{T}}$ estimation
problem as a feature selection problem. Let $\beta^\star_{k,i}$ denote the temporal difference between the regression coefficients corresponding to the same covariate $k$ at successive time points $t_{i-1}$ and
$t_i$: $\beta^\star_{k,i} \equiv \beta_k(t_i) - \beta_k(t_{i-1})$, $k = 1, \ldots, p$, $i = 1, \ldots, n$, with $\beta_k(t_0) = 0$ by convention. It can be shown that the model in Eq. (1) can be expressed as $Y^\star = X^\star\beta^\star + \epsilon^\star$, where
$Y^\star \in \mathbb{R}^n$ is the transformed vector of the TDs of the responses, i.e., each element $Y_i^\star \equiv Y_i - Y_{i-1}$;
$X^\star = (X_1^\star, \ldots, X_p^\star) \in \mathbb{R}^{n\times np}$ is the transformed design matrix, with lower triangular matrices
$X_k^\star \in \mathbb{R}^{n\times n}$ corresponding to TD features computed from the covariates; $\epsilon^\star \in \mathbb{R}^n$ is the transformed TD-error vector; and $\beta^\star \in \mathbb{R}^{np}$ is a vector obtained by stacking the TD-coefficient vectors $\beta_k^\star$.
(See the Appendix for more details of the transformation.) Note that the elements of the vector $\epsilon^\star$ are
no longer i.i.d. Using the transformation above, the estimation problem defined by the objective in
Eq. (7) can be expressed in the following matrix form:
$$\hat\beta^\star = \arg\min_{\beta\in\mathbb{R}^{np}} \left\|Y^\star - X^\star\beta\right\|_2^2 + 2\lambda_2\|\beta\|_1. \qquad (9)$$
This transformation was proposed in [8] in the context of one-dimensional signal denoising; here, however, we are interested in the estimation of jump points in the context of the time-varying coefficient
model.
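As a concrete illustration, the sketch below builds one possible TD design. It assumes the cumulative-sum parameterization $\beta_k(t_i) = \sum_{i'\le i}\beta^\star_{k,i'}$, under which each $X_k^\star$ is lower triangular; the exact construction used in the paper is given in its Appendix (not reproduced here), so treat this version, and the differencing convention for $Y^\star$, as assumptions:

```python
import numpy as np

def td_design(X):
    """One possible lower-triangular TD design X*, an n x (n p) matrix.

    Under beta_k(t_i) = sum_{i' <= i} gamma_{k,i'}, X_i' beta(t_i) = (Xstar @ gamma)_i.
    """
    n, p = X.shape
    L = np.tril(np.ones((n, n)))                 # cumulative-sum operator
    blocks = [X[:, [k]] * L for k in range(p)]   # block k: entry (i, i') = X[i, k] for i >= i'
    return np.hstack(blocks)

def td_response(Y):
    """Temporal differences of the responses: Y*_i = Y_i - Y_{i-1} (with Y*_1 = Y_1)."""
    return np.diff(Y, prepend=0.0)
```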
The estimator defined in Eq. (9) is not robust with respect to small perturbations of the data, i.e., small
changes in the variables $X_i$ or $Y_i$ would result in a different $\hat{\mathcal{T}}$. To deal with the problem of robustness,
we employ the stability selection procedure of [22] (see also the bootstrap Lasso [2]; we
decided to use stability selection because of its weaker assumptions). The stability
selection approach to estimating the jump points comprises two main components: i) simulating multiple datasets using the bootstrap, and ii) using the randomized Lasso outlined in Algorithm 1
(see also the Appendix) to solve (9). While the bootstrap step improves the robustness of the estimator,
the randomized Lasso weakens the conditions under which the estimator $\hat\beta^\star$ selects exactly the true
features.
Let $\{\hat\beta_b^\star, \hat J_b^\star\}_{b=1}^M$ represent the set of estimates and their supports (i.e., the indices of non-zero elements)
obtained by minimizing (9) on each of the $M$ bootstrapped datasets. We obtain a stable estimate of
the support by selecting the variables that appear in multiple supports:
$$\hat J^\star = \left\{k \;\Big|\; \frac{1}{M}\sum_{b=1}^M \mathbb{1}\{k \in \hat J_b^\star\} \ge \pi\right\}, \qquad (10)$$
which is then used to obtain the block partition estimate $\hat{\mathcal{T}}$. The parameter $\pi$ is a tuning parameter
that controls the number of falsely identified jump points.
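A sketch of the stability-selection computation of Eq. (10); the bootstrap resampling scheme and the default constants are illustrative assumptions, and `randomized_lasso` refers to Algorithm 1 below:

```python
import numpy as np

def stable_support(Ystar, Xstar, randomized_lasso, lam2, weakness, M=100, pi=0.8, seed=0):
    """Estimate the stable support J* of Eq. (10) from M bootstrap replicates."""
    rng = np.random.default_rng(seed)
    n, q = Xstar.shape
    counts = np.zeros(q)
    for _ in range(M):
        idx = rng.choice(n, size=n, replace=True)               # bootstrap resample
        _, support = randomized_lasso(Ystar[idx], Xstar[idx], lam2, weakness, rng)
        counts[support] += 1
    return np.flatnonzero(counts / M >= pi)                     # selection frequency >= pi
```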
Algorithm 1 Randomized Lasso
Input: dataset $\{X_i, Y_i\}_{i=1}^n$, $X_i \in \mathbb{R}^p$; penalty parameter $\lambda$; weakness parameter $\alpha \in (0, 1]$
Output: estimate $\hat\beta \in \mathbb{R}^p$, support $\hat S$
1: Randomly choose $p$ weights $\{W_k\}_{k=1}^p$ from the interval $[\alpha, 1]$
2: $\hat\beta = \arg\min_{\beta\in\mathbb{R}^p} \sum_{i=1}^n (Y_i - X_i^\top\beta)^2 + 2\lambda\sum_{k=1}^p \frac{|\beta_k|}{W_k}$
3: $\hat S = \{k \mid \hat\beta_k \neq 0\}$
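A minimal rendering of Algorithm 1, assuming scikit-learn's Lasso solver: dividing $|\beta_k|$ by $W_k$ is equivalent to rescaling column $k$ by $W_k$, fitting an ordinary Lasso, and scaling the coefficients back; the `alpha` normalization for scikit-learn's objective is an assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

def randomized_lasso(Y, X, lam, weakness, rng):
    """Algorithm 1: Lasso with per-coefficient penalties lam / W_k, W_k ~ U[weakness, 1]."""
    n, p = X.shape
    W = rng.uniform(weakness, 1.0, size=p)      # step 1: random weights
    model = Lasso(alpha=lam / n, fit_intercept=False)
    model.fit(X * W, Y)                         # solve in theta_k = beta_k / W_k coordinates
    beta = model.coef_ * W                      # step 2: map back to beta
    return beta, np.flatnonzero(beta != 0)      # step 3: support
```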
4 Theoretical analysis

We provide a theoretical analysis of the TDB-Lasso and show that, under certain conditions, both the
jump points and the structure of the VCVS can be consistently estimated. Proofs are deferred to the Appendix.

4.1 Estimating jump points
We first address the issue of estimating jump points by analyzing the transformed TD-regression
problem of Eq. (9) and its feature selection properties. Feature selection using $\ell_1$ penalization has
been analyzed intensively over the past few years, and we can adapt some of the existing results to
the problem at hand. To prove that all the jump points are included in $\hat J^\star$, we first state a sparse
eigenvalue condition on the design (e.g., [6]). The minimal and maximal sparse eigenvalues of a
matrix $X \in \mathbb{R}^{n\times p}$ are defined as
$$\phi_{\min}(k, X) := \inf_{a\in\mathbb{R}^p,\,\|a\|_0\le k} \frac{\|Xa\|_2}{\|a\|_2}, \qquad \phi_{\max}(k, X) := \sup_{a\in\mathbb{R}^p,\,\|a\|_0\le k} \frac{\|Xa\|_2}{\|a\|_2}, \qquad k \le p. \qquad (11)$$
Note that in Eq. (11) the eigenvalues are computed over submatrices of size $k$ (due to the constraint
on $a$ imposed by the $\|\cdot\|_0$ norm). We can now state the sparse eigenvalue condition on the design.

A1: Let $J^\star$ be the true support of $\beta^\star$ and $J = |J^\star|$. There exist some $C > 1$ and $\kappa \ge 10$ such that
$$\frac{\phi_{\max}(CJ^2, X^\star)}{\phi_{\min}^{3/2}(CJ^2, X^\star)} < \frac{\sqrt{C}}{\kappa}. \qquad (12)$$
This condition guarantees a correlation structure between the TD-transformed covariates that allows for
detection of the jump points. Compared to the irrepresentable condition [30, 21, 27], which is necessary for
the ordinary Lasso to perform feature selection, condition A1 is much weaker [22] and is sufficient
for the randomized Lasso to select the relevant features with high probability (see also [26]).
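Sparse eigenvalues involve an optimization over all $k$-sparse vectors and are intractable to compute exactly in general; the brute-force check below, feasible only for small $k$ and $p$, is included purely to make definition (11) concrete:

```python
import numpy as np
from itertools import combinations

def sparse_eigenvalues(X, k):
    """phi_min(k, X) and phi_max(k, X) of Eq. (11) by enumerating size-k column subsets."""
    p = X.shape[1]
    phi_min, phi_max = np.inf, 0.0
    for cols in combinations(range(p), k):
        s = np.linalg.svd(X[:, cols], compute_uv=False)  # singular values of the submatrix
        phi_min = min(phi_min, s[-1])                    # smallest singular value
        phi_max = max(phi_max, s[0])                     # largest singular value
    return phi_min, phi_max
```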
Theorem 1 Let A1 be satisfied, and let the weakness $\alpha$ be given as $\alpha = \nu\,\phi_{\min}^2(CJ^2, X^\star)/(CJ^2)$
for any $\nu \in (7/\kappa, 1/\sqrt{2})$. If the minimum size of a jump is bounded away from zero as
$$\min_{k\in J^\star} |\beta_k^\star| \ge 0.3\,(CJ)^{3/2}\lambda_{\min}, \qquad (13)$$
where $\lambda_{\min} = 2\sigma^\star(\sqrt{CJ}+1)\sqrt{\frac{\log np}{n}}$ and $(\sigma^\star)^2 \ge \mathrm{Var}(Y_i^\star)$, then for $np > 10$ and $J \ge 7$ there exists
some $\delta = \delta_J \in (0, 1)$ such that for all $\pi \ge 1 - \delta$ the collection of estimated jump points $\hat J^\star$
satisfies
$$\mathbb{P}(\hat J^\star = J^\star) \ge 1 - 5/np. \qquad (14)$$
Remark: Note that Theorem 1 gives conditions under which we can recover every jump point in
every covariate. In particular, there are no assumptions on the number of covariates that change
values at a jump point. Assuming that multiple covariates change their values at a jump point, we
could further relax the condition on the minimal size of a jump given in Eq. (13). It was also pointed
out to us that the framework of [18] may be a more natural way to estimate jump points.
4.2 Identifying correct covariates
Now we address the issue of selecting the relevant features within every estimated segment. Under
the conditions of Theorem 1, the correct jump points are detected with probability arbitrarily close
to 1. This means that, under assumption A1, we can run the regular Lasso on each of the estimated
segments to select the relevant features therein. We will assume that the mutual coherence condition
[10] holds for each segment $B_j$. Let $\Sigma_j = \frac{1}{|B_j|}\sum_{i\in B_j} X_i X_i^\top$, with $\sigma_{kl}^j = (\Sigma_j)_{k,l}$.
A2: We assume there is a constant $0 < d \le 1$ such that
$$\mathbb{P}\left(\max_{k\in S_{B_j},\,l\neq k} |\sigma_{kl}^j| \le \frac{d}{|S_{B_j}|}\right) = 1. \qquad (15)$$
Assumption A2 is a mild version of the mutual coherence condition used in [7], which is necessary
for identification of the relevant covariates in each segment. Let $\hat\gamma_j$, $j = 1, \ldots, \hat B$, denote the
Lasso estimates for each segment obtained by minimizing (8).
Theorem 2 Let A2 be satisfied, and assume also that the conditions of Theorem 1 are satisfied. Let
$K = \max_{1\le j\le B} \|\gamma_j\|_0$ be an upper bound on the number of features in the segments, and let $L$ be
an upper bound on the elements of $X$. Let $\rho = \min_{1\le j\le B} |B_j|$ denote the number of samples in the
smallest segment. Then for a sequence $\delta = \delta_n \to 0$ with
$$\lambda_1 \ge 4L\sqrt{\frac{\ln(4Kp/\delta)}{\rho}} \qquad \text{and} \qquad \min_{1\le j\le B}\;\min_{k\in S_{B_j}} |\gamma_{j,k}| \ge 2\lambda_1 \ge 8L\sqrt{\frac{\ln(2Kp/\delta)}{\rho}},$$
we have
$$\lim_{n\to\infty} \mathbb{P}(\hat B = B) = 1, \qquad (16)$$
$$\lim_{n\to\infty} \max_{1\le j\le B} \mathbb{P}(\|\hat\gamma_j - \gamma_j\|_1 = 0) = 1, \qquad (17)$$
$$\lim_{n\to\infty} \min_{1\le j\le B} \mathbb{P}(\hat S_{B_j} = S_{B_j}) = 1. \qquad (18)$$
Theorem 2 states that, asymptotically, the two-stage procedure estimates the correct model, i.e., it
selects the correct jump points, and for each segment between two jump points it is able to select the
correct covariates. Furthermore, we can conclude that the procedure is consistent.
5 Practical considerations
As in the standard Lasso, the regularization parameters of the TDB-Lasso need to be tuned appropriately
to attain correct structural recovery. The TD regression procedure requires three parameters: the
penalty parameter $\lambda_2$, the cut-off parameter $\pi$, and the weakness parameter $\alpha$. In our empirical experience, the recovered set of jump points $\hat{\mathcal{T}}$ varies very little with respect to these parameters over a wide
range. The result of Theorem 1 is valid as long as $\lambda_2$ is larger than the $\lambda_{\min}$ given in the statement
of the theorem. Theorem 1 in [22] gives a way to select the cutoff $\pi$ while controlling the number
of falsely included jump points. Note that this relieves users from carefully choosing the range of
the parameter $\lambda_2$, which is challenging. The weakness parameter can be chosen from quite a large interval
(see the Appendix on the randomized Lasso), and we report our results using the value $\alpha = 0.6$.
In the second step of the algorithm, the ordinary Lasso minimizes Eq. (8) on each estimated segment
to select the relevant variables, which requires a choice of the penalty parameter $\lambda_1$. We do so by
minimizing the BIC criterion [25].
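A sketch of the BIC-based choice of $\lambda_1$ on one block, using a common BIC form $m\log(\mathrm{RSS}/m) + \log(m)\cdot\mathrm{df}$ with df the number of nonzero coefficients; the exact BIC variant used in [25, 28] may differ, so the formula here is an assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_bic(Y, X, lams):
    """Pick lambda_1 for one block by minimizing a BIC criterion over candidates `lams`."""
    m = len(Y)
    best_bic, best_lam, best_coef = np.inf, None, None
    for lam in lams:
        model = Lasso(alpha=lam / m, fit_intercept=False).fit(X, Y)
        rss = np.sum((Y - X @ model.coef_) ** 2)
        df = np.count_nonzero(model.coef_)
        bic = m * np.log(rss / m + 1e-12) + np.log(m) * df
        if bic < best_bic:
            best_bic, best_lam, best_coef = bic, lam, model.coef_
    return best_lam, best_coef
```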
In practice, one cannot verify assumptions A1 and A2 on real datasets. In cases where the assumptions are violated, the resulting set of estimated jump points is larger than the true set $\mathcal{T}$; e.g., points
close to the true jump points get included in the resulting estimate $\hat{\mathcal{T}}$. We propose to use
an ad hoc heuristic to refine the initially selected set of jump points. A commonly used procedure
for estimating linear regression models with structural changes [3] is a dynamic programming
method that considers a possible structural change at every location $t_i$, $i = 1, \ldots, n$, with a computational complexity of $O(n^2)$ (see also [15]). We modify this method to consider jump points only
in the estimated set $\hat{\mathcal{T}}$, thus considerably reducing the computational complexity to $O(|\hat{\mathcal{T}}|^2)$,
since $|\hat{\mathcal{T}}| \ll n$. The algorithm effectively chooses a subset $\tilde{\mathcal{T}} \subseteq \hat{\mathcal{T}}$ of size $\tilde B$ that minimizes the BIC
objective.
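A sketch of the restricted dynamic program: only the candidate boundaries in $\hat{\mathcal{T}}$ are considered, each segment is scored by a user-supplied cost, and a per-segment penalty plays the role of the BIC complexity term (the cost and penalty here are schematic stand-ins):

```python
import numpy as np

def refine_jumps(n, cand, segment_cost, penalty):
    """DP over sorted candidate jump indices `cand`; O(|cand|^2) segment evaluations.

    segment_cost(lo, hi) -> cost of fitting a single model on time points [lo, hi).
    """
    nodes = [0] + list(cand) + [n]           # admissible segment boundaries
    m = len(nodes)
    best = np.full(m, np.inf)
    prev = np.zeros(m, dtype=int)
    best[0] = 0.0
    for j in range(1, m):
        for i in range(j):
            c = best[i] + segment_cost(nodes[i], nodes[j]) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    jumps, j = [], m - 1                     # backtrack the selected boundaries
    while j > 0:
        j = prev[j]
        if j > 0:
            jumps.append(nodes[j])
    return sorted(jumps)
```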
6 Experiments on Synthetic Data
We compared the TDB-Lasso on synthetic data with commonly used methods for estimating VCVS
models. The synthetic data was generated as follows. We varied the sample size from n = 100
to 500 time points and fixed the number of covariates at $p = 20$.

Figure 2: Comparison results of different estimation procedures on a synthetic dataset. [Five panels (MREE, fraction of correctly identified zeros, F1, recall, and precision) are plotted against the sample size for the four methods: Kernel $\ell_1/\ell_2$, Kernel $\ell_1$, $\ell_1$ + TV, and TDB-Lasso.]

The block partition
was generated randomly and consists of ten blocks with the minimum length set to 10 time points. In
each block, only 5 of the 20 covariates affected the response. Their values were drawn uniformly at
random from $[-1, -0.1]\cup[0.1, 1]$. With this configuration, a dataset was created by randomly
drawing $X_i \sim N(0, I_p)$, $\epsilon_i \sim N(0, 1.5^2)$, and computing $Y_i = X_i^\top\beta(t_i) + \epsilon_i$ for $i = 1, \ldots, n$. For
each sample size, we independently generated 100 datasets and report results averaged over them.
A simple local regression method [13], which is commonly used for estimation in varying coefficient
models, was used as the simplest baseline for comparing the relative performance of estimation. Our
first competitor is an extension of the baseline, which uses the following estimator [28]:
$$\min_{\beta\in\mathbb{R}^{p\times n}} \sum_{i'=1}^n\sum_{i=1}^n \left(Y_i - X_i^\top\beta_{i'}\right)^2 K_h(t_{i'} - t_i) + \sum_{j=1}^p \lambda_j\sqrt{\sum_{i'=1}^n \beta_{i',j}^2}, \qquad (19)$$
where $K_h(\cdot) = \frac{1}{h}K(\cdot/h)$ is the kernel function. We will call this method "Kernel $\ell_1/\ell_2$". Another
competitor uses the $\ell_1$-penalized local regression independently at each time point, which leads to
the following estimator of $\beta(t)$:
$$\min_{\beta\in\mathbb{R}^p} \sum_{i=1}^n \left(Y_i - X_i^\top\beta\right)^2 K_h(t_i - t) + \sum_{j=1}^p \lambda_j|\beta_j|. \qquad (20)$$
We call this method "Kernel $\ell_1$". The difference between the two methods is that "Kernel $\ell_1/\ell_2$"
biases certain covariates toward zero at every time point, based on global information, whereas
"Kernel $\ell_1$" biases covariates toward zero based only on local information. The final competitor is
chosen to be the minimizer of Eq. (2) [1], which we call "$\ell_1$ + TV". The bandwidth parameter for
"Kernel $\ell_1$" and "Kernel $\ell_1/\ell_2$" is chosen using generalized cross validation of a non-penalized
estimator. The penalty parameters $\lambda_j$ are chosen according to the BIC criterion [28]. For the "$\ell_1$ +
TV" method, we optimize the BIC criterion over a two-dimensional grid of values for $\lambda_1$ and $\lambda_2$.
As a measure of estimation accuracy, we report the relative estimation error,
$$\mathrm{REE} = 100 \times \frac{\sum_{i=1}^n\sum_{j=1}^p |\hat\beta_{i,j} - \beta_{i,j}|}{\sum_{i=1}^n\sum_{j=1}^p |\tilde\beta_{i,j} - \beta_{i,j}|},$$
where $\tilde\beta$ is the baseline local linear estimator. To assess the performance of model
selection, we report precision, recall, and their harmonic mean (the F1 measure) for estimating the
relevant covariates at each time point, as well as the percentage of correctly identified irrelevant covariates.
From the experimental results, summarized in Fig. 2, we can see that the TDB-Lasso succeeds in
recovering the true model as the sample size increases. It also estimates the coefficient values with
better accuracy than the other methods. It is worth noting that "Kernel + $\ell_1$" performs better than
the "Kernel + $\ell_1/\ell_2$" approach, which is due to the violation of the assumptions made in [28]. The
"$\ell_1$ + TV" method performs better than the local linear regression approaches; however, it becomes
very slow for larger sample sizes and requires selecting two tuning parameters,
which makes it quite difficult to use. We conjecture that "$\ell_1$ + TV" and the TDB-Lasso have similar
asymptotic properties with respect to model selection; however, our numerical experiments show
that on finite-sample data the TDB-Lasso performs better.
7 Application to Time-varying Graph Structure Estimation
An interesting application of the TDB-Lasso is in structural estimation of time-varying undirected
graphical models [1, 17]. A graph structure estimation can be posed as a neighborhood selection
problem, in which the neighbors of each node are estimated independently. Neighborhood selection
in time-varying Gaussian graphical models (GGMs) is equivalent to model selection in a VCVS,
where the value of one node is regressed on the rest of the nodes. The regression problem for each node
can be solved using the TDB-Lasso. Graphs estimated in this way have neighborhoods of each
node that are constant on a partition, but the graph as a whole changes more flexibly (Fig. 1b-d).
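A schematic of the neighborhood-selection loop, with `tdb_lasso_fit` standing in for a TDB-Lasso regression of one node on the others; symmetrizing by OR-ing the two directions is one common convention, assumed here:

```python
import numpy as np

def time_varying_graph(Z, tdb_lasso_fit):
    """Z: (n, d) node observations over time. Returns boolean adjacency of shape (n, d, d).

    tdb_lasso_fit(y, X) -> (n, d-1) piecewise-constant coefficient estimates.
    """
    n, d = Z.shape
    A = np.zeros((n, d, d), dtype=bool)
    for a in range(d):
        others = [b for b in range(d) if b != a]
        beta = tdb_lasso_fit(Z[:, a], Z[:, others])  # regress node a on the rest
        A[:, a, others] = beta != 0                  # nonzero coefficient => edge
    return A | A.transpose(0, 2, 1)                  # symmetrize (OR rule)
```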
Figure 3: Brain interactions for subject "aa" when presented with visual cues of class 1 (right hand). [Panels at t = 1.00 s, t = 2.00 s, and t = 3.00 s.]

Graph structure estimation using the TDB-Lasso is demonstrated on a real dataset of electroencephalogram (EEG) measurements. We use
the brain computer interface (BCI) dataset IVa
from [11], in which the EEG data were collected from
5 subjects who were given visual cues based on
which they were required to imagine the right hand
or the right foot for 3.5 s. The measurement was performed while the visual cues were presented on the
screen (280 times), intermitted by periods of random length in which the subject could relax. We
use the data down-sampled at 100 Hz. Fig. 3 gives a visualization of the brain interactions over the
time of the experiment for subject "aa" while presented with visual cues for class 1 (right
hand). Estimated graphs of interactions between different parts of the brain for other subjects and
classes are given in the Appendix due to the space limit.
We also study whether the estimated time-varying networks are discriminative features for
classifying the type of imagination in the EEG signal. For this purpose, we perform unsupervised
clustering of the EEG signals using the time-varying networks and study whether the grouping corresponds to the true grouping according to the imagination label. We estimate a time-varying GGM using
the TDB-Lasso for each visual cue and cluster the graphs using spectral K-means clustering [29]
(using a linear kernel on the coefficients to measure similarity). Each cluster is labeled according to
the majority of the points it contains. Finally, each cue is classified based on the labels of the time points
that it contains. Table 1 summarizes the classification accuracy for each subject based on $K = 4$
clusters ($K$ was chosen as a cutoff point, where there was little further decrease in the K-means objective). We
compare this approach to the case when GGMs with a static structure are estimated [5]. Note that
supervised classifiers with special EEG features are able to achieve much higher classification
accuracy; however, our approach does not use any labeled data and can be seen as an exploratory
step. We also used the TDB-Lasso for estimating time-varying gene networks from microarray time
series data, but due to the space limit these results will be reported in a separate biological paper.
Table 1: Classification accuracies based on learned brain interactions.

Subject   | aa   | al   | av   | aw   | ay
TDB-Lasso | 0.69 | 0.80 | 0.59 | 0.67 | 0.83
Static    | 0.58 | 0.63 | 0.54 | 0.57 | 0.61
8 Discussion
We have developed the TDB-Lasso procedure, a novel approach for model selection and variable estimation in varying-coefficient varying-structure models with piecewise constant functions. The
VCVS models form a flexible nonparametric class of models that retains the interpretability of parametric
models. Due to their flexibility, important classical problems, such as linear regression with structural changes and change-point detection, as well as some more recent problems, like structure estimation
of varying graphical models, can be modeled within this class. The TDB-Lasso compares
favorably to other commonly used [28] and more recent [1] techniques for estimation in this class of models,
as demonstrated on the synthetic data. The model selection properties of the TDB-Lasso,
demonstrated on the synthetic data, are also supported by the theoretical analysis. Furthermore, we
demonstrated a way of applying the TDB-Lasso to graph estimation on a real dataset.

Application of the TDB-Lasso procedure goes beyond linear varying-coefficient regression models. A direct extension is to generalized varying-coefficient models $g(m(X_i, t_i)) = X_i^\top\beta(t_i)$, $i = 1, \ldots, n$, where $g(\cdot)$ is a given link function and $m(X_i, t_i) = \mathbb{E}[Y \mid X = X_i, t = t_i]$ is the conditional mean. Estimation in generalized varying-coefficient models proceeds by changing the
squared loss in Eq. (7) and Eq. (8) to a different, appropriate loss function. The generalized varying-coefficient models can be used to estimate the time-varying structure of discrete Markov random
fields, again by performing neighborhood selection.
References
[1] Amr Ahmed and Eric P. Xing. TESLA: Recovering time-varying networks of dependencies in social and biological studies. Proceedings of the National Academy of Sciences, 2009.
[2] Francis R. Bach. Bolasso: model consistent lasso estimation through the bootstrap. In William W. Cohen, Andrew McCallum, and Sam T. Roweis, editors, ICML, volume 307 of ACM International Conference Proceeding Series, pages 33-40. ACM, 2008.
[3] J. Bai and P. Perron. Computation and analysis of multiple structural change models. Journal of Applied Econometrics, (18):1-22, 2003.
[4] Jushan Bai and Pierre Perron. Estimating and testing linear models with multiple structural changes. Econometrica, 66(1):47-78, January 1998.
[5] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation. J. Mach. Learn. Res., 9:485-516, 2008.
[6] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705-1732, 2009.
[7] Florentina Bunea. Honest variable selection in linear and logistic regression models via l1 and l1+l2 penalization. Electronic Journal of Statistics, 2:1153, 2008.
[8] Scott S. Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1999.
[9] William S. Cleveland, Eric Grosse, and William M. Shyu. Local regression models. In John M. Chambers and Trevor J. Hastie, editors, Statistical Models in S, pages 309-376, 1991.
[10] David L. Donoho, Michael Elad, and Vladimir N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52:6-18, 2006.
[11] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller. Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms. IEEE Trans. Biomed. Eng., 51:993-1002, 2004.
[12] Jianqing Fan and Qiwei Yao. Nonlinear Time Series: Nonparametric and Parametric Methods (Springer Series in Statistics). Springer, August 2005.
[13] Jianqing Fan and Wenyang Zhang. Statistical estimation in varying-coefficient models. The Annals of Statistics, 27:1491-1518, 2000.
[14] Zaïd Harchaoui, Francis Bach, and Éric Moulines. Kernel change-point analysis. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21. 2009.
[15] Zaïd Harchaoui and Céline Lévy-Leduc. Catching change-points with lasso. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 617-624. MIT Press, Cambridge, MA, 2008.
[16] Trevor Hastie and Robert Tibshirani. Varying-coefficient models. Journal of the Royal Statistical Society, Series B (Methodological), 55(4):757-796, 1993.
[17] Mladen Kolar, Le Song, and Eric Xing. Estimating time-varying networks. arXiv:0812.5087, 2008.
[18] Marc Lavielle and Eric Moulines. Least-squares estimation of an unknown number of shifts in a time series. Journal of Time Series Analysis, 21(1):33-59, 2000.
[19] E. Lebarbier. Detecting multiple change-points in the mean of a Gaussian process by model selection. Signal Processing, 85(4):717-736, 2005.
[20] E. Mammen and S. van de Geer. Locally adaptive regression splines. Annals of Statistics, 25(1):387-413, 1997.
[21] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34:1436, 2006.
[22] Nicolai Meinshausen and Peter Bühlmann. Stability selection. Preprint, 2008.
[23] Alessandro Rinaldo. Properties and refinements of the fused lasso. Preprint, 2008.
[24] Le Song, Mladen Kolar, and Eric P. Xing. KELLER: Estimating time-evolving interactions between genes. In Proceedings of the 16th International Conference on Intelligent Systems for Molecular Biology, 2009.
[25] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society, Series B, 67(1):91-108, 2005.
[26] S. A. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the lasso, 2009.
[27] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. Preprint, 2006.
[28] H. Wang and Y. Xia. Shrinkage estimation of the varying coefficient model. Manuscript, 2008.
[29] H. Zha, C. Ding, M. Gu, X. He, and H. Simon. Spectral relaxation for k-means clustering. Pages 1057-1064. MIT Press, 2001.
[30] P. Zhao and B. Yu. On model selection consistency of lasso. J. Mach. Learn. Res., 7:2541-2563, 2006.
Optimizing Multi-class Spatio-Spectral Filters via
Bayes Error Estimation for EEG Classification
Wenming Zheng
Research Center for Learning Science
Southeast University
Nanjing, Jiangsu 210096, P.R. China
wenming [email protected]
Zhouchen Lin
Microsoft Research Asia
Beijing 100190, P.R. China
[email protected]
Abstract
The method of common spatio-spectral patterns (CSSPs) is an extension of common spatial patterns (CSPs) that utilizes the technique of delay embedding to alleviate the adverse effects of noise and artifacts on electroencephalogram
(EEG) classification. Although the CSSPs method has been shown to be more powerful than the CSPs method in EEG classification, it is only suitable for
two-class EEG classification problems. In this paper, we generalize the two-class
CSSPs method to multi-class cases. To this end, we first develop a novel theory of
multi-class Bayes error estimation and then present the multi-class CSSPs (MCSSPs) method based on this Bayes error theoretical framework. By minimizing the
estimated closed-form Bayes error, we obtain the optimal spatio-spectral filters of
MCSSPs. To demonstrate the effectiveness of the proposed method, we conduct
extensive experiments on the BCI competition 2005 data set. The experimental
results show that our method significantly outperforms previous multi-class
CSPs (MCSPs) methods in EEG classification.
1 Introduction
The development of non-invasive brain computer interfaces (BCIs) using the electroencephalogram (EEG) signal has become a very active research topic in the BCI community [1]. During the last several years, a large number of signal processing and machine learning methods have been proposed
for EEG classification [6]. It is challenging to extract discriminant features from the EEG signal
for EEG classification. This is because in most cases the EEG data are centered at zero, and thus
many traditional discriminant feature extraction methods, e.g., Fisher's linear discriminant analysis
(FLDA) [7], cannot be successfully used. Among the various EEG feature extraction methods, the
(FLDA) [7], cannot be successfully used. Among the various EEG feature extraction methods, the
common spatial patterns (CSPs) method [2] is one of the most popular. Given two classes of EEG
signal, the basic idea of CSPs is to find some projection directions such that the projections of the
EEG signal onto these directions will maximize the variance of one class and simultaneously minimize the variance of the other class. Although CSPs have achieved great success in EEG classification, this method only utilizes the spatial information of the EEG signal. To utilize both the spatial
and the temporal information of the EEG signal for classification, Lemm et al. [3] proposed a new
EEG feature extraction method, called common spatio-spectral patterns (CSSPs), which extended
the CSPs method by concatenating the original EEG data and a time-delayed one to form a longer
vector sample, and then performed EEG feature extraction, which is similar to the CSPs method,
from these padded samples. The experiments in [3] showed that the CSSPs method outperforms the
CSPs method.
A multi-class extension of the two-class CSPs method (MCSPs) was proposed by Dornhege et al.
[4] who adopted a joint approximate diagonalization (JAD) technique to find the optimal spatial
filters. Grosse-Wentrup and Buss [5] recently pointed out that the MCSPs method has two major
drawbacks. The first is that the method lacks a solid theoretical foundation with respect to
its classification error. The second is that the selection of the optimal spatial filters of MCSPs
is based on heuristics. To overcome these drawbacks, they proposed a method based on mutual
information to select the optimal spatial filters from the original MCSPs result. Nevertheless, it
should be noted that both the MCSPs methods are based on the JAD technique, where a closed-form
solution is unavailable, making the theoretical analysis difficult.
In this paper, we generalize the two-class CSSPs method to multi-class cases, hereafter called the
MCSSPs method. However, we do not adopt the same JAD technique used in the MCSPs method
to derive our MCSSPs method. Instead, we derive our MCSSPs method directly based on the Bayes
error estimation, and thus provide a solid theoretic foundation. To this end, we first develop a novel
theory of multi-class Bayes error estimation, which has a closed-form solution to find the optimal
discriminant vectors. Based on this new theoretic framework, we propose our MCSSPs method for
EEG feature extraction and recognition.
2 Brief Review of CSPs and CSSPs
Let $X_i^t = \{x_{i,j}^t \in \mathbb{R}^d \mid j = 1, \ldots, m_{i,t}\}$ ($t = 1, \ldots, n_i$; $i = 1, \ldots, c$) denote the EEG data set from
the $t$th trial of the $i$th class, where $d$, $c$, $n_i$, and $m_{i,t}$ denote the number of channels (i.e., recording
electrodes), the number of classes, the number of trials of the $i$th class, and the number of samples
(i.e., recording points) in the $t$th trial of the $i$th class, respectively. Assume that the EEG data
conditioned on each class follow a Gaussian distribution with zero mean, i.e., $p_i(x) = N(0, \Sigma_i)$
($i = 1, \ldots, c$); this model is often assumed in the literature, e.g., [5]. The main task of EEG feature extraction is then to find a linear transformation
$W \in \mathbb{R}^{d\times k}$ ($k < d$) such that, for finite training data, using the projected vectors $y_{i,j}^t = W^\top x_{i,j}^t$ to
classify the EEG signal may lead to better classification accuracy than using $x_{i,j}^t$.
2.1 The CSPs Method
For the two-class EEG classification problem, the basic idea of CSPs is to find a transformation
matrix $W$ that simultaneously diagonalizes both class covariance matrices $\Sigma_1$ and $\Sigma_2$ [2], i.e.,
$$W^\top\Sigma_i W = \Lambda_i, \quad (i = 1, 2), \qquad (1)$$
where $\Lambda_i = \mathrm{diag}\{\lambda_{i,1}, \ldots, \lambda_{i,d}\}$ ($i = 1, 2$) are diagonal matrices. The spatial filters can be chosen
as the columns of $W$ associated with the maximal or minimal ratios $\frac{\lambda_{1,j}}{\lambda_{2,j}}$ ($j = 1, \ldots, d$). Parra et
al. [6] proved that the CSPs method can be formulated as the following optimization problem:
$$\omega = \arg\max_\omega \max\left\{\frac{\omega^\top\Sigma_1\omega}{\omega^\top\Sigma_2\omega},\; \frac{\omega^\top\Sigma_2\omega}{\omega^\top\Sigma_1\omega}\right\}, \qquad (2)$$
and this optimization problem boils down to solving the following generalized eigenvalue decomposition problem:
$$\Sigma_1\omega = \lambda\Sigma_2\omega. \qquad (3)$$
Let $\omega_1, \ldots, \omega_d$ and $\lambda_1, \ldots, \lambda_d$ be the eigenvectors and corresponding eigenvalues of equation
(3); then the spatial filters $\omega_{i_1}, \ldots, \omega_{i_k}$ can be chosen from the eigenvectors $\omega_1, \ldots, \omega_d$ associated
with the largest and smallest eigenvalues.
Then $W = [\omega_{i_1}, \ldots, \omega_{i_k}]$, and the projection of $X_i^t$ with $W$ can be expressed as
$$Y_i^t = W^\top X_i^t. \qquad (4)$$
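The generalized eigenproblem (3) is directly solvable with standard linear algebra routines; below is a sketch using SciPy, where the even split of the k filters between the two ends of the spectrum is an assumption:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(S1, S2, k):
    """Solve S1 w = lambda S2 w and keep filters from both ends of the spectrum."""
    evals, evecs = eigh(S1, S2)        # generalized eigenvalues in ascending order
    half = k // 2
    idx = list(range(half)) + list(range(len(evals) - (k - half), len(evals)))
    return evecs[:, idx]               # d x k matrix of spatial filters W
```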
2.2 The CSSPs Method
The CSSPs method is an extension of CSPs obtained by concatenating the original EEG data with a time-delayed copy to form longer sample vectors, and then performing EEG feature extraction, similar to the CSPs method,
on these padded samples. More specifically, let $\sigma^\tau$ denote the
time-delay operator with delay $\tau$, i.e.,
$$\sigma^\tau(x_{i,j}^t) = x_{i,j-\tau}^t. \qquad (5)$$
Then equation (4) can be re-written as
$$\tilde Y_i^t = W_{(0)}^\top X_i^t + W_{(\tau)}^\top\sigma^\tau(X_i^t), \qquad (6)$$
where $W_{(0)}$ and $W_{(\tau)}$ are the transformation matrices acting on the EEG data $X_i^t$ and $\sigma^\tau(X_i^t)$, respectively.
To express the above equation in a form similar to CSPs, we define
$$\tilde X_i^t = \begin{pmatrix} X_i^t\\ \sigma^\tau(X_i^t) \end{pmatrix}. \qquad (7)$$
In this way, solving the CSSPs problem boils down to solving a generalized eigenvalue
problem similar to that defined in equation (3), if we use the new class covariance matrices $\tilde\Sigma_1$ and $\tilde\Sigma_2$ in
place of the original class covariance matrices $\Sigma_1$ and $\Sigma_2$, where
$$\tilde\Sigma_i = \frac{\bar\Sigma_i}{\mathrm{trace}(\bar\Sigma_i)}, \quad \text{and} \quad \bar\Sigma_i = \sum_t \tilde X_i^t(\tilde X_i^t)^\top. \qquad (8)$$
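A sketch of the delay embedding and covariance computation of (7)-(8); zero-padding the first $\tau$ delayed samples is one possible boundary convention, assumed here:

```python
import numpy as np

def delay_embed(Xt, tau):
    """Xt: (d, m) one trial; tau >= 1. Returns the (2d, m) augmented trial of Eq. (7)."""
    delayed = np.zeros_like(Xt)
    delayed[:, tau:] = Xt[:, :-tau]      # sigma^tau: shift samples by tau
    return np.vstack([Xt, delayed])

def class_covariance(trials, tau):
    """Trace-normalized class covariance of Eq. (8) from a list of (d, m) trials."""
    S = sum(delay_embed(Xt, tau) @ delay_embed(Xt, tau).T for Xt in trials)
    return S / np.trace(S)
```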
3 MCSSPs Based on Multi-class Bayes Error Estimation
In this section, we extend the CSSPs method to the multi-class case. To begin with, we develop a
novel theory of multi-class Bayes error estimation. Then we present our MCSSPs method based on
this Bayes error framework.
3.1 Multi-class Bayes Error Estimation
It is well known that the Bayes error regarding classes $i$ and $j$ can be expressed as [7]:
$$\varepsilon = \int \min\left(P_i p_i(x),\; P_j p_j(x)\right) dx, \qquad (9)$$
where $P_i$ and $p_i(x)$ are the a priori probability and the probability density function of the $i$th class,
respectively. Let $\varepsilon_{ij} = \int \sqrt{P_i P_j\, p_i(x) p_j(x)}\, dx$. By applying the inequality
$$\min(a, b) \le \sqrt{ab}, \quad \forall a, b \ge 0, \qquad (10)$$
and the assumption $p_i(x) = N(0, \Sigma_i)$, we obtain the following upper bound on the Bayes error:
$$\varepsilon \le \varepsilon_{ij} = \sqrt{P_i P_j}\exp\left(-\frac{1}{2}\ln\frac{|\bar\Sigma_{ij}|}{\sqrt{|\Sigma_i||\Sigma_j|}}\right) = \sqrt{P_i P_j}\left(\frac{|\bar\Sigma_{ij}|}{\sqrt{|\Sigma_i||\Sigma_j|}}\right)^{-\frac12}, \qquad (11)$$
where $\bar\Sigma_{ij} = \frac12(\Sigma_i + \Sigma_j)$. The expression inside $\exp(\cdot)$ is the simplified Bhattacharyya distance [7]. If
we project the samples to 1D by a vector $\omega$, then the upper bound $\varepsilon_{ij}$ becomes
$$\varepsilon_{ij} = \sqrt{P_i P_j}\left(\frac{\omega^\top\bar\Sigma_{ij}\omega}{\sqrt{(\omega^\top\Sigma_i\omega)(\omega^\top\Sigma_j\omega)}}\right)^{-\frac12}. \qquad (12)$$
Define $u = \omega^\top\bar\Sigma_{ij}\omega$ and $v = \omega^\top\Delta\Sigma_{ij}\omega$, where $\Delta\Sigma_{ij} = \frac12(\Sigma_i - \Sigma_j)$. Then $\varepsilon_{ij}$ can be written as
$$\varepsilon_{ij} = \sqrt{P_i P_j}\left(\frac{u}{\sqrt{u^2 - v^2}}\right)^{-\frac12} = \sqrt{P_i P_j}\left(1 - \left(\frac{v}{u}\right)^2\right)^{\frac14} \le \sqrt{P_i P_j}\left(1 - \frac14\left(\frac{v}{u}\right)^2\right). \qquad (13)$$
For the $c$-class problem, the upper bound on the Bayes error in the reduced feature space can be
estimated as $\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c \varepsilon_{ij}$ [8]. Then, from equation (13), we obtain
$$\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c \sqrt{P_i P_j}\left(1 - \frac14\left(\frac{\omega^\top\Delta\Sigma_{ij}\omega}{\omega^\top\bar\Sigma_{ij}\omega}\right)^2\right) = \sum_{i=1}^{c-1}\sum_{j=i+1}^c \sqrt{P_i P_j} - \frac18\sum_{i=1}^c\sum_{j=1}^c \sqrt{P_i P_j}\left(\frac{\omega^\top\Delta\Sigma_{ij}\omega}{\omega^\top\bar\Sigma_{ij}\omega}\right)^2. \qquad (14)$$
Recursively applying the inequality $\frac{a^2}{b} + \frac{c^2}{d} \ge \frac{(a+c)^2}{b+d}$, $\forall a, c \ge 0$; $b, d > 0$, to the
error bound in equation (14), we have
$$\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c \sqrt{P_i P_j} - \frac18\left(\frac{\sum_{i=1}^c\sum_{j=1}^c (P_i P_j)^{\frac54}\,|\omega^\top\Delta\Sigma_{ij}\omega|}{\sum_{i=1}^c\sum_{j=1}^c P_i P_j\,\omega^\top\bar\Sigma_{ij}\omega}\right)^2. \qquad (15)$$
Let $\bar\Sigma = \sum_{i=1}^c P_i\Sigma_i$ be the global covariance matrix. Then we have
$$\sum_{i=1}^c\sum_{j=1}^c P_i P_j\bar\Sigma_{ij} = \frac12\sum_{i=1}^c\sum_{j=1}^c P_i P_j(\Sigma_i + \Sigma_j) = \bar\Sigma. \qquad (16)$$
Combining equations (15) and (16), we have
$$\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c \sqrt{P_i P_j} - \frac18\left(\frac{\sum_{i=1}^c\sum_{j=1}^c (P_i P_j)^{\frac54}\,|\omega^\top\Delta\Sigma_{ij}\omega|}{\omega^\top\bar\Sigma\omega}\right)^2. \qquad (17)$$
Assume that the prior probabilities of the classes are the same, i.e., $P_i = P_j = P$, which holds for
most EEG experiments. Then equation (17) becomes
$$\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c P - \frac18\left(\frac{P^{\frac52}\sum_{i=1}^c\sum_{j=1}^c |\omega^\top(\Sigma_i - \Sigma_j)\omega|}{2\,\omega^\top\bar\Sigma\omega}\right)^2. \qquad (18)$$
On the other hand, from $\bar\Sigma = \sum_{i=1}^c P_i\Sigma_i = \sum_{i=1}^c P\Sigma_i$, we obtain
$$P\sum_{j=1}^c |\omega^\top(\Sigma_i - \Sigma_j)\omega| \ge \left|\sum_{j=1}^c P\,\omega^\top(\Sigma_i - \Sigma_j)\omega\right| = |\omega^\top(\Sigma_i - \bar\Sigma)\omega|. \qquad (19)$$
Combining equations (19) and (18), we obtain
$$\varepsilon \le \sum_{i=1}^{c-1}\sum_{j=i+1}^c P - \frac18\left(\frac{P^{\frac32}\sum_{i=1}^c |\omega^\top(\Sigma_i - \bar\Sigma)\omega|}{2\,\omega^\top\bar\Sigma\omega}\right)^2. \qquad (20)$$

3.2 MCSSPs Based on Multi-class Bayes Error Estimation
Let $\tilde\Sigma_i$ ($i = 1, \ldots, c$) denote the new class covariance matrices computed via equation (8). Then,
to minimize the Bayes error, we should minimize its upper bound, which boils down to maximizing
the following discriminant criterion:
$$J(\omega) = \frac{\sum_{i=1}^c |\omega^\top(\tilde\Sigma_i - \tilde{\bar\Sigma})\omega|}{\omega^\top\tilde{\bar\Sigma}\omega}, \qquad (21)$$
where $\tilde{\bar\Sigma}$ is the global covariance matrix. Based on this criterion, we define the $k$ optimal spatial
filters of MCSSPs as follows:
$$\omega_1 = \arg\max_\omega \frac{\sum_{i=1}^c |\omega^\top(\tilde\Sigma_i - \tilde{\bar\Sigma})\omega|}{\omega^\top\tilde{\bar\Sigma}\omega}, \qquad
\omega_k = \arg\max_{\omega^\top\tilde{\bar\Sigma}\omega_j = 0,\; j = 1, \ldots, k-1} \frac{\sum_{i=1}^c |\omega^\top(\tilde\Sigma_i - \tilde{\bar\Sigma})\omega|}{\omega^\top\tilde{\bar\Sigma}\omega}. \qquad (22)$$
Let $\hat\Sigma_i = \tilde{\bar\Sigma}^{-\frac12}\tilde\Sigma_i\tilde{\bar\Sigma}^{-\frac12}$ ($i = 1, \ldots, c$) and $\beta = \tilde{\bar\Sigma}^{\frac12}\omega$. Then solving the optimization problem of
equation (22) is equivalent to solving the following optimization problem:
$$\beta_1 = \arg\max_\beta \frac{\sum_{i=1}^c |\beta^\top(\hat\Sigma_i - I)\beta|}{\beta^\top\beta}, \qquad
\beta_k = \arg\max_{\beta^\top U_{k-1} = 0} \frac{\sum_{i=1}^c |\beta^\top(\hat\Sigma_i - I)\beta|}{\beta^\top\beta}, \qquad (23)$$
where $U_{k-1} = [\beta_1, \ldots, \beta_{k-1}]$ and $I$ is the identity matrix. Suppose that $s_i \in \{+1, -1\}$ denotes
the positive or negative sign of $\beta^\top(\hat\Sigma_i - I)\beta$. Then
$$|\beta^\top(\hat\Sigma_i - I)\beta| = \beta^\top s_i(\hat\Sigma_i - I)\beta. \qquad (24)$$
So equation (23) can be expressed as
$$\beta_1 = \arg\max_\beta \frac{\beta^\top\sum_{i=1}^c s_i(\hat\Sigma_i - I)\beta}{\beta^\top\beta}, \qquad
\beta_k = \arg\max_{\beta^\top U_{k-1} = 0} \frac{\beta^\top\sum_{i=1}^c s_i(\hat\Sigma_i - I)\beta}{\beta^\top\beta}. \qquad (25)$$
Let $T(s) = \sum_{i=1}^c s_i(\hat\Sigma_i - I)$, where $s = [s_1, s_2, \ldots, s_c]^\top$ and $s_i \in \{+1, -1\}$. Then the first vector
$\beta_1$ defined in equation (25) is the principal eigenvector associated with the largest eigenvalue of the
matrix $T(s)$. Suppose that we have obtained the first $k$ vectors $\beta_1, \ldots, \beta_k$. To solve for the $(k+1)$-th
vector $\beta_{k+1}$, we introduce Theorems 1 and 2 below. Similar proofs of both theorems can be
found in [9].
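A sketch of the whitening change of variables leading to (23), assuming equal class priors so that the global covariance is the plain average of the class covariances (and assuming it is positive definite):

```python
import numpy as np

def whiten(Sigmas):
    """Return the whitened matrices Sigma_hat_i = B Sigma_i B and B = global_cov^{-1/2}."""
    Sbar = sum(Sigmas) / len(Sigmas)         # global covariance under equal priors
    w, U = np.linalg.eigh(Sbar)              # assumes Sbar is positive definite
    B = U @ np.diag(w ** -0.5) @ U.T         # symmetric inverse square root
    return [B @ S @ B for S in Sigmas], B

def criterion(beta, Sigma_hats):
    """Objective of Eq. (23): sum_i |beta^T (Sigma_hat_i - I) beta| / (beta^T beta)."""
    I = np.eye(len(beta))
    return sum(abs(beta @ (S - I) @ beta) for S in Sigma_hats) / (beta @ beta)
```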
Theorem 1. Let $Q_k R_k$ be the QR decomposition of $U_k$. Then $\beta_{k+1}$ defined in (25) is the principal
eigenvector corresponding to the largest eigenvalue of the matrix
$$(I_d - Q_k Q_k^\top)\, T(s)\, (I_d - Q_k Q_k^\top).$$
Theorem 2. Suppose that $Q_k R_k$ is the QR decomposition of $U_k$. Let $U_{k+1} = (U_k \;\; \beta_{k+1})$,
$q = \beta_{k+1} - Q_k(Q_k^\top\beta_{k+1})$, and $Q_{k+1} = \left(Q_k \;\; \frac{q}{\|q\|}\right)$. Then
$$Q_{k+1}\begin{pmatrix} R_k & Q_k^\top\beta_{k+1}\\ 0 & \|q\| \end{pmatrix}$$
is the QR decomposition of $U_{k+1}$.
The above two theorems are crucial for designing a fast algorithm for solving MCSSPs: Theorem 1
makes it possible to use the power method to solve MCSSPs, while Theorem 2 makes it possible to
update $Q_{k+1}$ from $Q_k$ by adding a single column. Moreover, it is notable that
$$I_d - Q_k Q_k^\top = \prod_{i=1}^k (I_d - q_i q_i^\top) = (I_d - Q_{k-1}Q_{k-1}^\top)(I_d - q_k q_k^\top), \qquad (26)$$
where $q_i$ is the $i$-th column of $Q_k$. Equation (26) makes it possible to update the matrix $(I_d - Q_k Q_k^\top)T(s)(I_d - Q_k Q_k^\top)$ from $(I_d - Q_{k-1}Q_{k-1}^\top)T(s)(I_d - Q_{k-1}Q_{k-1}^\top)$ by the rank-one update
technique.
Let $S = \{s \mid s \in \{+1, -1\}^c\}$ denote the parameter vector set, whose cardinality is $2^c$. Then we have
$$\max_{\|\beta\|=1} \sum_{i=1}^c |\beta^\top(\hat\Sigma_i - I)\beta| = \max_{s\in S}\max_{\|\beta\|=1} \beta^\top T(s)\beta. \qquad (27)$$
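A sketch of the deflated power iteration suggested by Theorem 1 and Eq. (26); the iteration count is illustrative, and the Rayleigh quotient recovers the (possibly negative) dominant-magnitude eigenvalue:

```python
import numpy as np

def deflated_principal_eigvec(T, Q=None, iters=500, seed=0):
    """Dominant eigenvector of (I - QQ^T) T (I - QQ^T) by power iteration."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    P = np.eye(d) if Q is None else np.eye(d) - Q @ Q.T   # projector onto span(Q)^perp
    M = P @ T @ P
    b = rng.standard_normal(d)
    for _ in range(iters):
        b = M @ b
        b /= np.linalg.norm(b)
    return b, b @ M @ b                  # eigenvector and its Rayleigh quotient
```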
If $c$ is not too large, a full search over $S$, similar to that proposed in [9], is affordable. We present the
pseudo-code of our MCSSPs method using the full search over $S$ in Algorithm 1. However, if $c$ is
larger, we may adopt an approach similar to that proposed in [10], based on a greedy search,
to find a suboptimal solution. The pseudo-code based on the greedy search is given in Algorithm 2.
4 EEG Feature Extraction Based on MCSSPs

Let $X_i^t$ be the EEG sample points from the $t$th trial under the $i$th condition (i.e., the $i$th class), and let $\omega_j$
be the $j$th optimal spatio-spectral filter of the MCSSPs method. Construct the new data $\tilde X_i^t = \begin{pmatrix} X_i^t\\ \sigma^\tau(X_i^t)\end{pmatrix}$,
and let
$$\tilde p_{i,j}^t = \omega_j^\top\tilde X_i^t \qquad (28)$$
Algorithm 1: The MCSSPs Algorithm Based on the Full Search Strategy
Input:
- Input data matrix $X$ and the class label vector $l$.
Initialization:
1. Compute the average covariance matrices $\tilde\Sigma_i$ ($i = 1, \ldots, c$) and $\tilde{\bar\Sigma}$;
2. Perform the SVD $\tilde{\bar\Sigma} = U\Lambda U^\top$; compute $\tilde{\bar\Sigma}^{-\frac12} = U\Lambda^{-\frac12}U^\top$ and $\tilde{\bar\Sigma}^{-1} = U\Lambda^{-1}U^\top$;
3. Compute $\hat\Sigma_i = \tilde{\bar\Sigma}^{-\frac12}\tilde\Sigma_i\tilde{\bar\Sigma}^{-\frac12}$ and $\Delta\hat\Sigma_i = \hat\Sigma_i - I$ ($i = 1, \ldots, c$);
4. Enumerate all the elements of $S$ and denote them by $S = \{s^1, s^2, \ldots, s^{2^c}\}$;
For $i = 1, 2, \ldots, k$, Do
1. For $j = 1$ to $2^c$:
   - Compute $T(s^j)$;
   - Solve for the principal eigenvector of $T(s^j)\beta^{(j)} = \lambda^{(j)}\beta^{(j)}$ via the power iteration method;
2. Select the eigenvector $\beta_i$ with the largest eigenvalue $\max_{j=1,\ldots,2^c}\{\lambda^{(j)}\}$;
3. If $i = 1$, then $q_i \leftarrow \beta_i$, $q_i \leftarrow q_i/\|q_i\|$, and $Q_1 \leftarrow q_i$;
   else $q_i \leftarrow \beta_i - Q_{i-1}(Q_{i-1}^\top\beta_i)$, $q_i \leftarrow q_i/\|q_i\|$, and $Q_i \leftarrow (Q_{i-1}\; q_i)$;
4. Compute $\Delta\hat\Sigma_p \leftarrow \Delta\hat\Sigma_p - (\Delta\hat\Sigma_p q_i)q_i^\top - q_i(q_i^\top\Delta\hat\Sigma_p) + q_i(q_i^\top\Delta\hat\Sigma_p q_i)q_i^\top$ ($p = 1, \ldots, c$);
Output: $\omega_i = \tilde{\bar\Sigma}^{-\frac12}\beta_i$, $i = 1, \ldots, k$.
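A compact rendering of Algorithm 1 in NumPy, reusing the `whiten` and `deflated_principal_eigvec` helpers sketched earlier; the sign enumeration covers all $2^c$ vectors, so this is practical only for small $c$:

```python
import numpy as np
from itertools import product

def mcssps_full_search(Sigmas, k):
    """Return the k spatio-spectral filters omega_1..omega_k as columns of a matrix."""
    Sigma_hats, B = whiten(Sigmas)                  # Sigma_hat_i = B Sigma_i B
    c, d = len(Sigma_hats), Sigma_hats[0].shape[0]
    D = [S - np.eye(d) for S in Sigma_hats]
    Q = None
    betas = []
    for _ in range(k):
        best_lam, best_vec = -np.inf, None
        for s in product([1, -1], repeat=c):        # enumerate the sign set S
            T = sum(si * Di for si, Di in zip(s, D))
            vec, lam = deflated_principal_eigvec(T, Q)
            if lam > best_lam:
                best_lam, best_vec = lam, vec
        betas.append(best_vec)
        q = best_vec if Q is None else best_vec - Q @ (Q.T @ best_vec)
        q = (q / np.linalg.norm(q))[:, None]        # Gram-Schmidt orthonormalization
        Q = q if Q is None else np.hstack([Q, q])
    return B @ np.column_stack(betas)               # omega_i = (global cov)^{-1/2} beta_i
```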
Algorithm 2: The MCSSPs Algorithm Based on the Greedy Search Strategy
Input:
- Input data matrix $X$ and the class label vector $l$.
Initialization:
1. Compute the average covariance matrices $\tilde\Sigma_i$ ($i = 1, \ldots, c$) and $\tilde{\bar\Sigma}$;
2. Perform the SVD $\tilde{\bar\Sigma} = U\Lambda U^\top$; compute $\tilde{\bar\Sigma}^{-\frac12} = U\Lambda^{-\frac12}U^\top$ and $\tilde{\bar\Sigma}^{-1} = U\Lambda^{-1}U^\top$;
3. Compute $\hat\Sigma_i = \tilde{\bar\Sigma}^{-\frac12}\tilde\Sigma_i\tilde{\bar\Sigma}^{-\frac12}$ and $\Delta\hat\Sigma_i = \hat\Sigma_i - I$ ($i = 1, \ldots, c$);
For $i = 1, 2, \ldots, k$, Do
1. Set $s \leftarrow (1, \ldots, 1)^\top$, $s^1 \leftarrow -s$, and compute $T(s)$;
2. Solve for the principal eigenvector of $T(s)\beta = \lambda\beta$ associated with the largest absolute eigenvalue $|\lambda|$ via the power iteration method. Set $\lambda_0 \leftarrow |\lambda|$;
While $s \neq s^1$, Do
   (a) Set $s^1 \leftarrow s$;
   (b) For $j = 1, 2, \ldots, c$:
       - Set $s_j \leftarrow -s_j$, where $s_j$ denotes the $j$th element of $s$; compute $T(s)$;
       - Solve for the principal eigenvector of $T(s)\beta = \lambda\beta$ associated with the largest absolute eigenvalue $|\lambda|$ via the power iteration method, and set $\lambda_1 \leftarrow |\lambda|$;
       - If $\lambda_1 \le \lambda_0$, then $s_j \leftarrow -s_j$; else $\lambda_0 \leftarrow \lambda_1$;
   (c) Compute $T(s)$ and solve for the principal eigenvector $\beta_i$ of $T(s)\beta_i = \lambda\beta_i$ associated
       with the largest absolute eigenvalue $|\lambda|$ via the power iteration method;
3. If $i = 1$, then $q_i \leftarrow \beta_i$, $q_i \leftarrow q_i/\|q_i\|$, and $Q_1 \leftarrow q_i$;
   else $q_i \leftarrow \beta_i - Q_{i-1}(Q_{i-1}^\top\beta_i)$, $q_i \leftarrow q_i/\|q_i\|$, and $Q_i \leftarrow (Q_{i-1}\; q_i)$;
4. Compute $\Delta\hat\Sigma_p \leftarrow \Delta\hat\Sigma_p - (\Delta\hat\Sigma_p q_i)q_i^\top - q_i(q_i^\top\Delta\hat\Sigma_p) + q_i(q_i^\top\Delta\hat\Sigma_p q_i)q_i^\top$ ($p = 1, \ldots, c$);
Output: $\omega_i = \tilde{\bar\Sigma}^{-\frac12}\beta_i$, $i = 1, \ldots, k$.
be the projections of the EEG data $\tilde X_i^t$ onto the projection vector $\omega_j$. Then the variance of the
elements in the projections $\tilde p_{i,j}^t$ can be expressed as
$$v_{i,j}^t = \mathrm{var}(\omega_j^\top\tilde X_i^t) = \omega_j^\top\tilde\Sigma_i^t\omega_j, \qquad (29)$$
where $\tilde\Sigma_i^t$ denotes the covariance matrix of the EEG data in the $t$th trial of the $i$th class.
For all $k$ spatio-spectral filters $\omega_1, \ldots, \omega_k$, we obtain the $k$ features $v_{i,j}^t$ ($j = 1, \ldots, k$) from the
$t$th trial of EEG data. Now let $v_i^t = [v_{i,1}^t, \ldots, v_{i,k}^t]^\top$ be the feature vector associated with the $t$th
trial of the $i$th class. Similar to the method used in [2], the following log-transformed form is
used as the final feature vector of the EEG signal:
$$f_i^t = \log\left(\frac{v_i^t}{\sum_k v_{i,k}^t}\right), \qquad (30)$$
where the log function is applied to each element of the vector independently. The log
transformation serves to approximate a normal distribution of the data [2].
For given unknown EEG data $Z$, we use the same procedure to extract the corresponding features, i.e., we first construct the new data $\tilde Z = \begin{pmatrix} Z\\ \sigma^\tau(Z)\end{pmatrix}$, and then adopt the above method to
extract the corresponding discriminant feature vector $f^z$, where
$$f^z = \log\left(\frac{v^z}{\sum_k v_k^z}\right), \quad v^z = [v_1^z, \ldots, v_k^z]^\top, \quad \text{and} \quad v_j^z = \omega_j^\top\tilde\Sigma^z\omega_j, \qquad (31)$$
in which $\tilde\Sigma^z$ denotes the covariance matrix of $\tilde Z$.
After obtaining the discriminant feature vectors $f_i^t$ ($i = 1, \ldots, c$; $t = 1, 2, \ldots, n_i$) and $f^z$, we can
classify the unknown EEG data into one of the $c$ classes by using a classifier, e.g., the K-nearest
neighbor (K-NN) classifier [7].
5 Experiments
To test the performance of our MCSSPs method, we conduct experiments on a real-world EEG
data set: data set IIIa from "BCI competition 2005" [11]. This data set
consists of recordings from three subjects (k3b, k6b, and l1b), who performed four different motor
imagery tasks (left/right hand, one foot, or tongue) according to a cue. During the experiments, the
EEG signal was recorded from 60 channels, using the left mastoid as reference and the right mastoid as
ground. The EEG was sampled at 250 Hz and was filtered between 1 and 50 Hz with the notch filter
on. Each trial lasted 7 s, with the motor imagery performed during the last 4 s of each trial. For
subjects k6b and l1b, a total of 60 trials per condition were recorded; for subject k3b, a total of 90
trials per condition were recorded. Similar to the method in [5], we discard the four trials of subject
k6b with missing data. From each trial of the EEG raw data we use only part of the sample points,
namely points 1001 to 1750, as the experimental data, since they carry most of the information in
the EEG signal. Consequently, each trial contains 750 data points. We adopt a two-fold cross
validation strategy, i.e., for all the trials of each condition per subject, we
divide them into two groups, and each group is used as training data and testing data once. We conduct
five rounds of experiments in total, with different divisions of the training and testing data sets, to
obtain ten recognition rates, which are averaged to give the final recognition rate. For comparison, we
also conduct the same experiment using the two MCSPs methods proposed in [4] and [5], respectively.
To better identify the effect of using different EEG filters, a simple classifier, the K-NN classifier with
the Euclidean distance and 7 nearest neighbors, is used for the final classification.
Table 1 shows the average classification rates (%) and standard deviations (%) of the three
methods, while Figure 1 shows the average recognition rates of our MCSSPs method with different
choices of the delay time $\tau$. From Table 1, we can see that the MCSSPs method achieves much
better classification performance than the MCSPs methods. (The results using the MCSPs method proposed in [5] are inferior to those reported in [5] because we did
not pre-filter the EEG signals with a Butterworth filter and did not use logistic regression classifiers for
classification either, as we are more interested in comparing the effect of different EEG filters.)
Table 1: Comparison of the classification rates (%) and standard deviations (%) between MCSPs and MCSSPs.

Subject | MCSPs [4]    | MCSPs [5]    | MCSSPs/Bayes
k3b     | 46.17 (6.15) | 84.89 (2.74) | 85.83 (2.23)
k6b     | 33.54 (4.27) | 50.09 (2.59) | 56.28 (3.87)
l1b     | 35.17 (3.92) | 62.08 (3.99) | 68.58 (6.16)
Figure 1: The classification rates (%) of our MCSSPs method with different choices of $\tau$. [Classification rate (50-90%) is plotted against the delay $\tau = 1, \ldots, 10$ for subjects k3b, k6b, and l1b.]
6 Conclusions
In this paper, we extended the two-class CSSPs method to multi-class cases via Bayes error
estimation. We first proposed a novel theory of multi-class Bayes error estimation, which has a
closed-form solution for finding the optimal discriminant vectors for feature extraction. We then applied
the multi-class Bayes error estimation theory to generalize the two-class CSSPs method to multi-class cases. The experiments on data set IIIa from BCI competition 2005 have shown that
our MCSSPs method is superior to the MCSPs methods. With more elaborate treatments, e.g.,
preprocessing the EEG signal and adopting a more advanced classifier, even higher classification
rates are possible. These will be reported in our forthcoming papers.
Acknowledgment
This work was partly supported by National Natural Science Foundation of China under Grants
60503023 and 60872160.
References
[1] B. Blankertz, G. Curio, & K.-R. Müller (2002) Classifying single trial EEG: towards brain computer
interfacing. In: T.G. Dietterich, S. Becker, Z. Ghahramani (Eds.), Advances in Neural Information
Processing Systems 14, pp. 157-164. Cambridge, MA: MIT Press.
[2] H. Ramoser, J. Mueller-Gerking, & G. Pfurtscheller (2000) Optimal spatial filtering of single trial EEG
during imaged hand movement. IEEE Transactions on Rehabilitation Engineering. 8(4):441-446.
8
[3] S. Lemm, B. Blankertz, G. Curio, & K.-R. Müller (2005) Spatio-spectral filters for improved classification
of single trial EEG. IEEE Transactions on Biomedical Engineering. 52(9):1541-1548.
[4] G. Dornhege, B. Blankertz, G. Curio, & K.-R. Müller (2004) Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms. IEEE Transactions on Biomedical
Engineering. 51(6):993-1002.
[5] M. Grosse-Wentrup, & M. Buss (2008) Multiclass Common Spatial Patterns and Information Theoretic
Feature Extraction. IEEE Transactions on Biomedical Engineering. 55:1991-2000.
[6] L. C. Parra, C. D. Spence, A. D. Gerson, & P. Sajda (2005) Recipes for linear analysis of EEG. Neuroimage, 28:326-341.
[7] K. Fukunaga (1990) Introduction to Statistical Pattern Recognition (Second Edition). New York: Academic Press.
[8] J.T. Chu & J.C. Chuen (1967) Error Probability in Decision Functions for Character Recognition. Journal
of the Association for Computing Machinery. 14(2):273-280.
[9] W. Zheng (2009) Heteroscedastic Feature Extraction for Texture Classification. IEEE Signal Processing
Letters, 16(9):766-769.
[10] W. Zheng, H. Tang, Z. Lin, & T.S. Huang (2009) A Novel Approach to Expression Recognition from
Non-frontal Face Images. Proceedings of 2009 IEEE International Conference on Computer Vision
(ICCV2009), pp.1901-1908.
[11] B. Blankertz, K.-R. Müller, D. Krusienski, G. Schalk, J.R. Wolpaw, A. Schloegl, G. Pfurtscheller, J.R.
Millan, M. Schroeder, & N. Birbaumer (2006) The BCI competition III: Validating alternative approaches
to actual BCI problems. IEEE Transactions on Neural Systems and Rehabilitation Engineering 14:153-159.
St?ephan Cl?emenc?on
Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141
[email protected]
Marine Depecker
Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141
[email protected]
Nicolas Vayatis
ENS Cachan & UniverSud - CMLA UMR CNRS 8536
[email protected]
Abstract
The purpose of the paper is to explore the connection between multivariate homogeneity tests and AUC optimization. The latter problem has recently received
much attention in the statistical learning literature. From the elementary observation that, in the two-sample problem setup, the null assumption corresponds to the
situation where the area under the optimal ROC curve is equal to 1/2, we propose a two-stage testing method based on data splitting. A nearly optimal scoring
function in the AUC sense is first learnt from one of the two half-samples. Data
from the remaining half-sample are then projected onto the real line and eventually ranked according to the scoring function computed at the first stage. The last
step amounts to performing a standard Mann-Whitney Wilcoxon test in the onedimensional framework. We show that the learning step of the procedure does
not affect the consistency of the test as well as its properties in terms of power,
provided the ranking produced is accurate enough in the AUC sense. The results
of a numerical experiment are eventually displayed in order to show the efficiency
of the method.
1 Introduction
The statistical problem of testing homogeneity of two samples arises in a wide variety of applications, ranging from bioinformatics to psychometrics through database attribute matching for instance. Practitioners may rely upon a wide range of nonparametric tests for detecting differences in
distribution (or location) between two one-dimensional samples, among which tests based on linear rank statistics, such as the celebrated Mann-Whitney Wilcoxon test. Being a (locally) optimal
procedure, the latter is the most widely used in homogeneity testing. Such rank statistics were originally introduced because they are distribution-free under the null hypothesis, thus permitting to set
critical values in a non asymptotic fashion for any given level. Beyond this simple fact, the crucial advantage of rank-based tests relies in their asymptotic efficiency in a variety of nonparametric
situations. We refer for instance to [15] for an account of asymptotically (locally) uniformly most
powerful tests and a comprehensive treatment of asymptotic optimality of R-statistics.
In a different context, consider data sampled from a feature space X ? Rd of high dimension with
binary label information in {?1, +1}. The problem of ranking such data, also known as the bipartite
ranking problem, has recently gained an increasing attention in the machine-learning literature, see
[5, 10, 19]. Here, the goal is to learn, based on a pooled set of labeled examples, how to rank
novel data with unknown labels, by means of a scoring function s : X ? R, in order that positive
ones appear on top of the list. Over the last few years, this global learning problem has been the
subject of intensive research, involving issues related to the design of appropriate criteria reflecting
ranking performance or valid extensions of the Empirical Risk Minimization approach (ERM) to
this framework [2, 6, 11]. In most applications, the gold standard for measuring the capacity of a
scoring function s to discriminate between the class populations however remains the area under
the ROC curve criterion (AUC) and most ranking/scoring methods boil down to maximizing its
empirical counterpart. The empirical AUC may be viewed as the Mann-Whitney statistic based on
the images of the multivariate samples by s, see [13, 9, 12, 18].
The purpose of this paper is to investigate how ranking methods for multivariate data with binary
labels may be exploited in order to extend the rank-based test approach for testing homogeneity
between two samples to a multidimensional setting. Precisely, the testing principle promoted in this
paper is described through an extension of the Mann-Whitney Wilcoxon test, based on a preliminary
ranking of the data through empirical AUC maximization. The consistency of the test is proved to
hold, as soon as the learning procedure is consistent in the AUC sense and its capacity to detect
?small? deviations from the homogeneity assumption is illustrated by a simulation example.
The rest of the paper is organized as follows. In Section 2, the homogeneity testing problem is
formulated and standard approaches are recalled, with focus on the one-dimensional case. Section
3 highlights the connection of the two-sample problem with optimal ROC curves and gives some
insight into our approach. In Section 4, we describe the proposed testing procedure and set preliminary grounds for its theoretical validity. Simulation results are presented in Section 5 and technical
details are deferred to the Appendix.
2 The two-sample problem
We start off by setting out the notations needed throughout the paper and formulate the two-sample
problem precisely. We recall standard approaches to homogeneity testing. In particular, special
attention is paid to the one-dimensional case, for which two-sample linear rank statistics allow for
constructing locally optimal tests in a variety of situations.
Probabilistic setup. The problem considered in this paper is to test the hypothesis that two independent
i.i.d. random samples, valued in $\mathbb{R}^d$ with $d \geq 1$, namely $X_1^+, \ldots, X_n^+$ and
$X_1^-, \ldots, X_m^-$, are identical in distribution. We denote by $G(dx)$ the distribution of the
$X_i^+$'s, while the one of the $X_j^-$'s is denoted by $H(dx)$. We also denote by $\mathbb{P}_{(G,H)}$
the probability distribution on the underlying space. The testing problem is tackled here from a
nonparametric perspective, meaning that the distributions $G(dx)$ and $H(dx)$ are assumed to be unknown.
We suppose in addition that $G(dx)$ and $H(dx)$ are continuous distributions and the asymptotics are
described as follows: we set $N = m + n$ and suppose that $n/N \to p \in (0, 1)$ as $n, m$ tend to
infinity. Formally, the problem is to test the null hypothesis $H_0: G = H$ against the alternative
$H_1: G \neq H$, based on the two data sets. In this paper, we place ourselves in the difficult case
where $G$ and $H$ have the same support, $\mathcal{X} \subset \mathbb{R}^d$ say.
Measuring dissimilarity. A possible approach is to consider a probability (pseudo)-metric D on the
space of probability distributions on $\mathbb{R}^d$. Based on the simple observation that $D(G, H) = 0$ under
the null hypothesis, possible testing procedures consist of computing estimates $\widehat{G}_n$ and $\widehat{H}_m$ of the
underlying distributions and rejecting $H_0$ for "large" values of the statistic $D(\widehat{G}_n, \widehat{H}_m)$, see [3] for
instance. Beyond computational difficulties and the necessity of identifying a proper standardization
in order to make the statistic asymptotically pivotal (i.e. its limit distribution is parameter free), the
major issue one faces when trying to implement such plug-in procedures is related to the curse of
dimensionality. Indeed, plug-in procedures involve the consistent estimation of distributions on a
feature space of possibly very large dimension $d \in \mathbb{N}^*$.
Various metrics or pseudo-metrics can be considered for measuring dissimilarity between two probability distributions. We refer to [17] for an excellent account of metrics in spaces of probability measures and their applications. Typical examples include the chi-square distance, the Kullback-Leibler
divergence, the Hellinger distance, the Kolmogorov-Smirnov distance and its generalizations of the
following type:
$$\mathrm{MMD}(G, H) = \sup_{f \in \mathcal{F}} \left\{ \int_{x \in \mathcal{X}} f(x)\, G(dx) - \int_{x \in \mathcal{X}} f(x)\, H(dx) \right\}, \qquad (1)$$
where $\mathcal{F}$ denotes a supposedly rich enough class of functions $f: \mathcal{X} \subset \mathbb{R}^d \to \mathbb{R}$, so that
$\mathrm{MMD}(G, H) = 0$ if and only if $G = H$. The quantity (1) is called the Maximum Mean Discrepancy in [1], where a unit ball of a reproducing kernel Hilbert space $\mathcal{H}$ is chosen for $\mathcal{F}$ in order
to allow for efficient computation of the supremum (1), see also [23]. The view promoted in the
present paper for the two-sample problem is very different in nature and is inspired from traditional
procedures in the particular one-dimensional case.
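For readers who want to experiment with the quantity (1), the following Python sketch computes the (biased) plug-in estimate of the squared MMD for the RKHS unit-ball choice of $\mathcal{F}$ with a Gaussian RBF kernel, for which the supremum has a closed form in terms of kernel evaluations. This is an illustrative sketch, not the exact estimator studied in [1]; the bandwidth `sigma` is an assumed free parameter.

```python
# A minimal sketch of the biased plug-in MMD^2 with a Gaussian RBF kernel,
# using MMD^2 = E k(X,X') + E k(Y,Y') - 2 E k(X,Y).
import numpy as np

def mmd2_biased(x, y, sigma=1.0):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```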
The one-dimensional case. A classical approach to the two-sample problem in the one-dimensional
setup lies in ordering the observed data using the natural order on the real line R and then basing the
decision depending on the ranks of the positive instances among the pooled sample:
$$\forall i \in \{1, \ldots, n\}, \quad R_i = N\, F_{n,m}(X_i^+),$$
where $F_{n,m}(t) = (n/N)\,\widehat{G}_n(t) + (m/N)\,\widehat{H}_m(t)$, and denoting by
$\widehat{G}_n(t) = n^{-1} \sum_{i \leq n} \mathbb{I}\{X_i^+ \leq t\}$ and
$\widehat{H}_m(t) = m^{-1} \sum_{j \leq m} \mathbb{I}\{X_j^- \leq t\}$ the empirical counterparts of the
cumulative distribution functions $G$ and $H$ respectively. This approach is grounded in invariance considerations, practical
simplicity and optimality of tests based on R-estimates for this problem, depending on the class
of alternative hypotheses considered. Assuming the distributions G and H continuous, the idea
underlying such tests lies in the simple fact that, under the null hypothesis, the ranks of positive
instances are uniformly distributed over {1, . . . , N }. A popular choice is to consider the sum of
?positive ranks?, leading to the well-known rank-sum Wilcoxon statistic [22]
cn,m =
W
n
X
Ri ,
i=1
which is distribution-free under $H_0$; see Section 6.9 in [15] for further details. We also recall that
the validity framework of the rank-sum test classically extends to the case where some observations
are tied (i.e. when $G$ and/or $H$ may be degenerate at some points), by assigning the mean rank to ties
[4]. We shall denote by $\mathcal{W}_{n,m}$ the distribution of the (average rank version of the) Wilcoxon statistic
$\widehat{W}_{n,m}$ under the homogeneity hypothesis. Since tables for the distributions $\mathcal{W}_{n,m}$ are available, no
asymptotic approximation result is needed for building a test of appropriate level. As will be
recalled below, the test based on the R-statistic $\widehat{W}_{n,m}$ has appealing optimality properties for certain
classes of alternatives. Although R-estimates (i.e. functions of the $R_i$'s) form a very rich collection
of statistics, for lack of space we restrict our attention to the two-sample Wilcoxon statistic in
this paper.
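As a minimal illustration, the rank-sum statistic with mean ranks for ties can be computed in a few lines of Python; this is a sketch assuming NumPy and SciPy are available, and the function name is ours.

```python
# A minimal sketch of the two-sample rank-sum statistic: pool the samples,
# assign mean ranks to ties, and sum the ranks of the "positive" sample.
import numpy as np
from scipy.stats import rankdata

def wilcoxon_rank_sum(x_pos, x_neg):
    pooled = np.concatenate([x_pos, x_neg])
    ranks = rankdata(pooled)            # default method gives mean ranks for ties
    return ranks[:len(x_pos)].sum()
```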
Heuristics. We may now give a first insight into the way we shall tackle the problem in the multidimensional case. Suppose that we are able to "project" the multivariate sampling data onto the real
line through a certain scoring function $s: \mathbb{R}^d \to \mathbb{R}$ in order to preserve the possible dissimilarity
(considered in a certain specific sense, which we shall discuss below) between the two populations,
leading then to "large" values of the score $s(x)$ for the positive instances and "small" values for the
negative ones with high probability. Now that the dimension of the problem has been brought down
to 1, observations can be ranked and one may perform for instance a basic two-sample Wilcoxon
test based on the data sets $s(X_1^+), \ldots, s(X_n^+)$ and $s(X_1^-), \ldots, s(X_m^-)$.
Remark 1 (LEARNING A STUDENT t TEST.) We point out that this is precisely the task that Linear
Discriminant Analysis (LDA) tries to perform, in a restrictive Gaussian framework however (namely when
$G$ and $H$ are normal distributions with the same covariance structure). In order to test deviations
from the homogeneity hypothesis on the basis of the original samples, one may consider applying a
univariate Student t test based on the "projected" data $\{\widehat{\delta}(X_i^+): 1 \leq i \leq n\}$ and
$\{\widehat{\delta}(X_i^-): 1 \leq i \leq m\}$, where $\widehat{\delta}$ denotes the empirical discriminant
function; this may be shown to be an appealing alternative to multivariate extensions of the standard t test [14].
The goal of this paper is to show how to exploit recent advances in ROC/AUC optimization in order
to extend this heuristic to more general situations than the parametric one mentioned above.
3 Connections with bipartite ranking
ROC curves are among the most widely used graphical tools for visualizing the dissimilarity between two one-dimensional distributions in a large variety of applications such as anomaly detection
in signal analysis, medical diagnosis, information retrieval, etc. As this concept is at the heart of
the ranking issue in the binary setting, which forms the first stage of the testing procedure sketched
above, we recall its definition precisely.
Definition 1 (ROC curve) Let $g$ and $h$ be two cumulative distribution functions on $\mathbb{R}$. The ROC
curve related to the distributions $g(dt)$ and $h(dt)$ is the graph of the mapping
$$\mathrm{ROC}((g, h), \cdot): \alpha \in [0, 1] \mapsto 1 - g \circ h^{-1}(1 - \alpha),$$
denoting by $f^{-1}(u) = \inf\{t \in \mathbb{R}: f(t) \geq u\}$ the generalized inverse of any càdlàg function
$f: \mathbb{R} \to \mathbb{R}$. When the distributions $g(dt)$ and $h(dt)$ are continuous, it can alternatively be defined as
the parametric curve $t \in \mathbb{R} \mapsto (1 - h(t), 1 - g(t))$.
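A minimal Python sketch of the corresponding empirical object evaluates the parametric form at the pooled sample points; the function name and interface are ours, for illustration only.

```python
# A minimal sketch of the empirical ROC curve of Definition 1:
# the parametric curve t -> (1 - h(t), 1 - g(t)) at pooled sample values,
# with g, h replaced by the empirical CDFs of the two samples.
import numpy as np

def empirical_roc(pos, neg):
    thresholds = np.sort(np.concatenate([pos, neg]))
    g = np.searchsorted(np.sort(pos), thresholds, side="right") / len(pos)
    h = np.searchsorted(np.sort(neg), thresholds, side="right") / len(neg)
    return 1.0 - h, 1.0 - g        # (false positive rate, true positive rate)
```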
One may show that $\mathrm{ROC}((g, h), \cdot)$ is above the diagonal $\Delta: \alpha \in [0, 1] \mapsto \alpha$ of the ROC space if
and only if the distribution $g$ is stochastically larger than $h$, and it is concave as soon as the likelihood
ratio $dg/dh$ is increasing. When $g(dt)$ and $h(dt)$ are both continuous, the curves $\mathrm{ROC}((g, h), \cdot)$ and
$\mathrm{ROC}((h, g), \cdot)$ are symmetric with respect to the diagonal of the ROC space with slope equal to one.
Refer to [9] for a detailed list of properties of ROC curves.
The notion of ROC curve provides a functional measure of dissimilarity between distributions on
$\mathbb{R}$: the closer to the corners of the unit square the curve $\mathrm{ROC}((g, h), \cdot)$ is, the more dissimilar the
distributions $g$ and $h$ are. For instance, it exactly coincides with the upper left-hand corner of the unit
square, namely the curve $\alpha \in [0, 1] \mapsto \mathbb{I}\{\alpha \in\, ]0, 1]\}$, when there exists $l \in \mathbb{R}$ such that the support
of the distribution $g(dt)$ is a subset of $[l, \infty[$, while $]-\infty, l]$ contains the support of $h$. In contrast, it
merges with the diagonal $\Delta$ when $g = h$. Hence, the distance of $\mathrm{ROC}((g, h), \cdot)$ to the diagonal may
be naturally used to quantify departure from the homogeneous situation. The $L_1$-norm provides a
convenient way of measuring such a distance, leading to the classical AUC criterion (AUC standing
for area under the ROC curve):
$$\mathrm{AUC}(g, h) = \int_{\alpha=0}^{1} \mathrm{ROC}((g, h), \alpha)\, d\alpha.$$
The popularity of this summary quantity arises from the fact that it can be interpreted in a probabilistic fashion, and may be viewed as a distance between the locations of the two distributions. In
this respect, we recall the following result.
Proposition 1 Let $g$ and $h$ be two distributions on $\mathbb{R}$. We have:
$$\mathrm{AUC}(g, h) = \mathbb{P}\{Z > Z'\} + \frac{1}{2}\,\mathbb{P}\{Z = Z'\} = \frac{1}{2} + \mathbb{E}[h(Z)] - \mathbb{E}[g(Z')],$$
where $Z$ and $Z'$ denote independent random variables, drawn from $g(dt)$ and $h(dt)$ respectively.
We recall that the homogeneous situation corresponds to the case where $\mathrm{AUC}(g, h) = 1/2$ and that the
Mann-Whitney statistic [16]
$$\widehat{U}_{n,m} = \frac{1}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} \left( \mathbb{I}\{X_j^- < X_i^+\} + \frac{1}{2}\,\mathbb{I}\{X_j^- = X_i^+\} \right)$$
is exactly the empirical counterpart of $\mathrm{AUC}(g, h)$. It yields exactly the same statistical decisions as
the two-sample Wilcoxon statistic, insofar as they are related as follows:
$$W_{n,m} = nm\,\widehat{U}_{n,m} + n(n+1)/2.$$
For this reason, the related test of hypotheses is called Mann-Whitney Wilcoxon test (MWW).
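For illustration, the Mann-Whitney statistic above, i.e. the empirical AUC of Proposition 1, can be computed directly from its probabilistic form; a quadratic-time Python sketch, assuming 1-D NumPy arrays as inputs, is shown below.

```python
# A minimal sketch of the Mann-Whitney / empirical-AUC statistic, with the
# 1/2 correction for ties; W_{n,m} = n*m*U + n*(n+1)/2 recovers the rank-sum.
import numpy as np

def mann_whitney_auc(x_pos, x_neg):
    diff = x_pos[:, None] - x_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
```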
Multidimensional extension. In the multivariate setup, the notion of ROC curve can be extended in
the following way. Let $H(dx)$ and $G(dx)$ be two given distributions on $\mathbb{R}^d$ and let
$\mathcal{S} = \{s: \mathcal{X} \to \mathbb{R} \mid s \text{ Borel measurable}\}$. For any scoring function $s \in \mathcal{S}$, we denote by $H_s(dt)$ and $G_s(dt)$ the images
of $H(dx)$ and $G(dx)$ by the mapping $s(x)$. In addition, we set for all $s \in \mathcal{S}$:
$$\mathrm{ROC}(s, \cdot) = \mathrm{ROC}((G_s, H_s), \cdot) \quad \text{and} \quad \mathrm{AUC}(s) = \mathrm{AUC}(G_s, H_s).$$
Clearly, the families of univariate distributions $\{G_s\}_{s \in \mathcal{S}}$ and $\{H_s\}_{s \in \mathcal{S}}$ entirely characterize the
multivariate probability measures $G$ and $H$. One may thus consider evaluating the dissimilarity
between $H(dx)$ and $G(dx)$ on $\mathbb{R}^d$ through the family of curves $\{\mathrm{ROC}(s, \cdot)\}_{s \in \mathcal{S}}$ or through the
collection of scalar values $\{\mathrm{AUC}(s)\}_{s \in \mathcal{S}}$. Going back to the homogeneity testing problem, the null
assumption may be reformulated as
$$H_0: \forall s \in \mathcal{S},\ \mathrm{AUC}(s) = 1/2 \quad \text{versus} \quad H_1: \exists s \in \mathcal{S} \text{ such that } \mathrm{AUC}(s) > 1/2.$$
The next result, following from standard Neyman-Pearson type arguments, shows that the supremum
$\sup_{s \in \mathcal{S}} \mathrm{AUC}(s)$ is attained by increasing transforms of the likelihood ratio $\eta(x) = dG/dH(x)$,
$x \in \mathcal{X}$. Scoring functions with largest AUC are natural candidates for detecting the alternative $H_1$.
Theorem 1 (OPTIMAL ROC CURVE.) The set $\mathcal{S}^* = \{T \circ \eta \mid T: \mathbb{R} \to \mathbb{R} \text{ strictly increasing}\}$
defines the collection of optimal scoring functions in the sense that, for all $s \in \mathcal{S}$,
$$\forall \alpha \in [0, 1], \quad \mathrm{ROC}(s, \alpha) \leq \mathrm{ROC}^*(\alpha) \quad \text{and} \quad \mathrm{AUC}(s) \leq \mathrm{AUC}^*,$$
with the notations $\mathrm{ROC}^*(\cdot) = \mathrm{ROC}(s^*, \cdot)$ and $\mathrm{AUC}^* = \mathrm{AUC}(s^*)$ for $s^* \in \mathcal{S}^*$.
Refer to the proof of Proposition 4 in [9] for a detailed argument. Notice that, as the likelihood ratio is
invariant under such transforms, i.e. $dG_{s^*}/dH_{s^*}(s^*(X)) = dG/dH(X)$ for $s^* \in \mathcal{S}^*$, replacing $X$
by $s^*(X)$ leaves the optimal ROC curve untouched. The following corollary is straightforward.
Corollary 1 For any $s^* \in \mathcal{S}^*$, we have: $\sup_{s \in \mathcal{S}} |\mathrm{AUC}(s) - 1/2| = \mathrm{AUC}(s^*) - 1/2$.
Consequently, the homogeneity testing problem may be seen as closely related to the problem of
estimating the optimal $\mathrm{AUC}^*$, since it may be reformulated as follows:
$$H_0: \mathrm{AUC}^* = 1/2 \quad \text{versus} \quad H_1: \mathrm{AUC}^* > 1/2.$$
Knowing how a single optimal scoring function $s^* \in \mathcal{S}^*$ ranks observations drawn from a mixture
of $G$ and $H$ is sufficient for detecting departure from the homogeneity hypothesis in an optimal fashion, the MWW statistic computed from the $(s^*(X_i^+), s^*(X_j^-))$'s being an asymptotically efficient
estimate of $\mathrm{AUC}^*$ and thus yielding an asymptotically (locally) uniformly most powerful test.
Let $F(dx) = p\,G(dx) + (1 - p)\,H(dx)$ and denote by $F_s(dt)$ the image of the distribution $F$ by
$s \in \mathcal{S}$. Notice that, for any $s^* \in \mathcal{S}^*$, the scoring function $S^* = F_{s^*} \circ s^*$ is still optimal, and the
score variable $S^*(X)$ is uniformly distributed on $[0, 1]$ under the mixture distribution $F$ (in addition,
it may easily be shown to be independent of the choice of $s^* \in \mathcal{S}^*$). Observe in addition that $\mathrm{AUC}^* - 1/2$
may be viewed as the Earth Mover's distance between the class distributions $H_{S^*}$ and $G_{S^*}$ for this
"normalization":
$$\mathrm{AUC}^* - 1/2 = \int_{t=0}^{1} \{H_{S^*}(t) - G_{S^*}(t)\}\, dt.$$
Empirical AUC maximization. A natural way of inferring the value of $\mathrm{AUC}^*$ and/or selecting
a scoring function $s^*$ with AUC nearly as large as $\mathrm{AUC}^*$ is to maximize an empirical version of
the AUC criterion over a set $\mathcal{S}_0$ of scoring function candidates. We assume that the class $\mathcal{S}_0$ is
sufficiently rich to guarantee that the bias $\mathrm{AUC}^* - \sup_{s \in \mathcal{S}_0} \mathrm{AUC}(s)$ is small, and that its complexity is controlled (when measured for instance by the VC dimension of the collection of sets
$\{\{x \in \mathcal{X}: s(x) \geq t\},\ (s, t) \in \mathcal{S}_0 \times \mathbb{R}\}$ as in [7], or by the order of magnitude of conditional
Rademacher averages as in [6]). We recall that, under such assumptions, universal consistency
results have been established for empirical AUC maximizers, together with distribution-free generalization bounds; see [2, 6] for instance. We point out that this approach can be extended to other
relevant ranking criteria. The contours of a theory guaranteeing the statistical performance of the
ERM approach for empirical risk functionals defined by R-estimates have been sketched in [8].
4 The two-stage testing procedure
Assume that the data have been split into two subsamples: the first data set
$\mathcal{D}_{n_0,m_0} = \{X_1^+, \ldots, X_{n_0}^+\} \cup \{X_1^-, \ldots, X_{m_0}^-\}$ will be used for deriving a scoring function on $\mathcal{X}$, and
the second data set $\mathcal{D}'_{n_1,m_1} = \{X_{n_0+1}^+, \ldots, X_{n_0+n_1}^+\} \cup \{X_{m_0+1}^-, \ldots, X_{m_0+m_1}^-\}$ will serve to
compute a pseudo two-sample Wilcoxon test statistic from the ranked data. We set $N_0 = n_0 + m_0$
and $N_1 = n_1 + m_1$ and suppose that $n_i/N_i \to p$ as $n_i$ and $m_i$ tend to infinity, for $i \in \{0, 1\}$.
Let $\alpha \in (0, 1)$. The testing procedure at level $\alpha$ is then performed in two steps, as follows.
SCORE-BASED RANK-SUM WILCOXON TEST
1. Ranking. From dataset $\mathcal{D}_{n_0,m_0}$, perform empirical AUC maximization over $\mathcal{S}_0 \subset \mathcal{S}$, yielding
the scoring function $\hat{s}(x) = \hat{s}_{n_0,m_0}(x)$. Compute the ranks of the data with positive labels among
the sample $\mathcal{D}'_{n_1,m_1}$, once sorted by increasing order of magnitude of their score:
$$\widehat{R}_i = N_1\, \widehat{S}(X_{n_0+i}^+) \quad \text{for } 1 \leq i \leq n_1,$$
where $\widehat{F}_{\hat{s}}(t) = N_1^{-1} \left( \sum_{i=1}^{n_1} \mathbb{I}\{\hat{s}(X_{n_0+i}^+) \leq t\} + \sum_{j=1}^{m_1} \mathbb{I}\{\hat{s}(X_{m_0+j}^-) \leq t\} \right)$ and $\widehat{S} = \widehat{F}_{\hat{s}} \circ \hat{s}$.
2. Rank-sum Wilcoxon test. Reject the homogeneity hypothesis $H_0$ when
$$\widehat{W}_{n_1,m_1} \geq Q_{n_1,m_1}(\alpha),$$
where $\widehat{W}_{n_1,m_1} = \sum_{i=1}^{n_1} \widehat{R}_i$ and $Q_{n_1,m_1}(\alpha)$ denotes the $(1-\alpha)$-quantile of the distribution $\mathcal{W}_{n_1,m_1}$.
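The following Python sketch outlines the full two-stage pipeline under the assumption that `fit_scorer` is some AUC-consistent bipartite ranking routine supplied by the user (for example, an implementation of TreeRank, or any classifier returning real-valued scores). The critical value is delegated to SciPy's Mann-Whitney test, which is equivalent to thresholding the rank-sum statistic; all names here are ours, for illustration only.

```python
# A minimal sketch of the two-stage score-based MWW test.
import numpy as np
from scipy.stats import mannwhitneyu

def score_based_mww(pos, neg, fit_scorer, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pos, neg = rng.permutation(pos), rng.permutation(neg)
    n0, m0 = len(pos) // 2, len(neg) // 2
    s = fit_scorer(pos[:n0], neg[:m0])           # stage 1: learn the scoring rule
    sp, sn = s(pos[n0:]), s(neg[m0:])            # stage 2: project second half
    p_value = mannwhitneyu(sp, sn, alternative="greater").pvalue
    return p_value < alpha                       # reject H0 at level alpha
```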
The next result shows that the learning step does not affect the consistency property, provided it
outputs a universally consistent scoring rule.
Theorem 2 Let $\alpha \in (0, 1/2)$ and suppose that the ranking/scoring method involved at step 1 yields
a universally consistent scoring rule $\hat{s}$ in the AUC sense. The score-based rank-sum Wilcoxon test
$$\Phi = \mathbb{I}\left\{ \widehat{W}_{n_1,m_1} \geq Q_{n_1,m_1}(\alpha) \right\}$$
is universally consistent as $n_i$ and $m_i$ tend to $\infty$ for $i \in \{0, 1\}$ at level $\alpha$, in the following sense.
1. It is of level $\alpha$ for all $n_i$ and $m_i$, $i \in \{0, 1\}$: $\mathbb{P}_{(H,H)}\{\Phi = +1\} \leq \alpha$ for any $H(dx)$.
2. Its power converges to 1 as $n_i$ and $m_i$, $i \in \{0, 1\}$, tend to infinity for every alternative:
$\lim_{n_i, m_i \to \infty} \mathbb{P}_{(G,H)}\{\Phi = +1\} = 1$ for every pair of distinct distributions $(G, H)$.
Remark 2 (CONVERGENCE RATES.) Under adequate complexity assumptions on the set $\mathcal{S}_0$ over
which empirical AUC maximization or one of its variants is performed, distribution-free rate bounds
for the generalization ability of scoring rules may be established in terms of AUC; see Corollary 6 in
[2] or Corollary 3 in [6]. As shown by a careful examination of Theorem 2, this permits to derive a
convergence rate for the decay of the score-based type II error of MWW under any given alternative
$(G, H)$, when combined with the Berry-Esseen theorem for two-sample $U$-statistics. For instance,
if a typical $1/\sqrt{N_0}$ rate bound holds for $\hat{s}(x)$, one may show that choosing $N_1 \sim N_0$ then yields a
rate of order $O_{\mathbb{P}_{(G,H)}}(1/\sqrt{N_0})$.
Remark 3 (INFINITE-DIMENSIONAL FEATURE SPACE.) We point out that the method presented
here is by no means restricted to the case where $\mathcal{X}$ is of finite dimension, but may be applied to
functional input data, provided an AUC-consistent ranking procedure can be applied in this context.
5 Numerical examples
The procedure proposed above is extremely simple once the delicate AUC maximization stage is
performed. A striking property is the fact that critical thresholds are set automatically, with no reference to the data. We first consider a low-dimensional toy experiment and display some numerical
results. Two independent i.i.d. samples of equal size $m = n = N/2$ have been generated from
two conditional 4-dimensional Gaussian distributions on the hypercube $[-2, 2]^4$. Their parameters
are denoted by $\mu^+$ and $\mu^-$ for the means, and $\Gamma$ is their common covariance matrix. Three cases
have been considered. The first example corresponds to a homogeneous situation: $\mu^+ = \mu^- = \mu_1$,
where $\mu_1 = (-0.96, -0.83, 0.29, -1.34)$ and the upper diagonals of $\Gamma_1$ are $(6.52, 3.84, 4.72, 3.1)$,
$(-1.89, 3.56, 1.52)$, $(-3.2, 0.2)$ and $(-2.6)$. In the second example, we test homogeneity under an
alternative "fairly far" from $H_0$, where $\mu^- = \mu_1$, $\mu^+ = (0.17, -0.24, 0.04, -1.02)$ and $\Gamma$ as before.
Eventually, the third example corresponds to a much more difficult problem, "close" to $H_0$, where
$\mu^- = (1.19, -1.20, -0.02, -0.16)$, $\mu^+ = (1.08, -1.18, -0.1, -0.06)$ and the upper diagonals of
$\Gamma$ are $(1.83, 6.02, 0.69, 4.99)$, $(-0.65, -0.31, 1.03)$, $(-0.54, -0.03)$ and $(-1.24)$. The difficulty of
each of these examples is illustrated by Fig. 1 in terms of (optimal) ROC curve. The table in Fig.
1 gives Monte-Carlo estimates of the power of three testing procedures when $\alpha = 0.05$ (averaged
over $B = 150$ replications): 1) the score-based MWW test, where ranking is performed using the
scoring function output by a run of the TreeRank algorithm [9] on a training sample $\mathcal{D}_{n_0,m_0}$;
2) the LDA-based Student test sketched in Remark 1; and 3) a bootstrap version of the MMD-test with
a Gaussian RBF kernel proposed in [1].
DataSet   Sample size (m0, m1)   LDA-Student   Score-based MWW   MMD
Ex. 1     (500, 500)             6%            1%                5%
Ex. 2     (500, 500)             99%           99%               99%
Ex. 3     (2000, 1000)           75%           45%               30%
Ex. 3     (3000, 2000)           98%           73%               65%

Figure 1: Powers and ROC curves describing the "distance" to $H_0$ for each situation: example 1
(red), example 2 (black) and example 3 (blue).
In the second series of experimental results, Gaussian distributions with the same covariance matrix
on $\mathbb{R}^d$ are generated, with larger values for the input space dimension, $d \in \{10, 30\}$. We
considered several problems at given toughness. The increasing difficulty of the testing problems
considered is controlled through the Euclidean distance between the means, $\Delta\mu = \|\mu^+ - \mu^-\|$, and
is described by Fig. 2, which depicts the related ROC curves, corresponding to situations where
$\Delta\mu \in \{0.2, 0.1, 0.08, 0.05\}$. On these examples, we compared the performance of four methods at
level $\alpha = 0.05$: the score-based MWW test, where ranking is again performed using the scoring
function output by a run of the TreeRank algorithm on a training sample $\mathcal{D}_{n_0,m_0}$; the KFDA
test proposed in [23]; a bootstrap version of the MMD-test with a Gaussian RBF kernel ($MMD_{boot}$);
and another version, with moment matching to Pearson curves ($MMD_{mom}$), also with a
Gaussian RBF kernel (see [1]). Monte-Carlo estimates of the corresponding powers are given in the
table displayed in Fig. 2.
6 Conclusion
We have provided a sound strategy, involving a preliminary bipartite ranking stage, to extend classical approaches for testing homogeneity based on ranks to a multidimensional setup. Consistency
of the extended version of the popular MWW test has been established, under the assumption of
universal consistency of the ranking method in the AUC sense. This principle can be applied to
other R-statistics, standing as natural criteria for the bipartite ranking problem [8]. Beyond the illustrative preliminary simulation example displayed in this paper, we intend to investigate the relative
efficiency of such tests with respect to other tests standing as natural candidates in this setup.
Appendix - Proof of Theorem 2
Observe that, conditioned upon the first sample $\mathcal{D}_{n_0,m_0}$, the statistic $\widehat{W}_{n_1,m_1}$ is distributed according
to $\mathcal{W}_{n_1,m_1}$ under the null hypothesis. For any distribution $H$, we thus have, for all $\alpha \in (0, 1/2)$:
$$\mathbb{P}_{(H,H)}\left\{ \widehat{W}_{n_1,m_1} > Q_{n_1,m_1}(\alpha) \mid \mathcal{D}_{n_0,m_0} \right\} \leq \alpha.$$
Taking the expectation, we obtain that the test is of level $\alpha$ for all $n$, $m$.
Power estimates (%) for the four methods (table displayed in Fig. 2):

                    MMD_boot   MMD_mom   KFDA   Score-based MWW
case 1: Δμ = 0.2
  d = 10              86         86       64          90
  d = 30              54         58       36          85
case 2: Δμ = 0.1
  d = 10              20         20       20          58
  d = 30               9          7       15          47
case 3: Δμ = 0.08
  d = 10              19         19       16          42
  d = 30               5          7        9          32
case 4: Δμ = 0.05
  d = 10              11         13       13          18
  d = 30               6          6        8          16

Figure 2: Power estimates and ROC curves describing the "distance" to $H_0$ for each situation: case
1 (black), case 2 (blue), case 3 (green) and case 4 (red).
For any $s \in \mathcal{S}$, denote by $U_{n_1,m_1}(s)$ the empirical AUC of $s$ evaluated on the sample $\mathcal{D}'_{n_1,m_1}$.
Recall first that it follows from the two-sample $U$-statistic theorem (see [20]) that
$$\sqrt{N_1}\,\{U_{n_1,m_1}(s) - \mathrm{AUC}(s)\} = \frac{\sqrt{N_1}}{n_1} \sum_{i=1}^{n_1} \left\{ H_s(s(X_{i+n_0}^+)) - \mathbb{E}[H_s(s(X_1^+))] \right\} - \frac{\sqrt{N_1}}{m_1} \sum_{j=1}^{m_1} \left\{ G_s(s(X_{j+m_0}^-)) - \mathbb{E}[G_s(s(X_1^-))] \right\} + o_{\mathbb{P}_{(G,H)}}(1)$$
as $n$ and $m$ tend to infinity. In particular, for any pair of distributions $(G, H)$, the centered random
variable $\sqrt{N}\,\{U_{n_1,m_1}(s) - \mathrm{AUC}(s)\}$ is asymptotically normal with limit variance
$\sigma_s^2(G, H) = \mathrm{Var}(H_s(s(X_1^+)))/p + \mathrm{Var}(G_s(s(X_1^-)))/(1-p)$ under $\mathbb{P}_{(G,H)}$. Notice that
$\sigma_s^2(H, H) = 1/(12\,p(1-p))$ for any $s \in \mathcal{S}$ such that the distribution $H_s(dt)$ is continuous. Refer to Theorem 12.4 in [21] for
further details.
We now place ourselves under an alternative hypothesis described by a pair of distinct distributions
$(G, H)$, so that $\mathrm{AUC}^* > 1/2$. Setting $\widehat{U}_{n_1,m_1} = U_{n_1,m_1}(\hat{s})$ and decomposing $\mathrm{AUC}^* - U_{n_1,m_1}$ as
the sum of the deficit of AUC of $\hat{s}(x)$, namely $\mathrm{AUC}^* - \mathrm{AUC}(\hat{s})$, and the deviation $\mathrm{AUC}(\hat{s}) - \widehat{U}_{n_1,m_1}$
evaluated on the sample $\mathcal{D}'_{n_1,m_1}$, the type II error of $\Phi$, given by $\mathbb{P}_{(G,H)}\{\widehat{W}_{n_1,m_1} < Q_{n_1,m_1}(\alpha)\}$, may
be bounded by
$$\mathbb{P}_{(G,H)}\left\{ \sqrt{N_1}\,\big(\widehat{U}_{n_1,m_1} - \mathrm{AUC}(\hat{s})\big) \leq \epsilon_{n_1,m_1}(\alpha) \right\} + \mathbb{P}_{(G,H)}\left\{ \sqrt{N_1}\,\big(\mathrm{AUC}(\hat{s}) - \mathrm{AUC}^*\big) \leq \epsilon_{n_1,m_1}(\alpha) \right\},$$
where
$$\epsilon_{n_1,m_1}(\alpha) = \frac{1}{2}\sqrt{N_1}\left( \frac{Q_{n_1,m_1}(\alpha)}{n_1 m_1} - \frac{n_1+1}{2m_1} - \frac{1}{2} \right) - \frac{1}{2}\sqrt{N_1}\left( \mathrm{AUC}^* - \frac{1}{2} \right).$$
Observe that, by virtue of the CLT recalled above, $\sqrt{N_1}\,\big(Q_{n_1,m_1}(\alpha)/(n_1 m_1) - (n_1+1)/(2m_1) - 1/2\big)$
converges to $z_\alpha/\sqrt{12\,p(1-p)}$. Now, the fact that the type II error of $\Phi$ converges to zero as $n_i$ and
$m_i$ tend to $\infty$ for $i \in \{0, 1\}$ immediately follows from the assumed universal consistency of
$\hat{s}(x)$ in the AUC sense, combined with the CLT for two-sample $U$-statistics and the theorem of
dominated convergence. Due to space limitations, details are omitted.
References
[1] A. Gretton, K.M. Borgwardt, M.J. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[2] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. J. Mach. Learn. Res., 6:393-425, 2005.
[3] G. Biau and L. Gyorfi. On the asymptotic properties of a nonparametric l1-test statistic of homogeneity. IEEE Transactions on Information Theory, 51(11):3965-3973, 2005.
[4] Y.K. Cheung and J.H. Klotz. The Mann Whitney Wilcoxon distribution using linked list. Statistica Sinica, 7:805-813, 1997.
[5] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In P. Auer and R. Meir, editors, Proceedings of COLT 2005, volume 3559 of Lecture Notes in Computer Science, pages 1-15. Springer, 2005.
[6] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical risk minimization of U-statistics. The Annals of Statistics, 36(2):844-874, 2008.
[7] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 8:2671-2699, 2007.
[8] S. Clémençon and N. Vayatis. Empirical performance maximization based on linear rank statistics. In Advances in Neural Information Processing Systems, 2009.
[9] S. Clémençon and N. Vayatis. Tree-based ranking methods. IEEE Transactions on Information Theory, 55(9):4316-4336, 2009.
[10] W.W. Cohen, R.E. Schapire, and Y. Singer. Learning to order things. In NIPS '97: Proceedings of the 1997 conference on Advances in Neural Information Processing Systems 10, pages 451-457, Cambridge, MA, USA, 1998. MIT Press.
[11] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[12] C. Ferri, P.A. Flach, and J. Hernández-Orallo. Learning decision trees using the area under the ROC curve. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 139-146, 2002.
[13] Y. Freund, R.D. Iyer, R.E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[14] S. Kotz and S. Nadarajah. Multivariate t Distributions and Their Applications. Cambridge University Press, 2004.
[15] E.L. Lehmann and J.P. Romano. Testing Statistical Hypotheses. Springer, 2005.
[16] H.B. Mann and D.R. Whitney. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat., 18:50-60, 1947.
[17] A. Rachev. Probability Metrics and the Stability of Stochastic Models. Wiley, 1991.
[18] A. Rakotomamonjy. Optimizing area under the ROC curve with SVMs. In Proceedings of the First Workshop on ROC Analysis in AI, 2004.
[19] C. Rudin, C. Cortes, M. Mohri, and R.E. Schapire. Margin-based ranking and boosting meet in the middle. In P. Auer and R. Meir, editors, Proceedings of COLT 2005, volume 3559 of Lecture Notes in Computer Science, pages 63-78. Springer, 2005.
[20] R.J. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, 1980.
[21] A.W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
[22] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics, 1:80-83, 1945.
[23] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
Correlation Coefficients Are Insufficient
for Analyzing Spike Count Dependencies
Arno Onken
Technische Universität Berlin / BCCN Berlin
Franklinstr. 28/29, 10587 Berlin, Germany
[email protected]
Steffen Grünewälder
University College London
Gower Street, London WC1E 6BT, UK
[email protected]
Klaus Obermayer
Technische Universität Berlin / BCCN Berlin
[email protected]
Abstract
The linear correlation coefficient is typically used to characterize and analyze dependencies of neural spike counts. Here, we show that the correlation coefficient is
in general insufficient to characterize these dependencies. We construct two neuron spike count models with Poisson-like marginals and vary their dependence
structure using copulas. To this end, we construct a copula that allows to keep
the spike counts uncorrelated while varying their dependence strength. Moreover,
we employ a network of leaky integrate-and-fire neurons to investigate whether
weakly correlated spike counts with strong dependencies are likely to occur in
real networks. We find that the entropy of uncorrelated but dependent spike count
distributions can deviate from the corresponding distribution with independent
components by more than 25 % and that weakly correlated but strongly dependent
spike counts are very likely to occur in biological networks. Finally, we introduce
a test for deciding whether the dependence structure of distributions with Poisson-like marginals is well characterized by the linear correlation coefficient and verify
it for different copula-based models.
1 Introduction
The linear correlation coefficient is of central importance in many studies that deal with spike count
data of neural populations. For example, a low correlation coefficient is often used as an evidence
for independence in recorded data and to justify simplifying model assumptions (e.g. [1, 2]). In line
with this many computational studies constructed distributions for observed data based solely on
reported correlation coefficients [3, 4, 5, 6]. The correlation coefficient is in this sense treated as an
equivalent to the full dependence.
The correlation coefficient is also extensively used in combination with information measures such
as the Fisher information (for continuous variables only) and the Shannon information to assess the
importance of couplings between neurons for neural coding [7]. The discussion in the literature
encircles two main topics. On the one hand, it is debated whether pairwise correlations versus
higher order correlations across different neurons are sufficient for obtaining good estimates of the
information (see e.g. [8, 9, 10]). On the other hand, it is questioned whether correlations matter at
all (see e.g. [11, 12, 13]). In [13], for example, based on the correlation coefficient it was argued
that the impact of correlations is negligible for small populations of neurons.
The correlation coefficient is one measure of dependence among others. It has become common to
report only the correlation coefficient of recorded spike trains without reporting any other properties
of the actual dependence structure (see e.g. [3, 14, 15]). The problem with this common practice is
that it is unclear beforehand whether the linear correlation coefficient suffices to describe the dependence or at least the relevant part of the dependence. Of course, it is well known that uncorrelated
does not imply statistically independent. Yet, it might seem likely that this is not important for
realistic spike count distributions which have a Poisson-like shape. Problems could be restricted
to pathological cases that are very unlikely to occur in realistic biological networks. At least one
might expect to find a tendency of weak dependencies for uncorrelated distributions with Poissonlike marginals. It might also seem likely that these dependencies are unimportant in terms of typical
information measures even if they are present and go unnoticed or are ignored.
In this paper we show that these assumptions are false. Indeed, the dependence structure can have
a profound impact on the information of spike count distributions with Poisson-like single neuron
statistics. This impact can be substantial not only for large networks of neurons but even for two
neuron distributions. As a matter of fact, the correlation coefficient places only a weak constraint on
the dependence structure. Moreover, we show that uncorrelated or weakly correlated spike counts
with strong dependencies are very likely to be common in biological networks. Thus, it is not
sufficient to report only the correlation coefficient or to derive strong implications like independence
from a low correlation coefficient alone. At least a statistical test should be applied that states for
a given significance level whether the dependence is well characterized by the linear correlation
coefficient. We will introduce such a test in this paper. The test is adjusted to the setting that a
neuroscientist typically faces, namely the case of Poisson-like spike count distributions of single
neurons and small numbers of samples.
In the next section, we describe state-of-the-art methods for modeling dependent spike counts, to
compute their entropy, and to generate network models based on integrate-and-fire neurons. Section 3 shows examples of what can go wrong for entropy estimation when relying on the correlation
coefficient only. Emergences of such cases in simple network models are explored. Section 4 introduces the linear correlation test which is tailored to the needs of neuroscience applications and the
section examines its performance on different dependence structures. The paper concludes with a
discussion of the advantages and limitations of the presented methods and cases.
2 General methods
We will now describe formal aspects of spike count models and their Shannon information.
2.1 Copula-based models with discrete marginals
A copula is a cumulative distribution function (CDF) which is defined on the unit hypercube and has
uniform marginals [16]. Formally, a bivariate copula $C$ is defined as follows:
Definition 1. A copula is a function $C: [0, 1]^2 \to [0, 1]$ such that:
1. $\forall u, v \in [0, 1]$: $C(u, 0) = 0 = C(0, v)$ and $C(u, 1) = u$ and $C(1, v) = v$.
2. $\forall u_1, v_1, u_2, v_2 \in [0, 1]$ with $u_1 \leq u_2$ and $v_1 \leq v_2$:
$C(u_2, v_2) - C(u_2, v_1) - C(u_1, v_2) + C(u_1, v_1) \geq 0$.
Copulas can be used to couple arbitrary marginal CDFs $F_{X_1}, F_{X_2}$ to form a joint CDF $F_{\vec{X}}$, such that
$F_{\vec{X}}(r_1, r_2) = C(F_{X_1}(r_1), F_{X_2}(r_2))$ holds [16]. There are many families of copulas representing
different dependence structures. One example is the bivariate Frank family [17]. Its CDF is given by
$$C_\theta(u, v) = \begin{cases} -\dfrac{1}{\theta} \ln\left( 1 + \dfrac{(e^{-\theta u} - 1)(e^{-\theta v} - 1)}{e^{-\theta} - 1} \right) & \text{if } \theta \neq 0, \\ uv & \text{if } \theta = 0. \end{cases} \qquad (1)$$
The Frank family is commutative and radially symmetric: its probability density $c_\theta$ satisfies
$\forall (u, v) \in [0, 1]^2: c_\theta(u, v) = c_\theta(1-u, 1-v)$ [17]. The scalar parameter $\theta$ controls the strength of
dependence. As $\theta \to \pm\infty$ the copula approaches deterministic positive/negative dependence: knowledge of one variable implies knowledge of the other (the so-called Fréchet-Hoeffding bounds [16]). The
linear correlation coefficient is capable of measuring this dependence. Another example is the bivariate Gaussian copula family, defined as $C_\rho(u, v) = \Phi_\rho(\Phi^{-1}(u), \Phi^{-1}(v))$, where $\Phi_\rho$ is the CDF of
the bivariate zero-mean unit-variance normal distribution with correlation $\rho$ and $\Phi^{-1}$ is
the inverse of the CDF of the univariate zero-mean unit-variance Gaussian distribution. This family can be used to construct multivariate distributions with Gauss-like dependencies and arbitrary
marginals.
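For concreteness, the Frank copula CDF of Eq. (1) can be written in a few lines of Python; this is a minimal sketch, with the numerically safer `expm1`/`log1p` forms chosen by us.

```python
# A minimal sketch of the bivariate Frank copula CDF of Eq. (1).
import numpy as np

def frank_cdf(u, v, theta):
    if theta == 0.0:
        return u * v                                   # independence copula
    num = np.expm1(-theta * u) * np.expm1(-theta * v)  # (e^{-t u}-1)(e^{-t v}-1)
    return -np.log1p(num / np.expm1(-theta)) / theta
```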
For a given realization $\vec{r}$, which can represent the counts of two neurons, we can set $u_i = F_{X_i}(r_i)$
and $F_{\vec{X}}(\vec{r}) = C_\theta(\vec{u})$, where the $F_{X_i}$ can be arbitrary univariate CDFs. Thereby, we can generate a
multivariate distribution with specified marginals $F_{X_i}$ and a dependence structure determined by $C$.
Copulas allow us to have different discrete marginal distributions [18, 19]. Typically, the Poisson
distribution is a good approximation to spike count variations of single neurons [20]. For this distribution the CDFs of the marginals take the form
$$F_{X_i}(r; \lambda_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k!}\, e^{-\lambda_i},$$
where $\lambda_i$ is the mean spike count of neuron $i$ for a given bin size. We will also use the negative
binomial distribution as a generalization of the Poisson distribution:
$$F_{X_i}(r; \lambda_i, \upsilon_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k!} \cdot \frac{1}{(1 + \lambda_i/\upsilon_i)^{\upsilon_i}} \cdot \frac{\Gamma(\upsilon_i + k)}{\Gamma(\upsilon_i)\,(\upsilon_i + \lambda_i)^k},$$
where $\Gamma$ is the gamma function. The additional parameter $\upsilon_i$ controls the degree of overdispersion:
the smaller the value of $\upsilon_i$, the greater the Fano factor; the variance is given by $\lambda_i + \lambda_i^2/\upsilon_i$. As $\upsilon_i$
approaches infinity, the negative binomial distribution converges to the Poisson distribution.
Likelihoods of discrete vectors can be computed by applying the inclusion-exclusion principle
of Poincaré and Sylvester. The probability of a realization $(x_1, x_2)$ is given by
$P_{\vec{X}}(x_1, x_2) = F_{\vec{X}}(x_1, x_2) - F_{\vec{X}}(x_1 - 1, x_2) - F_{\vec{X}}(x_1, x_2 - 1) + F_{\vec{X}}(x_1 - 1, x_2 - 1)$.
Thus, we can compute the probability mass of a realization $\vec{x}$ using only the CDF of $\vec{X}$.
2.2 Computation of information entropy
The Shannon entropy [21] of dependent spike counts $\vec{X}$ is a measure of the information that a
decoder is missing when it does not know the value $\vec{x}$ of $\vec{X}$. It is given by
$$H(\vec{X}) = E[I(\vec{X})] = \sum_{\vec{x} \in \mathbb{N}^d} P_{\vec{X}}(\vec{x})\, I(\vec{x}),$$
where $I(\vec{x}) = -\log_2(P_{\vec{X}}(\vec{x}))$ is the self-information of the realization $\vec{x}$.
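In practice the infinite sum over $\mathbb{N}^2$ must be truncated; the following sketch assumes a cut-off `r_max` chosen large enough that the neglected tail mass is negligible for the given rates.

```python
# A minimal sketch of the Shannon entropy (in bits) of a bivariate spike count
# distribution, truncating the sum over N^2 at r_max.
import numpy as np

def entropy_bits(pmf, r_max=50):
    h = 0.0
    for x1 in range(r_max + 1):
        for x2 in range(r_max + 1):
            p = pmf(x1, x2)
            if p > 0:
                h -= p * np.log2(p)
    return h

# Example usage with the joint_pmf sketch above:
# h = entropy_bits(lambda a, b: joint_pmf(a, b, lambda u, v: frank_cdf(u, v, 5.0), 5.0, 5.0))
```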
2.3 Leaky integrate-and-fire model
The leaky integrate-and-fire neuron is a simple neuron model that models only subthreshold membrane potentials. The equation for the membrane potential is given by
$$\tau_m \frac{dV}{dt} = E_L - V + R_m I_s,$$
where $E_L$ denotes the resting membrane potential, $R_m$ is the total membrane resistance, $I_s$ is
the synaptic input current, and $\tau_m$ is the time constant. The model is completed by a rule which
states that whenever $V$ reaches a threshold $V_{th}$, an action potential is fired and $V$ is reset to
$V_{reset}$ [22]. In all of our simulations we used $\tau_m = 20$ ms, $R_m = 20$ M$\Omega$, $V_{th} = -50$ mV,
and $V_{reset} = V_{init} = -65$ mV, which are typical values found in [22]. Current-based synaptic
input for an isolated presynaptic release that occurs at time $t = 0$ can be modeled by the so-called
$\alpha$-function [22]: $I_s = I_{\max}\, \frac{t}{\tau_s} \exp(1 - \frac{t}{\tau_s})$. The function reaches its peak $I_{\max}$ at time $t = \tau_s$ and then
decays with time constant $\tau_s$. We can model an excitatory synapse by a positive $I_{\max}$ and an inhibitory synapse by a negative $I_{\max}$. We used $I_{\max} = 1$ nA for excitatory synapses, $I_{\max} = -1$ nA
for inhibitory synapses, and $\tau_s = 5$ ms.
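A forward-Euler integration of these dynamics is sketched below with the parameter values quoted above; the time step `dt = 0.1` ms is an assumption of ours, and `spike_times` holds presynaptic release times in ms. Note that M$\Omega$ times nA conveniently yields mV, so the units are consistent.

```python
# A minimal sketch of the leaky integrate-and-fire dynamics with
# alpha-function synaptic currents; forward Euler integration.
import numpy as np

def simulate_lif(spike_times, t_max=500.0, dt=0.1, i_max=1.0):
    tau_m, r_m, tau_s = 20.0, 20.0, 5.0          # ms, MOhm, ms
    e_l, v_th, v_reset = -65.0, -50.0, -65.0     # mV
    v, out = e_l, []
    for step in range(int(t_max / dt)):
        t = step * dt
        i_s = sum(i_max * ((t - t0) / tau_s) * np.exp(1 - (t - t0) / tau_s)
                  for t0 in spike_times if t >= t0)   # nA; r_m * i_s is in mV
        v += dt / tau_m * (e_l - v + r_m * i_s)
        if v >= v_th:                                 # threshold crossing
            out.append(t)
            v = v_reset                               # reset after a spike
    return out
```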
[Figure 1 shows cumulative distribution functions (panels a-c) and probability density functions (panels d-f) of selected Frank shuffle copulas over $(u, v) \in [0, 1]^2$.]
Figure 1: Cumulative distribution functions (a-c) and probability density functions (d-f) of selected
Frank shuffle copulas. (a, d): Independence: $\theta_1 = \theta_2 = 0$. (b, e): Strong negative dependence
in the outer square: $\theta_1 = -30$, $\theta_2 = 5$, $\varepsilon = 0.2$. (c, f): Strong positive dependence in the inner square:
$\theta_1 = -5$, $\theta_2 = 30$, $\varepsilon = 0.2$.
3
Counter examples
In this section we describe entropy variations that can occur when relying on the correlation coefficient only. We will evaluate this effect for models of spike counts which have Poisson-like marginals
and show that such effects can occur in very simple biological networks.
3.1 Frank shuffle copula
We will now introduce the Frank shuffle copula family. This copula family allows arbitrarily strong
dependencies with a correlation coefficient of zero for attached Poisson-like marginals. It uses two
Frank copulas (see Section 2.1) in different regions of its domain such that the linear correlation
coefficient would vanish.
Proposition 1. The following function defines a copula \forall θ_1, θ_2 \in \mathbb{R}, ε \in [0, 0.5]:
C_{\theta_1,\theta_2,\varepsilon}(u, v) = \begin{cases} C_{\theta_1}(u, v) - \Delta_{\theta_1}(\varepsilon, \varepsilon, u, v) + z_{\theta_1,\theta_2,\varepsilon}(\min\{u, v\})\,\Delta_{\theta_2}(\varepsilon, \varepsilon, u, v) & \text{if } (u, v) \in (\varepsilon, 1 - \varepsilon)^2, \\ C_{\theta_1}(u, v) & \text{otherwise,} \end{cases}
where \Delta_\theta(u_1, v_1, u_2, v_2) = C_\theta(u_2, v_2) - C_\theta(u_2, v_1) - C_\theta(u_1, v_2) + C_\theta(u_1, v_1) and z_{\theta_1,\theta_2,\varepsilon}(m) =
\Delta_{\theta_1}(\varepsilon, \varepsilon, m, 1 - \varepsilon)/\Delta_{\theta_2}(\varepsilon, \varepsilon, m, 1 - \varepsilon).
The proof of the copula properties is given in Appendix A. This family is capable of modeling
a continuum between independence and deterministic dependence while keeping the correlation
coefficient at zero. There are two regions: the outer region [0, 1]^2 \ (ε, 1 - ε)^2 contains a Frank
copula with θ_1 and the inner square (ε, 1 - ε)^2 contains a Frank copula with θ_2 modified by a factor
z. If we restricted our analysis to copula-based distributions with continuous marginals it would
be sufficient to select θ_1 = -θ_2 and to adjust ε such that the correlation coefficient vanishes. In
such cases, the factor z would be unnecessary. For discrete marginals, however, this is not sufficient
as the CDF is no longer a continuous function of ε. Different copulas of this family are shown in
Fig. 1.
We will now investigate the impact of this dependence structure on the entropy of copula-based distributions with Poisson-like marginals while keeping the correlation coefficient at zero. Introducing
more structure into a distribution typically reduces its entropy. Therefore, we expect that the entropy
can vary considerably for different dependence strengths, even though the correlation is always zero.
[Figure 2: two panels; y-axes: Entropy (Bits) and Entropy Difference (%); x-axes: θ_1 from 0 to -50; legend: Poisson, Negative Binomial.]
Figure 2: Entropy of distributions based on the Frank shuffle copula C_{θ_1,θ_2,ε} for ε = 0.05 and
different dependence strengths θ_1. The second parameter θ_2 was selected such that the absolute
correlation coefficient was below 10^{-10}. For Poisson marginals, we selected rates λ_1 = λ_2 = 5.
For 100 ms bins this would correspond to firing rates of 50 Hz. For negative binomial marginals
we selected rates λ_1 = 2.22, λ_2 = 4.57 and variances σ_1^2 = 4.24, σ_2^2 = 10.99 (values taken from
experimental data recorded in macaque prefrontal cortex and 100 ms bins [18]). (a): Entropy of the
C_{θ_1,θ_2,ε} based models. (b): Difference between the entropy of the C_{θ_1,θ_2,ε}-based models and the
model with independent elements in percent of the independent model.
Fig. 2(a) shows the entropy of the Frank shuffle-based models with Poisson and negative binomial
marginals for uncorrelated but dependent elements. θ_1 was varied while θ_2 was estimated using
the line-search algorithm for constrained nonlinear minimization [23] with the absolute correlation
coefficient as the objective function. Independence is attained for θ_1 = 0. With increasing dependence the entropy decreases until it reaches a minimum at θ_1 = -20. Afterward, it increases again.
This is due to the shape of the marginal distributions. The region of strong dependence shifts to a
region with small mass. Therefore, the actual dependence decreases. However, in this region the
dependency is almost deterministic and thus does not represent a relevant case.
Fig. 2(b) shows the difference to the entropy of corresponding models with independent elements.
The entropy deviated by up to 25 % for the Poisson marginals and up to 15 % for the negative
binomial marginals. So the entropy indeed varies considerably in spite of fixed marginals and uncorrelated elements.
We constructed a copula family which allowed us to vary the dependence strength systematically
while keeping the variables uncorrelated. It could be argued that this is a pathological example. In
the next section, however, we show that such effects can occur even in simple biologically realistic
network models.
3.2 LIF network
We will now explore the feasibility of uncorrelated spike counts with strong dependencies in a biologically realistic network model. For this purpose, we set up a network of leaky integrate-and-fire
neurons (see Section 2.3). The neurons have two common input populations which introduce opposite dependencies (see Fig. 3(a)). Therefore, the correlation should vanish for the right proportion of
input strengths. Note that the bottom input population does not contradict Dale's principle, since
excitatory neurons can project to both excitatory and inhibitory neurons.
We can find a copula family which can model this relation and has two separate parameters for the
strengths of the input populations:
C^{cm}_{\theta_1,\theta_2}(u, v) = \frac{1}{2} \max\{u^{-\theta_1} + v^{-\theta_1} - 1,\, 0\}^{-1/\theta_1} + \frac{1}{2} \left( u - \max\{u^{-\theta_2} + (1 - v)^{-\theta_2} - 1,\, 0\}^{-1/\theta_2} \right), \quad (2)
where θ_1, θ_2 \in (0, \infty). It is a mixture of the well known Clayton copula and a one-element survival
transformation of the Clayton copula [16]. As a mixture of copulas this function is again a copula.
A copula of this family is shown in Fig. 3(b).
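A minimal sketch of Eq. 2; the function names are ours, and both mixture components are standard Clayton forms, with the second obtained by the one-element survival transform u - C(u, 1 - v):

```python
import numpy as np

def clayton(u, v, theta):
    # Standard Clayton copula CDF; for theta > 0 the max is never binding
    # on (0, 1]^2, so the clamp only matters at the boundary.
    return np.maximum(u**(-theta) + v**(-theta) - 1.0, 0.0)**(-1.0 / theta)

def clayton_mixture(u, v, theta1, theta2):
    # Eq. 2: equal-weight mixture of a Clayton copula (positive dependence)
    # and a survival-transformed Clayton copula (negative dependence).
    positive = clayton(u, v, theta1)
    negative = u - clayton(u, 1.0 - v, theta2)  # one-element survival transform
    return 0.5 * positive + 0.5 * negative
```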
Fig. 3(c) shows the correlation coefficients of the network generated spike counts and of C^{cm}_{θ_1,θ_2}
fits. The rate of population D that introduces negative dependence is kept constant, while the rate
of population B that introduces positive dependence is varied. The resulting spike count statistics
[Figure 3 panels: (a) network diagram; (b) copula density over (u, v); (c) correlation coefficient vs input rate of top center population, 250-350 Hz.]
Figure 3: Strong dependence with zero correlation in a biological network model. (a): Neural network models used to generate synthetic spike count data. Two leaky integrate-and-fire neurons (LIF1
and LIF2, see Section 2.3) receive spike inputs (circles for excitation, bars for inhibition) from four
separate populations of neurons (rectangular boxes and circles, A-D), but only two populations (B,
D) send input to both neurons. All input spike trains were Poisson-distributed. (b): Probability density of the Clayton mixture model C^{cm}_{θ_1,θ_2} with θ_1 = 1.5 and θ_2 = 2.0. (c): Correlation coefficients of
network generated spike counts compared to correlations of a maximum likelihood fit of the C^{cm}_{θ_1,θ_2}
copula family to these counts. Solid line: correlation coefficients of counts generated by the network
shown in (a). Each neuron had a total inhibitory input rate of 300 Hz and a total excitatory input rate
of 900 Hz. Population D had a rate of 150 Hz. We increased the absolute correlation between the
spike counts by shifting the rates: we decreased the rates of A and C and increased the rate of B. The
total simulation time amounted to 200 s. Spike counts were calculated for 100 ms bins. Dashed line:
Correlation coefficients of the first mixture component of C^{cm}_{θ_1,θ_2}. Dashed-dotted line: Correlation
coefficients of the second mixture component of C^{cm}_{θ_1,θ_2}.
were close to typically recorded data. At approximately 275 Hz the dependencies cancel each other
out in the correlation coefficient. Nevertheless, the mixture components of the copula reveal that
there are still dependencies: the correlation coefficient of the first mixture component that models
negative dependence is relatively constant, while the correlation coefficient of the second mixture
component increases with the rate of the corresponding input population. Therefore, correlation
coefficients of spike counts that do not at all reflect the true strength of dependence are very likely
to occur in biological networks. Structures similar to the investigated network can be formed in any
feed-forward network that contains positive and negative weights.
Typically, the network structure is unknown. Hence, it is hard to construct an appropriate copula that
is parametrized such that individual dependence strengths are revealed. The goal of the next section
is to assess a test that reveals whether the linear correlation coefficient provides an appropriate
measure for the dependence.
4 Linear correlation test
We will now describe a test for bivariate distributions with Poisson-like marginals that determines
whether the dependence structure is well characterized by the linear correlation coefficient. This test
combines a variant of the χ^2 goodness-of-fit test for discrete multivariate data with a semiparametric
model of linear dependence. We fit the semiparametric model to the data and we apply the goodnessof-fit test to see if the model is adequate for the data.
The semiparametric model that we use consists of the empirical marginals of the sample coupled by
a parametric copula family. A dependence structure is well characterized by the linear correlation
coefficient if it is Gauss-like. So one way to test for linear dependence would be to use the Gaussian
copula family. However, the likelihood of copula-based models relies on the CDF which has no
closed form solution for the Gaussian family. Fortunately, a whole class of copula families that are
Gauss-like exists. The Frank family is in this class [24] and its CDF can be computed very efficiently.
We therefore selected this family for our test (see Eq. 1). The Frank copula has a scalar parameter θ.
The parameter relates directly to the dependence. With growing θ the dependence increases strictly
[Figure 4 panels (a)-(d): y-axes: % Acceptance of H_0; x-axes: copula parameter; legend: Samples: 128, 256, 512.]
Figure 4: Percent acceptance of the linear correlation hypothesis for different copula-based models
with different dependence strengths and Poisson marginals with rates λ_1 = λ_2 = 5. We used 100
repetitions each. The number of samples was varied between 128 and 512. On the x-axis we varied
the strength of the dependence by means of the copula parameters. (a): Frank shuffle family with
correlation kept at zero. (b): Clayton mixture family C^{cm}_{θ_1,θ_2} with θ_1 = 2θ_2. (c): Frank family.
(d): Gaussian family.
monotonically. For θ = 0 the Frank copula corresponds to independence. Therefore, the usual χ^2
independence test is a special case of our linear correlation test.
The parameter θ of the Frank family can be estimated based on a maximum likelihood fit. However,
this is time-consuming. As an alternative we propose to estimate the copula parameter θ by means
of Kendall's τ. Kendall's τ is a measure of dependence defined as τ(\vec x, \vec y) = (c - d)/(c + d), where c is the
number of elements in the set {(i, j) | (x_i < x_j and y_i < y_j) or (x_i > x_j and y_i > y_j)} and d is
the number of elements in the set {(i, j) | (x_i < x_j and y_i > y_j) or (x_i > x_j and y_i < y_j)} [16].
For the Frank copula with continuous marginals the relation between τ and θ is given by
\tau_\theta = 1 - \frac{4}{\theta}\left[1 - D_1(\theta)\right],
where D_k(x) is the Debye function D_k(x) = \frac{k}{x^k} \int_0^x \frac{t^k}{\exp(t) - 1}\, dt [25]. For discrete
marginals this is an approximate relation. Unfortunately, the inverse mapping θ(τ) cannot be expressed in closed form,
but can be easily obtained numerically using Newton's method.
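A sketch of this inversion; the quadrature, the numerical derivative, and the starting point theta0 are our own choices, with SciPy assumed for the integral:

```python
import numpy as np
from scipy.integrate import quad

def debye1(x):
    # Debye function D_1(x) = (1/x) * integral_0^x t / (e^t - 1) dt.
    if abs(x) < 1e-12:
        return 1.0
    val, _ = quad(lambda t: t / np.expm1(t), 0.0, x)
    return val / x

def tau_of_theta(theta):
    # Kendall's tau of the Frank copula (continuous-marginal relation).
    if abs(theta) < 1e-12:
        return 0.0
    return 1.0 - 4.0 / theta * (1.0 - debye1(theta))

def theta_of_tau(tau, theta0=1.0, tol=1e-10, max_iter=100):
    # Newton's method with a central-difference derivative.
    theta, h = theta0, 1e-6
    for _ in range(max_iter):
        f = tau_of_theta(theta) - tau
        fprime = (tau_of_theta(theta + h) - tau_of_theta(theta - h)) / (2 * h)
        step = f / fprime
        theta -= step
        if abs(step) < tol:
            break
    return theta
```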
The goodness-of-fit test that we apply for this model is based on the χ^2 test [26]. It is widely
applied for testing goodness-of-fit or independence of categorical variables. For the test, observed
frequencies are compared to expected frequencies using the following statistic:
X^2 = \sum_{i=1}^{k} \frac{(n_i - m^0_i)^2}{m^0_i},   (3)
where n_i are the observed frequencies, m^0_i are the expected frequencies, and k is the number of
bins. For a 2-dimensional table the sum is over both indices of the table. If the frequencies are large
enough then X^2 is approximately χ^2-distributed with df = (N - 1)(M - 1) - s degrees of freedom,
where N is the number of rows, M is the number of columns, and s is the number of parameters
in the H_0 model (1 for the Frank family). Thus, for a given significance level α the test accepts
the hypothesis H_0 that the observed frequencies are a sample from the distribution formed by the
expected frequencies, if X^2 is less than the (1 - α) point of the χ^2-distribution with df degrees of
freedom.
The χ^2 statistic is an asymptotic statistic. In order to be of any value, the frequencies in each bin
must be large enough. As a rule of thumb, each frequency should be at least 5 [26]. This cannot
be accomplished for Poisson-like marginals since there is an infinite number of bins. For such
cases Loukas and Kemp [27] propose the ordered expected-frequencies procedure. The expected
frequencies m^0 are sorted monotonically decreasing into a 1-dimensional array. The corresponding
observed frequencies form another 1-dimensional array. Then the frequencies in both arrays are
grouped from left to right such that the grouped m^0 frequencies reach a specified minimum expected
frequency (MEF), e.g. MEF = 1 as in [27]. The χ^2 statistic is then estimated using Eq. 3 with the
grouped expected and grouped observed frequencies.
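A sketch of the grouping step; folding any trailing remainder into the last group is our own choice, not prescribed by [27]:

```python
import numpy as np

def grouped_chi2(observed, expected, mef=1.0):
    # Ordered expected-frequencies procedure: sort the expected frequencies
    # in decreasing order, carry the observed frequencies along, and merge
    # adjacent cells until each group of expected frequencies reaches MEF.
    order = np.argsort(expected)[::-1]
    exp_sorted, obs_sorted = expected[order], observed[order]
    groups_exp, groups_obs = [], []
    acc_e = acc_o = 0.0
    for e, o in zip(exp_sorted, obs_sorted):
        acc_e += e
        acc_o += o
        if acc_e >= mef:
            groups_exp.append(acc_e)
            groups_obs.append(acc_o)
            acc_e = acc_o = 0.0
    if acc_e > 0:  # fold any remainder into the last group
        if groups_exp:
            groups_exp[-1] += acc_e
            groups_obs[-1] += acc_o
        else:
            groups_exp.append(acc_e)
            groups_obs.append(acc_o)
    ge, go = np.asarray(groups_exp), np.asarray(groups_obs)
    return np.sum((go - ge)**2 / ge)  # Eq. 3 on the grouped frequencies
```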
To verify the test we applied it to samples from copula-based distributions with Poisson marginals
and four different copula families: the Frank shuffle family (Proposition 1), the Clayton mixture
family (Eq. 2), the Frank family (Eq. 1), and the Gaussian family (Section 2.1). For the Frank
family and the Gaussian family the linear correlation coefficient is well suited to characterize their
dependence. We therefore expected that the test should accept H0 , regardless of the dependence
strength. In contrast, for the Frank shuffle family and the Clayton mixture family the linear correlation does not reflect the dependence strength. Hence, the test should reject H0 most of the time
when there is dependence.
The acceptance rates for these copulas are shown in Fig. 4. For each of the families there was no
dependence when the first copula parameter was equal to zero. The Frank and the Gaussian families
have only Gauss-like dependence, meaning the correlation coefficient is well-suited to describe the
data. In all of these cases the achieved Type I error was small, i.e. the acceptance rate of H0 was
close to the desired value (0.95). The plots in (a) and (b) indicate the Type II errors: H0 was accepted
although the dependence structure of the counts was not Gauss-like. The Type II error decreased
for increasing sample sizes. This is reasonable since X^2 is only asymptotically χ^2-distributed.
Therefore, the test is unreliable when dependencies and sample sizes are both very small.
5 Conclusion
We investigated a worst-case scenario for reliance on the linear correlation coefficient for analyzing
dependent spike counts using the Shannon information. The spike counts were uncorrelated but had
a strong dependence. Thus, relying solely on the correlation coefficient would lead to an oversight of
such dependencies. Although the counts were uncorrelated and the marginals fixed, the information varied by more than
25 %. Therefore, the dependence was not negligible in terms of the entropy. Furthermore, we could
show that similar scenarios are very likely to occur in real biological networks. Our test provides a
convenient tool to verify whether the correlation coefficient is the right measure for an assessment of
the dependence. If the test rejects the Gauss-like dependence hypothesis, more elaborate measures
of the dependence should be applied. An adequate copula family provides one way to find such a
measure. In general, however, it is hard to find the right parametric family. Directions for future
research include a systematic approach for handling the alternative case when one has to deal with
the full dependence structure and a closer look at experimentally observed dependencies.
Acknowledgments.
This work was supported by BMBF grant 01GQ0410.
A Proof of Proposition 1
Proof. We show that C_{θ_1,θ_2,ε} is a copula. Since C_{θ_1,θ_2,ε} is commutative we assume w.l.o.g. u \le v.
For u = 0 or v = 0 and for u = 1 or v = 1 we have C_{θ_1,θ_2,ε}(u, v) = C_{θ_1}(u, v). Hence, property 1
follows directly from C_{θ_1}. It remains to show that C_{θ_1,θ_2,ε} is 2-increasing (property 2). We will
show this in two steps:
1) We show that C_{θ_1,θ_2,ε} is continuous. For ε_2 = 1 - ε and u \in (ε, ε_2):
\lim_{t \to \varepsilon_2} C_{\theta_1,\theta_2,\varepsilon}(u, t) = C_{\theta_1}(u, \varepsilon_2) - \Delta_{\theta_1}(\varepsilon, \varepsilon, u, \varepsilon_2) + \frac{\Delta_{\theta_1}(\varepsilon, \varepsilon, u, \varepsilon_2)}{\Delta_{\theta_2}(\varepsilon, \varepsilon, u, \varepsilon_2)} \Delta_{\theta_2}(\varepsilon, \varepsilon, u, \varepsilon_2) = C_{\theta_1}(u, \varepsilon_2).
For v \in (ε, 1 - ε):
\lim_{t \to \varepsilon} C_{\theta_1,\theta_2,\varepsilon}(t, v) = C_{\theta_1}(\varepsilon, v) - \Delta_{\theta_1}(\varepsilon, \varepsilon, \varepsilon, v) + \lim_{t \to \varepsilon} \frac{\Delta_{\theta_1}(\varepsilon, \varepsilon, t, 1 - \varepsilon)}{\Delta_{\theta_2}(\varepsilon, \varepsilon, t, 1 - \varepsilon)} \Delta_{\theta_2}(\varepsilon, \varepsilon, t, v).
We can use l'Hôpital's rule since \lim_{t \to \varepsilon} \Delta_\theta(\varepsilon, \varepsilon, t, 1 - \varepsilon) = 0. It is easy to verify that
\frac{\partial C_\theta}{\partial u}(u, v) = \frac{e^{-\theta u}(e^{-\theta v} - 1)}{e^{-\theta} - 1 + (e^{-\theta u} - 1)(e^{-\theta v} - 1)}.
Thus, the quotient is constant and \lim_{t \to \varepsilon} C_{\theta_1,\theta_2,\varepsilon}(t, v) = C_{\theta_1}(\varepsilon, v) - 0 + 0.
2) C_{θ_1,θ_2,ε} has non-negative density almost everywhere on [0, 1]^2. This is obvious for (u_1, v_1) \notin
[ε, 1 - ε]^2, because C_{θ_1} is a copula. Straightforward but tedious algebra shows that \forall (u_1, v_1) \in
(ε, 1 - ε)^2:
\frac{\partial^2 C_{\theta_1,\theta_2,\varepsilon}}{\partial u \, \partial v}(u_1, v_1) \ge 0.
Thus, C_{θ_1,θ_2,ε} is continuous and has density almost everywhere on [0, 1]^2 and is therefore 2-increasing.
References
[1] M. Jazayeri and J. A. Movshon. Optimal representation of sensory information by neural populations. Nature Neuroscience, 9(5):690-696, 2006.
[2] L. Schwabe and K. Obermayer. Adaptivity of tuning functions in a generic recurrent network model of a cortical hypercolumn. Journal of Neuroscience, 25(13):3323-3332, 2005.
[3] D. A. Gutnisky and V. Dragoi. Adaptive coding of visual information in neural populations. Nature, 452(7184):220-224, 2008.
[4] M. Shamir and H. Sompolinsky. Implications of neuronal diversity on population coding. Neural Computation, 18(8):1951-1986, 2006.
[5] P. Series, P. E. Latham, and A. Pouget. Tuning curve sharpening for orientation selectivity: coding efficiency and the impact of correlations. Nature Neuroscience, 7(10):1129-1135, 2004.
[6] L. F. Abbott and P. Dayan. The effect of correlated variability on the accuracy of a population code. Neural Computation, 11(1):91-101, 1999.
[7] B. B. Averbeck, P. E. Latham, and A. Pouget. Neural correlations, population coding and computation. Nature Review Neuroscience, 7(5):358-366, 2006.
[8] Y. Roudi, S. Nirenberg, and P. E. Latham. Pairwise maximum entropy models for studying large biological systems: When they can work and when they can't. PLoS Computational Biology, 5(5):e1000380+, 2009.
[9] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007-1012, 2006.
[10] J. Shlens, G. D. Field, J. L. Gauthier, M. I. Grivich, D. Petrusca, E. Sher, A. M. Litke, and E. J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26:2006, 2006.
[11] B. B. Averbeck and D. Lee. Neural noise and movement-related codes in the macaque supplementary motor area. Journal of Neuroscience, 23(20):7630-7641, 2003.
[12] S. Panzeri, G. Pola, F. Petroni, M. P. Young, and R. S. Petersen. A critical assessment of different measures of the information carried by correlated neuronal firing. Biosystems, 67(1-3):177-185, 2002.
[13] H. Sompolinsky, H. Yoon, K. Kang, and M. Shamir. Population coding in neuronal systems with correlated noise. Physical Review E, 64(5):051904, 2001.
[14] A. Kohn and M. A. Smith. Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25(14):3661-3673, 2005.
[15] W. Bair, E. Zohary, and W. T. Newsome. Correlated firing in macaque visual area MT: time scales and relationship to behavior. Journal of Neuroscience, 21(5):1676-1697, 2001.
[16] R. B. Nelsen. An Introduction to Copulas. Springer, New York, second edition, 2006.
[17] M. J. Frank. On the simultaneous associativity of f(x,y) and x+y-f(x,y). Aequationes Math, 19:194-226, 1979.
[18] A. Onken, S. Grünewälder, M. Munk, and K. Obermayer. Modeling short-term noise dependence of spike counts in macaque prefrontal cortex. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1233-1240, 2009.
[19] P. Berkes, F. Wood, and J. Pillow. Characterizing neural dependencies with copula models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 129-136, 2009.
[20] D. J. Tolhurst, J. A. Movshon, and A. F. Dean. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23:775-785, 1982.
[21] C. E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379-423, 1948.
[22] P. Dayan and L. F. Abbott. Theoretical Neuroscience. Cambridge (Massachusetts): MIT Press, 2001.
[23] R. A. Waltz, J. L. Morales, J. Nocedal, and D. Orban. An interior algorithm for nonlinear optimization that combines line search and trust region steps. Mathematical Programming, 107(3):391-408, 2006.
[24] C. Genest, B. Rémillard, and D. Beaudoin. Goodness-of-fit tests for copulas: A review and a power study. Insurance: Mathematics and Economics, 44(2):199-213, 2009.
[25] C. Genest. Frank's family of bivariate distributions. Biometrika, 74:549-555, 1987.
[26] W. G. Cochran. The χ^2 test of goodness of fit. Annals of Mathematical Statistics, 23(3):315-345, 1952.
[27] S. Loukas and C. D. Kemp. On the chi-square goodness-of-fit statistic for bivariate discrete distributions. The Statistician, 35:525-529, 1986.
| 3839 |@word proportion:1 nd:1 tedious:1 simulation:2 simplifying:1 thereby:1 solid:1 contains:3 series:1 current:2 yet:1 must:1 realistic:4 shape:2 motor:1 plot:1 alone:1 fx1:2 selected:5 smith:1 short:1 poissonlike:2 provides:3 math:1 tolhurst:1 mef:2 mathematical:3 constructed:2 become:1 profound:1 consists:1 combine:2 introduce:4 pairwise:3 expected:8 indeed:2 behavior:1 growing:1 multi:1 steffen:2 chi:1 relying:3 decreasing:1 actual:2 zohary:1 increasing:4 project:1 moreover:2 mass:2 what:1 cm:7 monkey:1 arno:1 transformation:1 sharpening:1 biometrika:1 universit:2 wrong:1 rm:3 uk:2 control:2 unit:3 grant:1 positive:5 negligible:2 bccn:2 analyzing:2 solely:2 firing:4 approximately:2 might:3 abides:1 statistically:1 acknowledgment:1 yj:4 testing:1 practice:1 procedure:1 poincar:1 area:2 empirical:1 bell:1 reject:2 convenient:1 radial:1 spite:1 petersen:1 cannot:2 close:2 interior:1 applying:1 equivalent:1 deterministic:3 moi:1 missing:1 center:1 send:1 go:2 regardless:1 straightforward:1 dean:1 economics:1 rectangular:1 pouget:2 examines:1 rule:3 array:3 imax:5 shlens:1 population:19 fx:7 variation:2 annals:1 shamir:2 programming:1 us:1 hypothesis:3 element:7 observed:7 bottom:1 yoon:1 worst:1 region:7 sompolinsky:2 plo:1 shuffle:8 counter:1 decrease:2 movement:1 substantial:1 ui:1 weakly:3 algebra:1 efficiency:1 xkk:1 easily:1 joint:1 cat:1 train:2 describe:6 london:2 oby:1 klaus:1 h0:7 widely:1 supplementary:1 otherwise:1 nirenberg:1 statistic:8 emergence:1 advantage:1 ucl:1 propose:2 reset:1 fr:1 tu:2 relevant:2 realization:4 fired:1 r1:2 nelsen:1 converges:1 sity:1 tk:1 coupling:1 derive:1 ac:1 recurrent:1 eq:4 strong:10 c:3 quotient:1 implies:1 indicate:1 direction:1 unew:1 bin:7 munk:1 argued:2 suffices:1 generalization:1 proposition:3 biological:8 adjusted:1 strictly:1 m0i:2 hold:1 normal:1 deciding:1 exp:2 panzeri:1 m0:2 vary:3 continuum:1 purpose:1 estimation:1 grouped:4 repetition:1 tool:1 minimization:1 mit:1 gaussian:8 always:1 modified:1 averbeck:2 varying:1 release:1 likelihood:4 contrast:1 litke:1 sense:1 dependent:6 el:2 dayan:2 bt:1 typically:6 unlikely:1 accept:1 associativity:1 relation:3 koller:2 germany:1 among:1 oversight:1 orientation:1 art:1 constrained:1 copula:55 lif:1 marginal:3 special:1 construct:4 equal:1 field:1 petrusca:1 biology:1 look:1 cancel:1 future:1 others:1 report:2 stimulus:1 employ:1 retina:1 pathological:2 gamma:1 individual:1 fire:6 statistician:1 freedom:2 neuroscientist:1 acceptance:7 investigate:2 insurance:1 adjust:1 introduces:3 mixture:11 implication:2 beforehand:1 waltz:1 capable:2 closer:1 circle:2 desired:1 isolated:1 theoretical:1 biosystems:1 jazayeri:1 increased:2 column:1 modeling:3 measuring:1 goodness:6 newsome:1 introducing:1 technische:2 uniform:1 gr:1 characterize:3 reported:1 dependency:18 varies:1 considerably:2 synthetic:1 density:4 peak:1 systematic:1 lee:1 na:2 again:2 central:1 recorded:4 reflect:2 prefrontal:2 hoeffding:1 potential:4 de:2 diversity:1 coding:6 coefficient:45 matter:2 mv:2 closed:2 kendall:2 analyze:1 cochran:1 ass:2 square:4 formed:2 ni:2 accuracy:1 variance:4 efficiently:1 subthreshold:1 correspond:1 weak:3 thumb:1 rx:1 simultaneous:1 synapsis:2 reach:4 whenever:1 synaptic:2 definition:1 echet:1 frequency:16 obvious:1 proof:3 couple:1 massachusetts:1 knowledge:2 lim:3 feed:1 higher:1 dt:2 attained:1 synapse:2 though:1 strongly:2 box:1 furthermore:1 correlation:65 until:1 hand:2 trust:1 gauthier:1 nonlinear:2 assessment:2 defines:1 reveal:1 effect:4 verify:4 true:1 hence:3 symmetric:1 deal:2 self:1 
alder:2 excitation:1 m:5 latham:3 onken:2 percent:2 meaning:1 common:4 mt:1 physical:1 attached:1 resting:1 marginals:27 numerically:1 cambridge:1 tuning:2 uv:1 mathematics:1 fano:1 inclusion:1 had:3 reliability:1 longer:1 cortex:4 inhibition:1 berkes:1 multivariate:4 exclusion:1 roudi:1 scenario:2 selectivity:1 arbitrarily:1 opital:1 yi:4 accomplished:1 minimum:2 additional:1 greater:1 fortunately:1 monotonically:2 dashed:2 ii:4 relates:1 full:2 schneidman:1 signal:1 reduces:1 technical:1 characterized:4 feasibility:1 impact:5 variant:1 sylvester:1 vision:1 poisson:20 df:2 represent:2 tailored:1 limt:2 achieved:1 receive:1 semiparametric:3 decreased:2 schwabe:1 hz:6 seem:2 revealed:1 bengio:2 enough:2 easy:1 independence:8 fit:11 xj:4 restrict:1 opposite:1 inner:2 shift:1 whether:9 bair:1 kohn:1 movshon:2 resistance:1 questioned:1 york:1 action:1 adequate:2 ignored:1 unimportant:1 extensively:1 generate:3 inhibitory:4 dotted:1 neuroscience:10 estimated:3 discrete:7 four:2 reliance:1 threshold:1 nevertheless:1 abbott:2 kept:2 nocedal:1 v1:10 fx2:2 asymptotically:1 sum:1 wood:1 inverse:1 everywhere:2 franklinstr:1 reporting:1 place:1 family:39 almost:3 reasonable:1 appendix:1 bit:1 bound:1 deviated:1 strength:13 occur:8 constraint:1 infinity:1 segev:1 ri:1 x2:7 petroni:1 u1:10 aspect:1 orban:1 min:1 px:3 relatively:1 combination:1 membrane:4 across:1 smaller:1 biologically:2 primate:1 dv:1 restricted:1 taken:1 ln:1 equation:1 remains:1 count:33 know:1 overdispersion:1 end:1 studying:1 grivich:1 apply:2 v2:7 fxi:5 appropriate:2 generic:1 alternative:2 binomial:7 denotes:1 unnoticed:1 top:1 completed:1 include:1 log2:1 newton:1 gower:1 wc1e:1 vinit:1 hypercube:1 objective:1 spike:32 occurs:1 parametric:2 primary:1 dependence:58 usual:1 bialek:1 obermayer:3 unclear:1 separate:2 berlin:7 street:1 decoder:1 outer:2 parametrized:1 topic:1 presynaptic:1 kemp:2 dragoi:1 code:2 modeled:1 index:1 insufficient:2 relationship:1 unfortunately:1 frank:25 negative:14 unknown:1 neuron:25 t:2 variability:1 communication:1 varied:5 arbitrary:3 clayton:6 namely:1 hypercolumn:1 specified:1 chichilnisky:1 accepts:1 kang:1 macaque:5 bar:1 below:1 pattern:1 goodnessof:1 max:2 shifting:1 power:1 critical:1 treated:1 representing:1 imply:2 axis:1 concludes:1 carried:1 categorical:1 coupled:1 sher:1 vreset:2 deviate:1 review:3 literature:1 berry:1 asymptotic:1 expect:2 adaptivity:1 limitation:1 afterward:1 versus:1 integrate:6 degree:3 sufficient:4 principle:2 editor:2 systematically:1 uncorrelated:11 row:1 morale:1 course:1 excitatory:5 supported:1 keeping:3 formal:1 allow:1 face:1 characterizing:1 absolute:3 leaky:5 distributed:3 curve:1 calculated:1 cortical:1 cumulative:2 pillow:1 dale:1 forward:1 sensory:1 adaptive:1 approximate:1 contradict:1 keep:1 unreliable:1 reveals:1 unnecessary:1 consuming:1 xi:4 continuous:6 search:2 table:2 nature:5 correlated:8 obtaining:1 schuurmans:2 genest:2 investigated:2 bottou:2 gutnisky:1 domain:1 significance:2 main:1 whole:1 noise:3 edition:1 allowed:1 x1:6 neuronal:4 fig:7 elaborate:1 bmbf:1 debated:1 vanish:3 young:1 specific:1 explored:1 r2:2 decay:1 dk:2 evidence:1 bivariate:7 survival:1 exists:1 false:1 importance:2 commutative:2 suited:2 entropy:21 likely:7 univariate:2 explore:1 visual:4 vth:2 encircles:1 expressed:1 ordered:1 scalar:2 u2:7 springer:1 corresponds:1 determines:1 relies:1 cdf:12 goal:1 sorted:1 fisher:1 hard:2 experimentally:1 pola:1 typical:2 determined:1 infinite:1 justify:1 called:2 total:4 amounted:1 tendency:1 gauss:6 experimental:1 shannon:5 
accepted:1 formally:1 college:1 select:1 evaluate:1 d1:1 handling:1 |
3,134 | 384 | Neural Networks Structured for Control
Application to Aircraft Landing
Charles Schley, Yves Chauvin, Van Henkle, Richard Golden
Thomson-CSP, Inc., Palo Alto Research Operations
630 Hansen Way, Suite 250
Palo Alto, CA 94306
Abstract
We present a generic neural network architecture capable of controlling non-linear plants. The network is composed of dynamic.
parallel, linear maps gated by non-linear switches. Using a recurrent form of the back-propagation algorithm, control is achieved
by optimizing the control gains and task-adapted switch parameters. A mean quadratic cost function computed across a nominal
plant trajectory is minimized along with performance constraint
penalties. The approach is demonstrated for a control task consisting of landing a commercial aircraft in difficult wind conditions.
We show that the network yields excellent performance while remaining within acceptable damping response constraints.
1 INTRODUCTION
This paper illustrates how a recurrent back-propagation neural network algorithm
(Rumelhart, Hinton & Williams, 1986) may be exploited as a procedure for controlling complex systems. In particular. a simplified mathematical model of an
aircraft landing in the presence of severe wind gusts was developed and simulated.
A recurrent back-propagation neural network architecture was then designed to
numerically estimate the parameters of an optimal non-linear control law for landing the aircraft. The performance of the network was then evaluated.
1.1 A TYPICAL CONTROL SYSTEM
A typical control system consists of a controller and a process to be controlled. The
controller's function is to accept task inputs along with process outputs and to determine control signals tailored to the response characteristics of the process. The
physical process to be controlled can be electro-mechanical, aerodynamic, etc.
and generally has well defined behavior. It may be subjected to disturbances from
its external environment.
1.2 CONTROLLER DESIGN
Many variations of both classical and modern methods to design control systems
are described in the literature. Classical methods use linear approximations of the
plant to be controlled and some loosely defined response specifications such as
bandwidth (speed of response) and phase margin (degree of stability). Classical
methods are widely used in practice, even for sophisticated control problems.
Modern methods are more universal and generally assume that a performance index for the process is specified. Controllers are then designed to optimize the
performance index. Our approach relates more to modern methods.
Narendra and Parthasarathy (1990) and others have noted that recurrent backpropagation networks can implement gradient descent algorithms that may be used
to optimize the performance of a system. The essence of such methods is first to
propagate performance errors back through the process and then back through the
controller to give error signals for updating the controller parameters. Figure 1
provides an overview of the interaction of a neural control law with a complex
system and possible performance indices for evaluating various control laws. The
functional
components
Control
needed
to
train
the conTask
Neural Net
troller are shown within
1~!!P!!U!;t~~~~?nc~o~n~tr~o~1
process
Signals Process
Out
the shaded box of Figure
to be
The objective per1.
Controlled
formance measure contains factors that are written mathematically and
usually represent terms
Performance ...- - -..
such as weighted square
Index
System
error or other quantifiable
' -_ _ _ _ _;-',.. stralnt
measures. The performance constraints are often
more subjective in nature
Figure 1: Neural Network Controller Design
and can be formulated as
reward or penalty functions on categories of performance (e.g., "good" or "bad").
j------..
2 A GENERIC NON-LINEAR CONTROL ARCHITECTURE
Many complex systems are in fact non-linear or "multi-modal." That is, their
behavior changes in fundamental ways as a function of their position in the state
space. In practice, controllers are often designed for such systems by treating them
as a collection of systems linearized about a "setpoint" in state space. A linear
controller can then be determined separately for each of these system "modes."
These observations suggest that a reasonable approach for controlling non-linear
or "multi-modal" systems would be to design a "multi-modal" control law.
2.1 THE SWITCHING PRINCIPLE
The architecture of our proposed general non-linear control law for "multi-modal" plants is shown in Figure 2. Task inputs and process outputs are entered into
multiple basic controller blocks (shown within the shaded box of Figure 2). Each basic controller block
first determines a weighted sum of the task inputs and process outputs (multiplication by weights W).
Then, the degree to which the weighted sum passes through the block is modified by means of a
saturating switch and multiplier.
[Figure 2: Neural Network Controller Architecture. Task inputs and process outputs feed repeated basic controller blocks whose gated outputs are summed into the control signals.]
The input to the switch is itself another weighted sum
of the task inputs and process outputs (multiplication by weights V). If the input to
the saturating switch is large, its output is unity and the weighted sum (weighted by
W) is passed through unchanged. At the other extreme, if the saturating switch has
zero output, the weighted sum of task inputs and process outputs does not appear in
the output. When these basic controller blocks are replicated and their outputs are
added, control signals consist of weighted sums of the controller inputs that can be
selected and/or blended by the saturating switches. The overall effect is a prototypical feed-forward and feedback controller with selectable gains and multiple pathways where the overall equivalent gains are a function of the task inputs and process
outputs. The resulting architecture yields a sigma-pi processing unit in the final
controller (Rumelhart, Hinton & Williams, 1986).
2.2 MODELLING DYNAMIC MAPPINGS
Weights shown in Figure 2 may be constant and represent a static relationship between input and control. However, further controller functionality is obtained by
considering the weights Vand Was dynamic mappings. For example, proportional
plus integral plus derivative (PID) feedback may be used to ensure that process
outputs follow task inputs with adequate steady-state error and transient damping.
Thus, the weights can express parameters of various generally useful control functions. These functions, when combined with the switching principle, yield rich
capabilities that can be adapted to the task at hand.
3 AIRCRAFT LANDING
The generic neural network architecture of Figure 2 and the associated neural network techniques were tested with a "real-world" application: automatic landing of
an aircraft. Here, we describe the aircraft and environment model during landing.
3.1 GLIDESLOPE AND FLARE
During aircraft landing, the final two phases of a landing trajectory consist of a
"glideslope" phase and a "flare" phase. Figure 3 shows these two phases. Flare
occurs at about 45 feet. Glideslope is characterized by a linear downward slope;
flare by an exponential shaped curve. When the aircraft begins flare, its response
characteristics are changed to make it more sensitive to the pilot's actions, making
the process "multi-modal" or non-linear over the whole trajectory.
[Figure 3: Glideslope and Flare Geometry. At flare initiation: altitude h = h_f, altitude rate hdot = hdot_f, speed V = V_f, glideslope angle gamma_gs. At touchdown: altitude h_TD = 0; altitude rate 0 < hdot_min <= hdot_TD <= hdot_max; position x_min <= x_TD <= x_max; pitch angle theta_min <= theta_TD <= theta_max. The sketch marks the glideslope predicted intercept point and the touchdown point along position x.]
3.2 STATE EQUATIONS
Linearized equations of motion were used for the aircraft during each phase. They
are adequate during the short period of time spent during glideslope and flare. A
pitch stability augmentation system and an auto-throttle were added to the aircraft
state equations to damp the bare airframe oscillatory behavior and provide speed
control. The function of the autoland controller is to transform information about
desired and actual trajectories into the aircraft pitch command. This is input to the
pitch stability augmentation system to develop the aircraft elevator angle that in
turn controls the aircraft's actual pitch angle. Simplifications retain the overall
quality of system response (i.e., high frequency dynamics were neglected).
3.3 WIND MODEL
The environment influences the process through wind disturbances represented by
constant velocity and turbulence components. The magnitude of the constant velocity component is a function of altitude (wind shear). Turbulence is a stochastic
process whose mean and variance are functions of altitude. For the horizontal and
vertical wind turbulence velocities, the so-called Dryden spectra for spatial turbulence distribution are assumed. These are amenable to simulation and show reasonable agreement with measured data (Neuman & Foster, 1970). The generation
of turbulence is effected by applying Gaussian white noise to coloring filters.
4 NEURAL NETWORK LEARNING IMPLEMENTATION
As previously noted, modern control theory suggests that a performance index for
evaluating control laws should first be constructed, and then the control law should
be computed to optimize the performance index. Generally, numerical methods
are used for estimating the parameters of a control law. Neural network algorithms
can actually be seen as constituting such numerical methods (Narendra and Parthasarathy, 1990; Bryson and Ho, 1969; Le Cun, 1989) . We present here an implementation of a neural network algorithm to address the aircraft landing problem.
4.1 DIFFERENCE EQUATIONS
The state of the aircraft (including stability augmentation and autothrottle) can be
represented by a vector X, containing variables representing speed, angle of attack,
pitch rate, pitch angle, altitude rate and altitude . The difference equations describing the dynamics of the controlled plant can be written as shown in equation 1.
X_{t+1} = A X_t + B U_t + C D_t + N_t   (1)
The matrix A represents the plant dynamics and B represents the aircraft response
to the control U. D is the desired state and N is the additive noise computed from
the wind model. The switching controller can be written as in equation 2 below.
Referring to Figure 2, the weight matrix V in the sigmoidal switch links actual altitude to each switch unit. The weight matrix W in the linear controller links altitude
error, altitude rate error and altitude integral error to each linear unit output.
U, =
pl L,
where P, = Sigmoidal switch and L, = Linear controller
(2)
Figure 4 shows a recurrent network implementation of the entire system. Actual
and desired states at time t+1 are fed back to the input layers. Thus, with recurrent
connections between output and input, the network generates entire trajectories
and is seen as a recurrent back-propagation network
(Rumelhart, Hinton & Williams, 1986; Jordan & Jacobs, 1990). The network
is trained using the backpropagation algorithm with given wind distributions.
For the controller, we chose initially two basic PID controller blocks (see
Figure 2) to represent glideslope and flare. The task of the network is then to
learn the state dependent PID controller gains that optimize the cost function.
[Figure 4: Recurrent Neural Network Architecture. States X_t and desired states D_t enter the switching controller; the plant produces X_{t+1}, D_{t+1}, which are fed back, and a "damping judge" network evaluates the response.]
4.2 PERFORMANCE INDEX OPTIMIZATION
The basic performance measure selected for this problem was the squared trajectory error accumulated over the duration of the landing. Trajectory error corresponds to a weighted combination of altitude and altitude rate errors.
Since minimizing only the trajectory error can lead to undesirable responses (e.g.,
oscillatory aircraft motions), we include relative stability constraints in the performance index. Aircraft transient responses depend on the value of a damping factor.
An analysis was performed by probing the plant with a range of values for controller
weight parameters. The data was used to train a "damping judge" net to categorize
"good" and "bad" damping. This net was used to construct a penalty function on
"bad" damping. As seen in Figure 4, additional units were added for this purpose.
The main optimization problem is now stated. Given an initial state, minimize the
expected value over the environmental disturbances of performance index J.
J = J_E + J_P = Trajectory Error + Performance Constraint Penalty   (3)
J_E = sum_{t=1}^{T} alpha_h [h_{cmd,t} - h_t]^2 + alpha_hdot [hdot_{cmd,t} - hdot_t]^2.   We used alpha_h = alpha_hdot = 1.
J_P = sum_{t=1}^{T} Max(0, e_judge - e*_judge)(e_judge - e*_judge)
Note that when e_judge <= e*_judge, there is no penalty. Otherwise, it is quadratic.
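A direct transcription of Eq. 3; the argument names are ours, and e_star stands for e*_judge:

```python
def performance_index(h_cmd, h, hdot_cmd, hdot, e_judge, e_star):
    # Trajectory error plus quadratic penalty above the damping threshold;
    # alpha_h = alpha_hdot = 1 as stated in the text.
    J_E = sum((hc - hv)**2 + (hdc - hdv)**2
              for hc, hv, hdc, hdv in zip(h_cmd, h, hdot_cmd, hdot))
    J_P = sum(max(0.0, e - e_star) * (e - e_star) for e in e_judge)
    return J_E + J_P
```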
5 SIMULATION EXPERIMENTS
We now describe our simulations. First, we introduce our training procedure. We
then present statistical results of flight simulations for a variety of wind conditions.
5.1 TRAINING PROCEDURE
Networks were initialized in various ways and trained with a random wind distribution where the constant sheared speed varied from 10 ft/sec tailwind to 40 ft/sec
headwind (a strong wind). Several learning strategies were used to change the way
the network was exposed to various response characteristics of the plant. The exact
form of the resulting V switch weights varied. but not the equivalent gain schedules.
5.2 STATISTICAL RESULTS
After training, the performance of the network controller was tested for different
wind conditions. Table 1 shows means and standard deviations of performance
variables computed over 1000 landings for five different wind conditions. Shown
Table 1: Landing Statistics (standard deviations in parentheses).

Wind    | Overall Performance        | Glide Slope MSE            | Flare MSE                | Touchdown Performance
        | J/T          T             | J:h_gs        J:hdot_gs    | J:h_fl       J:hdot_fl   | x_TD       theta_TD           hdot_TD
H = -10 | 1.34 (0.56)  21.6 (0.047)  | 13.5 (9.8)    4.23 (2.6)   | 3.50 (1.4)   2.67 (1.2)   | 1030 (11)   0.0473 (0.0039)  -2.15 (0.052)
H = 0   | 0.27 (0.00)  22.2 (0.000)  | 0.603 (0.0)   0.209 (0.0)  | 3.31 (0.0)   1.86 (0.0)   | 1050 (0)   -0.0400 (0.0000)  -1.95 (0.000)
H = 10  | 0.96 (0.53)  23.0 (0.052)  | 12.5 (9.2)    4.39 (2.6)   | 3.17 (2.2)   2.01 (1.1)   | 1160 (12)  -0.1260 (0.0046)  -1.79 (0.040)
H = 20  | 3.56 (2.10)  23.5 (0.100)  | 54.0 (38.0)   18.5 (11.0)  | 8.65 (8.2)   3.43 (2.7)   | 1230 (24)  -0.2110 (0.0100)  -1.64 (0.061)
H = 30  | 8.03 (4.70)  24.6 (0.160)  | 130.0 (91.0)  43.2 (25.0)  | 19.2 (19.0)  5.73 (4.7)   | 1310 (39)  -0.3110 (0.0170)  -1.50 (0.076)
H = 40  | 13.40 (7.80) 25.5 (0.220)  | 219.0 (140.0) 76.2 (36.0)  | 37.0 (36.0)  9.4 (7.0)    | 1400 (54)  -0.4030 (0.0201)  -1.39 (0.083)
are values for overall performance (quadratic cost J per time step, landing time T),
trajectory performance (quadratic cost J on altitude and altitude rate), and landing
performance (touchdown position, pitch angle, altitude rate).
5.3 CONTROL LAWS OBTAINED BY LEARNING
By examining network weights, equation 2 yields the gains of an equivalent controller over the entire trajectory (gain schedules). These gain schedules represent
optimality with respect to a given performance index. Results show that the switch
builds a smooth transition between glideslope and flare and provides the network
controller with a non-linear distributed control law for the whole trajectory.
6 DISCUSSION
The architecture we propose integrates a priori knowledge of real plants within the
structure of the neural network. The knowledge of the physics of the system and its
representation in the network are part of the solution. Such a priori knowledge
structures are not only useful for finding control solutions, but also allow interpretations of network dynamics in terms of standard control theory. By observing the
weights learned by the network, we can compute gain schedules and understand
how the network controls the plant.
The augmented architecture also allows us to control damping. In general, integrating optimal control performance indices with constraints on plant response
characteristics is not an easy task. The neural network approach and back-propagation learning represent an interesting and elegant solution to this problem. Other
constraints on states or response characteristics can also be implemented with similar architectures. In the present case, the control gains are obtained to minimize
the objective performance index while the plant remains within a desired stability
region. The effect of this approach provides good damping and control gain schedules that make the plant robust to disturbances.
Acknowledgements
This research was supported by the Boeing High Technology Center. Particular
thanks are extended to Gerald Cohen of Boeing. We would also like to thank Anil
Phatak for his decisive help and Yoshiro Miyata for the use of his XNet simulator.
References
Bryson, A. & Ho, Y. C. (1969). Applied Optimal Control. Blaisdel Publishing Co.
Jordan, M. I. & Jacobs, R. A. (1990). Learning to control an unstable system with
forward modeling. In D. S. Touretzky (Ed.), Neural Information Processing Systems 2. Morgan Kaufman: San Mateo, CA.
Le Cun, Y. (1989). A theoretical framework for back-propagation. In D. Touretzky, G. Hinton and T. Sejnowski (Eds.), Proceedings of the 1988 Connectionist
Models Summer School. Morgan Kaufman: San Mateo, CA.
Narendra, K. & Parthasarathy, K. (1990). Identification and control of dynamical
systems using neural networks. IEEE Transactions on Neural Networks, 1, 4-26.
Neuman, F. & Foster, J. D. (1970). Investigation of a digital automatic aircraft
landing system in turbulence. NASA Technical Note TN D-6066. NASA-Ames
Research Center, Moffett Field, CA.
Rumelhart, D. E., Hinton G. E., Williams R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.)
Parallel Distributed Processing: Explorations in the Microstructures of Cognition
(Vol. I). Cambridge, MA: MIT Press.
| 384 |@word aircraft:22 simulation:4 propagate:1 linearized:2 jacob:2 tr:1 initial:1 contains:1 troller:1 subjective:1 written:3 additive:1 numerical:2 designed:3 treating:1 selected:2 flare:10 short:1 provides:3 ames:1 attack:1 sigmoidal:2 five:1 mathematical:1 along:2 constructed:1 consists:1 pathway:1 introduce:1 expected:1 behavior:3 multi:5 simulator:1 td:2 actual:4 considering:1 vera:1 begin:1 estimating:1 alto:2 kaufman:2 developed:1 finding:1 suite:1 golden:4 neuman:2 control:41 unit:4 appear:1 switching:3 plus:2 chose:1 mateo:2 suggests:1 shaded:2 co:1 range:1 practice:2 block:5 implement:1 backpropagation:2 sq:1 procedure:3 universal:1 integrating:1 suggest:1 undesirable:1 turbulence:6 put:1 influence:1 intercept:1 applying:1 landing:19 equivalent:3 map:1 demonstrated:1 center:2 optimize:4 williams:4 duration:1 his:2 stability:6 variation:1 controlling:3 nominal:1 commercial:1 exact:1 agreement:1 velocity:3 rumelhart:5 updating:1 ft:2 region:1 xmin:1 environment:3 reward:1 dynamic:7 neglected:1 gerald:1 trained:2 depend:1 ali:2 exposed:1 various:4 represented:2 train:2 describe:2 sejnowski:1 whose:1 widely:1 statistic:1 transform:1 itself:1 final:2 net:3 propose:1 interaction:1 entered:1 kh:1 quantifiable:1 spent:1 help:1 recurrent:8 develop:1 measured:1 school:1 flgure:1 strong:1 implemented:1 predicted:1 judge:4 foot:1 functionality:1 filter:1 stochastic:1 exploration:1 transient:2 investigation:1 mathematically:1 pl:1 mapping:2 cognition:1 narendra:3 purpose:1 integrates:1 hansen:1 palo:2 sensitive:1 weighted:9 mit:1 gaussian:1 csp:1 hj:1 command:1 ax:1 l0:1 modelling:1 bryson:2 dependent:1 accumulated:1 entire:3 accept:1 initially:1 overall:4 priori:2 spatial:1 field:1 construct:1 shaped:1 represents:2 minimized:1 others:1 connectionist:1 richard:1 modern:4 modi:1 composed:1 elevator:1 phase:6 consisting:1 geometry:1 severe:1 extreme:1 hg:2 amenable:1 integral:2 capable:1 damping:9 iv:1 loosely:1 initialized:1 desired:4 theoretical:1 yoshiro:1 modeling:1 blended:1 cost:4 deviation:2 examining:1 too:1 damp:1 combined:1 referring:1 thanks:1 fundamental:1 ie:1 retain:1 physic:1 augmentation:3 squared:1 containing:1 external:1 derivative:1 sec:2 inc:1 decisive:1 performed:1 wind:14 observing:1 effected:1 parallel:2 capability:1 slope:2 minimize:2 yves:1 square:1 hmin:1 formance:1 characteristic:5 variance:1 yield:4 identification:1 trajectory:12 ah:2 oscillatory:2 touretzky:2 ed:3 frequency:1 associated:1 static:1 henkle:4 gain:11 pilot:1 knowledge:3 schedule:5 sophisticated:1 actually:1 back:9 coloring:1 nasa:2 feed:1 dt:1 follow:1 response:12 modal:5 evaluated:1 box:2 hand:1 flight:1 horizontal:1 propagation:7 microstructures:1 mode:1 quality:1 effect:2 multiplier:1 white:1 oto:1 ll:1 during:5 essence:1 noted:2 steady:1 thomson:1 tn:1 motion:2 charles:1 functional:1 shear:1 physical:1 overview:1 ji:1 cohen:1 jp:2 interpretation:1 numerically:1 cambridge:1 automatic:2 xtd:1 specification:1 etc:1 optimizing:1 initiation:1 exploited:1 seen:3 morgan:2 additional:1 kit:1 determine:1 period:1 signal:4 ii:1 relates:1 multiple:2 ing:1 smooth:1 technical:1 characterized:1 controlled:5 parenthesis:1 pitch:8 basic:5 controller:28 rformance:2 represent:5 tailored:1 xmax:1 achieved:1 lea:1 separately:1 pass:1 elegant:1 electro:1 jordan:2 presence:1 easy:1 switch:12 variety:1 architecture:11 bandwidth:1 passed:1 penalty:5 action:1 adequate:2 useful:2 generally:4 category:1 mcclelland:1 per:1 vol:1 express:1 ht:1 sum:6 angle:7 reasonable:2 acceptable:1 layer:1 summer:1 simplification:1 
aerodynamic:1 quadratic:4 adapted:2 constraint:7 generates:1 speed:5 min:2 optimality:1 structured:4 combination:1 across:1 pan:1 unity:1 cun:2 making:1 altitude:17 pid:3 equation:8 previously:1 remains:1 turn:1 describing:1 needed:1 subjected:1 fed:1 operation:1 generic:3 ho:2 remaining:1 setpoint:1 ensure:1 touchdown:4 include:1 publishing:1 build:1 classical:3 unchanged:1 objective:2 added:3 occurs:1 strategy:1 gradient:1 link:2 thank:1 simulated:1 lio:1 unstable:1 chauvin:4 index:12 relationship:1 minimizing:1 nc:1 difficult:1 sigma:1 stated:1 boeing:2 design:4 implementation:3 gated:1 vertical:1 observation:1 descent:1 hinton:5 extended:1 varied:2 mechanical:1 specified:1 connection:1 learned:1 address:1 usually:1 selectable:1 below:1 dynamical:1 max:3 including:1 disturbance:4 representing:1 technology:1 auto:1 bare:1 parthasarathy:3 literature:1 acknowledgement:1 multiplication:2 relative:1 law:10 plant:13 prototypical:1 generation:1 proportional:1 interesting:1 ygs:1 moffett:1 throttle:1 digital:1 degree:2 s0:1 principle:2 foster:2 pi:1 cd:1 changed:1 repeat:1 supported:1 allow:1 understand:1 van:1 distributed:2 feedback:2 curve:1 evaluating:2 world:1 rich:1 transition:1 forward:2 collection:1 replicated:1 simplified:1 san:2 constituting:1 transaction:1 assumed:1 spectrum:1 table:2 nature:1 learn:1 robust:1 ca:4 miyata:1 excellent:1 complex:3 main:1 whole:2 noise:2 fied:1 augmented:1 je:1 schley:4 htd:2 probing:1 position:4 exponential:1 pe:1 hmax:1 anil:1 bad:3 xt:1 consist:2 magnitude:1 illustrates:1 downward:1 margin:1 sheared:1 saturating:3 corresponds:1 determines:1 environmental:1 ma:1 formulated:1 change:2 typical:2 determined:1 called:1 internal:1 categorize:1 dryden:1 tested:2 gust:1 |
3,135 | 3,840 | Kernels and learning curves for Gaussian process
regression on random graphs
Peter Sollich, Matthew J Urry
King?s College London, Department of Mathematics
London WC2R 2LS, U.K.
{peter.sollich,matthew.urry}@kcl.ac.uk
Camille Coti
INRIA Saclay ?Ile de France, F-91893 Orsay, France
Abstract
We investigate how well Gaussian process regression can learn functions defined on graphs, using large regular random graphs as a paradigmatic example.
Random-walk based kernels are shown to have some non-trivial properties: within
the standard approximation of a locally tree-like graph structure, the kernel does
not become constant, i.e. neighbouring function values do not become fully correlated, when the lengthscale ? of the kernel is made large. Instead the kernel
attains a non-trivial limiting form, which we calculate. The fully correlated limit
is reached only once loops become relevant, and we estimate where the crossover
to this regime occurs. Our main subject is learning curves of Bayes error versus
training set size. We show that these are qualitatively well predicted by a simple
approximation using only the spectrum of a large tree as input, and generically
scale with n/V , the number of training examples per vertex. We also explore how
this behaviour changes for kernel lengthscales that are large enough for loops to
become important.
1 Motivation and Outline
Gaussian processes (GPs) have become a standard part of the machine learning toolbox [1]. Learning
curves are a convenient way of characterizing their capabilities: they give the generalization error
as a function of the number of training examples n, averaged over all datasets of size n under
appropriate assumptions about the process generating the data. We focus here on the case of GP
regression, where a real-valued output function f (x) is to be learned. The general behaviour of GP
learning curves is then relatively well understood for the scenario where the inputs x come from
a continuous space, typically R^n [2, 3, 4, 5, 6, 7, 8, 9, 10]. For large n, the learning curves then
typically decay as a power law \propto n^{-\alpha} with an exponent α \le 1 that depends on the dimensionality
n of the space as well as the smoothness properties of the function f(x) as encoded in the covariance
function.
But there are many interesting application domains that involve discrete input spaces, where x could
be a string, an amino acid sequence (with f (x) some measure of secondary structure or biological
function), a research paper (with f (x) related to impact), a web page (with f (x) giving a score
used to rank pages), etc. In many such situations, similarity between different inputs ? which will
govern our prior beliefs about how closely related the corresponding function values are ? can be
represented by edges in a graph. One would then like to know how well GP regression can work
in such problem domains; see also [11] for a related online regression algorithm. We study this
problem here theoretically by focussing on the paradigmatic example of random regular graphs,
where every node has the same connectivity.
Sec. 2 discusses the properties of random-walk inspired kernels [12] on such random graphs. These
are analogous to the standard radial basis function kernels \exp[-(x - x')^2/(2\sigma^2)], but we find that
they have surprising properties on large graphs. In particular, while loops in large random graphs
are long and can be neglected for many purposes, by approximating the graph structure as locally
tree-like, here this leads to a non-trivial limiting form of the kernel for σ \to \infty that is not constant.
The fully correlated limit, where the kernel is constant, is obtained only because of the presence of
loops, and we estimate when the crossover to this regime takes place.
In Sec. 3 we move on to the learning curves themselves. A simple approximation based on the graph
eigenvalues, using only the known spectrum of a large tree as input, works well qualitatively and
predicts the exact asymptotics for large numbers of training examples. When the kernel lengthscale
is not too large, below the crossover discussed in Sec. 2 for the covariance kernel, the learning curves
depend on the number of examples per vertex. We also explore how this behaviour changes as the
kernel lengthscale is made larger. Sec. 4 summarizes the results and discusses some open questions.
2 Kernels on graphs and trees
We assume that we are trying to learn a function defined on the vertices of a graph. Vertices are
labelled by i = 1 . . . V , instead of the generic input label x we used in the introduction, and the
associated function values are denoted $f_i \in \mathbb{R}$. By taking the prior $P(f)$ over these functions $f = (f_1, \ldots, f_V)$ as a (zero mean) Gaussian process we are saying that $P(f) \propto \exp(-\frac{1}{2} f^T C^{-1} f)$. The covariance function or kernel C is then, in our graph setting, just a positive definite $V \times V$ matrix.
The graph structure is characterized by a $V \times V$ adjacency matrix, with $A_{ij} = 1$ if nodes i and j are connected by an edge, and 0 otherwise. All links are assumed to be undirected, so that $A_{ij} = A_{ji}$, and there are no self-loops ($A_{ii} = 0$). The degree of each node is then defined as $d_i = \sum_{j=1}^{V} A_{ij}$.
The covariance kernels we discuss in this paper are the natural generalizations of the squared-exponential kernel in Euclidean space [12]. They can be expressed in terms of the normalized graph Laplacian, defined as $L = \mathbf{1} - D^{-1/2} A D^{-1/2}$, where D is a diagonal matrix with entries $d_1, \ldots, d_V$ and $\mathbf{1}$ is the $V \times V$ identity matrix. An advantage of L over the unnormalized Laplacian $D - A$, which was used in the earlier paper [13], is that the eigenvalues of L (again a $V \times V$ matrix) lie in the interval [0, 2] (see e.g. [14]).
From the graph Laplacian, the covariance kernels we consider here are constructed as follows. The p-step random walk kernel is (for $a \geq 2$)
$$C \propto (\mathbf{1} - a^{-1}L)^p = \left[(1 - a^{-1})\mathbf{1} + a^{-1} D^{-1/2} A D^{-1/2}\right]^p \qquad (1)$$
while the diffusion kernel is given by
$$C \propto \exp\left(-\tfrac{1}{2}\sigma^2 L\right) \propto \exp\left(\tfrac{1}{2}\sigma^2 D^{-1/2} A D^{-1/2}\right) \qquad (2)$$
We will always normalize these so that $(1/V)\sum_i C_{ii} = 1$, which corresponds to setting the average (over vertices) prior variance of the function to be learned to unity.
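To make these constructions concrete, here is a minimal sketch in Python (an illustration, not part of the original analysis; the NetworkX random regular graph generator is an assumed convenience, and any adjacency matrix would do) that builds both kernels and applies the normalization above.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def normalize(C):
    """Scale C so that the average prior variance (1/V) * sum_i C_ii equals 1."""
    return C / np.mean(np.diag(C))

def normalized_laplacian(A):
    """L = 1 - D^{-1/2} A D^{-1/2} for adjacency matrix A."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt

def p_step_kernel(A, p=10, a=2.0):
    """p-step random walk kernel of Eq. (1): C proportional to (1 - L/a)^p."""
    L = normalized_laplacian(A)
    return normalize(np.linalg.matrix_power(np.eye(len(A)) - L / a, p))

def diffusion_kernel(A, sigma2=10.0):
    """Diffusion kernel of Eq. (2): C proportional to exp(-sigma^2 L / 2)."""
    return normalize(expm(-0.5 * sigma2 * normalized_laplacian(A)))

# Example: a random 3-regular graph on V = 500 vertices, as studied below.
G = nx.random_regular_graph(3, 500, seed=0)
A = nx.to_numpy_array(G)
C = p_step_kernel(A, p=10, a=2.0)
```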
To see the connection of the above kernels to random walks, assume we have a walker on the graph
who at each time step selects randomly one of the neighbouring vertices and moves to it. The
probability for a move from vertex j to i is then $A_{ij}/d_j$. The transition matrix after s steps follows as $(AD^{-1})^s$: its ij-element gives the probability of being at vertex i, having started at j. We can
now compare this with the p-step kernel by expanding the p-th power in (1):
$$C \propto \sum_{s=0}^{p} \binom{p}{s} a^{-s}(1 - a^{-1})^{p-s}\,(D^{-1/2} A D^{-1/2})^s = D^{-1/2}\left[\sum_{s=0}^{p} \binom{p}{s} a^{-s}(1 - a^{-1})^{p-s}\,(AD^{-1})^s\right] D^{1/2} \qquad (3)$$
Thus C is essentially a random walk transition matrix, averaged over the number of steps s with
$$s \sim \mathrm{Binomial}(p, 1/a) \qquad (4)$$
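The binomial averaging in (3)-(4) is easy to check numerically. The sketch below is our own illustration and reuses A, normalize and p_step_kernel from the previous snippet.

```python
import numpy as np
from scipy.stats import binom

def binomial_average_kernel(A, p=10, a=2.0):
    """Right-hand side of Eq. (3): D^{-1/2} E_s[(A D^{-1})^s] D^{1/2},
    with the number of steps s ~ Binomial(p, 1/a) as in Eq. (4)."""
    V, d = len(A), A.sum(axis=1)
    T = A @ np.diag(1.0 / d)                     # transition matrix A D^{-1}
    D_sqrt, D_inv_sqrt = np.diag(np.sqrt(d)), np.diag(1.0 / np.sqrt(d))
    avg, T_s = np.zeros((V, V)), np.eye(V)       # T_s holds (A D^{-1})^s
    for s in range(p + 1):
        avg += binom.pmf(s, p, 1.0 / a) * T_s
        T_s = T_s @ T
    return normalize(D_inv_sqrt @ avg @ D_sqrt)

# Agrees with the direct construction of Eq. (1) up to numerical precision.
assert np.allclose(binomial_average_kernel(A), p_step_kernel(A))
```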
Figure 1: (Left) Random walk kernel $C_{\ell,p}$ plotted vs distance $\ell$ along the graph, for increasing number of steps p and a = 2, d = 3. Note the convergence to a limiting shape for large p that is not the naive fully correlated limit $C_{\ell,p\to\infty} = 1$. (Right) Numerical results for the average covariance $K_1$ between neighbouring nodes, averaged over neighbours and over randomly generated regular graphs.
This shows that 1/a can be interpreted as the probability of actually taking a step at each of p 'attempts'. To obtain the actual C the resulting averaged transition matrix is premultiplied by $D^{-1/2}$ and postmultiplied by $D^{1/2}$, which ensures that the kernel C is symmetric. For the diffusion kernel, one finds an analogous result but the number of random walk steps is now distributed as $s \sim \mathrm{Poisson}(\sigma^2/2)$. This implies in particular that the diffusion kernel is the limit of the p-step kernel for $p, a \to \infty$ at constant $p/a = \sigma^2/2$. Accordingly, we discuss mainly the p-step kernel in this paper because results for the diffusion kernel can be retrieved as limiting cases.
In the limit of a large number of steps s, the random walk on a graph will reach its stationary distribution $p_\infty \propto D\mathbf{e}$, where $\mathbf{e} = (1, \ldots, 1)^T$. (This form of $p_\infty$ can be verified by checking that it remains unchanged after multiplication with the transition matrix $AD^{-1}$.) The s-step transition matrix for large s is then $p_\infty \mathbf{e}^T = D\mathbf{e}\mathbf{e}^T$ because we converge from any starting vertex to the stationary distribution. It follows that for large p or $\sigma^2$ the covariance kernel becomes $C \propto D^{1/2}\mathbf{e}\mathbf{e}^T D^{1/2}$, i.e. $C_{ij} \propto (d_i d_j)^{1/2}$. This is consistent with the interpretation of $\sigma$ or $(p/a)^{1/2}$ as a lengthscale over which the random walk can diffuse along the graph: once this lengthscale becomes large, the covariance kernel $C_{ij}$ is essentially independent of the distance (along the graph) between the vertices i and j, and the function f becomes fully correlated across the graph. (Explicitly $f = vD^{1/2}\mathbf{e}$ under the prior, with v a single Gaussian random variable.) As we next show, however, the approach to this fully correlated limit as p or $\sigma$ are increased is non-trivial.
We focus in this paper on kernels on random regular graphs. This means we consider adjacency
matrices A which are regular in the sense that they give for each vertex the same degree, di = d. A
uniform probability distribution is then taken across all A that obey this constraint [15]. What will
the above kernels look like on typical samples drawn from this distribution? Such random regular
graphs will have long loops, of length of order ln(V ) or larger if V is large. Their local structure
is then that of a regular tree of degree d, which suggests that it should be possible to calculate the
kernel accurately within a tree approximation. In a regular tree all nodes are equivalent, so the kernel
can only depend on the distance ` between two nodes i and j. Denoting this kernel value C`,p for a
p-step random walk kernel, one has then C`,p=0 = ?`,0 and
?p+1 C0,p+1 = 1 ? a1 C0,p + a1 C1,p
(5)
1
1
d?1
?p+1 C`,p+1 = ad C`?1,p + 1 ? a C`,p + ad C`+1,p
for ` ? 1
(6)
where ?p is chosen to achieve the desired normalization C0,p = 1 of the prior variance for every p.
Fig. 1(left) shows results obtained by iterating this recursion numerically, for a regular graph (in the tree approximation) with degree d = 3, and a = 2. As expected the kernel becomes more long-ranged initially as p increases, but eventually it is seen to approach a non-trivial limiting form. This
can be calculated as
$$C_{\ell,p\to\infty} = \left[1 + \ell(d-1)/d\right](d-1)^{-\ell/2} \qquad (7)$$
and is also plotted in the figure, showing good agreement with the numerical iteration. There are (at least) two ways of obtaining the result (7). One is to take the limit $\sigma \to \infty$ of the integral representation of the diffusion kernel on regular trees given in [16] (which is also quoted in [13] but with a typographical error that effectively removes the factor $(d-1)^{-\ell/2}$). Another route is to find the steady state of the recursion for $C_{\ell,p}$. This is easy to do but requires as input the unknown steady state value of $\gamma_p$. To determine this, one can map from $C_{\ell,p}$ to the total random walk probability $S_{\ell,p}$ in each 'shell' of vertices at distance $\ell$ from the starting vertex, changing variables to $S_{0,p} = C_{0,p}$ and $S_{\ell,p} = d(d-1)^{\ell-1} C_{\ell,p}$ ($\ell \geq 1$). Omitting the factors $\gamma_p$, this results in a recursion for $S_{\ell,p}$ that simply describes a biased random walk on $\ell = 0, 1, 2, \ldots$, with a probability of $1 - 1/a$ of remaining at the current $\ell$, probability $1/(ad)$ of moving to the left and probability $(d-1)/(ad)$ of moving to the right. The point $\ell = 0$ is a reflecting barrier where only moves to the right are allowed, with probability $1/a$. The time evolution of this random walk starting from $\ell = 0$ can now be analysed as in [17]. As expected from the balance of moves to the left and right, $S_{\ell,p}$ for large p is peaked around the average position of the walk, $\ell = p(d-2)/(ad)$. For $\ell$ smaller than this, $S_{\ell,p}$ has a tail behaving as $\propto (d-1)^{\ell/2}$, and converting back to $C_{\ell,p}$ gives the large-$\ell$ scaling of $C_{\ell,p\to\infty} \propto (d-1)^{-\ell/2}$; this in turn fixes the value of $\gamma_{p\to\infty}$ and so eventually gives (7).
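The recursion (5)-(6) is also straightforward to iterate numerically. The following sketch (our own illustration; truncation at a maximum distance ell_max is an assumption, and division by the new $C_{0,p}$ plays the role of $\gamma_p$) reproduces the convergence towards the limiting form (7).

```python
import numpy as np

def tree_kernel(d=3, a=2.0, p=500, ell_max=200):
    """Iterate the recursion (5)-(6) for C_{ell,p} on a regular tree of
    degree d; dividing by the new C_{0,p} implements the factor gamma_p."""
    C = np.zeros(ell_max + 2)
    C[0] = 1.0                                    # C_{ell,0} = delta_{ell,0}
    for _ in range(p):
        C_new = np.zeros_like(C)
        C_new[0] = (1 - 1 / a) * C[0] + C[1] / a
        C_new[1:-1] = (C[:-2] / (a * d) + (1 - 1 / a) * C[1:-1]
                       + (d - 1) / (a * d) * C[2:])
        C = C_new / C_new[0]
    return C[:ell_max + 1]

# Compare with the limiting form of Eq. (7); the gap shrinks as p grows.
d, ell = 3, np.arange(16)
C_inf = (1 + ell * (d - 1) / d) * (d - 1.0) ** (-ell / 2)
print(np.max(np.abs(tree_kernel(d=d, p=500)[:16] - C_inf)))
```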
The above analysis shows that for large p the random walk kernel, calculated in the absence of loops, does not approach the expected fully correlated limit; given that all vertices have the same degree, the latter would correspond to $C_{\ell,p\to\infty} = 1$. This implies, conversely, that the fully correlated limit is reached only because of the presence of loops in the graph. It is then interesting to ask at what point, as p is increased, the tree approximation for the kernel breaks down. To estimate this, we note that a regular tree of depth $\ell$ has $V = 1 + d(d-1)^{\ell-1}$ nodes. So a regular graph can be tree-like at most out to $\ell \approx \ln(V)/\ln(d-1)$. Comparing with the typical number of steps our random walk takes, which is $p/a$ from (4), we then expect loop effects to appear in the covariance kernel when
$$p/a \gtrsim \ln(V)/\ln(d-1) \qquad (8)$$
To check this prediction, we measure the analogue of $C_{1,p}$ on randomly generated [15] regular graphs. Because of the presence of loops, the local kernel values are not all identical, so the appropriate estimate of what would be $C_{1,p}$ on a tree is $K_1 = C_{ij}/\sqrt{C_{ii} C_{jj}}$ for neighbouring nodes i and j. Averaging over all pairs of such neighbours, and then over a number of randomly generated graphs, we find the results in Fig. 1(right). The results for $K_1$ (symbols) accurately track the tree predictions (lines) for small $p/a$, and start to deviate just around the values of $p/a$ expected from (8), as marked by the arrow. The deviations manifest themselves in larger values of $K_1$, which eventually, now that $p/a$ is large enough for the kernel to 'notice' the loops, approach the fully correlated limit $K_1 = 1$.
3 Learning curves
We now turn to the analysis of learning curves for GP regression on random regular graphs. We
assume that the target function $f^*$ is drawn from a GP prior with a p-step random walk covariance kernel C. Training examples are input-output pairs $(i_\mu, f^*_{i_\mu} + \xi_\mu)$ where $\xi_\mu$ is i.i.d. Gaussian noise of variance $\sigma^2$; the distribution of training inputs $i_\mu$ is taken to be uniform across vertices. Inference from a data set D of n such examples, $\mu = 1, \ldots, n$, takes place using the prior defined by C and a Gaussian likelihood with noise variance $\sigma^2$. We thus assume an inference model that is matched
to the data generating process. This is obviously an over-simplification but is appropriate for the
present first exploration of learning curves on random graphs. We emphasize that as n is increased
we see more and more function values from the same graph, which is fixed by the problem domain;
the graph does not grow.
The generalization error is the squared difference between the estimated function $\hat f_i$ and the target $f^*_i$, averaged across the (uniform) input distribution, the posterior distribution of $f^*$ given D, the distribution of datasets D, and finally, in our non-Euclidean setting, the random graph ensemble. Given the assumption of a matched inference model, this is just the average Bayes error, or the average posterior variance, which can be expressed explicitly as [1]
$$\epsilon(n) = V^{-1}\sum_i \left\langle C_{ii} - k(i)^T K^{-1} k(i) \right\rangle_{D,\mathrm{graphs}} \qquad (9)$$
where the average is over data sets and over graphs, K is an $n \times n$ matrix with elements $K_{\mu\mu'} = C_{i_\mu,i_{\mu'}} + \sigma^2\delta_{\mu\mu'}$ and $k(i)$ is a vector with entries $k_\mu(i) = C_{i,i_\mu}$. The resulting learning curve depends, in addition to n, on the graph structure as determined by V and d, and on the kernel and noise level as specified by p, a and $\sigma^2$. We fix d = 3 throughout to avoid having too many parameters to vary, although similar results are obtained for larger d.
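For a single graph, Eq. (9) can be estimated directly by Monte Carlo, as in the following sketch (our own illustration, reusing the kernel matrix C built in the earlier snippet; the noise level and the number of dataset draws are arbitrary choices).

```python
import numpy as np

def bayes_error(C, n, sigma2=0.01, n_datasets=50, seed=0):
    """Monte Carlo estimate of Eq. (9) for a single graph: the posterior
    variance, averaged over vertices and over random training sets."""
    rng = np.random.default_rng(seed)
    V, eps = len(C), 0.0
    for _ in range(n_datasets):
        idx = rng.integers(0, V, size=n)          # uniform training inputs i_mu
        K = C[np.ix_(idx, idx)] + sigma2 * np.eye(n)
        k = C[:, idx]                             # row i is k(i)^T
        quad = np.einsum('im,mn,in->i', k, np.linalg.inv(K), k)
        eps += np.mean(np.diag(C) - quad)         # C_ii - k(i)^T K^{-1} k(i)
    return eps / n_datasets

# One point of the learning curve for the kernel C built earlier.
print(bayes_error(C, n=500))                      # nu = n/V = 1
```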
Exact prediction of learning curves by analytical calculation is very difficult due to the complicated
way in which the random selection of training inputs enters the matrix K and vector k in (9).
However, by first expressing these quantities in terms of kernel eigenvalues (see below) and then
approximating the average over datasets, one can derive the approximation [3, 6]
$$\epsilon = g\!\left(\frac{n}{\epsilon + \sigma^2}\right), \qquad g(h) = \sum_{\alpha=1}^{V}\left(\lambda_\alpha^{-1} + h\right)^{-1} \qquad (10)$$
This equation for $\epsilon$ has to be solved self-consistently because $\epsilon$ also appears on the r.h.s. In the Euclidean case the resulting predictions approximate the true learning curves quite reliably. The derivation of (10) for inputs on a fixed graph is unchanged from [3], provided the kernel eigenvalues $\lambda_\alpha$ appearing in the function $g(h)$ are defined appropriately, by the eigenfunction condition $\langle C_{ij}\phi_j\rangle = \lambda\phi_i$; the average here is over the input distribution, i.e. $\langle \ldots \rangle = V^{-1}\sum_j \ldots$ From the definition (1) of the p-step kernel, we see that then $\lambda_\alpha = \kappa V^{-1}(1 - \lambda^L_\alpha/a)^p$ in terms of the corresponding eigenvalue $\lambda^L_\alpha$ of the graph Laplacian L. The constant $\kappa$ has to be chosen to enforce our normalization convention $\sum_\alpha \lambda_\alpha = \langle C_{jj}\rangle = 1$.
Fortunately, for large V the spectrum of the Laplacian of a random regular graph can be approximated by that of the corresponding large regular tree, which has spectral density [14]
$$\rho(\lambda^L) = \frac{d\sqrt{4(d-1)/d^2 - (\lambda^L - 1)^2}}{2\pi\,\lambda^L(2 - \lambda^L)} \qquad (11)$$
in the range $\lambda^L \in [\lambda^L_-, \lambda^L_+]$, $\lambda^L_\pm = 1 \pm 2d^{-1}(d-1)^{1/2}$, where the term under the square root is positive. (There are also two isolated eigenvalues $\lambda^L = 0, 2$, but these have weight $1/V$ each and so can be ignored for large V.) Rewriting (10) as $\epsilon = V^{-1}\sum_\alpha\left[(V\lambda_\alpha)^{-1} + (n/V)(\epsilon + \sigma^2)^{-1}\right]^{-1}$ and then replacing the average over kernel eigenvalues by an integral over the spectral density leads to the following prediction for the learning curve:
$$\epsilon = \int d\lambda^L\, \rho(\lambda^L)\left[\kappa^{-1}(1 - \lambda^L/a)^{-p} + \nu/(\epsilon + \sigma^2)\right]^{-1} \qquad (12)$$
with $\kappa$ determined from $\kappa\int d\lambda^L\, \rho(\lambda^L)(1 - \lambda^L/a)^p = 1$. A general consequence of the form of this
result is that the learning curve depends on n and V only through the ratio $\nu = n/V$, i.e. the number of training examples per vertex. The approximation (12) also predicts that the learning curve will have two regimes: one for small $\nu$, where $\epsilon \gg \sigma^2$ and the generalization error will be essentially independent of $\sigma^2$; and another for large $\nu$, where $\epsilon \ll \sigma^2$ so that $\epsilon$ can be neglected on the r.h.s. and one has a fully explicit expression for $\epsilon$.
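The self-consistency equation (12) is easy to solve numerically. The following sketch is our own illustration (the root-bracketing interval and integration tolerances are assumptions that hold for the parameter ranges used here); it combines the tree spectral density (11) with a one-dimensional root search.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def predicted_error(nu, d=3, a=2.0, p=10, sigma2=0.01):
    """Solve the self-consistency equation (12), using the tree spectral
    density of Eq. (11), for the predicted generalization error."""
    lam_m, lam_p = 1 - 2 * np.sqrt(d - 1) / d, 1 + 2 * np.sqrt(d - 1) / d
    rho = lambda l: (d * np.sqrt(4 * (d - 1) / d**2 - (l - 1)**2)
                     / (2 * np.pi * l * (2 - l)))
    # kappa from the normalization: kappa * int rho * (1 - l/a)^p dl = 1
    kappa = 1.0 / quad(lambda l: rho(l) * (1 - l / a)**p, lam_m, lam_p)[0]
    def rhs(eps):
        integrand = lambda l: rho(l) / ((1 - l / a)**(-p) / kappa
                                        + nu / (eps + sigma2))
        return quad(integrand, lam_m, lam_p)[0]
    return brentq(lambda e: e - rhs(e), 1e-14, 1.0)

print([predicted_error(nu) for nu in (0.25, 0.5, 1.0, 2.0, 4.0)])
```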
We compare the above prediction in Fig. 2(left) to the results of numerical simulations of the learning curves, averaged over datasets and random regular graphs. The two regimes predicted by the
approximation are clearly visible; the approximation works well inside each regime but less well in
the crossover between the two. One striking observation is that the approximation seems to predict
the asymptotic large-n behaviour exactly; this is distinct to the Euclidean case, where generally only
the power-law of the n-dependence but not its prefactor come out accurately. To see why, we exploit
that for large n (where $\epsilon \ll \sigma^2$) the approximation (10) effectively neglects fluctuations in the training input 'density' of a randomly drawn set of training inputs [3, 6]. This is justified in the graph case for large $\nu = n/V$, because the number of training inputs each vertex receives, $\mathrm{Binomial}(n, 1/V)$, has negligible relative fluctuations away from its mean $\nu$. In the Euclidean case there is no similar result, because all training inputs are different with probability one even for large n.
Fig. 2(right) illustrates that for larger a the difference in the crossover region between the true (numerically simulated) learning curves and our approximation becomes larger. This is because the average number of steps $p/a$ of the random walk kernel then decreases: we get closer to the limit of uncorrelated function values ($a \to \infty$, $C_{ij} = \delta_{ij}$). In that limit and for low $\sigma^2$ and large V the
Figure 2: (Left) Learning curves for GP regression on random regular graphs with degree d = 3 and V = 500 (small filled circles) and V = 1000 (empty circles) vertices. Plotting generalization error versus $\nu = n/V$ superimposes the results for both values of V, as expected from the approximation (12). The lines are the quantitative predictions of this approximation. Noise level as shown, kernel parameters a = 2, p = 10. (Right) As on the left but with V = 500 only and for larger a = 4.
Figure 3: (Left) Learning curves for GP regression on random regular graphs with degree d = 3 and V = 500, and kernel parameters a = 2, p = 20; noise level $\sigma^2$ as shown. Circles: numerical simulations; lines: approximation (12). (Right) As on the left but for much larger p = 200 and for a single random graph, with $\sigma^2 = 0.1$. Dotted line: naive estimate $\epsilon = 1/(1 + n/\sigma^2)$. Dashed line: approximation (10) using the tree spectrum and the large-p limit, see (17). Solid line: (10) with numerically determined graph eigenvalues $\lambda^L_\alpha$ as input.
true learning curve is $\epsilon = \exp(-\nu)$, reflecting the probability of a training input set not containing a particular vertex, while the approximation can be shown to predict $\epsilon = \max\{1 - \nu, 0\}$, i.e. a decay of the error to zero at $\nu = 1$. Plotting these two curves (not displayed here) indeed shows the same 'shape' of disagreement as in Fig. 2(right), with the approximation underestimating the true generalization error.
Increasing p has the effect of making the kernel longer-ranged, giving an effect opposite to that of
increasing a. In line with this, larger values of p improve the accuracy of the approximation (12):
see Fig. 3(left).
One may ask about the shape of the learning curves for a large number of training examples (per vertex) $\nu$. The roughly straight lines on the right of the log-log plots discussed so far suggest that $\epsilon \propto 1/\nu$ in this regime. This is correct in the mathematical limit $\nu \to \infty$ because the graph kernel has a nonzero minimal eigenvalue $\lambda_- = \kappa V^{-1}(1 - \lambda^L_+/a)^p$: for $\nu \gg \sigma^2/(V\lambda_-)$, the square bracket in (12) can then be approximated by $\nu/(\epsilon + \sigma^2)$ and one gets (because also $\epsilon \ll \sigma^2$ in the asymptotic regime) $\epsilon \approx \sigma^2/\nu$.
However, once p becomes reasonably large, $V\lambda_-$ can be shown, by analysing the scaling of $\kappa$ (see Appendix), to be extremely (exponentially in p) small; for the parameter values in Fig. 3(left) it is around $4 \times 10^{-30}$. The 'terminal' asymptotic regime $\epsilon \approx \sigma^2/\nu$ is then essentially unreachable. A more detailed analysis of (12) for large p and large (but not exponentially large) $\nu$, as sketched in the Appendix, yields
$$\epsilon \approx (c\sigma^2/\nu)\,\ln^{3/2}(\nu/(c\sigma^2)), \qquad c \propto p^{-3/2} \qquad (13)$$
This shows that there are logarithmic corrections to the naive $\epsilon \propto \sigma^2/\nu$ scaling that would apply in the true terminal regime. More intriguing is the scaling of the coefficient c with p, which implies that to reach a specified (low) generalization error one needs a number of training examples per vertex of order $\nu \propto c\sigma^2 \propto p^{-3/2}\sigma^2$. Even though the covariance kernel $C_{\ell,p}$, in the same tree approximation that also went into (12), approaches a limiting form for large p as discussed in Sec. 2, generalization performance thus continues to improve with increasing p. The explanation for this must presumably be that $C_{\ell,p}$ converges to the limit (7) only at fixed $\ell$, while in the tail $\ell \propto p$, it continues to change.
For finite graph sizes V we know of course that loops will eventually become important as p increases, around the crossover point estimated in (8). The approximation for the learning curve in (12) should then break down. The most naive estimate beyond this point would be to say that the kernel becomes nearly fully correlated, $C_{ij} \propto (d_i d_j)^{1/2}$, which in the regular case simplifies to $C_{ij} = 1$. With only one function value to learn, and correspondingly only one nonzero kernel eigenvalue $\lambda_{\alpha=1} = 1$, one would predict $\epsilon = 1/(1 + n/\sigma^2)$. Fig. 3(right) shows, however, that this significantly underestimates the actual generalization error, even though for this graph $\lambda_{\alpha=1} = 0.994$ is very close to unity so that the other eigenvalues sum to no more than 0.006. An almost perfect prediction is obtained, on the other hand, from the approximation (10) with the numerically calculated values of the Laplacian, and hence kernel, eigenvalues. The presence of the small kernel eigenvalues is again seen to cause logarithmic corrections to the naive $\epsilon \propto 1/n$ scaling. Using the tree spectrum as an approximation and exploiting the large-p limit, one finds indeed (see Appendix, Eq. (17)) that $\epsilon \approx (c'\sigma^2/n)\,\ln^{3/2}(n/(c'\sigma^2))$, where now n enters rather than $\nu = n/V$, $c'$ being a constant dependent only on p and a: informally, the function to be learned only has a finite (rather than $\propto V$) number of degrees of freedom. The approximation (17) in fact provides a qualitatively accurate description of the data in Fig. 3(right), as the dashed line in the figure shows. We thus have the somewhat unusual situation that the tree spectrum is enough to give a good description of the learning curves even when loops are important, while (see Sec. 2) this is not so as far as the evaluation of the covariance kernel itself is concerned.
4 Summary and Outlook
We have studied theoretically the generalization performance of GP regression on graphs, focussing
on the paradigmatic case of random regular graphs where every vertex has the same degree d. Our
initial concern was with the behaviour of p-step random walk kernels on such graphs. If these are
calculated within the usual approximation of a locally tree-like structure, then they converge to a
non-trivial limiting form (7) when p, or the corresponding lengthscale $\sigma$ in the closely related diffusion kernel, becomes large. The limit of full correlation between all function values on the
graph is only reached because of the presence of loops, and we have estimated in (8) the values of
p around which the crossover to this loop-dominated regime occurs; numerical data for correlations
of function values on neighbouring vertices support this result.
In the second part of the paper we concentrated on the learning curves themselves. We assumed
that inference is performed with the correct parameters describing the data generating process; the
generalization error is then just the Bayes error. The approximation (12) gives a good qualitative
description of the learning curve using only the known spectrum of a large regular tree as input. It
predicts in particular that the key parameter that determines the generalization error is $\nu = n/V$, the number of training examples per vertex. We demonstrated also that the approximation is in fact more useful than in the Euclidean case because it gives exact asymptotics for the limit $\nu \gg 1$. Quantitatively, we found that the learning curves decay as $\epsilon \propto \sigma^2/\nu$ with non-trivial logarithmic correction terms. Slower power laws $\epsilon \propto \nu^{-\alpha}$ with $\alpha < 1$, as in the Euclidean case, do not appear.
We attribute this to the fact that on a graph there is no analogue of the local roughness of a target
function because there is a minimum distance (one step along the graph) between different input
points. Finally we looked at the learning curves for larger p, where loops become important. These
can still be predicted quite accurately by using the tree eigenvalue spectrum as an approximation, if
one keeps track of the zero graph Laplacian eigenvalue which we were able to ignore previously; the
approximation shows that the generalization error scales as $\sigma^2/n$, with again logarithmic corrections.
In future work we plan to extend our analysis to graphs that are not regular, including ones from
application domains as well as artificial ones with power-law tails in the distribution of degrees
d, where qualitatively new effects are to be expected. It would also be desirable to improve the
predictions for the learning curve in the crossover region $\epsilon \approx \sigma^2$, which should be achievable using
iterative approaches based on belief propagation that have already been shown to give accurate
approximations for graph eigenvalue spectra [18]. These tools could then be further extended to
study e.g. the effects of model mismatch in GP regression on random graphs, and how these are
mitigated by tuning appropriate hyperparameters.
Appendix
We sketch here how to derive (13) from (12) for large p. Eq. (12) writes $\epsilon = g(\nu V/(\epsilon + \sigma^2))$ with
$$g(h) = \int_{\lambda^L_-}^{\lambda^L_+} d\lambda^L\, \rho(\lambda^L)\left[\kappa^{-1}(1 - \lambda^L/a)^{-p} + hV^{-1}\right]^{-1} \qquad (14)$$
and $\kappa$ determined from the condition $g(0) = 1$. (This $g(h)$ is the tree spectrum approximation to the
g(h) of (10).) Turning first to g(0), the factor $(1 - \lambda^L/a)^p$ decays quickly to zero as $\lambda^L$ increases above $\lambda^L_-$. One can then approximate this factor according to $(1 - \lambda^L/a)^p = (1 - \lambda^L_-/a)^p\left[(a - \lambda^L)/(a - \lambda^L_-)\right]^p \approx (1 - \lambda^L_-/a)^p \exp\left[-(\lambda^L - \lambda^L_-)p/(a - \lambda^L_-)\right]$. In the regime near $\lambda^L_-$ one can also approximate the spectral density (11) by its leading square-root increase, $\rho(\lambda^L) = r(\lambda^L - \lambda^L_-)^{1/2}$, with $r = (d-1)^{1/4} d^{5/2}/[\pi(d-2)^2]$. Switching then to a new integration variable $y = (\lambda^L - \lambda^L_-)p/(a - \lambda^L_-)$ and extending the integration limit to $\infty$ gives
$$1 = g(0) = \kappa\, r\,(1 - \lambda^L_-/a)^p\,[p/(a - \lambda^L_-)]^{-3/2}\int_0^\infty dy\,\sqrt{y}\,e^{-y} \qquad (15)$$
and this fixes $\kappa$. Proceeding similarly for $h > 0$ gives
$$g(h) = \kappa\, r\,(1 - \lambda^L_-/a)^p\,[p/(a - \lambda^L_-)]^{-3/2}\, F\!\left(h\kappa V^{-1}(1 - \lambda^L_-/a)^p\right), \qquad F(z) = \int_0^\infty dy\,\sqrt{y}\,(e^y + z)^{-1} \qquad (16)$$
Dividing by g(0) = 1 shows that simply $g(h) = F(hV^{-1}c^{-1})/F(0)$, where $c = 1/[\kappa(1 - \lambda^L_-/a)^p] = rF(0)[p/(a - \lambda^L_-)]^{-3/2}$, which scales as $p^{-3/2}$. In the asymptotic regime $\epsilon \ll \sigma^2$ we then have $\epsilon = g(\nu V/\sigma^2) = F(\nu/(c\sigma^2))/F(0)$ and the desired result (13) follows from the large-z behaviour of $F(z) \approx z^{-1}\ln^{3/2}(z)$.
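The large-z behaviour of F(z) is simple to verify numerically; in the sketch below (our own illustration; the integrand is rewritten for numerical stability) the printed ratio slowly approaches the constant 2/3, consistent with $F(z) \propto z^{-1}\ln^{3/2}(z)$.

```python
import numpy as np
from scipy.integrate import quad

def F(z):
    """F(z) = int_0^inf dy sqrt(y) / (e^y + z), as defined in Eq. (16)."""
    # Rewritten as sqrt(y) e^{-y} / (1 + z e^{-y}) to avoid overflow.
    return quad(lambda y: np.sqrt(y) * np.exp(-y)
                / (1.0 + z * np.exp(-y)), 0, np.inf)[0]

# Ratio F(z) * z / ln^{3/2}(z) approaches 2/3 as z grows.
for z in (1e2, 1e4, 1e6):
    print(z, F(z) * z / np.log(z)**1.5)
```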
One can proceed similarly for the regime where loops become important. Clearly the zero Laplacian eigenvalue with weight $1/V$ then has to be taken into account. If we assume that the remainder of the Laplacian spectrum can still be approximated by that of a tree [18], we get
$$g(h) = \frac{(V + h\kappa)^{-1} + r(1 - \lambda^L_-/a)^p\,[p/(a - \lambda^L_-)]^{-3/2}\, F\!\left(h\kappa V^{-1}(1 - \lambda^L_-/a)^p\right)}{V^{-1} + r(1 - \lambda^L_-/a)^p\,[p/(a - \lambda^L_-)]^{-3/2}\, F(0)} \qquad (17)$$
The denominator here is $\kappa^{-1}$ and the two terms are proportional respectively to the covariance kernel eigenvalue $\lambda_1$, corresponding to $\lambda^L_1 = 0$ and the constant eigenfunction, and to $1 - \lambda_1$. Dropping the first terms in the numerator and denominator of (17) by taking $V \to \infty$ leads back to the previous analysis, as it should. For a situation as in Fig. 3(right), on the other hand, where $\lambda_1$ is close to unity, we have $\kappa \approx V$ and so
$$g(h) \approx (1 + h)^{-1} + rV(1 - \lambda^L_-/a)^p\,[p/(a - \lambda^L_-)]^{-3/2}\, F\!\left(h(1 - \lambda^L_-/a)^p\right) \qquad (18)$$
The second term, coming from the small kernel eigenvalues, is the more slowly decaying because it corresponds to fine detail of the target function that needs many training examples to learn accurately. It will therefore dominate the asymptotic behaviour of the learning curve: $\epsilon = g(n/\sigma^2) \propto F(n/(c'\sigma^2))$ with $c' = (1 - \lambda^L_-/a)^{-p}$ independent of V. The large-n tail of the learning curve in Fig. 3(right) is consistent with this form.
References
[1] C E Rasmussen and C K I Williams. Gaussian processes for regression. In D S Touretzky, M C Mozer, and M E Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 514-520, Cambridge, MA, 1996. MIT Press.
[2] M Opper. Regression with Gaussian processes: Average case performance. In I K Kwok-Yee, M Wong, I King, and Dit-Yun Yeung, editors, Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective, pages 17-23. Springer, 1997.
[3] P Sollich. Learning curves for Gaussian processes. In M S Kearns, S A Solla, and D A Cohn, editors, Advances in Neural Information Processing Systems 11, pages 344-350, Cambridge, MA, 1999. MIT Press.
[4] M Opper and F Vivarelli. General bounds on Bayes errors for regression with Gaussian processes. In M Kearns, S A Solla, and D Cohn, editors, Advances in Neural Information Processing Systems 11, pages 302-308, Cambridge, MA, 1999. MIT Press.
[5] C K I Williams and F Vivarelli. Upper and lower bounds on the learning curve for Gaussian processes. Mach. Learn., 40(1):77-102, 2000.
[6] D Malzahn and M Opper. Learning curves for Gaussian processes regression: A framework for good approximations. In T K Leen, T G Dietterich, and V Tresp, editors, Advances in Neural Information Processing Systems 13, pages 273-279, Cambridge, MA, 2001. MIT Press.
[7] D Malzahn and M Opper. A variational approach to learning curves. In T G Dietterich, S Becker, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 463-469, Cambridge, MA, 2002. MIT Press.
[8] P Sollich and A Halees. Learning curves for Gaussian process regression: approximations and bounds. Neural Comput., 14(6):1393-1428, 2002.
[9] P Sollich. Gaussian process regression with mismatched models. In T G Dietterich, S Becker, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 519-526, Cambridge, MA, 2002. MIT Press.
[10] P Sollich. Can Gaussian process regression be made robust against model mismatch? In Deterministic and Statistical Methods in Machine Learning, volume 3635 of Lecture Notes in Artificial Intelligence, pages 199-210. 2005.
[11] M Herbster, M Pontil, and L Wainer. Online learning over graphs. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 305-312, New York, NY, USA, 2005. ACM.
[12] A J Smola and R Kondor. Kernels and regularization on graphs. In M Warmuth and B Schölkopf, editors, Proc. Conference on Learning Theory (COLT), Lect. Notes Comp. Sci., pages 144-158. Springer, Heidelberg, 2003.
[13] R I Kondor and J D Lafferty. Diffusion kernels on graphs and other discrete input spaces. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 315-322, San Francisco, CA, USA, 2002. Morgan Kaufmann.
[14] F R K Chung. Spectral graph theory. Number 92 in Regional Conference Series in Mathematics. American Mathematical Society, 1997.
[15] A Steger and N C Wormald. Generating random regular graphs quickly. Combinator. Probab. Comput., 8(4):377-396, 1999.
[16] F Chung and S-T Yau. Coverings, heat kernels and spanning trees. The Electronic Journal of Combinatorics, 6(1):R12, 1999.
[17] C Monthus and C Texier. Random walk on the Bethe lattice and hyperbolic brownian motion. J. Phys. A, 29(10):2399-2409, 1996.
[18] T Rogers, I Perez Castillo, R Kuehn, and K Takeda. Cavity approach to the spectral density of sparse symmetric random matrices. Phys. Rev. E, 78(3):031116, 2008.
A Normative Theory of Synaptic Depression
Jean-Pascal Pfister
Computational & Biological Learning Lab
Department of Engineering, University of Cambridge
Trumpington Street, Cambridge CB2 1PZ, United Kingdom
[email protected]
Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, United Kingdom
[email protected]
M?at?e Lengyel
Computational & Biological Learning Lab
Department of Engineering, University of Cambridge
Trumpington Street, Cambridge CB2 1PZ, United Kingdom
[email protected]
Abstract
Synapses exhibit an extraordinary degree of short-term malleability, with release
probabilities and effective synaptic strengths changing markedly over multiple
timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a
normatively-justifiable role. This theory starts from the commonplace observation
that the spiking of a neuron is an incomplete, digital, report of the analog quantity that contains all the critical information, namely its membrane potential. We
suggest that a synapse solves the inverse problem of estimating the pre-synaptic
membrane potential from the spikes it receives, acting as a recursive filter. We
show that the dynamics of short-term synaptic depression closely resemble those
required for optimal filtering, and that they indeed support high quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic
membrane potential. We make experimentally testable predictions for how the
statistics of subthreshold membrane potential fluctuations and the form of spiking non-linearity should be related to the properties of short-term plasticity in any
particular cell type.
1 Introduction
Far from being static relays, synapses are complex dynamical elements. The effect of a spike from a
presynaptic neuron on its postsynaptic partner depends on the history of the activity of both pre- and
postsynaptic neurons, and thus the efficacy of a synapse undergoes perpetual modification. These
changes in efficacy can last from hundreds of milliseconds or minutes (short-term plasticity) to hours
or months (long-term plasticity). Short-term plasticity typically only depends on the firing pattern
of the presynaptic cell [1]; short term depression gradually diminishes the postsynaptic effects of
presynaptic spikes that arrive in quick succession (Fig. 1A). Given the prominence and ubiquity of
synaptic depression in cortical (and subcortical) synapses [2], it is pressing to identify its computational role(s).
There have thus been various important suggestions for the functional significance of synaptic depression, including (just to name a few) low-pass filtering of inputs [3], rendering postsynaptic
responses insensitive to the absolute intensity of presynaptic activity [4, 5], and decorrelating input
spike sequences [6]. However, important though they must be for select neural systems, these suggestions have a piecemeal flavor; for instance, chaining together stages of low-pass filtering would
lead to trivial responding.
Here, we propose a theory according which synaptic depression solves a computational problem that
is faced by any neural population in which neurons represent and compute with analog quantities,
but communicate with discrete spikes. For convenience, we assume this analog quantity to be the
membrane potential, but, via a non-linear transformation [7], it could equally well be an analog firing
rate. That is, we assume that network computations require the evolution of the membrane potential
of a neuron to be a function of the membrane potentials of its presynaptic partners. However, such
a neuron does not have (at least not directly, see [8] for an example of indirect interaction) access
to these membrane potentials, but rather only to the spikes to which they lead, and so it faces a key
estimation problem.
Thus, much as in the vein of standard textbook presentations, the operation of a neuron can be
logically broken down into three concurrent processes, each running in its dedicated functional
compartment: 1) the neuron's afferent synapses (e.g. spines) estimate the membrane potential of its presynaptic partners, scaled according to the rules of the network computation; 2) the neuron's soma-dendritic compartment follows the membrane potential-dependent dynamics and post-synaptic integration also determined by the computation; and 3) its axon generates action potentials that are
broadcasted to its efferent synapses (and possibly back to the other compartments, eg. for long-term
plasticity). It is in the indispensable first estimation step that we suggest synaptic depression to be
involved.
In Section 2 we formalise the problem of estimating presynaptic membrane potentials as an instance
of Bayesian inference, and derive an online recursive estimator for it. Given suitable assumptions
about presynaptic membrane potential dynamics and spike generation, this optimal estimator can be
written in closed form exactly [9, 10]. In Section 3, we introduce a canonical model of postsynaptic membrane potential and synaptic depression dynamics, and show how it relates to the optimal
estimator derived earlier. In Section 4, we present results from numerical simulations showing the
quality with which synaptic depression can approximate the performance of the optimal estimator,
and how much is gained relative to a static synapse without synaptic depression. Finally, in Section
5, we sum up, suggest experimentally testable predictions, and discuss possible extensions of this
work, eg. to incorporate other forms of short-term synaptic plasticity.
2 Bayesian estimation of presynaptic membrane potentials
The Bayesian estimation problem that needs to be solved by a synapse involves inferring the posterior distribution $p(u_t|s_{1..t})$ over the presynaptic membrane potential $u_t$ at time step t (for discretized time), given the spikes seen from the presynaptic cell up to that time step, $s_{1..t}$. We first define a
statistical (generative) model of presynaptic membrane potential fluctuations and spiking, and then
derive the estimator that is appropriate for it.
The generative model involves two simplifying assumptions (Fig. 1B). First, we assume that presynaptic membrane potential dynamics are Markovian:
$$p(u_t|u_{1..t-1}) = p(u_t|u_{t-1}) \qquad (1)$$
In particular, we assume that the presynaptic membrane potential evolves as an Ornstein-Uhlenbeck (OU) process, given (again, in discretized time) by
$$u_t = u_{t-1} - \theta(u_{t-1} - u_r)\,\delta t + W_t\sqrt{\delta t}, \qquad W_t \overset{\mathrm{iid}}{\sim} \mathcal{N}(W_t; 0, \sigma_W^2) \qquad (2)$$
Figure 1: A. Synaptic depression: postsynaptic responses to a train of presynaptic action potentials (not shown) at 40 Hz. (Reproduced from [11], adapted from [12].) B. Graphical model of the process generating presynaptic subthreshold membrane potential fluctuations, u, and spikes, s. The membrane potential evolves according to a first-order Markov process, the Ornstein-Uhlenbeck (OU) process (Eqs. 1-2). The probability of generating a spike at time t ($s_t = 1$) depends only on the current membrane potential, $u_t$, and is determined by a non-linear Poisson (NP) model (Eqs. 3-5). C. Sample membrane potential trace (red line) and spike timings (vertical black dotted lines) generated by the OU-NP process; with $u_r = 0$ mV, $\theta^{-1} = 100$ ms, $\sigma_W^2 = 0.02\ \mathrm{mV^2/ms}$ ($\sigma_{OU}^2 = 1\ \mathrm{mV^2}$), $\beta^{-1} = 1$ mV, and $g_0 = 10$ Hz.
where $1/\theta$ is the time constant with which the membrane potential decays back to its resting value, $u_r$, and $\delta t$ is the size of the discretized time bins. Because both $\theta$ and $\sigma_W$ are assumed to be constant, the variance of the presynaptic membrane potential, $\sigma_{OU}^2 = \sigma_W^2/2\theta$, is stationary.
The second assumption is that spiking activity at any time only depends on the membrane potential
at that time:
$$p(s_t|u_{1..t}) = p(s_t|u_t) \qquad (3)$$
In particular, we assume that the spike generating mechanism is an inhomogeneous Poisson process (Fig. 1C). Thus, at time step t, the neuron emits a spike ($s_t = 1$) with probability $g(u_t)\delta t$, and therefore the spiking probability $p(s_t|u_t)$ given the membrane potential can be written as:
$$p(s_t|u_t) = \left[g(u_t)\delta t\right]^{s_t}\left[1 - g(u_t)\delta t\right]^{(1 - s_t)} \qquad (4)$$
We further assume that the transfer function, g(u), is exponential¹:
$$g(u) = g_0\exp(\beta u) \qquad (5)$$
where $\beta$ determines the stochasticity of spiking. In the limit $\beta \to \infty$ the spiking process is deterministic, i.e. if the membrane potential, u, is bigger than zero, the neuron emits a spike, and if u < 0, the neuron does not fire.
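For illustration, the generative model of Eqs. (2)-(5) can be simulated with a few lines of Python (a minimal sketch, not part of the original paper; it uses the Fig. 1 parameter values, and the Euler step size dt is an arbitrary choice).

```python
import numpy as np

def simulate_ou_np(T=1000.0, dt=0.1, theta=0.01, u_r=0.0,
                   sigma2_W=0.02, beta=1.0, g0=0.01, seed=0):
    """Simulate the OU-NP generative model, Eqs. (2) and (4)-(5).
    Times are in ms, potentials in mV; g0 = 0.01/ms corresponds to 10 Hz."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    u, s = np.empty(n), np.zeros(n, dtype=int)
    u[0] = u_r
    for t in range(1, n):
        W = rng.normal(0.0, np.sqrt(sigma2_W))                 # W_t
        u[t] = u[t-1] - theta * (u[t-1] - u_r) * dt + W * np.sqrt(dt)
        s[t] = rng.random() < g0 * np.exp(beta * u[t]) * dt    # Eqs. (4)-(5)
    return u, s

u, s = simulate_ou_np()   # parameter values as in Fig. 1
```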
Estimating on-line the membrane potential of the presynaptic cell from its spiking history amounts
to computing the posterior probability distribution, $p(u_t|s_{1..t})$. Since equations 1 and 3 define a hidden Markov model, the posterior can be written in a recursive form:
$$p(u_t|s_{1..t}) \propto p(s_t|u_t)\int p(u_t|u_{t-1})\,p(u_{t-1}|s_{1..t-1})\,du_{t-1} \qquad (6)$$
That is, the posterior at time step t, $p(u_t|s_{1..t})$, can be computed by combining information from the current time step with the posterior obtained at the previous time step, $p(u_{t-1}|s_{1..t-1})$. Note that
even though inference can be performed recursively, and the hidden dynamics is linear-Gaussian
(Eq. 2), the (extended) Kalman filter cannot be used here for inference because the measurement
does not involve additive Gaussian noise, but rather comes from the stochasticity of the spiking
process (Eqs. 4-5).
¹Note that the exponential gain function is a convenient choice, since the product of a Gaussian and an exponential gives again an (unnormalised) Gaussian (see Supplementary Information). Furthermore, the exponential gain function also has some experimental support [13].
Performing recursive inference (filtering), as described by equation 6, under the generative model described by equations 1-5 results in a posterior distribution that is Gaussian, $u_t|s_{1..t} \sim \mathcal{N}(u_t; \mu, \sigma^2)$ (see Supplementary Information). The mean and variance of this Gaussian evolve (in continuous time, by taking the limit $\delta t \to 0$) as:
$$\dot\mu = -\theta\,(\mu - u_r) + \beta\sigma^2\,(S(t) - \bar g) \qquad (7)$$
$$\dot{\sigma^2} = -2\theta\,(\sigma^2 - \sigma_{OU}^2) - \bar g\,\beta^2\sigma^4 \qquad (8)$$
with the normalisation factor given by
$$\bar g = \left\langle g_0 \exp(\beta u)\right\rangle_{u_t|s_{1..t}} = g_0 \exp\!\left(\beta\mu + \frac{\beta^2\sigma^2}{2}\right) \qquad (9)$$
where S(t) is the spike train of the presynaptic cell (represented as a sum of Dirac delta functions).
(A similar, but not identical, derivation can be found in [9]).
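A minimal Euler discretization of the filter (7)-(9) is sketched below (our own illustration, driven by the spike train s and compared against the membrane potential u from the simulation sketch above; since S(t) is a sum of delta functions, each spike contributes a jump of $\beta\sigma^2$ to $\mu$, while between spikes the $-\beta\sigma^2\bar g$ drift applies).

```python
import numpy as np

def optimal_filter(s, dt=0.1, theta=0.01, u_r=0.0,
                   sigma2_W=0.02, beta=1.0, g0=0.01):
    """Euler integration of the filter, Eqs. (7)-(9), driven by spikes s."""
    sigma2_OU = sigma2_W / (2 * theta)
    mu, var = np.empty(len(s)), np.empty(len(s))
    mu[0], var[0] = u_r, sigma2_OU
    for t in range(1, len(s)):
        g_bar = g0 * np.exp(beta * mu[t-1] + beta**2 * var[t-1] / 2)  # (9)
        mu[t] = (mu[t-1] - theta * (mu[t-1] - u_r) * dt
                 + beta * var[t-1] * (s[t] - g_bar * dt))             # (7)
        var[t] = (var[t-1] - 2 * theta * (var[t-1] - sigma2_OU) * dt
                  - g_bar * beta**2 * var[t-1]**2 * dt)               # (8)
    return mu, var

mu, var = optimal_filter(s)          # spike train from the earlier sketch
print(np.mean((mu - u)**2))          # mean squared estimation error
```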
Equation 7 indicates that each time a spike is observed, the estimated membrane potential should
increase proportionally to the uncertainty (variance) about the current estimate. This estimation
uncertainty then decreases each time a spike is observed (Eqs. 8-9). As Fig. 2A shows, the higher the
presynaptic membrane potential is, the more spikes are emitted (because the instantaneous firing rate
is a monotonic function of membrane potential, see Eq. 5), and therefore the smaller the posterior
variance becomes. Therefore the estimation error is smaller for higher membrane potential (see
Fig. 2B). Conversely, in the absence of spikes, the estimated membrane potential decreases while the
variance increases back to its asymptotic value. Fig. 2C shows that the representation of uncertainty
about the membrane potential by $\sigma^2$ is self-consistent because it is predictive of the error of the mean estimator, $\mu$.
The first term on the r.h.s. of equation 7 comes from the prior knowledge about the membrane potential dynamics. The second term comes from the likelihood of the spiking observations. Those two
contributions can be isolated independently by taking two different limits that we will consider in
the next two subsections.
2.1 Small noise limit
In the limit of small variance of the noise driving the OU process, i.e., $\sigma_W^2 = \epsilon\,\sigma_{W_0}^2$ with $\epsilon \to 0$, the asymptotic uncertainty $\sigma_\infty^2$ scales with $\epsilon$: $\sigma_\infty^2 = \epsilon\,\sigma_{W_0}^2/2\theta$ (c.f. Eq. 8 with $\dot{\sigma^2} = 0$). Then the dynamics of $\mu$ becomes driven only by the prior mean membrane potential $u_r$:
$$\dot\mu \simeq -\theta\,(\mu - u_r) \qquad (10)$$
and so the asymptotic estimated membrane potential will tend to the prior mean membrane potential.
This is reasonable since in the small noise limit, the true membrane potential $u_t$ will effectively be very close to $u_r$. Furthermore, the convergence time constant of the estimated membrane potential should be matched to the time constant $\theta^{-1}$ of the OU process, and this is indeed the case in Eq. 10.
2.2 Slow dynamics limit
A second interesting limit is where the time constant of the OU process becomes small, i.e., $\theta = \epsilon\,\theta_0$ with $\epsilon \to 0$. In this case, the variance of the noise in the OU process must also scale with $\epsilon$, i.e. $\sigma_W^2 = \epsilon\,\sigma_{W_0}^2$, to prevent the process from being unbounded. The variance $\sigma_{OU}^2 = \sigma_{W_0}^2/2\theta_0$ of the OU process is therefore independent of $\epsilon$. In this case, the asymptotic value of the posterior variance becomes $\sigma_\infty^2 = \sqrt{\epsilon}\,\sigma_{W_0}/(\beta\sqrt{\bar g})$ (c.f. Eq. 8 with $\dot{\sigma^2} = 0$). In the limit of small $\epsilon$, the first term of Eq. 7 scales with $\epsilon$ whereas the second term scales with $\sqrt{\epsilon}$. We can therefore write:
$$\dot\mu \simeq \sqrt{\epsilon}\,\frac{\sigma_{W_0}}{\sqrt{\bar g}}\,(S(t) - \bar g) \qquad (11)$$
Because the time constant $\theta^{-1}$ of the OU process is slow, the driving force that pulls the membrane potential back to its mean value $u_r$ is weak. Therefore the membrane potential estimation dynamics should rely on the observed spikes rather than on the prior information $u_r$. This is apparent in Eq. 11. Furthermore, the time constant $\tau = \sqrt{\bar g/\epsilon}/\sigma_{W_0}$ is not fixed but is a function of the mean estimated membrane potential $\mu$ (through $\bar g$). Thus, if the initial estimate $\mu_0 = \mu(0)$ is below the target value $u_r$, $\bar g$
Figure 2: The performance of the optimal on-line estimator. A. Red line: presynaptic membrane potential, u, as a function of time; vertical dotted lines: spikes emitted. Dot-dashed black line: on-line estimator $\mu$ given by Eq. (7); gray shading: $\mu \pm \sigma$, with $\sigma$ given by Eq. (8). B. Estimation error $(\mu - u)^2$ as a function of the membrane potential u of the OU process. Black dots: estimation error and true membrane potential in individual time steps; red line: third order polynomial fit. C. Black bars: histogram of normalized estimation error $z = (\mu - u)/\sigma$. Red line: normal distribution $\mathcal{N}(z; 0, 1)$. Parameters were as in Fig. 1, except for $\beta^{-1} = 0.5$ mV.
will be small and hence the time constant $\tau$ will be small as well. As a consequence, each spike will greatly increase the estimate and therefore speed up the approach of this estimate to the true value. As $\mu$ gets closer to the true membrane potential, the time constant increases, leading to an appropriately accurate estimate of the membrane potential. This dynamical time constant therefore helps the estimation avoid the traditional speed vs accuracy trade-off (short time constants are fast but give a noisy estimate; longer time constants are slow but yield a more accurate estimate), by combining the best of the two worlds.
3 Depressing synapses as estimators of presynaptic membrane potential
In section 2 we have shown that presynaptic spikes have a varying, context-dependent effect on
the optimal on-line estimator of presynaptic membrane potential. In this section we will show that
the variability that synaptic depression introduces in postsynaptic responses closely resembles the
variability of the optimal estimator.
A simple way to study the similarity between the optimal estimator and short-term plasticity is to
consider their steady state filtering properties. As we saw above, according to the optimal estimator,
the higher the input firing rate is, the smaller the posterior variance becomes, and therefore the
increment due to subsequent spikes should decrease. This is consistent with depressing synapses
for which the amount of excitatory postsynaptic current (EPSC) decreases when the stimulation
frequency is increased (see Fig. 3).
Figure 3: A. Steady-state spiking increment $\beta\sigma_\infty^2$ of the optimal estimator as a function of $r = \langle S \rangle$ (Eq. 8). B. Synaptic depression in the climbing fibre to Purkinje cell synapse: average (±s.e.m.) normalised 'steady-state' magnitude of EPSCs as a function of stimulation frequency. Reproduced from [3].
Importantly, the similarity between the optimal membrane potential estimator and short-term plasticity is not limited to stationary properties. Indeed, the actual dynamics of the optimal estimator
(Eqs. 7-9) can be well approximated by the dynamics of synaptic depression. In a canonical model
of short-term depression [14], the postsynaptic membrane potential, v, changes as
$$\dot v = -\frac{v - v_0}{\tau} + J\,Y\,x\,S(t), \qquad \text{with} \qquad \dot x = \frac{1 - x}{\tau_D} - Y\,x\,S(t) \qquad (12)$$
where J and Y are constants (synaptic weight and utilisation fraction), and x is a time-varying 'resource' variable (e.g. the fraction of presynaptic vesicles ready to fuse to the membrane). Thus, v is increased by each presynaptic spike, and in the absence of spikes it decays to its resting value, $v_0$, with membrane time constant $\tau$. However, the effect of each spike on v is scaled by x, which itself is decreased after each spike and increases between spikes back towards one with time constant $\tau_D$.
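A discrete-time sketch of this model is given below (an illustration of ours, not the authors' code; the parameter values are the tuned ones quoted in Fig. 4, and s and u are the spike train and membrane potential from the earlier simulation sketch).

```python
import numpy as np

def depressing_synapse(s, dt=0.1, J=4.82, Y=0.17,
                       tau=60.6, tau_D=64.0, v0=-0.59):
    """Euler integration of the depression model, Eq. (12); parameter
    values are the tuned ones quoted in Fig. 4."""
    v, x = np.empty(len(s)), np.empty(len(s))
    v[0], x[0] = v0, 1.0
    for t in range(1, len(s)):
        v[t] = v[t-1] - (v[t-1] - v0) / tau * dt + J * Y * x[t-1] * s[t]
        x[t] = x[t-1] + (1 - x[t-1]) / tau_D * dt - Y * x[t-1] * s[t]
    return v, x

v, x = depressing_synapse(s)   # spike train from the simulation sketch above
print(np.mean((v - u)**2))     # compare with the optimal filter's MSE
```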
Thus, the postsynaptic potential, v, behaves much like the posterior mean of the optimal estimator, $\mu$, while the dynamics of the synaptic resource variable, x, closely resemble that of the posterior variance of the optimal estimator, $\sigma^2$. This qualitative similarity can be made more formal under appropriate assumptions; for details see section 3 of the Supplementary Information. Indeed, the capacity of a depressing synapse (with appropriate parameters) to estimate the presynaptic membrane potential can be nearly as good as that of the optimal estimator (Fig. 4, top). Interestingly, although the scaled variance $\sigma^2/\sigma_\infty^2$ does not follow the resource variable dynamics x perfectly just after a spike, these two quantities are virtually identical at the time of the next spike, i.e. when they are used by the membrane potential estimators (Fig. 4, bottom).
4 Performance analysis
In order to quantify how well synaptic dynamics with depression perform in estimating presynaptic membrane potentials, we measure performance by the mean-squared error (MSE) between the
true membrane potential u and the estimated membrane potential, and compare the MSE of three
alternative estimators.
The simplest model we consider is a static (non-depressing) synapse, in which v is given by Eq. 12
with constant x = 1. This estimator has only 3 tuneable parameters: $\tau$, $v_0$ and J (Y = 1 is fixed without loss of generality). The second estimator we consider includes synaptic depression, i.e. x is also allowed to vary (Eq. 12). This estimator contains 5 tuneable parameters ($v_0$, $\tau$, Y, J, $\tau_D$). Finally, we consider the optimal estimator (Eqs. 7-9). This estimator has no tunable parameters. Once the parameters of presynaptic membrane potential dynamics ($\sigma_W$, $\theta$, $u_r$) and spiking ($\beta$, $g_0$)
are fixed, the optimal estimator is entirely determined. The comparison of the performance of these
three estimators is displayed on Fig. 5. The optimal estimator (black circles) is obviously a lower
bound on any type of estimator. For a wide range of parameter values, the depressing synapse
performs almost as well as the optimal estimator, and both perform better than the static synapse.
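A minimal way to reproduce the static-versus-depressing comparison is to tune each synapse's free parameters against a simulated trace by direct MSE minimisation, using the simulate_synapse sketch above. The initial guesses and optimiser choice are illustrative assumptions; the optimal estimator (Eqs. 7-9) is omitted since its update equations are not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import minimize

def fit_synapse(u_true, spike_times, depressing=True, T=2.0, dt=1e-4):
    """Tune synapse parameters to minimise MSE(u, v) as in the text.

    Static synapse: 3 parameters (J, tau, v0), x fixed at 1.
    Depressing synapse: 5 parameters (J, Y, tau, tau_D, v0).
    """
    def mse(p):
        if depressing:
            J, Y, tau, tau_D, v0 = p
        else:
            (J, tau, v0), Y, tau_D = p, 1.0, 1.0   # tau_D unused when x == 1
        v, _ = simulate_synapse(spike_times, T, dt, J, Y, tau, tau_D, v0,
                                dynamic_x=depressing)
        return np.mean((u_true - v) ** 2)
    p0 = [4.0, 0.2, 0.06, 0.06, -0.5] if depressing else [4.0, 0.06, -0.5]
    return minimize(mse, p0, method="Nelder-Mead")
```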
[Figure 4 graphic. Top: membrane potential u [mV] and its STP and optimal estimates versus time [ms]. Bottom: resource variable x and scaled variance $\sigma^2/\sigma^2_\infty$ versus time [ms].]
Figure 4: Depressing synapses implement near-optimal estimation of presynaptic membrane potentials. Top. Red line, and vertical dotted lines: membrane potential, u, and spikes, S, generated by a simulated presynaptic cell (with parameters as in Fig. 1). Blue line: postsynaptic potential, v, in a depressing synapse (Eq. 12) with all 5 parameters (J = 4.82, $\tau$ = 60.6 ms, $v_0$ = -0.59 mV, $\tau_D$ = 64 ms, Y = 0.17) tuned to minimize the mean squared estimation error, $\langle (u - v)^2 \rangle$. Black line: posterior mean of the optimal on-line estimator, $\mu$ (Eq. 7). Bottom. Black: resource variable, x, in the depressing synapse (Eq. 12). Blue: posterior variance of the optimal estimator, $\sigma^2$ (Eq. 8).
In the slow dynamics limit ($\epsilon \to 0$, see section 2.2), the estimation error of the optimal estimator can even be approximated analytically (see Supplementary Information). In this limit, the error scales with $\sqrt{\sigma_W}$ and therefore scales with $\epsilon^{1/4}$. As can be seen in Fig. 5B, for small $\epsilon$, the analytical expression is consistent with the simulations.
5 Discussion
Synapses are a cornerstone of computation in networks, and are highly complex dynamical systems
involving more than a thousand different types of protein. One prominent feature of their dynamics
is significant short-term changes in efficacy; these belie the sort of single fixed, or slowly changing,
weights popular in most neural models. We interpreted short-term synaptic depression, a key feature
of synaptic dynamics, as solving the fundamental computational task of estimating the analog membrane potential of the presynaptic cell from observed spikes. Steady-state and dynamical properties
of a Bayes-optimal estimator are well-matched by a canonical model of depression; using a fixed
synaptic efficacy instead leads to a highly suboptimal estimator.
Our theory is readily testable, since it suggests a precise relationship between quantities that have been subject to extensive, separate, empirical study, namely the statistics of a neuron's membrane potential dynamics (captured by the parameters of Eq. (2)), the form of its spiking non-linearity (described by Eq. (5)), and the synaptic depression it expresses in its efferent synapses. Accounting for the observation that different efferent synapses of the same cell can express different forms of short-term synaptic plasticity [15] remains a challenge; one obvious possibility is that different synapses are estimating different aspects or functions of the membrane potential.
Our approach is almost dual to that explored in [16]. For that model, the spike generation mechanism
of the presynaptic neuron was modified such that even a simple read-out mechanism with fixed
efficacies could correctly decode the analogue quantity encoded presynaptically. By contrast, we
considered a standard model of spiking [17], and thereby derived an explanation for the evident fact
that synapses are not in fact fixed.
[Figure 5 graphic: panels A and B; y-axes: estimation error in [mV]; legends: no STP, STP and optimal (simulation), plus optimal (theory) in B; x-axes: $\epsilon$ on a log scale.]
Figure 5: A. Comparing the estimation error for different membrane potential estimators as a function of $\epsilon$ ($\tau = \tau_0/\epsilon$, $\sigma_W^2 = \epsilon\,\sigma_{W_0}^2$). Black: asymptotic error of the optimal estimator. Blue: depressing synapse with its 5 tuneable parameters (see text) being optimised for each value of $\epsilon$. Red: static synapse with its 3 tuneable parameters (see text) being optimised. Total simulated time was approximately 5 min. Horizontal dot-dashed line: upper bound on the estimation error given by $\sigma_{\mathrm{OU}} = \sigma_W\sqrt{\tau/2} = 1$. B. Analysing the estimation error of the optimal estimator in the slow dynamics limit ($\epsilon \to 0$). Solid line: analytical approximation (Eq. 31 in the Supplementary Information), circles: simulation, horizontal dot-dashed line: as in A.
There are several avenues to extend the present analysis. For example, it would be important to understand in more quantitative detail the mapping between the parameters of the process generating
the presynaptic membrane potential and spikes, and the parameters of synaptic depression that will
best realize the corresponding optimal estimator. We present some preliminary derivations in the
supplementary material that seem to yield at least the right ball-park values for optimal synaptic dynamics. This should also enable us to explore the particular parameter regimes in which depressing
synapses have the most (or least) advantage over static synapses in terms of estimation performance,
as in Fig. 5. We should also consider a meta-plasticity rule that suitably adapts the parameters of the
short-term dynamics in the light of the statistics of spiking.
Our assumption about the prior distribution of presynaptic membrane potential dynamics is highly
restrictive. A broader scheme that has previously been explored is that it follow a Gaussian process
model [18, 19] with a more general covariance function. Recursive estimation is often a reasonable
approximation in such cases, even for those covariance functions, for instance enforcing smoothness, for which it cannot be exact. One interesting property of smooth trajectories is that a couple
of spikes arriving in quick succession may be diagnostic of an upward-going trend in membrane potential which is best decoded with increasing, i.e., facilitating, rather than decreasing, postsynaptic
responses. Thus it may be possible to encompass other forms of short term plasticity within our
scheme.
The spike generation process can also be extended to incorporate refractoriness, bursting, and other
forms of non-Poisson behaviour, e.g. as in [20]. Similarly, synaptic failures could also be considered.
We hope through our theory to be able to provide a teleological account of the rich complexities of
real synaptic inconstancy.
Acknowledgements
Funding was from the Gatsby Charitable Foundation (PD) and the Wellcome Trust (JPP, ML and
PD).
References
[1] Abbott, L.F. & Regehr, W.G. Synaptic computation. Nature 431, 796-803 (2004).
[2] Zucker, R. & Regehr, W. Short-term synaptic plasticity. Annual Review of Physiology 64, 355-405 (2002).
[3] Dittman, J., Kreitzer, A. & Regehr, W. Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. Journal of Neuroscience 20, 1374 (2000).
[4] Abbott, L.F., Varela, J.A., Sen, K. & Nelson, S.B. Synaptic depression and cortical gain control. Science 275, 220-224 (1997).
[5] Cook, D., Schwindt, P., Grande, L. & Spain, W. Synaptic depression in the localization of sound. Nature 421, 66-70 (2003).
[6] Goldman, M., Maldonado, P. & Abbott, L. Redundancy reduction and sustained firing with stochastic depressing synapses. Journal of Neuroscience 22, 584 (2002).
[7] Ermentrout, B. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics 61, 353 (1998).
[8] Shu, Y., Hasenstaub, A., Duque, A., Yu, Y. & McCormick, D. Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential. Nature 441, 761-765 (2006).
[9] Eden, U., Frank, L., Barbieri, R., Solo, V. & Brown, E. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation 16, 971-998 (2004).
[10] Bobrowski, O., Meir, R. & Eldar, Y. Bayesian filtering in spiking neural networks: Noise, adaptation, and multisensory integration. Neural Computation 21, 1277-1320 (2009).
[11] Dayan, P. & Abbott, L.F. Theoretical Neuroscience (MIT Press, Cambridge, 2001).
[12] Markram, H. & Tsodyks, M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382, 807-810 (1996).
[13] Jolivet, R., Rauch, A., Lüscher, H.R. & Gerstner, W. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. J. Computational Neuroscience 21, 35-49 (2006).
[14] Mongillo, G., Barak, O. & Tsodyks, M. Synaptic theory of working memory. Science 319, 1543 (2008).
[15] Markram, H., Wu, Y. & Tsodyks, M. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. USA 95, 5323-5328 (1998).
[16] Deneve, S. Bayesian spiking neurons I: inference. Neural Computation 20, 91-117 (2008).
[17] Gerstner, W. & Kistler, W.K. Spiking Neuron Models (Cambridge University Press, Cambridge UK, 2002).
[18] Cunningham, J., Yu, B., Shenoy, K. & Sahani, M. Inferring neural firing rates from spike trains using Gaussian processes. Advances in Neural Information Processing Systems 20, 329-336 (2008).
[19] Huys, Q., Zemel, R., Natarajan, R. & Dayan, P. Fast population coding. Neural Computation 19, 404-441 (2007).
[20] Pillow, J. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995-999 (2008).
3,137 | 3,842 | Compositionality of optimal control laws
Emanuel Todorov
Applied Mathematics and Computer Science & Engineering
University of Washington
[email protected]
Abstract
We present a theory of compositionality in stochastic optimal control, showing
how task-optimal controllers can be constructed from certain primitives. The
primitives are themselves feedback controllers pursuing their own agendas. They
are mixed in proportion to how much progress they are making towards their agendas and how compatible their agendas are with the present task. The resulting
composite control law is provably optimal when the problem belongs to a certain
class. This class is rather general and yet has a number of unique properties ? one
of which is that the Bellman equation can be made linear even for non-linear or
discrete dynamics. This gives rise to the compositionality developed here. In the
special case of linear dynamics and Gaussian noise our framework yields analytical solutions (i.e. non-linear mixtures of LQG controllers) without requiring the
final cost to be quadratic. More generally, a natural set of control primitives can
be constructed by applying SVD to Green?s function of the Bellman equation. We
illustrate the theory in the context of human arm movements. The ideas of optimality and compositionality are both very prominent in the field of motor control,
yet they have been difficult to reconcile. Our work makes this possible.
1 Introduction
Stochastic optimal control is of interest in many fields of science and engineering, however it remains hard to solve. Dynamic programming [1] and reinforcement learning [2] work well in discrete state spaces of reasonable size, but cannot handle continuous high-dimensional state spaces
characteristic of complex dynamical systems. A variety of function approximation methods are
available [3, 4], yet the shortage of convincing results on challenging problems suggests that existing approximation methods do not scale as well as one would like. Thus there is need for more
efficient methods. The idea we pursue in this paper is compositionality. With few exceptions [5, 6]
this good-in-general idea is rarely used in optimal control, because it is unclear what/how can be
composed in a way that guarantees optimality of the resulting control law.
Our second motivation is understanding how the brain controls movement. Since the brain remains
pretty much the only system capable of solving truly complex control problems, sensorimotor neuroscience is a natural (albeit under-exploited) source of inspiration. To be sure, a satisfactory understanding of the neural control of movement is nowhere in sight. Yet there exist theoretical ideas
backed by experimental data which shed light on the underlying computational principles. One such
idea is that biological movements are near-optimal [7, 8]. This is not surprising given that motor
behavior is shaped by the processes of evolution, development, learning and adaptation, all of which
resemble iterative optimization. Precisely what algorithms enable the brain to approach optimal performance is not known, however a clue is provided by another prominent idea: compositionality. For
about a century, researchers have been talking about motor synergies or primitives which somehow
simplify control [9?11]. The implied reduction in dimensionality is now well documented [12?14].
However the structure and origin of the hypothetical primitives, the rules for combining them, and
the ways in which they actually simplify the control problem remain unclear.
2 Stochastic optimal control problems with linear Bellman equations
We will be able to derive compositionality rules for first-exit and finite-horizon stochastic optimal
control problems which belong to a certain class. This class includes both discrete-time [15-17]
and continuous-time [17-19] formulations, and is rather general, yet affords substantial simplification. Most notably the optimal control law is found analytically given the optimal cost-to-go,
which in turn is the solution to a linear equation obtained from the Bellman equation by exponentiation. Linearity implies compositionality as will be shown here. It also makes a number of other
things possible: finding the most likely trajectories of optimally-controlled stochastic systems via
deterministic methods; solving inverse optimal control problems via convex optimization; applying
off-policy learning in the state space as opposed to the state-action space; establishing duality between stochastic optimal control and Bayesian estimation. An overview can be found in [17]. Here
we only provide the background needed for the present paper.
The discrete-time problem is defined by a state cost $q(x) \ge 0$ describing how (un)desirable different states are, and passive dynamics $x' \sim p(\cdot|x)$ characterizing the behavior of the system in the absence of controls. The controller can impose any dynamics $x' \sim u(\cdot|x)$ it wishes, however it pays a price (control cost) which is the KL divergence between u and p. We further require that $u(x'|x) = 0$ whenever $p(x'|x) = 0$ so that the KL divergence is well-defined. Thus the discrete-time problem is

dynamics: $x' \sim u(\cdot|x)$
cost rate: $\ell(x, u(\cdot|x)) = q(x) + \mathrm{KL}\left(u(\cdot|x)\,\|\,p(\cdot|x)\right)$
Let I denote the set of interior states and B the set of boundary states, and let $f(x) \ge 0$, $x \in B$, be a final cost. Let $v(x)$ denote the optimal cost-to-go, and define the desirability function
$$z(x) = \exp(-v(x))$$
Let G denote the linear operator which computes expectation under the passive dynamics:
$$G[z](x) = E_{x' \sim p(\cdot|x)}\, z(x')$$
For $x \in I$ it can be shown that the optimal control law $u^*(\cdot|x)$ and the desirability $z(x)$ satisfy

optimal control law: $u^*(x'|x) = \dfrac{p(x'|x)\, z(x')}{G[z](x)}$
linear Bellman equation: $\exp(q(x))\, z(x) = G[z](x)$   (1)
On the boundary $x \in B$ we have $z(x) = \exp(-f(x))$. The linear Bellman equation can be written more explicitly in vector-matrix notation as
$$z_I = M z_I + N z_B \tag{2}$$
where $M = \mathrm{diag}(\exp(-q_I))\, P_{II}$ and $N = \mathrm{diag}(\exp(-q_I))\, P_{IB}$. The matrix M is guaranteed to have spectral radius less than 1, thus the simple iterative solver $z_I \leftarrow M z_I + N z_B$ converges.
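As a concrete illustration, a minimal dense-matrix solver for Eq. (2), together with the optimal law of Eq. (1), can be written as follows; the function names are illustrative, and a large problem would use sparse matrices instead.

```python
import numpy as np

def solve_linear_bellman(P_II, P_IB, q_I, f_B, tol=1e-10, max_iter=10000):
    """Iterate z_I <- M z_I + N z_B until convergence (Eq. 2)."""
    M = np.exp(-q_I)[:, None] * P_II      # diag(exp(-q_I)) P_II
    N = np.exp(-q_I)[:, None] * P_IB      # diag(exp(-q_I)) P_IB
    z_B = np.exp(-f_B)                    # boundary condition z = exp(-f)
    z_I = np.ones(len(q_I))
    for _ in range(max_iter):
        z_new = M @ z_I + N @ z_B
        if np.max(np.abs(z_new - z_I)) < tol:
            break
        z_I = z_new
    return z_I, z_B

def optimal_law(P_II, P_IB, z_I, z_B):
    """u*(x'|x) = p(x'|x) z(x') / G[z](x), one row per interior state (Eq. 1)."""
    P = np.hstack([P_II, P_IB])
    z = np.concatenate([z_I, z_B])
    return P * z[None, :] / (P @ z)[:, None]
```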
The continuous-time problem is a control-affine Ito diffusion with control-quadratic cost:

dynamics: $dx = a(x)\,dt + B(x)(u\,dt + \sigma\,d\omega)$
cost rate: $\ell(x, u) = q(x) + \dfrac{1}{2\sigma^2}\|u\|^2$

The control u is now a (more traditional) vector and $\omega$ is a Brownian motion process. Note that the control cost scaling by $\sigma^{-2}$, which is needed to make the math work, can be compensated by rescaling q. The optimal control law $u^*(x)$ and desirability $z(x)$ satisfy

optimal control law: $u^*(x) = \sigma^2 B(x)^T\, \dfrac{z_x(x)}{z(x)}$
linear HJB equation: $q(x)\, z(x) = L[z](x)$   (3)

where the 2nd-order linear differential operator L is defined as
$$L[z](x) = a(x)^T z_x(x) + \frac{\sigma^2}{2}\,\mathrm{tr}\!\left(B(x)\, B(x)^T z_{xx}(x)\right)$$
The relationship between the two formulations above is not obvious, but nevertheless it can be
shown that the continuous-time formulation is a special case of the discrete-time formulation. This
is done by defining the passive dynamics $p^{(h)}(\cdot|x)$ as the h-step transition probability density of the uncontrolled diffusion (or an Euler approximation to it), and the state cost as $q^{(h)}(x) = h\,q(x)$. Then, in the limit $h \to 0$, the integral equation $\exp(q^{(h)})\,z = G^{(h)}[z]$ reduces to the differential equation $qz = L[z]$. Note that for small h the density $p^{(h)}(\cdot|x)$ is close to Gaussian. From the formula for KL divergence between Gaussians, the KL control cost in the discrete-time formulation reduces to the quadratic control cost in the continuous-time formulation.
The reason for working with both formulations and emphasizing the relationship between them is
that most problems of practical interest are continuous in time and space, yet the discrete-time formulation is easier to work with. Furthermore it leads to better numerical stability because integral
equations are better behaved than differential equations. Note also that the discrete-time formulation can be used in both discrete and continuous state spaces, although the latter require function
approximation in order to solve the linear Bellman equation [20].
3 Compositionality theory
The compositionality developed in this section follows from the linearity of equations (1, 3). We focus on first-exit problems which are more general. An example involving a finite-horizon problem will be given later. Consider a collection of K optimal control problems in our class which all have the same dynamics ($p(\cdot|x)$ in discrete time, or $a(x)$, $B(x)$, $\sigma$ in continuous time), the same state cost rate $q(x)$, and the same sets I and B of interior and boundary states. These problems differ only in their final costs $f_k(x)$. Let $z_k(x)$ denote the desirability function for problem k, and $u^*_k(\cdot|x)$ or $u^*_k(x)$ the corresponding optimal control law. The latter will serve as primitives for constructing optimal control laws for new problems in our class. We will call the K problems we started with component and the new problem composite.
Suppose the final cost for the composite problem is $f(x)$, and there exist weights $w_k$ such that
$$f(x) = -\log\left(\sum\nolimits_{k=1}^{K} w_k \exp(-f_k(x))\right) \tag{4}$$
Thus the functions $f_k(x)$ define a K-dimensional manifold of composite problems. The above condition ensures that for all boundary/terminal states $x \in B$ we have
$$z(x) = \sum\nolimits_{k=1}^{K} w_k\, z_k(x) \tag{5}$$
Since z is the solution to a linear equation, if (5) holds on the boundary then it must hold everywhere. Thus the desirability function for the composite problem is a linear combination of the desirability functions for the component problems. The weights in this linear combination can be interpreted as compatibilities between the control objectives in the component problems and the control objective in the composite problem. The optimal control law for the composite problem is given by (1, 3).
The above construction implies that both z and $z_k$ are everywhere positive. Since z is defined as an exponent, it must be positive. However this is not necessary for the components. Indeed if
$$f(x) = -\log\left(\sum\nolimits_{k=1}^{K} w_k\, z_k(x)\right) \tag{6}$$
holds for all $x \in B$, then (5) and $z(x) > 0$ hold everywhere even if $z_k(x) \le 0$ for some k and x. In this case the $z_k$'s are no longer desirability functions for well-defined optimal control problems. Nevertheless we can think of them as generalized desirability functions with similar meaning: the larger $z_k(x)$ is, the more compatible state x is with the agenda of component k.
3.1 Compositionality of discrete-time control laws
When $z_k(x) > 0$ the composite control law $u^*$ can be expressed as a state-dependent convex combination of the component control laws $u^*_k$. Combining (5, 1) and using the linearity of G,
$$u^*(x'|x) = \sum_k \frac{w_k\, G[z_k](x)}{\sum_s w_s\, G[z_s](x)} \cdot \frac{p(x'|x)\, z_k(x')}{G[z_k](x)}$$
The second term above is $u^*_k$. The first term is a state-dependent mixture weight which we denote $m_k(x)$. The composition rule for optimal control laws is then
$$u^*(\cdot|x) = \sum\nolimits_k m_k(x)\, u^*_k(\cdot|x) \tag{7}$$
Using the fact that $z_k(x)$ satisfies the linear Bellman equation (1) and $q(x)$ does not depend on k, the mixture weights can be simplified as
$$m_k(x) = \frac{w_k\, G[z_k](x)}{\sum_s w_s\, G[z_s](x)} = \frac{w_k\, z_k(x)}{\sum_s w_s\, z_s(x)} \tag{8}$$
Note that $\sum_k m_k(x) = 1$ and $m_k(x) > 0$.
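Given component desirabilities $z_k$ and component laws $u^*_k$ (e.g. computed with the solver sketched in Section 2), the composite law of Eqs. (7, 8) amounts to a weighted average; the helper below is our own illustration.

```python
import numpy as np

def composite_law(z_list, policies, w):
    """Mix component policies: u* = sum_k m_k u*_k with m_k from Eq. (8).

    z_list: K desirability vectors over interior states;
    policies: K row-stochastic matrices u*_k(x'|x), one row per interior state.
    """
    Z = np.stack(z_list)                          # K x n_interior
    m = np.asarray(w)[:, None] * Z                # unnormalised weights w_k z_k
    m /= m.sum(axis=0, keepdims=True)             # columns sum to one
    return sum(m[k][:, None] * policies[k] for k in range(len(policies)))
```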
3.2 Compositionality of continuous-time control laws
Substituting (5) in (3) and assuming $z_k(x) > 0$, the control law given by (3) can be written as
$$u^*(x) = \sum_k \frac{w_k\, z_k(x)}{\sum_s w_s\, z_s(x)} \left[\sigma^2 B(x)^T\, \frac{\partial z_k(x)/\partial x}{z_k(x)}\right]$$
The term in brackets is $u^*_k(x)$. We denote the first term with $m_k(x)$ as before:
$$m_k(x) = \frac{w_k\, z_k(x)}{\sum_s w_s\, z_s(x)}$$
Then the composite optimal control law is
$$u^*(x) = \sum\nolimits_k m_k(x)\, u^*_k(x) \tag{9}$$
Note the similarity between the discrete-time result (7) and the continuous-time result (9), as well as
the fact that the mixing weights are computed in the same way. This is surprising given that in one
case the control law directly specifies the probability distribution over next states, while in the other
case the control law shifts the mean of the distribution given by the passive dynamics.
4 Analytical solutions to linear-Gaussian problems with non-quadratic costs
Here we specialize the above results to the case when the components are continuous-time linear quadratic Gaussian (LQG) problems of the form

dynamics: $dx = Ax\,dt + B(u\,dt + \sigma\,d\omega)$
cost rate: $\ell(x, u) = \dfrac{1}{2}x^T Q x + \dfrac{1}{2\sigma^2}\|u\|^2$

The component final costs are quadratic:
$$f_k(x) = \frac{1}{2}x^T F_k x$$
The optimal cost-to-go function for LQG problems is known to be quadratic [21] in the form
$$v_k(x, t) = \frac{1}{2}x^T V_k(t)\,x + \alpha_k(t)$$
At the predefined final time T we have $V_k(T) = F_k$ and $\alpha_k(T) = 0$. The optimal control law is
$$u^*_k(x, t) = -\sigma^2 B^T V_k(t)\,x$$
The quantities $V_k(t)$ and $\alpha_k(t)$ can be computed by integrating backward in time the ODEs
$$-\dot V_k = Q + A^T V_k + V_k A - V_k \Sigma V_k, \qquad -\dot\alpha_k = \frac{1}{2}\mathrm{tr}(\Sigma V_k) \tag{10}$$
where $\Sigma = \sigma^2 B B^T$. Now consider a composite problem with final cost
$$f(x) = -\log\left(\sum\nolimits_k w_k \exp\left(-\tfrac{1}{2}x^T F_k x\right)\right)$$
Figure 1: Illustration of compositionality in the LQG framework. (A) An LQG problem with
quadratic cost-to-go and linear feedback control law. T = 10 is the final time. (B, C) Non-LQG
problems solved analytically by mixing the solutions to multiple LQG problems.
This composite problem is no longer LQG because it has non-quadratic final cost (i.e. log of mixture of Gaussians), and yet we will be able to find a closed-form solution by combining multiple
LQG controllers. Note that, since mixtures of Gaussians are universal function approximators, we
can represent any desired final cost to within arbitrary accuracy given enough LQG components.
Applying the results from the previous section, the desirability for the composite problem is
$$z(x, t) = \sum\nolimits_k w_k \exp\left(-\tfrac{1}{2}x^T V_k(t)\,x - \alpha_k(t)\right)$$
The optimal control law can now be obtained directly from (3), or via composition from (9). Note that the constants $\alpha_k(t)$ do not affect the component control laws (and indeed are rarely computed in the LQG framework), however they affect the composite control law through the mixing weights.
We illustrate the above construction on a scalar example with integrator dynamics $dx = u\,dt + 0.2\,d\omega$. The state cost rate is $q(x) = 0$. We set $w_k = 1$ for all k. The final time is T = 10. The component final costs are of the form
$$f_k(x) = \frac{d_k}{2}(x - c_k)^2$$
In order to center these quadratics at $c_k$ rather than 0 we augment the state: $\bar x = [x;\, 1]$. The matrices defining the problem are then
$$A = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad F_k = d_k \begin{bmatrix} 1 & -c_k \\ -c_k & c_k^2 \end{bmatrix}$$
The ODEs (10) are integrated using ode45 in Matlab. Fig 1 shows the optimal cost-to-go functions $v(x, t) = -\log(z(x, t))$ and the optimal control laws $u^*(x, t)$ for the following problems: {c = 0; d = 5}, {c = -1, 0, 1; d = 5, 0.1, 15}, and {c = -1.5 : 0.5 : 1.5; d = 5}. The first problem (Fig 1A) is just an LQG. As expected the cost-to-go is quadratic and the control law is linear with time-varying gain. The second problem (Fig 1B) has a multimodal cost-to-go. The control law is no longer linear but instead has an elaborate shape. The third problem (Fig 1C) resembles robust control in the sense that there is a flat region where all states are equally good. The corresponding control law uses feedback to push the state into this flat region. Inside the region the controller does nothing, so as to save energy. As these examples illustrate, the methodology developed here significantly extends the LQG framework while preserving its tractability.
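The same computation is straightforward to reproduce in Python, using scipy's solve_ivp in place of ode45. Everything below follows the definitions above; the helper names are ours, and the printed value is just a sanity check.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 0.2
A = np.zeros((2, 2))                        # augmented state x = [x; 1]
B = np.array([[1.0], [0.0]])
Q = np.zeros((2, 2))                        # q(x) = 0
Sig = sigma ** 2 * (B @ B.T)                # Sigma = sigma^2 B B^T

def F(c, d):                                # final cost (d/2)(x - c)^2, augmented
    return d * np.array([[1.0, -c], [-c, c ** 2]])

def lqg_component(c, d, T=10.0):
    """Integrate Eq. (10) backward from V(T) = F_k, alpha(T) = 0."""
    def rhs(t, y):
        V = y[:4].reshape(2, 2)
        dV = -(Q + A.T @ V + V @ A - V @ Sig @ V)   # V-dot in forward time
        da = -0.5 * np.trace(Sig @ V)               # alpha-dot in forward time
        return np.concatenate([dV.ravel(), [da]])
    y0 = np.concatenate([F(c, d).ravel(), [0.0]])
    return solve_ivp(rhs, [T, 0.0], y0, dense_output=True)

def composite_u(x, t, comps, w):
    """Composite law u*(x,t) of Eq. (9) for a mixture of LQG components."""
    xa = np.array([x, 1.0])
    logz, u = [], []
    for sol in comps:
        y = sol.sol(t)
        V = y[:4].reshape(2, 2)
        logz.append(-0.5 * xa @ V @ xa - y[4])      # log z_k = -v_k
        u.append((-sigma ** 2 * (B.T @ V @ xa)).item())
    logz = np.asarray(logz)
    m = np.asarray(w) * np.exp(logz - logz.max())   # mixing weights m_k
    return float(m @ np.asarray(u) / m.sum())

comps = [lqg_component(c, d) for c, d in [(-1, 5), (0, 0.1), (1, 15)]]
print(composite_u(0.3, 5.0, comps, w=[1.0, 1.0, 1.0]))
```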
5 Constructing minimal sets of primitives via SVD of Green?s function
We showed how composite problems can be solved once the solutions to the component problems
are available. The choice of component boundary conditions defines the manifold (6) of problems
that can be solved exactly. One can use any available set of solutions as components, but is there a
set which is in some sense minimal? Here we offer an answer based on singular value decomposition
(SVD). We focus on discrete state spaces; continuous spaces can be discretized following [22].
Recall that the vector of desirability values $z(x)$ at interior states $x \in I$, which we denoted $z_I$, satisfies the linear equation (2). We can write the solution to that equation explicitly as
$$z_I = G\, z_B$$
where $G = (\mathrm{diag}(\exp(q_I)) - P_{II})^{-1} P_{IB}$. The matrix G maps values on the boundary to values on the interior, and thus resembles Green's function for linear PDEs. A minimal set of primitives corresponds to the best low-rank approximation to G. If we define "best" in terms of least squares, a minimal set of R primitives is obtained by approximating G using the top R singular values:
$$G \approx U S V^T$$
S is an R-by-R diagonal matrix, U and V are |I|-by-R and |B|-by-R orthonormal matrices. If we now set $z_B = V_{\cdot r}$, which is the r-th column of V, then
$$z_I = G\, z_B \approx U S V^T V_{\cdot r} = S_{rr}\, U_{\cdot r}$$
Thus the right singular vectors (columns of V) are the component boundary conditions, while the left singular vectors (columns of U) are the component solutions.
The above construction does not use knowledge of the family of composite problems we aim to solve/approximate. A slight modification makes it possible to incorporate such knowledge. Let the family in question have parametric final costs $f(x, \theta)$. Choose a discrete set $\{\theta_k\}_{k=1\cdots K}$ of values of the parameter $\theta$, and form the |B|-by-K matrix $\Phi$ with elements $\Phi_{ik} = \exp(-f(x_i, \theta_k))$, $x_i \in B$. As in (4), this choice restricts the boundary conditions that can be represented to $z_B = \Phi w$, where w is a K-dimensional vector. Now apply SVD to obtain a rank-R approximation to the matrix $G\Phi$ instead of G. We can set $R \ll K$ to achieve significant reduction in the number of components. Note that $G\Phi$ is smaller than G so the SVD here is faster to compute.
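A dense-algebra sketch of both constructions follows (pass Phi=None for the first method); the naming is our own.

```python
import numpy as np

def svd_primitives(P_II, P_IB, q_I, R, Phi=None):
    """Top-R primitives from Green's function G (optionally from G @ Phi).

    Returns the interior solutions (columns S_rr * U_:r) and the matching
    boundary conditions (columns of V).
    """
    G = np.linalg.solve(np.diag(np.exp(q_I)) - P_II, P_IB)
    if Phi is not None:                 # second method: restrict to z_B = Phi w
        G = G @ Phi
    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :R] * S[:R], Vt[:R].T
```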
We illustrate the above approach using a discretization of the following 2D problem:
$$a(x) = \begin{bmatrix} -0.2\, x_2 \\ 0.2\, |x_1| \end{bmatrix}, \qquad B = I, \quad \sigma = 1, \quad q(x) = 0.1$$
The vector field in Fig 2A illustrates the function $a(x)$. To make the problem more interesting we introduce an L-shaped obstacle which can be hit without penalty but cannot be penetrated. The domain is a disk centered at (0, 0) with radius $\sqrt{21}$. The constant q implements a penalty for the time spent inside the disk. The discretization involves |I| = 24520 interior states and |B| = 4163 boundary states. The parametric family of final costs is
$$f(x, \theta) = 13 - 13 \exp(5 \cos(\mathrm{atan2}(x_2, x_1) - \theta) - 5)$$
This is an inverted von Mises function specifying the desired location where the state should exit the disk. $f(x, 0)$ is plotted in red in Fig 2A. The set $\{\theta_k\}$ includes 200 uniformly spaced values of $\theta$. The SVD components are constructed using the second method above (although the first method gives very similar results). Fig 2B compares the solution obtained with a direct solver (i.e. using the exact G) for $\theta = 0$, and the solutions obtained using R = 70 and R = 40 components. The desirability function z is well approximated in both cases. In fact the approximation to z looks perfect with much fewer components (not shown). However $v = -\log(z)$ is more difficult to approximate. The difficulty comes from the fact that the components are not always positive, and
in white in Fig 2B. In those regions the approximation is undefined. Note that this occurs only
near the boundary. Fig 2C shows the first 10 components. They resemble harmonic functions.
It is notable that the higher-order components (corresponding to smaller singular values) are only
modulated near the boundary, which explains why the approximation errors in Fig 2B are near the
boundary. In summary, a small number of components are sufficient to construct composite control
laws which are near-optimal in most of the state space. Accuracy at the boundary requires additional
components. Alternatively one could use positive SVD and obtain not just positive but also more
localized components (as we have done in preliminary work).
Figure 2: Illustration of primitives obtained via SVD. (A) Passive dynamics and cost. (B) Solutions
obtained with a direct solver and with different numbers of primitives. (C) Top ten primitives $z_k(x)$.
[Figure 3 graphic: panels A and C show hand paths; panel B shows speed (cm/sec) versus time (sec).]
Figure 3: Preliminary model of arm movements. (A) Hand paths of different lengths. Red dots
denote start points, black circles denote end points. (B) Speed profiles for the movements shown
in (A). Note that the same controller generates movements of different duration. (C) Hand paths
generated by a composite controller obtained by mixing the optimal controllers for two targets. This
controller "decides" online which target to go to.
6 Application to arm movements
We are currently working on an optimal control model of arm movements based on compositionality.
The dynamics correspond to a 2-link arm moving in the horizontal plane, and have the form
$$\tau = M(\theta)\,\ddot\theta + n(\theta, \dot\theta)$$
$\theta$ contains the shoulder and elbow joint angles, $\tau$ is the applied torque, M is the configuration-dependent inertia, and n is the vector of Coriolis, centripetal and viscous forces. Model parameters are taken from the biomechanics literature. The final cost f is a quadratic (in Cartesian space) centered at the target. The running state cost is q = const, encoding a penalty for duration. The above model has a 4-dimensional state space $(\theta, \dot\theta)$. In order to encode reaching movements, we introduce
an additional state variable s which keeps track of how long the hand speed (in Cartesian space)
has remained below a threshold. When s becomes sufficiently large the movement ends. This augmentation is needed in order to express reaching movements as a first-exit problem. Without it the
movement would stop whenever the instantaneous speed becomes zero ? which can happen at reversal points as well as the starting point. Note that most models of reaching movements have assumed
predefined final time. However this is unrealistic because we know that movement duration scales
with distance, and furthermore such scaling takes place online (i.e. movement duration increases if
the target is perturbed during the movement).
The above second-order system is expressed in general first-order form, and then the passive dynamics corresponding to $\tau = 0$ are discretized in space and time. The time step is h = 0.02 sec. The space discretization uses a grid with $51^4 \times 3$ points. The factor of 3 is needed to discretize the variable s. Thus we have around 20 million discrete states, and the matrix P characterizing the passive dynamics is 20 million-by-20 million. Fortunately it is very sparse because the noise (in
torque space) cannot have a large effect within a single time step: there are about 50 non-zero entries
in each row. Our simple iterative solver converges in about 30 iterations and takes less than 2 min
of CPU time, using custom multi-threaded C++ code.
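At this scale the iteration of Section 2 is only practical with sparse matrices; the schematic below (single-threaded, plain scipy) stands in for the custom multi-threaded C++ code.

```python
import numpy as np
import scipy.sparse as sp

def solve_sparse(P, q, f_B, interior, boundary, iters=30):
    """Sparse z_I <- M z_I + N z_B for large discretized problems.

    P: sparse passive-dynamics transition matrix over all states;
    interior/boundary: integer index arrays into the state set.
    """
    D = sp.diags(np.exp(-q[interior]))
    M = D @ P[interior, :][:, interior]
    N = D @ P[interior, :][:, boundary]
    z_B = np.exp(-f_B)
    z_I = np.ones(len(interior), dtype=float)
    for _ in range(iters):
        z_I = M @ z_I + N @ z_B
    return z_I
```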
Fig 3A shows hand paths from different starting points to the same target. The speed profiles for
these movements are shown in Fig 3B. The scaling with amplitude looks quite realistic. In particular, it is known that human reaching movements of different amplitude have similar speed profiles
around movement onset, and diverge later. Fig 3C shows results for a composite controller obtained by mixing the optimal control laws for two different targets. In this example the targets are
sufficiently far away and the final costs are sufficiently steep, thus the mixing yields a switching controller instead of an interpolating controller. Depending on the starting point, this controller takes
the hand to one or the other target, and can also switch online if the hand is perturbed. An interpolating controller can be created by placing the targets closer or making the component final costs
less steep. While these results are preliminary we find them encouraging. In future work we will
explore this model in more detail and also build a more realistic model using 3rd-order dynamics
(incorporating muscle time constants). We do not expect to be able to discretize the latter system,
but we are in the process of making a transition from discretization to function approximation [20].
7 Summary and relation to prior work
We developed a theory of compositionality applicable to a general class of stochastic optimal control
problems. Although in this paper we used simple examples, the potential of such compositionality
to tackle complex control problems seems clear.
Our work is somewhat related to proto value functions (PVFs), which are eigenfunctions of the Laplacian [5], i.e. the matrix $I - P_{II}$. While the motivation is similar, PVFs are based on intuitions
(mostly from grid worlds divided into rooms) rather than mathematical results regarding optimality
of the composite solution. In fact our work suggests that PVFs should perhaps be used to approximate the exponent of the value function instead of the value function itself. Another difference is
that PVFs do not take into account the cost rate q and the boundary B. This sounds like a good
thing but it may be too good, in the sense that such generality may be the reason why guarantees
regarding PVF optimality are lacking. Nevertheless the ambitious agenda behind PVFs is certainly
worth pursuing, and it will be interesting to compare the two approaches in more detail.
Finally, another group [6] has developed similar ideas independently and in parallel. Although
their paper is restricted to combination of LQG controllers for finite-horizon problems, it contains
very interesting examples from complex tasks such as walking, jumping and diving. A particularly
important point made by [6] is that the primitives can be only approximately optimal (in this case
obtained via local LQG approximations), and yet their combination still produces good results.
References
[1] D. Bertsekas, Dynamic Programming and Optimal Control (2nd Ed). Belmont, MA: Athena Scientific, 2001.
[2] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. MIT Press, Cambridge MA, 1998.
[3] D. Bertsekas and J. Tsitsiklis, Neuro-dynamic Programming. Belmont, MA: Athena Scientific, 1997.
[4] J. Si, A. Barto, W. Powell, and D. Wunsch, Handbook of Learning and Approximate Dynamic Programming. Wiley-IEEE Press, 2004.
[5] S. Mahadevan and M. Maggioni, "Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes," Journal of Machine Learning Research, vol. 8, pp. 2169-2231, 2007.
[6] M. daSilva, F. Durand, and J. Popovic, "Linear Bellman combination for control of character animation," To appear in SIGGRAPH, 2009.
[7] E. Todorov, "Optimality principles in sensorimotor control," Nature Neuroscience, vol. 7, no. 9, pp. 907-915, 2004.
[8] C. Harris and D. Wolpert, "Signal-dependent noise determines motor planning," Nature, vol. 394, pp. 780-784, 1998.
[9] C. Sherrington, The Integrative Action of the Nervous System. New Haven: Yale University Press, 1906.
[10] N. Bernstein, On the Construction of Movements. Moscow: Medgiz, 1947.
[11] M. Latash, "On the evolution of the notion of synergy," in Motor Control, Today and Tomorrow, G. Gantchev, S. Mori, and J. Massion, Eds. Sofia: Academic Publishing House "Prof. M. Drinov", 1999, pp. 181-196.
[12] M. Tresch, P. Saltiel, and E. Bizzi, "The construction of movement by the spinal cord," Nature Neuroscience, vol. 2, no. 2, pp. 162-167, 1999.
[13] A. D'Avella, P. Saltiel, and E. Bizzi, "Combinations of muscle synergies in the construction of a natural motor behavior," Nat. Neurosci., vol. 6, no. 3, pp. 300-308, 2003.
[14] M. Santello, M. Flanders, and J. Soechting, "Postural hand synergies for tool use," J Neurosci, vol. 18, no. 23, pp. 10105-15, 1998.
[15] E. Todorov, "Linearly-solvable Markov decision problems," Advances in Neural Information Processing Systems, 2006.
[16] ——, "General duality between optimal control and estimation," IEEE Conference on Decision and Control, 2008.
[17] ——, "Efficient computation of optimal actions," PNAS, in press, 2009.
[18] S. Mitter and N. Newton, "A variational approach to nonlinear estimation," SIAM J Control Opt, vol. 42, pp. 1813-1833, 2003.
[19] H. Kappen, "Linear theory for control of nonlinear stochastic systems," Physical Review Letters, vol. 95, 2005.
[20] E. Todorov, "Eigen-function approximation methods for linearly-solvable optimal control problems," IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2009.
[21] R. Stengel, Optimal Control and Estimation. New York: Dover, 1994.
[22] H. Kushner and P. Dupuis, Numerical Methods for Stochastic Optimal Control Problems in Continuous Time. New York: Springer, 2001.
3,138 | 3,843 | Adaptive Regularization for
Transductive Support Vector Machine
Zenglin Xu
Cluster MMCI
Saarland Univ. & MPI INF
Saarbrucken, Germany
zlxu@mpi-inf.mpg.de

Rong Jin
Computer Sci. & Eng.
Michigan State Univ.
East Lansing, MI, U.S.
rongjin@cse.msu.edu

Irwin King
Michael R. Lyu
Computer Science & Engineering
The Chinese Univ. of Hong Kong
Shatin, N.T., Hong Kong
{king,lyu}@cse.cuhk.edu.hk

Jianke Zhu
Computer Vision Lab
ETH Zurich
Zurich, Switzerland
jianke.zhu@vision.ee.ethz.ch

Zhirong Yang
Information & Computer Science
Helsinki Univ. of Technology
Espoo, Finland
zhirong.yang@tkk.fi
Abstract
We discuss the framework of Transductive Support Vector Machine
(TSVM) from the perspective of the regularization strength induced by
the unlabeled data. In this framework, SVM and TSVM can be regarded
as a learning machine without regularization and one with full regularization from the unlabeled data, respectively. Therefore, to supplement
this framework of the regularization strength, it is necessary to introduce
data-dependent partial regularization. To this end, we reformulate TSVM
into a form with controllable regularization strength, which includes SVM
and TSVM as special cases. Furthermore, we introduce a method of adaptive regularization that is data-dependent and is based on the smoothness assumption. Experiments on a set of benchmark data sets indicate
the promising results of the proposed work compared with state-of-the-art
TSVM algorithms.
1 Introduction
Semi-supervised learning has attracted a lot of research focus in recent years. Most of
the existing approaches can be roughly divided into two categories: (1) the clustering-based
methods [12, 4, 8, 17] assume that most of the data, including both the labeled ones and the
unlabeled ones, should be far away from the decision boundary of the target classes; (2) the
manifold-based methods make the assumption that most of the data lie on a low-dimensional
manifold in the input space, which include Label Propagation [21], Graph Cuts [2], Spectral
Kernels [9, 22], Spectral Graph Transducer [11], and Manifold Regularization [1]. The
comprehensive study on semi-supervised learning techniques can be found in the recent
surveys [23, 3].
Although semi-supervised learning has achieved success in many real-world applications, there still remain two major unsolved challenges. One is whether the unlabeled data can help the
classification, and the other is what is the relation between the clustering assumption and
the manifold assumption.
As for the first challenge, Singh et al. [16] provided a finite sample analysis on the usefulness
of unlabeled data based on the cluster assumption. They show that unlabeled data may
be useful for improving the error bounds of supervised learning methods when the margin
between different classes satisfies some conditions. However, in real-world problems, it is hard to identify the conditions under which unlabeled data can help.
On the other hand, it is interesting to explore the relation between the low density assumption and the manifold assumption. Narayanan et al. [14] implied that the cut-size of the
graph partition converges to the weighted volume of the boundary which separates the two
regions of the domain for a fixed partition. This is a step forward in exploring the
connection between graph-based partitioning and the idea surrounding the low density assumption. Unfortunately, this approach cannot be generalized uniformly over all partitions.
Lafferty and Wasserman [13] revisited the assumptions of semi-supervised learning from the
perspective of minimax theory, and suggested that the manifold assumption is stronger than
the smoothness assumption for regression. To date, the underlying relationship between the cluster assumption and the manifold assumption has not been established. Specifically, it is unclear in what kind of situation the clustering assumption or the manifold assumption should be adopted.
In this paper, we address these current limitations by a unified solution from the perspective
of the regularization strength of the unlabeled data. Taking Transductive Support Vector
Machine (TSVM) as an example, we suggest a framework that introduces the regularization
strength of the unlabeled data when estimating the decision boundary. Therefore, we can
obtain a spectrum of models by varying the regularization strength of unlabeled data which
corresponds to changing the models from supervised SVM to Transductive SVM. To select
the optimal model under the proposed framework, we employ the manifold regularization
assumption that enables the prediction function to be smooth over the data space. Further,
the optimal function is a linear combination of supervised models, weakly semi-supervised
models, and semi-supervised models. Additionally, it provides an effective approach towards
combining the cluster assumption and the manifold assumption in semi-supervised learning.
The rest of this paper is organized as follows. In Section 2, we review the background of
Transductive SVM. In Section 3, we first present a framework of models with different regularization strength, followed by an integrating approach based on manifold regularization.
In Section 4, we report the experimental results on a series of benchmark data sets. Section
5 concludes the paper.
2 Related Work on TSVM
Before presenting the formulation of TSVM, we first describe the notation used in this paper. Let X = (x_1, ..., x_n) denote the entire data set, including both the labeled examples and the unlabeled ones. We assume that the first l examples within X are labeled and the next n - l examples are unlabeled. We denote the unknown labels by y_u = (y^u_{l+1}, ..., y^u_n).
TSVM [12] maximizes the margin in the presence of unlabeled data and keeps the boundary
traversing through low density regions while respecting labels in the input space. Under
the maximum-margin framework, TSVM aims to find the classification model with the
maximum classification margin for both labeled and unlabeled examples, which amounts to solving the following optimization problem:

\min_{w \in \mathbb{R}^n,\, y_u \in \mathbb{R}^{n-l},\, \xi \in \mathbb{R}^n} \; \frac{1}{2}\|w\|_K^2 + C \sum_{i=1}^{l} \xi_i + C^* \sum_{i=l+1}^{n} \xi_i \qquad (1)
\text{s.t. } y_i w^\top \phi(x_i) \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,
\qquad y_i^u w^\top \phi(x_i) \ge 1 - \xi_i, \; \xi_i \ge 0, \; l+1 \le i \le n,
where C and C^* are the trade-off parameters between the complexity of the function w and the margin errors. Moreover, the prediction function can be formulated as f(x) = w^\top \phi(x). Note that we remove the bias term in the above formulation; alternatively, it can be taken into account by introducing a constant element into the input pattern.
As in [19] and [20], we can rewrite (1) into the following optimization problem:
\min_{f, \xi} \; \frac{1}{2} f^\top K^{-1} f + C \sum_{i=1}^{l} \xi_i + C^* \sum_{i=l+1}^{n} \xi_i \qquad (2)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,
\qquad |f_i| \ge 1 - \xi_i, \; \xi_i \ge 0, \; l+1 \le i \le n.
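To make the reformulation concrete, the following is a minimal numerical sketch (not the authors' code) of evaluating the objective in (2) for a candidate prediction vector f; the names tsvm_objective, C and C_star are illustrative assumptions.

```python
import numpy as np

def tsvm_objective(f, K, y_l, C=1.0, C_star=1.0):
    l = len(y_l)                                  # number of labeled examples
    reg = 0.5 * f @ np.linalg.solve(K, f)         # (1/2) f^T K^{-1} f
    # hinge slack on labeled points: xi_i = max(0, 1 - y_i f_i)
    labeled = C * np.maximum(0.0, 1.0 - y_l * f[:l]).sum()
    # hinge slack on unlabeled points: xi_i = max(0, 1 - |f_i|)
    unlabeled = C_star * np.maximum(0.0, 1.0 - np.abs(f[l:])).sum()
    return reg + labeled + unlabeled

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1)) + 1e-6 * np.eye(8)
print(tsvm_objective(rng.normal(size=8), K, np.array([1.0, -1.0, 1.0])))
```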
The optimization problem underlying TSVM is a non-linear, non-convex optimization [6]. Over the past several years, researchers have devoted significant effort to solving this critical problem. A branch-and-bound method [5] was developed to search for the optimal solution, but it is limited to problems with a small number of examples due to its heavy computational cost. To apply TSVM to large-scale problems, Joachims [12] proposed a label-switching-retraining procedure to speed up the optimization. Later, the hinge loss in TSVM was replaced by a smooth loss function, and a gradient descent method was used to find the decision boundary in a region of low density [4]. In addition, there are some iterative methods, such as deterministic annealing [15], the concave-convex procedure (CCCP) [8], and convex relaxation methods [19, 18]. Despite the success of TSVM, the unlabeled data do not necessarily improve classification accuracy.
To better utilize the unlabeled data, unlike existing TSVM approaches, we propose a framework that tries to control the regularization strength of the unlabeled data. To do this, we
intend to learn the optimal regularization strength configuration from the combination of a
spectrum of models: supervised, weakly-supervised, and semi-supervised.
3 TSVM: A Regularization View
For the sake of illustration, we first study a model that does not penalize the classification errors of unlabeled data. Note that the penalization of the margin errors of unlabeled data can be included if needed. Therefore, we have the following form of TSVM, which can be derived through duality:
\min_{f, \xi} \; \frac{1}{2} f^\top K^{-1} f + C \sum_{i=1}^{l} \xi_i \qquad (3)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,
\qquad f_i^2 \ge 1, \; l+1 \le i \le n.
3.1 Full Regularization of Unlabeled Data
In order to adjust the strength of the regularization raised by the unlabeled examples, we introduce a coefficient λ ≥ 0 and modify the above problem (3) as below:

\min_{f, \xi} \; \frac{1}{2} f^\top K^{-1} f + C \sum_{i=1}^{l} \xi_i \qquad (4)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,
\qquad f_i^2 \ge \lambda, \; l+1 \le i \le n.
Obviously, this is the standard TSVM for λ = 1. In particular, the larger λ is, the stronger the regularization of the unlabeled data. It is also important to note that we only take into account the classification errors on the labeled examples in the above formulation. Namely, we only introduce a slack variable ξ_i for each labeled example.
Further, we write f = (f_l; f_u), where f_l = (f_1, ..., f_l) and f_u = (f_{l+1}, ..., f_n) represent the predictions for the labeled and the unlabeled examples, respectively. According to the inverse lemma for block matrices, we can write K^{-1} as follows:

K^{-1} = \begin{pmatrix} M_l^{-1} & -K_{l,l}^{-1} K_{l,u} M_u^{-1} \\ -M_u^{-1} K_{u,l} K_{l,l}^{-1} & M_u^{-1} \end{pmatrix},

where

M_l = K_{l,l} - K_{l,u} K_{u,u}^{-1} K_{u,l}, \qquad M_u = K_{u,u} - K_{u,l} K_{l,l}^{-1} K_{l,u}.

Thus, the term f^\top K^{-1} f is computed as

f^\top K^{-1} f = f_l^\top M_l^{-1} f_l + f_u^\top M_u^{-1} f_u - 2 f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u.
When the unlabeled data are loosely correlated with the labeled data, namely when most of the elements within K_{u,l} are small, this leads to M_u ≈ K_{u,u}. We refer to this case as "weakly unsupervised learning". Using the above equations, we rewrite TSVM as follows:
\min_{f_l, f_u, \xi} \; \frac{1}{2} f_l^\top M_l^{-1} f_l + C \sum_{i=1}^{l} \xi_i + \Omega(f_l, \lambda) \qquad (5)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,

where Ω(f_l, λ) is a regularization function for f_l, given as the result of the following optimization problem:

\min_{f_u} \; \frac{1}{2} f_u^\top M_u^{-1} f_u - f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u \qquad (6)
\text{s.t. } [f_i^u]^2 \ge \lambda, \; l+1 \le i \le n.
To understand the regularization function Ω(f_l, λ), we first compute the dual of problem (6) via the Lagrangian function:

\mathcal{L} = \frac{1}{2} f_u^\top M_u^{-1} f_u - f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u - \frac{1}{2} \sum_{i=1}^{n_u} \theta_i \left([f_i^u]^2 - \lambda\right)
 = \frac{1}{2} f_u^\top \left(M_u^{-1} - D(\theta)\right) f_u - f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} f_u + \frac{\lambda}{2} \theta^\top e,

where D(θ) = diag(θ_1, ..., θ_{n-l}) and e denotes a vector with all elements equal to one. As the derivatives vanish at optimality, we have

f_u = \left(M_u^{-1} - D(\theta)\right)^{-1} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l = \left(I - M_u D(\theta)\right)^{-1} K_{u,l} K_{l,l}^{-1} f_l,

where I is an identity matrix. Replacing f_u in (6) with the above equation, we have the following dual problem:

\max_{\theta} \; -\frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} \left(M_u - M_u D(\theta) M_u\right)^{-1} K_{u,l} K_{l,l}^{-1} f_l + \frac{\lambda}{2} \theta^\top e \qquad (7)
\text{s.t. } M_u^{-1} \succeq D(\theta), \; \theta_i \ge 0, \; i = 1, \ldots, n - l.
The above formulation allows us to understand how the parameter λ controls the strength of the regularization from the unlabeled data. In the following, we will show that a series of learning models can be derived by assigning various values to the coefficient λ.
3.2 No Regularization from Unlabeled Data
First, we study the case of λ = 0. We have the following theorem to illustrate the relationship between the dual problem (7) and the supervised SVM.

Theorem 1 When λ = 0, the optimization problem is reduced to the standard supervised SVM.
Proof 1 It is not difficult to see that the optimal solution to (7) is θ = 0. As a result, Ω(f_l, λ) becomes

\Omega(f_l, \lambda = 0) = -\frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l.

Substituting Ω(f_l, λ) in (5) with the formulation above, the overall optimization problem becomes

\min_{f_l, \xi} \; \frac{1}{2} f_l^\top \left(M_l^{-1} - K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1}\right) f_l + C \sum_{i=1}^{l} \xi_i
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l.

According to the matrix inverse lemma, we calculate M_l^{-1} as below:

M_l^{-1} = \left(K_{l,l} - K_{l,u} K_{u,u}^{-1} K_{u,l}\right)^{-1}
 = K_{l,l}^{-1} + K_{l,l}^{-1} K_{l,u} \left(K_{u,u} - K_{u,l} K_{l,l}^{-1} K_{l,u}\right)^{-1} K_{u,l} K_{l,l}^{-1}
 = K_{l,l}^{-1} + K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1}.

Finally, the optimization problem is simplified as

\min_{f_l, \xi} \; \frac{1}{2} f_l^\top K_{l,l}^{-1} f_l + C \sum_{i=1}^{l} \xi_i \qquad (8)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l.
Clearly, the above optimization is identical to the standard supervised SVM. Hence, the
unlabeled data are not employed to regularize the decision boundary when λ = 0.
3.3 Partial Regularization of Unlabeled Data
Second, we consider the case when λ is small. According to (7), we expect θ to be small when λ is small. As a result, we can approximate (M_u - M_u D(θ) M_u)^{-1} as follows:

\left(M_u - M_u D(\theta) M_u\right)^{-1} \approx M_u^{-1} + D(\theta).
Consequently, we can write Ω(f_l, λ) as follows:

\Omega(f_l, \lambda) = -\frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} M_u^{-1} K_{u,l} K_{l,l}^{-1} f_l + \Delta(f_l, \lambda), \qquad (9)
where Δ(f_l, λ) is the output of the following optimization problem:

\max_{\theta} \; \frac{\lambda}{2} \theta^\top e - \frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} D(\theta) K_{u,l} K_{l,l}^{-1} f_l
\quad \text{s.t. } M_u^{-1} \succeq D(\theta), \; \theta_i \ge 0, \; i = 1, \ldots, n - l.

We can simplify the above problem by approximating the constraint M_u^{-1} \succeq D(\theta) as \theta_i \le [\sigma_1(M_u)]^{-1}, i = 1, \ldots, n - l, where \sigma_1(M_u) represents the maximum eigenvalue of the matrix M_u. The resulting simplified problem becomes

\max_{\theta} \; \frac{\lambda}{2} \theta^\top e - \frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} D(\theta) K_{u,l} K_{l,l}^{-1} f_l
\quad \text{s.t. } 0 \le \theta_i \le [\sigma_1(M_u)]^{-1}, \; 1 \le i \le n - l.
As the above problem is a linear programming problem, the solution for θ can be computed as:

\theta_i = \begin{cases} 0 & [K_{u,l} K_{l,l}^{-1} f_l]_i^2 > \lambda, \\ [\sigma_1(M_u)]^{-1} & [K_{u,l} K_{l,l}^{-1} f_l]_i^2 \le \lambda. \end{cases}
From the above formulation, we find that λ plays the role of a threshold for selecting the unlabeled examples. Since [K_{u,l} K_{l,l}^{-1} f_l]_i can be regarded as the approximate prediction for the i-th unlabeled example, the above formulation can be interpreted as selecting only the unlabeled examples with low prediction confidence for regularizing the decision boundary. Moreover, all the unlabeled examples with high prediction confidence will be ignored. From the above discussion, we can conclude that λ determines the regularization strength of the unlabeled examples.
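The following small sketch (not the authors' code) renders the closed-form threshold rule above; theta_solution and lam are illustrative names.

```python
import numpy as np

def theta_solution(K, f_l, lam):
    l = len(f_l)
    Kll, Klu = K[:l, :l], K[:l, l:]
    Kul, Kuu = K[l:, :l], K[l:, l:]
    Mu = Kuu - Kul @ np.linalg.solve(Kll, Klu)   # Schur complement
    sigma1 = np.linalg.eigvalsh(Mu)[-1]          # largest eigenvalue of M_u
    f_u_hat = Kul @ np.linalg.solve(Kll, f_l)    # [K_{u,l} K_{l,l}^{-1} f_l]
    # confident unlabeled points (squared prediction above lambda) get 0,
    # low-confidence ones get the maximal dual weight 1/sigma_1(M_u)
    return np.where(f_u_hat ** 2 > lam, 0.0, 1.0 / sigma1)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); K = A @ A.T + 6 * np.eye(6)
print(theta_solution(K, rng.normal(size=2), lam=0.05))
```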
Then, we rewrite the overall optimization problem as below (the term (λ/2)θ^⊤e carries over from Δ(f_l, λ)):

\min_{f_l, \xi} \max_{\theta} \; \frac{1}{2} f_l^\top K_{l,l}^{-1} f_l + C \sum_{i=1}^{l} \xi_i + \frac{\lambda}{2} \theta^\top e - \frac{1}{2} f_l^\top K_{l,l}^{-1} K_{l,u} D(\theta) K_{u,l} K_{l,l}^{-1} f_l \qquad (10)
\text{s.t. } y_i f_i \ge 1 - \xi_i, \; \xi_i \ge 0, \; 1 \le i \le l,
\qquad 0 \le \theta_i \le [\sigma_1(M_u)]^{-1}, \; 1 \le i \le n - l.
This is a min-max optimization problem, and thus the global optimal solution can be guaranteed. To obtain the optimal solution, we employ an alternating optimization procedure, which iteratively computes the values of f_l and θ. To account for the penalty on the margin error from the unlabeled data, we just need to add an extra constraint of θ_i ≤ 2C for i = 1, ..., n - l.
By varying the parameter λ from 0 to 1, we can indeed obtain a series of transductive models for SVM. When λ is small, we call the corresponding optimization problem weakly semi-supervised learning. Therefore, it is important to find an appropriate λ adapted to the input data. However, as the data distribution is usually unknown, it is very challenging to directly estimate an optimal regularization strength parameter λ. Instead, we explore an alternative approach that selects an appropriate λ by combining the prediction functions. Due to the large cost of calculating the inverse of kernel matrices, one can solve the dual problems according to the Representer theorem.
3.4 Adaptive Regularization
As stated in the previous sections, λ determines the regularization strength of the unlabeled data. We now try to adapt the parameter λ according to the unlabeled data information. Specifically, we intend to implicitly select the best λ from a given list, i.e., λ ∈ {λ_1, ..., λ_m}, where λ_1 = 0 and λ_m = 1. This is equivalent to selecting the optimal f from a list of prediction functions, i.e., F = {f_1, ..., f_m}. Motivated by the ensemble technique for semi-supervised learning [7], we assume that the optimal f comes from a linear combination of the base functions {f_i}. We then have:

f = \sum_{i=1}^{m} \alpha_i f_i, \qquad \sum_{i=1}^{m} \alpha_i = 1, \; \alpha_i \ge 0, \; i = 1, \ldots, m,
where α_i is the weight of the prediction function f_i and α ∈ R^m. One can also incorporate a prior on α_i. For example, if we have more confidence in the semi-supervised classifier, we can introduce a constraint like α_m ≥ 0.5. It is important to note that the learning functions in ensemble methods [7] are usually weak learners, while in our approach, the learning functions are strong learners with different degrees of regularization.
In the following, we study how to set the regularization strength adaptively to the data. Since TSVM naturally follows the cluster assumption of semi-supervised learning, in order to complement the cluster assumption, we adopt another principle of semi-supervised learning, i.e., the manifold assumption. From the point of view of the manifold assumption in semi-supervised learning, the prediction function f should be smooth on the unlabeled data. To this end, manifold regularization is widely adopted as a smoothing term in the semi-supervised learning literature, e.g., [1, 10]. In the following, we will employ the manifold regularization principle for selecting the regularization strength.
Manifold regularization is mainly based on a graph G = ⟨V, E⟩ derived from the whole data space X, where V = {x_i}_{i=1}^{n} is the vertex set and E denotes the edges linking pairs of nodes. In general, the graph is built in the following four steps: (1) constructing the adjacency graph; (2) calculating the weights on the edges; (3) computing the adjacency matrix W; (4) obtaining the graph Laplacian L = diag(\sum_{j=1}^{n} W_{ij}) - W. Then, we denote the manifold regularization term as f^\top L f.
For simplicity, we denote the predicted values of function f_i on the data X as f_i, such that f_i = ([f_i]_1, ..., [f_i]_n). F = (f_1, ..., f_m)^\top is used to represent the set of prediction values of all the prediction functions. Finally, we have the following minimization problem:

\min_{\alpha} \; \frac{\gamma}{2} (\alpha^\top F) L (F^\top \alpha) - y_\ell^\top (F_\ell^\top \alpha) \qquad (11)
\text{s.t. } \alpha^\top e = 1, \; \alpha_i \ge 0, \; i = 1, \ldots, m,
where the second term, y_\ell^\top (F_\ell^\top \alpha), is used to strengthen the confidence of the prediction on the labeled data, and γ is a trade-off parameter. The above optimization problem is a simple quadratic programming problem, which can be solved very efficiently. It is important to note that the above problem is less sensitive to the graph structure than the Laplacian SVM used in [1], since the base learning functions are all strong learners. It also saves a considerable amount of effort in estimating parameters compared with Laplacian SVM.
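The simplex-constrained QP in (11) can be handed to any generic solver; below is an illustrative sketch with scipy (solve_alpha and gamma are assumed names, with gamma = 0.001 as in the experiments later).

```python
import numpy as np
from scipy.optimize import minimize

def solve_alpha(F, L, y_l, gamma=0.001):
    m, n = F.shape
    l = len(y_l)
    Q = gamma * F @ L @ F.T                      # quadratic term (m x m)
    c = F[:, :l] @ y_l                           # linear term from labels
    obj = lambda a: 0.5 * a @ Q @ a - c @ a
    cons = {"type": "eq", "fun": lambda a: a.sum() - 1.0}
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method="SLSQP")
    return res.x                                 # simplex weights over models

rng = np.random.default_rng(5)
F = rng.normal(size=(3, 10))                     # 3 base models, 10 data points
Lg = np.eye(10)                                  # stand-in graph Laplacian
y_l = np.sign(rng.normal(size=4))                # 4 labeled points
print(solve_alpha(F, Lg, y_l))
```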
The above approach indeed provides a practical route to combining both the cluster assumption and the manifold assumption. It has been empirically suggested that combining these two assumptions helps to improve the prediction accuracy of semi-supervised learning, according to the survey paper on semi-supervised SVMs [6]. Moreover, when λ = 0, supervised models are incorporated in the framework. Thus the usefulness of the unlabeled data is naturally considered by the regularization. This therefore provides a practical solution to the problems described in Section 1.
4 Experiments
In this section, we give details of our implementation and discuss the results on several
benchmark data sets for our proposed approach. To conduct a comprehensive evaluation, we
employ several well-known datasets as the testbed. As summarized in Table 1, three image
data sets and five text data sets are selected from the recent book (www.kyb.tuebingen.mpg.de/ssl-book/) and the literature (www.cs.uchicago.edu/~vikass/).
Table 1: Datasets used in our experiments. d represents the data dimensionality, and n denotes the total number of examples.

Data set   n      d     | Data set      n      d
usps       1500   241   | digit1        1500   241
coil       1500   241   | ibm vs rest   1500   11960
pcmac      1946   7511  | page          1051   3000
link       1051   1800  | pagelink      1051   4800
For simplicity, our proposed adaptive regularization approach is denoted as ARTSVM. To evaluate it, we conduct an extensive comparison with several state-of-the-art approaches, including the label-switching-retraining algorithm in SVM-Light [12], CCCP [8], and ∇TSVM [4]. We employ SVM as the baseline method.
In our experiments, we repeat all the algorithms 20 times for each dataset. In each run, 10% of the data are randomly selected as the training data and the remaining data are used as the unlabeled data. The value of C in all algorithms is selected from [1, 10, 100, 1000] using cross-validation. The set of λ values is set to [0, 0.01, 0.05, 0.1, 1] and γ is fixed to 0.001. As stated in Section 3.4, ARTSVM is less sensitive to the graph structure. Thus, we adopt a simple way to construct the graph: for each data point, the number of neighbors is set to 20 and binary weighting is employed. In ARTSVM, the supervised, weakly semi-supervised, and semi-supervised algorithms are based on the implementations in LibSVM (www.csie.ntu.edu.tw/~cjlin/libsvm/), MOSEK (www.mosek.org), and ∇TSVM (www.kyb.tuebingen.mpg.de/bs/people/chapelle/lds/), respectively. For the comparison algorithms, we adopt the original authors' own implementations.
Table 2 summarizes the classification accuracy and the standard deviations of the proposed
ARTSVM method and other competing methods. We can draw several observations from
the results. First of all, we can clearly see that our proposed algorithm performs significantly better than the baseline SVM method across all the data sets. Note that some
very large deviations in SVM are mainly because the labeled data and the unlabeled data
may have quite different distributions after the random sampling. On the other hand, the
unlabeled data capture the underlying distribution and help to correct such random error.
Comparing ARTSVM with other TSVM algorithms, we observe that ARTSVM achieves
the best performance in most cases. For example, for the digital image data sets, especially digit1, supervised learning usually works well and the advantages of TSVM are very limited. However, the proposed ARTSVM outperforms both the supervised and the other semi-supervised algorithms. This indicates that appropriate regularization from the unlabeled data improves the classification performance.
Table 2: The classification performance of Transductive SVMs on benchmark data sets.

Data Set      ARTSVM        ∇TSVM         SVM            CCCP          SVM-light
usps          81.30±4.04    79.44±3.63    79.23±8.60     80.48±3.20    78.16±4.41
digit1        82.10±2.11    80.55±1.94    81.70±5.61     80.69±2.97    77.53±4.24
coil          81.70±2.10    79.84±1.88    78.98±8.07     80.15±2.90    79.03±2.84
ibm vs rest   78.04±1.44    76.83±2.11    72.90±2.32     77.52±1.51    73.99±5.18
pcmac         95.50±0.88    95.42±0.95    92.57±0.82     94.86±1.09    91.42±7.24
page          94.65±1.19    94.78±1.83    75.22±17.38    94.47±1.67    93.98±2.60
link          94.27±0.97    93.56±1.58    40.79±3.63     92.60±2.10    92.18±2.45
pagelink      97.31±0.68    96.53±1.84    89.41±3.12     95.97±2.22    94.89±1.81
5 Conclusion
This paper presents a novel framework for semi-supervised learning from the perspective of
the regularization strength from the unlabeled data. In particular, for Transductive SVM,
we show that SVM and TSVM can be incorporated as special cases within this framework.
In more detail, the loss on the unlabeled data can essentially be regarded as an additional
regularizer for the decision boundary in TSVM. To control the regularization strength, we introduce an alternative method of data-dependent regularization based on the principle of
manifold regularization. Empirical studies on benchmark data sets demonstrate that the
proposed framework is more effective than the previous transductive algorithms and purely
supervised methods.
For future work, we plan to design a controlling strategy that is adaptive to data from
the perspective of low density assumption and manifold regularization of semi-supervised
learning. Finally, it is desirable to integrate the low density assumption and manifold
regularization into a unified framework.
Acknowledgement
The work was supported by the National Science Foundation (IIS-0643494), the National Institutes of Health (1R01GM079688-01), the Research Grants Council of Hong Kong (CUHK4158/08E and CUHK4128/08E), and MSRA (FY09-RES-OPP-103). It is also affiliated with the MS-CUHK Joint Lab for Human-centric Computing & Interface Technologies.
References
[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, 2006.
[2] Avrim Blum and Shuchi Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML '01: Proceedings of the 18th International Conference on Machine Learning, pages 19-26. Morgan Kaufmann, San Francisco, CA, 2001.
[3] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[4] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 57-64, 2005.
[5] Olivier Chapelle, Vikas Sindhwani, and Sathiya Keerthi. Branch and bound for semi-supervised support vector machines. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[6] Olivier Chapelle, Vikas Sindhwani, and Sathiya S. Keerthi. Optimization techniques for semi-supervised support vector machines. Journal of Machine Learning Research, 9:203-233, 2008.
[7] Ke Chen and Shihai Wang. Regularized boost for semi-supervised learning. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 281-288. MIT Press, Cambridge, MA, 2008.
[8] Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687-1712, 2006.
[9] S. C. H. Hoi, M. R. Lyu, and E. Y. Chang. Learning the unified kernel machines for classification. In Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD-2006), pages 187-196, New York, NY, USA, 2006. ACM Press.
[10] Steven C. H. Hoi, Rong Jin, and Michael R. Lyu. Learning nonparametric kernel matrices from pairwise constraints. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 361-368, New York, NY, USA, 2007. ACM.
[11] T. Joachims. Transductive learning via spectral graph partitioning. In ICML '03: Proceedings of the 20th International Conference on Machine Learning, pages 290-297, 2003.
[12] Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML '99: Proceedings of the 16th International Conference on Machine Learning, pages 200-209, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
[13] John Lafferty and Larry Wasserman. Statistical analysis of semi-supervised regression. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 801-808. MIT Press, Cambridge, MA, 2008.
[14] Hariharan Narayanan, Mikhail Belkin, and Partha Niyogi. On the relation between low density separation, spectral clustering and graph cuts. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1025-1032. MIT Press, Cambridge, MA, 2007.
[15] Vikas Sindhwani, S. Sathiya Keerthi, and Olivier Chapelle. Deterministic annealing for semi-supervised kernel machines. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 841-848, New York, NY, USA, 2006. ACM Press.
[16] Aarti Singh, Robert Nowak, and Xiaojin Zhu. Unlabeled data: Now it helps, now it doesn't. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1513-1520, 2009.
[17] Junhui Wang, Xiaotong Shen, and Wei Pan. On efficient large margin semisupervised learning: Method and theory. Journal of Machine Learning Research, 10:719-742, 2009.
[18] Linli Xu and Dale Schuurmans. Unsupervised and semi-supervised multi-class support vector machines. In AAAI, pages 904-910, 2005.
[19] Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, and Michael R. Lyu. Efficient convex relaxation for transductive support vector machine. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1641-1648. MIT Press, Cambridge, MA, 2008.
[20] T. Zhang and R. Ando. Analysis of spectral kernel design based semi-supervised learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1601-1608. MIT Press, Cambridge, MA, 2006.
[21] Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[22] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17, pages 1641-1648, Cambridge, MA, 2005. MIT Press.
[23] Xiaojin Zhu. Semi-supervised learning literature survey. Technical report, Computer Sciences, University of Wisconsin-Madison, 2005.
cuhk:2 semi:28 branch:2 full:2 desirable:1 ii:1 zien:2 jianke:2 smooth:3 technical:1 adapt:1 cross:1 divided:1 cccp:3 laplacian:3 prediction:13 involving:1 mmci:1 regression:2 basic:1 vision:2 essentially:1 kernel:7 represent:2 penalize:1 background:1 addition:1 annealing:2 publisher:1 sch:6 extra:1 rest:3 unlike:1 induced:1 lafferty:3 call:1 ee:1 yang:2 presence:1 bengio:1 fm:2 competing:1 idea:1 shatin:1 msra:1 whether:1 motivated:1 effort:2 penalty:1 york:3 linli:1 ignored:1 useful:1 involve:1 amount:3 nonparametric:2 transforms:1 narayanan:2 svms:2 category:1 reduced:1 write:3 four:1 threshold:1 blum:1 changing:1 libsvm:2 tenth:1 utilize:1 graph:15 relaxation:2 year:2 run:1 inverse:3 shuchi:1 separation:2 draw:1 decision:6 summarizes:1 bound:3 fl:50 followed:1 guaranteed:1 quadratic:1 strength:18 constraint:3 helsinki:1 sake:1 bousquet:1 speed:1 min:11 optimality:1 xiaotong:1 according:6 combination:4 across:1 pan:1 tw:1 b:1 thorsten:1 taken:1 equation:3 zurich:2 remains:1 discus:2 cjlin:1 needed:1 singer:3 end:2 adopted:2 apply:1 observe:1 away:1 spectral:5 appropriate:3 chawla:1 vikass:1 save:1 alternative:2 vikas:4 original:1 thomas:1 denotes:3 clustering:4 include:1 remaining:1 hinge:1 madison:1 calculating:2 eon:1 ghahramani:1 chinese:1 especially:1 approximating:1 implied:1 intend:2 strategy:1 unclear:1 gradient:1 win:1 separate:1 link:2 sci:1 thrun:1 manifold:22 tuebingen:2 relationship:2 reformulate:1 illustration:1 difficult:1 unfortunately:1 robert:1 stated:2 implementation:3 design:2 affiliated:1 unknown:2 observation:1 datasets:2 benchmark:5 finite:1 fabian:1 jin:3 descent:1 situation:1 incorporated:2 rn:3 complement:1 namely:2 pair:1 kl:36 extensive:1 connection:1 lal:1 testbed:1 boost:1 nu:1 nip:2 address:1 suggested:2 fi2:2 below:3 pattern:1 usually:3 challenge:2 built:1 including:3 max:5 critical:1 regularized:1 zhu:5 minimax:1 improve:2 technology:2 concludes:1 health:1 xiaojin:2 text:2 review:1 literature:3 acknowledgement:1 geometric:1 discovery:1 wisconsin:1 loss:3 expect:1 interesting:1 limitation:1 penalization:1 validation:1 digital:1 integrate:1 degree:1 foundation:1 principle:3 editor:9 heavy:1 ibm:2 repeat:1 supported:1 bias:1 uchicago:1 understand:2 institute:1 neighbor:1 saul:1 taking:1 mikhail:2 yiu:1 boundary:8 xn:1 world:2 computes:1 doesn:1 forward:1 author:1 adaptive:6 san:2 simplified:2 dale:1 far:1 approximate:1 implicitly:1 bernhard:2 opp:1 keep:1 ml:2 global:2 conclude:1 francisco:2 sathiya:3 xi:3 alternatively:1 spectrum:2 msu:1 search:1 iterative:1 table:4 additionally:1 promising:1 learn:1 ku:22 ca:2 controllable:1 rongjin:1 obtaining:1 improving:1 schuurmans:2 bottou:2 necessarily:1 constructing:1 domain:1 diag:2 whole:1 xu:3 x1:1 ny:3 lie:1 vanish:1 weighting:1 theorem:3 list:2 svm:20 workshop:1 avrim:1 supplement:1 margin:8 chen:1 michigan:1 explore:2 sindhwani:4 chang:1 ch:1 corresponds:1 satisfies:1 determines:2 acm:3 ma:9 coil:2 weston:2 identity:1 formulated:1 king:3 consequently:1 towards:2 hard:1 included:1 specifically:2 uniformly:1 lemma:2 total:1 mincuts:1 duality:1 experimental:1 east:1 kwkk:1 select:3 support:8 people:1 irwin:2 ethz:1 evaluate:1 regularizing:1 correlated:1 |
3,139 | 3,844 | Learning a Small Mixture of Trees?
Daphne Koller
Computer Science Department
Stanford University
[email protected]
M. Pawan Kumar
Computer Science Department
Stanford University
[email protected]
Abstract
The problem of approximating a given probability distribution using a simpler distribution plays an important role in several areas of machine learning, for example
variational inference and classification. Within this context, we consider the task
of learning a mixture of tree distributions. Although mixtures of trees can be
learned by minimizing the KL-divergence using an EM algorithm, its success depends heavily on the initialization. We propose an efficient strategy for obtaining
a good initial set of trees that attempts to cover the entire observed distribution by
minimizing the α-divergence with α = ∞. We formulate the problem using the
fractional covering framework and present a convergent sequential algorithm that
only relies on solving a convex program at each iteration. Compared to previous
methods, our approach results in a significantly smaller mixture of trees that provides similar or better accuracies. We demonstrate the usefulness of our approach
by learning pictorial structures for face recognition.
1 Introduction
Probabilistic models provide a powerful and intuitive framework for formulating several problems
in machine learning and its application areas, such as computer vision and computational biology. A
critical choice to be made when using a probabilistic model is its complexity. For example, consider
a system that involves n random variables. A probabilistic model that defines a clique of size n
has the ability to model any distribution over these random variables. However, the task of learning
and inference on such a model becomes computationally intractable. The other extreme case is to
define a tree structured model that allows for efficient learning [3] and inference [23]. However, tree
distributions have a restrictive form. Hence, they are not suitable for all applications.
A natural way to alleviate the deficiencies of tree distributions is to use a mixture of trees [21].
Mixtures of trees can be employed as accurate models for several interesting problems such as pose
estimation [11] and recognition [5, 12]. In order to facilitate their use, we consider the problem
of learning them by approximating an observed distribution. Note that the mixture can be learned
by minimizing the Kullback-Leibler (KL) divergence with respect to the observed distribution using
an expectation-maximization (EM) algorithm [21]. However, there are two main drawbacks of this
approach: (i) minimization of KL divergence mostly tries to explain the dominant mode of the
observed distribution [22], that is it does not explain the entire distribution; and (ii) as the EM
algorithm is prone to local minima, its success depends heavily on the initialization. An intuitive
solution to both these problems is to obtain an initial set of trees that covers as much of the observed
distribution as possible. To this end, we pose the learning problem as that of obtaining a set of trees
that minimizes a suitable α-divergence [25].
The α-divergence measures are a family of functions over two probability distributions that measure the information gain contained in them: that is, given the first distribution, how much information is obtained by observing the second distribution. They form a complete family of measures, in that no other function satisfies all the postulates of information gain [25]. When used as an objective
* This work was supported by DARPA SA4996-10929-4 and the Boeing company.
function to approximate an observed distribution, the value of α plays a significant role. For example, when α = 1 we obtain the KL divergence. As the value of α increases, the divergence measure becomes more and more inclusive [8], that is, it tries to cover as much of the observed distribution as possible [22]. Hence, a natural choice for our task of obtaining a good initial estimate would be to set α = ∞.
We formulate the minimization of the α-divergence with α = ∞ within the fractional covering framework [24]. However, the standard iterative algorithm for solving fractional covering is not readily applicable to our problem due to its small stepsize. In order to overcome this deficiency, we adapt this approach specifically to the task of learning mixtures of trees. Each iteration of our approach adds one tree to the mixture and only requires solving a convex optimization problem. In practice, our strategy converges within a small number of iterations, thereby resulting in a small mixture of trees. We demonstrate the effectiveness of our approach by providing a comparison with state-of-the-art methods and learning pictorial structures [6] for face recognition.
2 Related Work
The mixture of trees model was introduced by Meila and Jordan [21] who highlighted its appeal
by providing simple inference and sampling algorithms. They also described an EM algorithm that
learned a mixture of trees by minimizing the KL divergence. However, the accuracy of the EM
algorithm is highly dependent on the initial estimate of the mixture. This is evident in the fact
that their experiments required a large mixture of trees to explain the observed distribution, due to
random initialization.
Several works have attempted to obtain a good set of trees by devising algorithms for minimizing the KL divergence [8, 13, 19, 26]. In contrast, our method uses α = ∞, thereby providing a set of trees that covers the entire observed distribution. It has been shown that mixtures of trees admit a
decomposable prior [20]. In other words, one can concisely specify a certain prior probability for
each of the exponential number of tree structures for a given set of random variables. Kirschner and
Smyth [14] have also proposed a method to handle a countably infinite mixture of trees. However,
the complexity of both learning and inference in these models restricts their practical use.
Researchers have also considered mixtures of trees in the log-probability space. Unlike a mixture in
the probability space considered in this paper (which contains a hidden variable), mixtures of trees
in log-probability space still define pairwise Markov networks. Such mixtures of trees have been
used to obtain upper bounds on the log partition function [27]. However, in this case, the mixture is
obtained by considering subgraphs of a given graphical model instead of minimizing a divergence
measure with respect to the observed data. Finally, we note that semi-metric distance functions can
be approximated by a mixture of tree metrics using the fractional packing framework [24]. This allows us to approximate semi-metric probabilistic models by a simpler mixture of (not necessarily
tree) models whose pairwise potentials are defined by tree metrics [15, 17].
3 Preliminaries
Tree Distribution. Consider a set of n random variables V = {v_1, ..., v_n}, where each variable v_a can take a value x_a ∈ X_a. We represent a labeling of the random variables (i.e. a particular assignment of values) as a vector x = {x_a | a = 1, ..., n}. A tree structured model defined over the random variables V is a graph whose nodes correspond to the random variables and whose edges E define a tree. Such a model assigns a probability to each labeling that can be written as
\Pr(x|\theta^T) = \frac{1}{Z(\theta^T)} \, \frac{\prod_{(v_a, v_b) \in E} \theta^T_{ab}(x_a, x_b)}{\prod_{v_a \in V} \theta^T_a(x_a)^{\deg(a) - 1}}. \qquad (1)
Here θ^T_a(·) refers to unary potentials whose values depend on one variable at a time, and θ^T_{ab}(·,·) refers to pairwise potentials whose values depend on two neighboring variables at a time. The vector θ^T is the parameter of the model (which consists of all the potentials) and Z(θ^T) is the partition function, which ensures that the probability sums to one. The term deg(a) denotes the degree of the variable v_a.
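The following toy sketch (illustrative names and potentials, not from the paper) evaluates Eq. (1) on a 3-variable binary chain; the scores are unnormalized and divided by Z at the end.

```python
import numpy as np

def tree_score(x, theta_a, theta_ab, edges):
    deg = np.zeros(len(theta_a), dtype=int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    num = np.prod([theta_ab[(a, b)][x[a], x[b]] for a, b in edges])
    den = np.prod([theta_a[a][x[a]] ** (deg[a] - 1)
                   for a in range(len(theta_a))])
    return num / den          # unnormalized; divide by Z for a probability

theta_a = [np.array([0.6, 0.4])] * 3
theta_ab = {(0, 1): np.array([[0.5, 0.1], [0.1, 0.3]]),
            (1, 2): np.array([[0.4, 0.2], [0.2, 0.2]])}
edges = [(0, 1), (1, 2)]      # chain v0 - v1 - v2
scores = np.array([tree_score(x, theta_a, theta_ab, edges)
                   for x in np.ndindex(2, 2, 2)])
print(scores / scores.sum())  # normalized distribution over all 2^3 labelings
```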
Mixture of Trees. As the name suggests, a mixture of trees is defined by a set of trees together with a probability distribution over them, that is, θ^M = {(θ^T, λ^T)} such that the mixture coefficients satisfy λ^T > 0 for all T and \sum_T λ^T = 1. It defines the probability of a given labeling as

\Pr(x|\theta^M) = \sum_{T} \lambda^T \Pr(x|\theta^T). \qquad (2)
α-Divergence. The α-divergence between distributions Pr(·|θ¹) (say the observed distribution) and Pr(·|θ²) (the simpler distribution) is given by

D_\alpha(\theta^1 \| \theta^2) = \frac{1}{\alpha - 1} \log \left( \sum_{x} \frac{\Pr(x|\theta^1)^\alpha}{\Pr(x|\theta^2)^{\alpha - 1}} \right). \qquad (3)
The α-divergence measure is strictly non-negative and is equal to 0 if and only if θ¹ is a reparameterization of θ². It is a generalization of the KL divergence, which corresponds to α = 1, that is,

D_1(\theta^1 \| \theta^2) = \sum_{x} \Pr(x|\theta^1) \log \frac{\Pr(x|\theta^1)}{\Pr(x|\theta^2)}. \qquad (4)
As mentioned earlier, we are interested in the case where α = ∞, that is,

D_\infty(\theta^1 \| \theta^2) = \max_{x} \log \frac{\Pr(x|\theta^1)}{\Pr(x|\theta^2)}. \qquad (5)
The inclusive property of α = ∞ is evident from the above formula. Since we would like to minimize the maximum ratio of probabilities (i.e. the worst case), we need to ensure that no value of Pr(x|θ²) is very small, that is, the entire distribution is covered. In contrast, the KL divergence can admit very small values of Pr(x|θ²), since it is concerned with the summation shown in equation (4) (and not the worst case). To avoid confusion, we shall refer to the case α = 1 as KL divergence and the case α = ∞ as ∞-divergence throughout this paper.
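A small numeric sketch (not from the paper) of Eqs. (3)-(5) for two discrete distributions p (observed) and q (approximation) illustrates the inclusive behaviour discussed above.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    if alpha == 1:                       # KL divergence, Eq. (4)
        return np.sum(p * np.log(p / q))
    if np.isinf(alpha):                  # inf-divergence, Eq. (5)
        return np.max(np.log(p / q))
    return np.log(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.49, 0.01])
for a in [0.5, 1, 2, np.inf]:
    print(a, alpha_divergence(p, q, a))
# As alpha grows, the divergence is dominated by the worst-covered state
# (here the third one), so minimizing it forces q to cover all of p.
```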
The Learning Problem. Given a set of samples {x_i, i = 1, ..., m} along with their probabilities P̂(x_i), our task is to learn a mixture of trees θ^{M*} such that

\theta^{M^*} = \arg\min_{\theta^M} \max_i \log\left(\frac{\hat{P}(x_i)}{\Pr(x_i|\theta^M)}\right) = \arg\max_{\theta^M} \min_i \left(\frac{\Pr(x_i|\theta^M)}{\hat{P}(x_i)}\right). \qquad (6)
We will concentrate on the second form in the above equation (where the logarithm has been dropped). We define T = {θ^{T_j}} to be the set of all t tree distributions defined over the n random variables. It follows that the probability of a labeling for any mixture of trees can be written as
\Pr(x|\theta^M) = \sum_j \lambda_j \Pr(x|\theta^{T_j}), \qquad (7)

for suitable values of λ_j. Note that the mixing coefficients λ should define a valid probability distribution. In other words, λ belongs to the polytope P defined as

\lambda \in P \iff \sum_j \lambda_j = 1, \; \lambda_j \ge 0, \; j = 1, \ldots, t. \qquad (8)
Our task is to find a sparse vector λ that minimizes the ∞-divergence with respect to the observed distribution. In order to formally specify the minimization of the ∞-divergence as an optimization problem, we define an m × t matrix A and an m × 1 vector b such that

A(i, j) = \Pr(x_i|\theta^{T_j}) \quad \text{and} \quad b_i = \hat{P}(x_i). \qquad (9)

We denote the i-th row of A as a_i and the i-th element of b as b_i. Using the above notation, the learning problem can be specified as

\max_{\lambda} \; \lambda^*, \quad \text{s.t. } a_i \lambda \ge \lambda^* b_i \;\; \forall i, \; \lambda \in P, \qquad (10)

where λ* = min_i a_i λ / b_i due to the form of the above LP. The above formulation suggests that a natural way to attack the problem would be to use the fractional covering framework [24]. We begin by briefly describing fractional covering in the next section.
4 Fractional Covering
Given an m × t matrix A and an m × 1 vector b > 0, the fractional covering problem is to determine whether there exists a vector λ ∈ P such that Aλ ≥ b. The only restriction on the polytope P is that Aλ ≥ 0 for all λ ∈ P, which is clearly satisfied by our learning problem (since a_i λ is the probability of x_i specified by the mixture of trees corresponding to λ). Let

\lambda^* = \max_{\lambda} \min_i \frac{a_i \lambda}{b_i}. \qquad (11)
If λ* < 1, then clearly there does not exist a λ such that Aλ ≥ b. However, if λ* ≥ 1, then the fractional covering problem requires us to find an ε-optimal solution, that is, find a λ such that

A\lambda \ge (1 - \epsilon)\lambda^* b, \qquad (12)

where ε > 0 is a user-specified tolerance factor. Using the definitions of A, b and λ from the previous section, we observe that in our case λ* = 1. In other words, there exists a solution such that Aλ = b. This can easily be seen by considering trees with parameters θ^{T_j} such that

\Pr(x_i|\theta^{T_j}) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise}, \end{cases} \qquad (13)

and setting λ_j = P̂(x_j). The above solution provides an ∞-divergence of 0, but at the cost of introducing m trees in the mixture (where m is the number of samples provided). We would like to find an ε-optimal solution with a smaller number of trees by solving the LP (10). However, we cannot employ standard interior point algorithms for optimizing problem (10). This is due to the fact that each of its m constraints is defined over an infinite number of unknowns (specifically, the mixture coefficients for each of the infinite number of tree distributions defined over the n random variables). Fortunately, Plotkin et al. [24] provide an iterative algorithm for solving problem (10) that can handle an arbitrarily large number of unknowns in every constraint.
The Fractional Covering Algorithm. In order to obtain a solution to problem (10), we solve the following related problem:

\min_{\lambda \in P} \Phi(y) \equiv y^\top b, \quad \text{s.t. } y_i = \frac{1}{b_i} \exp\left(-\eta \frac{a_i \lambda}{b_i}\right). \qquad (14)
The objective function Φ(y) is called the potential function for fractional covering. Plotkin et al. [24] showed that minimizing Φ(y) solves the original fractional covering problem. The term η is a parameter that is inversely proportional to the stepsize σ of the algorithm. The fractional covering algorithm is an iterative strategy. At iteration t, the variable λ_t is updated as λ_t ← (1 - σ)λ_{t-1} + σλ°, such that the update attempts to decrease the potential function. Specifically, the algorithm proposed in [24] suggests using the first-order approximation of Φ(y), that is,

\lambda^\circ = \arg\min_{\lambda} \sum_i y_i^* \left(b_i - \eta\sigma\, a_i \lambda\right) = \arg\max_{\lambda \in P} y^{*\top} A \lambda, \qquad (15)

where

y_i^* = \frac{1}{b_i} \exp\left(-\eta \frac{(1 - \sigma)\, a_i \lambda}{b_i}\right). \qquad (16)
Typically, the above problem is easy to solve (including in our case, as will be seen in the next section). Furthermore, for a sufficiently large value of η (∝ log m), the above update rule decreases Φ(y). In more detail, the algorithm of [24] is as follows:

• Define w = max_{λ∈P} max_i a_iλ/b_i to be the width of the problem.
• Start with an initial solution λ_0.
• Define λ̄_0 = min_i a_iλ_0/b_i, and σ = ε/(4ηw).
• While λ̄ < 2λ̄_0, at iteration t:
  - Define y* as shown in equation (16).
  - Find λ° = argmax_{λ∈P} y*^⊤Aλ.
  - Update λ_t ← (1 - σ)λ_{t-1} + σλ°.
Plotkin et al. [24] suggest starting with a tolerance factor of ε_0 = 1/6 and dividing the value of ε_0 by 2 each time the above procedure terminates. This process is continued until a sufficiently accurate (i.e. an ε-optimal) solution is recovered. Note that during each call to the above procedure the potential function Φ(y) is both upper and lower bounded, specifically

\exp(-2\eta\epsilon\bar{\lambda}_0) \le \Phi(y) \le m \exp(-\eta\epsilon\bar{\lambda}_0). \qquad (17)

Furthermore, we are guaranteed to decrease the value of Φ(y) at each iteration. Hence, it follows that the above algorithm will converge. We refer the reader to [24] for more details.
5 Modifying Fractional Covering
The above algorithm provides an elegant way to solve the general fractional covering problem.
However, as will be seen shortly, in our case it leads to undesirable solutions. Nevertheless, we
show that appropriate modifications can be made to obtain a small and accurate mixture of trees. We begin by identifying the deficiencies of the fractional covering algorithm for our learning problem.
5.1 Drawbacks of the Algorithm
There are two main drawbacks of fractional covering. First, the value of η is typically very large, which results in a small stepsize σ. In our experiments, η was of the order of 10³, which resulted in slow convergence of the algorithm. Second, the update step provides singleton trees, that is, trees with a probability of 1 for one labeling and 0 for all others. This is due to the fact that, in our case, the update step solves the following problem:

\max_{\lambda \in P} \sum_i y_i^* \left( \sum_j \lambda_j \Pr(x_i|\theta^{T_j}) \right). \qquad (18)
Note that the above problem is an LP in λ. Hence, there must exist an optimal solution at a vertex of the polytope P. In other words, we obtain a single tree distribution θ^{T°} such that

\theta^{T^\circ} = \arg\max_{\theta^T} \left( \sum_i y_i^* \Pr(x_i|\theta^T) \right). \qquad (19)
The optimal tree distribution for the above problem concentrates the entire mass on the sample x_{i*}, where i* = argmax_i y_i^*. Such singleton trees are not desirable, as they also result in slow convergence of the algorithm. Furthermore, the learned mixture only provides a non-zero probability for the samples used during training. Hence, the mixture cannot be used for previously unseen samples, thereby rendering it practically useless. Note that the method of Rosset and Segal [26] also faces a similar problem during their update steps for minimizing the KL divergence. In order to overcome this difficulty, they suggest approximating problem (18) by

\theta^{T^\circ} = \arg\max_{\theta^T} \sum_i y_i^* \log \Pr(x_i|\theta^T), \qquad (20)

which can be solved efficiently using the Chow-Liu algorithm [3]. However, our preliminary experiments (accuracies not reported) indicate that this approach does not work well for minimizing the potential function Φ(y).
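For reference, a weighted Chow-Liu step of the kind used in Eq. (20) can be sketched as below (illustrative names, binary variables, assuming strictly positive mutual information so the spanning tree is connected): compute pairwise mutual information under the weighted empirical distribution, then take a maximum-weight spanning tree.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def weighted_chow_liu(X, weights):
    m, n = X.shape
    w = weights / weights.sum()
    mi = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            for xa in (0, 1):
                for xb in (0, 1):
                    p_ab = w[(X[:, a] == xa) & (X[:, b] == xb)].sum()
                    p_a = w[X[:, a] == xa].sum()
                    p_b = w[X[:, b] == xb].sum()
                    if p_ab > 0:
                        mi[a, b] += p_ab * np.log(p_ab / (p_a * p_b))
    # maximum-weight spanning tree = minimum spanning tree of negated MI
    tree = minimum_spanning_tree(-mi)
    return [(int(a), int(b)) for a, b in zip(*tree.nonzero())]

X = (np.random.default_rng(2).random((50, 4)) > 0.5).astype(int)
print(weighted_chow_liu(X, np.ones(50)))   # list of tree edges
```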
5.2 Fixing the Drawbacks
We adapt the original fractional covering algorithm for our problem in order to overcome the drawbacks mentioned above. The first drawback is handled easily. We start with a small value of η and increase it by a factor of 2 if we are not able to reduce the potential function Φ(y) at a given iteration. Since we are assured that the value of Φ(y) decreases for a finite value of η, this procedure is guaranteed to terminate. In our experiments, we initialized η = 1/w and its value never exceeded 32/w. Note that choosing η to be inversely proportional to w ensures that the initial values of y_i^* in equation (16) are sufficiently large (at least exp(-(1 - σ))).
In order to address the second drawback, we note that our aim at iteration t of the algorithm is to reduce the potential function Φ(y). That is, given the current distribution parameterized by θ^{M_t}, we would like to add a new tree θ^{T_t} to the mixture that solves the following problem:

\theta^{T_t} = \arg\min_{\theta^T} \Phi(y) \equiv \sum_i y_i^* \exp\left(-\eta\sigma \frac{\Pr(x_i|\theta^T)}{\hat{P}(x_i)}\right) \qquad (21)
\text{s.t. } \sum_i \Pr(x_i|\theta^T) \le 1, \; \Pr(x_i|\theta^T) \ge 0, \; i = 1, \ldots, m, \qquad (22)
\qquad \theta^T \in T. \qquad (23)
Here, T is the set of all tree distributions defined over n random variables. Note that the algorithm of [24] optimizes the first-order approximation of the objective function (21). However, as seen previously, for our problem this results in an undesirable solution. Instead, we directly optimize Φ(y) using an alternative two-step strategy. In the first step, we drop the last constraint from the above problem. In other words, we obtain the values of Pr(x_i|θ^T) that form a valid (but not necessarily tree-structured) distribution and minimize the function Φ(y). Note that since Φ(y) is not linear in Pr(x_i|θ^T), the optimal solution provides a dense distribution Pr(·|θ^T) (as opposed to the first-order linear approximation, which provides a singleton distribution). In the second step, we project these values to a tree distribution. It is easy to see that dropping constraint (23) results in a convex relaxation of the original problem. We solve the convex relaxation using a log-barrier method [1]. Briefly, this implies solving a series of unconstrained optimization problems until we are within a user-specified tolerance τ of the optimal solution. Specifically:

• Set f = 1.
• Solve \min_{\Pr(\cdot|\theta^T)} \; f\,\Phi(y) - \sum_i \log(\Pr(x_i|\theta^T)) - \log\left(1 - \sum_i \Pr(x_i|\theta^T)\right).
• If m/f ≤ τ, then stop. Otherwise, update f = µf and repeat the previous step.
We used µ = 1.5 in all our experiments, which was sufficient to obtain accurate solutions for the convex relaxation. At each iteration, the unconstrained optimization problem is solved using Newton's method. Recall that Newton's method minimizes a function g(z) by updating the current solution as

z \leftarrow z - \left(\nabla^2 g(z)\right)^{-1} \nabla g(z), \qquad (24)

where ∇²g(·) denotes the Hessian matrix and ∇g(·) denotes the gradient vector. Note that the most expensive step in the above approach is the inversion of the Hessian matrix. However, it is easy to verify that in our case all the off-diagonal elements of the Hessian are equal to each other. By taking advantage of this special form of the Hessian, we compute its inverse in O(m²) time using Gaussian elimination (i.e. linear in the number of elements of the Hessian).
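The paper exploits this structure via Gaussian elimination; an equivalent way to see why it is cheap is the Sherman-Morrison identity, sketched below (not the authors' code): a Hessian whose off-diagonal entries all share one value c can be written H = diag(d - c) + c·11^⊤, so H^{-1}x costs only O(m).

```python
import numpy as np

def solve_constant_offdiag(d, c, x):
    # H = D + c * 1 1^T with D = diag(d - c); Sherman-Morrison solve
    dinv = 1.0 / (d - c)
    u = dinv * x
    correction = c * np.sum(u) / (1.0 + c * np.sum(dinv))
    return u - dinv * correction

m = 5
d = np.random.default_rng(3).uniform(2.0, 3.0, size=m)  # diagonal of H
c = 0.5                                                  # shared off-diagonal
H = np.full((m, m), c) + np.diag(d - c)
x = np.ones(m)
print(np.allclose(solve_constant_offdiag(d, c, x), np.linalg.solve(H, x)))
```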
Once the values of Pr(x_i|θ^T) are computed in this manner, they are projected to a tree distribution using the Chow-Liu algorithm [3]. Note that after the projection step we are no longer guaranteed to decrease the function Φ(y). This would imply that the overall algorithm is not guaranteed to converge. In order to overcome this problem, if we are unable to decrease Φ(y), then we determine the sample x_{i*} such that

i^* = \arg\max_i \frac{\Pr(x_i|\theta^{M_t})}{\hat{P}(x_i)}, \qquad (25)

that is, the sample best explained by the current mixture. We enforce Pr(x_{i*}|θ^T) = 0 and solve the above convex relaxation again. Note that the solution to the new convex relaxation (i.e. the one with the newly introduced constraint for sample x_{i*}) can easily be obtained from the solution of the previous convex relaxation using the following update:

\Pr(x_i|\theta^T) \leftarrow \begin{cases} \Pr(x_i|\theta^T) + \hat{P}(x_i)\Pr(x_{i^*}|\theta^T)/s & \text{if } i \ne i^*, \\ 0 & \text{otherwise}, \end{cases} \qquad (26)

where s = \sum_i \hat{P}(x_i). In other words, we do not need to use the log-barrier method to solve the new convex relaxation. We then project the updated values of Pr(x_i|θ^T) to a tree distribution. This process of eliminating one sample and projecting to a tree is repeated until we are able to reduce the value of Φ(y). Note that in the worst case we will eliminate all but one sample (specifically, the one that corresponds to the update scheme of [24]). In other words, we will add a singleton tree. However, in practice our algorithm converges in a small number (≪ m) of iterations and provides an accurate mixture of trees. In fact, in all our experiments we never obtained any singleton trees. We conclude the description of our method by noting that once the new tree distribution θ^{T_t} is obtained, the value of λ is easily updated as λ = argmin_λ Φ(y).
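The elimination update in Eq. (26) is a one-liner in practice; the following illustrative rendering (assumed names) redistributes the mass of the best-explained sample i* to the remaining samples in proportion to P̂.

```python
import numpy as np

def eliminate_sample(p, p_hat, i_star):
    s = p_hat.sum()
    p_new = p + p_hat * p[i_star] / s    # Eq. (26), first case
    p_new[i_star] = 0.0                  # Eq. (26), second case
    return p_new

p = np.array([0.3, 0.4, 0.2])            # current Pr(x_i | theta^T)
p_hat = np.array([0.5, 0.3, 0.2])        # observed probabilities P_hat(x_i)
print(eliminate_sample(p, p_hat, i_star=1))
```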
6 Experiments
We present a comparison of our method with state-of-the-art algorithms. We also use it to learn
pictorial structures for face recognition. Note that our method is efficient in practice due to the
Dataset    TANB       MF           Tree         MT           [26] + MT    Our + MT
Agaricus   100.0±0    99.45±0.004  98.65±0.32   99.98±0.04   100.0±0      100.0±0
Nursery    93.0±0     98.0±0.01    92.17±0.38   99.2±0.02    98.35±0.30   99.28±0.13
Splice     94.9±0.9   -            95.7±0.2     95.5±0.3     95.6±0.42    96.1±0.15
Table 1: Classification accuracies for the datasets used in [21]. The first column shows the name of the dataset.
The subsequent columns show the mean accuracies and the standard deviation over 5 trials of tree-augmented
naive Bayes [10], mixture of factorial distributions [2], single tree classifier [3], mixture of trees with random
initialization (i.e. the numbers reported in [21]), initialization with [26] and initialization with our approach.
Note that our method provides similar accuracies to [21] while using a smaller mixture of trees (see text).
special form of the Hessian matrix (for the log-barrier method) and the Chow-Liu algorithm [3, 21]
(for the projection to tree distributions). In all our experiments, each iteration takes only 5 to 10
minutes (and the number of iterations is equal to the number of trees in the mixture).
Comparison with Previous Work. As mentioned earlier, our approach can be used to obtain a good initialization for the EM algorithm of [21], since it minimizes the ∞-divergence (providing complementary information to the KL-divergence used in [21]). This is in contrast to the random initializations used in the experiments of [21] or the initialization obtained by [26] (which also attempts to minimize the KL-divergence). We consider the task of using the mixture of trees as a classifier: given training data that consists of feature vectors x_i together with class values c_i, the task is to correctly classify previously unseen test feature vectors. Following the protocol of [21], this can be achieved in two ways. For the first type of classifier, we append the feature vector x_i with its class value c_i to obtain a new feature vector x̄_i. We then learn a mixture of trees that predicts the probability of x̄_i. Given a new feature vector x, we assign it the class c that results in the highest probability. For the second type of classifier, we learn a mixture of trees for each class value such that it predicts the probability of a feature vector belonging to that particular class. Once again, given a new feature vector x, we assign it the class c which results in the highest probability.
We tested our approach on the three discrete valued datasets used in [21]. In all our experiments,
we initialized the mixture with a single tree obtained from the Chow-Liu algorithm. We closely
followed the experimental setup of [21] to ensure that the comparisons are fair. Table 1 provides the
accuracy of our approach together with the results reported in [21]. For ?Splice? the first classifier
provides the best results, while ?Agaricus? and ?Nursery? use the second classifier. Note that our
method provides similar accuracies to [21]. More importantly, it uses a smaller mixture of trees to
achieve these results. Specifically, the method of [21] uses 12, 30 and 3 trees for the three datasets
respectively. In contrast our method uses 3-5 trees for "Agaricus", 10-15 trees for "Nursery" and 2
trees for "Splice" (where the number of trees in the mixture was obtained using a validation dataset,
see [21] for details). Furthermore, unlike [21, 26], we obtain better accuracies by using a mixture
of trees instead of a single tree for the "Splice" dataset. It is worth noting that [26] also provided a
small set of initial trees (with comparable size to our method). However, since the trees do not cover
the entire observed distribution, their method provides less accurate results.
Face Recognition. We tested our approach on the task of recognizing faces using the publicly
available dataset1 containing the faces of 11 characters in an episode of "Buffy the Vampire Slayer".
The total number of faces in the dataset is 24,244. For each face we are provided with the location
of 13 facial features (see Fig. 1). Furthermore, for each facial feature, we are also provided with
a vector that represents the appearance of that facial feature [5] (using the normalized grayscale
values present in a circular region of radius 7 centered at the facial feature). As noted in previous
work [5, 18] the task is challenging due to large intra-class variations in expression and lighting
conditions.
Given the appearance vector, the likelihood of each facial feature belonging to a particular character
can be found using logistic regression. However, the relative locations of the facial features also
offer important cues in distinguishing one character from the other (e.g. the width of the eyes or the
distance between an eye and the nose). Typically, in vision systems, this information is not used.
In other words, the so-called bag of visual words model is employed. This is due to the somewhat
counter-intuitive observation made by several researchers that models that employ spatial prior on
the features, e.g. pictorial structures [6], often provide worse recognition accuracies than those that
throw away this information. However, this may be due to the fact that often the structure and
parameters of pictorial structures and other related models are set by hand.
1 Available at http://www.robots.ox.ac.uk/~vgg/research/nface/data.html
Figure 1: The structure of the seven trees learned for 3 of the 11 characters using our method. The red squares
show the position of the facial features while the blue lines indicate the edges. The structure and parameters of
the trees vary significantly, thereby indicating the multimodality of the observed distribution.
Mixture size   0        1        2        3        4        5        6        7
[26]           65.68%   66.05%   66.01%   66.01%   66.08%   66.08%   66.16%   66.20%
Our            65.68%   66.05%   66.65%   66.86%   67.25%   67.48%   67.50%   67.68%
Table 2: Accuracy for the face recognition experiments. The columns indicate the size of the mixture, ranging
from 0 (i.e. the bag of visual words model) to 7 (where the results saturate). Note that our approach, which
minimizes the α-divergence, provides better results than the method of [26], which minimizes KL-divergence.
In order to test whether a spatial model can help improve recognition, we learned a mixture of trees
for each of the characters. The random variables of the trees correspond to the facial features and
their values correspond to the relative location of the facial feature with respect to the center of the
nose. The unary potentials of each random variable is specified using the appearance vectors (i.e.
the likelihood obtained by logistic regression). In order to obtain the pairwise potentials (i.e. the
structure and parameters of the mixture of trees), the faces are normalized to remove global scaling
and in-plane rotation using the location of the facial features. We use the faces found in the first 80%
of the episode to learn the mixture of trees. The faces found in the remaining 20% of the episode
were used as test data. Splitting the dataset in this manner (i.e. a non-random split) ensures that we
do not have any trivial cases where a face found in frame t is used for training and a (very similar)
face found in frame t + 1 is used for testing.
Fig. 1 shows the structure of the trees learned for 3 characters. The structures differ significantly
between characters, which indicates that different spatial priors are dominant for different characters.
Although the structures of the trees for a particular character are similar, they vary considerably in
the parameters. This suggests that the distribution is in fact multimodal and therefore cannot be
represented accurately using a single tree. Although vision researchers have tried to overcome this
problem by using more complex models, e.g. see [4], their use is limited by a lack of efficient
learning algorithms. Table 2 shows the accuracy of the mixture of trees learned by the method
of [26] and our approach. In this experiment, refining the mixture of trees using the EM algorithm
of [21] did not improve the results. This is due to the fact that the training and testing data differ
significantly (due to non-random splits, unlike the previous experiments which used random splits of
the UCI datasets). In fact, when we split the face dataset randomly, we found that the EM algorithm
did help. However, classification problems simulated using random splits of video frames are rare
in real-world applications. Since [26] tries to minimize the KL divergence, it mostly tries to explain
the dominant mode of the observed distribution. This is evident in the fact that the accuracy of the
mixture of trees does not increase significantly as the size of the mixture increases (see table 2, first
row). In contrast, the minimization of α-divergence provides a diverse set of trees that attempt to
explain the entire distribution thereby providing significantly better results (table 2, second row).
7 Discussion
We formulated the problem of obtaining a small mixture of trees by minimizing the α-divergence
within the fractional covering framework. Our experiments indicate that the suitably modified fractional covering algorithm provides accurate models. We believe that our approach offers a natural
framework for addressing the problem of minimizing α-divergence and could prove useful for other
classes of mixture models, for example mixtures of trees in log-probability space for which there
exist several efficient and accurate inference algorithms [16, 27]. There also appears to be a connection between fractional covering (proposed in the theory community) and Discrete AdaBoost [7, 9]
(proposed in the machine learning community) that merits further exploration.
References
[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] P. Cheeseman and J. Stutz. Bayesian classification (AutoClass): Theory and results. In KDD, pages 153-180, 1995.
[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467, 1968.
[4] D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial priors for parts-based recognition using statistical models. In CVPR, 2005.
[5] M. Everingham, J. Sivic, and A. Zisserman. Hello! My name is... Buffy - Automatic naming of characters in TV video. In BMVC, 2006.
[6] M. Fischler and R. Elschlager. The representation and matching of pictorial structures. TC, 22:67-92, January 1973.
[7] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[8] B. Frey, R. Patrascu, T. Jaakkola, and J. Moran. Sequentially fitting inclusive trees for inference in noisy-OR networks. In NIPS, 2000.
[9] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Annals of Statistics, 28(2):337-407, 2000.
[10] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29:131-163, 1997.
[11] S. Ioffe and D. Forsyth. Human tracking with mixtures of trees. In ICCV, pages 690-695, 2001.
[12] S. Ioffe and D. Forsyth. Mixtures of trees for object recognition. In CVPR, pages 180-185, 2001.
[13] Y. Jing, V. Pavlovic, and J. Rehg. Boosted Bayesian network classifiers. Machine Learning, 73(2):155-184, 2008.
[14] S. Kirschner and P. Smyth. Infinite mixture of trees. In ICML, pages 417-423, 2007.
[15] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. In STOC, 1999.
[16] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PAMI, 2006.
[17] M. P. Kumar and D. Koller. MAP estimation of semi-metric MRFs via hierarchical graph cuts. In UAI, 2009.
[18] M. P. Kumar, P. Torr, and A. Zisserman. An invariant large margin nearest neighbour classifier. In ICCV, 2007.
[19] Y. Lin, S. Zhu, D. Lee, and B. Taskar. Learning sparse Markov network structure via ensemble-of-trees models. In AISTATS, 2009.
[20] M. Meila and T. Jaakkola. Tractable Bayesian learning of tree belief networks. In UAI, 2000.
[21] M. Meila and M. Jordan. Learning with a mixture of trees. JMLR, 1:1-48, 2000.
[22] T. Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
[23] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[24] S. Plotkin, D. Shmoys, and E. Tardos. Fast approximation algorithms for fractional packing and covering problems. Mathematics of Operations Research, 20:257-301, 1995.
[25] A. Renyi. On measures of information and entropy. In Berkeley Symposium on Mathematics, Statistics and Probability, pages 547-561, 1961.
[26] S. Rosset and E. Segal. Boosting density estimation. In NIPS, 2002.
[27] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51:2313-2335, 2005.
Learning Label Embeddings for Nearest-Neighbor
Multi-class Classification with an Application to
Speech Recognition
Natasha Singh-Miller
Massachusetts Institute of Technology
Cambridge, MA
[email protected]
Michael Collins
Massachusetts Institute of Technology
Cambridge, MA
[email protected]
Abstract
We consider the problem of using nearest neighbor methods to provide a conditional probability estimate, P (y|a), when the number of labels y is large and the
labels share some underlying structure. We propose a method for learning label
embeddings (similar to error-correcting output codes (ECOCs)) to model the similarity between labels within a nearest neighbor framework. The learned ECOCs
and nearest neighbor information are used to provide conditional probability estimates. We apply these estimates to the problem of acoustic modeling for speech
recognition. We demonstrate significant improvements in terms of word error rate
(WER) on a lecture recognition task over a state-of-the-art baseline GMM model.
1 Introduction
Recent work has focused on the learning of similarity metrics within the context of nearest-neighbor
(NN) classification [7, 8, 12, 15]. These approaches learn an embedding (for example a linear
projection) of input points, and give significant improvements in the performance of NN classifiers.
In this paper we focus on the application of NN methods to multi-class problems, where the number
of possible labels is large, and where there is significant structure within the space of possible labels.
We describe an approach that induces prototype vectors My ∈ ℝ^L (similar to error-correcting
output codes (ECOCs)) for each label y, from a set of training examples {(ai , yi )} for i = 1 . . . N .
The prototype vectors are embedded within a NN model that estimates P (y|a); the vectors are
learned using a leave-one-out estimate of conditional log-likelihood (CLL) derived from the training
examples. The end result is a method that embeds labels y into ?L in a way that significantly
improves conditional log-likelihood estimates for multi-class problems under a NN classifier.
The application we focus on is acoustic modeling for speech recognition, where each input a ∈ ℝ^D
is a vector of measured acoustic features, and each label y ∈ Y is an acoustic-phonetic label. As
is common in speech recognition applications, the size of the label space Y is large (in our experiments we have 1871 possible labels), and there is significant structure within the labels: many
acoustic-phonetic labels are highly correlated or confusable, and many share underlying phonological features. We describe experiments measuring both conditional log-likelihood of test data, and
word error rates when the method is incorporated within a full speech recogniser. In both settings the
experiments show significant improvements for the ECOC method over both baseline NN methods
(e.g., the method of [8]), as well as Gaussian mixture models (GMMs), as conventionally used in
speech recognition systems.
While our experiments are on speech recognition, the method should be relevant to other domains
which involve large multi-class problems with structured labels, for example problems in natural
language processing, or in computer vision (e.g., see [14] for a recent use of neighborhood com-
ponents analysis (NCA) [8] within an object-recognition task with a very large number of object
labels). We note also that the approach is relatively efficient: our model is trained on around 11
million training examples.
2 Related Work
Several pieces of recent work have considered the learning of feature space embeddings with the
goal of optimizing the performance of nearest-neighbor classifiers [7, 8, 12, 15]. We make use of
the formalism of [8] as the starting point in our work. The central contrast between our work and
this previous work is that we learn an embedding of the labels in a multi-class problem; as we will
see, this gives significant improvements in performance when nearest-neighbor methods are applied
to multi-class problems arising in the context of speech recognition.
Our work is related to previous work on error-correcting output codes for multi-class problems.
[1, 2, 4, 9] describe error-correcting output codes; more recently [2, 3, 11] have described algorithms
for learning ECOCs. Our work differs from previous work in that ECOC codes are learned within
a nearest-neighbor framework. Also, we learn the ECOC codes in order to model the underlying
structure of the label space and not specifically to combine the results of multiple classifiers.
3 Background
The goal of our work is to derive a model that estimates P(y|a) where a ∈ ℝ^D is a feature vector
representing some input, and y is a label drawn from a set of possible labels Y. The parameters of
our model are estimated using training examples {(a1 , y1 ), ..., (aN , yN )}. In general the training
criterion will be closely related to the conditional log-likelihood of the training points:
N
X
log P (yi |ai )
i=1
We choose to optimize the log-likelihood rather than simple classification error, because these estimates will be applied within a larger system, in our case a speech recognizer, where the probabilities
will be propagated throughout the recognition model; hence it is important for the model to provide
well-calibrated probability estimates.
For the speech recognition application considered in this paper, Y consists of 1871 acoustic-phonetic
classes that may be highly correlated with one another. Leveraging structure in the label space will
be crucial to providing good estimates of P (y|a); we would like to learn the inherent structure
of the label space automatically. Note in addition that efficiency is important within the speech
recognition application: in our experiments we make use of around 11 million training samples,
while the dimensionality of the data is D = 50.
In particular, we will develop nearest-neighbor methods that give an efficient estimate of P (y|a).
As a first baseline approach?and as a starting point for the methods we develop?consider the
neighbor components analysis (NCA) method introduced by [8]. In NCA, for any test point a, a
distribution γ(j|a) over the training examples is defined as follows, where γ(j|a) decreases rapidly
as the distance between a and aj increases:

    γ(j|a) = exp(−||a − aj||^2) / ∑_{m=1}^N exp(−||a − am||^2)    (1)
The estimate of P (y|a) is then defined as follows:
    Pnca(y|a) = ∑_{i=1, yi=y}^N γ(i|a)    (2)
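For concreteness, Eqs. 1 and 2 amount to a softmax over negative squared distances followed by summing the weights of same-label neighbors. The following numpy sketch is ours, not the authors' code, with routine numerical stabilization added:

```python
import numpy as np

def nca_label_probs(a, train_X, train_y, labels):
    # gamma(j|a) from Eq. 1: softmax of negative squared distances.
    logits = -np.sum((train_X - a) ** 2, axis=1)
    logits -= logits.max()            # numerical stability
    gamma = np.exp(logits)
    gamma /= gamma.sum()
    # P_nca(y|a) from Eq. 2: total weight of neighbors carrying label y.
    return {y: gamma[train_y == y].sum() for y in labels}
```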
In NCA the original training data consists of points (xi, yi) for i = 1...N, where xi ∈ ℝ^{D'},
with D' typically larger than D. The method learns a projection matrix A that defines the modified
representation ai = Axi (the same transformation is applied to test points). The matrix A is learned
from training examples, to optimize log-likelihood under the model in Eq. 2.
In our experiments we assume that a = Ax for some underlying representation x and a projection
matrix A that has been learned using NCA to optimize the log-likelihood of the training set. As
a result the matrix A, and consequently the representation a, are well-calibrated in terms of using
nearest neighbors to estimate P (y|a) through Eq. 2. A first baseline method for our problem is
therefore to directly use the estimates defined by Eq. 2.
We will, however, see that this baseline method performs poorly at providing estimates of P (y|a)
within the speech recognition application. Importantly, the model fails to exploit the underlying
structure or correlations within the label space. For example, consider a test point that has many
neighbors with the phonemic label /s/. This should be evidence that closely related phonemes,
/sh/ for instance, should also get a relatively high probability under the model, but the model is
unable to capture this effect.
As a second baseline, an alternative method for estimating P (y|a) using nearest neighbor information is the following:
    Pk(y|a) = (number of k-nearest neighbors of a in the training set with label y) / k
Here the choice of k is crucial. A small k will be very sensitive to noise and necessarily lead to
many classes receiving a probability of zero, which is undesirable for our application. On the other
hand, if k is too large, samples from far outside the neighborhood of a will influence Pk (y|a). We
will describe a baseline method that interpolates estimates from several different values of k. This
baseline will be useful with our approach, but again suffers from the fact that it does not model the
underlying structure of the label space.
4 Error-Correcting Output Codes for Nearest-Neighbor Classifiers
We now describe a model that uses error correcting output codes to explicitly represent and learn the
underlying structure of the label space Y. For each label y, we define My ∈ ℝ^L to be a prototype
vector. We assume that the inner product ⟨My, Mz⟩ will in some sense represent the similarity
between labels y and z. The vectors My will be learned automatically, effectively representing an
embedding of the labels in ?L . In this section we first describe the structure of the model, and then
describe a method for training the parameters of the model (i.e., learning the prototype vectors My ).
4.1 ECOC Model
The ECOC model is defined as follows. When considering a test sample a, we first assign weights
γ(j|a) to points aj from the training set through the NCA definition in Eq. 1. Let M be a matrix
that contains all the prototype vectors My as its rows. We can then construct a vector H(a; M) that
uses the weights γ(j|a) and the true labels of the training samples to calculate the expected value of
the output code representing a:

    H(a; M) = ∑_{j=1}^N γ(j|a) M_{yj}
Given this definition of H(a; M), our estimate under the ECOC model is defined as follows:
    Pecoc(y|a; M) = exp(⟨My, H(a; M)⟩) / ∑_{y'∈Y} exp(⟨My', H(a; M)⟩)
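A small numpy sketch (ours, under the same notational assumptions, not the authors' code) of the ECOC estimate, computing H(a; M) and the softmax over labels:

```python
import numpy as np

def ecoc_probs(a, train_X, train_y, M):
    # M has one row per label (shape: num_labels x L); train_y holds
    # integer label indices into the rows of M.
    logits = -np.sum((train_X - a) ** 2, axis=1)
    logits -= logits.max()                  # numerical stability
    gamma = np.exp(logits)
    gamma /= gamma.sum()                    # gamma(j|a), Eq. 1
    H = gamma @ M[train_y]                  # expected output code H(a; M)
    scores = M @ H                          # <M_y, H(a; M)> for every y
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()                      # P_ecoc(y | a; M)
```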
L             2        10       20       30       40       50       60
average CLL   -4.388   -2.748   -2.580   -2.454   -2.432   -2.470   -2.481
Table 1: Average CLL achieved by Pecoc over DevSet1 for different values of L
This distribution assigns most of the probability for a sample vector a to classes whose prototype vectors have a large inner product with H(a; M). All labels receive a non-zero weight under
Pecoc (y|a; M).
4.2 Training the ECOC Model
We now describe a method for estimating the ECOC vectors My in the model. As in [8] the method
uses a leave-one-out optimization criterion, which is particularly convenient within nearest-neighbor
approaches. The optimization problem will be to maximize the conditional log-likelihood function
    F(M) = ∑_{i=1}^N log Pecoc^{(loo)}(yi | ai; M)
where Pecoc^{(loo)}(yi | ai; M) is a leave-one-out estimate of the probability of label yi given the input
ai, assuming an ECOC matrix M. This criterion is related to the classification performance of the
training data and also discourages the assignment of very low probability to the correct class.
The estimate Pecoc^{(loo)}(yi | ai; M) is given through the following definitions:
    γ^{(loo)}(j|i) = exp(−||ai − aj||^2) / ∑_{m=1, m≠i}^N exp(−||ai − am||^2)   if i ≠ j, and 0 otherwise

    H^{(loo)}(ai; M) = ∑_{j=1}^N γ^{(loo)}(j|i) M_{yj}

    Pecoc^{(loo)}(y|ai; M) = exp(⟨My, H^{(loo)}(ai; M)⟩) / ∑_{y'∈Y} exp(⟨My', H^{(loo)}(ai; M)⟩)
The criterion F (M) can be optimized using gradient-ascent methods, where the gradient is as follows:
    ∂F(M)/∂Mz = ∆(z) − ∆'(z)

    ∆(z) = ∑_{i=1}^N ∑_{j=1}^N [γ^{(loo)}(j|i)(δ_{z,yi} M_{yj} + δ_{yj,z} M_{yi})]

    ∆'(z) = ∑_{i=1}^N ∑_{y'∈Y} Pecoc^{(loo)}(y'|ai; M) · ∑_{j=1}^N [γ^{(loo)}(j|i)(δ_{z,y'} M_{yj} + δ_{yj,z} M_{y'})]
Model    Average CLL on DevSet1   Perplexity
Pnca     -2.657                   14.25
Pnn      -2.535                   12.61
Pecoc    -2.432                   11.38
Pfull    -2.337                   10.35
Pgmm     -2.299                   9.96
Pmix     -2.165                   8.71

Table 2: Average conditional log-likelihood (CLL) of Pnca, Pnn, Pecoc, Pfull, Pgmm and Pmix on
DevSet1. The corresponding perplexity values are indicated as well, where the perplexity is defined
as e^{−x} given that x is the average CLL.
Here δ_{a,b} = 1 if a = b and δ_{a,b} = 0 if a ≠ b. Since γ^{(loo)}(j|i) will be very small if ||ai − aj||^2 is
large, the gradient calculation can be truncated for such pairs of points, which significantly improves
the efficiency of the method (a similar observation is used in [8]). This optimization is non-convex
and it is possible to converge to a local optimum.
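The objective and its gradient can be written compactly in matrix form. The dense numpy sketch below is our own; it omits the truncation of negligible γ^{(loo)} terms mentioned above, which a practical implementation would add. One can check by differentiating that the returned gradient equals ∆(z) − ∆'(z) stacked over all labels z.

```python
import numpy as np

def loo_cll_and_grad(M, train_X, train_y, num_labels):
    # Leave-one-out conditional log-likelihood F(M) and its gradient.
    # Dense version for clarity; pairs with negligible gamma should be
    # pruned in practice, as the paper notes.
    N = len(train_X)
    d2 = np.sum((train_X[:, None, :] - train_X[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)                 # enforce j != i
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)  # row-wise stability
    G = np.exp(logits)
    G /= G.sum(axis=1, keepdims=True)            # gamma^(loo)(j|i)

    Y1 = np.eye(num_labels)[train_y]             # one-hot labels, (N, C)
    H = G @ M[train_y]                           # H^(loo)(a_i; M), (N, L)
    S = H @ M.T                                  # <M_y, H_i>, (N, C)
    S -= S.max(axis=1, keepdims=True)
    P = np.exp(S)
    P /= P.sum(axis=1, keepdims=True)            # P_ecoc^(loo)(y|a_i; M)

    F = np.sum(np.log(P[np.arange(N), train_y] + 1e-300))
    grad = (Y1 - P).T @ H + Y1.T @ (G.T @ ((Y1 - P) @ M))
    return F, grad
```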
In our experiments we learn the matrix M using conjugate gradient ascent, though alternatives such
as stochastic gradient can also be used. A random initialization of M is used for each experiment.
We select L = 40 as the length of the prototype vectors My . We experimented with different
values of L. The average conditional log-likelihood achieved on a development set of approximately
115,000 samples (DevSet1) is listed in Table 1. The performance of the method improves initially
as the size of L increases, but the objective levels off around L = 40.
5 Experiments on Log-Likelihood
We test our approach on a large-vocabulary lecture recognition task [6]. This is a challenging task
that consists of recognizing college lectures given by multiple speakers. We use the SUMMIT
recognizer [5] that makes use of 1871 distinct class labels. The acoustic vectors we use are 112
dimensional vectors consisting of eight concatenated 14 dimensional vectors of MFCC measurements. These vectors are projected down to 50 dimensions using NCA as described in [13]. This
section describes experiments comparing the ECOC model to several baseline models in terms of
their performance on the conditional log-likelihood of sample acoustic vectors.
The baseline model, Pnn , makes use of estimates Pk (y|a) as defined in section 3. The set K is a set
of integers representing different values for k, the number of nearest neighbors used to evaluate Pk .
Additionally, we assume d functions over the labels, P1(y), ..., Pd(y). (More information on the
functions Pj (y) that we use in our experiments can be found in the appendix. We have found these
functions over the labels are useful within our speech recognition application.) The model is then
defined as
    Pnn(y|a; λ) = ∑_{k∈K} λk Pk(y|a) + ∑_{j=1}^d λ0j Pj(y)

where λk ≥ 0 for all k ∈ K, λ0j ≥ 0 for j = 1, ..., d, and ∑_{k∈K} λk + ∑_{j=1}^d λ0j = 1. The λ values were
estimated using the EM algorithm on a validation set of examples (DevSet2). In our experiments,
we select K = {5, 10, 20, 30, 50, 100, 250, 500, 1000}. Table 2 contains the average conditional log-likelihood achieved on a development set (DevSet1) by Pnca, Pnn and Pecoc. These results show
that Pecoc clearly outperforms these two baseline models.
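The interpolation weights are fit with the standard EM update for a linear combination of fixed component distributions. A minimal sketch of ours, assuming the component probabilities have been precomputed on the validation samples:

```python
import numpy as np

def em_interpolation_weights(component_probs, num_iters=100):
    # component_probs: (num_samples, num_components) array where column c
    # holds P_c(y_i | a_i) for validation sample i under component c.
    n, m = component_probs.shape
    lam = np.full(m, 1.0 / m)
    for _ in range(num_iters):
        weighted = component_probs * lam                        # (n, m)
        post = weighted / weighted.sum(axis=1, keepdims=True)   # E-step
        lam = post.mean(axis=0)                                 # M-step
    return lam
```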
In a second experiment we combined Pecoc with Pnn to create a third model Pfull(y|a). This model
includes information from the nearest neighbors, the output codes, as well as the distributions over
the label space. The model takes the following form:
    Pfull(y|a; λ) = ∑_{k∈K} λk Pk(y|a) + ∑_{j=1}^d λ0j Pj(y) + λecoc Pecoc(y|a; M)
Acoustic Model    WER (DevSet3)   WER (Test Set)
Baseline Model    36.3            35.4
Augmented Model   35.2            34.5
Table 3: WER of recognizer for different acoustic models on the development and test set.
The values of λ here have similar constraints as before and are again optimized using the EM algorithm. Results in Table 2 show that this model gives a further clear improvement over Pecoc.
We also compare ECOC to a GMM model, as conventionally used in speech recognition systems.
The GMM we use is trained using state-of-the-art algorithms with the SUMMIT system [5]. The
GMM defines a generative model Pgmm (a|y); we derive a conditional model as follows:
    Pgmm(y|a) = Pgmm(a|y)^β P(y) / ∑_{y'∈Y} Pgmm(a|y')^β P(y')

The parameter β is selected experimentally to achieve maximum CLL on DevSet2, and P(y) refers
to the prior over the labels calculated directly from their relative proportions in the training set.
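In code, this conversion is a β-scaled acoustic log-score plus log-prior, renormalized over labels; a short sketch of ours:

```python
import numpy as np

def gmm_posterior(log_pxy, log_prior, beta):
    # log_pxy: log P_gmm(a | y) for each label y; beta scales the acoustic
    # score before renormalizing against the label prior (log_prior).
    s = beta * log_pxy + log_prior
    s -= s.max()                    # numerical stability
    p = np.exp(s)
    return p / p.sum()
```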
Table 2 shows that Pfull and Pgmm are close in performance, with Pgmm giving slightly improved
results. A final interpolated model, with similar constraints on the values of λ trained using the EM
algorithm, is as follows:
    Pmix(y|a; λ) = ∑_{k∈K} λk Pk(y|a) + ∑_{j=1}^d λ0j Pj(y) + λecoc Pecoc(y|a; M) + λgmm Pgmm(y|a)
Results for Pmix are shown in the final row in the table. This interpolated model gives a clear
improvement over both the GMM and ECOC models alone. Thus the ECOC model, combined with
additional nearest-neighbor information, can give a clear improvement over state-of-the-art GMMs
on this task.
6 Recognition Experiments
In this section we describe experiments that integrate the ECOC model within a full speech recognition system. We learn parameters λ using both DevSet1 and DevSet2 for Pfull(y|a). However,
we need to derive an estimate for P(a|y) for use by the recognizer. We can do so by using an estimate for P(a|y) proportional to P(y|a)/P(y) [16]. The estimates for P(y) are derived directly from the
proportions of occurrences of each acoustic-phonetic class in the training set.
In our experiments we consider the following two methods for calculating the acoustic model.
• Baseline Model: κ1 log Pgmm(a|y)
• Augmented Model: κ2 log [ (μ Pgmm(y|a) + (1 − μ) Pfull(y|a)) / P(y) ]

The baseline method is just a GMM model with the commonly used scaling parameter κ1. The
augmented model combines Pgmm linearly with Pfull using parameter μ, and the log of the combination is scaled by parameter κ2. The parameters κ1, κ2, and μ are selected using the downhill simplex
algorithm by optimizing WER over a development set [10]. Our development set (DevSet3) consists
of eight hours of data including six speakers and our test set consists of eight hours of data including
five speakers. Results for both methods on the development set and test set are presented in Table 3.
The augmented model outperforms the baseline GMM model. This indicates that the nearest neighbor information along with the ECOC embedding, can significantly improve the acoustic model.
Overall, an absolute reduction of 1.1% in WER on the development set and 0.9% on the test set are
achieved using the augmented acoustic model. These results are significant with p < 0.001 using
the sign test calculated at the utterance level.
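A sketch of the augmented acoustic score as we read it; the symbol names κ1, κ2 and μ are our stand-ins for the scaling and interpolation parameters, which were garbled in the source and are tuned by downhill simplex in the paper.

```python
import numpy as np

def augmented_acoustic_score(p_gmm, p_full, prior, kappa2, mu):
    # Linear interpolation of the GMM and nearest-neighbor/ECOC
    # posteriors, divided by the label prior and log-scaled.
    return kappa2 * np.log((mu * p_gmm + (1.0 - mu) * p_full) / prior)
```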
Figure 1: Plot of 2-dimensional output codes corresponding to 73 acoustic phonetic classes. The
red circles indicate noise and silence classes. The phonemic classes are divided as follows: vowels,
semivowels, nasals, stops and stop closures, fricatives, affricates, and the aspirant /hh/.
7 Discussion
7.1 Plot of a low-dimensional embedding
In order to get a sense of what is learned by the output codes of Pecoc we can plot the output codes
directly. Figure 1 shows a plot of the output codes learned when L = 2. The output codes are learned
for 1871 classes, but only 73 internal acoustic-phonetic classes are shown in the plot for clarity. In
the plot, classes of similar acoustic-phonetic category are shown in the same color and shape. We can
see that items of similar acoustic categories are grouped closely together. For example, the vowels
are close to each other in the bottom left quadrant, while the stop-closures are grouped together in
the top right, the affricates in the top left, and the nasals in the bottom right. The fricatives are a
little more spread out but usually grouped close to another fricative that shares some underlying
phonological feature such as /sh/ and /zh/ which are both palatal and /f/ and /th/ which are
both unvoiced. We can also see specific acoustic properties emerging. For example the voiced stops
/b/, /d/, /g/ are placed close to other voiced items of different acoustic categories.
7.2 Extensions
The ECOC embedding of the label space could also be co-learned with an embedding of the input
acoustic vector space by extending the approach of NCA [8]. It would simply require the reintroduction of the projection matrix A in the weights γ:

    γ(j|x) = exp(−||Ax − Axj||^2) / ∑_{m=1}^N exp(−||Ax − Axm||^2)
H(x; M) and Pecoc would still be defined as in section 4.1. The optimization criterion would now
depend on both A and M. To optimize A, we could again use gradient methods. Co-learning the
two embeddings M and A could potentially lead to further improvements.
8 Conclusion
We have shown that nearest neighbor methods can be used to improve the performance of a GMMbased acoustic model and reduce the WER on a challenging speech recognition task. We have
also developed a model for using error-correcting output codes to represent an embedding of the
acoustic-phonetic label space that helps us capture cross-class information. Future work on this task
could include co-learning an embedding of the input acoustic vector space with the ECOC matrix to
attempt to achieve further gains.
Appendix
We define three distributions based on the prior probabilities, P (y), of the acoustic phonetic classes.
The SUMMIT recognizer makes use of 1871 distinct acoustic phonetic labels [5]. We divide the set
of labels, Y, into three disjoint categories.
• Y^(1) includes labels involving internal phonemic events (e.g. /ay/)
• Y^(2) includes labels involving the transition from one acoustic-phonetic event to another (e.g. /ow/->/ch/)
• Y^(3) includes labels involving only non-phonetic events like noise and silence
We define a distribution P^(1)(y) as follows; distributions P^(2)(y) and P^(3)(y) are defined similarly:

    P^(1)(y) = P(y) / ∑_{y'∈Y^(1)} P(y')   if y ∈ Y^(1), and 0 otherwise
References
[1] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113-141, 2000.
[2] K. Crammer and Y. Singer. Improved output coding for classification using continuous relaxation. In Advances in Neural Information Processing Systems. MIT Press, 2000.
[3] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. Machine Learning, 47(2-3):201-233, 2002.
[4] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.
[5] J. Glass. A probabilistic framework for segment-based speech recognition. Computer, Speech, and Language, 17(2-3):137-152, 2003.
[6] J. Glass, T. J. Hazen, L. Hetherington, and C. Wang. Analysis and processing of lecture audio data: Preliminary investigations. In HLT-NAACL 2004 Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval, pages 9-12, 2004.
[7] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Scholkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 513-520. MIT Press, 2006.
[8] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513-520. MIT Press, 2005.
[9] A. Klautau, N. Jevtic, and A. Orlitsky. On nearest-neighbor error-correcting output codes with application to all-pairs multiclass support vector machines. Journal of Machine Learning Research, 4:1-15, 2003.
[10] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.
[11] O. Pujol, P. Radeva, and J. Vitria. Discriminant ECOC: a heuristic method for application dependent design of error correcting output codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(6), 2006.
[12] R. Salakhutdinov and G. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. AI and Statistics, 2007.
[13] N. Singh-Miller, M. Collins, and T. J. Hazen. Dimensionality reduction for speech recognition using neighborhood components analysis. In Interspeech, 2007.
[14] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. IEEE Computer Vision and Pattern Recognition, June 2008.
[15] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems. MIT Press, 2006.
[16] G. Zavaliagkos, Y. Zhao, R. Schwartz, and J. Makhoul. A hybrid segmental neural net/hidden Markov model system for continuous speech recognition. IEEE Transactions on Speech and Audio Processing, 2(1):151-160, 1994.
Nonparametric Latent Feature Models
for Link Prediction
Thomas L. Griffiths
Psychology and Cognitive Science
University of California
Berkeley, CA 94720
tom [email protected]
Kurt T. Miller
EECS
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
EECS and Statistics
University of California
Berkeley, CA 94720
[email protected]
Abstract
As the availability and importance of relational data, such as the friendships summarized on a social networking website, increases, it becomes increasingly important to have
been considered for use in predicting links in such networks have been relatively
limited. In particular, the machine learning community has focused on latent class
models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue
a similar approach with a richer kind of latent variable, latent features, using a
Bayesian nonparametric approach to simultaneously infer the number of features
at the same time we learn which entities have each feature. Our model combines
these inferred features with known covariates in order to perform link prediction.
We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets.
1 Introduction
Statistical analysis of social networks and other relational data has been an active area of research for
over seventy years and is becoming an increasingly important problem as the scope and availability
of social network datasets increase [1]. In these problems, we observe the interactions between a set
of entities and we wish to extract informative representations that are useful for making predictions
about the entities and their relationships. One basic challenge is link prediction, where we observe
the relationships (or "links") between some pairs of entities in a network (or "graph") and we try
to predict unobserved links. For example, in a social network, we might only know some subset of
people are friends and some are not, and seek to predict which other people are likely to get along.
Our goal is to improve the expressiveness and performance of generative models based on extracting
latent structure representing the properties of individual entities from the observed data, so we will
focus on these kinds of models. This rules out approaches like the popular p? model that uses global
quantities of the graph, such as how many edges or triangles are present [2, 3]. Of the approaches
that do link prediction based on attributes of the individual entities, these can largely be classified
into class-based and feature-based approaches. There are many models that can be placed under
these approaches, so we will focus on the models that are most comparable to our approach.
Most generative models using a class-based representation are based on the stochastic blockmodel,
introduced in [4] and further developed in [5]. In the most basic form of the model, we assume there
are a finite number of classes that entities can belong to and that these classes entirely determine the
structure of the graph, with the probability of a link existing between two entities depending only
on the classes of those entities. In general, these classes are unobserved, and inference reduces to
assigning entities to classes and inferring the class interactions. One of the important issues that arise
in working with this model is determining how many latent classes there are for a given problem.
The Infinite Relational Model (IRM) [6] used methods from nonparametric Bayesian statistics to
tackle this problem, allowing the number of classes to be determined at inference time. The Infinite
Hidden Relational Model [7] further elaborated on this model and the Mixed Membership Stochastic
Blockmodel (MMSB) [8] extended it to allow entities to have mixed memberships.
All these class-based models share a basic limitation in the kinds of relational structure they naturally capture. For example, in a social network, we might find a class which contains ?male high
school athletes? and another which contains ?male high school musicians.? We might believe these
two classes will behave similarly, but with a class-based model, our options are to either merge the
classes or duplicate our knowledge about common aspects of them. In a similar vein, with a limited
amount of data, it might be reasonable to combine these into a single class ?male high school students,? but with more data we would want to split this group into athletes and musicians. For every
new attribute like this that we add, the number of classes would potentially double, quickly leading
to an overabundance of classes. In addition, if someone is both an athlete and a musician, we would
either have to add another class for that or use a mixed membership model, which would say that
the more a student is an athlete, the less he is a musician.
An alternative approach that addresses this problem is to use features to describe the entities. There
could be a separate feature for ?high school student,? ?male,? ?athlete,? and ?musician? and the
presence or absence of each of these features is what defines each person and determines their
relationships. One class of latent-feature models for social networks has been developed by [9, 10,
11], who proposed real-valued vectors as latent representations of the entities in the network where
depending on the model, either the distance, inner product, or weighted combination of the vectors
corresponding to two entities affects the likelihood of there being a link between them. However,
extending our high school student example, we might hope that instead of having arbitrary realvalued features (which are still useful for visualization), we would infer binary features where each
feature could correspond to an attribute like ?male? or ?athlete.? Continuing our earlier example, if
we had a limited amount of data, we might not pick up on a feature like ?athlete.? However, as we
observe more interactions, this could emerge as a clear feature. Instead of doubling the numbers of
classes in our model, we simply add an additional feature. Determining the number of features will
therefore be of extreme importance.
In this paper, we present the nonparametric latent feature relational model, a Bayesian nonparametric model in which each entity has binary-valued latent features that influences its relations. In
addition, the relations depend on a set of known covariates. This model allows us to simultaneously
infer how many latent features there are while at the same time inferring what features each entity
has and how those features influence the observations. This model is strictly more expressive than
the stochastic blockmodel. In Section 2, we describe a simplified version of our model and then
the full model. In Section 3, we discuss how to perform inference. In Section 4, we illustrate the
properties of our model using synthetic data and then show that the greater expressiveness of the
latent feature representation results in improved link prediction on three real datasets. Finally, we
conclude in Section 5.
2
The nonparametric latent feature relational model
Assume we observe the directed relational links between a set of N entities. Let Y be the N × N
binary matrix that contains these links. That is, let yij ≡ Y(i, j) = 1 if we observe a link from
entity i to entity j in that relation and yij = 0 if we observe that there is not a link. Unobserved
links are left unfilled. Our goal will be to learn a model from the observed links such that we can
predict the values of the unfilled entries.
2.1 Basic model
In our basic model, each entity is described by a set of binary features. We are not given these
features a priori and will attempt to infer them. We assume that the probability of having a link
from one entity to another is entirely determined by the combined effect of all pairwise feature
interactions. If there are K features, then let Z be the N × K binary matrix where each row
corresponds to an entity and each column corresponds to a feature such that zik ≡ Z(i, k) = 1 if the
ith entity has feature k and zik = 0 otherwise, and let Zi denote the feature vector corresponding to
entity i. Let W be a K × K real-valued weight matrix where wkk' ≡ W(k, k') is the weight that
affects the probability of there being a link from entity i to entity j if both entity i has feature k and
entity j has feature k'.
We assume that links are independent conditioned on Z and W , and that only the features of entities
i and j influence the probability of a link between those entities. This defines the likelihood
    Pr(Y | Z, W) = ∏_{i,j} Pr(yij | Zi, Zj, W)    (1)
where the product ranges over all pairs of entities. Given the feature matrix Z and weight matrix W ,
the probability that there is a link from entity i to entity j is
    Pr(yij = 1 | Z, W) = σ(Zi W Zj^T) = σ( ∑_{k,k'} zik zjk' wkk' )    (2)
where σ(·) is a function that transforms values on (−∞, ∞) to (0, 1), such as the sigmoid function
σ(x) = 1/(1 + exp(−x)) or the probit function σ(x) = Φ(x). An important aspect of this model is that
all-zero columns of Z do not affect the likelihood. We will take advantage of this in Section 2.2.
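As a small illustration (our own sketch, not the authors' code), the link probability of Eq. 2 with the sigmoid choice of σ:

```python
import numpy as np

def link_prob(Z_i, Z_j, W):
    # Probability of a directed link i -> j under Eq. 2; sigma here is the
    # sigmoid (the probit Phi(x) is the other choice named in the text).
    s = Z_i @ W @ Z_j
    return 1.0 / (1.0 + np.exp(-s))
```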
This model is very flexible. With a single feature per entity, it is equivalent to a stochastic blockmodel. However, since entities can have more than a single feature, the model is more expressive. In
the high school student example, each feature can correspond to an attribute like "male," "musician,"
and "athlete." If we were looking at the relation "friend of" (not necessarily symmetric!), then the
weight at the (athlete, musician) entry of W would correspond to the weight that an athlete would be
a friend of a musician. A positive weight would correspond to an increased probability, a negative
weight a decreased probability, and a zero weight would indicate that there is no correlation between
those two features and the observed relation. The more positively correlated features people have,
the more likely they are to be friends. Another advantage of this representation is that if our data
contained observations of students in two distant locations, we could have a geographic feature for
the different locations. While other features such as "athlete" or "musician" might indicate that one
person could be a friend of another, the geographic features could have extremely negative weights
so that people who live far from each other are less likely to be friends. However, the parameters
for the non-geographic features would still be tied for all people, allowing us to make stronger inferences about how they influence the relations. Class-based models would need an abundance of
classes to capture these effects and would not have the same kind of parameter sharing.
Given the full set of observations Y, we wish to infer the posterior distribution of the feature matrix
Z and the weights W. We do this using Bayes' theorem, p(Z, W | Y) ∝ p(Y | Z, W) p(Z) p(W),
where we have placed independent priors on Z and W. Without any prior knowledge about the
features or their weights, a natural prior for W involves placing an independent N(0, σ_w²) prior on
each w_{kk′}. However, placing a prior on Z is more challenging. If we knew how many features there
were, we could place an arbitrary parametric prior on Z. However, we wish to have a flexible prior
that allows us to infer the number of features at the same time as we infer all the entries
in Z. The Indian Buffet Process is such a prior.
2.2 The Indian Buffet Process and the basic generative model
As mentioned in the previous section, any features which are all-zero do not affect the likelihood.
That means that even if we added an infinite number of all-zero features, the likelihood would remain
the same. The Indian Buffet Process (IBP) [12] is a prior on infinite binary matrices such that with
probability one, a feature matrix drawn from it for a finite number of entities will only have a finite
number of non-zero features. Moreover, any feature matrix, no matter how many non-zero features
it contains, has positive probability under the IBP prior. It is therefore a useful nonparametric prior
to place on our latent feature matrix Z.
The generative process to sample matrices from the IBP can be described through a culinary
metaphor that gave the IBP its name. In this metaphor, each row of Z corresponds to a diner at an
Indian buffet and each column corresponds to a dish at the infinitely long buffet. If a customer takes
a particular dish, then the entry that corresponds to the customer's row and the dish's column is a one
and the entry is zero otherwise. The culinary metaphor describes how people choose the dishes. In
the IBP, the first customer chooses a Poisson(α) number of dishes to sample, where α is a parameter
of the IBP. The ith customer tries each previously sampled dish with probability proportional to the
number of people that have already tried the dish and then samples a Poisson(α/i) number of new
dishes. This process is exchangeable, which means that the order in which the customers enter the
restaurant does not affect the configuration of the dishes that people try (up to permutations of the
dishes as described in [12]). This insight leads to a straightforward Gibbs sampler to do posterior
inference that we describe in Section 3.
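The culinary process above translates directly into a sampler for Z. The sketch below is a hypothetical implementation assuming NumPy; it follows the metaphor literally (customer i tries dish k with probability m_k/i, then orders Poisson(α/i) new dishes).

```python
import numpy as np

def sample_ibp(N, alpha, rng=None):
    """Draw a binary feature matrix Z from the IBP, following the culinary metaphor."""
    rng = rng or np.random.default_rng()
    dish_counts = []                 # m_k: how many customers have taken dish k
    taken = []                       # taken[i]: set of dishes sampled by customer i
    for i in range(1, N + 1):
        row = set()
        for k, m_k in enumerate(dish_counts):
            if rng.random() < m_k / i:        # try an existing dish w.p. m_k / i
                row.add(k)
                dish_counts[k] += 1
        for _ in range(rng.poisson(alpha / i)):  # Poisson(alpha/i) new dishes
            dish_counts.append(1)
            row.add(len(dish_counts) - 1)
        taken.append(row)
    Z = np.zeros((N, len(dish_counts)), dtype=int)
    for i, row in enumerate(taken):
        Z[i, list(row)] = 1
    return Z
```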
Using an IBP prior on Z, our basic generative latent feature relational model is:
\[ Z \sim \mathrm{IBP}(\alpha) \]
\[ w_{kk'} \sim N(0, \sigma_w^2) \quad \text{for all } k, k' \text{ for which features } k \text{ and } k' \text{ are non-zero} \]
\[ y_{ij} \sim \sigma\big(Z_i W Z_j^\top\big) \quad \text{for each observation.} \]
2.3 Full nonparametric latent feature relational model
We have described the basic nonparametric latent feature relational model. We now combine it
with ideas from the social network community to get our full model. First, we note that there are
many instances of logit models used in statistical network analysis that make use of covariates in
link prediction [2]. Here we will focus on a subset of ideas discussed in [10]. Let Xij be a vector
that influences the relation yij , let Xp,i be a vector of known attributes of entity i when it is the
parent of a link, and let Xc,i be a vector of known attributes of entity i when it is a child of a link.
For example, in Section 4.2, when Y represents relationships amongst countries, X_ij is a scalar
representing the geographic similarity between countries (X_ij = exp(−d(i, j))) since this could
influence the relationships, and X_{p,i} = X_{c,i} is a set of known features associated with each country
(X_{p,i} and X_{c,i} would be distinct if we had covariates specific to each country's roles). We then let
c be a normally distributed scalar and β, β_p, β_c, a, and b be normally distributed vectors in our full
model, in which
\[ \Pr(y_{ij} = 1 \mid Z, W, X, \beta, a, b, c) \;=\; \sigma\Big( Z_i W Z_j^\top + \beta^\top X_{ij} + (\beta_p^\top X_{p,i} + a_i) + (\beta_c^\top X_{c,j} + b_j) + c \Big). \tag{3} \]
If we do not have information about one or all of X, Xp , and Xc , we drop the corresponding term(s).
In this model, c is a global offset that affects the default likelihood of a relation and ai and bj are
entity and role specific offsets.
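As an illustration of Equation (3), the sketch below assembles the linear predictor and squashes it; covariate terms are dropped when the corresponding arguments are absent, mirroring the text. All argument names are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def full_link_probability(Z, W, i, j, c=0.0, X=None, beta=None,
                          Xp=None, beta_p=None, a=None,
                          Xc=None, beta_c=None, b=None):
    """Pr(y_ij = 1 | ...) from Equation (3); any covariate term whose
    arguments are missing is dropped, as described in the text."""
    eta = Z[i] @ W @ Z[j] + c
    if X is not None:                  # pair-specific covariates X_ij
        eta += beta @ np.atleast_1d(X[i, j])
    if Xp is not None:                 # parent-role covariates and offset a_i
        eta += beta_p @ Xp[i] + a[i]
    if Xc is not None:                 # child-role covariates and offset b_j
        eta += beta_c @ Xc[j] + b[j]
    return sigmoid(eta)
```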
So far, we have only considered the case of observing a single relation. It is not uncommon to
observe multiple relations for the same set of entities. For example, in addition to the ?friend of?
relation, we might also observe the "admires" and "collaborates with" relations. We still believe that
each entity has a single set of features that determines all its relations, but these features will not
affect each relation in the same way. If we are given m relations, label them Y^1, Y^2, ..., Y^m. We
will use the same features for each relation, but we will use an independent weight matrix W^i for
each relation Y^i. In addition, covariates might be relation specific or common across all relations.
Regardless, they will interact in different ways in each relation. Our full model is now
\[ \Pr(Y^1, \ldots, Y^m \mid Z, \{W^i, X^i, \beta^i, a^i, b^i, c^i\}_{i=1}^m) \;=\; \prod_{i=1}^m \Pr(Y^i \mid Z, W^i, X^i, \beta^i, a^i, b^i, c^i). \]
2.4 Variations of the nonparametric latent feature relational model
The model that we have defined is for directed graphs in which the matrix Y^i is not assumed to be
symmetric. For undirected graphs, we would like to define a symmetric model. This is easy to do by
restricting W^i to be symmetric. If we further believe that the features we learn should not interact,
we can assume that W^i is diagonal.
2.5 Related nonparametric latent feature models
There are two models related to our nonparametric latent feature relational model that both use the
IBP as a prior on binary latent feature matrices. The most closely related model is the Binary Matrix
Factorization (BMF) model of [13]. The BMF is a general model with several concrete variants,
the most relevant of which was used to predict unobserved entries of binary matrices for image
reconstruction and collaborative filtering. If Y is the observed part of a binary matrix, then in this
variant, we assume that Y | U, V, W ∼ σ(U W V^⊤), where σ(·) is the logistic function, U and V are
independent binary matrices drawn from the IBP, and the entries in W are independent draws from a
normal distribution. If Y is an N × N matrix where we assume the rows and columns have the same
features (i.e., U = V), then this special case of their model is equivalent to our basic (covariate-free)
model. While [13] were interested in a more general formalization that is applicable to other tasks,
we have specialized and extended this model for the task of link prediction. The other related model
is the ADCLUS model [14]. This model assumes we are given a symmetric matrix of nonnegative
similarities Y and that Y = Z W Z^⊤ + ε, where Z is drawn from the IBP, W is a diagonal matrix
with entries independently drawn from a Gamma distribution, and ε is independent Gaussian noise.
This model does not allow for arbitrary feature interactions nor does it allow for negative feature
correlations.
3 Inference
Exact inference in our nonparametric latent feature relational model is intractable [12]. However,
the IBP prior lends itself nicely to approximate inference via Markov Chain Monte Carlo [15]. We
first describe inference in the single-relation, basic model, later extending it to the full model. In our
basic model, we must do posterior inference on Z and W. Since, with probability one, any sample
of Z will have a finite number of non-zero entries, we can store just the non-zero columns of each
sample of the infinite binary matrix Z. Since we do not have a conjugate prior on W , we must also
sample the corresponding entries of W . Our sampler is as follows:
Given W , resample Z We do this by resampling each row Zi in succession. When sampling
entries in the ith row, we use the fact that the IBP is exchangeable to assume that the ith customer in
the IBP was the last one to enter the buffet. Therefore, when resampling z_ik for non-zero columns
k, if m_k is the number of non-zero entries in column k excluding row i, then
\[ \Pr(z_{ik} = 1 \mid Z_{-ik}, W, Y) \;\propto\; m_k \,\Pr(Y \mid z_{ik} = 1, Z_{-ik}, W). \]
We must also sample z_ik for each of the infinitely many all-zero columns to add features to the
representation. Here, we use the fact that in the IBP, the prior distribution on the number of new
features for the last customer is Poisson(α/N). As described in [12], we must then weight this
by the likelihood term for having that many new features, computing this for 0, 1, ..., k_max new
features for some maximum number of new features k_max and sampling the number of new features
from this normalized distribution. The main difficulty arises because we have not sampled the values
of W for the all-zero columns and we do not have a conjugate prior on W, so we cannot compute
the likelihood term exactly. We can adopt one of the non-conjugate sampling approaches from the
Dirichlet process [16] for this task, or use the suggestion in [13] to include a Metropolis-Hastings step
to propose and either accept or reject some number of new columns and the corresponding weights.
We chose to use a stochastic Monte Carlo approximation of the likelihood. Once the number of new
features is sampled, we must sample the new values in W as described below.
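A sketch of the Gibbs update for a single entry z_ik in a non-zero column follows, assuming a user-supplied log-likelihood log Pr(Y | Z, W); the handling of new all-zero columns (the Poisson(α/N) step and the Monte Carlo likelihood approximation) is omitted, and all names are illustrative.

```python
import numpy as np

def resample_zik(Z, W, Y, i, k, log_lik, rng):
    """One Gibbs update of z_ik for a non-zero column k of the basic model.

    log_lik(Z, W, Y) returns log Pr(Y | Z, W); m_k counts column k excluding row i.
    """
    N = Z.shape[0]
    m_k = Z[:, k].sum() - Z[i, k]
    if m_k == 0:
        Z[i, k] = 0    # singleton column: handled by the new-feature step instead
        return
    log_p = np.empty(2)
    for v in (0, 1):
        Z[i, k] = v
        prior = m_k if v == 1 else N - m_k   # customer i treated as the last to enter
        log_p[v] = np.log(prior) + log_lik(Z, W, Y)
    p1 = 1.0 / (1.0 + np.exp(log_p[0] - log_p[1]))
    Z[i, k] = int(rng.random() < p1)
```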
Given Z, resample W We sequentially resample each of the weights in W that correspond to
non-zero features and drop all weights that correspond to all-zero features. Since we do not have
a conjugate prior on W, we cannot directly sample W from its posterior. If σ(·) is the probit, we
adapt the auxiliary sampling trick from [17] to obtain a Gibbs sampler for the entries of W. If σ(·) is
the logistic function, no such trick exists and we resort to using a Metropolis-Hastings step for each
weight, in which we propose a new weight from a normal distribution centered around the old one.
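For the logistic case, the Metropolis-Hastings step for one weight might look as follows; the proposal scale `step` and the function names are assumptions, and the N(0, σ_w²) prior enters through the log-density term.

```python
import numpy as np

def mh_update_weight(W, k, kp, Z, Y, log_lik, sigma_w, step=0.1, rng=None):
    """Metropolis-Hastings update of w_{kk'} under a N(0, sigma_w^2) prior,
    using a symmetric Gaussian random-walk proposal centered at the old value."""
    rng = rng or np.random.default_rng()
    old = W[k, kp]
    log_old = log_lik(Z, W, Y) - 0.5 * (old / sigma_w) ** 2
    W[k, kp] = old + step * rng.standard_normal()   # symmetric proposal
    log_new = log_lik(Z, W, Y) - 0.5 * (W[k, kp] / sigma_w) ** 2
    if np.log(rng.random()) >= log_new - log_old:
        W[k, kp] = old                              # reject: restore old weight
```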
Hyperparameters We can also place conjugate priors on the hyperparameters α and σ_w and perform posterior inference on them. We use the approach from [18] for sampling α.
Figure 1: Features and corresponding observations for synthetic data. In (a), we show features that
could be explained by a latent-class model, which then produces the observation matrix in (b). White
indicates ones, black indicates zeros, and gray indicates held-out values. In (c), we show
the feature matrix of our other synthetic dataset along with the corresponding observations in (d).
(e) shows the feature matrix of a randomly chosen sample from our Gibbs sampler.
Multiple relations In the case of multiple relations, we can sample W^i given Z independently for
each i as above. However, when we resample Z, we must compute
\[ \Pr(z_{ik} = 1 \mid Z_{-ik}, \{W^i, Y^i\}_{i=1}^m) \;\propto\; m_k \prod_{i=1}^m \Pr(Y^i \mid z_{ik} = 1, Z_{-ik}, W^i). \]
Full model In the full model, we must also update {β^i, β_p^i, β_c^i, a^i, b^i, c^i}_{i=1}^m. By conditioning on
these, the update equations for Z and W^i take the same form, but with Equation (3) used for the
likelihood. When we condition on Z and W^i, the posterior updates for (β^i, β_p^i, β_c^i, a^i, b^i, c^i) are
independent and can be derived from the updates in [10].
Implementation details Despite the ease of writing down the sampler, samplers for the IBP often
mix slowly due to the extremely large state space full of local optima. Even if we limited Z to have
K columns, there would be 2^{NK} potential feature matrices. In an effort to explore the space better, we can
augment the Gibbs sampler for Z by introducing split-merge style moves as described in [13] as well
as perform annealing or tempering to smooth out the likelihood. However, we found that the most
significant improvement came from using a good initialization. A key insight that was mentioned
in Section 2.1 is that the stochastic blockmodel is a special case of our model in which each entity
only has a single feature. Stochastic blockmodels have been shown to perform well for statistical
network analysis, so they seem like a reasonable way to initialize the feature matrix. In the results
section, we compare the performance of a random initialization to one in which Z is initialized with
a matrix learned by the Infinite Relational Model (IRM). To get our initialization point, we ran the
Gibbs sampler for the IRM for only 15 iterations and used the resulting class assignments to seed Z.
4 Results
We first qualitatively analyze the strengths and weaknesses of our model on synthetic data, establishing what we can and cannot expect from it. We then compare our model against two class-based
generative models, the Infinite Relational Model (IRM) [6] and the Mixed Membership Stochastic
Blockmodel (MMSB) [8], on two datasets from the original IRM paper and a NIPS coauthorship
dataset, establishing that our model does better than the best of those models on those datasets.
4.1 Synthetic data
We first focus on the qualitative performance of our model. We applied the basic model to two very
simple synthetic datasets generated from known features. These datasets were simple enough that
the basic model could attain 100% accuracy on held-out data, but were different enough to address
the qualitative characteristics of the latent features inferred. In one dataset, the features were the
class-based features seen in Figure 1(a) and in the other, we used the features in Figure 1(c). The
observations derived from these features can be seen in Figure 1(b) and Figure 1(d), respectively.
On both datasets, we initialized Z and W randomly. With the very simple, class-based model, 50%
of the sampled feature matrices were identical to the generating feature matrix with another 25%
differing by a single bit. However, on the other dataset, only 25% of the samples were at most a
single bit different than the true matrix. It is not the case that the other 75% of the samples were bad
samples, though. A randomly chosen sample of Z is shown in Figure 1(e). Though this matrix is
different from the true generating features, with the appropriate weight matrix it predicts just as well
as the true feature matrix. These tests show that while our latent feature approach is able to learn
features that explain the data well, due to subtle interactions between sets of features and weights,
the features themselves will not in general correspond to interpretable features. However, we can
expect the inferred features to do a good job explaining the data. This also indicates that there are
many local optima in the feature space, further motivating the need for good initialization.
4.2 Multi-relational datasets
In the original IRM paper, the IRM was applied to several datasets [6]. These include a dataset
containing 54 relations of 14 countries (such as "exports to" and "protests") along with 90 given
features of the countries [19], and a dataset containing 26 kinship relationships of 104 people in the
Alyawarra tribe in Central Australia [20]. See [6, 19, 20] for more details on the datasets.
Our goal in applying the latent feature relational model to these datasets was to demonstrate the
effectiveness of our algorithm when compared to two established class-based algorithms, the IRM
and the MMSB, and to demonstrate the effectiveness of our full algorithm. For the Alyawarra
dataset, we had no known covariates. For the countries dataset, Xp = Xc was the set of known
features of the countries and X was the country distance similarity matrix described in Section 2.3.
As mentioned in the synthetic data section, the inferred features do not necessarily have any interpretable meaning, so we restrict ourselves to a quantitative comparison. For each dataset, we held
out 20% of the data during training and we report the AUC, the area under the ROC (Receiver Operating Characteristic) curve, for the held-out data [21]. We report results for inferring a global set of
features for all relations as described in Section 2.3, which we refer to as "global," as well as results
when a different set of features is independently learned for each relation and the AUCs of all
relations are then averaged together, which we refer to as "single." In addition, we tried initializing our
sampler for the latent feature relational model with either a random feature matrix ("LFRM rand")
or class-based features from the IRM ("LFRM w/ IRM"). We ran our sampler for 1000 iterations for
each configuration using a logistic squashing function (though results using the probit are similar),
throwing out the first 200 samples as burn-in. Each method was given five random restarts.
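For reference, a minimal sketch of the held-out AUC computation, assuming scikit-learn's roc_auc_score and a list of post-burn-in (Z, W) samples; this mirrors the evaluation protocol described above but is not code from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def heldout_auc(samples, heldout_idx, y_true):
    """AUC over held-out entries, averaging Pr(y_ij = 1) over posterior samples.

    samples: list of (Z, W) pairs from the Gibbs chain after burn-in.
    heldout_idx: integer array of shape (m, 2) of (i, j) pairs; y_true: their labels.
    """
    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))
    probs = np.zeros(len(y_true))
    for Z, W in samples:
        P = sigmoid(np.einsum('ik,kl,jl->ij', Z, W, Z))   # full N x N link probabilities
        probs += P[tuple(heldout_idx.T)]
    return roc_auc_score(y_true, probs / len(samples))
```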
Table 1: AUC on the countries and kinship datasets. Bold identifies the best performance.

              | Countries single | Countries global | Alyawarra single | Alyawarra global
LFRM w/ IRM   | 0.8521 ± 0.0035  | 0.8772 ± 0.0075  | 0.9346 ± 0.0013  | 0.9183 ± 0.0108
LFRM rand     | 0.8529 ± 0.0037  | 0.7067 ± 0.0534  | 0.9443 ± 0.0018  | 0.7127 ± 0.030
IRM           | 0.8423 ± 0.0034  | 0.8500 ± 0.0033  | 0.9310 ± 0.0023  | 0.8943 ± 0.0300
MMSB          | 0.8212 ± 0.0032  | 0.8643 ± 0.0077  | 0.9005 ± 0.0022  | 0.9143 ± 0.0097
Results of these tests are in Table 1. As can be seen, the LFRM with class-based initialization outperforms both the IRM and MMSB. On the individual relations ("single"), the LFRM with random
initialization also does well, beating the IRM initialization on both datasets. However, the random
initialization does poorly at inferring the global features due to the coupling of features and the
weights for each of the relations. This highlights the importance of proper initialization. To demonstrate that the covariates are helping, but that even without them our model does well, we ran the
global LFRM with class-based initialization without covariates on the countries dataset; the AUC
dropped to 0.8713 ± 0.0105, which is still the best performance.
On the countries data, the latent feature model inferred on average 5-7 features when seeded with
the IRM and 8-9 with a random initialization. On the kinship data, it inferred 9-11 features when
seeded with the IRM and 13-19 when seeded randomly.
[Figure 2 panels: (a) True relations, (b) Feature predictions, (c) IRM predictions, (d) MMSB predictions.]
Figure 2: Predictions for all algorithms on the NIPS coauthorship dataset. In (a), a white entry
means two people wrote a paper together. In (b-d), the lighter an entry, the more likely that algorithm
predicted the corresponding people would interact.
4.3 Predicting NIPS coauthorship
As our final example, highlighting the expressiveness of the latent feature relational model, we used
the coauthorship data from the NIPS dataset compiled in [22]. This dataset contains a list of all
papers and authors from NIPS 1-17. We took the 234 authors who had published with the most
other people and looked at their coauthorship information. The symmetric coauthor graph can be
seen in Figure 2(a). We again learned models for the latent feature relational model, the IRM and the
MMSB training on 80% of the data and using the remaining 20% as a test set. For the latent feature
model, since the coauthorship relationship is symmetric, we learned a full, symmetric weight matrix
W as described in Section 2.4. We did not use any covariates. A visualization of the predictions for
each of these algorithms can be seen in Figure 2(b-d). Figure 2 really drives home the difference
in expressiveness. Stochastic blockmodels are required to group authors into classes, and assume
that all members of a class interact similarly. For visualization, we have ordered the authors by
the groups the IRM found. These groups can clearly be seen in Figure 2(c). The MMSB, by
allowing partial membership, is not as restrictive. However, on this dataset, the IRM outperformed
it. The latent feature relational model is the most expressive of the models and is able to much more
faithfully reproduce the coauthorship network.
The latent feature relational model also quantitatively outperformed the IRM and MMSB. We again
ran our sampler for 1000 samples initializing with either a random feature matrix or a class-based
feature matrix from the IRM and reported the AUC on the held-out data. Using five restarts for each
method, the LFRM w/ IRM performed best with an AUC of 0.9509, the LFRM rand was next with
0.9466, and much lower were the IRM at 0.8906 and the MMSB at 0.8705 (all at most ±0.013). On
average, the latent feature relational model inferred 20-22 features when initialized with the IRM
and 38-44 features when initialized randomly.
5 Conclusion
We have introduced the nonparametric latent feature relational model, an expressive nonparametric
model for inferring latent binary features in relational entities. This model combines approaches
from the statistical network analysis community, which have emphasized feature-based methods for
analyzing network data, with ideas from Bayesian nonparametrics in order to simultaneously infer
the number of latent binary features at the same time we infer the features of each entity and how
those features interact. Existing class-based approaches infer latent structure that is a special case
of what can be inferred by this model. As a consequence, our model is strictly more expressive
than these approaches, and can use the solutions produced by these approaches for initialization.
We showed empirically that the nonparametric latent feature model performs well at link prediction
on several different datasets, including datasets that were originally used to argue for class-based
approaches. The success of this model can be traced to its richer representations, which make it able
to capture subtle patterns of interaction much better than class-based models.
Acknowledgments KTM was supported by the U.S. Department of Energy contract DE-AC52-07NA27344 through Lawrence Livermore National Laboratory. TLG was supported by grant number FA9550-07-1-0351 from the Air Force Office of Scientific Research.
References
[1] Stanley Wasserman and Katherine Faust. Social Network Analysis: Methods and Applications. Cambridge University Press, 1994.
[2] Stanley Wasserman and Philippa Pattison. Logit models and logistic regressions for social networks: I. An introduction to Markov random graphs and p*. Psychometrika, 61(3):401–425, 1996.
[3] Garry Robins, Tom Snijders, Peng Wang, Mark Handcock, and Philippa Pattison. Recent developments in exponential random graph (p*) models for social networks. Social Networks, 29(2):192–215, May 2007.
[4] Yuchung J. Wang and George Y. Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.
[5] Krzysztof Nowicki and Tom A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001.
[6] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the American Association for Artificial Intelligence (AAAI), 2006.
[7] Zhao Xu, Volker Tresp, Kai Yu, and Hans-Peter Kriegel. Infinite hidden relational models. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
[8] Edoardo M. Airoldi, David M. Blei, Eric P. Xing, and Stephen E. Fienberg. Mixed membership stochastic block models. In D. Koller, Y. Bengio, D. Schuurmans, and L. Bottou, editors, Advances in Neural Information Processing Systems (NIPS) 21. Red Hook, NY: Curran Associates, 2009.
[9] Peter D. Hoff, Adrian E. Raftery, and Mark S. Handcock. Latent space approaches to social network analysis. Journal of the American Statistical Association, 97(460):1090–1098, 2002.
[10] Peter D. Hoff. Bilinear mixed-effects models for dyadic data. Journal of the American Statistical Association, 100(469):286–295, 2005.
[11] Peter D. Hoff. Multiplicative latent factor models for description and prediction of social networks. Computational and Mathematical Organization Theory, 2008.
[12] Thomas L. Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian Buffet Process. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems (NIPS) 18. Cambridge, MA: MIT Press, 2006.
[13] Edward Meeds, Zoubin Ghahramani, Radford Neal, and Sam Roweis. Modeling dyadic data with binary latent factors. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems (NIPS) 19. Cambridge, MA: MIT Press, 2007.
[14] Daniel L. Navarro and Thomas L. Griffiths. Latent features in similarity judgment: A nonparametric Bayesian approach. Neural Computation, 20(11):2597–2628, 2008.
[15] Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer, 2004.
[16] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, 2000.
[17] James H. Albert and Siddhartha Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669–679, 1993.
[18] Dilan Görür, Frank Jäkel, and Carl Edward Rasmussen. A choice model with infinitely many latent features. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.
[19] Rudolph J. Rummel. Dimensionality of nations project: Attributes of nations and behavior of nation dyads, 1950–1965. ICPSR data file, 1999.
[20] Woodrow W. Denham. The Detection of Patterns in Alyawarra Nonverbal Behavior. PhD thesis, University of Washington, 1973.
[21] Jin Huang and Charles X. Ling. Using AUC and accuracy in evaluating learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 17(3):299–310, 2005.
[22] Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. Euclidean embedding of co-occurrence data. The Journal of Machine Learning Research, 8:2265–2295, 2007.
Sparse Metric Learning via Smooth Optimization
Yiming Ying†, Kaizhu Huang‡, and Colin Campbell†
†Department of Engineering Mathematics, University of Bristol,
Bristol BS8 1TR, United Kingdom
‡National Laboratory of Pattern Recognition, Institute of Automation,
The Chinese Academy of Sciences, 100190 Beijing, China
Abstract
In this paper we study the problem of learning a low-rank (sparse) distance matrix. We propose a novel metric learning model which can simultaneously conduct dimension reduction and learn a distance matrix. The sparse representation
involves a mixed-norm regularization which is non-convex. We then show that
it can be equivalently formulated as a convex saddle (min-max) problem. From
this saddle representation, we develop an efficient smooth optimization approach
[17] for sparse metric learning, although the learning model is based on a nondifferentiable loss function. Finally, we run experiments to validate the effectiveness and efficiency of our sparse metric learning model on various datasets.
1 Introduction
For many machine learning algorithms, the choice of a distance metric has a direct impact on their
success. Hence, choosing a good distance metric remains a challenging problem. There has been
much work attempting to exploit a distance metric in many learning settings, e.g. [8, 9, 10, 12, 20,
22, 23, 25]. These methods have successfully indicated that a good distance metric can significantly
improve the performance of k-nearest neighbor classification and k-means clustering, for example.
A good choice of a distance metric generally preserves the distance structure of the data: the distance between examples exhibiting similarity should be relatively smaller, in the transformed space,
than between examples exhibiting dissimilarity. For supervised classification, the label information
indicates whether the pair set is in the same class (similar) or in the different classes (dissimilar). In
semi-supervised clustering, the side information conveys the information that a pair of samples are
similar or dissimilar to each other. Since it is very common that the presented data is contaminated
by noise, especially for high-dimensional datasets, a good distance metric should also be minimally
influenced by noise. In this case, a low-rank distance matrix would produce a better generalization
performance than non-sparse counterparts and provide a much faster and efficient distance calculation for test samples. Hence, a good distance metric should also pursue dimension reduction during
the learning process.
In this paper we present a novel approach to learn a low-rank (sparse) distance matrix. We
first propose in Section 2 a novel metric learning model for estimating the linear transformation (equivalently distance matrix) that combines and retains the advantages of existing methods
[8, 9, 12, 20, 22, 23, 25]. Our method can simultaneously conduct dimension reduction and learn a
low-rank distance matrix. The sparse representation is realized by a mixed-norm regularization used
in various learning settings [1, 18, 21]. We then show that this non-convex mixed-norm regularization framework is equivalent to a convex saddle (min-max) problem. Based on this equivalent representation, we develop, in Section 3, Nesterov's smooth optimization approach [16, 17] for sparse
metric learning using smoothing approximation techniques, although the learning model is based on
a non-differentiable loss function. In Section 4, we demonstrate the effectiveness and efficiency of
our sparse metric learning model with experiments on various datasets.
2 Sparse Distance Matrix Learning Model
We begin by introducing necessary notation. Let N_n = {1, 2, ..., n} for any n ∈ ℕ. The space
of symmetric d × d matrices will be denoted by S^d. If S ∈ S^d is positive definite, we write
it as S ≻ 0. The cone of positive semi-definite matrices is denoted by S^d_+, and we denote by O^d
the set of d × d orthonormal matrices. For any X, Y ∈ R^{d×q}, ⟨X, Y⟩ := Tr(X^⊤ Y), where
Tr(·) denotes the trace of a matrix. The standard Euclidean norm is denoted by ‖·‖. Denote by
z := {(x_i, y_i) : i ∈ N_n} a training set of n labeled examples with inputs x_i = (x_i^1, ..., x_i^d) ∈ R^d,
class labels y_i (not necessarily binary), and let x_ij = x_i − x_j.
Let P = (P_{ℓk})_{ℓ,k∈N_d} ∈ R^{d×d} be a transformation matrix. Denote by x̃_i = P x_i for any i ∈ N_n and
by x̃ = {x̃_i : i ∈ N_n} the transformed data. The linear transformation matrix P induces a
distance matrix M = P^⊤ P which defines a distance between x_i and x_j given by
\[ d_M(x_i, x_j) = (x_i - x_j)^\top M (x_i - x_j). \]
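In code, the induced distance is a quadratic form; a minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def mahalanobis_sq(M, xi, xj):
    """d_M(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j)."""
    d = xi - xj
    return float(d @ M @ d)

# With M = P.T @ P this equals the squared Euclidean distance in the
# transformed space: np.linalg.norm(P @ (xi - xj)) ** 2.
```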
Our sparse metric learning model is based on two principal hypotheses: 1) a good choice of distance
matrix M should preserve the distance structure, i.e. the distance between similar examples should
be relatively smaller than between dissimilar examples; 2) a good distance matrix should also be
able to effectively remove noise leading to dimension reduction.
For the first hypothesis, the distance structure in the transformed space can be specified, for example,
by the following constraints: ‖P(x_j − x_k)‖² ≥ ‖P(x_i − x_j)‖² + 1, ∀(x_i, x_j) ∈ S and (x_j, x_k) ∈
D, where S denotes the similarity pairs and D denotes the dissimilarity pairs based on the label
information. Equivalently,
\[ \|\tilde{x}_j - \tilde{x}_k\|^2 \;\ge\; \|\tilde{x}_i - \tilde{x}_j\|^2 + 1, \qquad \forall (x_i, x_j) \in S \text{ and } (x_j, x_k) \in D. \tag{1} \]
For the second hypothesis, we use a sparse regularization to give a sparse solution. This regularization ranges from element-sparsity for variable selection to a low-rank matrix for dimension
reduction [1, 2, 3, 13, 21]. In particular, for any ℓ ∈ N_d, denote the ℓ-th row vector of P by P_ℓ
and let ‖P_ℓ‖ = (∑_{k∈N_d} P_{ℓk}²)^{1/2}. If ‖P_ℓ‖ = 0 then the ℓ-th variable in the transformed space becomes
zero, i.e. x̃_i^ℓ = P_ℓ x_i = 0, which means that ‖P_ℓ‖ = 0 has the effect of eliminating the ℓ-th variable.
Motivated by the above observation, a direct way would be to enforce an L1-norm across the vector
(‖P_1‖, ..., ‖P_d‖), i.e. ∑_{ℓ∈N_d} ‖P_ℓ‖. This L1-regularization yields row-vector (feature) sparsity of
x̃, which plays the role of feature selection. Let W = P^⊤ P = (W_1, ..., W_d); we can easily
show that
\[ W_\ell \equiv 0 \iff P_\ell \equiv 0. \]
Motivated by this observation, instead of L1-regularization over the vector (‖P_1‖, ..., ‖P_d‖) we can
enforce L1-norm regularization across the vector (‖W_1‖, ..., ‖W_d‖). However, a low-dimensional
projected space x̃ does not mean that its row-vectors (features) should be sparse. Ideally, we expect
that the principal components of x̃ can be sparse. Hence, we introduce an extra orthonormal
transformation U ∈ O^d and let x̃_i = P U x_i. Denote a set of triplets T by
\[ T = \{\tau = (i, j, k) : i, j, k \in N_n,\; (x_i, x_j) \in S \text{ and } (x_j, x_k) \in D\}. \tag{2} \]
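A hypothetical way to build such a triplet set from labeled data is sketched below (the experiments in Section 4 instead sample 1500 random triplets); the neighborhood size and the function name are assumptions.

```python
import numpy as np

def build_triplets(X, y, k=3):
    """Form T = {(i, j, m): (x_i, x_j) similar, (x_j, x_m) dissimilar} from labels,
    pairing same-label neighbors of x_j with differently-labeled neighbors of x_j."""
    n = len(y)
    triplets = []
    # squared Euclidean distances, used only to pick nearest neighbors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for j in range(n):
        order = np.argsort(d2[j])
        sim = [i for i in order[1:] if y[i] == y[j]][:k]
        dis = [m for m in order[1:] if y[m] != y[j]][:k]
        triplets += [(i, j, m) for i in sim for m in dis]
    return triplets
```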
By introducing slack variables ξ in constraints (1), we propose the following sparse (low-rank)
distance matrix learning formulation:
\[ \min_{U \in O^d}\ \min_{W \in S^d_+}\ \sum_{\tau \in T} \xi_\tau + \gamma \|W\|_{(2,1)}^2 \]
\[ \text{s.t.} \quad 1 + x_{ij}^\top U W U^\top x_{ij} \le x_{kj}^\top U W U^\top x_{kj} + \xi_\tau, \qquad \xi_\tau \ge 0,\ \ \forall \tau = (i, j, k) \in T,\ \text{and}\ W \in S^d_+. \tag{3} \]
where ‖W‖_{(2,1)} = ∑_ℓ (∑_k W_{kℓ}²)^{1/2} denotes the (2,1)-norm of W. A similar mixed (2,1)-norm
regularization was used in [1, 18] for multi-task learning and multi-class classification to learn the
sparse representation shared across different tasks or classes.
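For concreteness, a small NumPy sketch of the (2,1)-norm used in (3), together with the trace-regularized objective it collapses to in the next subsection (Equation (4)); the names are illustrative.

```python
import numpy as np

def mixed_21_norm(W):
    """||W||_(2,1) = sum_l (sum_k W_{kl}^2)^{1/2}: the L1 norm of the column norms."""
    return np.sqrt((W ** 2).sum(axis=0)).sum()

def trace_objective(M, triplets, X, gamma):
    """The convex objective of Equation (4): triplet hinge losses plus gamma * Tr(M)^2."""
    loss = 0.0
    for (i, j, k) in triplets:
        xij, xkj = X[i] - X[j], X[k] - X[j]
        loss += max(0.0, 1.0 + xij @ M @ xij - xkj @ M @ xkj)
    return loss + gamma * np.trace(M) ** 2
```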
2.1 Equivalent Saddle Representation
We now turn our attention to an equivalent saddle (min-max) representation for sparse metric learning (3) which is essential for developing optimization algorithms in the next section. To this end, we
need the following lemma which develops and extends a similar version in multi-task learning [1, 2]
to the case of learning a positive semi-definite distance matrix.
Lemma 1. Problem (3) is equivalent to the following convex optimization problem
\[ \min_{M \succeq 0} \sum_{\tau=(i,j,k)\in T} \big(1 + x_{ij}^\top M x_{ij} - x_{kj}^\top M x_{kj}\big)_+ + \gamma \big(\mathrm{Tr}(M)\big)^2 \tag{4} \]
Proof. Let M = U W U^⊤ in equation (3); then W = U^⊤ M U. Hence, (3) is reduced to the following:
\[ \min_{M \in S^d_+}\ \min_{U \in O^d}\ \sum_{\tau \in T} \xi_\tau + \gamma \|U^\top M U\|_{(2,1)}^2 \tag{5} \]
\[ \text{s.t.} \quad 1 + x_{ij}^\top M x_{ij} \le x_{kj}^\top M x_{kj} + \xi_\tau, \qquad \xi_\tau \ge 0\ \ \forall \tau = (i,j,k) \in T,\ \text{and}\ M \in S^d_+. \]
Now, for any fixed M in equation (5), by the eigen-decomposition of M there exists Ũ ∈ O^d
such that M = Ũ Λ(M) Ũ^⊤. Here, the diagonal matrix Λ(M) = diag(λ_1, λ_2, ..., λ_d), where λ_i is
the i-th eigenvalue of M. Let V = Ũ^⊤ U; then V ∈ O^d, and we have min_{U∈O^d} ‖U^⊤ M U‖_{(2,1)} =
min_{U∈O^d} ‖(Ũ^⊤ U)^⊤ Λ(M) (Ũ^⊤ U)‖_{(2,1)} = min_{V∈O^d} ‖V^⊤ Λ(M) V‖_{(2,1)}. Observe that
\[ \|V^\top \Lambda(M) V\|_{(2,1)} = \sum_i \Big(\sum_j \Big(\sum_k V_{ki}\,\lambda_k\, V_{kj}\Big)^2\Big)^{1/2} = \sum_i \Big(\sum_{k,k'} \Big(\sum_j V_{kj} V_{k'j}\Big) \lambda_k V_{ki}\, \lambda_{k'} V_{k'i}\Big)^{1/2} = \sum_i \Big(\sum_k \lambda_k^2 V_{ki}^2\Big)^{1/2} \tag{6} \]
where, in the last equality, we use the fact that V ∈ O^d, i.e. ∑_j V_{kj} V_{k′j} = δ_{kk′}. Applying the Cauchy–Schwarz inequality implies that ∑_k λ_k V_{ki}² ≤ (∑_k λ_k² V_{ki}²)^{1/2} (∑_k V_{ki}²)^{1/2} = (∑_k λ_k² V_{ki}²)^{1/2}. Putting
this back into (6) yields ‖V^⊤ Λ(M) V‖_{(2,1)} ≥ ∑_i ∑_k λ_k V_{ki}² = ∑_k λ_k = Tr(M), where we use the
fact that V ∈ O^d again. However, if we select V to be the identity matrix I_d, then ‖V^⊤ Λ(M) V‖_{(2,1)} = Tr(M).
Hence, min_{U∈O^d} ‖U^⊤ M U‖_{(2,1)} = min_{V∈O^d} ‖V^⊤ Λ(M) V‖_{(2,1)} = Tr(M). Putting this back into
equation (5), the result follows.
From the above lemma, we are ready to present an equivalent saddle (min-max) representation of
problem (3). First, let Q_1 = {u_τ : τ ∈ T, 0 ≤ u_τ ≤ 1} and Q_2 = {M ∈ S^d_+ : Tr(M) ≤ √(T/γ)},
where T is the cardinality of the triplet set T, i.e. T = #{τ ∈ T}.
Theorem 1. Problem (4) is equivalent to the following saddle representation
\[ \min_{u \in Q_1} \max_{M \in Q_2} \Big\{ \Big\langle \sum_{\tau=(i,j,k)\in T} u_\tau \big(x_{jk} x_{jk}^\top - x_{ij} x_{ij}^\top\big),\, M \Big\rangle - \gamma \big(\mathrm{Tr}(M)\big)^2 \Big\} - \sum_{\tau \in T} u_\tau \tag{7} \]
Proof. Suppose that M* is an optimal solution of problem (4). By its definition, there holds
γ(Tr(M*))² ≤ ∑_{τ∈T} (1 + x_{ij}^⊤ M x_{ij} − x_{kj}^⊤ M x_{kj})_+ + γ(Tr(M))² for any M ⪰ 0. Letting M = 0
yields that Tr(M*) ≤ √(T/γ). Hence, problem (4) is identical to
\[ \min_{M \in Q_2} \sum_{\tau=(i,j,k)\in T} \big(1 + x_{ij}^\top M x_{ij} - x_{kj}^\top M x_{kj}\big)_+ + \gamma \big(\mathrm{Tr}(M)\big)^2. \tag{8} \]
Observe that s_+ = max{0, s} = max_β {sβ : 0 ≤ β ≤ 1}. Consequently, the
above equation can be written as min_{M∈Q_2} max_{0≤u≤1} ∑_{τ∈T} u_τ (1 + x_{ij}^⊤ M x_{ij} − x_{kj}^⊤ M x_{kj}) +
γ(Tr(M))². By the min-max theorem (e.g. [5]), the above problem is equivalent to
min_{u∈Q_1} max_{M∈Q_2} {∑_{τ∈T} u_τ (−x_{ij}^⊤ M x_{ij} + x_{jk}^⊤ M x_{jk}) − γ(Tr(M))²} − ∑_{τ∈T} u_τ. Combining
this with the facts that x_{kj}^⊤ M x_{kj} = x_{jk}^⊤ M x_{jk} and x_{jk}^⊤ M x_{jk} − x_{ij}^⊤ M x_{ij} = ⟨x_{jk} x_{jk}^⊤ − x_{ij} x_{ij}^⊤, M⟩ completes the proof of the
theorem.
2.2 Related Work
There is a considerable amount of work on metric learning. In [9], an information-theoretic approach
to metric learning (ITML) is developed which equivalently transforms the metric learning problem
to that of learning an optimal Gaussian distribution with respect to a relative entropy. The method
of Relevant Component Analysis (RCA) [7] attempts to find a distance metric which can minimize
the covariance matrix imposed by the equivalence constraints. In [25], a distance metric for k-means
clustering is then learned to shrink the averaged distance within the similar set while enlarging the
average distance within the dissimilar set simultaneously. All the above methods generally do not
yield sparse solutions and only work within their special settings. Maximally Collapsing Metric
Learning (MCML) tries to map all points in a same class to a single location in the feature space via
a stochastic selection rule. There are many other metric learning approaches in either unsupervised
or supervised learning setting, see [26] for a detailed review. We particularly mention the following
work which is more related to our sparse metric learning model (3).
• Large Margin Nearest Neighbor (LMNN) [23, 24]: LMNN aims to learn a large-margin nearest
neighbor classifier by exploiting nearest neighbor samples as side information in the training set.
Specifically, let N(x) denote the k-nearest neighbors of sample x and define the similar set S =
{(x_i, x_j) : x_i ∈ N(x_j), y_i = y_j} and D = {(x_j, x_k) : x_k ∈ N(x_j), y_k ≠ y_j}. Then, recalling that the
triplet set T is given by equation (2), the LMNN framework can be rewritten as:
\[ \min_{M \succeq 0} \sum_{\tau=(i,j,k)\in T} \big(1 + x_{ij}^\top M x_{ij} - x_{kj}^\top M x_{kj}\big)_+ + \gamma\, \mathrm{Tr}(CM) \tag{9} \]
where the covariance matrix C over the similar set S is defined by C = ∑_{(x_i, x_j)∈S} (x_i − x_j)(x_i − x_j)^⊤.
From the above reformulation, we see that LMNN also involves a sparse regularization term
Tr(CM ). However, the sparsity of CM does not imply the sparsity of M , see the discussion in the
experimental section. Large Margin Component Analysis (LMCA) [22] is designed for conducting
classification and dimensionality reduction simultaneously. However, LMCA controls the sparsity
by directly specifying the dimensionality of the transformation matrix and it is an extended version
of LMNN. In practice, this low dimensionality is tuned by ad hoc methods such as cross-validation.
• Sparse Metric Learning via Linear Programming (SMLlp) [20]: the spirit of this approach is
closer to our method; the following sparse framework was proposed:
\[ \min_{M \succeq 0} \sum_{\tau=(i,j,k)\in T} \big(1 + x_{ij}^\top M x_{ij} - x_{kj}^\top M x_{kj}\big)_+ + \gamma \sum_{\ell,k \in N_d} |M_{\ell k}| \tag{10} \]
However, the above 1-norm term ∑_{ℓ,k∈N_d} |M_{ℓk}| can only enforce the element-wise sparsity of M. The
learned sparse model would not generate an appropriate low-ranked principal matrix M for metric
learning. In order to solve the above optimization problem, [10] further proposed to restrict M to the
space of diagonally dominant matrices: a small subspace of the positive semi-definite cone. Such a
restriction would only result in a sub-optimal solution, although the final optimization is an efficient
linear programming problem.
3 Smooth Optimization Algorithms
Nesterov [17, 16] developed an efficient smooth optimization method for solving convex programming problems of the form min_{x∈Q} f(x), where Q is a bounded closed convex set in a finite-dimensional real vector space E. This smooth optimization usually requires f to be differentiable
with Lipschitz continuous gradient, and it has an optimal convergence rate of O(1/t²) for smooth
problems, where t is the iteration number. Unfortunately, we cannot directly apply the smooth optimization method to problem (4) since the hinge loss there is not continuously differentiable. Below
we show that the smooth approximation method [17] can be approached through the saddle representation (7).
3.1 Nesterov's Smooth Approximation Approach
We briefly review Nesterov's approach [17] in the setting of a general min-max problem using
smoothing techniques. To this end, we introduce some useful notation. Let Q_1 (resp. Q_2) be non-empty convex compact sets in finite-dimensional real vector spaces E_1 (resp. E_2) endowed with
norm ‖·‖_1 (resp. ‖·‖_2). Let E_2* be the dual space of E_2 with standard norm defined, for any
s ∈ E_2*, by ‖s‖_2* = max{⟨s, x⟩_2 : ‖x‖_2 = 1}, where the scalar product ⟨·,·⟩_2 denotes the value
of s at x. Let A : E_1 → E_2* be a linear operator. Its adjoint operator A* : E_2 → E_1* is defined,
for any x ∈ E_2 and u ∈ E_1, by ⟨Au, x⟩_2 = ⟨A*x, u⟩_1. The norm of such an operator is defined by
‖A‖_{1,2} = max_{x,u} {⟨Au, x⟩_2 : ‖x‖_2 = 1, ‖u‖_1 = 1}.
Smooth Optimization Algorithm for Sparse Metric Learning (SMLsm)

1. Let μ > 0, t = 0; initialize u^(0) ∈ Q_1 and M^(−1) = 0, and let L = (1/(2μ)) ∑_{τ∈T} ‖X_τ‖²_2.
2. Compute M_μ(u^(t)) and ∇φ_μ(u^(t)) = (−1 + ⟨X_τ, M_μ(u^(t))⟩ : τ ∈ T),
   and let M^(t) = (t/(t+2)) M^(t−1) + (2/(t+2)) M_μ(u^(t)).
3. Compute z^(t) = argmin_{z∈Q_1} { (L/2) ‖u^(t) − z‖² + ∇φ_μ(u^(t))^⊤ (z − u^(t)) }.
4. Compute v^(t) = argmin_{v∈Q_1} { (L/2) ‖u^(0) − v‖² + ∑_{i=0}^{t} ((i+1)/2) [φ_μ(u^(i)) + ∇φ_μ(u^(i))^⊤ (v − u^(i))] }.
5. Set u^(t+1) = (2/(t+3)) v^(t) + ((t+1)/(t+3)) z^(t).
6. Set t ← t + 1. Go to step 2 until the stopping criterion is less than ε.

Table 1: Pseudo-code of the first-order Nesterov method
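A compact Python sketch of the loop in Table 1 follows; it assumes a routine solve_M_mu implementing problem (15) (a sketch is given after Lemma 3 below), and all names and the stopping rule (a fixed iteration count) are illustrative.

```python
import numpy as np

def smlsm(X_taus, gamma, mu=1e-3, iters=200):
    """Sketch of the SMLsm loop in Table 1. X_taus is the list of matrices
    X_tau = x_jk x_jk^T - x_ij x_ij^T."""
    T = len(X_taus)
    L = sum(np.sum(Xt ** 2) for Xt in X_taus) / (2 * mu)   # Lipschitz estimate
    u = np.full(T, 0.5)                                    # u^(0) in Q1 = [0,1]^T
    u0, grad_sum, M_avg = u.copy(), np.zeros(T), None
    for t in range(iters):
        M_mu = solve_M_mu(u, X_taus, gamma, mu)            # step 2; see Lemma 3 sketch
        grad = np.array([-1.0 + np.sum(Xt * M_mu) for Xt in X_taus])
        M_avg = M_mu if M_avg is None else (t * M_avg + 2 * M_mu) / (t + 2)
        z = np.clip(u - grad / L, 0.0, 1.0)                # step 3 (closed form)
        grad_sum += (t + 1) * grad / 2.0
        v = np.clip(u0 - grad_sum / L, 0.0, 1.0)           # step 4 (closed form)
        u = 2.0 / (t + 3) * v + (t + 1.0) / (t + 3) * z    # step 5
    return M_avg
```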
Now, the min-max problem considered in [17, Section 2] has the following special structure:
\[ \min_{u \in Q_1} \Big\{ \phi(u) = \hat{\phi}(u) + \max\{\langle Au, x\rangle_2 - \hat{f}(x) : x \in Q_2\} \Big\}. \tag{11} \]
Here, φ̂(u) is assumed to be continuously differentiable and convex with Lipschitz continuous gradient, and f̂(x) is convex and differentiable. The above min-max problem is usually not smooth, and
Nesterov [17] proposed a smoothing approximation approach to solve it:
\[ \min_{u \in Q_1} \Big\{ \phi_\mu(u) = \hat{\phi}(u) + \max\{\langle Au, x\rangle_2 - \hat{f}(x) - \mu\, d_2(x) : x \in Q_2\} \Big\}. \tag{12} \]
Here, d_2(·) is a continuous proxy-function, strongly convex on Q_2 with some convexity parameter
σ_2 > 0, and μ > 0 is a small smoothing parameter. Let x_0 = argmin_{x∈Q_2} d_2(x). Without loss
of generality, assume d_2(x_0) = 0. The strong convexity of d_2(·) with parameter σ_2 means that
d_2(x) ≥ (σ_2/2) ‖x − x_0‖²_2. Since d_2(·) is strongly convex, the solution x_μ(u) of the maximization problem
φ̄_μ(u) := max{⟨Au, x⟩_2 − f̂(x) − μ d_2(x) : x ∈ Q_2} is unique, and φ̄_μ is differentiable; see [6, Theorem
4.1]. Indeed, it was established in [17, Theorem 1] that the gradient of φ̄_μ is given by
\[ \nabla \bar{\phi}_\mu(u) = A^* x_\mu(u) \tag{13} \]
and it has Lipschitz constant L = ‖A‖²_{1,2}/(μσ_2), i.e. ‖A* x_μ(u_1) − A* x_μ(u_2)‖_1* ≤ (‖A‖²_{1,2}/(μσ_2)) ‖u_1 − u_2‖_1.
Hence, the proxy-function d_2 can be regarded as a generalized Moreau–Yosida regularization term
that smooths out the objective function.
As mentioned above, the function φ_μ in problem (12) is differentiable with Lipschitz continuous gradients. Hence, we can apply the optimal smooth optimization scheme [17, Section 3] to the
smooth approximate problem (12). The optimal scheme needs another proxy-function d(u) associated with Q_1. Assume that d(u_0) = min_{u∈Q_1} d(u) = 0 and that d has convexity parameter σ, i.e.
d(u) ≥ (σ/2) ‖u − u_0‖²_1. For this special problem (12), the primal solution u* ∈ Q_1 and dual solution
x* ∈ Q_2 can be obtained simultaneously; see [17, Theorem 3]. Below, we apply this general
scheme to solve the min-max representation (7) of the sparse metric learning problem (3), and hence
to solve the original problem (4).
3.2 Smooth Optimization Approach for Sparse Metric Learning
We now turn our attention to developing a smooth optimization approach for problem (4). Our main
idea is to connect the saddle representation (7) in Theorem 1 with the special formulation (11).
To this end, first let E_1 = R^T with the standard Euclidean norm ‖·‖_1 = ‖·‖, and E_2 = S^d with the
Frobenius norm defined, for any S ∈ S^d, by ‖S‖²_2 = ∑_{i,j∈N_d} S²_{ij}. Secondly, the closed convex sets
are respectively given by Q_1 = {u = (u_τ : τ ∈ T) ∈ [0, 1]^T} and Q_2 = {M ∈ S^d_+ : Tr(M) ≤
√(T/γ)}. Then, define the proxy-function d_2(M) = ‖M‖²_2. Consequently, the proxy-function d_2(·)
is strongly convex on Q_2 with convexity parameter σ_2 = 2. Finally, for any τ = (i, j, k) ∈ T, let
X_τ = x_{jk} x_{jk}^⊤ − x_{ij} x_{ij}^⊤. In addition, we replace the variable x by M, and in (12) take
φ̂(u) = −∑_{τ∈T} u_τ and f̂(M) = γ(Tr(M))². Finally, define the linear operator A : R^T → (S^d)*, for any u ∈ R^T, by
\[ Au = \sum_{\tau \in T} u_\tau X_\tau. \tag{14} \]
With the above preparations, the saddle representation (7) exactly matches the special structure (11),
which can be approximated by problem (12) with μ sufficiently small. The norm of the linear
operator A can be estimated as follows.
Lemma 2. Let the linear operator A be defined as above. Then ‖A‖_{1,2} ≤ (∑_{τ∈T} ‖X_τ‖²_2)^{1/2}, where,
for any M ∈ S^d, ‖M‖_2 denotes the Frobenius norm of M.
Proof. For any u ∈ Q_1 and M ∈ S^d, we have that
\[ \mathrm{Tr}\Big(\Big(\sum_{\tau\in T} u_\tau X_\tau\Big) M\Big) \le \sum_{\tau\in T} u_\tau \|X_\tau\|_2 \|M\|_2 \le \|M\|_2 \Big(\sum_{\tau\in T}\|X_\tau\|_2^2\Big)^{1/2} \Big(\sum_{\tau\in T} u_\tau^2\Big)^{1/2} = \|M\|_2 \|u\|_1 \Big(\sum_{\tau\in T}\|X_\tau\|_2^2\Big)^{1/2}. \]
Combining the above inequality with the definition ‖A‖_{1,2} = max{Tr((∑_{τ∈T} u_τ X_τ) M) :
‖u‖_1 = 1, ‖M‖_2 = 1} yields the desired result.
We can now adapt the smooth optimization scheme [17, Section 3 and Theorem 3] to solve the smooth
approximation formulation (12) for metric learning. To this end, let the proxy-function d on Q_1 be
the standard squared Euclidean norm, i.e. for some u^(0) ∈ Q_1 ⊂ R^T, d(u) = ‖u − u^(0)‖². The smooth
optimization pseudo-code for problem (7) (equivalently problem (4)) is outlined in Table 1. One can
stop the algorithm by monitoring the relative change of the objective function or the change in the duality
gap.
The efficiency of Nesterov's smooth optimization largely depends on Steps 2, 3, and 4 in Table 1.
Steps 3 and 4 can be solved in closed form: z^(t) = min(max(0, u^(t) − ∇φ_μ(u^(t))/L), 1) and
v^(t) = min(max(0, u^(0) − ∑_{i=0}^{t} (i+1) ∇φ_μ(u^(i))/(2L)), 1). The solution M_μ(u) in Step 2 involves
the following problem:
\[ M_\mu(u) = \arg\max\Big\{ \Big\langle \sum_{\tau\in T} u_\tau X_\tau,\, M \Big\rangle - \gamma(\mathrm{Tr}(M))^2 - \mu \|M\|_2^2 \;:\; M \in Q_2 \Big\}. \tag{15} \]
The next lemma shows that it can be efficiently solved by quadratic programming (QP).
Lemma 3. Problem (15) is equivalent to the following:
\[ s^* = \arg\max\Big\{ \sum_{i\in N_d} \lambda_i s_i - \gamma\Big(\sum_{i\in N_d} s_i\Big)^2 - \mu \sum_{i\in N_d} s_i^2 \;:\; \sum_{i\in N_d} s_i \le \sqrt{T/\gamma},\ \text{and}\ s_i \ge 0\ \forall i \in N_d \Big\} \tag{16} \]
where λ = (λ_1, ..., λ_d) are the eigenvalues of ∑_{τ∈T} u_τ X_τ. Moreover, if we denote the eigen-decomposition of ∑_{τ∈T} u_τ X_τ by ∑_{τ∈T} u_τ X_τ = U diag(λ) U^⊤ with some U ∈ O^d, then the optimal
solution of problem (15) is given by M_μ(u) = U diag(s*) U^⊤.
Proof. We know from von Neumann's inequality (see [14] or [4, Page 10]) that, for all X, Y ∈ S^d,
Tr(XY) ≤ ∑_{i∈N_d} λ_i(X) λ_i(Y), where λ_i(X) and λ_i(Y) are the eigenvalues of X and Y in
non-decreasing order, respectively. The equality is attained whenever X = U diag(λ(X)) U^⊤ and Y =
U diag(λ(Y)) U^⊤ for some U ∈ O^d. The desired result follows by applying the above inequality
with X = ∑_{τ∈T} u_τ X_τ and Y = M.
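A sketch of Step 2 via Lemma 3 follows, assuming SciPy's SLSQP solver for the d-dimensional QP (16); in practice any QP solver could be substituted, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def solve_M_mu(u, X_taus, gamma, mu):
    """Solve problem (15) via Lemma 3: eigendecompose sum_tau u_tau X_tau,
    solve the d-dimensional QP (16) for the eigenvalues s of M, and rebuild M."""
    A = sum(ut * Xt for ut, Xt in zip(u, X_taus))
    lam, U = np.linalg.eigh(A)                  # A = U diag(lam) U^T
    T, d = len(u), len(lam)
    budget = np.sqrt(T / gamma)                 # Tr(M) <= sqrt(T / gamma)

    def neg_obj(s):                             # negate (16) for a minimizer
        return -(lam @ s - gamma * s.sum() ** 2 - mu * (s ** 2).sum())

    res = minimize(neg_obj, x0=np.zeros(d), method='SLSQP',
                   bounds=[(0.0, None)] * d,
                   constraints=[{'type': 'ineq',
                                 'fun': lambda s: budget - s.sum()}])
    s = res.x
    return U @ np.diag(s) @ U.T
```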
It was shown in [17, Theorem 3] that the iteration complexity is of $O(1/\varepsilon)$ for finding an $\varepsilon$-optimal
solution if we choose $\mu = O(\varepsilon)$. This is usually much better than the standard sub-gradient descent
method, whose iteration complexity is typically $O(1/\varepsilon^2)$. As listed in Table 1, the cost of each
iteration mainly depends on the eigen-decomposition of $\sum_{\tau\in T} u_\tau X_\tau$ and the quadratic programming step for problem (15), both of complexity $O(d^3)$. Hence, the overall complexity of
the smooth optimization approach for sparse metric learning is of the order $O(d^3/\varepsilon)$ for finding an
$\varepsilon$-optimal solution. As a final remark, the Lipschitz constant given by $L = \frac{1}{2\mu}\sum_{\tau\in T}\|X_\tau\|_2^2$ could be too
loose in practice. One can use the line search scheme of [15] to further accelerate the algorithm.
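For illustration, a minimal sketch of the outer loop in Table 1, assuming $Q_1 = [0, 1]^T$ so that both projections reduce to clipping; the iterate combination $u^{(t+1)} = \frac{2}{t+3}v^{(t)} + \frac{t+1}{t+3}z^{(t)}$ follows [17], and grad_phi_mu and the other names are ours.

import numpy as np

def nesterov_smooth_opt(grad_phi_mu, L, u0, n_iters=500):
    """Sketch of the smooth optimization loop of Table 1 on Q1 = [0,1]^T.

    grad_phi_mu : callable returning the gradient of the smoothed
                  objective at u (its evaluation uses M_mu(u) from Step 2)
    L           : Lipschitz constant of the smoothed gradient
    u0          : starting point u^(0) in Q1
    """
    u = u0.copy()
    grad_sum = np.zeros_like(u0)  # accumulates (i+1)*gradient terms for Step 4
    for t in range(n_iters):
        g = grad_phi_mu(u)                               # Steps 1-2
        z = np.clip(u - g / L, 0.0, 1.0)                 # Step 3: projected gradient step
        grad_sum += (t + 1) * g
        v = np.clip(u0 - grad_sum / (2 * L), 0.0, 1.0)   # Step 4: weighted-average step
        u = 2.0 / (t + 3) * v + (t + 1.0) / (t + 3) * z  # combine iterates
    return u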
4 Experiments
In this section we compare our proposed method with four other methods: (1) the
LMNN method [23], (2) the Sparse Metric Learning via Linear Programming (SMLlp) [20], (3)
the information-theoretic approach for metric learning (ITML) [9], and (4) the Euclidean distance
based k-Nearest Neighbor (KNN) method (called Euc for brevity). We also implemented the iterative sub-gradient descent algorithm [24] to solve the proposed framework (4) (called SMLgd) in
order to evaluate the efficiency of the proposed smooth optimization algorithm SMLsm. We use all
of these methods to learn a distance metric, and a KNN classifier is then used to examine the
performance of the different learned metrics.
The comparison is done on four benchmark data sets: Wine, Iris, Balance Scale, and Ionosphere,
which were obtained from the UCI machine learning repository. We randomly partitioned each
data set into training and test sets using a ratio of 0.85. We then trained each approach on
the training set and performed evaluation on the test set. We repeated the above process 10 times
and report the averaged result as the final performance. All the approaches except the Euclidean distance need a triplet set $T$ to be defined before training. Following [20], we randomly generated 1500 triplets for SMLsm, SMLgd, SMLlp, and LMNN. The number of nearest neighbors
was chosen via cross validation for all the methods in the range $\{1, 3, 5, 7\}$. The trade-off
parameter for SMLsm, SMLgd, SMLlp, and LMNN was also tuned via cross validation from
$\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}\}$.
The first part of our evaluation focuses on testing the learning accuracy. The results can be seen
in Figure 1 (a)-(d) for the four data sets, respectively. Clearly, the proposed SMLsm demonstrates the
best performance. Specifically, SMLsm outperforms the other four methods on Wine and Iris, while
it ranks second on Balance Scale and Ionosphere with slightly lower accuracy than the best
method. SMLgd shows different results from SMLsm due to the different optimization methods,
which we discuss shortly in connection with Figure 1 (i)-(l). We also report the dimension reduction in Figure 1 (e)-(h). It is observed that our model outputs the sparsest metric. This validates the advantage
of our approach: our method directly learns an accurate and sparse distance metric
simultaneously. In contrast, the other methods only touch this topic marginally. SMLlp is not optimal,
as it exploits a one-norm regularization term and also relaxes the learning problem; LMNN
aims to learn a metric with a large-margin regularization term, which is not directly related to sparsity
of the distance matrix; ITML and Euc do not generate a sparse metric at all. Finally, in order to
examine the efficiency of the proposed smooth optimization algorithm, we plot the convergence
graphs of SMLsm versus those of SMLgd in Figure 1 (i)-(l). As observed, SMLsm converged much
faster than SMLgd on all the data sets. SMLgd sometimes oscillated and may incur a long tail due
to the non-smooth nature of the hinge loss. On some data sets it converged especially slowly, as
can be observed in Figure 1 (k) and (l).
5 Conclusion
In this paper we proposed a novel regularization framework for learning a sparse (low-rank) distance
matrix. The model is realized by a mixed-norm regularization term over the distance matrix, which
is non-convex. Using its special structure, it was shown to be equivalent to a convex min-max
(saddle) representation involving a trace norm regularization. Starting from the saddle representation,
we developed an efficient Nesterov first-order optimization approach [16, 17] for our
metric learning model. Experimental results on various datasets show that our sparse metric learning
framework outperforms other state-of-the-art methods, with higher accuracy and significantly smaller
dimensionality. In future work, we plan to apply our model to large-scale datasets with higher
dimensional features and to use the line search scheme [15] to further accelerate the algorithm.
Acknowledgements
The second author is partially supported by the Excellent SKL Project of NSFC (No.60723005),
China. The first and third authors are supported by EPSRC grant EP/E027296/1.
[Figure 1 spans this page. Panels (a)-(d): average error rate (%) on Wine, Iris, Balance Scale, and Ionosphere for SMLsm, SMLgd, SMLlp, ITML, LMNN, and Euc. Panels (e)-(h): average dimensionality retained on the same four data sets. Panels (i)-(l): convergence curves (normalized objective value vs. epoch) for SMLgd and SMLsm on each data set.]
Figure 1: Performance comparison among different methods. Subfigures (a)-(d) present the average error rates; (e)-(h) plot the average dimensionality used by the different methods; (i)-(l) give the
convergence graphs for the sub-gradient algorithm and the proposed smooth optimization algorithm.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. NIPS, 2007.
[2] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework
for multi-task structure learning. NIPS, 2008.
[3] F. R. Bach. Consistency of trace norm minimization. J. of Machine Learning Research, 9:1019-1048, 2008.
[4] J. M. Borwein and A. S. Lewis. Convex Analysis and Nonlinear Optimization: Theory and Examples. CMS Books in Mathematics. Springer, 2005.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] J. F. Bonnans and A. Shapiro. Optimization problems with perturbation: A guided tour. SIAM Review, 40:202-227, 1998.
[7] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning a Mahalanobis metric from equivalence constraints. J. of Machine Learning Research, 6:937-965, 2005.
[8] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively with application to face verification. CVPR, 2005.
[9] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. ICML,
2007.
[10] G. M. Fung, O. L. Mangasarian, and A. J. Smola. Minimal kernel classifiers. J. of Machine Learning Research, 3:303-321, 2002.
[11] A. Globerson and S. Roweis. Metric learning by collapsing classes. NIPS, 2005.
[12] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood component analysis. NIPS, 2004.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag New York, LLC, 2003.
[14] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.
[15] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[16] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[17] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103:127-152, 2005.
[18] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, in press, 2009.
[19] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. ICML, 2005.
[20] R. Rosales and G. Fung. Learning sparse metrics via linear programming. KDD, 2006.
[21] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. NIPS,
2005.
[22] L. Torresani and K. Lee. Large margin component analysis. NIPS, 2007.
[23] K. Q. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest
neighbour classification. NIPS, 2006.
[24] K. Q. Weinberger and L. K. Saul. Fast solvers and efficient implementations for distance
metric learning. ICML, 2008.
[25] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning with application to
clustering with side information. NIPS, 2002.
[26] L. Yang and R. Jin. Distance metric learning: A comprehensive survey. Technical report, Department of Computer Science and Engineering, Michigan State University, 2007.
3,143 | 3,848 | Adaptive Regularization of Weight Vectors
Koby Crammer
Department of
Electrical Engineering
The Technion
Haifa, 32000 Israel
[email protected]
Alex Kulesza
Department of Computer
and Information Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Mark Dredze
Human Language Tech.
Center of Excellence
Johns Hopkins University
Baltimore, MD 21211
[email protected]
Abstract
We present AROW, a new online learning algorithm that combines several useful properties: large margin training, confidence weighting, and the
capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing
it to perform especially well in the presence of label noise. We derive
a mistake bound, similar in form to the second order perceptron bound,
that does not assume separability. We also relate our algorithm to recent
confidence-weighted online learning techniques and show empirically that
AROW achieves state-of-the-art performance and notable robustness in the
case of non-separable data.
1 Introduction
Online learning algorithms are fast, simple, make few statistical assumptions, and perform
well in a wide variety of settings. Recent work has shown that parameter confidence information can be effectively used to guide online learning [2]. Confidence weighted (CW)
learning, for example, maintains a Gaussian distribution over linear classifier hypotheses
and uses it to control the direction and scale of parameter updates [6]. In addition to formal guarantees in the mistake-bound model [11], CW learning has achieved state-of-the-art
performance on many tasks. However, the strict update criterion used by CW learning is
very aggressive and can over-fit [5]. Approximate solutions can be used to regularize the
update and improve results; however, current analyses of CW learning still assume that the
data are separable. It is not immediately clear how to relax this assumption.
In this paper we present a new online learning algorithm for binary classification that combines several attractive properties: large margin training, confidence weighting, and the
capacity to handle non-separable data. The key to our approach is the adaptive regularization of the prediction function upon seeing each new instance, so we call this algorithm
Adaptive Regularization of Weights (AROW). Because it adjusts its regularization for each
example, AROW is robust to sudden changes in the classification function due to label
noise. We derive a mistake bound, similar in form to the second order perceptron bound,
that does not assume separability. We also provide empirical results demonstrating that
AROW is competitive with state-of-the-art methods and improves upon them significantly
in the presence of label noise.
2 Confidence Weighted Online Learning of Linear Classifiers
Online algorithms operate in rounds. In round $t$ the algorithm receives an instance $x_t \in \mathbb{R}^d$
and applies its current prediction rule to make a prediction $\hat{y}_t \in \mathcal{Y}$. It then receives the true
label $y_t \in \mathcal{Y}$ and suffers a loss $\ell(y_t, \hat{y}_t)$. For binary classification we have $\mathcal{Y} = \{-1, +1\}$ and
use the zero-one loss $\ell_{01}(y_t, \hat{y}_t) = 0$ if $y_t = \hat{y}_t$ and $1$ otherwise. Finally, the algorithm updates
its prediction rule using $(x_t, y_t)$ and proceeds to the next round. In this work we consider
linear prediction rules parameterized by a weight vector $w$: $\hat{y} = h_w(x) = \mathrm{sign}(w \cdot x)$.
Recently Dredze, Crammer and Pereira [6, 5] proposed an algorithmic framework for online learning of binary classification tasks called confidence weighted (CW) learning. CW
learning captures the notion of confidence in a linear classifier by maintaining a Gaussian
distribution over the weights with mean $\mu \in \mathbb{R}^d$ and covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$. The
values $\mu_p$ and $\Sigma_{p,p}$, respectively, encode the learner's knowledge of and confidence in the
weight for feature $p$: the smaller $\Sigma_{p,p}$, the more confidence the learner has in the mean
weight value $\mu_p$. Covariance terms $\Sigma_{p,q}$ capture interactions between weights.

Conceptually, to classify an instance $x$, a CW classifier draws a parameter vector $w \sim N(\mu, \Sigma)$ and predicts the label according to $\mathrm{sign}(w \cdot x)$. In practice, however, it can be
easier to simply use the average weight vector $E[w] = \mu$ to make predictions. This is similar
to the approach taken by Bayes point machines [9], where a single weight vector is used to
approximate a distribution. Furthermore, for binary classification, the prediction given by
the mean weight vector turns out to be Bayes optimal.
CW classifiers are trained according to a passive-aggressive rule [3] that adjusts the distribution at each round to ensure that the probability of a correct prediction is at least
$\eta \in (0.5, 1]$. This yields the update constraint $\Pr[y_t(w \cdot x_t) \ge 0] \ge \eta$. Subject to this
constraint, the algorithm makes the smallest possible change to the hypothesis weight distribution as measured using the KL divergence. This implies the following optimization
problem for each round $t$:
$$(\mu_t, \Sigma_t) = \arg\min_{\mu, \Sigma} D_{\mathrm{KL}}\big(N(\mu, \Sigma)\,\|\,N(\mu_{t-1}, \Sigma_{t-1})\big) \quad \text{s.t.} \quad \Pr_{w \sim N(\mu,\Sigma)}[y_t(w \cdot x_t) \ge 0] \ge \eta$$
Confidence-weighted algorithms have been shown to perform well in practice [5, 6], but they
suffer from several problems. First, the update is quite aggressive, forcing the probability
of predicting each example correctly to be at least $\eta > 1/2$ regardless of the cost to the
objective. This may cause severe over-fitting when labels are noisy; indeed, current analyses
of the CW algorithm [5] assume that the data are linearly separable. Second, they are
designed for classification, and it is not clear how to extend them to alternative settings
such as regression. This is in part because the constraint is written in discrete terms where
the prediction is either correct or not.
We deal with both of these issues, coping more effectively with label noise and generalizing
the advantages of CW learning in an extensible way.
3 Adaptive Regularization Of Weights
We identify two important properties of the CW update rule that contribute to its good
performance but also make it sensitive to label noise. First, the mean parameters $\mu$ are
guaranteed to correctly classify the current training example with margin following each
update. This is because the probability constraint $\Pr[y_t(w \cdot x_t) \ge 0] \ge \eta$ can be written
explicitly as $y_t(\mu \cdot x_t) \ge \phi\sqrt{x_t^\top \Sigma x_t}$, where $\phi > 0$ is a positive constant related to $\eta$.
This aggressiveness yields rapid learning, but given an incorrectly labeled example, it can
also force the learner to make a drastic and incorrect change to its parameters. Second,
confidence, as measured by the inverse eigenvalues of $\Sigma$, increases monotonically with every
update. While it is intuitive that our confidence should grow as we see more data, this
also means that even incorrectly labeled examples causing wild parameter swings result in
artificially increased confidence.
In order to maintain the positives but reduce the negatives of these two properties, we
isolate and soften them. As in CW learning, we maintain a Gaussian distribution over
weight vectors with mean $\mu$ and covariance $\Sigma$; however, we recast the above characteristics
of the CW constraint as regularizers, minimizing the following unconstrained objective on
each round:
$$C(\mu, \Sigma) = D_{\mathrm{KL}}\big(N(\mu, \Sigma)\,\|\,N(\mu_{t-1}, \Sigma_{t-1})\big) + \lambda_1\,\ell_{h^2}(y_t, \mu \cdot x_t) + \lambda_2\, x_t^\top \Sigma x_t, \qquad (1)$$
where $\ell_{h^2}(y_t, \mu \cdot x_t) = (\max\{0, 1 - y_t(\mu \cdot x_t)\})^2$ is the squared-hinge loss suffered using the
weight vector $\mu$ to predict the output for input $x_t$ when the true output is $y_t$. $\lambda_1, \lambda_2 \ge 0$ are
two tradeoff hyperparameters. For simplicity and compactness of notation, in the following
we will assume that $\lambda_1 = \lambda_2 = 1/(2r)$ for some $r > 0$.
The objective balances three desires. First, the parameters should not change radically on
each round, since the current parameters contain information about previous examples (first
term). Second, the new mean parameters should predict the current example with low loss
(second term). Finally, as we see more examples, our confidence in the parameters should
generally grow (third term).
Note that this objective is not simply the dualization of the CW constraint, but a new
formulation inspired by the properties discussed above. Since the loss term depends on $\mu$
only via the inner product $\mu \cdot x_t$, we are able to prove a representer theorem (Sec. 4). While
we use the squared-hinge loss for classification, different loss functions, as long as they are
convex and differentiable in $\mu$, yield algorithms for different settings.¹
To solve the optimization in (1), we begin by writing the KL explicitly:
$$C(\mu, \Sigma) = \frac{1}{2}\log\Big(\frac{\det \Sigma_{t-1}}{\det \Sigma}\Big) + \frac{1}{2}\mathrm{Tr}\big(\Sigma_{t-1}^{-1}\Sigma\big) + \frac{1}{2}(\mu_{t-1} - \mu)^\top \Sigma_{t-1}^{-1}(\mu_{t-1} - \mu) - \frac{d}{2} + \frac{1}{2r}\ell_{h^2}(y_t, \mu \cdot x_t) + \frac{1}{2r}x_t^\top \Sigma x_t \qquad (2)$$
We can decompose the result into two terms: $C_1(\mu)$, depending only on $\mu$, and $C_2(\Sigma)$, depending only on $\Sigma$. The updates to $\mu$ and $\Sigma$ can therefore be performed independently.
The squared-hinge loss yields a conservative (or passive) update for $\mu$ in which the mean
parameters change only when the margin is too small, and we follow CW learning by enforcing a correspondingly conservative update for the confidence parameter $\Sigma$, updating it
only when $\mu$ changes. This results in fewer updates and is easier to analyze. Our update
thus proceeds in two stages.
1. Update the mean parameters: $\mu_t = \arg\min_\mu C_1(\mu)$. (3)
2. If $\mu_t \neq \mu_{t-1}$, update the confidence parameters: $\Sigma_t = \arg\min_\Sigma C_2(\Sigma)$. (4)
We now develop the update equations for (3) and (4) explicitly, starting with the former.
Taking the derivative of $C(\mu, \Sigma)$ with respect to $\mu$ and setting it to zero, we get
$$\mu_t = \mu_{t-1} - \frac{1}{2r}\frac{d}{dz}\ell_{h^2}(y_t, z)\Big|_{z = \mu_t \cdot x_t}\,\Sigma_{t-1}x_t, \qquad (5)$$
assuming $\Sigma_{t-1}$ is non-singular. Substituting the derivative of the squared-hinge loss in (5)
and assuming $1 - y_t(\mu_t \cdot x_t) \ge 0$, we get
$$\mu_t = \mu_{t-1} + \frac{y_t}{r}\big(1 - y_t(\mu_t \cdot x_t)\big)\,\Sigma_{t-1}x_t. \qquad (6)$$
We solve for $\mu_t$ by taking the dot product of each side of the equality with $x_t$ and substituting
back in (6) to obtain the rule
$$\mu_t = \mu_{t-1} + \frac{\max\big(0,\, 1 - y_t\, x_t^\top \mu_{t-1}\big)}{x_t^\top \Sigma_{t-1} x_t + r}\,\Sigma_{t-1}\, y_t\, x_t. \qquad (7)$$
It can be easily verified that (7) satisfies our assumption that $1 - y_t(\mu_t \cdot x_t) \ge 0$.
¹It can be shown that the well-known recursive least squares (RLS) regression algorithm [7] is a special case of AROW with the squared loss.
Input: parameter $r$
Initialize: $\mu_0 = 0$, $\Sigma_0 = I$
For $t = 1, \ldots, T$:
  • Receive a training example $x_t \in \mathbb{R}^d$
  • Compute margin and confidence: $m_t = \mu_{t-1} \cdot x_t$, $v_t = x_t^\top \Sigma_{t-1} x_t$
  • Receive the true label $y_t$, and suffer loss $\ell_t = 1$ if $\mathrm{sign}(m_t) \neq y_t$
  • If $m_t y_t < 1$, update using eqs. (7) & (9):
      $\beta_t = 1/(x_t^\top \Sigma_{t-1} x_t + r)$
      $\alpha_t = \max\big(0,\, 1 - y_t\, x_t^\top \mu_{t-1}\big)\,\beta_t$
      $\mu_t = \mu_{t-1} + \alpha_t\, \Sigma_{t-1}\, y_t\, x_t$
      $\Sigma_t = \Sigma_{t-1} - \beta_t\, \Sigma_{t-1} x_t x_t^\top \Sigma_{t-1}$
Output: weight vector $\mu_T$ and confidence $\Sigma_T$.
Figure 1: The AROW algorithm for online binary classification.
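A compact NumPy sketch of Figure 1 follows; the class and variable names are ours, and this is meant to illustrate the update rules rather than to serve as the authors' reference implementation.

import numpy as np

class AROW:
    """Minimal sketch of the AROW algorithm of Figure 1."""

    def __init__(self, n_features, r=1.0):
        self.r = r
        self.mu = np.zeros(n_features)    # mean weight vector
        self.sigma = np.eye(n_features)   # confidence (covariance) matrix

    def predict(self, x):
        return np.sign(self.mu @ x)

    def update(self, x, y):
        """One online round; x is a feature vector, y is a label in {-1, +1}."""
        m = self.mu @ x                   # margin
        if m * y >= 1.0:                  # margin large enough: no update
            return
        sigma_x = self.sigma @ x
        v = x @ sigma_x                   # confidence x^T Sigma x
        beta = 1.0 / (v + self.r)
        alpha = max(0.0, 1.0 - y * m) * beta
        self.mu += alpha * y * sigma_x                   # eq. (7)
        self.sigma -= beta * np.outer(sigma_x, sigma_x)  # eq. (9)

For the high-dimensional experiments below, one would keep only the diagonal of the confidence matrix (see footnote 2); the full-matrix version is shown here for clarity.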
The update for the confidence parameters is made only if $\mu_t \neq \mu_{t-1}$, that is, if $1 > y_t\, x_t^\top \mu_{t-1}$. In this case, we compute the update of the confidence parameters by setting
the derivative of $C(\mu, \Sigma)$ with respect to $\Sigma$ to zero:
$$\Sigma_t^{-1} = \Sigma_{t-1}^{-1} + \frac{x_t x_t^\top}{r}. \qquad (8)$$
Using the Woodbury identity we can also rewrite the update for $\Sigma$ in non-inverted form:
$$\Sigma_t = \Sigma_{t-1} - \frac{\Sigma_{t-1} x_t x_t^\top \Sigma_{t-1}}{r + x_t^\top \Sigma_{t-1} x_t}. \qquad (9)$$
Note that it follows directly from (8) and (9) that the eigenvalues of the confidence parameters are monotonically decreasing: $\Sigma_t \preceq \Sigma_{t-1}$, $\Sigma_{t-1}^{-1} \preceq \Sigma_t^{-1}$. Pseudocode for AROW
appears in Fig. 1.
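A short numerical check (ours, for illustration) that the inverted form (8) and the Woodbury form (9) agree, and that the sorted eigenvalues indeed shrink monotonically:

import numpy as np

rng = np.random.default_rng(1)
d, r = 4, 1.0
sigma = np.eye(d)
for _ in range(20):
    x = rng.normal(size=d)
    sigma_inv_new = np.linalg.inv(sigma) + np.outer(x, x) / r  # eq. (8)
    sx = sigma @ x
    sigma_new = sigma - np.outer(sx, sx) / (r + x @ sx)        # eq. (9)
    assert np.allclose(np.linalg.inv(sigma_inv_new), sigma_new)
    # sorted eigenvalues decrease pairwise: Sigma_t <= Sigma_{t-1}
    assert np.all(np.linalg.eigvalsh(sigma_new) <= np.linalg.eigvalsh(sigma) + 1e-9)
    sigma = sigma_new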
4 Analysis
We first show that AROW can be kernelized by stating the following representer theorem.
Lemma 1 (Representer Theorem) Assume that $\Sigma_0 = I$ and $\mu_0 = 0$. The mean parameters $\mu_t$ and confidence parameters $\Sigma_t$ produced by updating via (7) and (9) can be written
as linear combinations of the input vectors (resp. outer products of the input vectors with
themselves) with coefficients depending only on inner products of input vectors.
Proof sketch: By induction. The base case follows from the definitions of $\mu_0$ and $\Sigma_0$,
and the induction step follows algebraically from the update rules (7) and (9).
We now prove a mistake bound for AROW. Denote by $\mathcal{M}$ ($M = |\mathcal{M}|$) the set of example
indices for which the algorithm makes a mistake, $y_t\,\mu_{t-1} \cdot x_t \le 0$, and by $\mathcal{U}$ ($U = |\mathcal{U}|$) the
set of example indices for which there is an update but not a mistake, $0 < y_t(\mu_{t-1} \cdot x_t) \le 1$.
Other examples do not affect the behavior of the algorithm and can be ignored. Let
$X_M = \sum_{t\in\mathcal{M}} x_t x_t^\top$, $X_U = \sum_{t\in\mathcal{U}} x_t x_t^\top$, and $X_A = X_M + X_U$.

Theorem 2 For any reference weight vector $u \in \mathbb{R}^d$, the number of mistakes made by
AROW (Fig. 1) is upper bounded by
$$M \le \sqrt{r\|u\|^2 + u^\top X_A u}\;\sqrt{\log\det\Big(I + \frac{1}{r}X_A\Big) + U} + \sum_{t\in\mathcal{M}\cup\mathcal{U}} g_t - U, \qquad (10)$$
where $g_t = \max\big(0,\, 1 - y_t\, u^\top x_t\big)$.
The proof depends on two lemmas; we omit the proof of the first for lack of space.
Lemma 3 Let $\ell_t = \max\big(0,\, 1 - y_t\,\mu_{t-1}^\top x_t\big)$ and $\chi_t = x_t^\top \Sigma_{t-1} x_t$. Then, for every $t \in \mathcal{M} \cup \mathcal{U}$,
$$u^\top \Sigma_t^{-1}\mu_t = u^\top \Sigma_{t-1}^{-1}\mu_{t-1} + \frac{y_t\, u^\top x_t}{r}, \qquad \mu_t^\top \Sigma_t^{-1}\mu_t = \mu_{t-1}^\top \Sigma_{t-1}^{-1}\mu_{t-1} + \frac{\chi_t + r - \ell_t^2 r}{r(\chi_t + r)}.$$

Lemma 4 Let $T$ be the number of rounds. Then
$$\sum_t \frac{\chi_t r}{r(\chi_t + r)} \le \log\det\Sigma_{T+1}^{-1}.$$
Proof: We compute the following quantity:
$$x_t^\top \Sigma_t x_t = x_t^\top\big(\Sigma_{t-1} - \beta_t\,\Sigma_{t-1}x_t x_t^\top \Sigma_{t-1}\big)x_t = \chi_t - \frac{\chi_t^2}{\chi_t + r} = \frac{\chi_t r}{\chi_t + r}.$$
Using Lemma D.1 from [2] we have that
$$1 - \frac{1}{r}\,x_t^\top \Sigma_t x_t = \frac{\det\Sigma_{t-1}^{-1}}{\det\Sigma_t^{-1}}. \qquad (11)$$
Combining, we get
$$\sum_t \frac{\chi_t r}{r(\chi_t + r)} = \sum_t\Big(1 - \frac{\det\Sigma_{t-1}^{-1}}{\det\Sigma_t^{-1}}\Big) \le \sum_t -\log\Big(\frac{\det\Sigma_{t-1}^{-1}}{\det\Sigma_t^{-1}}\Big) = \log\det\Sigma_{T+1}^{-1}.$$
We now prove Theorem 2.
Proof: We iterate the first equality of Lemma 3 to get
$$u^\top \Sigma_T^{-1}\mu_T = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{y_t\, u^\top x_t}{r} \ge \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{1 - g_t}{r} = \frac{M+U}{r} - \frac{1}{r}\sum_{t\in\mathcal{M}\cup\mathcal{U}} g_t. \qquad (12)$$
We iterate the second equality to get
$$\mu_T^\top \Sigma_T^{-1}\mu_T = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{\chi_t + r - \ell_t^2 r}{r(\chi_t + r)} = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{\chi_t}{r(\chi_t + r)} + \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{1 - \ell_t^2}{\chi_t + r}. \qquad (13)$$
Using Lemma 4 we have that the first term of (13) is upper bounded by $\frac{1}{r}\log\det\Sigma_T^{-1}$.
For the second term in (13) we consider two cases. First, if a mistake occurred on example
$t$, then we have that $y_t\, x_t \cdot \mu_{t-1} \le 0$ and $\ell_t \ge 1$, so $1 - \ell_t^2 \le 0$. Second, if
the algorithm made an update (but no mistake) on example $t$, then $0 < y_t\, x_t \cdot \mu_{t-1} \le 1$ and $\ell_t \ge 0$,
thus $1 - \ell_t^2 \le 1$. We therefore have
$$\sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{1 - \ell_t^2}{\chi_t + r} \le \sum_{t\in\mathcal{M}} \frac{0}{\chi_t + r} + \sum_{t\in\mathcal{U}} \frac{1}{\chi_t + r} = \sum_{t\in\mathcal{U}} \frac{1}{\chi_t + r}. \qquad (14)$$
Combining and plugging into the Cauchy-Schwarz inequality
$$u^\top \Sigma_T^{-1}\mu_T \le \sqrt{u^\top \Sigma_T^{-1}u}\;\sqrt{\mu_T^\top \Sigma_T^{-1}\mu_T},$$
we get
$$\frac{M+U}{r} - \frac{1}{r}\sum_{t\in\mathcal{M}\cup\mathcal{U}} g_t \le \sqrt{u^\top \Sigma_T^{-1}u}\;\sqrt{\frac{1}{r}\log\det\Sigma_T^{-1} + \sum_{t\in\mathcal{U}}\frac{1}{\chi_t + r}}.$$
Rearranging the terms and using the fact that $\chi_t \ge 0$ yields
$$M \le \sqrt{r\, u^\top \Sigma_T^{-1}u}\;\sqrt{\log\det\Sigma_T^{-1} + U} + \sum_{t\in\mathcal{M}\cup\mathcal{U}} g_t - U. \qquad (15)$$
By definition,
$$\Sigma_T^{-1} = I + \frac{1}{r}\sum_{t\in\mathcal{M}\cup\mathcal{U}} x_t x_t^\top = I + \frac{1}{r}X_A,$$
so substituting and simplifying completes the proof:
$$M \le \sqrt{r\|u\|^2 + u^\top X_A u}\;\sqrt{\log\det\Big(I + \frac{1}{r}X_A\Big) + U} + \sum_{t\in\mathcal{M}\cup\mathcal{U}} g_t - U.$$
A few comments are in order. First, the two square-root terms of the bound depend on r
in opposite ways: the first is monotonically increasing, while the second is monotonically
decreasing. One could expect to optimize the bound by minimizing over r. However, the
bound also depends on $r$ indirectly via other quantities (e.g., $X_A$), so there is no direct way
to do so. Second, if all the updates are associated with errors, that is, $\mathcal{U} = \emptyset$, then the bound
reduces to the bound of the second-order perceptron [2]. In general, however, the bounds
are not comparable since each depends on the actual runtime behavior of its algorithm.
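For concreteness, the right-hand side of (10) can be evaluated directly from the sets recorded during a run; the sketch below does exactly that, in our notation rather than the paper's.

import numpy as np

def arow_mistake_bound(u, X_mistake, X_update, g, r):
    """Evaluate the right-hand side of the mistake bound (10).

    u         : reference weight vector
    X_mistake : rows are the examples in M (mistake rounds)
    X_update  : rows are the examples in U (margin-update rounds)
    g         : hinge losses g_t of u over all rounds in M and U
    r         : AROW's regularization parameter
    """
    X_A = X_mistake.T @ X_mistake + X_update.T @ X_update
    U = X_update.shape[0]
    first = np.sqrt(r * (u @ u) + u @ X_A @ u)
    sign, logdet = np.linalg.slogdet(np.eye(len(u)) + X_A / r)
    second = np.sqrt(logdet + U)
    return first * second + g.sum() - U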
5 Empirical Evaluation
We evaluate AROW on both synthetic and real data, including several popular datasets
for document classification and optical character recognition (OCR). We compare with
three baselines: Passive-Aggressive (PA), Second Order Perceptron (SOP)², and Confidence-Weighted (CW) learning³.
Our synthetic data are as in [5], but we invert the labels on 10% of the training examples.
(Note that evaluation is still done against the true labels.) Fig. 2(a) shows the online learning
curves for both full and diagonalized versions of the algorithms on these noisy data. AROW
improves over all competitors, and the full version outperforms the diagonal version. Note
that CW-full performs worse than CW-diagonal, as has been observed previously for noisy
data.
We selected a variety of document classification datasets popular in the NLP community,
summarized as follows. Amazon: Product reviews to be classified into domains (e.g.,
books or music) [6]. We created binary datasets by taking all pairs of the six domains (15
datasets). Feature extraction follows [1] (bigram counts). 20 Newsgroups: Approximately
20,000 newsgroup messages partitioned across 20 different newsgroups⁴. We binarized the
corpus following [6] and used binary bag-of-words features (3 datasets). Each dataset has
between 1850 and 1971 instances. Reuters (RCV1-v2/LYRL2004): Over 800,000 manually categorized newswire stories. We created binary classification tasks using pairs of
labels following [6] (3 datasets). Details on document preparation and feature extraction
are given by [10]. Sentiment: Product reviews to be classified as positive or negative. We
used each Amazon product review domain as a sentiment classification task (6 datasets).
Spam: We selected three task A users from the ECML/PKDD Challenge⁵, using bag-of-words to classify each email as spam or ham (3 datasets). For OCR data we binarized two
well-known digit recognition datasets, MNIST⁶ and USPS, into 45 all-pairs problems. We
also created ten one vs. all datasets from the MNIST data (100 datasets total).
Each result for the text datasets was averaged over 10-fold cross-validation. The OCR
experiments used the standard split into training and test sets. Hyperparameters (including
²For the real-world (high-dimensional) datasets, we must drop cross-feature confidence terms by projecting onto the set of diagonal matrices, following the approach of [6]. While this may reduce performance, we make the same approximation for all evaluated algorithms.
³We use the "variance" version developed in [6].
⁴http://people.csail.mit.edu/jrennie/20Newsgroups/
⁵http://ecmlpkdd2006.org/challenge.html
⁶http://yann.lecun.com/exdb/mnist/index.html
[Figure 2 appears here. Panel (a), "synthetic data": mistakes vs. instances for Perceptron, PA, SOP, AROW-full, AROW-diag, CW-full, and CW-diag. Panel (b), "MNIST data": mistakes vs. instances for PA, CW, AROW, and SOP at 0% (left) and 10% (right) label noise.]
Figure 2: Learning curves for AROW (full/diagonal) and baseline methods. (a) 5k synthetic
training examples and 10k test examples (10% noise, 100 runs). (b) MNIST 3 vs. 5 binary
classification task for different amounts of label noise (left: 0 noise, right: 10%).
r for AROW) and the number of online iterations (up to 10) were optimized using a single
randomized run. We used 2000 instances from each dataset unless otherwise noted above.
In order to observe each algorithm's ability to handle non-separable data, we performed each
experiment using various levels of artificial label noise, generated by independently flipping
each binary label with fixed probability.
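The noise process just described amounts to the following sketch (names are ours):

import numpy as np

def flip_labels(y, p, seed=0):
    """Independently flip each binary label in {-1, +1} with probability p."""
    rng = np.random.default_rng(seed)
    flips = rng.random(len(y)) < p
    return np.where(flips, -y, y)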
5.1 Results and Discussion

Our experimental results are summarized in Table 1. AROW outperforms the baselines at all noise levels, but does especially well as noise increases. More detailed results for AROW and CW, the overall best performing baseline, are compared in Fig. 3. AROW and CW are comparable when there is no added noise, with AROW winning the majority of the time. As label noise increases (moving across the rows in Fig. 3) AROW holds up remarkably well. In almost every high noise evaluation, AROW improves over CW (as well as the other baselines, not shown). Fig. 2(b) shows the total number of mistakes (w.r.t. noise-free labels) made by each algorithm during training on the MNIST dataset for 0% and 10% noise. Though absolute performance suffers with noise, the gap between AROW and the baselines increases.

Table 1: Mean rank (out of 4, over all datasets) at different noise levels. A rank of 1 indicates that an algorithm outperformed all the others.

Algorithm | 0.0  | 0.05 | 0.1  | 0.15 | 0.2  | 0.3
AROW      | 1.51 | 1.44 | 1.38 | 1.42 | 1.25 | 1.25
CW        | 1.63 | 1.87 | 1.95 | 2.08 | 2.42 | 2.76
PA        | 2.95 | 2.83 | 2.78 | 2.61 | 2.33 | 2.08
SOP       | 3.91 | 3.87 | 3.89 | 3.89 | 4.00 | 3.91

To help interpret the results, we classify the algorithms evaluated here according to four characteristics: the use of large margin updates, confidence weighting, a design that accommodates non-separable data, and an adaptive per-instance margin (Table 2). While all of these properties can be desirable in different situations, we would like to understand how they interact and achieve high performance while avoiding sensitivity to noise.
Based on the results in Table 1, it is clear that the combination of confidence information and large margin learning is powerful when label noise is low. CW easily outperforms the other baselines in such situations, as it has been shown to do in previous work. However, as noise increases, the separability assumption inherent in CW appears to reduce its performance considerably.

Table 2: Online algorithm properties overview.

Algorithm | Large Margin | Confidence | Non-Separable | Adaptive Margin
PA        | Yes          | No         | Yes           | No
SOP       | No           | No         | Yes           | Yes
CW        | Yes          | Yes        | No            | Yes
AROW      | Yes          | Yes        | Yes           | No
[Figure 3 appears here: scatter plots of AROW accuracy (vertical axis) against CW accuracy (horizontal axis). Top row: text datasets (20news, amazon, reuters, sentiment, spam). Bottom row: OCR datasets (USPS 1 vs. all, USPS all pairs, MNIST 1 vs. all). Columns correspond to 0%, 10%, and 30% label noise.]
Figure 3: Accuracy on text (top) and OCR (bottom) binary classification. Plots compare
performance between AROW and CW, the best performing baseline (Table 1). Markers
above the line indicate superior AROW performance and below the line superior CW performance. Label noise increases from left to right: 0%, 10% and 30%. AROW improves
relative to CW as noise increases.
AROW, by combining the large margin and confidence weighting of CW with a soft update
rule that accommodates non-separable data, matches CW's performance in general while
avoiding degradation under noise. AROW lacks the adaptive margin of CW, suggesting
that this characteristic is not crucial to achieving strong performance. However, we leave
open for future work the possibility that an algorithm with all four properties might have
unique advantages.
6 Related and Future Work
AROW is most similar to the second order perceptron [2]. The SOP performs the same type
of update as AROW, but only when it makes an error. AROW, on the other hand, updates
even when its prediction is correct if there is insufficient margin. Confidence weighted (CW)
[6, 5] algorithms, by which AROW was inspired, update the mean and confidence parameters
simultaneously, while AROW makes a decoupled update and softens the hard constraint of
CW. The AROW algorithm can be seen as a variant of the PA-II algorithm from [3] where
the regularization is modified according to the data.
Hazan [8] describes a framework for gradient descent algorithms with logarithmic regret in
which a quantity similar to $\Sigma_t$ plays an important role. Our algorithm differs in several
ways. First, Hazan [8] considers gradient algorithms, while we derive and analyze algorithms that directly solve an optimization problem. Second, we bound the loss directly, not
the cumulative sum of regularization and loss. Third, the gradient algorithms perform a
projection after making an update (not before) since the norm of the weight vector is kept
bounded.
Ongoing work includes the development and analysis of AROW style algorithms for other
settings, including a multi-class version following the recent extension of CW to multi-class
problems [4]. Our mistake bound can be extended to this case. Applying the ideas behind
AROW to regression problems turns out to yield the well known recursive least squares
(RLS) algorithm, for which AROW offers new bounds (omitted). Finally, while we used the
confidence term $x_t^\top \Sigma x_t$ in (1), we can replace this term with any differentiable, monotonically increasing function $f(x_t^\top \Sigma x_t)$. This generalization may yield additional algorithms.
References
[1] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes
and blenders: Domain adaptation for sentiment classification. In ACL, 2007.
[2] Nicolò Cesa-Bianchi, Alex Conconi, and Claudio Gentile. A second-order perceptron algorithm. SIAM Journal on Computing, 34, 2005.
[3] Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, 2006.
[4] Koby Crammer, Mark Dredze, and Alex Kulesza. Multi-class confidence weighted
algorithms. In Empirical Methods in Natural Language Processing (EMNLP), 2009.
[5] Koby Crammer, Mark Dredze, and Fernando Pereira. Exact convex confidence-weighted
learning. In Neural Information Processing Systems (NIPS), 2008.
[6] Mark Dredze, Koby Crammer, and Fernando Pereira. Confidence-weighted linear classification. In International Conference on Machine Learning, 2008.
[7] Simon Haykin. Adaptive Filter Theory. 1996.
[8] Elad Hazan. Efficient algorithms for online convex optimization and their applications.
PhD thesis, Princeton University, 2006.
[9] Ralf Herbrich, Thore Graepel, and Colin Campbell. Bayes point machines. Journal of Machine Learning Research (JMLR), 1:245-279, 2001.
[10] David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. JMLR, 5:361-397, 2004.
[11] Nick Littlestone. Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
3,144 | 3,849 | Hierarchical Learning of Dimensional Biases in
Human Categorization
Katherine Heller
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ
[email protected]
Adam Sanborn
Gatsby Computational Neuroscience Unit
University College London
London WC1N 3AR
[email protected]
Nick Chater
Cognitive, Perceptual and Brain Sciences
University College London
London WC1E 0AP
[email protected]
Abstract
Existing models of categorization typically represent to-be-classified items as
points in a multidimensional space. While from a mathematical point of view,
an infinite number of basis sets can be used to represent points in this space, the
choice of basis set is psychologically crucial. People generally choose the same
basis dimensions, and have a strong preference to generalize along the axes of
these dimensions, but not "diagonally". What makes some choices of dimension
special? We explore the idea that the dimensions used by people echo the natural
variation in the environment. Specifically, we present a rational model that does
not assume dimensions, but learns the same type of dimensional generalizations
that people display. This bias is shaped by exposing the model to many categories
with a structure hypothesized to be like those which children encounter. The
learning behaviour of the model captures the developmental shift from roughly
"isotropic" for children to the axis-aligned generalization that adults show.
1 Introduction
Given only a few examples of a particular category, people have strong expectations as to which
new examples also belong to that same category. These expectations provide important insights
into how objects are mentally represented. One basic insight into mental representations is that
objects that have similar observed properties will be expected to belong to the same category, and
that expectation decreases as the Euclidean distance between the properties of the objects increases
[1, 2].
The Euclidean distance between observed properties is only part of the story however. Dimensions
also play a strong role in our expectations of categories. People do not always generalize isotropically: the direction of generalization turns out to be centrally important. Specifically, people generalize along particular dimensions, such as size, color, or shape, which are termed separable.
In contrast, dimensions such as hue and saturation, which show isotropic generalization, are termed
integral [3]. An illustration of the importance of separable dimensions is found in the time it takes to learn
categories. If dimensions did not play a strong role in generalization, then rotating a category structure in a parameter space of separable dimensions should not influence how easily it can be learned.
To the contrary, rotating a pair of categories 45 degrees [3, 4] makes it more difficult to learn to
discriminate between them. Similarity rating results also show strong trends of judging objects to
be more similar if they match along separable dimensions [3, 5].
The tendency to generalize categories along separable dimensions is learned over development. On
dimensions such as size and color, children produce generalizations that are more isotropic than
adults [6]. Interestingly, the developmental transition between isotropic and dimensionally biased
generalizations is gradual [7].
What privileges separable dimensions? And why are they acquired over development? One possibility is that there is corresponding variation in real-world categories, and this provides a bias that
learners carry over to laboratory experiments. For example, Rosch et al. [1] identified shape as a key
constant in categories, and we can find categories that are constant along other separable dimensions
as well. For instance, categories of materials such as gold, wood, and ice all display a characteristic
color while being relatively unconstrained as to the shapes and sizes that they take. Size is often
constrained in artifacts such as books and cars, while color can vary across a very wide range.
Models of categorization are able to account for both the isotropic and dimension-based components
of generalization. Classic models of categorization, such as the exemplar and prototype model,
account for these using different mechanisms [8, 9, 10]. Rational models of categorization have
accounted for dimensional biases by assuming that the shapes of categories are aligned with the
axes that people use for generalization [11, 12, 13]. Neither the classic models nor rational models
have investigated how people learn to use the particular dimension basis that they do.
This paper presents a model that learns the dimensional basis that people use for generalization.
We connect these biases with a hypothesis about the structure of categories in the environment and
demonstrate how exposure to these categories during development results in human dimensional
biases. In the next section, we review models of categorization and how they have accounted for dimensional biases. Next, we review current nonparametric Bayesian models of categorization, which
all require that the dimensions be hand-coded. Next, we introduce a new prior for categorization
models that starts without pre-specified dimensions and learns to generalize new categories in the
same way that previous categories varied. We show that without the use of pre-specified dimensions,
we are able to produce generalizations that fit human data. We demonstrate that training the model
on reasonable category structures produces generalization behavior that mimics that of human subjects at various ages. In addition, our trained model predicts the challenging effect of violations of
the triangle inequality for similarity judgments.
2 Modeling Dimensional Biases in Categorization
Models of categorization can be divided into generative and discriminative models; we will focus
on generative models here and leave discriminative models for the discussion. Generative models
of categorization, such as the prototype [8] and exemplar models [9, 10], assume that people learn
category distributions, not just rules for discriminating between categories. In order to make a
judgment of whether a new item belongs to one category or another, a comparison is made of the
new item to the already existing categories, using Bayes rule with a uniform prior on the category
labels,
$$P(c_n = i \mid x_n, \mathbf{x}_{n-1}, \mathbf{c}_{n-1}) = \frac{P(x_n \mid c_n = i, \mathbf{x}_{n-1}, \mathbf{c}_{n-1})}{\sum_j P(x_n \mid c_n = j, \mathbf{x}_{n-1}, \mathbf{c}_{n-1})} \qquad (1)$$
where $x_n$ is the $n$th item and $c_n = j$ assigns that item to category $j$. The remaining items are
collected in the vector $\mathbf{x}_{n-1}$ and the known labels for these items are $\mathbf{c}_{n-1}$.
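As a sketch, eq. (1) amounts to normalizing the per-category likelihoods under a uniform prior (names are ours):

import numpy as np

def category_posterior(likelihoods):
    """Posterior over categories for a new item under eq. (1),
    assuming a uniform prior over category labels.

    likelihoods : array whose entry j is P(x_n | c_n = j, previous items)
    """
    likelihoods = np.asarray(likelihoods, dtype=float)
    return likelihoods / likelihoods.sum()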
For the prototype and exemplar models, the likelihood of an item belonging to a category is based
on the weighted Minkowski power metric¹,
$$\sum_i \Big( \sum_d w^{(d)} \big| x_n^{(d)} - R_i^{(d)} \big|^r \Big)^{1/r} \qquad (2)$$
¹For an exemplar model, $R_i^{(d)}$ is each example in $\mathbf{x}_{n-1}$, while for the prototype model, it is the single average of $\mathbf{x}_{n-1}$.
which computes the absolute value of the power metric between the new example $x_n$ and the category representation $R_i$ for category $i$ on a dimension $d$. Integral dimensions are modeled with
$r = 2$, which results in a Euclidean distance metric. The Euclidean metric has the special property
that changing the basis set for the dimensions of the space does not affect the distances. Any other
choice of $r$ means that the distances are affected by the basis set, and thus it must be chosen to match
human judgments. Separable dimensions are modeled with either $r = 1$, the city-block metric, or
$r < 1$, which no longer obeys the triangle inequality [5].
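A minimal sketch of eq. (2) for a single category representation, with names of our choosing:

import numpy as np

def weighted_minkowski(x, R, w, r):
    """Weighted Minkowski power metric of eq. (2) between a new item x
    and one category representation R (an exemplar or a prototype)."""
    return (w * np.abs(x - R) ** r).sum() ** (1.0 / r)

# r = 2 gives the Euclidean (integral-dimension) case;
# r = 1 gives the city-block (separable-dimension) case.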
Dimensional biases are also modeled in categorization by modifying the weight $w^{(d)}$ for each
dimension. In effect, the weights stretch or shrink the space of stimuli along each dimension
so that some items are closer than others. These dimension weights are assumed to correspond to
attention. To model learning of categories, it is often necessary to provide non-zero weights to only
a few features early in learning and gradually shift to uniform weights late in learning [14].
These generative models of categorization have been developed to account for the different types of
dimensional biases that are displayed by people, but they lack means for learning the dimensions
themselves. Extensions to these classical models learn the dimension weights [15, 16], but can only
learn the weights for pre-specified dimensions. If the chosen basis set did not match that used by
people, then the models would be very poor descriptions of human dimensional biases. A stronger
notion of between-category learning is required.
3 Rational Models of Categorization
Rational models of categorization view categorization behavior as the solution to a problem posed
by the environment: how best to generalize properties from one object to another. Both exemplar
and prototype models can be viewed as restricted versions of rational models of categorization,
which also allow interpolations between these two extreme views of representation. Anderson [11]
proposed a rational model of categorization which modeled the stimuli in a task as a mixture of
clusters. This model treated category labels as features, performing unsupervised learning. The
model was extended to supervised learning so each category is a mixture [17],
$$P(x_\ell \mid \mathbf{x}_{\ell-1}, \mathbf{s}_{\ell-1}) = \sum_{k=1}^{K} P(s_\ell = k \mid \mathbf{s}_{\ell-1}) \, P(x_\ell \mid s_\ell = k, \mathbf{x}_{\ell-1}, \mathbf{s}_{\ell-1}) \quad (3)$$

where x_ℓ is the newest example in a category i and x_{ℓ-1} are the other members of category i. x_ℓ is a mixture over a set of K components, with the prior probability of x_ℓ belonging to a component depending on the component membership of the other examples s_{ℓ-1}.
Instead of a single component or a component for each previous item, the mixture model has the
flexibility to choose an intermediate number of components. To make full use of this flexibility,
Anderson used a nonparametric Chinese Restaurant Process (CRP) prior on the mixing weights,
which allows the flexibility of having an unspecified and potentially infinite number of components
(i.e., clusters) in our mixture model. The mixing proportions in a CRP are based on the number of
items already included in the cluster,
$$P(s_\ell = k \mid \mathbf{s}_{\ell-1}) = \begin{cases} \dfrac{M_k}{\ell - 1 + \alpha} & \text{if } M_k > 0 \ \text{(i.e., $k$ is old)} \\[1ex] \dfrac{\alpha}{\ell - 1 + \alpha} & \text{if } M_k = 0 \ \text{(i.e., $k$ is new)} \end{cases} \quad (4)$$
where M_k is the number of objects assigned to component k, and α is the dispersion parameter.
Using Equation 4, the set of assignments s_{ℓ-1} is built up as a simple sequential stochastic process [18] in which the order of the observations is unimportant [19].
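A small runnable sketch of the CRP mixing proportions in Equation 4 (the helper name and example values are ours):

```python
import numpy as np

def crp_assignment_probs(assignments, alpha):
    """Mixing proportions of Equation 4 for the next item.

    `assignments` holds the component indices of the i-1 previous items;
    returns probabilities over the existing components plus one new one.
    """
    counts = np.bincount(assignments)  # M_k for each existing component k
    denom = len(assignments) + alpha   # (i - 1) + alpha
    return np.append(counts / denom,   # existing ("old") components
                     alpha / denom)    # a brand-new component

# Example: three items in component 0, one in component 1, alpha = 1
print(crp_assignment_probs(np.array([0, 0, 0, 1]), alpha=1.0))
# -> [0.6 0.2 0.2]
```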
The likelihood of belonging to a component depends on the other members of the cluster. In the case of continuous data, the components were modeled as Gaussian distributions,

$$P(x_\ell \mid s_\ell = k, \mathbf{x}_{\ell-1}, \mathbf{s}_{\ell-1}) = \prod_{d} \int_{\mu^{(d)}} \int_{\sigma^{(d)}} \mathcal{N}\big(x_\ell^{(d)}; \mu^{(d)}, \sigma^{(d)}\big) \, P\big(\sigma^{(d)}\big) \, P\big(\mu^{(d)} \mid \sigma^{(d)}\big) \, d\mu^{(d)} \, d\sigma^{(d)} \quad (5)$$
where the mean and variance of each Gaussian distribution are given by μ^(d) and σ^(d), respectively. The prior for the mean was assumed to be Gaussian given the variance, and the prior for the variance was an inverse-gamma distribution. The likelihood distribution for this model assumes a fixed basis
set of dimensions, which must align with the separable dimensions to produce dimensional biases
in generalization.
4 A Prior for Dimensional Learning
The rational model presented above assumes a certain basis set of dimensions, and the likelihood distributions are aligned with these dimensions. To allow the learning of the basis set, we first need multivariate versions of the prior distributions over the mean and variance parameters. For the mean parameter, we will use a multivariate Gaussian distribution, and for the covariance matrix, we will use the multivariate generalization of the inverse-gamma distribution, the inverse-Wishart distribution. The inverse-Wishart distribution has its mode at Λ/(m + D + 1), where Λ is the mean covariance matrix parameter, m is the degrees of freedom, and D is the number of dimensions of the stimulus. A covariance matrix is always diagonal under some rotated version of the initial basis set. This new basis set gives the possible dimensional biases for this cluster.
However, using Gaussian distributions for each cluster, with a unimodal prior on the covariance
matrix, greatly limits the patterns of generalizations that can be produced. For a diagonal covariance
matrix, strong generalization along a particular dimension would be produced if the covariance
matrix has a high variance along that dimension, but low variances along the remaining dimensions.
Thus, this model can learn to strongly generalize along one dimension, but people often make strong
generalizations along multiple dimensions [5], such as in Equation 2 when r < 1. A unimodal
prior on covariance matrices cannot produce this behavior, so we use a mixture of inverse Wishart
distributions as a prior for covariance matrices,
$$p(\Sigma_k \mid \mathbf{u}_k, \boldsymbol{\Lambda}) = \sum_{j=1}^{J} p(u_k = j \mid \mathbf{u}_{k-1}) \, p(\Sigma_k \mid \Lambda_j, u_k = j) \quad (6)$$
where Σ_k is the covariance parameter for the kth component. For simplicity, the component parameters Σ_k are assumed i.i.d. given their class. Λ_j are the parameters of component j, which reflect the expected covariances generated by the jth inverse-Wishart distribution in the mixture. u_k = j is the assignment of parameters Σ_k to component j, and the set of all other component assignments is u_{k-1}. Λ and Σ denote the sets of all Λ_j and Σ_k. The means of categories k have Gaussian priors, which depend on Σ_k, but are otherwise independent of each other.

As before, we will use a nonparametric CRP prior over the mixture weights u_k. We now have two infinite mixtures: one that allows a category to be composed of a mixture of clusters, and one that allows the prior for the covariance matrices to be composed of a mixture of inverse-Wishart distributions. The final piece of the model is to specify p(Λ). We use another inverse-Wishart prior, but with an identity matrix for the mean parameter, so as not to bias the Λ_j components toward a particular dimension basis set. Figure 1 gives a schematic depiction of the model.
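The following sketch shows how a cluster covariance Σ_k could be drawn from this mixture prior, assuming fixed component scale matrices Λ_j and fixed mixing weights; in the full model, both are themselves sampled rather than given:

```python
import numpy as np
from scipy.stats import invwishart

def sample_cluster_covariance(Lambdas, weights, df, rng):
    """Draw Sigma_k from the Equation 6 mixture of inverse-Wisharts."""
    j = rng.choice(len(Lambdas), p=weights)  # pick the assignment u_k = j
    return invwishart.rvs(df=df, scale=Lambdas[j], random_state=rng)

rng = np.random.default_rng(0)
# Two components (D = 2): one elongated along size, one along color
Lambdas = [np.diag([4.0, 0.1]), np.diag([0.1, 4.0])]
Sigma_k = sample_cluster_covariance(Lambdas, [0.5, 0.5], df=3, rng=rng)
```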
5 Learning the Prior
The categories we learn during development often vary along separable dimensions, and people
are sensitive to this variability. The linguistic classification of nouns helps to identify categories that
are fixed on one separable dimension and variable on others. Nouns can be classified into count
nouns and mass nouns. Count nouns refer to objects that are discrete, such as books, shirts, and cars.
Mass nouns are those that refer to objects that appear in continuous quantities, such as grass, steel,
and milk. These two types of nouns show an interesting regularity: count nouns are often relatively
similar in size but vary greatly in color, while mass nouns are often relatively fixed in color but vary
greatly in size.
Smith [7] tested the development of children's dimensionality biases. In this study, experimenters showed participants six green circles that varied in shade and size. The discriminability judgments of adults were used to scale the parameters of the stimuli, so that one step in color caused the same gain in discriminability as one step in size. Participants were asked to group the stimuli into clusters according
Figure 1: Schematic illustration of the hierarchical prior over covariance matrices. The top-level prior is a covariance matrix (shown as the equiprobability curves of a Gaussian) that is not biased towards any dimension. The mid-level priors Λ_j are drawn from an inverse-Wishart distribution centered on the top-level prior. The Λ_j components are used as priors for the covariance matrices for clusters. The plot on the right shows some schematic examples of natural categories that tend to vary along either color or size. The covariance matrices for these clusters are drawn from an inverse-Wishart prior using one of the Λ_j components.
to their preferences, only being told that they should group the "ones that go together". The partitions of the stimuli into clusters that participants produced tended toward three informative patterns,
shown in Figure 2. The Overall Similarity pattern ignores dimension and appears to result from
isotropic similarity. The One-dimensional Similarity pattern is more biased towards generalizing
along separable dimensions than the Overall Similarity pattern. The strongest dimensional biases
are shown by the One-Dimensional Identity pattern, with the dimensional match overriding the close
isotropic similarity between neighboring stimuli.
Children aged 3 years, 4 years, 5 years and adults participated in this experiment. There were ten
participants in each age group, participants clustered eight problems each, and all dimension-aligned
orientations of the stimuli were tested. Figure 2 shows the developmental trend of each of the informative clustering patterns. The tendency to cluster according to Overall Similarity decreased with age, reflecting a reduced influence of isotropic similarity. Clustering according to One-dimensional
Similarity increased from 3-year-olds to 5-year-olds, but adults produced few of these patterns. The
percentage of One-dimensional Identity clusterings increased with age, and was the dominant response for adults, supporting the idea that strong dimensional biases are learned.
We trained our model with clusters that were aligned with the dimensions of size and color. Half of the clusters varied strongly in color and weakly in size, while the other half varied strongly in size and weakly in color. The larger standard deviation of the distribution that generated the training stimuli was somewhat smaller than the largest distance between stimuli in the Smith experiment, while the smaller standard deviation in the distribution that generated the training stimuli was much smaller than the smallest distance between Smith stimuli. The two dispersion parameters were set to 1, the degrees of freedom for all inverse-Wishart distributions were set to the number of dimensions plus 1, and 0.01 was used for the scale factor for the mean parameters of the inverse-Wishart distributions².
Inference in the model was done as a combination of the Gibbs sampling and Metropolis-Hastings algorithms. The assignments of data points to clusters in each class were Gibbs sampled conditioned on the cluster assignments to inverse-Wishart components and the parameters of those components, Λ_j. Following a complete pass of the assignments of data points to clusters, we then Gibbs sampled the assignments of the cluster covariance parameters Σ_k to components of the inverse-Wishart

²The general pattern of the results was only weakly dependent on the parameter settings, but unsupervised learning of the clusters required a small value of the scale factor.
Figure 2: Experiment 2 of Smith [7]. In a free categorization task, the stimuli marked by dots in the
top row were grouped by participants. The three critical partitions are shown as circles in the top
row of plots. The top bar graph displays the developmental trends for each of the critical partitions.
The bottom bar graph displays the trend as the model is trained on a larger number of axis-aligned
clusters.
mixture prior. After a pass of this mid-level sampling, we resampled Λ_j, the parameters of the inverse-Wishart components, and the prior expected means of each cluster. This sampling was done using Metropolis-Hastings, with the non-symmetric proposals made from a separate inverse-Wishart distribution. A large finite Dirichlet distribution was used to approximate p(U). Given the learned Λ and u_k, the predicted probabilities for the Smith experiment were computed exactly.
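As a rough illustration of the Metropolis-Hastings piece with a non-symmetric proposal, here is a generic accept/reject step; `log_post` and `propose` are placeholders for the model-specific posterior density and the inverse-Wishart proposal used for the Λ_j and cluster-mean updates:

```python
import numpy as np

def mh_step(current, log_post, propose, rng):
    """One Metropolis-Hastings update with a non-symmetric proposal.

    `propose(current, rng)` returns (candidate, log_q_forward, log_q_backward);
    the log_q_backward - log_q_forward term corrects for the asymmetry
    of the inverse-Wishart proposal distribution.
    """
    cand, log_q_fwd, log_q_bwd = propose(current, rng)
    log_accept = log_post(cand) - log_post(current) + log_q_bwd - log_q_fwd
    return cand if np.log(rng.uniform()) < log_accept else current
```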
The predictions of our model as a result of training are shown in Figure 2. The model was trained
on 0, 2, 4, 8, and 16 axis-aligned clusters in an unsupervised fashion. For all three patterns,
the model shows the same developmental trajectory as human data. Overall Similarity decreases
with the number of trained categories, One-dimensional Similarity increases and then decreases,
and One-dimensional Identity patterns are overwhelmingly produced by the fully trained model.
The probabilities plotted in the figure are the predicted posterior of only the partitions that exactly
matched the informative patterns, out of all 203 possible partitions, showing that the patterns in
Figure 2 dominated the model?s predictions as they dominated the participants? responses in the free
categorization task.
6 Generalization Gradients
Standard models of categorization, such as the prototype or exemplar model, have a variety of mechanisms for producing the dimensional biases seen in experiments with adults. We propose a very
different explanation for these dimensional biases. In this section we plot generalization gradients,
Figure 3: Generalization gradients of the exemplar model and the posterior predictive distribution
of the model presented in this paper. The dots are the stimuli.
which provide a good feel for how the priors we propose match with the mechanisms used in earlier
models across a variety of conditions.
Generalizations of single items are studied by collecting similarity ratings. In this task, participants
judge the similarity of two items. In standard models of categorization, similarity ratings are modeled mainly by the exponent in the Minkowski power metric (Equation 2). For rational models,
similarity ratings can be modeled as the posterior predictive probability of one item, given the second item [20]. The first two columns of Figure 3 give a comparison between the exemplar model
and the model we propose for similarity ratings. The central dot is a particular stimulus and the
color gradient shows the predicted similarity ratings of all other stimuli. For integral dimensions, a
Euclidean metric (r = 2) is used in the exemplar model, which the model we propose matches if it
has not been trained on dimension-aligned categories.
For separable categories, the exemplar model usually uses a city-block metric (r = 1) [10]. However, experimental evidence shows that dimensions have an even stronger effect than predicted by
a city-block metric. In experiments to test violations of the triangle inequality, Tversky and Gati [5]
showed that the best fitting exponent for similarity data is often r < 1. The model we propose can
produce this type of similarity prediction by using a prior that is a mixture of covariance matrices,
in which each component of the mixture generalizes strongly along one dimension. In a category
of one item, which is the case when making similarity judgments with the posterior predictive distribution, it is uncertain which covariance component best describes the category. This uncertainty
results in a generalization gradient that imitates an exponent of r < 1 using Gaussian distributions.
As a result, our proposed model predicts violations of the triangle inequality if it has been trained
on a set of clusters in which some vary strongly along one dimension and others vary strongly along
another dimension. A comparison between this generalization gradient and the exemplar model is
shown in the second column of Figure 3.
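A small sketch of this effect, with assumed covariance values: averaging the posterior predictive density over two anisotropic components produces the cross-shaped gradient that imitates an exponent of r < 1:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two candidate components: strong generalization along dimension 0 or 1
components = [np.diag([4.0, 0.05]), np.diag([0.05, 4.0])]

def similarity(x, item):
    """Posterior predictive of x given a one-item category at `item`.

    With a single item it is uncertain which component generated the
    category, so the predictive averages over both components.
    """
    return np.mean([multivariate_normal.pdf(x, mean=item, cov=c)
                    for c in components])

grid = np.stack(np.meshgrid(np.linspace(-3, 3, 61),
                            np.linspace(-3, 3, 61)), axis=-1)
gradient = np.apply_along_axis(similarity, -1, grid, np.zeros(2))
```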
The second mechanism for dimensional biases in standard models of categorization is selective
attention. Selective attention is used to describe biases that occur in categorization experiments,
when many items are trained in each category. These biases are implemented in the exemplar model
as weights along each dimension, and early in learning there are usually large weights on a small
number of separable dimensions [14, 21]. Our proposed model does not have a mechanism for
selective attention, but provides a rational explanation for this effect in terms of the strong sampling
assumption [13]. If two items are assumed to come from the same cluster, then generalization tends
to be along a single dimension that has varied during training (third column of Figure 3). However,
if two items are inferred to belong to different clusters, then the generalization gradient corresponds
to additive similarity without selective attention (fourth column of Figure 3).
We have shown that the model we have proposed can reproduce the key generalization gradients of
the exemplar and prototype models. The important difference between our model of dimensional
biases and these standard categorization models is that we learn the basis set for dimensional biases,
assuming these dimensions have proven to be useful for predicting category structure in the past.
Other models must have these dimensions pre-specified. To show that our model is not biased
towards a particular basis set, we rotated the training stimuli 45 degrees in space. The resulting
posterior predictive distributions in Figure 3 extended in the same direction as the rotated training
categories varied.
7 Discussion
The approach to dimensional biases we have outlined in this paper provides a single explanation for
dimensional biases, in contrast to standard models of categorization, such as exemplar and prototype
models. These standard models of categorization assume two distinct mechanisms for producing dimensional biases: a Minkowski metric exponent, and attentional weights for each dimension. In
our approach, biases in both similarity judgments and categorization experiments are produced by
learning covariance matrices that are shared between clusters. For similarity judgments, the single
item does not give information about which covariance mixture component was used to generate it.
This uncertainty produces similarity judgments that would be best fit with a Minkowski exponent
of r < 1. For category judgments, the alignment of the items along a dimension allows the generating covariance mixture component to be inferred, so the judgments will show a bias like that
of attentional weights to the dimensions. The difference between tasks drives the different types of
dimensional biases in our approach.
We propose that people learn more complex cross-category information than most previous approaches do. Attention to dimensions is learned in connectionist models of categorization by finding the best single set of weights for each dimension in a basis set [15, 16], or by cross-category
learning in a Bayesian approach [22]. A more flexible approach is used in associative models of
categorization, which allow for different patterns of generalizations for different items. One associative model used a Hopfield network to predict different generalizations for solid and non-solid
objects [23]. A hierarchical Bayesian model with very similar properties to this associative model
motivated this result from cross-category learning [24]. The key difference between all these models
and our proposal is that they use only a single strong dimensional bias for each item, while we use
multiple latent strong dimensional biases for each item, which is needed for modeling both similarity and categorization dimensional biases with a single explanation. The only previous approach we
are aware of that learns such complex cross-category information is a Bayesian rule-based model of
categorization [25].
The main advantage of our approach over many other models of categorization is that we learn the
basis set of dimensions that can display dimensional biases. Our model learns the basis the same
way people do, from categories in the environment (as opposed to fitting to human similarity or
category judgments). We begin with a feature space of stimuli in which physically similar items
are near to each other. Using a version of the Transformed Dirichlet Process [26], a close relation to
the Hierarchical Dirichlet Process previously proposed as a unifying model of categorization [17],
a mixture of covariance matrices are learned from environmentally plausible training data. Most
other models of categorization, including exemplar models [10], prototype models [8], rule-based
discriminative models [27], as well as hierarchical Bayesian models for learning features [24, 22]
and Bayesian rule-based models [25] all must have a pre-specified basis set.
8 Summary and Conclusions
People generalize categories in two ways: they generalize to stimuli with parameters near to the
category and generalize to stimuli that match along separable dimensions. Existing models of categorization must assume the dimensions to produce human-like generalization performance. Our
model learns these dimensions from the data: starting with an unbiased prior, the dimensions that
categories vary along are learned to be dimensions important for generalization. After training the
model with categories intended to mirror those learned during development, our model reproduces
the trajectory of generalization biases as children grow into adults. Using this type of approach, we
hope to better tie models of human generalization to the natural world to which we belong.
References
[1] E. Rosch, C. B. Mervis, W. D. Gray, D. M. Johnson, and P. Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8:382–439, 1976.
[2] R. N. Shepard. Toward a universal law of generalization for psychological science. Science, 237:1317–1323, 1987.
[3] W. R. Garner. The processing of information and structure. Erlbaum, Hillsdale, NJ, 1974.
[4] J. K. Kruschke. Human category learning: implications for backpropagation models. Connection Science, 5:3–36, 1993.
[5] A. Tversky and I. Gati. Similarity, separability and the triangular inequality. Psychological Review, 93:3–22, 1982.
[6] L. B. Smith and D. G. Kemler. Developmental trends in free classification: Evidence for a new conceptualization of perceptual development. Journal of Experimental Child Psychology, 24:279–298, 1977.
[7] L. B. Smith. A model of perceptual classification in children and adults. Psychological Review, 96:125–144, 1989.
[8] S. K. Reed. Pattern recognition and categorization. Cognitive Psychology, 3:393–407, 1972.
[9] D. L. Medin and M. M. Schaffer. Context theory of classification learning. Psychological Review, 85:207–238, 1978.
[10] R. M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115:39–57, 1986.
[11] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409–429, 1991.
[12] D. J. Navarro. From natural kinds to complex categories. In Proceedings of CogSci, pages 621–626, Mahwah, NJ, 2006. Lawrence Erlbaum.
[13] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24:629–641, 2001.
[14] M. K. Johansen and T. J. Palmeri. Are there representational shifts in category learning? Cognitive Psychology, 45:482–553, 2002.
[15] J. K. Kruschke. ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99:22–44, 1992.
[16] B. C. Love, D. L. Medin, and T. M. Gureckis. SUSTAIN: A network model of category learning. Psychological Review, 111:309–332, 2004.
[17] T. L. Griffiths, K. R. Canini, A. N. Sanborn, and D. J. Navarro. Unifying rational models of categorization via the hierarchical Dirichlet process. In R. Sun and N. Miyake, editors, Proceedings of CogSci, 2007.
[18] D. Blackwell and J. MacQueen. Ferguson distributions via Pólya urn schemes. The Annals of Statistics, 1:353–355, 1973.
[19] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII–1983, pages 1–198. Springer, Berlin, 1985.
[20] T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. Topics in semantic representation. Psychological Review, 114:211–244, 2007.
[21] R. M. Nosofsky and S. R. Zaki. Exemplar and prototype models revisited: response strategies, selective attention, and stimulus generalization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28:924–940, 2002.
[22] A. Perfors and J. B. Tenenbaum. Learning to learn categories. In Proceedings of CogSci, 2009.
[23] E. Colunga and L. B. Smith. From the lexicon to expectations about kinds: a role for associative learning. Psychological Review, 112, 2005.
[24] C. Kemp, A. Perfors, and J. B. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10:307–321, 2007.
[25] N. D. Goodman, J. B. Tenenbaum, J. Feldman, and T. L. Griffiths. A rational analysis of rule-based concept learning. Cognitive Science, 32:108–154, 2008.
[26] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Describing visual scenes using transformed Dirichlet processes. In Neural Information Processing Systems (NIPS), 2005.
[27] R. M. Nosofsky and T. J. Palmeri. A rule-plus-exception model for classifying objects in continuous-dimension spaces. Psychonomic Bulletin & Review, 5:345–369, 1998.
| 3849 |@word version:4 judgement:1 stronger:2 proportion:1 gradual:1 covariance:22 solid:2 carry:1 initial:1 ecole:1 interestingly:1 past:1 existing:3 current:1 must:5 john:1 exposing:1 additive:1 partition:5 informative:3 shape:6 plot:3 overriding:1 grass:1 newest:1 generative:4 half:2 item:25 isotropic:8 smith:8 mental:1 provides:3 lexicon:1 preference:2 mathematical:1 along:22 fitting:2 behavioral:1 introduce:1 acquired:1 expected:3 behavior:3 themselves:1 nor:1 love:1 roughly:1 brain:2 shirt:1 freeman:1 begin:1 matched:1 mass:3 what:2 kind:2 unspecified:1 developed:1 finding:1 nj:2 multidimensional:1 collecting:1 tie:1 exactly:2 uk:11 unit:1 appear:1 producing:2 ice:1 before:1 engineering:1 tends:1 limit:1 interpolation:1 ap:1 gati:2 plus:2 discriminability:2 studied:1 challenging:1 range:1 medin:2 obeys:1 block:3 backpropagation:1 cb2:1 probabilit:1 universal:1 pre:5 griffith:4 cannot:1 close:2 context:1 influence:2 exposure:1 attention:8 go:1 starting:1 kruschke:2 miyake:1 simplicity:1 assigns:1 insight:2 rule:7 steyvers:1 classic:2 notion:1 variation:2 feel:1 annals:1 play:2 us:1 hypothesis:1 trend:5 recognition:1 predicts:2 observed:2 role:3 bottom:1 capture:1 sun:1 decrease:3 environment:4 developmental:7 asked:1 tversky:2 trained:9 depend:1 weakly:3 predictive:4 learner:1 basis:20 triangle:4 easily:1 hopfield:1 represented:1 various:1 distinct:1 describe:1 london:4 perfors:2 cogsci:3 posed:1 larger:2 plausible:1 otherwise:1 triangular:1 statistic:1 echo:1 final:1 associative:4 advantage:1 ucl:3 propose:6 neighboring:1 aligned:8 mixing:2 flexibility:3 representational:1 gold:1 description:1 cluster:26 regularity:1 r1:1 produce:8 generating:1 categorization:40 adam:1 leave:1 object:11 rotated:3 depending:1 help:1 ac:3 exemplar:17 polya:1 strong:11 implemented:1 predicted:4 judge:1 come:1 direction:2 modifying:1 stochastic:1 centered:1 human:12 material:1 hillsdale:1 require:1 behaviour:1 generalization:35 clustered:1 extension:1 stretch:1 lawrence:1 cognition:1 predict:1 vary:8 early:2 smallest:1 torralba:1 label:3 sensitive:1 largest:1 grouped:1 equiprobability:1 city:3 alcove:1 weighted:1 hope:1 always:2 gaussian:8 exchangeability:1 overwhelmingly:1 chater:2 linguistic:1 ax:2 focus:1 likelihood:4 mainly:1 greatly:3 contrast:2 inference:2 dependent:1 membership:1 ferguson:1 typically:1 relation:1 selective:5 reproduce:1 transformed:2 overall:3 classification:4 orientation:1 flexible:1 exponent:5 development:7 constrained:1 special:2 noun:9 aware:1 inversewishart:1 shaped:1 having:1 sampling:4 unsupervised:3 mimic:1 others:3 stimulus:21 connectionist:2 xiii:1 few:3 composed:2 gamma:2 intended:1 privilege:1 freedom:2 possibility:1 alignment:1 flour:1 violation:3 mixture:17 extreme:1 wc1n:1 implication:1 integral:3 closer:1 necessary:1 euclidean:5 old:3 rotating:2 circle:2 plotted:1 uncertain:1 psychological:9 mk:3 instance:1 modeling:2 increased:2 earlier:1 column:4 ar:1 assignment:7 deviation:2 uniform:2 johnson:1 erlbaum:2 connect:1 discriminating:1 told:1 together:1 nosofsky:3 reflect:1 central:1 opposed:1 choose:2 wishart:13 cognitive:5 book:2 account:3 de:2 caused:1 depends:1 piece:1 view:3 start:1 bayes:1 participant:8 variance:6 characteristic:1 judgment:10 correspond:1 identify:1 generalize:10 bayesian:8 identification:1 garner:1 produced:6 trajectory:2 drive:1 classified:2 strongest:1 tended:1 rational:12 gain:1 experimenter:1 sampled:2 color:10 car:2 dimensionality:1 reflecting:1 appears:1 zaki:1 supervised:1 specify:1 response:3 sustain:1 done:2 shrink:1 strongly:6 
anderson:3 just:1 crp:3 hand:1 hastings:2 lack:1 mode:1 artifact:1 gray:1 effect:4 hypothesized:1 concept:1 unbiased:1 equality:2 assigned:1 symmetric:1 laboratory:1 semantic:1 during:4 complete:1 demonstrate:2 mentally:1 psychonomic:1 shepard:1 belong:4 refer:2 cambridge:2 gibbs:3 feldman:1 unconstrained:1 outlined:1 dot:3 similarity:29 longer:1 depiction:1 align:1 dominant:1 multivariate:3 posterior:5 showed:2 aldous:1 belongs:1 termed:2 dimensionally:1 certain:1 inequality:3 seen:1 somewhat:1 full:1 unimodal:2 multiple:2 match:7 cross:4 divided:1 coded:1 schematic:3 prediction:3 basic:2 expectation:5 metric:9 physically:1 psychologically:1 represent:2 proposal:2 addition:1 participated:1 decreased:1 aged:1 grow:1 sudderth:1 crucial:1 goodman:1 biased:4 navarro:2 subject:1 tend:1 member:2 contrary:1 near:2 intermediate:1 variety:2 affect:1 fit:2 restaurant:1 psychology:6 identified:1 idea:2 prototype:10 cn:8 shift:3 whether:1 six:1 motivated:1 generally:1 useful:1 gureckis:1 unimportant:1 nonparametric:3 hue:1 mid:2 ten:1 tenenbaum:5 category:65 reduced:1 generate:1 percentage:1 judging:1 neuroscience:1 discrete:1 affected:1 group:3 key:3 drawn:2 changing:1 neither:1 graph:2 wood:1 year:5 inverse:15 uncertainty:2 fourth:1 reasonable:1 resampled:1 centrally:1 display:5 occur:1 ri:2 scene:1 dominated:2 minkowski:4 separable:15 performing:1 urn:1 relatively:3 department:1 conceptualization:1 according:3 combination:1 poor:1 belonging:3 across:2 smaller:3 describes:1 separability:1 metropolis:2 making:1 gradually:1 restricted:1 equation:3 previously:1 turn:1 count:3 mechanism:6 describing:1 needed:1 generalizes:1 eight:1 hierarchical:7 encounter:1 assumes:2 remaining:2 top:5 clustering:3 dirichlet:5 saint:1 unifying:2 wc1e:1 chinese:1 yz:1 braem:1 classical:1 rosch:2 already:2 quantity:1 strategy:1 diagonal:2 gradient:8 sanborn:2 kth:1 distance:7 separate:1 attentional:2 berlin:1 topic:2 collected:1 kemp:1 toward:3 willsky:1 assuming:2 modeled:7 reed:1 illustration:2 relationship:1 palmeri:2 difficult:1 katherine:1 potentially:1 steel:1 observation:1 dispersion:2 macqueen:1 finite:1 displayed:1 supporting:1 canini:1 extended:1 variability:1 varied:6 schaffer:1 inferred:2 rating:6 pair:1 required:2 specified:5 blackwell:1 connection:1 nick:1 johansen:1 learned:8 mervis:1 nip:1 adult:9 able:2 bar:2 usually:2 pattern:15 saturation:1 built:1 green:1 including:1 explanation:4 memory:1 power:3 critical:2 natural:5 treated:1 predicting:1 metric1:1 nth:1 scheme:1 axis:3 imitates:1 heller:2 review:11 prior:27 law:1 fully:1 interesting:1 proven:1 age:4 degree:4 editor:1 story:1 classifying:1 row:2 summary:1 diagonally:1 accounted:2 free:3 jth:1 bias:35 allow:3 wide:1 bulletin:1 absolute:1 curve:1 dimension:74 xn:11 transition:1 world:2 computes:1 ignores:1 made:2 adaptive:1 approximate:1 reproduces:1 environmentally:1 assumed:4 discriminative:3 continuous:2 latent:1 why:1 learn:12 mj:1 nature:1 investigated:1 complex:3 did:2 main:1 mahwah:1 child:8 fashion:1 gatsby:3 similiarity:1 perceptual:3 late:1 third:1 learns:6 shade:1 showing:1 pz:1 evidence:2 sequential:1 importance:1 milk:1 mirror:1 conditioned:1 overhypotheses:1 generalizing:1 explore:1 visual:1 isotropically:1 springer:1 corresponds:1 viewed:1 identity:4 marked:1 towards:3 shared:1 included:1 infinite:3 specifically:2 discriminate:1 pas:2 tendency:2 experimental:4 boyes:1 e:1 exception:1 college:2 people:17 tested:2 |
3,145 | 385 | A Connectionist Learning Control
Architecture for Navigation
Jonathan R. Bachrach
Department of Computer and Information Science
University of Massachusetts
Amherst, MA 01003
Abstract
A novel learning control architecture is used for navigation. A sophisticated test-bed is used to simulate a cylindrical robot with a sonar belt
in a planar environment. The task is short-range homing in the presence of obstacles. The robot receives no global information and assumes
no comprehensive world model. Instead the robot receives only sensory
information which is inherently limited. A connectionist architecture is
presented which incorporates a large amount of a priori knowledge in the
form of hard-wired networks, architectural constraints, and initial weights.
Instead of hard-wiring static potential fields from object models, myarchitecture learns sensor-based potential fields, automatically adjusting them
to avoid local minima and to produce efficient homing trajectories. It does
this without object models using only sensory information. This research
demonstrates the use of a large modular architecture on a difficult task.
1 OVERVIEW
I present a connectionist learning control architecture tailored for simulated short-range homing in the presence of obstacles. The kinematics of a cylindrical robot
(shown in Figure 1) moving in a planar environment is simulated. The robot has
wheels that propel it independently and simultaneously in both the x and y directions with respect to a fixed orientation. It can move up to one radius per discrete
time step. The robot has a 360 degree sensor belt with 16 distance sensors and
16 grey-scale sensors evenly placed around its perimeter. These 32 values form the
robot's view.
Figure 1: Simulated robot.

Figure 2: Navigation simulator.

Figure 2 is a display created by the navigation simulator. The bottom portion of the figure shows a bird's-eye view of the robot's environment. In this display,
bold circle represents the robot's uhome" position, with the radius line indicating
the home orientation. The other circle with radius line reprelent. the robot's current position and orientation. The top panel shows the grey-scale view from the
home position, and the next panel down shows the grey-scale view from the robot's
current position. For better viewing, the distance and grey-scale sensor values are
superimposed, and the height of the profile is 1/distance instead of distance. Thus
as the robot gets closer to objects they get taller, and when the robot gets farther
away from objects they get shorter in the display.
The robot cannot move through nor "see" through obstacles (i.e., obstacles are
opaque). The task is for the robot to align itself with the home position from
arbitrary starting positions in the environment while not colliding with obstacles.
This task is performed using only the sensory information-the robot does not have
access to the bird's-eye view.
Figure 3: The potential field method. This figure shows a contour plot of a terrain created using potential fields generated from object models. The contour diagram shows level curves where the grey level of the line depicts the height of the line: the maximum height is depicted in black, and the minimum height is depicted in white.

This is a difficult control task. The sensory information forms a high-dimensional
continuous space, and successful homing generally requires a nonlinear mapping
from this space to the space of real-valued actions. Further, training networks
is not easily achieved on this space. The robot assumes no comprehensive world
model and receives no global information, but receives only sensory information
that is inherently limited. Furthermore, it is difficult to reach home using random
exploration thereby making simple trial-and-error learning intractable. In order to
handle this task an architecture was designed that facilitates the coding of domain
knowledge in the form of hard-wired networks, architectural constraints, and initial
weights.
1.1 POTENTIAL FIELDS
Before I describe the architecture, I briefly discuss a more traditional technique for
navigation that uses potential fields. This technique involves building explicit object
models representing the extent and position of objects in the robot's environment. Repelling potential fields are then placed around obstacles using the object models,
and an attracting potential field is placed on the goal. This can be visualized as
a terrain where the global minimum is located at the goal, and where there are
bumps around the obstacles. The robot goes home by descending the terrain. The
contour diagram in Figure 3 shows such a terrain. The task is to go from the top
room to the bottom through the door. Unfortunately, there can be local minima.
In this environment there are two prime examples of minima: the right-hand wall
between the home location and the upper room-opposing forces exactly counteract
each other to produce a local minimum in the right-hand side of the upper room,
and the doorway-the repelling fields on the door frame create an insurmountable
bump in the center of the door.
In contrast, my technique learns a sensor-based potential field model. Instead of
hard-wiring static potential fields from the object models, the proposed architecture
Figure 4: Control architectures. [Diagram: the 2-Net architecture maps a state/action pair to an evaluation; the 3-Net architecture maps a state to an evaluation.]
learns potential fields, automatically adjusting them to both avoid local minima
and produce efficient trajectories. Furthermore, it does this without object models,
using only sensory information.
1.2 2-NET/3-NET ARCHITECTURES
I shall begin by introducing two existing architectures: the 2-net and 3-net architectures. These architectures were proposed by Werbos [9] and Jordan and Jacobs [4]
and are also based on the ideas of Barto, Sutton, Watkins [2, 1, 8], and Jordan
and Rumelhart [3]. The basic idea is to learn an evaluation function and then
train the controller by differentiating this function with respect to the controller
weights. These derivatives indicate how to change the controller's weights in order
to minimize or maximize the evaluation function. The 2-net architecture consists
of a controller and a critic. The controller maps states to actions, and the 2-net
critic maps state/action pairs to evaluations. The 3-net architecture consists of a
controller, a forward model, and a critic. The controller maps states to actions, the
forward model maps state/action pairs to next states, and the 3-net critic maps
states to evaluations.
It has been said that it is easier to train a 2-net architecture because there is no
forward model [5]. The forward model might be very complicated and difficult to
train. With a 2-net architecture, only a 2-net critic is trained based on state/action
input pairs. But what if a forward model already exists or even a priori knowledge
exists to aid in explicit coding of a forward model? Then it might be simpler to
use the 3-net architecture because the 3-net critic would be easier to train. It is
based on state-only input and not state/action pairs, and it includes more domain
knowledge.
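The following sketch illustrates the 3-net gradient flow with toy linear modules and a quadratic critic; all matrices here are illustrative stand-ins for the actual networks. The controller weights are adjusted by differentiating the critic's evaluation of the forward model's prediction:

```python
import numpy as np

def controller_gradient(A, B, s, g):
    """Gradient of the critic J(s') = 0.5 * ||s' - g||^2 w.r.t. controller B."""
    sa = np.concatenate([s, B @ s])  # state/action input to the forward model
    s_next = A @ sa                  # forward model's predicted next state
    dJ_ds_next = s_next - g          # critic gradient at the predicted state
    A_u = A[:, len(s):]              # forward-model block acting on the action
    dJ_du = A_u.T @ dJ_ds_next       # backpropagate through the forward model
    return np.outer(dJ_du, s)        # dJ/dB, backpropagated into the controller

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 4))          # forward model weights (illustrative)
B = rng.normal(size=(2, 2))          # controller weights (illustrative)
s, g = np.array([1.0, 0.0]), np.zeros(2)
B -= 0.1 * controller_gradient(A, B, s, g)  # descend the critic's evaluation
```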
Figure 5: My architecture. [Block diagram: the Controller maps the Current View to an Action; the Forward Model maps the view/action pair to the Next View; the Homing Critic and the Obstacle Avoidance Critic map the next view to the Straight-Line Path Length and the Obstacle Avoidance Path Length, which sum to the Total Path Length Home.]
2 THE NAVIGATION ARCHITECTURE
The navigation architecture is a version of a 3-net architecture tailored for navigation, where the state is the robot's view and the evaluation is an estimate of the
length of the shortest path for the robot's current location to home. It consists of a
controller, a forward model, and two adaptive critics. The controller maps views to
actions, the forward model maps view/action pairs to next views, the homing critic
maps views to path length home using a straight line trajectory, and the obstacle
avoidance critic maps views to additional path length needed to avoid obstacles. The
sum of the outputs of the homing critic and the obstacle avoidance critic equals the
total path length home. The forward model is a hard-wired differentiable network
incorporating geometrical knowledge about the sensors and space. Both critics and
the controller are radial basis networks using Gaussian hidden units.
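A minimal sketch of how the two critics could share radial-basis features and sum to the total path-length estimate; the feature centers, widths, and weight vectors are assumed placeholders:

```python
import numpy as np

def rbf_features(view, centers, widths):
    """Gaussian radial-basis features of a sensor view (illustrative)."""
    d2 = ((view[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def path_length_home(view, w_homing, w_avoid, centers, widths):
    """Total estimate = homing critic output + obstacle avoidance output."""
    phi = rbf_features(view, centers, widths)
    return w_homing @ phi + w_avoid @ phi
```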
2.1 TRAINING
Initially the controller is trained to produce straight-line trajectories home. With
the forward model fixed, the homing critic and the controller are trained using dead-reckoning. Dead-reckoning is a technique for keeping track of the distance home
by accumulating the incremental displacements. This distance provides a training
signal for training the homing critic via supervised learning.
Next, the controller is further trained to avoid obstacles. In this phase, the obstacle
avoidance critic is added while the weights of the homing critic and forward model
are frozen. Using the method of temporal differences [7] the controller and obstacle
avoidance critic are adjusted so that the expected path length decreases by one
radius per time step. After training, the robot takes successive one-radius steps
toward its home location.

Figure 6: An example.
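A sketch of the temporal-difference update used in the training phase above for the obstacle avoidance critic, assuming linear critics over radial-basis features like those sketched earlier; the learning rate and names are illustrative:

```python
def td_update(w_avoid, w_homing, phi, phi_next, lr=0.1):
    """One temporal-difference step for the obstacle avoidance critic.

    The homing critic (w_homing) stays frozen; values estimate path
    length home, and the target is a decrease of one radius per step.
    """
    v = w_homing @ phi + w_avoid @ phi
    v_next = w_homing @ phi_next + w_avoid @ phi_next
    td_error = (1.0 + v_next) - v         # one-radius step cost plus next value
    return w_avoid + lr * td_error * phi  # adjust only the avoidance critic
```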
3 AN EXAMPLE
I applied this architecture to the environment shown in Figure 2. Figure 6 shows
the results of training. The left panel is a contour plot of the output of the homing
critic and reflects only the straight-line distance to the home location. The right
panel is a contour plot of the combined output of the homing critic and the obstacle
avoidance critic and now reflects the actual path length home. After training the
robot is able to form efficient homing trajectories starting from anywhere in the
environment.
4 DISCUSSION
The homing task represents a difficult control task requiring the solution of a number
of problems. The first problem is that there is a small chance of getting home using
random exploration. The solution to this problem involves building a nominal initial
controller that chooses straight-line trajectories home. Next, because the state
space is high-dimensional and continuous it is impractical to evenly place Gaussian
units, and it is difficult to learn continuous mappings using logistic hidden units.
Instead I use Gaussian units whose initial weights are determined using expectation
maximization. This is a soft form of competitive learning [6] that, in my case,
creates spatially tuned units. Next, the forward model for the robot's environments
is very difficult to learn. For this reason I used a hard-wired forward model whose
performance is good in a wide range of environments. Here the philosophy is to
learn only things that are difficult to hard-wire. Finally, the 2-net critic is difficult
to train. Therefore, I split the 2-net critic into a 3-net critic and a hard-wired
forward model.
There are many directions for extending this work. First, I would like to apply this
architecture to real robots using realistic sensors and dynamics. Secondly, I want to look at long-range homing. Lastly, I would like to investigate navigation tasks
involving multiple goals.
Acknowledgements
This material is based upon work supported by the Air Force Office of Scientific
Research, Bolling AFB, under Grant AFOSR-89-0526 and by the National Science
Foundation under Grant ECS-8912623. I would like to thank Richard Durbin, David
Rumelhart, Andy Barto, and the UMass Adaptive Networks Group for their help
on this project.
References
[1] A. G. Barto, R. S. Sutton, and C. Watkins. Sequential decision problems and neural networks. In David S. Touretzky, editor, Advances in Neural Information Processing Systems, P.O. Box 50490, Palo Alto, CA 94303, 1989. Morgan Kaufmann Publishers.
[2] Andrew G. Barto, Richard S. Sutton, and Charles W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(5), September/October 1983.
[3] M. I. Jordan and D. E. Rumelhart. Supervised learning with a distal teacher. 1989. Submitted to: Cognitive Science.
[4] Michael I. Jordan and Robert Jacobs. Learning to control an unstable system with forward modeling. In David S. Touretzky, editor, Advances in Neural Information Processing Systems, P.O. Box 50490, Palo Alto, CA 94303, 1989. Morgan Kaufmann Publishers.
[5] Sridhar Mahadevan and Jonathan Connell. Automatic programming of behavior-based robots using reinforcement learning. Technical report, IBM Research Division, T.J. Watson Research Center, Box 704, Yorktown Heights, NY 10598, 1990.
[6] S. J. Nowlan. A generative framework for unsupervised learning. Denver, Colorado, 1989. IEEE Conference on Neural Information Processing Systems: Natural and Synthetic.
[7] Richard Sutton. Learning to predict by the methods of temporal differences. Technical report, GTE Laboratories, 1987.
[8] Richard S. Sutton. Temporal Credit Assignment in Reinforcement Learning. PhD thesis, Department of Computer and Information Science, University of Massachusetts at Amherst, 1984.
[9] Paul J. Werbos. Reinforcement learning over time. In T. Miller, R. S. Sutton, and P. J. Werbos, editors, Neural Networks for Control. The MIT Press, Cambridge, MA, in press.
3,146 | 3,850 | Ensemble Nystr?om Method
Sanjiv Kumar
Google Research
New York, NY
[email protected]
Mehryar Mohri
Courant Institute and Google Research
New York, NY
[email protected]
Ameet Talwalkar
Courant Institute of Mathematical Sciences
New York, NY
[email protected]
Abstract
A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices. We introduce a new family of algorithms based on mixtures of Nyström approximations, ensemble Nyström algorithms, that yield more accurate low-rank approximations than the standard Nyström method. We give a detailed study of variants of these algorithms based on simple averaging, an exponential weight method, or regression-based methods. We also present a theoretical analysis of these algorithms, including novel error bounds guaranteeing a better convergence rate than the standard Nyström method. Finally, we report results of extensive experiments with several data sets containing up to 1M points demonstrating the significant improvement over the standard Nyström approximation.
1 Introduction
Modern learning problems in computer vision, natural language processing, computational biology, and other areas are often based on large data sets of tens of thousands to millions of training instances. But several standard learning algorithms such as support vector machines (SVMs) [2, 4], kernel ridge regression (KRR) [14], kernel principal component analysis (KPCA) [15], manifold learning [13], or other kernel-based algorithms do not scale to such orders of magnitude. Even the storage of the kernel matrix is an issue at this scale since it is often not sparse and the number of entries is extremely large. One solution to deal with such large data sets is to use an approximation of the kernel matrix. As shown by [18], and later by [6, 17, 19], low-rank approximations of the kernel matrix using the Nyström method can provide an effective technique for tackling large-scale data sets with no significant decrease in performance.

This paper deals with very large-scale applications where the sample size can reach millions of instances. This motivates our search for further improved low-rank approximations that can scale to such orders of magnitude and generate accurate approximations. We show that a new family of algorithms based on mixtures of Nyström approximations, ensemble Nyström algorithms, yields more accurate low-rank approximations than the standard Nyström method. Moreover, these ensemble algorithms naturally fit within a distributed computing environment, where their computational cost is roughly the same as that of the standard Nyström method. This issue is of great practical significance given the prevalence of distributed computing frameworks for handling large-scale learning problems.

The remainder of this paper is organized as follows. Section 2 gives an overview of the Nyström low-rank approximation method and describes our ensemble Nyström algorithms. We describe several variants of these algorithms, including one based on simple averaging of p Nyström solutions,
an exponential weight method, and a regression method which consists of estimating the mixture parameters of the ensemble using a few columns sampled from the matrix. In Section 3, we present a theoretical analysis of ensemble Nyström algorithms, namely bounds on the reconstruction error for both the Frobenius norm and the spectral norm. These novel generalization bounds guarantee a better convergence rate for these algorithms in comparison to the standard Nyström method. Section 4 reports the results of extensive experiments with these algorithms on several data sets containing up to 1M points, comparing different variants of our ensemble Nyström algorithms and demonstrating the performance improvements gained over the standard Nyström method.
2 Algorithm
We first give a brief overview of the Nyström low-rank approximation method, introduce the notation used in the following sections, and then describe our ensemble Nyström algorithms.
2.1 Standard Nyström method
We adopt a notation similar to that of [5, 9] and other previous work. The Nyström approximation of a symmetric positive semidefinite (SPSD) matrix K is based on a sample of m ≪ n columns of K [5, 18]. Let C denote the n × m matrix formed by these columns and W the m × m matrix consisting of the intersection of these m columns with the corresponding m rows of K. The columns and rows of K can be rearranged based on this sampling so that K and C can be written as follows:

$$\mathbf{K} = \begin{bmatrix} \mathbf{W} & \mathbf{K}_{21}^{\top} \\ \mathbf{K}_{21} & \mathbf{K}_{22} \end{bmatrix} \quad \text{and} \quad \mathbf{C} = \begin{bmatrix} \mathbf{W} \\ \mathbf{K}_{21} \end{bmatrix}. \quad (1)$$

Note that W is also SPSD since K is SPSD. For a uniform sampling of the columns, the Nyström method generates a rank-k approximation K̃ of K for k ≤ m defined by:

$$\tilde{\mathbf{K}} = \mathbf{C} \mathbf{W}_k^{+} \mathbf{C}^{\top} \approx \mathbf{K}, \quad (2)$$

where W_k is the best rank-k approximation of W for the Frobenius norm, that is, W_k = argmin_{rank(V)=k} ‖W − V‖_F, and W_k^+ denotes the pseudo-inverse of W_k [7]. W_k^+ can be derived from the singular value decomposition (SVD) of W, W = UΣU^⊤, where U is orthonormal and Σ = diag(σ_1, . . . , σ_m) is a real diagonal matrix with σ_1 ≥ · · · ≥ σ_m ≥ 0. For k ≤ rank(W), it is given by W_k^+ = Σ_{i=1}^{k} σ_i^{−1} U_i U_i^⊤, where U_i denotes the ith column of U. Since the running time complexity of SVD is O(m³) and O(nmk) is required for multiplication with C, the total complexity of the Nyström approximation computation is O(m³ + nmk).
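A compact Python sketch of the standard Nyström approximation just described; NumPy only, with an illustrative random test kernel, and assuming the top k singular values of W are nonzero:

```python
import numpy as np

def nystrom(K, m, k, rng):
    """Rank-k Nystrom approximation of an SPSD matrix K from m columns."""
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)  # uniform, without replacement
    C = K[:, idx]                               # n x m sampled columns
    W = C[idx, :]                               # m x m intersection block
    U, s, _ = np.linalg.svd(W)                  # W is SPSD: W = U diag(s) U^T
    Uk, sk = U[:, :k], s[:k]
    W_k_pinv = (Uk / sk) @ Uk.T                 # pseudo-inverse of rank-k W
    return C @ W_k_pinv @ C.T                   # K_tilde = C W_k^+ C^T

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
K = X @ X.T                                     # an SPSD test kernel
K_tilde = nystrom(K, m=50, k=10, rng=rng)
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))
```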
2.2 Ensemble Nyström algorithm
The main idea behind our ensemble Nyström algorithm is to treat each approximation generated by the Nyström method for a sample of m columns as an expert and to combine p ≥ 1 such experts to derive an improved hypothesis, typically more accurate than any of the original experts.

The learning set-up is defined as follows. We assume a fixed kernel function K: X × X → ℝ that can be used to generate the entries of a kernel matrix K. The learner receives a sample S of mp columns randomly selected from matrix K uniformly without replacement. S is decomposed into p subsamples S_1, . . . , S_p. Each subsample S_r, r ∈ [1, p], contains m columns and is used to define a rank-k Nyström approximation K̃_r. Dropping the rank subscript k in favor of the sample index r, K̃_r can be written as K̃_r = C_r W_r^+ C_r^⊤, where C_r and W_r denote the matrices formed from the columns of S_r and W_r^+ is the pseudo-inverse of the rank-k approximation of W_r. The learner further receives a sample V of s columns used to determine the weight μ_r ∈ ℝ attributed to each expert K̃_r. Thus, the general form of the approximation of K generated by the ensemble Nyström algorithm is

$$\tilde{\mathbf{K}}^{\mathrm{ens}} = \sum_{r=1}^{p} \mu_r \tilde{\mathbf{K}}_r. \quad (3)$$
The mixture weights μ_r can be defined in many ways. The most straightforward choice consists of assigning equal weight to each expert, μ_r = 1/p, r ∈ [1, p]. This choice does not require the additional sample V, but it ignores the relative quality of each Nyström approximation. Nevertheless,
this simple uniform method already generates a solution superior to any one of the approximations K̃_r used in the combination, as we shall see in the experimental section.
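Building on the `nystrom` sketch above, the ensemble approximation of Equation 3 with uniform weights μ_r = 1/p can be sketched as follows; for simplicity each expert here draws its own m columns independently, rather than splitting one sample S of mp columns:

```python
def ensemble_nystrom(K, p, m, k, rng, weights=None):
    """Combine p Nystrom experts; uniform weights mu_r = 1/p by default."""
    experts = [nystrom(K, m, k, rng) for _ in range(p)]
    if weights is None:
        weights = np.full(p, 1.0 / p)
    K_ens = sum(w * Kr for w, Kr in zip(weights, experts))
    return K_ens, experts
```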
Another method, the exponential weight method, consists of measuring the reconstruction error $\hat\epsilon_r$ of each expert $\tilde{K}_r$ over the validation sample V and defining the mixture weight as $\mu_r = \exp(-\eta\hat\epsilon_r)/Z$, where $\eta > 0$ is a parameter of the algorithm and Z a normalization factor ensuring that the vector $\mu = (\mu_1, \ldots, \mu_p)$ belongs to the simplex $\Delta$ of $\mathbb{R}^p$: $\Delta = \{\mu \in \mathbb{R}^p : \mu \ge 0 \wedge \sum_{r=1}^{p}\mu_r = 1\}$. The choice of the mixture weights here is similar to those used in the weighted-majority algorithm [11]. Let $K^V$ denote the matrix formed by using the samples from V as its columns and let $\tilde{K}_r^V$ denote the submatrix of $\tilde{K}_r$ containing the columns corresponding to the columns in V. The reconstruction error $\hat\epsilon_r = \|\tilde{K}_r^V - K^V\|$ can be directly computed from these matrices.
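Continuing the same sketch, the exponential weight method amounts to a softmax over validation errors; the choice eta = 1.0 and the validation columns are placeholders.

    def exp_weights(errors, eta):
        # mu_r = exp(-eta * err_r) / Z, a point on the simplex.
        w = np.exp(-eta * np.asarray(errors))
        return w / w.sum()

    V = rng.choice(200, size=10, replace=False)    # validation columns
    errors = [np.linalg.norm(nystrom(K, idx, 10)[:, V] - K[:, V])
              for idx in samples]
    mu_exp = exp_weights(errors, eta=1.0)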
A more general class of methods consists of using the sample V to train the mixture weights $\mu_r$ to optimize a regression objective function such as the following:

$$\min_{\mu}\; \lambda\|\mu\|_2^2 + \Big\|\sum_{r=1}^{p}\mu_r\tilde{K}_r^V - K^V\Big\|_F^2, \qquad (4)$$

where $K^V$ denotes the matrix formed by the columns of the samples S and V and $\lambda > 0$. This can be viewed as a ridge regression objective function and admits a closed-form solution. We will refer to this method as the ridge regression method.
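The closed-form ridge solution for (4) can be sketched as follows, still reusing objects from the sketches above; the regularization value lam = 0.1 is an arbitrary placeholder, and the vectorized least-squares formulation is one natural way to realize the objective, not necessarily the authors' exact implementation.

    def ridge_weights(experts_V, K_V, lam):
        # Closed-form minimizer of lam*||mu||^2 + ||sum_r mu_r Kt_r^V - K^V||_F^2.
        A = np.stack([Kt.ravel() for Kt in experts_V], axis=1)   # (n*s) x p
        b = K_V.ravel()
        p = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)

    experts_V = [nystrom(K, idx, 10)[:, V] for idx in samples]
    mu_ridge = ridge_weights(experts_V, K[:, V], lam=0.1)
    K_ens_ridge = ensemble_nystrom(K, samples, k=10, mu=mu_ridge)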
The total complexity of the ensemble Nyström algorithm is $O(pm^3 + pmkn + C_\mu)$, where $C_\mu$ is the cost of computing the mixture weights, $\mu$, used to combine the p Nyström approximations. In general, the cubic term dominates the complexity since the mixture weights can be computed in constant time for the uniform method, in $O(psn)$ for the exponential weight method, or in $O(p^3 + pms)$ for the ridge regression method. Furthermore, although the ensemble Nyström algorithm requires p times more space and CPU cycles than the standard Nyström method, these additional requirements are quite reasonable in practice. The space requirement is still manageable for even large-scale applications given that p is typically O(1) and m is usually a very small percentage of n (see Section 4 for further details). In terms of CPU requirements, we note that our algorithm can be easily parallelized, as all p experts can be computed simultaneously. Thus, with a cluster of p machines, the running time complexity of this algorithm is nearly equal to that of the standard Nyström algorithm with m samples.
3 Theoretical analysis

We now present a theoretical analysis of the ensemble Nyström method for which we use as tools some results previously shown by [5] and [9]. As in [9], we shall use the following generalization of McDiarmid's concentration bound to sampling without replacement [3].
Theorem 1. Let $Z_1, \ldots, Z_m$ be a sequence of random variables sampled uniformly without replacement from a fixed set of $m + u$ elements $Z$, and let $\phi\colon Z^m \to \mathbb{R}$ be a symmetric function such that for all $i \in [1, m]$ and for all $z_1, \ldots, z_m \in Z$ and $z_1', \ldots, z_m' \in Z$, $|\phi(z_1, \ldots, z_m) - \phi(z_1, \ldots, z_{i-1}, z_i', z_{i+1}, \ldots, z_m)| \le c$. Then, for all $\epsilon > 0$, the following inequality holds:

$$\Pr\big[\phi - \mathbb{E}[\phi] \ge \epsilon\big] \le \exp\Big(\frac{-2\epsilon^2}{\alpha(m,u)\,c^2}\Big), \qquad (5)$$

where $\alpha(m,u) = \frac{mu}{m+u-1/2}\cdot\frac{1}{1-1/(2\max\{m,u\})}$.
We define the selection matrix corresponding to a sample of m columns as the matrix $S \in \mathbb{R}^{n\times m}$ defined by $S_{ii} = 1$ if the ith column of K is among those sampled, $S_{ij} = 0$ otherwise. Thus, $C = KS$ is the matrix formed by the columns sampled. Since K is SPSD, there exists $X \in \mathbb{R}^{N\times n}$ such that $K = X^\top X$. We shall denote by $K_{\max}$ the maximum diagonal entry of K, $K_{\max} = \max_i K_{ii}$, and by $d_{\max}^{K}$ the distance $\max_{ij}\sqrt{K_{ii} + K_{jj} - 2K_{ij}}$.
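For concreteness, a small sketch of the selection matrix S and the identity C = KS, reusing K and idx from the first sketch; the explicit index handling is our own convention.

    def selection_matrix(n, idx):
        # S in {0,1}^{n x m}: one 1 per column, at the row of the sampled index.
        S = np.zeros((n, len(idx)))
        S[idx, np.arange(len(idx))] = 1.0
        return S

    S = selection_matrix(200, idx)
    assert np.allclose(K @ S, K[:, idx])   # C = KS recovers the sampled columns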
3.1 Error bounds for the standard Nyström method

The following theorem gives an upper bound on the norm-2 error of the Nyström approximation of the form $\|K-\tilde{K}\|_2/\|K\|_2 \le \|K-K_k\|_2/\|K\|_2 + O(1/\sqrt{m})$ and an upper bound on the Frobenius error of the Nyström approximation of the form $\|K-\tilde{K}\|_F/\|K\|_F \le \|K-K_k\|_F/\|K\|_F + O(1/m^{\frac14})$. Note that these bounds are similar to the bounds in Theorem 3 in [9], though in this work we give new results for the spectral norm and present a tighter Lipschitz condition (9), the latter of which is needed to derive tighter bounds in Section 3.2.
Theorem 2. Let $\tilde{K}$ denote the rank-k Nyström approximation of K based on m columns sampled uniformly at random without replacement from K, and $K_k$ the best rank-k approximation of K. Then, with probability at least $1-\delta$, the following inequalities hold for any sample of size m:

$$\|K-\tilde{K}\|_2 \le \|K-K_k\|_2 + \frac{2n}{\sqrt{m}}\,K_{\max}\Big[1 + \sqrt{\tfrac{n-m}{n-1/2}\,\tfrac{1}{\beta(m,n)}\log\tfrac{1}{\delta}}\; d_{\max}^{K}/K_{\max}^{1/2}\Big]$$

$$\|K-\tilde{K}\|_F \le \|K-K_k\|_F + \Big[\tfrac{64k}{m}\Big]^{\frac14}\, nK_{\max}\Big[1 + \sqrt{\tfrac{n-m}{n-1/2}\,\tfrac{1}{\beta(m,n)}\log\tfrac{1}{\delta}}\; d_{\max}^{K}/K_{\max}^{1/2}\Big]^{\frac12},$$

where $\beta(m,n) = 1 - \frac{1}{2\max\{m,\,n-m\}}$.
Proof. To bound the norm-2 error of the Nyström method in the scenario of sampling without replacement, we start with the following general inequality given by [5][proof of Lemma 4]:

$$\|K-\tilde{K}\|_2 \le \|K-K_k\|_2 + 2\|XX^\top - ZZ^\top\|_2, \qquad (6)$$

where $Z = \sqrt{n/m}\,XS$. We then apply the McDiarmid-type inequality of Theorem 1 to $\phi(S) = \|XX^\top - ZZ^\top\|_2$. Let $S'$ be a sampling matrix selecting the same columns as S except for one, and let $Z'$ denote $\sqrt{n/m}\,XS'$. Let $z$ and $z'$ denote the only differing columns of $Z$ and $Z'$, then

$$|\phi(S') - \phi(S)| \le \|z'z'^\top - zz^\top\|_2 = \|(z'-z)z'^\top + z(z'-z)^\top\|_2 \qquad (7)$$
$$\le 2\|z'-z\|_2 \max\{\|z\|_2, \|z'\|_2\}. \qquad (8)$$

Columns of Z are those of X scaled by $\sqrt{n/m}$. The norm of the difference of two columns of X can be viewed as the norm of the difference of two feature vectors associated to K and thus can be bounded by $d_{\max}^{K}$. Similarly, the norm of a single column of X is bounded by $K_{\max}^{1/2}$. This leads to the following inequality:

$$|\phi(S') - \phi(S)| \le \frac{2n}{m}\, d_{\max}^{K} K_{\max}^{1/2}. \qquad (9)$$

The expectation of $\phi$ can be bounded as follows:

$$\mathbb{E}[\phi] = \mathbb{E}\big[\|XX^\top - ZZ^\top\|_2\big] \le \mathbb{E}\big[\|XX^\top - ZZ^\top\|_F\big] \le \frac{n}{\sqrt{m}}\,K_{\max}, \qquad (10)$$

where the last inequality follows Corollary 2 of [9]. The inequalities (9) and (10) combined with Theorem 1 give a bound on $\|XX^\top - ZZ^\top\|_2$ and yield the statement of the theorem.

The following general inequality holds for the Frobenius error of the Nyström method [5]:

$$\|K-\tilde{K}\|_F^2 \le \|K-K_k\|_F^2 + \sqrt{64k}\,\|XX^\top - ZZ^\top\|_F\; nK_{\max}. \qquad (11)$$

Bounding the term $\|XX^\top - ZZ^\top\|_F$ as in the norm-2 case and using the concentration bound of Theorem 1 yields the result of the theorem.

3.2 Error bounds for the ensemble Nyström method

The following error bounds hold for ensemble Nyström methods based on a convex combination of Nyström approximations.
Theorem 3. Let S be a sample of pm columns drawn uniformly at random without replacement from K, decomposed into p subsamples of size m, $S_1, \ldots, S_p$. For $r \in [1, p]$, let $\tilde{K}_r$ denote the rank-k Nyström approximation of K based on the sample $S_r$, and let $K_k$ denote the best rank-k approximation of K. Then, with probability at least $1-\delta$, the following inequalities hold for any sample S of size pm and for any $\mu$ in the simplex $\Delta$ and $\tilde{K}^{ens} = \sum_{r=1}^{p}\mu_r\tilde{K}_r$:

$$\|K-\tilde{K}^{ens}\|_2 \le \|K-K_k\|_2 + \frac{2n}{\sqrt{m}}\,K_{\max}\Big[1 + \mu_{\max}p^{\frac12}\sqrt{\tfrac{n-pm}{n-1/2}\,\tfrac{1}{\beta(pm,n)}\log\tfrac{1}{\delta}}\; d_{\max}^{K}/K_{\max}^{1/2}\Big]$$

$$\|K-\tilde{K}^{ens}\|_F \le \|K-K_k\|_F + \Big[\tfrac{64k}{m}\Big]^{\frac14}\, nK_{\max}\Big[1 + \mu_{\max}p^{\frac12}\sqrt{\tfrac{n-pm}{n-1/2}\,\tfrac{1}{\beta(pm,n)}\log\tfrac{1}{\delta}}\; d_{\max}^{K}/K_{\max}^{1/2}\Big]^{\frac12},$$

where $\beta(pm,n) = 1 - \frac{1}{2\max\{pm,\,n-pm\}}$ and $\mu_{\max} = \max_{r=1}^{p}\mu_r$.
Proof. For $r \in [1, p]$, let $Z_r = \sqrt{n/m}\,XS_r$, where $S_r$ denotes the selection matrix corresponding to the sample $S_r$. By definition of $\tilde{K}^{ens}$ and the upper bound on $\|K-\tilde{K}_r\|_2$ already used in the proof of Theorem 2, the following holds:

$$\|K-\tilde{K}^{ens}\|_2 = \Big\|\sum_{r=1}^{p}\mu_r(K-\tilde{K}_r)\Big\|_2 \le \sum_{r=1}^{p}\mu_r\|K-\tilde{K}_r\|_2 \qquad (12)$$
$$\le \sum_{r=1}^{p}\mu_r\Big[\|K-K_k\|_2 + 2\|XX^\top - Z_rZ_r^\top\|_2\Big] \qquad (13)$$
$$= \|K-K_k\|_2 + 2\sum_{r=1}^{p}\mu_r\|XX^\top - Z_rZ_r^\top\|_2. \qquad (14)$$
We apply Theorem 1 to $\phi(S) = \sum_{r=1}^{p}\mu_r\|XX^\top - Z_rZ_r^\top\|_2$. Let $S'$ be a sample differing from S by only one column. Observe that changing one column of the full sample S changes only one subsample $S_r$ and thus only one term $\mu_r\|XX^\top - Z_rZ_r^\top\|_2$. Thus, in view of the bound (9) on the change to $\|XX^\top - Z_rZ_r^\top\|_2$, the following holds:

$$|\phi(S') - \phi(S)| \le \frac{2n}{m}\,\mu_{\max}\, d_{\max}^{K} K_{\max}^{1/2}, \qquad (15)$$
The expectation of $\phi$ can be straightforwardly bounded by $\mathbb{E}[\phi(S)] = \sum_{r=1}^{p}\mu_r\,\mathbb{E}[\|XX^\top - Z_rZ_r^\top\|_2] \le \sum_{r=1}^{p}\mu_r\frac{n}{\sqrt{m}}K_{\max} = \frac{n}{\sqrt{m}}K_{\max}$ using the bound (10) for a single expert. Plugging in this upper bound and the Lipschitz bound (15) in Theorem 1 yields our norm-2 bound for the ensemble Nyström method.
For the Frobenius error bound, using the convexity of the Frobenius norm square $\|\cdot\|_F^2$ and the general inequality (11), we can write

$$\|K-\tilde{K}^{ens}\|_F^2 = \Big\|\sum_{r=1}^{p}\mu_r(K-\tilde{K}_r)\Big\|_F^2 \le \sum_{r=1}^{p}\mu_r\|K-\tilde{K}_r\|_F^2 \qquad (16)$$
$$\le \sum_{r=1}^{p}\mu_r\Big[\|K-K_k\|_F^2 + \sqrt{64k}\,\|XX^\top - Z_rZ_r^\top\|_F\; nK_{\max}\Big] \qquad (17)$$
$$= \|K-K_k\|_F^2 + \sqrt{64k}\,\sum_{r=1}^{p}\mu_r\|XX^\top - Z_rZ_r^\top\|_F\; nK_{\max}. \qquad (18)$$

The result follows by the application of Theorem 1 to $\phi(S) = \sum_{r=1}^{p}\mu_r\|XX^\top - Z_rZ_r^\top\|_F$ in a way similar to the norm-2 case.
The bounds of Theorem 3 are similar in form to those of Theorem 2. However, the bounds for the ensemble Nyström are tighter than those for any Nyström expert based on a single sample of size m even for a uniform weighting. In particular, for $\mu_r = 1/p$, the last term of the ensemble bound for norm-2 is smaller by a factor of $\mu_{\max}p^{\frac12} = 1/\sqrt{p}$.

4 Experiments

In this section, we present experimental results that illustrate the performance of the ensemble Nyström method. We work with the datasets listed in Table 1. In Section 4.1, we compare the performance of various methods for calculating the mixture weights ($\mu_r$). In Section 4.2, we show the effectiveness of our technique on large-scale datasets. Throughout our experiments, we measure the accuracy of a low-rank approximation $\tilde{K}$ by calculating the relative error in Frobenius and spectral norms; that is, if we let $\circ \in \{2, F\}$, then we calculate the following quantity:

$$\%\,\text{error} = \frac{\|K-\tilde{K}\|_\circ}{\|K\|_\circ} \times 100. \qquad (19)$$
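The metric (19) is immediate to compute; in NumPy, ord=2 gives the spectral norm and ord='fro' the Frobenius norm (again reusing K and K_ens from the earlier sketches).

    def percent_error(K, K_tilde, ord):
        # Relative error (19); ord=2 is the spectral norm, ord='fro' the Frobenius.
        return 100.0 * np.linalg.norm(K - K_tilde, ord) / np.linalg.norm(K, ord)

    print(percent_error(K, K_ens, 'fro'), percent_error(K, K_ens, 2))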
Dataset        Type of data    # Points (n)  # Features (d)  Kernel
PIE-2.7K [16]  face images     2731          2304            linear
MNIST [10]     digit images    4000          784             linear
ESS [8]        proteins        4728          16              RBF
AB-S [1]       abalones        4177          8               RBF
DEXT [1]       bag of words    2000          20000           linear
SIFT-1M [12]   image features  1M            128             RBF

Table 1: A summary of the datasets used in the experiments.
4.1 Ensemble Nyström with various mixture weights

In this set of experiments, we show results for our ensemble Nyström method using different techniques to choose the mixture weights as discussed in Section 2.2. We first experimented with the first five datasets shown in Table 1. For each dataset, we fixed the reduced rank to k = 50, and set the number of sampled columns to m = 3% n.¹ Furthermore, for the exponential and the ridge regression variants, we sampled an additional set of s = 20 columns and used an additional 20 columns (s′) as a hold-out set for selecting the optimal values of $\eta$ and $\lambda$. The number of approximations, p, was varied from 2 to 30. As a baseline, we also measured the minimal and mean percent error across the p Nyström approximations used to construct $\tilde{K}^{ens}$. For the Frobenius norm, we also calculated the performance when using the optimal $\mu$, that is, we used least-squares regression to find the best possible choice of combination weights for a fixed set of p approximations by setting s = n.
The results of these experiments are presented in Figure 1 for the Frobenius norm and in Figure 2 for the spectral norm. These results clearly show that the ensemble Nyström performance is significantly better than any of the individual Nyström approximations. Furthermore, the ridge regression technique is the best of the proposed techniques and generates nearly the optimal solution in terms of the percent error in Frobenius norm. We also observed that when s is increased to approximately 5% to 10% of n, linear regression without any regularization performs about as well as ridge regression for both the Frobenius and spectral norm. Figure 3 shows this comparison between linear regression and ridge regression for varying values of s using a fixed number of experts (p = 10). Finally we note that the ensemble Nyström method tends to converge very quickly, and the most significant gain in performance occurs as p increases from 2 to 10.
4.2 Large-scale experiments

Next, we present an empirical study of the effectiveness of the ensemble Nyström method on the SIFT-1M dataset in Table 1 containing 1 million data points. As is common practice with large-scale datasets, we worked on a cluster of several machines for this dataset. We present results comparing the performance of the ensemble Nyström method, using both uniform and ridge regression mixture weights, with that of the best and mean performance across the p Nyström approximations used to construct $\tilde{K}^{ens}$. We also make comparisons with a recently proposed k-means based sampling technique for the Nyström method [19]. Although the k-means technique is quite effective at generating informative columns by exploiting the data distribution, the cost of performing k-means becomes expensive for even moderately sized datasets, making it difficult to use in large-scale settings. Nevertheless, in this work, we include the k-means method in our comparison, and we present results for various subsamples of the SIFT-1M dataset, with n ranging from 5K to 1M.
To fairly compare these techniques, we performed "fixed-time" experiments. To do this, we first searched for an appropriate m such that the percent error for the ensemble Nyström method with ridge weights was approximately 10%, and measured the time required by the cluster to construct this approximation. We then allotted an equal amount of time (within 1 second) for the other techniques, and measured the quality of the resulting approximations. For these experiments, we set k = 50 and p = 10, based on the results from the previous section. Furthermore, in order to speed up computation on this large dataset, we decreased the size of the validation and hold-out sets to s = 2 and s′ = 2, respectively.
¹ Similar results (not reported here) were observed for other values of k and m as well.
[Figure 1 omitted: five panels (PIE-2.7K, MNIST, ESS, AB-S, DEXT); x-axis: number of base learners (p); y-axis: percent error (Frobenius); curves: mean b.l., best b.l., uni, exp, ridge, optimal.]

Figure 1: Percent error in Frobenius norm for ensemble Nyström method using uniform ("uni"), exponential ("exp"), ridge ("ridge") and optimal ("optimal") mixture weights as well as the best ("best b.l.") and mean ("mean b.l.") performance of the p base learners used to create the ensemble approximation.
[Figure 2 omitted: five panels (PIE-2.7K, MNIST, ESS, AB-S, DEXT); x-axis: number of base learners (p); y-axis: percent error (spectral); curves: mean b.l., best b.l., uni, exp, ridge.]

Figure 2: Percent error in spectral norm for ensemble Nyström method using various mixture weights as well as the best and mean performance of the p approximations used to create the ensemble approximation. Legend entries are the same as in Figure 1.
The results of this experiment, presented in Figure 4, clearly show that the ensemble Nyström method is the most effective technique given a fixed amount of time. Furthermore, even with the small values of s and s′, ensemble Nyström with ridge-regression weighting outperforms the uniform ensemble Nyström method. We also observe that due to the high computational cost of k-means for large datasets, the k-means approximation does not perform well in this "fixed-time" experiment. It generates an approximation that is worse than the mean standard Nyström approximation and its performance increasingly deteriorates as n approaches 1M.
[Figure 3 omitted: five panels (PIE-2.7K, MNIST, ESS, AB-S, DEXT); x-axis: relative size of validation set; y-axis: percent error (Frobenius); curves: no-ridge, ridge, optimal.]

Figure 3: Comparison of percent error in Frobenius norm for the ensemble Nyström method with p = 10 experts with weights derived from linear regression ("no-ridge") and ridge regression ("ridge"). The dotted line indicates the optimal combination. The relative size of the validation set equals s/n × 100%.
[Figure 4 omitted: x-axis: size of dataset (n), from 10^4 to 10^6; y-axis: percent error (Frobenius); curves: mean b.l., best b.l., uni, ridge, kmeans.]

Figure 4: Large-scale performance comparison with SIFT-1M dataset. Given fixed computational time, ensemble Nyström with ridge weights tends to outperform other techniques.
Finally, we note that although the space requirements are 10 times greater for ensemble Nyström in comparison to standard Nyström (since p = 10 in this experiment), the space constraints are nonetheless quite reasonable. For instance, when working with the full 1M points, the ensemble Nyström method with ridge regression weights only required approximately 1% of the columns of K to achieve a percent error of 10%.
5 Conclusion

We presented a novel family of algorithms, ensemble Nyström algorithms, for accurate low-rank approximations in large-scale applications. The consistent and significant performance improvement across a number of different data sets, along with the fact that these algorithms can be easily parallelized, suggests that these algorithms can benefit a variety of applications where kernel methods are used. Interestingly, the algorithmic solution we have proposed for scaling these kernel learning algorithms to larger scales is itself derived from the machine learning idea of ensemble methods. We also gave the first theoretical analysis of these methods. We expect that finer error bounds and theoretical guarantees will further guide the design of the ensemble algorithms and help us gain a better insight about the convergence properties of our algorithms.
References

[1] A. Asuncion and D. Newman. UCI machine learning repository, 2007.
[2] B. E. Boser, I. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In COLT, volume 5, pages 144-152, 1992.
[3] C. Cortes, M. Mohri, D. Pechyony, and A. Rastogi. Stability of transductive regression algorithms. In ICML, 2008.
[4] C. Cortes and V. N. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[5] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153-2175, 2005.
[6] C. Fowlkes, S. Belongie, F. Chung, and J. Malik. Spectral grouping using the Nyström method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2), 2004.
[7] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 2nd edition, 1983.
[8] A. Gustafson, E. Snitkin, S. Parker, C. DeLisi, and S. Kasif. Towards the identification of essential genes using targeted genome sequencing and comparative analysis. BMC Genomics, 7:265, 2006.
[9] S. Kumar, M. Mohri, and A. Talwalkar. Sampling techniques for the Nyström method. In AISTATS, pages 304-311, 2009.
[10] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 2009.
[11] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60:91-110, 2004.
[13] J. C. Platt. Fast embedding of sparse similarity graphs. In NIPS, 2004.
[14] C. Saunders, A. Gammerman, and V. Vovk. Ridge regression learning algorithm in dual variables. In Proceedings of ICML '98, pages 515-521, 1998.
[15] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299-1319, 1998.
[16] T. Sim, S. Baker, and M. Bsat. The CMU PIE database. In Conference on Automatic Face and Gesture Recognition, 2002.
[17] A. Talwalkar, S. Kumar, and H. Rowley. Large-scale manifold learning. In CVPR, 2008.
[18] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, pages 682-688, 2000.
[19] K. Zhang, I. Tsang, and J. Kwok. Improved Nyström low-rank approximation and error analysis. In ICML, pages 273-297, 2008.
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations

Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, ¹Guillermo Sapiro and Lawrence Carin
Department of Electrical and Computer Engineering
Duke University, Durham, NC 27708-0291, USA
¹Department of Electrical and Computer Engineering
University of Minnesota, Minneapolis, MN 55455, USA
{mz1,hc44,jwp4,lr,lcarin}@ee.duke.edu, {guille}@umn.edu
Abstract
Non-parametric Bayesian techniques are considered for learning dictionaries for
sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the
dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also
considered to exploit structure within an image. The proposed method can learn
a sparse dictionary in situ; training images may be exploited if available, but they
are not required. Further, the noise variance need not be known, and can be nonstationary. Another virtue of the proposed method is that sequential inference can
be readily employed, thereby allowing scaling to large images. Several example
results are presented, using both Gibbs and variational Bayesian inference, with
comparisons to other state-of-the-art approaches.
1 Introduction
There has been significant recent interest in sparse signal expansions in several settings. For example, such algorithms as the support vector machine (SVM) [1], the relevance vector machine
(RVM) [2], Lasso [3] and many others have been developed for sparse regression (and classification). A sparse representation has several advantages, including the fact that it encourages a simple
model, and therefore over-training is often avoided. The inferred sparse coefficients also often have
biological/physical meaning, of interest for model interpretation [4].
Of relevance for the current paper, there has recently been significant interest in sparse representations in the context of denoising, inpainting [5-10], compressive sensing (CS) [11, 12], and classification [13]. All of these applications exploit the fact that most images may be sparsely represented in an appropriate dictionary. Most of the CS literature assumes "off-the-shelf" wavelet and DCT bases/dictionaries [14], but recent denoising and inpainting research has demonstrated the significant advantages of learning an often over-complete dictionary matched to the signals of interest (e.g., images) [5-10, 12, 15]. The purpose of this paper is to perform dictionary learning using new non-parametric Bayesian technology [16, 17], that offers several advantages not found in earlier approaches, which have generally sought point estimates.
This paper makes four main contributions:
• The dictionary is learned using a beta process construction [16, 17], and therefore the number of dictionary elements and their relative importance may be inferred non-parametrically.
• For the denoising and inpainting applications, we do not have to assume a priori knowledge of the noise variance (it is inferred within the inversion). The noise variance can also be non-stationary.
• The spatial inter-relationships between different components in images are exploited by use of the Dirichlet process [18] and a probit stick-breaking process [19].
• Using learned dictionaries, inferred off-line or in situ, the proposed approach yields CS performance that is markedly better than existing standard CS methods as applied to imagery.
2 Dictionary Learning with a Beta Process
In traditional sparse coding tasks, one considers a signal $x \in \mathbb{R}^n$ and a fixed dictionary $D = (d_1, d_2, \ldots, d_M)$ where each $d_m \in \mathbb{R}^n$. We wish to impose that any $x \in \mathbb{R}^n$ may be represented approximately as $\hat{x} = D\alpha$, where $\alpha \in \mathbb{R}^M$ is sparse, and our objective is to also minimize the $\ell_2$ error $\|\hat{x} - x\|_2$. With a proper dictionary, a sparse $\alpha$ often manifests robustness to noise (the model doesn't fit noise well), and the model also yields effective inference of $\alpha$ even when $x$ is partially or indirectly observed via a small number of measurements (of interest for inpainting, interpolation and compressive sensing [5, 7]). To the authors' knowledge, all previous work in this direction has been performed in the following manner: (i) if $D$ is given, the sparse vector $\alpha$ is estimated via a point estimate (without a posterior distribution), typically based on orthogonal matching pursuits (OMP), basis pursuits or related methods, for which the stopping criteria is defined by assuming knowledge (or off-line estimation) of the noise variance or the sparsity level of $\alpha$; and (ii) when the dictionary $D$ is to be learned, the dictionary size $M$ must be set a priori, and a point estimate is achieved for $D$ (in practice one may infer $M$ via cross-validation, with this step avoided in the proposed method). In many applications one may not know the noise variance or an appropriate sparsity level of $\alpha$; further, one may be interested in the confidence of the estimate (e.g., "error bars" on the estimate of $\alpha$). To address these goals, we propose development of a non-parametric Bayesian formulation to this problem, in terms of the beta process, this allowing one to infer the appropriate values of $M$ and $\|\alpha\|_0$ (sparsity level) jointly, also manifesting a full posterior density function on the learned $D$ and the inferred $\alpha$ (for a particular $x$), yielding a measure of confidence in the inversion. As discussed further below, the non-parametric Bayesian formulation also allows one to relax other assumptions that have been made in the field of learning $D$ and $\alpha$ for denoising, inpainting and compressive sensing. Further, the addition of other goals is readily addressed within the non-parametric Bayesian paradigm, e.g. designing $D$ for joint compression and classification.
2.1 Beta process formulation
We desire the model $x = D\alpha + \epsilon$, where $x \in \mathbb{R}^n$ and $D \in \mathbb{R}^{n\times M}$, and we wish to learn $D$ and in so doing infer $M$. Toward this end, we consider a dictionary $D \in \mathbb{R}^{n\times K}$, with $K \to \infty$; by inferring the number of columns of $D$ that are required for accurate representation of $x$, the appropriate value of $M$ is implicitly inferred (work has been considered in [20, 21] for the related but distinct application of factor analysis). We wish to also impose that $\alpha \in \mathbb{R}^K$ is sparse, and therefore only a small fraction of the columns of $D$ are used for representation of a given $x$. Specifically, assume that we have a training set $\mathcal{D} = \{x_i, y_i\}_{i=1,N}$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{1, 2, \ldots, N_c\}$, where $N_c \ge 2$ represents the number of classes from which the data arise; when learning the dictionary we ignore the class labels $y_i$, and later discuss how they may be considered in the learning process.

The two-parameter beta process (BP) was developed in [17], to which the reader is referred for further details; we here only provide those details of relevance for the current application. The BP with parameters $a > 0$ and $b > 0$, and base measure $H_0$, is represented as $\mathrm{BP}(a, b, H_0)$, and a draw $H \sim \mathrm{BP}(a, b, H_0)$ may be represented as

$$H(\theta) = \sum_{k=1}^{K} \pi_k \,\delta_{\theta_k}(\theta), \qquad \pi_k \sim \mathrm{Beta}(a/K,\, b(K-1)/K), \qquad \theta_k \sim H_0 \qquad (1)$$

with this a valid measure as $K \to \infty$. The expression $\delta_{\theta_k}(\theta)$ equals one if $\theta = \theta_k$ and is zero otherwise. Therefore, $H(\theta)$ represents a vector of $K$ probabilities, with each associated with a respective atom $\theta_k$. In the limit $K \to \infty$, $H(\theta)$ corresponds to an infinite-dimensional vector of probabilities, and each probability has an associated atom $\theta_k$ drawn i.i.d. from $H_0$.

Using $H(\theta)$, we may now draw $N$ binary vectors, the ith of which is denoted $z_i \in \{0,1\}^K$, and the kth component of $z_i$ is drawn $z_{ik} \sim \mathrm{Bernoulli}(\pi_k)$. These $N$ binary column vectors are used to constitute a matrix $Z \in \{0,1\}^{K\times N}$, with ith column corresponding to $z_i$; the kth row of $Z$ is associated with atom $\theta_k$, drawn as discussed above. For our problem the atoms $\theta_k \in \mathbb{R}^n$ will correspond to candidate members of our dictionary $D$, and the binary vector $z_i$ defines which members of the dictionary are used to represent sample $x_i \in \mathcal{D}$.
Let $\Theta = (\theta_1, \theta_2, \ldots, \theta_K)$, and we may consider the limit $K \to \infty$. A naive form of our model, for representation of sample $x_i \in \mathcal{D}$, is $x_i = \Theta z_i + \epsilon_i$. However, this is highly restrictive, as it imposes that the coefficients of the dictionary expansion must be binary. To address this, we draw weights $w_i \sim \mathcal{N}(0, \gamma_w^{-1} I_K)$, where $\gamma_w$ is the precision or inverse variance; the dictionary weights are now $\alpha_i = z_i \circ w_i$, and $x_i = \Theta\alpha_i + \epsilon_i$, where $\circ$ represents the Hadamard (element-wise) multiplication of two vectors. Note that, by construction, $\alpha_i$ is sparse; this imposition of sparseness is distinct from the widely used Laplace shrinkage prior [3], which imposes that many coefficients are small but not necessarily exactly zero.
For simplicity we assume that the dictionary elements, defined by the atoms $\theta_k$, are drawn from a multivariate Gaussian base $H_0$, and the components of the error vectors $\epsilon_i$ are drawn i.i.d. from a zero-mean Gaussian. The hierarchical form of the model may now be expressed as

$$x_i = \Theta\alpha_i + \epsilon_i, \qquad \alpha_i = z_i \circ w_i$$
$$\Theta = (\theta_1, \theta_2, \ldots, \theta_K), \qquad \theta_k \sim \mathcal{N}(0, n^{-1} I_n)$$
$$w_i \sim \mathcal{N}(0, \gamma_w^{-1} I_K), \qquad \epsilon_i \sim \mathcal{N}(0, \gamma_\epsilon^{-1} I_n)$$
$$z_i \sim \prod_{k=1}^{K} \mathrm{Bernoulli}(\pi_k), \qquad \pi_k \sim \mathrm{Beta}(a/K,\, b(K-1)/K) \qquad (2)$$
Non-informative gamma hyper-priors are typically placed on $\gamma_w$ and $\gamma_\epsilon$. Consecutive elements in the above hierarchical model are in the conjugate exponential family, and therefore inference may be implemented via a variational Bayesian [22] or Gibbs-sampling analysis, with analytic update equations (all inference update equations, and the software, can be found at http://people.ee.duke.edu/~lihan/cs/). After performing such inference, we retain those columns of $\Theta$ that are used in the representation of the data in $\mathcal{D}$, thereby inferring $D$ and hence $M$.

To impose our desire that the vector of dictionary weights $\alpha$ is sparse, one may adjust the parameters $a$ and $b$. Particularly, as discussed in [17], in the limit $K \to \infty$, the number of elements of $z_i$ that are non-zero is a random variable drawn from $\mathrm{Poisson}(a/b)$. In Section 3.1 we discuss the fact that these parameters are in general non-informative and the sparsity is intrinsic to the data.
2.2 Accounting for a classification task

There are problems for which it is desired that $x$ is sparsely rendered in $D$, and the associated weight vector $\alpha$ may be employed for other purposes beyond representation. For example, one may perform a classification task based on $\alpha$. If one is interested in joint compression and classification, both goals should be accounted for when designing $D$. For simplicity, we assume that the number of classes is $N_C = 2$ (binary classification), with this readily extended [23] to $N_C > 2$.

Following [9], we may define a linear or bilinear classifier based on the sparse weights $\alpha$ and the associated data $x$ (in the bilinear case), with this here implemented in the form of a probit classifier. We focus on the linear model, as it is simpler (has fewer parameters), and the results in [9] demonstrated that it was often as good or better than the bilinear classifier. To account for classification, the model in (2) remains unchanged, and the following may be added to the top of the hierarchy: $y_i = 1$ if $\zeta^\top\hat{\alpha}_i + \nu > 0$, $y_i = 2$ if $\zeta^\top\hat{\alpha}_i + \nu < 0$, with $\zeta \sim \mathcal{N}(0, \gamma_\zeta^{-1} I_{K+1})$ and $\nu \sim \mathcal{N}(0, \gamma_0^{-1})$, where $\hat{\alpha}_i \in \mathbb{R}^{K+1}$ is the same as $\alpha_i \in \mathbb{R}^K$ with an appended one, to account for the classifier bias. Again, one typically places (non-informative) gamma hyper-priors on $\gamma_\zeta$ and $\gamma_0$. With the added layers for the classifier, the conjugate-exponential character of the model is retained, sustaining the ability to perform VB or MCMC inference with analytic update equations. Note that the model in (2) may be employed for unlabeled data, and the extension above may be employed for the available labeled data; consequently, all data (labeled and unlabeled) may be processed jointly to infer $D$.
2.3 Sequential dictionary learning for large training sets

In the above discussion, we implicitly assumed all data $\mathcal{D} = \{x_i, y_i\}_{i=1,N}$ are used together to infer the dictionary $D$. However, in some applications $N$ may be large, and therefore such a "batch" approach is undesirable. To address this issue one may partition the data as $\mathcal{D} = \mathcal{D}_1 \cup \mathcal{D}_2 \cup \ldots \cup \mathcal{D}_{J-1} \cup \mathcal{D}_J$, with the data processed sequentially. This issue has been considered for point estimates of $D$ [8], in which considerations are required to assure algorithm convergence. It is of interest to briefly note that sequential inference is handled naturally via the proposed Bayesian analysis.
Specifically, let $p(D|\mathcal{D}, \Omega)$ represent the posterior on the desired dictionary, with all other model parameters marginalized out (e.g., the sample-dependent coefficients $\alpha$); the vector $\Omega$ represents the model hyper-parameters. In a Bayesian analysis, rather than evaluating $p(D|\mathcal{D}, \Omega)$ directly, one may employ the same model (prior) to infer $p(D|\mathcal{D}_1, \Omega)$. This posterior may then serve as a prior for $D$ when considering next $\mathcal{D}_2$, inferring $p(D|\mathcal{D}_1 \cup \mathcal{D}_2, \Omega)$. When doing variational Bayesian (VB) inference we have an analytic approximate representation for posteriors such as $p(D|\mathcal{D}_1, \Omega)$, while for Gibbs sampling we may use the inferred samples. When presenting results in Section 5, we discuss additional means of sequentially accelerating a Gibbs sampler.
3 Denoising, Inpainting and Compressive Sensing

3.1 Image Denoising and Inpainting
Assume we are given an image $I \in \mathbb{R}^{N_y\times N_x}$ with additive noise and missing pixels; we here assume a monochrome image for simplicity, but color images are also readily handled, as demonstrated when presenting results. As is done typically [6, 7], we partition the image into $N_B = (N_y - B + 1)\times(N_x - B + 1)$ overlapping blocks $\{x_i\}_{i=1,N_B}$, for each of which $x_i \in \mathbb{R}^{B^2}$ ($B = 8$ is typically used). If there is only additive noise but no missing pixels, then the model in (2) can be readily applied for simultaneous dictionary learning and image denoising. If there are both noise and missing pixels, instead of directly observing $x_i$, we observe a subset of the pixels in each $x_i$. Note that here $\Theta$ and $\{\alpha_i\}_{i=1,N_B}$, which are used to recover the original noise-free and complete image, are directly inferred from the data under test; one may also employ an appropriate training set $\mathcal{D}$ with which to learn a dictionary $D$ offline, or for initialization of in situ learning.
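A minimal sketch of the overlapping-block extraction described above; the image size and all names are illustrative.

    import numpy as np

    def extract_patches(I, B=8):
        # All (Ny-B+1)(Nx-B+1) overlapping B x B blocks, each vectorized.
        Ny, Nx = I.shape
        cols = [I[y:y + B, x:x + B].ravel()
                for y in range(Ny - B + 1) for x in range(Nx - B + 1)]
        return np.stack(cols, axis=1)              # B^2 x N_B matrix of the x_i

    rng = np.random.default_rng(0)
    I = rng.random((32, 32))
    X = extract_patches(I)
    print(X.shape)                                  # (64, 625)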
In denoising and inpainting studies of this type (see for example [6, 7] and references therein), it is often assumed that either the variance is known and used as a "stopping" criterion, or that the sparsity level is pre-determined and fixed for all $i \in \{1, N_B\}$. While these may be practical in some applications, we feel it is more desirable to not make these assumptions. In (2) the noise precision (inverse variance), $\gamma_\epsilon$, is assumed drawn from a non-informative gamma distribution, and a full posterior density function is inferred for $\gamma_\epsilon$ (and all other model parameters). In addition, the problems of addressing spatially nonuniform noise as well as nonuniform noise across color channels are of interest [7]; they are readily handled in the proposed model by drawing a separate precision $\gamma_\epsilon$ for each color channel in each $B\times B$ block, each of which is drawn from a shared gamma prior.

The sparsity level of the representation in our model, i.e., $\{\|\alpha_i\|_0\}_{i=1,N}$, is influenced by the parameters $a$ and $b$ in the beta prior in (2). Examining the posterior $p(\pi_k|-) \sim \mathrm{Beta}\big(a/K + \sum_{i=1}^{N} z_{ik},\; b(K-1)/K + N - \sum_{i=1}^{N} z_{ik}\big)$, conditioned on all other parameters, we find that most settings of $a$ and $b$ tend to be non-informative, especially in the case of sequential learning (discussed further in Section 5). Therefore, the average sparsity level of the representation is inferred by the data itself and each sample $x_i$ has its own unique sparse representation based on the posterior, which renders much more flexibility than enforcing the same sparsity level for each sample.
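The conditional posterior for $\pi_k$ quoted above leads to a one-line Gibbs update; this sketch reuses Z, a, b, K from the beta-process sketch in Section 2.1 and is only meant to show the conjugate form.

    def sample_pi(Z, a, b, K, rng):
        # Gibbs draw of each pi_k from its conjugate beta posterior given Z.
        N = Z.shape[1]
        counts = Z.sum(axis=1)                      # sum_i z_ik per atom
        return rng.beta(a / K + counts, b * (K - 1) / K + N - counts)

    pi_new = sample_pi(Z, a, b, K, rng)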
3.2 Compressive sensing

We consider CS in the manner employed in [12]. Assume our objective is to measure an image $I \in \mathbb{R}^{N_y\times N_x}$, with this image constituting the $8\times 8$ blocks $\{x_i\}_{i=1,N_B}$. Rather than measuring the $x_i$ directly, pixel-by-pixel, in CS we perform the projection measurement $v_i = \Phi x_i$, where $v_i \in \mathbb{R}^{N_p}$, with $N_p$ representing the number of projections, and $\Phi \in \mathbb{R}^{N_p\times 64}$ (assuming that $x_i$ is represented by a 64-dimensional vector). There are many (typically random) ways in which $\Phi$ may be constructed, with the reader referred to [24]. Our goal is to have $N_p \ll 64$, thereby yielding compressive measurements. Based on the CS measurements $\{v_i\}_{i=1,N_B}$, our objective is to recover $\{x_i\}_{i=1,N_B}$.

Consider a potential dictionary $\Theta$, as discussed in Section 2. It is assumed that for each of the $\{x_i\}_{i=1,N_B}$ from the image under test $x_i = \Theta\alpha_i + \epsilon_i$, for sparse $\alpha_i$ and relatively small error $\|\epsilon_i\|_2$. The number of required projections $N_p$ needed for accurate estimation of $\alpha_i$ is proportional to $\|\alpha_i\|_0$ [11], with this underscoring the desirability of learning a dictionary in which very sparse representations are manifested (as compared to using "off-the-shelf" wavelets or DCT bases).

For CS inversion, the model in (2) is employed, and therefore the appropriate dictionary $D$ is learned jointly while performing CS inversion, in situ on the image under test. When performing CS analysis, in (2), rather than observing $x_i$, we observe $v_i = \Phi D\alpha_i + \epsilon_i$, for $i = 1, \ldots, N_B$ (the likelihood function is therefore modified slightly).
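A self-contained sketch of the CS measurement model $v_i = \Phi D\alpha_i + \epsilon_i$; the dictionary here is a random stand-in, since the point is only the shape of the likelihood, and the sparsity level and noise scale are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    B2, Np, M = 64, 20, 128
    Phi = rng.standard_normal((Np, B2))          # random projection matrix
    D = rng.standard_normal((B2, M))             # stand-in for a learned dictionary
    alpha = np.zeros(M)                          # sparse coefficient vector
    support = rng.choice(M, size=5, replace=False)
    alpha[support] = rng.standard_normal(5)
    x = D @ alpha                                # vectorized 8x8 block
    v = Phi @ x + 0.01 * rng.standard_normal(Np) # v_i = Phi*D*alpha_i + eps_i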
As discussed when presenting results, one may also learn the CS dictionary in advance, off-line, with appropriate training images (using the model in (2)). However, the unique opportunity for joint CS inversion and learning of an appropriate parsimonious dictionary is deemed to be a significant advantage, as it does not presuppose that one would know an appropriate training set in advance.

The inpainting problem may be viewed as a special case of CS, in which each row of $\Phi$ corresponds to a delta function, locating a unique pixel on the image at which useful (unobscured) data are observed. Those pixels that are unobserved, or that are contaminated (e.g., by superposed text [7]) are not considered when inferring the $\alpha_i$ and $D$. A CS camera designed around an inpainting construction has several advantages, from the standpoint of simplicity. As observed from the results in Section 5, an inpainting-based CS camera would simply observe a subset of the usual pixels, selected at random.
4 Exploiting Spatial Structure

For the applications discussed above, the $\{x_i\}_{i=1,N_B}$ come from the single image under test, and consequently there is underlying (spatial) structure that should ideally be exploited. Rather than re-writing the entire model in (2), we focus on the following equations in the hierarchy: $z_i \sim \prod_{k=1}^{K}\mathrm{Bernoulli}(\pi_k)$, and $\pi \sim \prod_{k=1}^{K}\mathrm{Beta}(a/K, b(K-1)/K)$. Instead of having a single vector $\pi = \{\pi_1, \ldots, \pi_K\}$ that is shared for all $\{x_i\}_{i=1,N_B}$, it is expected that there may be a mixture of $\pi$ vectors, corresponding to different segments in the image. Since the number of mixture components is not known a priori, this mixture model is modeled via a Dirichlet process [18]. We may therefore employ, for $i = 1, \ldots, N_B$,

$$z_i \sim \prod_{k=1}^{K}\mathrm{Bernoulli}(\pi_{ik}), \qquad \pi_i \sim G, \qquad G \sim \mathrm{DP}\Big(\eta,\; \prod_{k=1}^{K}\mathrm{Beta}(a/K, b(K-1)/K)\Big) \qquad (3)$$

Alternatively, we may cluster the $z_i$ directly, yielding $z_i \sim G$, $G \sim \mathrm{DP}(\eta, \prod_{k=1}^{K}\mathrm{Bernoulli}(\pi_k))$, $\pi \sim \prod_{k=1}^{K}\mathrm{Beta}(a/K, b(K-1)/K)$, where the $z_i$ are drawn i.i.d. from $G$. In practice we implement such DP constructions via a truncated stick-breaking representation [25], again retaining the conjugate-exponential structure of interest for analytic VB or Gibbs inference. In such an analysis we place a non-informative gamma prior on the precision $\eta$.
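A truncated stick-breaking draw of DP mixture weights, as used to implement the constructions above, can be sketched as follows; the truncation level T and concentration value are illustrative, and the final renormalization is one common way of handling the truncation.

    import numpy as np

    def stick_breaking(concentration, T, rng):
        # Truncated stick-breaking construction of DP mixture weights.
        v = rng.beta(1.0, concentration, size=T)
        pieces = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
        w = v * pieces
        return w / w.sum()   # renormalize at the truncation level T

    w = stick_breaking(1.0, 20, np.random.default_rng(1))
    print(w.sum(), w[:5])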
The construction in (3) clusters the blocks, and therefore it imposes structure not constituted in the simpler model in (2). However, the DP still assumes that the members of $\{x_i\}_{i=1,N_B}$ are exchangeable. Space limitations preclude discussing this matter in detail here, but we have also considered replacement of the DP framework above with a probit stick-breaking process (PSBP) [19], which explicitly imposes that it is more likely for proximate blocks to be in the same cluster, relative to distant blocks. When presenting results, we show examples in which PSBP has been used, with its relative effectiveness compared to the simpler DP construction. The PSBP again retains full conjugate-exponential character within the hierarchy, of interest for efficient inference, as discussed above.
5 Example Results

For the denoising and inpainting results, we observed that the Gibbs sampler provided better performance than associated variational Bayesian inference. For denoising and inpainting we may exploit shifted versions of the data, which accelerates convergence substantially (discussed in detail below). Therefore, all denoising and inpainting results are based on efficient Gibbs sampling. For CS we cannot exploit shifted images, and therefore to achieve fast inversion variational Bayesian (VB) inference [22] is employed; for this application VB has proven to be quite effective, as discussed below. The same set of model hyper-parameters are used across all our denoising, inpainting and CS examples (no tuning was performed): all gamma priors are set as $\mathrm{Gamma}(10^{-6}, 10^{-6})$, along the lines suggested in [2], and the beta distribution parameters are set with $a = K$ and $b = N/8$ (many other settings of $a$ and $b$ yield similar results).
5.1 Denoising

We consider denoising a 256 × 256 image, with comparison of the proposed approach to K-SVD [6] (for which the noise variance is assumed known and fixed); the true noise standard deviation is set at 15, 25 and 50 in the examples below. We show results for three algorithms: (i) mismatched K-SVD (with noise standard deviation of 30), (ii) K-SVD when the standard deviation is properly matched, and (iii) the proposed BP approach. For (iii) a non-informative prior is placed on the noise precision, and the same BP model is run for all three noise levels (with the underlying noise levels inferred). The BP and K-SVD employed no a priori training data. In Figure 1 are shown the noisy images at the three different noise levels, as well as the reconstructions via BP and K-SVD. A preset large dictionary size K = 256 is used for both algorithms, and for the BP results we inferred that approximately M = 196, 128, and 34 dictionary elements were important for noise standard deviations 15, 25, and 50, respectively; the remaining elements of the dictionary were used less than 0.1% of the time. As seen within the bottom portion of the right part of Figure 1, the unused dictionary elements appear as random draws from the prior, since they are not used and hence not influenced by the data.

Note that K-SVD works well when the set noise variance is at or near truth, but the method is undermined by mismatch. The proposed BP approach is robust to changing noise levels. Quantitative performance is summarized in Table 1. The BP denoiser estimates a full posterior density function on the noise standard deviation; for the examples considered here, the modes of the inferred standard-deviation posteriors were 15.57, 25.35, and 48.12, for true standard deviations 15, 25, and 50, respectively.

To achieve these BP results, we employ a sequential implementation of the Gibbs sampler (a batch implementation converges to the same results but with higher computational cost); this is discussed in further detail below, when presenting inpainting results.
Figure 1: Left: Representative denoising results, with the top through bottom rows corresponding to noise
standard deviations of 15, 25 and 50, respectively. The second and third columns represent K-SVD [6] results
with assumed standard deviation equal to 30 and the ground truth, respectively. The fourth column represents
the proposed BP reconstructions. The noisy images are in the first column. Right: Inferred BP dictionary
elements for noise standard deviation 25, in order of importance (probability to be used) from the top-left.
Table 1: Peak signal-to-reconstructed image measure (PSNR) for the data in Figure 1, for K-SVD [6] and the
proposed BP method. The true standard deviation was 15, 25 and 50, respectively, from the top to the bottom
row. For the mismatched K-SVD results, the noise stand deviation was fixed at 30.
Noise std.  Original Noisy  K-SVD Denoising,           K-SVD Denoising,        Beta Process
            Image (dB)      mismatched variance (dB)   matched variance (dB)   Denoising (dB)
15          24.58           30.67                      34.32                   34.44
25          20.19           31.52                      32.15                   32.17
50          14.56           19.60                      27.95                   28.08
5.2 Inpainting

Our inpainting and denoising results were achieved by using the following sequential procedure. Consider any pixel $[p, j]$, where $p, j \in [1, B]$, and let this pixel constitute the left-bottom pixel in a new $B\times B$ block. Further, consider all $B\times B$ blocks with left-bottom pixels at $\{p + \ell B, j + mB\} \cup \delta(p-1)\{N_y - B + 1, j + mB\} \cup \delta(j-1)\{p + \ell B, N_x - B + 1\}$ for $\ell$ and $m$ that satisfy $p + \ell B \le N_y - B + 1$ and $j + mB \le N_x - B + 1$. This set of blocks is denoted data set $\mathcal{D}_{pj}$, and considering $1 \le p \le B$ and $1 \le j \le B$, there are a total of $B^2$ such shifted data sets. In the first iteration of learning $\Theta$, we employ the blocks in $\mathcal{D}_{11}$, and for this first round we initialize $\Theta$ and $\alpha_i$ based on a singular value decomposition (SVD) of the blocks in $\mathcal{D}_{11}$ (we achieved similar results when $\Theta$ was initialized randomly). We do several Gibbs iterations with $\mathcal{D}_{11}$ and then stop the Gibbs algorithm, retaining the last sample of $\Theta$ and $\alpha_i$ from the previous step. These $\Theta$ and $\alpha_i$ are then used to initialize the Gibbs sampler in the second round, now applied to the $B\times B$ blocks in $\mathcal{D}_{11} \cup \mathcal{D}_{21}$ (for $\mathcal{D}_{21}$ the neighboring $\alpha_i$ is used for initialization). The Gibbs sampler is now run on this expanded data for several iterations, the last sample is retained, and the data set is augmented again. This is done $B^2 = 64$ times until at the end all shifted blocks are processed simultaneously. This sequential process may be viewed as a sequential Gibbs burn-in, after which all of the shifted blocks are processed.

[Figure 2 omitted: the plot shows PSNR versus the learning round (8 through 64), alongside the test image with 80% of the RGB pixels missing, the result after 64 Gibbs rounds, and the original image.]

Figure 2: Inpainting results. The curve shows the PSNR as a function of the B² = 64 Gibbs learning rounds. The left figure is the test image, with 80% of the RGB pixels missing, the middle figure is the result after 64 Gibbs rounds (final result), and the right figure is the original uncontaminated image.
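The round schedule can be sketched as below; for simplicity this version enumerates only the regularly spaced blocks and omits the boundary-completion sets (the δ(·) terms above), so it is an approximation of the procedure rather than a faithful reimplementation.

    def shifted_block_origins(Ny, Nx, B=8):
        # For each offset (p, j) in [1, B]^2, the left-bottom corners of the
        # regularly spaced B x B blocks in the shifted data set D_pj.
        sets = {}
        for p in range(1, B + 1):
            for j in range(1, B + 1):
                ys = range(p, Ny - B + 2, B)
                xs = range(j, Nx - B + 2, B)
                sets[(p, j)] = [(y, x) for y in ys for x in xs]
        return sets                                  # B^2 = 64 shifted data sets

    sets = shifted_block_origins(256, 256)
    print(len(sets))                                 # 64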
Theoretically, one would expect to need thousands of Gibbs iterations to achieve convergence. However, our experience is that even a single iteration in each of the above $B^2$ rounds yields good results. In Figure 2 we show the PSNR as a function of each of the $B^2 = 64$ rounds discussed above. For Gibbs rounds 16, 32 and 64 the corresponding PSNR values were 26.78 dB, 28.46 dB and 29.31 dB. For this example we used K = 256. This example was considered in [7] (we obtained similar results for the "New Orleans" image, also considered in [7]); the best results reported there were a PSNR of 29.65 dB. However, to achieve those results a training data set was employed for initialization [7]; the BP results are achieved with no a priori training data. Concerning computational costs, the inpainting and denoising algorithms scale linearly as a function of the block size, the dictionary size, the sparsity level, and the number of training samples; all results reported here were run efficiently in Matlab on PCs, with comparable costs as K-SVD.
5.3
Compressive sensing
We consider a CS example, in which the image is divided into 8 ? 8 patches, with these constituting
the underlying data {xi }i=1,NB to be inferred. For each of the NB blocks, a vector of CS measurements v i = ?xi is measured, where the number of projections per patch is Np , and the total number
of CS projections is Np NB . In this example the elements of ? were constructed randomly as draws
from N (0, 1), but many other projection classes may be considered [11, 24]. Each xi is assumed
represented in terms of a dictionary xi = D?i + i , and three constructions for D were considered:
(i) a DCT expansion; (ii) learning of D using the beta process construction, using training images;
(iii) using the beta process to perform joint CS inversion and learning of D. For (ii), the training
data consisted of 4000 8 ? 8 patches chosen at random from 100 images selected from the Microsoft
database (http://research.microsoft.com/en-us/projects/objectclassrecognition). The dictionary was
set to K = 256, and the offline beta process inferred a dictionary of size M = 237.
Representative CS reconstruction results are shown in Figure 3, for a gray-scale version of the
?castle? image. The inversion results at left are based on a learned dictionary; except for the ?online
BP? results, all of these results employ the same dictionary D learned off-line as above, and the
algorithms are distinguished by different ways of estimating {?i }i=1,NB . A range of CS-inversion
7
algorithms are considered from the literature, and several BP-based constructions are considered as
well for CS inversion. The online BP results are quite competitive with those inferred off-line.
One also notes that the results based on a learned dictionary (left in Figure 3) are markedly better
than those based on the DCT (right in Figure 3); similar results were achieved when the DCT was
replaced by a wavelet representation. For the DCT-based results, note that the DP- and PSBP-based
BP CS inversion results are significantly better than those of all other CS inversion algorithms.
The results reported here are consistent with tests we performed using over 100 images from the
aforementioned Microsoft database, not reported here in detail for brevity.
Note that CS inversion using the DP-based BP algorithm (as discussed in Section 4) yields the best results, significantly better than BP results not based on the DP, and better than all competing CS inversion algorithms (for both learned dictionaries and the DCT). The DP-based results are very similar to those generated by the probit stick-breaking process (PSBP) [19], which enforces spatial information more explicitly; this suggests that the simpler DP-based results are adequate, at least for the wide class of examples considered. Note that we also considered the DP and PSBP for the denoising and inpainting examples above (those results were omitted, for brevity). The DP and PSBP denoising and inpainting results were similar to BP results without DP/PSBP (those presented above); this is attributed to the fact that when performing denoising/inpainting we may consider many shifted versions of the same image (as discussed when presenting the inpainting results).
Concerning computational costs, all CS inversions were run efficiently on PCs, with the specific computational times dictated by the detailed Matlab implementation and the machine run on. A rough ranking of the computational speeds, from fastest to slowest, is as follows: StOMP-CFAR, Fast BCS, OMP, BP, LARS/Lasso, Online BP, DP BP, PSBP BP, VB BCS, Basis Pursuit; in this list, algorithms BP through Basis Pursuit have approximately the same computational costs. The DP-based BP CS inversion algorithm scales as O(N_B · N_p · B²).
[Figure 3 appears here: two panels plotting relative reconstruction error against the number of CS measurements (×10^4), comparing PSBP BP, DP BP, Online BP, BP, BCS, Fast BCS, Basis Pursuit, LARS/Lasso, OMP and STOMP-CFAR; left panel for learned dictionaries, right panel for the DCT.]
Figure 3: CS performance (fraction of ℓ2 error) based on learned dictionaries (left) and based on the DCT (right). For the left results, the "Online BP" results simultaneously learned the dictionary and did CS inversion; the remainder of the left results are based on a dictionary learned offline on a training set. A DCT dictionary is used for the results on the right. The underlying image under test is shown at right. Matlab code for Basis Pursuit, LARS/Lasso, OMP, STOMP is available at http://sparselab.stanford.edu/, and code for BCS and Fast BCS is available at http://people.ee.duke.edu/~lihan/cs/. The horizontal axis represents the total number of CS projections, N_p N_B. The total number of pixels in the image is 480 × 320 = 153,600. 99.9% of the signal energy is contained in 33,500 DCT coefficients.
6 Conclusions
The non-parametric beta process has been presented for dictionary learning with the goal of image
denoising, inpainting and compressive sensing, with very encouraging results relative to the state
of the art. The framework may also be applied to joint compression-classification tasks. In the
context of noisy underlying data, the noise variance need not be known in advance, and it need not
be spatially uniform. The proposed formulation also allows unique opportunities to leverage known
structure in the data, such as relative spatial locations within an image; this framework was used to
achieve marked improvements in CS-inversion quality.
Acknowledgement
The research reported here was supported in part by ARO, AFOSR, DOE, NGA and ONR.
References
[1] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge
University Press, 2000.
[2] M. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine
Learning Research, 1, 2001.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society, Series B, 58, 1994.
[4] B.A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy
employed by V1? Vision Research, 37, 1998.
[5] M. Aharon, M. Elad, and A. M. Bruckstein. K-SVD: An algorithm for designing overcomplete
dictionaries for sparse representation. IEEE Trans. Signal Processing, 54, 2006.
[6] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over
learned dictionaries. IEEE Trans. Image Processing, 15, 2006.
[7] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE
Trans. Image Processing, 17, 2008.
[8] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In
Proc. International Conference on Machine Learning, 2009.
[9] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. In
Proc. Neural Information Processing Systems, 2008.
[10] M. Ranzato, C. Poultney, S. Chopra, and Y. Lecun. Efficient learning of sparse representations
with an energy-based model. In Proc. Neural Information Processing Systems, 2006.
[11] E. Candès and T. Tao. Near-optimal signal recovery from random projections: universal encoding strategies? IEEE Trans. Information Theory, 52, 2006.
[12] J.M. Duarte-Carvajalino and G. Sapiro. Learning to sense sparse signals: Simultaneous sensing
matrix and sparsifying dictionary optimization. IMA Preprint Series 2211, 2008.
[13] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, and Y. Ma. Robust face recognition via sparse
representation. IEEE Trans. Pattern Analysis Machine Intelligence, 31, 2009.
[14] S. Ji, Y. Xue, and L. Carin. Bayesian compressive sensing. IEEE Trans. Signal Processing,
56, 2008.
[15] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: transfer learning
from unlabeled data. In Proc. International Conference on Machine Learning, 2007.
[16] R. Thibaux and M.I. Jordan. Hierarchical beta processes and the indian buffet process. In Proc.
International Conference on Artificial Intelligence and Statistics, 2007.
[17] J. Paisley and L. Carin. Nonparametric factor analysis with beta process priors. In Proc.
International Conference on Machine Learning, 2009.
[18] T. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1,
1973.
[19] A. Rodriguez and D.B. Dunson. Nonparametric bayesian models through probit stickbreaking
processes. Univ. California Santa Cruz Technical Report, 2009.
[20] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components analysis. In Proc. International Conference on Independent Component Analysis and
Signal Separation, 2007.
[21] P. Rai and H. Daumé III. The infinite hierarchical factor regression model. In Proc. Neural
Information Processing Systems, 2008.
[22] M.J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby
Computational Neuroscience Unit, University College London, 2003.
[23] M. Girolami and S. Rogers. Variational Bayesian multinomial probit regression with Gaussian
process priors. Neural Computation, 18, 2006.
[24] R.G. Baraniuk. Compressive sensing. IEEE Signal Processing Magazine, 24, 2007.
[25] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4, 1994.
Semi-supervised Learning using Sparse
Eigenfunction Bases
Kaushik Sinha
Dept. of Computer Science and Engineering
Ohio State University
Columbus, OH 43210
[email protected]
Mikhail Belkin
Dept. of Computer Science and Engineering
Ohio State University
Columbus, OH 43210
[email protected]
Abstract
We present a new framework for semi-supervised learning with sparse eigenfunction bases of kernel matrices. It turns out that when the data is clustered, that is, when the high density regions are sufficiently separated by low density valleys, each high density area corresponds to a unique representative eigenvector. Linear combinations of such eigenvectors (or, more precisely, of their Nystrom extensions) provide good candidates for classification functions when the cluster assumption holds. By first choosing an appropriate basis of these eigenvectors from unlabeled data and then using labeled data with Lasso to select a classifier in the span of these eigenvectors, we obtain a classifier which has a very sparse representation in this basis. Importantly, the sparsity corresponds naturally to the cluster assumption.
Experimental results on a number of real-world data sets show that our method is competitive with state of the art semi-supervised learning algorithms and outperforms the natural baseline algorithm (Lasso in the Kernel PCA basis).
1 Introduction
Semi-supervised learning, i.e., learning from both labeled and unlabeled data, has received considerable attention in recent years due to its potential for reducing the need for expensive labeled data. However, to make effective use of unlabeled examples one needs to make some assumptions about the connection between the process generating the data and the process of assigning labels. Two important assumptions are popular in the semi-supervised learning community, the "cluster assumption" [CWS02] and the "manifold assumption" [BNS06], alongside a number of model-based methods, such as Naive Bayes [HTF03]. In particular, the cluster assumption can be interpreted as saying that two points are likely to have the same class label if they can be connected by a path passing through a high density area. In other words, two high density areas with different class labels must be separated by a low density valley.
In this paper, we develop a framework for semi-supervised learning when the cluster assumption holds. Specifically, we show that when the high density areas are sufficiently separated, a few appropriately chosen eigenfunctions of a convolution operator (which is the continuous counterpart of the kernel matrix) represent the high density areas reasonably well. Under ideal conditions, each high density area can be represented by a single unique eigenfunction called the "representative" eigenfunction. If the cluster assumption holds, each high density area will correspond to just one class label, and thus a sparse linear combination of these representative eigenfunctions will be a good classifier. Moreover, the basis of such eigenfunctions can be learned using only the unlabeled data by constructing the Nystrom extension of the eigenvectors of an appropriate kernel matrix. Thus, given unlabeled data, we construct the basis of eigenfunctions and then apply the L1-penalized optimization procedure Lasso [Tib96] to fit a sparse linear combination of the basis elements to the labeled data. We provide a detailed theoretical analysis of the algorithm and show that it is
comparable to the state-of-the-art on several common UCI datasets.
The rest of the paper is organized as follows. In section 2 we provide the proposed framework
for semi-supervised learning and describe the algorithm. In section 3 we provide an analysis of
this algorithm to show that it can consistently identify the correct model. In section 4 we provide
experimental results on synthetic and real datasets and finally we conclude with a discussion in
section 5.
2 Semi-supervised Learning Framework
2.1 Outline of the Idea
In this section we present a framework for semi-supervised learning under the cluster assumption.
Specifically, we will assume that (i) the data distribution has natural clusters separated by regions of low density, and (ii) the label assignment conforms to these clusters.
The recent work of [SBY08a, SBY08b] shows that if the (unlabeled) data is clustered, then for each
high density region there is a unique (representative) eigenfunction of a convolution operator, which
takes positive values for points in the chosen cluster and whose values are close to zero everywhere
else (no sign change). Moreover, it can be shown (e.g., [RBV08]) that these eigenfunctions can be
approximated from the eigenvectors of a kernel matrix obtained from the unlabeled data.
Thus, if the cluster assumption holds we expect each cluster to have exactly one label assignment.
Therefore eigenfunctions corresponding to these clusters should produce a natural sparse basis for
constructing a classification function.
This suggests the following learning strategy:
1. From unlabeled and labeled data obtain the eigenvectors of the Gaussian kernel matrix.
2. From these eigenvectors select a subset of candidate eigenvectors without sign change.
3. Using the labeled data, apply Lasso (sparse linear regression) in the constructed basis to
obtain a classifier.
4. Using the Nystrom extension (see [BPV03]), extend the eigenvectors to obtain the classification function defined everywhere.
Connection to Kernel PCA ([SSM98]). We note that our method is related to KPCA, where data are projected onto the space spanned by the top few eigenvectors of the kernel matrix, and a classification or regression task can be performed in that projected space. The important difference is that we choose a subset of the eigenvectors in accordance with the cluster assumption. We note that the method
simply using the KPCA basis does not seem to benefit from unlabeled data and, in fact, cannot
outperform the standard fully supervised SVM classifier. On the other hand, our algorithm using a
basis subselection procedure shows results comparable to the state of the art.
This is due to two reasons. We will see that each cluster in the data corresponds to its unique
representative eigenvector of the kernel matrix. However, this eigenvector may not be among the
top eigenvectors and may thus be omitted when applying KPCA. Alternatively, if the representative eigenvector is included, it will be included with a number of other uninformative eigenvectors
resulting in poor performance due to overfitting.
We now proceed with the detailed discussion of our algorithm and its analysis.
2.2 Algorithm
The focus of our discussion will be binary classification in the semi-supervised setting. Given l labeled examples {(x_i, y_i)}_{i=1}^l sampled from an underlying joint probability distribution P_{X,Y}, X ⊂ R^d, Y = {−1, 1}, where the x_i are the data points and the y_i their corresponding labels, and u unlabeled examples {x_i}_{i=l+1}^{l+u} drawn iid from the marginal distribution P_X, we choose a Gaussian kernel k(x, z) = exp(−‖x − z‖²/(2σ²)) with kernel bandwidth σ to construct the kernel matrix K, where K_{ij} = (1/u) k(z_i, z_j). Let (λ_i, v_i)_{i=1}^u be the eigenvalue-eigenvector pairs of K, sorted by non-increasing eigenvalues. It has been shown ([SBY08a, SBY08b]) that when the data distribution P_X
has clusters, for each high density region there is a unique representative eigenfunction of a convolution operator that takes positive values around the chosen cluster and is close to zero everywhere
else. Moreover these eigenfunctions can be approximated from the eigenvectors of a kernel matrix
obtained from the unlabeled data ([RBV08]), thus for each high density region there is a unique representative eigenvector of the kernel matrix that takes only positive or negative values in the chosen
cluster and is nearly zero everywhere else (no sign change).
If the cluster assumption holds, i.e., each high density region corresponds to a portion of a pure class,
then the classifier can be naturally expressed as a linear combination of the representative eigenfunctions. representative eigenvector basis and a linear combination of the representative eigenvectors
will be a reasonable candidate for a good classification function. However, identifying representative
eigenvectors is not very trivial because in real life depending on the separation between high density
clusters the representative eigenvectors can have no sign change up to some small precision > 0.
Specifically, we say that a vector e = (e1 , e2 , ..., en ) ? Rn has no sign change up to precision if
either ?i ei > ? or ?i ei < . Let N be the set of indices of all eigenvectors that have no sign
change up to precision . If is chosen properly, N will contain representative eigenvectors (note
that the set N and the set {1, 2, ..., |N |} are not necessarily the same). Thus, instead of identifying
the representative eigenvectors, we carefully select a small set containing
P the representative eigenvectors. Our goal is to learn a linear combination of the eigenvectors i?N ?i v i which minimizes
classification error on the labeled examples and the coefficients corresponding to non-representative
eigenvectors are zeros. Thus, the task is more of model selection or sparse approximation.
Standard approach to get a sparse solution is to minimize a convex loss function V on the labeled
examples and apply a L1 penalty (on ?i s). If we select V to be square loss function, we end up
solving the L1 penalized least square or so called Lasso [Tib96], whose consistency property was
studied in [ZY06]. Thus we would seek a solution of the form
arg min(y ? ??)T (y ? ??) + ?||?||L1
?
(1)
which is a convex optimization problem, where ? is the l ? |N | design matrix whose ith column
is the first l elements of v N (i) , y ? Rl is the label vector, ? is the vector of coefficients and ? is a
regularization parameter. Note that solving the above problem is equivalent to solving
X
arg min(y ? ??)T (y ? ??) s.t.
|?i | ? t
(2)
?
i?N
because for any given ? ? [0, ?), there exists a t ? 0 such that the two problems have
?
the same solution, and vice versa [Tib96]. We will denote the solution of Equation 2, by ?.
To obtain a classification function which is defined everywhere, we use the Nystrom extension
Pl+u
of the ith eigenvector defined as ?i (x) = ? ?1l+u j=1 v i (xj )k(x, xj ). Let the set T coni
tains indices of all nonzero ??i s. Using Nystrom extension, classification function is given by,
P
Pl+u
f (x) = i?T ??i ?i (x) = i=1 Wi k(xi , x), where, W ? Ru is a weight vector whose ith element is given by
X ??j v j (xi )
?
(3)
Wi =
?j u
j?T
and can be computed while training.
Algorithm for Semi-supervised Learning
Input: {(x_i, y_i)}_{i=1}^l, {x_i}_{i=l+1}^{l+u}
Parameters: σ, t, ε
1. Construct the kernel matrix K from the l + u examples {x_i}_{i=1}^{l+u}.
2. Select the set N containing the indices of the eigenvectors with no sign change up to precision ε.
3. Construct the design matrix Φ whose ith column is the top l rows of v_{N(i)}.
4. Solve Equation 2 to get β̂ and calculate the weight vector W using Equation 3.
5. Given a test point x, predict its label as y = sign(Σ_{i=1}^{l+u} k(x_i, x) W_i)
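A minimal NumPy/scikit-learn sketch of the algorithm above, under two stated assumptions: the kernel matrix is normalized by the total sample size l + u, and the constrained problem (2) is replaced by the penalized form (1) with a weight lam (equivalent for some t); all parameter values are illustrative.

import numpy as np
from sklearn.linear_model import Lasso

def ssl_seb(X_lab, y_lab, X_unlab, sigma=1.0, eps=1e-3, lam=0.01):
    """Sketch of the sparse-eigenfunction-basis semi-supervised learner."""
    X = np.vstack([X_lab, X_unlab])          # labeled points come first
    n, l = len(X), len(X_lab)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2)) / n   # Step 1: normalized kernel matrix
    lams, V = np.linalg.eigh(K)
    lams, V = lams[::-1], V[:, ::-1]         # sort eigenpairs, non-increasing
    # Step 2: indices of eigenvectors with no sign change up to precision eps.
    N = [j for j in range(n)
         if np.all(V[:, j] > -eps) or np.all(V[:, j] < eps)]
    Phi = V[:l, N]                           # Step 3: design matrix (labeled rows)
    beta = Lasso(alpha=lam, fit_intercept=False).fit(Phi, y_lab).coef_  # Step 4
    active = np.flatnonzero(beta)
    T = [N[i] for i in active]
    # Eq. (3): weights of the Nystrom-extended classifier. Under Assumption 2
    # the selected eigenvalues are bounded away from zero.
    W = (V[:, T] / lams[T]) @ beta[active] / np.sqrt(n)

    def predict(x):                          # Step 5
        kx = np.exp(-((X - x) ** 2).sum(-1) / (2 * sigma ** 2))
        return np.sign(kx @ W)

    return predict

With well-separated clusters, only a handful of eigenvectors survive the sign-change test, so the Lasso step operates on a small design matrix.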
3 Analysis of the Algorithm
The main purposes of the analysis are: (i) to estimate the amount of separation required among the high density regions which ensures that each high density region can be well represented by a unique (representative) eigenfunction; (ii) to estimate the number of unlabeled examples required so that the eigenvectors of the kernel matrix can approximate the eigenfunctions of a convolution operator (defined below); and (iii) to show that using a few labeled examples Lasso can consistently identify the correct model consisting of a linear combination of representative eigenvectors.
Before starting the actual analysis, we first note that the continuous counterpart of the Gram matrix is a convolution operator L_K : L²(X, P_X) → L²(X, P_X) defined by

(L_K f)(x) = ∫_X k(x, z) f(z) dP_X(z).                               (4)

The eigenfunctions of the symmetric positive definite operator L_K will be denoted by φ^L_i.
Next, we briefly discuss the effectiveness of model selection using Lasso (established by [ZY06]), which will be required for our analysis. Let β̂_l(λ) be the solution of Equation 1 for a chosen regularization parameter λ. In [ZY06] a concept of sign consistency was introduced, which states that Lasso is sign consistent if, as l tends to infinity, the signs of β̂_l(λ) match the signs of β* with probability 1, where β* is the coefficient vector of the correct model. Note that since we are expecting a sparse model, matching the zeros of β̂_l(λ) to the zeros of β* is not enough; in addition, matching the signs of the non-zero coefficients ensures that the true model will be selected. Next, without loss of generality, assume β* = (β*_1, ..., β*_q, β*_{q+1}, ..., β*_{|N|}) has only its first q terms non-zero, i.e., only q predictors describe the model and the rest of the predictors are irrelevant. Now let us write the first q and the remaining |N| − q columns of Φ as Φ(1) and Φ(2) respectively, and let C = (1/l) Φ^T Φ. Note that, for a random design matrix, sign consistency is equivalent to the irrepresentable condition (see [ZY06]). When β* is unknown, ensuring that the irrepresentable condition holds for all possible sign patterns requires that the L1 norm of the regression coefficients of each irrelevant predictor on the relevant ones be less than 1, which can be written as η_∞ = max_j ‖(Φ(1)^T Φ(1))^{−1} Φ(1)^T φ_j‖_1 < 1, where φ_j ranges over the columns of Φ(2). The requirement η_∞ < 1 is not new and has also appeared in the context of noisy or noiseless sparse signal recovery [Tro04, Wai06, Zha08]. Note that Lasso is sign consistent if the irrepresentable condition holds, and a sufficient condition for the irrepresentable condition to hold is given by the following result.
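The quantity η_∞ can be checked numerically for a given design matrix and candidate support; a small sketch (values below 1 indicate the condition holds for that support):

import numpy as np

def eta_inf(Phi, support):
    """Irrepresentable-condition quantity for the columns in `support`."""
    Phi1 = Phi[:, support]
    rest = [j for j in range(Phi.shape[1]) if j not in support]
    pinv = np.linalg.pinv(Phi1)              # (Phi1^T Phi1)^{-1} Phi1^T
    return max(np.abs(pinv @ Phi[:, j]).sum() for j in rest)

rng = np.random.default_rng(2)
Phi = rng.standard_normal((50, 10))
print(eta_inf(Phi, [0, 1]))                  # < 1 suggests the condition holds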
Theorem 3.1. [ZY06] Suppose β* has q nonzero entries. Let the matrix C′ be the normalized version of C, with C′_{ij} = C_{ij}/C_{ii}. If max_{i,j: i≠j} |C′_{ij}| ≤ c/(2q−1) for a constant 0 ≤ c < 1, then the strong irrepresentable condition holds.
Our main result below shows that this sufficient condition is satisfied with high probability while requiring relatively few labeled examples; as a result the correct model is identified consistently, which in turn yields a good classification function.
Theorem 3.2. Let q be the minimum number of columns of the design matrix Φ ∈ R^{l×|N|}, constructed from l labeled examples, that describes the sparse model. Then for any 0 < δ < 1, if the number of unlabeled examples u satisfies u > 2048 q² log(2/δ) / (g²_{N_max} λ²_{N_max}), then with probability greater than 1 − 2δ − 4 exp(−l λ²_{N_max}/(50 q²)), max_{i≠j} |C′_{ij}| < 1/(2q−1), where λ_{N_max} is the N_max-th (to be defined later) largest eigenvalue of L_K and g_{N_max} is the N_max-th eigengap.
Note that in our framework, unlabeled examples help polynomially fast in estimating the eigenfunctions, while labeled examples help exponentially fast in identifying the sparse model consisting of representative eigenfunctions. Interestingly, in the semi-supervised learning setting, a similar role of labeled and unlabeled examples (in reducing classification error) has been reported in the literature [CC96, RV95, SB07, SNZ08].
3.1 Brief Overview of the Analysis
As a first step of our analysis, in section 3.2, we estimate the separation requirement among the
high density regions which ensures that each high density region (class) can be well represented
by a unique eigenfunction. This allows us to express the classification task in this eigenfunction basis, where we look for a classification function consisting of a linear combination of representative
eigenfunctions only and thus relate the problem to sparse approximation from the model selection
point of view, which is a well studied field [Wai06, ZH06, CP07].
As a second step in section 3.3, using perturbation results from [RBV08], we estimate the number of
unlabeled examples required to ensure that Nystrom extensions of eigenvectors of K approximate
the eigenfunctions of the convolution operator LK reasonably well with high probability.
Finally, as a third step, in section 3.4 we establish a concentration inequality which, along with the result from the second step, ensures that as more and more labeled examples are used to fit the eigenfunction basis to the data, the probability that Lasso identifies the correct model consisting of representative eigenfunctions increases exponentially fast.
3.2 Separation Requirement
To motivate our discussion we consider a binary classification problem where the marginal density can be regarded as a mixture model in which each class has its own probability density function, p_1(x) and p_2(x), with corresponding mixing weights π_1 and π_2. Thus, the density of the mixture is p(x) = π_1 p_1(x) + π_2 p_2(x). We will use the following results from [SBY08a] specifying the behavior of the eigenfunction of L_K corresponding to the largest eigenvalue.
Theorem 3.3. [SBY08a] The top eigenfunction φ^L_0(x) of L_K, corresponding to the largest eigenvalue λ_0: (1) is the only eigenfunction with no sign change; (2) has multiplicity one; (3) is non-zero on the support of the underlying density; (4) satisfies |φ^L_0(x)| ≤ (1/λ_0) √(∫ k²(x, z) p(z) dz) (tail decay property), where p is the underlying probability density function.
Note that the last (tail decay) property above is not restricted to the top eigenfunction alone but is satisfied by all eigenfunctions of L_K. Now, consider applying L_K in the three cases where the underlying probability distributions are p_1, p_2 and p. The largest eigenvalues and corresponding eigenfunctions in these three cases are λ^1_0, λ^2_0, λ_0 and φ^{L,1}_0, φ^{L,2}_0, φ^L_0 respectively. To show the dependency on the underlying probability distribution explicitly, we will denote the corresponding operators by L^{p_1}_K, L^{p_2}_K and L^p_K respectively. Clearly, L^p_K = π_1 L^{p_1}_K + π_2 L^{p_2}_K. Then we can write L^p_K φ^{L,1}_0(x) = ∫ k(x, z) φ^{L,1}_0(z) p(z) dz = π_1 λ^1_0 φ^{L,1}_0(x) + T_1(x), where T_1(x) = π_2 ∫ k(x, z) φ^{L,1}_0(z) p_2(z) dz. In a similar way we can write L^p_K φ^{L,2}_0(x) = π_2 λ^2_0 φ^{L,2}_0(x) + T_2(x), where T_2(x) = π_1 ∫ k(x, z) φ^{L,2}_0(z) p_1(z) dz. Thus, when T_1(x) and T_2(x) are small enough, φ^{L,1}_0 and φ^{L,2}_0 are (approximately) eigenfunctions of L^p_K with corresponding eigenvalues π_1 λ^1_0 and π_2 λ^2_0 respectively. Note that the "separation condition" requirement refers to T_1(x) and T_2(x) being small, so that the eigenfunctions corresponding to the largest eigenvalues of the convolution operator applied to the individual high density bumps are preserved when the operator is applied to the mixture. Clearly, we cannot expect T_1(x), T_2(x) to be arbitrarily small if there is sufficient overlap between p_1 and p_2. Thus, we will restrict ourselves to the following class of probability distributions for each individual class, which have reasonably fast tail decay.
Assumption 1. For any 1/2 < α < 1, let M(α, R) be the class of probability distributions whose density function p satisfies:
1) ∫_R p(x) dx = α, where R is the minimum-volume ball around the mean of the distribution.
2) For any positive t > 0, smaller than the radius of R, and for any point z ∈ X \ R with dist(z, R) ≥ t, the set S = {x ∈ (X \ R) ∩ B(z, 3t/√2)} has total probability mass ∫_S p(x) dx ≤ C_1 exp(−dist²(z, R)/t²) for some C_1 > 0,
where the distance between a point x and a set D is defined as dist(x, D) = inf_{y∈D} ‖x − y‖. With a slight abuse of notation we will write p ∈ M(α, R) to mean that p is the probability density function of a member of M(α, R). Now a rough estimate of the separation requirement is given by the following lemma.
Lemma 3.1. Let p_1 ∈ M(α, R_1) and p_2 ∈ M(α, R_2), and let the minimum distance between R_1 and R_2 be Δ. If Δ = Ω̃(σ√d) then T_1(x) and T_2(x) can be made arbitrarily small for all x ∈ X.
The estimate of Δ in the above lemma, where we hide the log factor by Ω̃, is by no means tight; nevertheless, it shows that the separation requirement refers to the existence of a low density valley between
two high density regions, each corresponding to one of the classes. This separation requirement is roughly of the same order as that required to learn a mixture of Gaussians [Das99]. Note that, provided the separation requirement is satisfied, φ^{L,1}_0 and φ^{L,2}_0 are not necessarily the top two eigenfunctions of L^p_K corresponding to the two largest eigenvalues, but can be quite far down the spectrum of L^p_K, depending on the mixing weights π_1, π_2. Next, the following lemma suggests that we can say more about the eigenfunction corresponding to the largest eigenvalue.
Lemma 3.2. For any e/(1+e) < α < 1, let q ∈ M(α, R). If φ^L_0 is the eigenfunction of L_K corresponding to the largest eigenvalue λ_0, then there exists a C_1 > 0 such that:
1) for all x ∈ X \ R, |φ^L_0(x)| ≤ ((C_1 + α)/λ_0) exp(−dist²(x, R)/(2σ²));
2) for all z ∈ R and x ∈ X \ R, |φ^L_0(z)| ≥ |φ^L_0(x)|.
Thus, for each class, the top eigenfunction corresponding to the largest eigenvalue represents the high density region reasonably well; outside the high density region it has lower absolute value and decays exponentially fast.
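The no-sign-change and localization behavior described in Theorem 3.3 and Lemma 3.2 is easy to observe numerically; a minimal one-dimensional sketch with two unequal, well-separated Gaussian clusters (sizes and bandwidth are illustrative):

import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(8, 1, 200)])  # two clusters
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2) / len(x)
lams, V = np.linalg.eigh(K)
v = V[:, -1] * np.sign(V[:, -1].sum())       # top eigenvector, sign-fixed
print("any sign change:", bool(np.any(v < -1e-6)))           # expect False
print("mass on cluster 1 / cluster 2:",
      np.abs(v[:400]).sum(), np.abs(v[400:]).sum())           # localized on one cluster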
3.3 Finite Sample Results
We start with the following assumption.
Assumption 2. The N_max largest eigenvalues of L_K and K, where N_max = max{i : i ∈ N}, are simple and bounded away from zero.
Note that the Nystrom extensions φ_i are eigenfunctions of an operator L_{K,H} : H → H, where H is the unique RKHS defined by the chosen Gaussian kernel, and all the eigenvalues of K are also eigenvalues of L_{K,H} ([RBV08]). There are two implications of Assumption 2. The first, due to the bounded-away-from-zero part, ensures that if we restrict to the φ_i ∈ H corresponding to the largest N_max eigenvalues, then each of them is square integrable and hence belongs to L²(X, P_X). The second, due to the simplicity part, ensures that the eigenfunctions corresponding to the N_max largest eigenvalues are uniquely defined, and so are the orthogonal projections onto them. Note that if any eigenvalue has multiplicity greater than one, then the corresponding eigenspace is well defined but the individual eigenfunctions are not. Thus, Assumption 2 enables us to compare how close each φ_i is to some other function in L²(X, P_X) in the L²(X, P_X) norm sense. Let g_{N_max} be the N_max-th eigengap when the eigenvalues of L_K are sorted in non-increasing order. Then we have the following results.
Lemma 3.3. Suppose Assumption 2 holds and the top N_max eigenvalues of L_K and K are sorted in decreasing order. Then for any 0 < δ < 1 and for any i ∈ N, with probability at least (1 − δ), ‖φ_i − φ^L_i‖_{L²(X,P_X)} ≤ (2/g_{N_max}) √(2 log(2/δ)/(u λ_i)).
Corollary 3.1. Under the above conditions, for any 0 < δ < 1 and for any i, j ∈ N, with probability at least (1 − δ) the following hold:
1) |⟨φ_i, φ_j⟩_{L²(X,P_X)}| ≤ √(8 log(2/δ)/(g²_{N_max} u)) (1/√λ_i + 1/√λ_j) + (8 log(2/δ)/(g²_{N_max} u)) (1/(√λ_i √λ_j));
2) 1 − √(8 log(2/δ)/(g²_{N_max} u λ_i)) ≤ ‖φ_i‖_{L²(X,P_X)} ≤ 1 + √(8 log(2/δ)/(g²_{N_max} u λ_i)).
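The dependence on the number of unlabeled points can also be seen empirically: in the small sketch below the top eigenvalues of the normalized kernel matrix stabilize as u grows (one-dimensional data and unit bandwidth are illustrative choices).

import numpy as np

rng = np.random.default_rng(6)
for u in [100, 400, 1600]:
    x = rng.normal(0, 1, u)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2) / u
    print(u, np.round(np.linalg.eigvalsh(K)[-3:], 4))  # top 3 eigenvalues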
3.4 Concentration Results
Having established that the {φ_i}_{i∈N} approximate the corresponding eigenfunctions of L_K reasonably well, we next need to consider what happens when we restrict each φ_i to finitely many labeled examples. Note that the design matrix Φ ∈ R^{l×|N|} is constructed by restricting the {φ_j}_{j∈N} to the l labeled data points {x_i}_{i=1}^l, so that the ith column of Φ is (φ_{N(i)}(x_1), φ_{N(i)}(x_2), ..., φ_{N(i)}(x_l))^T ∈ R^l. Now consider the |N| × |N| matrix C = (1/l) Φ^T Φ, where C_{ij} = (1/l) Σ_{k=1}^l φ_{N(i)}(x_k) φ_{N(j)}(x_k). First, applying Hoeffding's inequality we establish the following.
Lemma 3.4. For all i, j ∈ N and ε_1 > 0 the following two facts hold:
P( |(1/l) Σ_{k=1}^l [φ_i(x_k)]² − E[φ_i(X)²]| ≥ ε_1 ) ≤ 2 exp(−l ε_1² λ_i² / 2);
P( |(1/l) Σ_{k=1}^l φ_i(x_k) φ_j(x_k) − E[φ_i(X) φ_j(X)]| ≥ ε_1 ) ≤ 2 exp(−l ε_1² λ_i λ_j / 2).
Next, consider the |N| × |N| normalized matrix C′ where C′_{ij} = C_{ij}/C_{ii} and C′_{ii} = 1. To ensure that Lasso consistently chooses the correct model we need to show (see Theorem 3.1) that max_{i≠j} |C′_{ij}| < 1/(2q−1) with high probability. Applying the above concentration result and the finite sample results yields Theorem 3.2.
4 Experimental Results
4.1 Toy Dataset
Here we present a synthetic example in 2-D. Consider a binary classification problem where the positive examples are generated from a Gaussian distribution with mean (0, 0) and covariance matrix [2 0; 0 2], and the negative examples are generated from a mixture of Gaussians having means and covariance matrices (5, 5), [2 1; 1 2] and (7, 7), [1.5 0; 0 1.5] respectively. The corresponding mixing weights are 0.4, 0.3 and 0.3 respectively. The left panel in Figure 1 shows the probability density of the mixture in blue and the representative eigenfunctions of each class in green and magenta respectively, using 1000 examples (positive and negative) drawn from this mixture. It is clear that each representative eigenfunction represents the high density area of a particular class reasonably well, so intuitively a linear combination of them will give a good decision function. In fact, the right panel of Figure 1 shows the regularization path for L1-penalized least squares regression with 20 labeled examples. The bold green and magenta lines show the coefficient values for the representative eigenfunctions for different values of the regularization parameter t. As can be seen, the regularization parameter t can be chosen so that the decision function consists of a linear combination of representative eigenfunctions only. Note that these representative eigenfunctions need not be the top two eigenfunctions corresponding to the largest eigenvalues.
[Figure 1 appears here: left panel, the probability density of the mixture together with the two representative eigenfunctions; right panel, the Lasso regularization path (coefficients versus t).]
Figure 1: Left panel: probability density of the mixture in blue and representative eigenfunctions in green and magenta. Right panel: regularization path. Bold lines correspond to the regularization paths associated with the representative eigenfunctions.
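For concreteness, a sketch of the 2-D toy data generation described above (component 0 is the positive class; the mixing weights 0.4, 0.3, 0.3 are taken from the text):

import numpy as np

rng = np.random.default_rng(4)
n = 1000
comp = rng.choice(3, size=n, p=[0.4, 0.3, 0.3])
means = [np.array([0, 0]), np.array([5, 5]), np.array([7, 7])]
covs = [np.array([[2, 0], [0, 2]]),
        np.array([[2, 1], [1, 2]]),
        np.array([[1.5, 0], [0, 1.5]])]
X = np.stack([rng.multivariate_normal(means[c], covs[c]) for c in comp])
y = np.where(comp == 0, 1, -1)               # component 0 is the positive class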
4.2 UCI Datasets
In this set of experiments we tested the effectiveness of our algorithm (we call it SSL SEB) on some common UCI datasets. We compared our algorithm with the state of the art semi-supervised learning (manifold regularization) method Laplacian SVM (LapSVM) [BNS06], with a fully supervised SVM, and also with two other kernel sparse regression methods. In KPCA+L1 we selected the top |N| eigenvectors and applied L1 regularization; in KPCA F+L1 we selected the top 20 (fixed) eigenvectors of K and applied L1 regularization (see footnote 1); whereas in KPCA max+L1 we selected the top max eigenvectors and applied L1 regularization, where max is the maximum index of the set of eigenvectors in N, that is, the index of the lowest eigenvector chosen by our method. For both SVM and LapSVM we used an RBF kernel. In each experiment a specified number of examples (l) were randomly chosen and labeled, and the rest (u) were treated as an unlabeled test set. Such random splitting was performed 30 times and the average is reported.
The results are reported in Table 1. As can be seen, for small numbers of labeled examples our method convincingly outperforms SVM and is comparable to LapSVM. The results also suggest that instead of selecting the top few eigenvectors, as is normally done in KPCA, selecting them by our method and then applying L1 regularization yields better results. In particular, in the case of the IONOSPHERE and BREAST-CANCER data sets the top |N| (5 and 3 respectively) eigenvectors do not contain the representative ones; as a result, in these two cases KPCA+L1 performs very poorly. Table 2 shows that the solution obtained by our method is very sparse, where average sparsity is the average number of non-zero coefficients.
We note that our method does not work equally well for all datasets (see footnote 2), and has generally higher variability than LapSVM.
(1) We also selected the top 100 eigenvectors and applied the L1 penalty, but it gave worse results.
DATA SET          IONOSPHERE (d=33, l+u=351)                 HEART (d=13, l+u=303)
# Labeled Data    l=10          l=20          l=30           l=10          l=20          l=30
SSL SEB           78.26±13.56   85.84±10.61   87.25±4.16     75.45±6.14    77.34±6.04    79.92±1.18
KPCA+L1           65.15±8.82    65.66±9.81    69.57±9.89     66.82±7.94    70.36±8.41    75.16±6.68
KPCA F+L1         64.92±10.13   67.43±11.68   69.43±11.26    60.91±7.33    67.32±7.01    71.46±5.91
KPCA max+L1       59.76±10.23   64.73±11.62   66.89±12.45    57.26±5.16    60.16±6.69    63.36±6.15
SVM               65.16±10.87   72.09±10.04   79.80±9.94     64.61±11.63   73.16±5.95    76.55±4.29
LapSVM            71.17±7.33    77.18±4.07    81.32±3.81     74.91±5.55    75.33±6.08    77.43±3.14

DATA SET          WINE (d=13, l+u=178)        BREAST-CANCER (d=30, l+u=569)   VOTING (d=16, l+u=435)
# Labeled Data    l=10          l=20          l=5           l=10             l=10          l=15
SSL SEB           93.01±8.49    98.95±8.49    96.68±3.43    98.66±2.86       86.85±6.21    87.84±3.82
KPCA+L1           93.47±10.06   98.75±3.89    70.26±14.43   73.95±13.68      86.85±6.21    87.84±3.82
KPCA F+L1         79.82±10.29   87.32±8.56    63.04±12.29   81.44±13.12      71.78±12.65   77.38±10.43
KPCA max+L1       84.62±9.63    89.96±9.26    59.32±15.18   73.95±8.97       71.78±12.65   77.38±10.43
SVM               83.98±10.25   88.12±11.68   72.83±17.56   97.32±8.65       81.53±16.05   88.51±5.88
LapSVM            98.33±5.33    97.67±1.57    98.95±2.32    99.72±1.42       89.52±1.43    89.97±1.26

Table 1: Classification accuracies for different UCI datasets.
DATA SET          SSL SEB    KPCA+L1    KPCA F+L1   KPCA max+L1
IONOSPHERE        2.83 / 5   3.23 / 5   6.05 / 20   6.85 / 23
HEART             4.63 / 9   5.84 / 9   8.11 / 20   16.42 / 78
WINE              3.52 / 6   3.8 / 6    6.12 / 20   6.07 / 16
BREAST-CANCER     2.10 / 3   2.78 / 3   4.70 / 20   10.81 / 57
VOTING            2.02 / 3   2.02 / 3   3.05 / 20   2.02 / 3

Table 2: Average sparsity of our method for different UCI datasets. The notation A / B represents average sparsity A and number of eigenvectors B (|N| or 20).
4.3 Handwritten Digit Recognition
In this set of experiments we applied our method to the 45 binary classification problems that arise in pairwise classification of handwritten digits and compared its performance with LapSVM. For each pairwise classification problem, in each trial, 500 images of each digit in the USPS training set were chosen uniformly at random, out of which 20 images were labeled and the rest were set aside for testing. This trial was repeated 10 times. For LapSVM we set the regularization terms and the kernel as reported by [BNS06] for a similar set of experiments, namely we set γ_A l = 0.005, γ_I l/(u+l)² = 0.045, and chose a polynomial kernel of degree 3. The results are shown in Figure 2. As can be seen, our method is comparable to LapSVM.
[Figure 2 appears here: test error rate (%) of SSL_SEB and LapSVM across all 45 two-class classification problems on the USPS dataset.]
Figure 2: Classification results for USPS dataset
We also performed multi-class classification on the USPS dataset. In particular, we chose all the images of digits 3, 4 and 5 from the USPS training data set (there were 1866 in total) and randomly labeled 10 images from each class. The remaining 1836 images were set aside for testing. The average prediction accuracy of LapSVM, after repeating this procedure 20 times, was 90.14%, compared to 87.53% for our method.
5 Conclusion
In this paper we have presented a framework for spectral semi-supervised learning based on the cluster assumption. We showed that the cluster assumption is equivalent to the classifier being sparse in a certain appropriately chosen basis, and demonstrated how such a basis can be computed using only unlabeled data. We have provided a theoretical analysis of the resulting algorithm and given experimental results demonstrating that its performance is comparable to the state of the art on a number of data sets while dramatically outperforming the natural baseline of KPCA + Lasso.
(2) It turned out that in the cases where our method performed very poorly, the respective distances between the means of the corresponding two classes were very small.
References
[BNS06]
M. Belkin, P. Niyogi, and V. Sindhwani. Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples. Journal of Machine Learning
Research, 7:2399–2434, 2006.
[BPV03]
Y. Bengio, J-F. Paiement, and P. Vincent. Out-of-sample Extensions for LLE, Isomap,
MDS, Eigenmaps and Spectral Clustering. In NIPS. 2003.
[CC96]
V. Castelli and T. M. Cover. The Relative Value of Labeled and Unlabeled Samples in
Pattern Recognition with Unknown Mixing Parameters. IEEE Transactions on Information Theory, 42(6):2102–2117, 1996.
[CP07]
E. J. Candes and Y. Plan. Near Ideal Model Selection by `1 Minimization, eprint
arxiv:0801.0345. 2007.
[CWS02] O. Chapelle, J. Weston, and B. Scholkopf. Cluster Kernels for Semi-supervised Learning. In NIPS. 2002.
[Das99]
S. Dasgupta. Learning Mixture of Gaussians. In 40th Annual Symposium on Foundations
of Computer Science, 1999.
[HTF03]
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, 2003.
[RBV08] L. Rosasco, M. Belkin, and E. De Vito. Perturbation Results for Learning Empirical
Opertors. Technical Report TR-2008-052, Massachusetts Institute of Technology, Cambridge, MA, August 2008.
[RV95]
J. Ratsaby and S. Venkatesh. Learning From a Mixture of Labeled and Unlabeled Examples with Parametric Side Information. In COLT. 1995.
[SB07]
K. Sinha and M. Belkin. The Value of Labeled and Unlabeled Examples when the Model
is Imperfect. In NIPS. 2007.
[SBY08a] T. Shi, M. Belkin, and B. Yu. Data Spectroscopy: Eigenspace of Convolution Operators
and Clustering. Technical report, Dept. of Statistics, Ohio State University, 2008.
[SBY08b] T. Shi, M. Belkin, and B. Yu. Data Spectroscopy: Learning Mixture Models using
Eigenspaces of Convolution Operators. In ICML. 2008.
[SNZ08] A. Singh, R. D. Nowak, and X. Zhu. Unlabeled Data: Now it Helps Now it Doesn?t. In
NIPS. 2008.
[SSM98] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10:1299–1319, 1998.
[Tib96]
R. Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal
Statistical Society, Series B, 58:267–288, 1996.
[Tro04]
J. A. Tropp. Greed is Good: Algorithmic Result for Sparse Approximation. IEEE Trans.
Info. Theory, 50(10):2231–2242, 2004.
[Wai06] M. Wainwright. Sharp Thresholds for Noisy and High-dimensional Recovery of Sparsity using `1 -constrained Quadratic Programming. Technical Report TR-709, Dept. of
Statistics, U. C. Berkeley, September 2006.
[ZH06]
C. Zhang and J. Huang. Model Selection Consistency of Lasso in High Dimensional
Linear Regression. Technical report, Dept. of Statistics, Rutgers University, 2006.
[Zha08] T. Zhang. On consistency of feature selection using greedy least square regression.
Journal of Machine Learning Research, 2008.
[ZY06]
P. Zhao and B. Yu. On Model Selection Consistency of Lasso. Journal of Machine
Learning Research, 7:2541–2563, 2006.
Bayesian Nonparametric Models on Decomposable
Graphs
François Caron
INRIA Bordeaux Sud-Ouest
Institut de Mathématiques de Bordeaux
University of Bordeaux, France
[email protected]
Arnaud Doucet
Departments of Computer Science & Statistics
University of British Columbia, Vancouver, Canada
and The Institute of Statistical Mathematics
Tokyo, Japan
[email protected]
Abstract
Over recent years Dirichlet processes and the associated Chinese restaurant process (CRP) have found many applications in clustering while the Indian buffet
process (IBP) is increasingly used to describe latent feature models. These models are attractive because they ensure exchangeability (over samples). We propose
here extensions of these models where the dependency between samples is given
by a known decomposable graph. These models have appealing properties and
can be easily learned using Monte Carlo techniques.
1 Motivation
The CRP and IBP have found numerous applications in machine learning over recent years [5, 10]. We consider here the case where the data we are interested in are "locally" dependent, these dependencies being represented by a known graph G where each data point/object is associated to a vertex. These local dependencies can correspond to any conceptual or real (e.g. space, time) metric. For example, in the context of clustering, we might want to propose a prior distribution on partitions enforcing that data which are "close" in the graph are more likely to be in the same cluster. Similarly, in the context of latent feature models, we might be interested in a prior distribution on features enforcing that data which are "close" in the graph are more likely to possess similar features. The "standard" CRP and IBP correspond to the case where the graph G is complete; that is, it is fully
generalized versions of the CRP and IBP enjoy attractive properties. Each clique of the graph follows
marginally a CRP or an IBP process and explicit expressions for the joint prior distribution on the
graph is available. It makes it easy to learn those models using straightforward generalizations of
Markov chain Monte Carlo (MCMC) or Sequential Monte Carlo (SMC) algorithms proposed to
perform inference for the CRP and IBP [5, 10, 14].
The rest of the paper is organized as follows. In Section 2, we review the popular Dirichlet multinomial allocation model and the Dirichlet Process (DP) partition distribution. We propose an extension of these two models to decomposable graphical models. In Section 3 we discuss nonparametric
latent feature models, reviewing briefly the construction in [5] and extending it to decomposable
graphs. We demonstrate these models in Section 4 on two applications: an alternative to the hierarchical DP model [12] and a time-varying matrix factorization problem.
2 Prior distributions for partitions on decomposable graphs
Assume we have n observations. When performing clustering, we associate to each of these observations an allocation variable z_i ∈ [K] = {1, . . . , K}. Let π_n be the partition of [n] = {1, . . . , n} defined by the equivalence relation i ∼ j ⇔ z_i = z_j. The resulting partition π_n = {A_1, . . . , A_{n(π_n)}} is an unordered collection of disjoint non-empty subsets A_j of [n], j = 1, . . . , n(π_n), where ∪_j A_j = [n] and n(π_n) is the number of subsets of partition π_n. We also denote by P_n the set of all partitions of [n] and let n_j, j = 1, . . . , n(π_n), be the size of the subset A_j.
Each allocation variable z_i is associated to a vertex/site of an undirected graph G, which is assumed to be known. In the standard case where the graph G is complete, we first review briefly here two popular prior distributions on z_{1:n}, equivalently on π_n. We then extend these models to undirected decomposable graphs; see [2, 8] for an introduction to decomposable graphs. Finally we briefly discuss the directed case. Note that the models proposed here are completely different from the hyper multinomial-Dirichlet in [2] and its recent DP extension [6].
2.1 Dirichlet multinomial allocation model and DP partition distribution
Assume for the time being that K is finite. When the graph is complete, a popular choice for the allocation variables is to consider a Dirichlet multinomial allocation model [11]

\theta \sim \mathcal{D}(\tfrac{\alpha}{K}, \ldots, \tfrac{\alpha}{K}), \qquad z_i \mid \theta \sim \theta   (1)

where D is the standard Dirichlet distribution and α > 0. Integrating out θ, we obtain the following Dirichlet multinomial prior distribution

\Pr(z_{1:n}) = \frac{\Gamma(\alpha) \prod_{j=1}^{K} \Gamma(n_j + \alpha/K)}{\Gamma(\alpha + n)\, \Gamma(\alpha/K)^{K}}   (2)

and then, using the straightforward equality \Pr(\pi_n) = \frac{K!}{(K - n(\pi_n))!} \Pr(z_{1:n}), valid for all π_n ∈ P_K where P_K = {π_n ∈ P_n | n(π_n) ≤ K}, we obtain

\Pr(\pi_n) = \frac{K!}{(K - n(\pi_n))!} \, \frac{\Gamma(\alpha) \prod_{j=1}^{n(\pi_n)} \Gamma(n_j + \alpha/K)}{\Gamma(\alpha + n)\, \Gamma(\alpha/K)^{n(\pi_n)}}.   (3)
The DP may be seen as a generalization of the Dirichlet multinomial model when the number of components K → ∞; see for example [10]. In this case the distribution over the partition π_n of [n] is given by [11]

\Pr(\pi_n) = \frac{\alpha^{n(\pi_n)} \prod_{j=1}^{n(\pi_n)} \Gamma(n_j)}{\prod_{i=1}^{n} (\alpha + i - 1)}.   (4)
Let π_{−k} = {A_{1,−k}, . . . , A_{n(π_{−k}),−k}} be the partition induced by removing item k from π_n, and let n_{j,−k} be the size of cluster j for j = 1, . . . , n(π_{−k}). It follows from (4) that an item k is assigned to an existing cluster j, j = 1, . . . , n(π_{−k}), with probability proportional to n_{j,−k}/(n − 1 + α) and forms a new cluster with probability α/(n − 1 + α). This property is the basis of the CRP. We now extend the Dirichlet multinomial allocation and the DP partition distribution models to decomposable graphs.
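The standard CRP just described is straightforward to simulate, and is a useful baseline before the graph-structured extension. The sketch below is a minimal Python illustration (our own code, not the authors'): each item joins an existing cluster with probability proportional to its size, or opens a new cluster with probability proportional to α, exactly as implied by (4).

```python
import random

def sample_crp_partition(n, alpha, seed=0):
    """Draw a partition of [n] from the CRP prior implied by (4)."""
    rng = random.Random(seed)
    sizes, labels = [], []        # cluster sizes n_j and labels z_i
    for _ in range(n):
        # existing cluster j with weight n_j, new cluster with weight alpha
        j = rng.choices(range(len(sizes) + 1), weights=sizes + [alpha])[0]
        if j == len(sizes):
            sizes.append(1)       # open a new cluster
        else:
            sizes[j] += 1         # join existing cluster j
        labels.append(j)
    return labels

print(sample_crp_partition(10, alpha=1.0))
```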
2.2 Markov combination of Dirichlet multinomial and DP partition distributions
Let G be a decomposable undirected graph, C = {C_1, . . . , C_p} a perfect ordering of the cliques and S = {S_2, . . . , S_p} the associated separators. It can be easily checked that if the marginal distribution of z_C for each clique C ∈ C is defined by (2), then these distributions are consistent as they yield the same distribution (2) over the separators. Therefore, the unique Markov distribution over G with Dirichlet multinomial distribution over the cliques is defined by [8]

\Pr(z_{1:n}) = \frac{\prod_{C \in \mathcal{C}} \Pr(z_C)}{\prod_{S \in \mathcal{S}} \Pr(z_S)}   (5)
where for each complete set B ⊆ G, Pr(z_B) is given by (2). It follows that we have, for any π_n ∈ P_K,

\Pr(\pi_n) = \frac{K!}{(K - n(\pi_n))!} \, \frac{\prod_{C \in \mathcal{C}} \dfrac{\Gamma(\alpha) \prod_{j=1}^{K} \Gamma(n_{j,C} + \alpha/K)}{\Gamma(\alpha + n_C)\, \Gamma(\alpha/K)^{K}}}{\prod_{S \in \mathcal{S}} \dfrac{\Gamma(\alpha) \prod_{j=1}^{K} \Gamma(n_{j,S} + \alpha/K)}{\Gamma(\alpha + n_S)\, \Gamma(\alpha/K)^{K}}}   (6)

where for each complete set B ⊆ G, n_{j,B} is the number of items associated to cluster j, j = 1, . . . , K, in B and n_B is the total number of items in B. Within each complete set B, the allocation variables define a partition distributed according to the Dirichlet-multinomial distribution.
We now extend this approach to DP partition distributions; that is, we derive a joint distribution over π_n such that the distribution of π_B over each complete set B of the graph is given by (4) with α > 0. Such a distribution satisfies the consistency condition over the separators, as the restriction of any partition distributed according to (4) still follows (4) [7].
Proposition. Let P_n^G be the set of partitions π_n ∈ P_n such that for each decomposition A, B and any (i, j) ∈ A × B, i ∼ j ⇒ ∃k ∈ A ∩ B such that k ∼ i ∼ j. As K → ∞, the prior distribution over partitions (6) is given, for each π_n ∈ P_n^G, by

\Pr(\pi_n) = \alpha^{n(\pi_n)} \, \frac{\prod_{C \in \mathcal{C}} \dfrac{\prod_{j=1}^{n(\pi_C)} \Gamma(n_{j,C})}{\prod_{i=1}^{n_C} (\alpha + i - 1)}}{\prod_{S \in \mathcal{S}} \dfrac{\prod_{j=1}^{n(\pi_S)} \Gamma(n_{j,S})}{\prod_{i=1}^{n_S} (\alpha + i - 1)}}   (7)

where n(π_B) is the number of clusters in the complete set B.
Proof. From (6), we have

\Pr(\pi_n) = \frac{K(K-1) \cdots (K - n(\pi_n) + 1)}{K^{\sum_{C \in \mathcal{C}} n(\pi_C) - \sum_{S \in \mathcal{S}} n(\pi_S)}} \; \frac{\prod_{C \in \mathcal{C}} \alpha^{n(\pi_C)} \prod_{j=1}^{n(\pi_C)} \Gamma(n_{j,C} + \alpha/K) \Big/ \prod_{i=1}^{n_C} (\alpha + i - 1)}{\prod_{S \in \mathcal{S}} \alpha^{n(\pi_S)} \prod_{j=1}^{n(\pi_S)} \Gamma(n_{j,S} + \alpha/K) \Big/ \prod_{i=1}^{n_S} (\alpha + i - 1)}

Thus when K → ∞, we obtain (7) if n(π_n) = Σ_{C∈C} n(π_C) − Σ_{S∈S} n(π_S), and 0 otherwise. We have n(π_n) ≤ Σ_{C∈C} n(π_C) − Σ_{S∈S} n(π_S) for any π_n ∈ P_n, and the subset of P_n verifying n(π_n) = Σ_{C∈C} n(π_C) − Σ_{S∈S} n(π_S) corresponds to the set P_n^G. □
Example. Let the notation i ∼ j (resp. i ≁ j) indicate an edge (resp. no edge) between two sites. Let n = 3 and G be the decomposable graph defined by the relations 1 ∼ 2, 2 ∼ 3 and 1 ≁ 3. The set P_3^G is then equal to {{{1, 2, 3}}; {{1, 2}, {3}}; {{1}, {2, 3}}; {{1}, {2}, {3}}}. Note that the partition {{1, 3}, {2}} does not belong to P_3^G. Indeed, as there is no edge between 1 and 3, they cannot be in the same cluster if 2 is in another cluster. The cliques are C_1 = {1, 2} and C_2 = {2, 3} and the separator is S_2 = {2}. The distribution is hence given by Pr(π_3) = Pr(π_{C_1}) Pr(π_{C_2}) / Pr(π_{S_2}), and we can check that we obtain Pr({1, 2, 3}) = (α + 1)^{−2}, Pr({1, 2}, {3}) = Pr({1}, {2, 3}) = α(α + 1)^{−2} and Pr({1}, {2}, {3}) = α^2 (α + 1)^{−2}. □
Let us now define the full conditional distributions. Based on (7), the conditional assignment of an item k is proportional to the conditional over the cliques divided by the conditional over the separators. Let G_{−k} denote the undirected graph obtained by removing vertex k from G. Suppose that π_n ∈ P_n^G. If π_{−k} ∉ P_{n−1}^{G_{−k}}, then do not change the value of item k. Otherwise, item k is assigned to cluster j, j = 1, . . . , n(π_{−k}), with probability proportional to

\frac{\prod_{\{C \in \mathcal{C} \mid n_{-k,j,C} > 0\}} n_{-k,j,C}}{\prod_{\{S \in \mathcal{S} \mid n_{-k,j,S} > 0\}} n_{-k,j,S}}   (8)

and to a new cluster with probability proportional to α, where n_{−k,j,C} is the number of items in the set C \ {k} belonging to cluster j. The updating process is illustrated by the Chinese wedding party process¹ in Fig. 1. The results of this section can be extended to the Pitman-Yor process, and more generally to species sampling models.
Example (continuing). Given π_{−2} = {A_1 = {1}, A_2 = {3}}, we have Pr(item 2 assigned to A_1 = {1} | π_{−2}) = Pr(item 2 assigned to A_2 = {3} | π_{−2}) = (α + 2)^{−1} and Pr(item 2 assigned to new cluster A_3 | π_{−2}) = α(α + 2)^{−1}. Given π_{−2} = {A_1 = {1, 3}}, item 2 is assigned to A_1 with probability 1. □
¹ Note that this representation describes the full conditionals, while the CRP represents the sequential updating.
Figure 1: Chinese wedding party. Consider a group of n guests attending a wedding party. Each of the n guests may belong to one or several cliques, i.e. maximal groups of people such that everybody knows everybody. The belonging of each guest to the different cliques is represented by color patches on the figures, and the graphical representation of the relationship between the guests is represented by the graphical model (e). (a) Suppose that the guests are already seated such that two guests cannot be together at the same table if they are not part of the same clique, or if there does not exist a group of other guests such that they are related ('Any friend of yours is a friend of mine'). (b) The guest number k leaves his table and either (c) joins a table where there are guests from the same clique as him, with probability proportional to the product of the number of guests from each clique over the product of the number of guests belonging to several cliques at that table, or (d) he joins a new table with probability proportional to α.
2.3 Monte Carlo inference
2.3.1 MCMC algorithm
Using the full conditionals, a single-site Gibbs sampler can easily be designed to approximate the posterior distribution Pr(π_n | z_{1:n}). Given a partition π_n, an item k is taken out of the partition. If π_{−k} ∉ P_{n−1}^{G_{−k}}, item k keeps the same value. Otherwise, the item will be assigned to a cluster j, j = 1, . . . , n(π_{−k}), with probability proportional to

\frac{\prod_{\{C \in \mathcal{C} \mid n_{-k,j,C} > 0\}} n_{-k,j,C}}{\prod_{\{S \in \mathcal{S} \mid n_{-k,j,S} > 0\}} n_{-k,j,S}} \times \frac{p(z_{\{k\} \cup A_{j,-k}})}{p(z_{A_{j,-k}})}   (9)

and the item will be assigned to a new cluster with probability proportional to p(z_{{k}}) × α. Similarly to [3], we can also define a procedure to sample from p(α | n(π_n) = k). We assume that α ∼ G(a, b) and use p auxiliary variables x_1, . . . , x_p. The procedure is as follows.
- For j = 1, . . . , p, sample x_j | k, α ∼ Beta(α + n_{S_j}, n_{C_j} − n_{S_j})
- Sample α | k, x_{1:p} ∼ G(a + k, b − Σ_j log x_j)
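The auxiliary-variable procedure above is only a few lines of code. The following is a minimal numpy sketch (our own code with our own argument conventions, not the authors' implementation; it assumes the clique sizes n_{C_j} and separator sizes n_{S_j} are given as aligned arrays with n_{C_j} > n_{S_j}):

```python
import numpy as np

def resample_alpha(k, n_C, n_S, a, b, alpha, rng):
    """One auxiliary-variable update for alpha ~ Gamma(a, b), given
    k occupied clusters and aligned clique/separator size arrays."""
    # x_j | k, alpha ~ Beta(alpha + n_Sj, n_Cj - n_Sj)
    x = rng.beta(alpha + n_S, n_C - n_S)
    # alpha | k, x_{1:p} ~ Gamma(a + k, rate = b - sum_j log x_j)
    return rng.gamma(shape=a + k, scale=1.0 / (b - np.log(x).sum()))

rng = np.random.default_rng(0)
alpha = 1.0
for _ in range(10):   # a few Gibbs updates of alpha
    alpha = resample_alpha(k=4, n_C=np.array([10.0, 12.0]),
                           n_S=np.array([3.0, 3.0]),
                           a=1.0, b=1.0, alpha=alpha, rng=rng)
```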
2.3.2 Sequential Monte Carlo
We have so far only treated the case of an undirected decomposable graph G. We can formulate a sequential updating rule for the corresponding perfect directed version D of G. Indeed, let (a_1, . . . , a_{|V|}) be a perfect ordering and pa(a_k) be the set of parents of a_k, which is by definition complete. Let π_{k−1} = {A_{1,k−1}, . . . , A_{n(π_{k−1}),k−1}} denote the partition of the first k−1 vertices a_{1:k−1} and let n_{j,pa(a_k)} be the number of elements with value j in the set pa(a_k), j = 1, . . . , n(π_{k−1}). Then the vertex a_k joins the set j with probability n_{j,pa(a_k)} / (α + Σ_q n_{q,pa(a_k)}) and creates a new cluster with probability α / (α + Σ_q n_{q,pa(a_k)}).
One can then design a particle filter/SMC method in a similar fashion as [4]. Consider a set of N particles π_{k−1}^{(i)} with weights w_{k−1}^{(i)} ∝ Pr(π_{k−1}^{(i)}, z_{1:k−1}) (with Σ_{i=1}^{N} w_{k−1}^{(i)} = 1) that approximate the posterior distribution Pr(π_{k−1} | z_{1:k−1}). For each particle i, there are n(π_{k−1}^{(i)}) + 1 possible allocations for component a_k. We denote by π̃_k^{(i,j)} the partition obtained by associating component a_k to cluster j. The weight associated to π̃_k^{(i,j)} is given by

\tilde{w}_{k-1}^{(i,j)} = w_{k-1}^{(i)} \, \frac{p(z_{\{a_k\} \cup A_{j,k-1}})}{p(z_{A_{j,k-1}})} \times \begin{cases} \dfrac{n_{j,\mathrm{pa}(a_k)}}{\alpha + \sum_q n_{q,\mathrm{pa}(a_k)}} & \text{if } j = 1, \ldots, n(\pi_{k-1}^{(i)}) \\[6pt] \dfrac{\alpha}{\alpha + \sum_q n_{q,\mathrm{pa}(a_k)}} & \text{if } j = n(\pi_{k-1}^{(i)}) + 1 \end{cases}   (10)

Then we can perform a deterministic resampling step by keeping the N particles π̃_k^{(i,j)} with highest weights w̃_{k−1}^{(i,j)}. Let π_k^{(i)} be the resampled particles and w_k^{(i)} the associated normalized weights.
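The enumeration-and-prune structure of this step can be expressed compactly. The sketch below is our own illustration (not the authors' code): partitions are lists of clusters, and the user supplies callables n_pa (returning n_{j,pa(a_k)}) and lik_ratio (returning the marginal-likelihood ratio p(z_{{a_k} ∪ A_j})/p(z_{A_j}); for the new-cluster case this reduces to p(z_{a_k})).

```python
import heapq

def smc_step(particles, k, alpha, n_pa, lik_ratio, N):
    """One SMC step implementing (10): expand every particle by every
    allocation of vertex k, then keep the N highest-weight expansions
    (deterministic resampling).  `particles` is a list of
    (partition, weight) pairs, a partition being a list of clusters."""
    candidates = []
    for part, w in particles:
        counts = [n_pa(part, j, k) for j in range(len(part))]
        total = alpha + sum(counts)
        for j in range(len(part) + 1):
            prior = (counts[j] if j < len(part) else alpha) / total
            candidates.append((w * prior * lik_ratio(part, j, k), part, j))
    best = heapq.nlargest(N, candidates, key=lambda c: c[0])
    norm = sum(c[0] for c in best)
    resampled = []
    for w, part, j in best:
        new = [list(c) for c in part]
        (new.append([k]) if j == len(new) else new[j].append(k))
        resampled.append((new, w / norm))
    return resampled
```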
3 Prior distributions for infinite binary matrices on decomposable graphs
Assume we have n objects, each associated to a vertex of a graph G. To each object is associated a K-dimensional binary vector z_n = (z_{n,1}, . . . , z_{n,K}) ∈ {0, 1}^K, where z_{n,i} = 1 if object n possesses feature i and z_{n,i} = 0 otherwise. These vectors form a binary n × K matrix denoted Z_{1:n}. We denote by Ψ_{1:n} the associated equivalence class of left-ordered matrices and let E_K be the set of left-ordered matrices with at most K features.
In the standard case where the graph G is complete, we review briefly here two popular prior distributions on Z_{1:n}, equivalently on Ψ_{1:n}: the Beta-Bernoulli model and the IBP [5]. We then extend these models to undirected decomposable graphs. This can be used, for example, to define a time-varying IBP, as illustrated in Section 4.
3.1 Beta-Bernoulli and IBP distributions
The Beta-Bernoulli distribution over the allocation Z_{1:n} is

\Pr(Z_{1:n}) = \prod_{j=1}^{K} \frac{\frac{\alpha}{K}\, \Gamma(n_j + \frac{\alpha}{K})\, \Gamma(n - n_j + 1)}{\Gamma(n + 1 + \frac{\alpha}{K})}   (11)

where n_j is the number of objects having feature j. It follows that

\Pr(\Psi_{1:n}) = \frac{K!}{\prod_{h=0}^{2^n - 1} K_h!} \prod_{j=1}^{K} \frac{\frac{\alpha}{K}\, \Gamma(n_j + \frac{\alpha}{K})\, \Gamma(n - n_j + 1)}{\Gamma(n + 1 + \frac{\alpha}{K})}   (12)

where K_h is the number of features possessing the history h (see [5] for details). The nonparametric model is obtained by taking the limit when K → ∞:

\Pr(\Psi_{1:n}) = \frac{\alpha^{K^+}}{\prod_{h=1}^{2^n - 1} K_h!} \exp(-\alpha H_n) \prod_{j=1}^{K^+} \frac{(n - n_j)!\, (n_j - 1)!}{n!}   (13)

where K^+ is the total number of features and H_n = Σ_{k=1}^{n} 1/k. The IBP follows from (13).
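The sequential construction implied by (13) is easy to simulate: customer i samples each existing dish k with probability n_k/i and then tries Poisson(α/i) new dishes. The sketch below is a standard Python implementation of this construction (ours, not the authors'):

```python
import numpy as np

def sample_ibp(n, alpha, seed=0):
    """Draw a binary feature matrix Z from the IBP prior (13)."""
    rng = np.random.default_rng(seed)
    rows, n_k = [], []                   # per-customer rows, dish counts
    for i in range(1, n + 1):
        row = [int(rng.random() < c / i) for c in n_k]   # existing dishes
        k_new = rng.poisson(alpha / i)                   # new dishes
        row += [1] * k_new
        n_k = [c + z for c, z in zip(n_k, row)] + [1] * k_new
        rows.append(row)
    Z = np.zeros((n, len(n_k)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

print(sample_ibp(5, alpha=2.0))
```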
3.2 Markov combination of Beta-Bernoulli and IBP distributions
Let G be a decomposable undirected graph, C = {C_1, . . . , C_p} a perfect ordering of the cliques and S = {S_2, . . . , S_p} the associated separators. As in the Dirichlet-multinomial case, it is easily seen that if for each clique C ∈ C the marginal distribution is defined by (11), then these distributions are consistent as they yield the same distribution (11) over the separators. Therefore, the unique Markov distribution over G with Beta-Bernoulli distribution over the cliques is defined by [8]

\Pr(Z_{1:n}) = \frac{\prod_{C \in \mathcal{C}} \Pr(Z_C)}{\prod_{S \in \mathcal{S}} \Pr(Z_S)}   (14)
where Pr(Z_B) is given by (11) for each complete set B ⊆ G. The prior over Ψ_{1:n} is thus given, for Ψ_{1:n} ∈ E_K, by

\Pr(\Psi_{1:n}) = \frac{K!}{\prod_{h=0}^{2^n - 1} K_h!} \, \frac{\prod_{C \in \mathcal{C}} \prod_{j=1}^{K} \dfrac{\frac{\alpha}{K}\, \Gamma(n_{j,C} + \frac{\alpha}{K})\, \Gamma(n_C - n_{j,C} + 1)}{\Gamma(n_C + 1 + \frac{\alpha}{K})}}{\prod_{S \in \mathcal{S}} \prod_{j=1}^{K} \dfrac{\frac{\alpha}{K}\, \Gamma(n_{j,S} + \frac{\alpha}{K})\, \Gamma(n_S - n_{j,S} + 1)}{\Gamma(n_S + 1 + \frac{\alpha}{K})}}   (15)

where for each complete set B ⊆ G, n_{j,B} is the number of items having feature j, j = 1, . . . , K, in the set B and n_B is the total number of objects in set B. Taking the limit when K → ∞, we obtain after a few calculations
\Pr(\Psi_{1:n}) = \frac{\alpha^{K^+_{[n]}}}{\prod_{h=1}^{2^n - 1} K_h!} \exp\left[-\alpha \left(\sum_{C} H_{n_C} - \sum_{S} H_{n_S}\right)\right] \frac{\prod_{C \in \mathcal{C}} \prod_{j=1}^{K_C^+} \dfrac{(n_C - n_{j,C})!\, (n_{j,C} - 1)!}{n_C!}}{\prod_{S \in \mathcal{S}} \prod_{j=1}^{K_S^+} \dfrac{(n_S - n_{j,S})!\, (n_{j,S} - 1)!}{n_S!}}

if K^+_{[n]} = Σ_C K_C^+ − Σ_S K_S^+, and 0 otherwise, where K_B^+ is the number of different features possessed by objects in B.
Let E_n^G be the subset of E_n such that for each decomposition A, B and any (u, v) ∈ A × B: {u and v possess feature j} ⇒ ∃k ∈ A ∩ B such that {k possesses feature j}. Let Ψ_{−k} be the left-ordered matrix obtained by removing object k from Ψ_n, and let K^+_{−k} be the total number of different features in Ψ_{−k}. For each feature j = 1, . . . , K^+_{−k}, if Ψ_{−k} ∈ E^{G_{−k}}_{n−1}, then we have

\Pr(\psi_{k,j} = i) = \begin{cases} b \, \dfrac{\prod_{C \in \mathcal{C}} n_{j,C}}{\prod_{S \in \mathcal{S}} n_{j,S}} & \text{if } i = 1 \\[6pt] b \, \dfrac{\prod_{C \in \mathcal{C}} (n_C - n_{j,C})}{\prod_{S \in \mathcal{S}} (n_S - n_{j,S})} & \text{if } i = 0 \end{cases}   (16)

where b is the appropriate normalizing constant; the customer k then tries Poisson(α ∏_{{S∈S | k∈S}} n_S / ∏_{{C∈C | k∈C}} n_C) new dishes. We can easily generalize this construction to a directed version D of G using arguments similar to those presented in Section 2; see Section 4 for an application to time-varying matrix factorization.
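A single entry of Ψ can be resampled directly from (16). The following is a small sketch under our own argument conventions (not the authors' code): for each clique and separator containing object k we pass the number of other objects possessing feature j, together with the matching totals, and return the sampled value.

```python
import math
import random

def resample_feature(n_jC, n_C, n_jS, n_S, rng):
    """Resample psi_{k,j} from (16); arguments are lists over the
    cliques / separators containing object k (a sketch)."""
    p1 = math.prod(n_jC) / math.prod(n_jS)                  # i = 1
    p0 = (math.prod(c - f for c, f in zip(n_C, n_jC))
          / math.prod(s - f for s, f in zip(n_S, n_jS)))    # i = 0
    return int(rng.random() < p1 / (p0 + p1))

rng = random.Random(0)
print(resample_feature(n_jC=[3, 2], n_C=[8, 6], n_jS=[2], n_S=[4], rng=rng))
```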
4 Applications
4.1 Sharing clusters among relative groups: An alternative to HDP
Consider that we are given d groups with n_j data points y_{i,j} in each group, i = 1, . . . , n_j, j = 1, . . . , d. We consider latent cluster variables z_{i,j} that define the partition of the data. We will alternatively use the notation θ_{i,j} = U_{z_{i,j}} in the following. The Hierarchical Dirichlet Process [12] (HDP) is a very popular model for sharing clusters among related groups. It is based on a hierarchy of DPs:

G_0 ∼ DP(γ, H),  G_j | G_0 ∼ DP(α, G_0),  j = 1, . . . , d,
θ_{i,j} | G_j ∼ G_j,  y_{i,j} | θ_{i,j} ∼ f(θ_{i,j}),  i = 1, . . . , n_j.

Under conjugacy assumptions, G_0, G_j and U can be integrated out and we can approximate the marginal posterior of (z_{i,j}) given y = (y_{i,j}) with Gibbs sampling, using the Chinese restaurant franchise to sample from the full conditional p(z_{i,j} | z_{−{i,j}}, y).
Using the graph formulation defined in Section 2, we propose an alternative to HDP. Let θ_{0,1}, . . . , θ_{0,N} be N auxiliary variables belonging to what we call group 0. We define each clique C_j (j = 1, . . . , d) to be composed of elements from group j and elements from group 0. This defines a decomposable graphical model whose separator is given by the elements of group 0. We can rewrite the model in a way quite similar to HDP:

G_0 ∼ DP(α, H),  θ_{0,i} | G_0 ∼ G_0,  i = 1, . . . , N,
G_j | θ_{0,1}, . . . , θ_{0,N} ∼ DP(α + N, (α H + Σ_{i=1}^{N} δ_{θ_{0,i}}) / (α + N)),
θ_{i,j} | G_j ∼ G_j,  y_{i,j} | θ_{i,j} ∼ f(θ_{i,j}),  i = 1, . . . , n_j,  j = 1, . . . , d.

For any subset A and j ≠ k ∈ {1, . . . , p} we have corr(G_j(A), G_k(A)) = N / (α + N). Again, under conjugacy conditions, we can integrate out G_0, G_j and U and approximate the marginal posterior distribution over the partition using the Chinese wedding party process defined in Section 2. Note that for the latent variables z_{i,j}, j = 1, . . . , d, associated to data, this is the usual CRP update. As in HDP, multiple layers can be added to the model. Figures 2(a) and (b) give the graphical DP alternative to HDP and to the 2-layer HDP, respectively.
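To build intuition for how the auxiliary atoms induce sharing, the sketch below simulates one group's cluster structure conditional on given atoms θ_{0,1:N}: G_j is a DP with concentration α + N whose base mixes H (probability α/(α+N)) with the N shared atoms (probability 1/(α+N) each). This is our own simplified illustration; in the full model, θ_{0,1:N} are themselves drawn from G_0 and integrated out. Clusters whose atom lies in θ_{0,1:N} can be shared across groups, which is why at most N clusters are shared.

```python
import random

def sample_group_clusters(n, alpha, theta0, rng):
    """CRP draw for one group under G_j ~ DP(alpha + N, base), the base
    mixing H (weight alpha) with the N shared atoms theta0 (weight 1
    each).  Fresh draws from H get new negative ids.  A sketch only."""
    N, conc = len(theta0), alpha + len(theta0)
    atoms, sizes, fresh = [], [], -1
    for _ in range(n):
        j = rng.choices(range(len(sizes) + 1), weights=sizes + [conc])[0]
        if j == len(sizes):                      # new cluster: draw from base
            if rng.random() < alpha / conc:
                atom, fresh = fresh, fresh - 1   # fresh atom from H
            else:
                atom = theta0[rng.randrange(N)]  # one of the shared atoms
            atoms.append(atom)
            sizes.append(1)
        else:
            sizes[j] += 1
    return atoms

rng = random.Random(0)
theta0 = list(range(5))                          # N = 5 shared atoms
g1, g2 = (sample_group_clusters(50, 1.0, theta0, rng) for _ in range(2))
print(set(g1) & set(g2) & set(theta0))           # clusters shared via theta0
```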
Figure 2: Hierarchical graphs of dependency with (a) one layer (graphical DP alternative to HDP) and (b) two layers of hierarchy (graphical DP alternative to 2-layer HDP).
If N = 0, then G_j ∼ DP(α, H) independently for all j; this is equivalent to letting the top-level concentration parameter of the HDP go to infinity. If N → ∞, then G_j = G_0 for all j with G_0 ∼ DP(α, H); this is equivalent to letting the second-level concentration parameter of the HDP go to infinity. One interesting feature of the model is that, contrary to HDP, the marginal distribution of G_j at any layer of the tree is DP(α, H). As a consequence, the total number of clusters scales logarithmically (as in the usual DP) with the size of each group, whereas it scales doubly logarithmically in HDP. Contrary to HDP, there are at most N clusters shared between different groups. Our model is in that sense reminiscent of [9], where only a limited number of clusters can be shared. Note however that contrary to [9] we have a simple CRP-like process. The proposed methodology can be straightforwardly extended to the infinite HMM [12].
The main issue of the proposed model is the setting of the number N of auxiliary parameters. Another issue is that to achieve high correlation, we need a large number of auxiliary variables. Nonetheless, the computational time used to sample the auxiliary variables is negligible compared to the time used for the latent variables associated to data. Moreover, it can be easily parallelized. The proposed model offers a far richer framework and ensures that at each level of the tree, the marginal distribution of the partition is given by a DP partition model.
4.2 Time-varying matrix factorization
Let X_{1:n} be an observed matrix of dimension n × D. We want to find a representation of this matrix in terms of two latent matrices: Z_{1:n} of dimension n × K and Y of dimension K × D. Here Z_{1:n} is a binary matrix whereas Y is a matrix of latent features. By assuming that Y ∼ N(0, σ_Y² I_{K×D}) and

X_{1:n} = Z_{1:n} Y + σ_X ε_n,  where ε_n ∼ N(0, I_{n×D}),

we obtain

p(X_{1:n} \mid Z_{1:n}) \propto \frac{\left| Z^{+T}_{1:n} Z^{+}_{1:n} + \frac{\sigma_X^2}{\sigma_Y^2} I_{K_n^+} \right|^{-D/2}}{\sigma_X^{(n - K_n^+) D}\, \sigma_Y^{K_n^+ D}} \exp\left( -\frac{1}{2 \sigma_X^2} \mathrm{tr}\left( X_{1:n}^{T} \Sigma_n^{-1} X_{1:n} \right) \right)   (17)

where Σ_n^{−1} = I_n − Z^+_{1:n} (Z^{+T}_{1:n} Z^+_{1:n} + (σ_X²/σ_Y²) I_{K_n^+})^{−1} Z^{+T}_{1:n}, K_n^+ is the number of non-zero columns of Z_{1:n} and Z^+_{1:n} is the first K_n^+ columns of Z_{1:n}. To avoid having to set K, [5, 14] assume that Z_{1:n} follows an IBP. The resulting posterior distribution p(Z_{1:n} | X_{1:n}) can be estimated through MCMC [5] or SMC [14].
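The collapsed likelihood (17) transcribes directly into code. The numpy sketch below (our own direct transcription, up to an additive constant; not an optimized or official implementation) computes log p(X | Z):

```python
import numpy as np

def log_collapsed_likelihood(X, Z, sigma_x, sigma_y):
    """log p(X | Z) from (17), up to an additive constant.
    X is n x D real, Z is n x K binary."""
    n, D = X.shape
    Zp = Z[:, Z.sum(axis=0) > 0]              # non-zero columns Z+
    Kp = Zp.shape[1]
    M = Zp.T @ Zp + (sigma_x / sigma_y) ** 2 * np.eye(Kp)
    Sigma_inv = np.eye(n) - Zp @ np.linalg.inv(M) @ Zp.T
    return (-0.5 * D * np.linalg.slogdet(M)[1]
            - (n - Kp) * D * np.log(sigma_x)
            - Kp * D * np.log(sigma_y)
            - 0.5 / sigma_x ** 2 * np.trace(X.T @ Sigma_inv @ X))
```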
We consider here a different model where the object X_t is assumed to arrive at time index t, and we want a prior distribution on Z_{1:n} ensuring that objects close in time are more likely to possess similar features. To achieve this, we consider the simple directed graphical model D of Fig. 3, where the site numbering corresponds to a time index and a perfect numbering of D is (1, 2, . . .). The set of parents pa(t) is composed of the r preceding sites {t − r, . . . , t − 1}. The time-varying IBP to sample from p(Z_{1:n}) associated to this directed graph follows from (16) and proceeds as follows.
At time t = 1
- Sample K_1^new ∼ Poisson(α), set z_{1,i} = 1 for i = 1, . . . , K_1^new and set K_1^+ = K_1^new.
At times t = 2, . . . , r
- For k = 1, . . . , K_t^+, sample z_{t,k} ∼ Ber(n_{1:t−1,k} / t) and K_t^new ∼ Poisson(α / t).
7
? ??
??
??
??
??
?
- t?r - t?r+1 - . . . - t?1 - t - t+1
??
??
??
??
??
?
?
6
6
Figure 3: Directed graph.
At times t = r + 1, . . . , n
- For k = 1, . . . , K_t^+, sample z_{t,k} ∼ Ber(n_{t−r:t−1,k} / (r+1)) and K_t^new ∼ Poisson(α / (r+1)).
Here K_t^+ is the total number of features appearing from time max(1, t − r) to t − 1, and n_{t−r:t−1,k} is the restriction of n_{1:t−1} to the r last customers. Using (17) and the prior distribution of Z_{1:n}, which can be sampled using the time-varying IBP described above, we can easily design an SMC method to sample from p(Z_{1:n} | X_{1:n}). We do not detail it here. Note that contrary to [14], our algorithm does not require inverting a matrix whose dimension grows linearly with the size of the data, but only a matrix of dimension r × r. In order to illustrate the model and the SMC algorithm, we create 200 6 × 6 images using a ground truth Y consisting of 4 different 6 × 6 latent images. The 200 × 4 binary matrix was generated from Pr(z_{t,k} = 1) = π_{t,k}, where π_t = (.6 .5 0 0) if t = 1, . . . , 30, π_t = (.4 .8 .4 0) if t = 31, . . . , 50 and π_t = (0 .3 .6 .6) if t = 51, . . . , 200. The order of the model is set to r = 50. The feature occurrences Z_{1:n} and the true features Y and their estimates are represented in Figure 4. Two spurious features are detected by the model (features 2 and 5 on Fig. 4(c)) but quickly discarded (Fig. 4(d)). The algorithm is able to correctly estimate the varying prior occurrences of the features over time.
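Sampling from this time-varying IBP prior is straightforward. The numpy sketch below is our own implementation of the three-phase recipe above (not the authors' code); note that features outside the length-r window have zero counts, so resampling every column with Ber(count/denominator) is equivalent to restricting to K_t^+:

```python
import numpy as np

def sample_tv_ibp(n, alpha, r, seed=0):
    """Sample Z_{1:n} from the time-varying IBP with window size r."""
    rng = np.random.default_rng(seed)
    Z = np.zeros((n, 0), dtype=int)
    for t in range(1, n + 1):
        denom = min(t, r + 1)
        window = Z[max(0, t - 1 - r):t - 1]       # customers t-r .. t-1
        counts = window.sum(axis=0)
        row = (rng.random(Z.shape[1]) < counts / denom).astype(int)
        k_new = rng.poisson(alpha / denom)
        Z = np.hstack([Z, np.zeros((n, k_new), dtype=int)])
        Z[t - 1, :len(row)] = row
        Z[t - 1, len(row):] = 1                   # new features for customer t
    return Z

print(sample_tv_ibp(10, alpha=1.5, r=3))
```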
Figure 4: (a) True features, (b) true feature occurrences over time, (c) MAP estimate Z_MAP (features 1-6) and (d) associated E[Y | Z_MAP].
Figure 5: (a) E[X_t | π_t, Y] and (b) E[X_t | X_{1:t−1}] at t = 20, 50, 100, 200.
5 Related work and Discussion
The fixed-lag version of the time-varying DP of Caron et al. [1] is a special case of the proposed
model when G is given by Fig. 3. The bivariate DP of Walker and Muliere [13] is also a special
case when G has only two cliques. In this paper, we have assumed that the structure of the graph
was known beforehand and we have shown that many flexible models arise from this framework. It
would be interesting in the future to investigate the case where the graphical structure is unknown
and must be estimated from the data.
Acknowledgment
The authors thank the reviewers for their comments that helped to improve the writing of the paper.
References
[1] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Uncertainty in Artificial Intelligence, 2007.
[2] A.P. Dawid and S.L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21:1272-1317, 1993.
[3] M.D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577-588, 1995.
[4] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and Computing, 14:11-21, 2004.
[5] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, 2006.
[6] D. Heinz. Building hyper Dirichlet processes for graphical models. Electronic Journal of Statistics, 3:290-315, 2009.
[7] J.F.C. Kingman. Random partitions in population genetics. Proceedings of the Royal Society of London, 361:1-20, 1978.
[8] S.L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[9] P. Müller, F. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society B, 66:735-749, 2004.
[10] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249-265, 2000.
[11] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145-158, 1995.
[12] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566-1581, 2006.
[13] S. Walker and P. Muliere. A bivariate Dirichlet process. Statistics and Probability Letters, 64:1-7, 2003.
[14] F. Wood and T.L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems, 2007.
Rethinking LDA: Why Priors Matter
Hanna M. Wallach David Mimno Andrew McCallum
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
{wallach,mimno,mccallum}@cs.umass.edu
Abstract
Implementations of topic models typically use symmetric Dirichlet priors with fixed concentration parameters, with the implicit assumption that such 'smoothing parameters' have little practical effect. In this paper, we explore several classes of structured priors for topic models. We find that an asymmetric Dirichlet prior over the document-topic distributions has substantial advantages over a symmetric prior, while an asymmetric prior over the topic-word distributions provides no real benefit. Approximation of this prior structure through simple, efficient hyperparameter optimization steps is sufficient to achieve these performance gains. The prior structure we advocate substantially increases the robustness of topic models to variations in the number of topics and to the highly skewed word frequency distributions common in natural language. Since this prior structure can be implemented using efficient algorithms that add negligible cost beyond standard inference techniques, we recommend it as a new standard for topic modeling.
1 Introduction
Topic models such as latent Dirichlet allocation (LDA) [3] have been recognized as useful tools for analyzing large, unstructured collections of documents. There is a significant body of work applying LDA to a wide variety of tasks, including analysis of news articles [14], study of the history of scientific ideas [2, 9], topic-based search interfaces (http://rexa.info/) and navigation tools for digital libraries [12].
In practice, users of topic models are typically faced with two immediate problems: First, extremely common words tend to dominate all topics. Second, there is relatively little guidance available on how to set T, the number of topics, or studies regarding the effects of using a suboptimal setting for T. Standard practice is to remove 'stop words' before modeling, using a manually constructed, corpus-specific stop word list, and to optimize T by either analyzing probabilities of held-out documents or resorting to a more complicated nonparametric model. Additionally, there has been relatively little work in the machine learning literature on the structure of the prior distributions used in LDA: most researchers simply use symmetric Dirichlet priors with heuristically set concentration parameters. Asuncion et al. [1] recently advocated inferring the concentration parameters of these symmetric Dirichlets from data, but to date there has been no rigorous scientific study of the priors used in LDA, from the choice of prior (symmetric versus asymmetric Dirichlets) to the treatment of hyperparameters (optimize versus integrate out), and of the effects of these modeling choices on the probability of held-out documents and, more importantly, the quality of inferred topics. In this paper, we demonstrate that practical implementation issues (handling stop words, setting the number of topics) and theoretical issues involving the structure of Dirichlet priors are intimately related.
We start by exploring the effects of classes of hierarchically structured Dirichlet priors over the document-topic distributions and topic-word distributions in LDA. Using MCMC simulations, we find that using an asymmetric, hierarchical Dirichlet prior over the document-topic distributions and
a symmetric Dirichlet prior over the topic-word distributions results in significantly better model performance, measured both in terms of the probability of held-out documents and in the quality of inferred topics. Although this hierarchical Bayesian treatment of LDA produces good results, it is computationally intensive. We therefore demonstrate that optimizing the hyperparameters of asymmetric, nonhierarchical Dirichlets as part of an iterative inference algorithm results in similar performance to the full Bayesian model while adding negligible computational cost beyond standard inference techniques. Finally, we show that using optimized Dirichlet hyperparameters results in dramatically improved consistency in topic usage as T is increased. By decreasing the sensitivity of the model to the number of topics, hyperparameter optimization results in robust, data-driven models with substantially less model complexity and computational cost than nonparametric models. Since the priors we advocate (an asymmetric Dirichlet over the document-topic distributions and a symmetric Dirichlet over the topic-word distributions) have significant modeling benefits and can be implemented using highly efficient algorithms, we recommend them as a new standard for LDA.
2 Latent Dirichlet Allocation
LDA is a generative topic model for documents W = {w^{(1)}, w^{(2)}, . . . , w^{(D)}}. A 'topic' t is a discrete distribution over words with probability vector φ_t. A Dirichlet prior is placed over Φ = {φ_1, . . . , φ_T}. In almost all previous work on LDA, this prior is assumed to be symmetric (i.e., the base measure is fixed to a uniform distribution u over words) with concentration parameter β:

P(\Phi) = \prod_t \mathrm{Dir}(\phi_t; \beta u) = \prod_t \frac{\Gamma(\beta)}{\prod_w \Gamma(\frac{\beta}{W})} \prod_w \phi_{w|t}^{\frac{\beta}{W} - 1}.   (1)

Each document, indexed by d, has a document-specific distribution over topics θ_d. The prior over Θ = {θ_1, . . . , θ_D} is also assumed to be a symmetric Dirichlet, this time with concentration parameter α. The tokens in every document w^{(d)} = {w_n^{(d)}}_{n=1}^{N_d} are associated with corresponding topic assignments z^{(d)} = {z_n^{(d)}}_{n=1}^{N_d}, drawn i.i.d. from the document-specific distribution over topics, while the tokens are drawn i.i.d. from the topics' distributions over words Φ = {φ_1, . . . , φ_T}:

P(z^{(d)} \mid \theta_d) = \prod_n \theta_{z_n^{(d)} \mid d} \quad \text{and} \quad P(w^{(d)} \mid z^{(d)}, \Phi) = \prod_n \phi_{w_n^{(d)} \mid z_n^{(d)}}.   (2)

Dirichlet-multinomial conjugacy allows Θ and Φ to be marginalized out.
For real-world data, documents W are observed, while the corresponding topic assignments Z are unobserved. Variational methods [3, 16] and MCMC methods [7] are both effective at inferring the latent topic assignments Z. Asuncion et al. [1] demonstrated that the choice of inference method has negligible effect on the probability of held-out documents or inferred topics. We use MCMC methods throughout this paper, specifically Gibbs sampling [5], since the internal structure of hierarchical Dirichlet priors is typically inferred using a Gibbs sampling algorithm, which can be easily interleaved with Gibbs updates for Z given W. The latter is accomplished by sequentially resampling each topic assignment z_n^{(d)} from its conditional posterior given W, αu, βu and Z_{\d,n} (the current topic assignments for all tokens other than the token at position n in document d):

P(z_n^{(d)} = t \mid \mathcal{W}, \mathcal{Z}_{\backslash d,n}, \alpha u, \beta u) \propto P(w_n^{(d)} \mid z_n^{(d)} = t, \mathcal{W}_{\backslash d,n}, \mathcal{Z}_{\backslash d,n}, \beta u) \, P(z_n^{(d)} = t \mid \mathcal{Z}_{\backslash d,n}, \alpha u) = \frac{N_{w_n^{(d)} \mid t}^{\backslash d,n} + \frac{\beta}{W}}{N_t^{\backslash d,n} + \beta} \cdot \frac{N_{t \mid d}^{\backslash d,n} + \frac{\alpha}{T}}{N_d - 1 + \alpha},   (3)

where the sub- or superscript '\d,n' denotes a quantity excluding data from position n in document d.
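Equation (3) gives the full recipe for a collapsed Gibbs sampler. The sketch below is a standard numpy implementation of one sweep for the symmetric-prior model (our own illustration, not the authors' code); it assumes the count arrays have been initialized consistently with z:

```python
import numpy as np

def gibbs_sweep(docs, z, n_tw, n_t, n_dt, alpha, beta, T, W, rng):
    """One sweep of collapsed Gibbs sampling for LDA, implementing (3).
    docs[d]: word ids; z[d]: topic assignments; n_tw (T x W), n_t (T),
    n_dt (D x T): count arrays kept in sync with z."""
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            t = z[d][i]                              # remove current token
            n_tw[t, w] -= 1; n_t[t] -= 1; n_dt[d, t] -= 1
            p = ((n_tw[:, w] + beta / W) / (n_t + beta)
                 * (n_dt[d] + alpha / T))            # unnormalized (3)
            t = rng.choice(T, p=p / p.sum())
            z[d][i] = t                              # add token back
            n_tw[t, w] += 1; n_t[t] += 1; n_dt[d, t] += 1
```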
3 Priors for LDA
The previous section outlined LDA as it is most commonly used, namely with symmetric Dirichlet priors over Θ and Φ with fixed concentration parameters α and β, respectively. The simplest way to vary this choice of prior for either Θ or Φ is to infer the relevant concentration parameter from data, either by computing a MAP estimate [1] or by using an MCMC algorithm such as slice sampling [13]. A broad Gamma distribution is an appropriate choice of prior for both α and β.
Figure 1: (a)-(e): LDA with (a) symmetric Dirichlet priors over Θ and Φ, (b) a symmetric Dirichlet prior over Θ and an asymmetric Dirichlet prior over Φ, (d) an asymmetric Dirichlet prior over Θ and a symmetric Dirichlet prior over Φ, (e) asymmetric Dirichlet priors over Θ and Φ. (c) Generating {z_n^{(d)}}_{n=1}^{4} = (t, t', t, t) from the asymmetric predictive distribution for document d; (f) generating {z_n^{(d)}}_{n=1}^{4} = (t, t', t, t) and {z_n^{(d')}}_{n=1}^{4} = (t', t', t', t') from the asymmetric, hierarchical predictive distributions for documents d and d', respectively.
Alternatively, the uniform base measures in the Dirichlet priors over Θ and Φ can be replaced with nonuniform base measures m and n, respectively. Throughout this section we use the prior over Θ as a running example; however, the same construction and arguments also apply to the prior over Φ. In section 3.1, we describe the effects on the document-specific conditional posterior distributions, or predictive distributions, of replacing u with a fixed asymmetric (i.e., nonuniform) base measure m. In section 3.2, we then treat m as unknown and take a fully Bayesian approach, giving m a Dirichlet prior (with a uniform base measure and concentration parameter α') and integrating it out.
3.1 Asymmetric Dirichlet Priors
If θ_d is given an asymmetric Dirichlet prior with concentration parameter α and a known (nonuniform) base measure m, the predictive probability of topic t occurring in document d given Z is

P(z_{N_d+1}^{(d)} = t \mid \mathcal{Z}, \alpha m) = \int d\theta_d \, P(t \mid \theta_d) \, P(\theta_d \mid \mathcal{Z}, \alpha m) = \frac{N_{t|d} + \alpha m_t}{N_d + \alpha}.   (4)

If topic t does not occur in z^{(d)}, then N_{t|d} will be zero, and the probability of generating z_{N_d+1}^{(d)} = t will be m_t. In other words, under an asymmetric prior, N_{t|d} is smoothed with a topic-specific quantity αm_t. Consequently, different topics can be a priori more or less probable in all documents.
One way of describing the process of generating from (4) is to say that generating a topic assignment z_n^{(d)} is equivalent to setting the value of z_n^{(d)} to the value of some document-specific draw from m. While this interpretation provides little benefit in the case of fixed m, it is useful for describing the effects of marginalizing over m on the predictive distributions (see section 3.2). Figure 1c depicts the process of drawing {z_n^{(d)}}_{n=1}^{4} using this interpretation. When drawing z_1^{(d)}, there are no existing document-specific draws from m, so a new draw σ_1 must be generated, and z_1^{(d)} is assigned the value of this draw (t in figure 1c). Next, z_2^{(d)} is drawn by either selecting σ_1, with probability proportional to the number of topic assignments that have been previously 'matched' to σ_1, or a new draw from m, with probability proportional to α. In figure 1c, a new draw is selected, so σ_2 is drawn from m and z_2^{(d)} assigned its value, in this case t'. The next topic assignment is drawn in the same way: existing draws σ_1 and σ_2 are selected with probabilities proportional to the numbers of topic assignments to which they have previously been matched, while with probability proportional to α, z_3^{(d)} is matched to a new draw from m. In figure 1c, σ_1 is selected and z_3^{(d)} is assigned the value of σ_1. In general, the probability of a new topic assignment being assigned the value of an existing document-specific draw σ_i from m is proportional to N_d^{(i)}, the number of topic assignments previously matched to σ_i. The predictive probability of topic t in document d is therefore

P(z_{N_d+1}^{(d)} = t \mid \mathcal{Z}, \alpha m) = \frac{\sum_{i=1}^{I} N_d^{(i)} \, \delta(\sigma_i - t) + \alpha m_t}{N_d + \alpha},   (5)

where I is the current number of draws from m for document d. Since every topic assignment is matched to a draw from m, Σ_{i=1}^{I} N_d^{(i)} δ(σ_i − t) = N_{t|d}. Consequently, (4) and (5) are equivalent.
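This 'matching' view of (5) is itself a small generative program. The sketch below (ours, not the authors') draws topic assignments for one document exactly as in figure 1c: match to an existing internal draw σ_i with weight N_d^{(i)}, or to a fresh draw from m with weight α:

```python
import random

def draw_assignments(n, alpha, m, seed=0):
    """Generate n topic assignments for one document via the
    internal-draw interpretation of (5); m is a probability vector."""
    rng = random.Random(seed)
    sigma, counts, z = [], [], []    # internal draws and their match counts
    for _ in range(n):
        i = rng.choices(range(len(sigma) + 1), weights=counts + [alpha])[0]
        if i == len(sigma):          # fresh document-specific draw from m
            sigma.append(rng.choices(range(len(m)), weights=m)[0])
            counts.append(1)
        else:
            counts[i] += 1           # match to existing draw sigma_i
        z.append(sigma[i])
    return z

print(draw_assignments(8, alpha=1.0, m=[0.5, 0.3, 0.2]))
```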
3.2 Integrating out m
In practice, the base measure m is not fixed a priori and must therefore be treated as an unknown quantity. We take a fully Bayesian approach and give m a symmetric Dirichlet prior with concentration parameter α' (as shown in figures 1d and 1e). This prior over m induces a hierarchical Dirichlet prior over Θ. Furthermore, Dirichlet-multinomial conjugacy then allows m to be integrated out.
Giving m a symmetric Dirichlet prior and integrating it out has the effect of replacing m in (5) with a 'global' Pólya conditional distribution, shared by the document-specific predictive distributions. Figure 1f depicts the process of drawing eight topic assignments: four for document d and four for document d'. As before, when a topic assignment is drawn from the predictive distribution for document d, it is assigned the value of an existing (document-specific) internal draw σ_i with probability proportional to the number of topic assignments previously matched to that draw, and to the value of a new draw σ_{i'} with probability proportional to α. However, since m has been integrated out, the new draw must be obtained from the 'global' distribution. At this level, σ_{i'} is treated as if it were a topic assignment, and is assigned the value of an existing global draw τ_j with probability proportional to the number of document-level draws previously matched to τ_j, and to a new global draw, from u, with probability proportional to α'. Since the internal draws at the document level are treated as topic assignments at the global level, there is a path from every topic assignment to u via the internal draws.
The predictive probability of topic t in document d given Z is now

P(z_{N_d+1}^{(d)} = t \mid \mathcal{Z}, \alpha, \alpha' u) = \int dm \, P(z_{N_d+1}^{(d)} = t \mid \mathcal{Z}, \alpha m) \, P(m \mid \mathcal{Z}, \alpha' u) = \frac{N_{t|d} + \alpha \, \dfrac{\hat{N}_t + \frac{\alpha'}{T}}{\sum_{t'} \hat{N}_{t'} + \alpha'}}{N_d + \alpha},   (6)

where I and J are the current numbers of document-level and global internal draws, respectively, N_{t|d} = Σ_{i=1}^{I} N_d^{(i)} δ(σ_i − t) as before, and N̂_t = Σ_{j=1}^{J} N^{(j)} δ(τ_j − t). The quantity N^{(j)} is the total number of document-level internal draws matched to global internal draw τ_j. Since some topic assignments will be matched to existing document-level draws, Σ_d δ(N_{t|d} > 0) ≤ N̂_t ≤ N_t, where Σ_d δ(N_{t|d} > 0) is the number of unique documents in Z in which topic t occurs.
An important property of (6) is that if the concentration parameter α' is large relative to Σ_t N̂_t, then N̂_t and Σ_t N̂_t are effectively ignored. In other words, as α' → ∞ the hierarchical, asymmetric Dirichlet prior approaches a symmetric Dirichlet prior with concentration parameter α.
For any given Z for real-world documents W, the internal draws and the paths from Z to u are unknown. Only the value of each topic assignment is known, and hence N_{t|d} for each topic t and document d. In order to compute the conditional posterior distribution for each topic assignment (needed to resample Z), it is necessary to infer N̂_t for each topic t. These values can be inferred by Gibbs sampling the paths from Z to u [4, 15]. Resampling the paths from Z to u can be interleaved with resampling Z itself. Removing z_n^{(d)} = t from the model prior to resampling its value consists of decrementing N_{t|d} and removing its current path to u. Similarly, adding a newly sampled value z_n^{(d)} = t' into the model consists of incrementing N_{t'|d} and sampling a new path from z_n^{(d)} to u.
4 Comparing Priors for LDA
Figure 2: (a) log P(W, Z | Ξ) (patent abstracts) for SS, SA, AS and AA, computed every 20 iterations and averaged over 5 Gibbs sampling runs. AS (red) and AA (black) perform similarly and converge to higher values of log P(W, Z | Ξ) than SS (blue) and SA (green). (b) Histograms of 4000 (iterations 1000-5000) concentration parameter values for AA (patent abstracts). Note the log scale for β': the prior over Φ approaches a symmetric Dirichlet, making AA equivalent to AS. (c) log P(W, Z | Ξ) for all three data sets at T = 50. AS is consistently better than SS. SA is poor (not shown). AA is capable of matching AS, but does not always.
Data set            D      N̄_d      N        W      Stop
Patent abstracts    1016   101.87   103499   6068   yes
20 Newsgroups       540    148.17   80012    14492  no
NYT articles        1768   270.06   477465   41961  no

Table 1: Data set statistics. D is the number of documents, N̄_d is the mean document length, N is the number of tokens, W is the vocabulary size. 'Stop' indicates whether stop words were present (yes) or not (no).
To investigate the effects of the priors over Θ and Φ, we compared the four combinations of symmetric and asymmetric Dirichlets shown in figure 1: symmetric priors over both Θ and Φ (denoted SS), a symmetric prior over Θ and an asymmetric prior over Φ (denoted SA), an asymmetric prior over Θ and a symmetric prior over Φ (denoted AS), and asymmetric priors over both Θ and Φ (denoted AA). Each combination was used to model three collections of documents: patent abstracts about carbon nanotechnology, New York Times articles, and 20 Newsgroups postings. Due to the computationally intensive nature of the fully Bayesian inference procedure, only a subset of each collection was used (see table 1). In order to stress each combination of priors with respect to skewed distributions over word frequencies, stop words were not removed from the patent abstracts.
The four models (SS, SA, AS, AA) were implemented in Java, with integrated-out base measures, where appropriate. Each model was run with T ∈ {25, 50, 75, 100} for five runs of 5000 Gibbs sampling iterations, using different random initializations. The concentration parameters for each model (denoted collectively by Ξ) were given broad Gamma priors and inferred using slice sampling [13]. During inference, log P(W, Z | Ξ) was recorded every twenty iterations. These values, averaged over the five runs for T = 50, are shown in figure 2a. (Results for other values of T are similar.) There are two distinct patterns: models with an asymmetric prior over Θ (AS and AA; red and black, respectively) perform very similarly, while models with a symmetric prior over Θ (SS and SA; blue and green, respectively) also perform similarly, with significantly worse performance than AS and AA. Results for all three data sets are summarized in figure 2c, with the log probability divided by the number of tokens in the collection. SA performs extremely poorly on NYT and 20 Newsgroups, and is therefore not shown. AS consistently achieves better likelihood than SS. The fully asymmetric model, AA, is inconsistent, matching AS on the patents and 20 Newsgroups but doing poorly on NYT. This is most likely due to the fact that although AA can match AS, it has many more degrees of freedom and therefore a much larger space of possibilities to explore.
We also calculated the probability of held-out documents using the 'left-to-right' evaluation method described by Wallach et al. [17]. These results are shown in figure 3a, and exhibit a similar pattern to the results in figure 2a: the best-performing models are those with an asymmetric prior over Θ.
We can gain intuition about the similarity between AS and AA by examining the values of the sampled concentration parameters. As explained in section 3.2, as α' or β' grows large relative to Σ_t N̂_t or Σ_w N̂_w, an asymmetric Dirichlet prior approaches a symmetric Dirichlet with concentration parameter α or β. Histograms of 4000 concentration parameter values (from iterations 1000-4000) from the five Gibbs runs of AA with T = 50 are shown in figure 2b.
Figure 3: (a) Log probability of held-out documents (patent abstracts). These results mirror those in figure 2a. AS (red) and AA (black) again perform similarly, while SS (blue) and SA (green) are also similar, but exhibit much worse performance. (b) αm_t values and the most probable words for topics obtained with T = 50. For each model, topics were ranked according to usage and the topics at ranks 1, 5, 10, 20 and 30 are shown. AS and AA are robust to skewed word frequency distributions and tend to sequester stop words in their own topics.
The values for α, α' and β are all relatively small, while the values for β' are extremely large, with a median around exp(30). In other words, given the values of β', the prior over Φ is effectively a symmetric prior over Φ with concentration parameter β. These results demonstrate that even when the model can use an asymmetric prior over Φ, a symmetric prior gives better performance. We therefore advocate using model AS.
It is worth noting the robustness of AS to stop words. Unlike SS and SA, AS effectively sequesters stop words in a small number of more frequently used topics. The remaining topics are relatively unaffected by stop words. Creating corpus-specific stop word lists is seen as an unpleasant but necessary chore in topic modeling. Also, for many specialized corpora, once standard stop words have been removed, there are still other words that occur with very high probability, such as 'model', 'data' and 'results' in machine learning literature, but are not technically stop words. If LDA cannot handle such words in an appropriate fashion, then they must be treated as stop words and removed, despite the fact that they play meaningful semantic roles. The robustness of AS to stop words has implications for HMM-LDA [8], which models stop words using a hidden Markov model and 'content' words using LDA, at considerable computational cost. AS achieves the same robustness to stop words much more efficiently. Although there is empirical evidence that topic models that use asymmetric Dirichlet priors with optimized hyperparameters, such as Pachinko allocation [10] and Wallach's topic-based language model [18], are robust to the presence of extremely common words, these studies did not establish whether the robustness was a function of a more complicated model structure or if careful consideration of hyperparameters alone was sufficient. We demonstrate that AS is capable of learning meaningful topics even with no stop word removal. For efficiency, we do not necessarily advocate doing away with stop word lists entirely, but we argue that using an asymmetric prior over Θ allows practitioners to use a standard, conservative list of determiners, prepositions and conjunctions that is applicable to any document collection in a given language, rather than hand-curated corpus-specific lists that risk removing common but meaningful terms.
5 Efficiency: Optimizing rather than Integrating Out
Inference in the full Bayesian formulation of AS is expensive because of the additional complexity of sampling the paths from Z to u and maintaining hierarchical data structures. It is possible to retain the theoretical and practical advantages of using AS without sacrificing the advantages of simple, efficient models by directly optimizing m, rather than integrating it out. The concentration parameters α and β may also be optimized (along with m for Θ, and by itself for Φ). In this section, we therefore compare the fully Bayesian version of AS with optimized AS, using SS as a baseline.
Wallach [19] compared several methods for jointly estimating the maximum likelihood concentration parameter and asymmetric base measure of a Dirichlet-multinomial model. We use the most efficient of these methods. The advantage of optimizing m is considerable: although it is likely that further optimizations would reduce the difference, 5000 Gibbs sampling iterations (including sampling α, α' and β) for the patent abstracts using fully Bayesian AS with T = 25 took over four hours, while 5000 Gibbs sampling iterations (including hyperparameter optimization) took under 30 minutes.
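One standard estimator of this kind is Minka's fixed-point update for the Dirichlet-multinomial, which is among the family of methods compared in [19]; we do not claim it is the specific variant used in these experiments. A minimal SciPy sketch for optimizing the document-topic hyperparameters αm from the count matrix:

```python
import numpy as np
from scipy.special import digamma

def fixed_point_update(alpha_m, n_dt, iters=100):
    """Fixed-point updates for the parameters alpha * m of an asymmetric
    Dirichlet, from document-topic counts n_dt (D x T); a sketch."""
    n_d = n_dt.sum(axis=1)
    for _ in range(iters):
        s = alpha_m.sum()
        num = (digamma(n_dt + alpha_m) - digamma(alpha_m)).sum(axis=0)
        den = (digamma(n_d + s) - digamma(s)).sum()
        alpha_m = alpha_m * num / den
    return alpha_m

n_dt = np.array([[5, 1, 0], [3, 2, 1], [4, 0, 2]], dtype=float)
print(fixed_point_update(np.ones(3), n_dt))
```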
        Patents        NYT            20 NG
ASO     -6.65 ± 0.04   -9.24 ± 0.01   -8.27 ± 0.01
AS      -6.62 ± 0.03   -9.23 ± 0.01   -8.28 ± 0.01
SS      -6.91 ± 0.01   -9.26 ± 0.01   -8.31 ± 0.01

        T = 25   T = 50   T = 75   T = 100
ASO     -6.18    -6.12    -6.12    -6.08
AS      -6.15    -6.13    -6.11    -6.10
SS      -6.18    -6.18    -6.16    -6.13

Table 2: log P(W, Z | Ξ)/N for T = 50 (left) and log P(W^test | W, Z, Ξ)/N^test for varying values of T (right) for the patent abstracts. AS and ASO (optimized hyperparameters) consistently outperform SS except for ASO with T = 25. Differences between AS and ASO are inconsistent and within standard deviations.
Patent abstracts:
        ASO            AS             SS
ASO     4.37 ± 0.08    4.34 ± 0.09    5.43 ± 0.05
AS      -              4.18 ± 0.09    5.39 ± 0.06
SS      -              -              5.93 ± 0.03

20 Newsgroups:
        ASO            AS             SS
ASO     3.36 ± 0.03    3.43 ± 0.05    3.50 ± 0.07
AS      -              3.36 ± 0.02    3.56 ± 0.07
SS      -              -              3.49 ± 0.04

Table 3: Average VI distances between multiple runs of each model with T = 50 on (top) patent abstracts and (bottom) 20 Newsgroups. ASO partitions are approximately as similar to AS partitions as they are to other ASO partitions. ASO and AS partitions are both further from SS partitions, which tend to be more dispersed.
In order to establish that optimizing m is a good approximation to integrating it out, we computed
log P (W, Z | ?) and the log probability of held-out documents for fully Bayesian AS, optimized
AS (denoted ASO) and as a baseline SS (see table 2). AS and ASO consistently outperformed SS,
except for ASO when T = 25. Since twenty-five is a very small number of topics, this is not a cause
for concern. Differences between AS and ASO are inconsistent and within standard deviations.
From a point of view of log probabilities, ASO therefore provides a good approximation to AS.
We can also compare topic assignments. Any set of topic assignments can be thought of as a partition
of the corresponding tokens into T topics. In order to measure the similarity between two sets of
topic assignments Z and Z′ for W, we can compute the distance between these partitions using
variation of information (VI) [11, 6] (see suppl. mat. for a definition of VI for topic models). VI
has several attractive properties: it is a proper distance metric, it is invariant to permutations of
the topic labels, and it can be computed in O(N + TT′) time, i.e., time that is linear in the number
of tokens and the product of the numbers of topics in Z and Z′. For each model (AS, ASO and SS), we
calculated the average VI distance between all 10 unique pairs of topic assignments from the 5 Gibbs
runs for that model, giving a measure of within-model consistency. We also calculated the
between-model VI distance for each pair of models, averaged over all 25 unique pairs of topic
assignments for that pair. Table 3 indicates that ASO partitions are approximately as similar to AS
partitions as they are to other ASO partitions. ASO and AS partitions are both further away from SS
partitions, which tend to be more dispersed. These results confirm that ASO is indeed a good
approximation to AS.
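As a concrete reference, here is a minimal sketch of VI for two topic assignments over the same
tokens, following Meilă's definition VI = H(Z) + H(Z′) − 2I(Z; Z′) computed from the T × T′
contingency table; the code and its names are ours, not the authors' implementation.

```python
import numpy as np

def variation_of_information(z1, z2):
    """VI distance between two labelings z1, z2 of the same N tokens,
    with labels assumed to be 0-based integers."""
    z1, z2 = np.asarray(z1), np.asarray(z2)
    n = z1.size
    joint = np.zeros((z1.max() + 1, z2.max() + 1))
    np.add.at(joint, (z1, z2), 1.0)       # T x T' contingency counts
    joint /= n
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)
    h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
    h2 = -np.sum(p2[p2 > 0] * np.log(p2[p2 > 0]))
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / np.outer(p1, p2)[nz]))
    return h1 + h2 - 2.0 * mi
```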
6 Effect on Selecting the Number of Topics
Selecting the number of topics T is one of the most problematic modeling choices in finite topic
modeling. Not only is there no clear method for choosing T (other than evaluating the probability of
held-out data for various values of T ), but the degree to which LDA is robust to a poor setting of T
is not well-understood. Although nonparametric models provide an alternative, they lose the
substantial computational efficiency advantages of finite models. We explore whether the combination
of priors advocated in the previous sections (model AS) can improve the stability of LDA to different
values of T , while retaining the static memory management and simple inference algorithms of finite
models. Ideally, if LDA has sufficient topics to model W well, the assignments of tokens to topics
should be relatively invariant to an increase in T ; i.e., the additional topics should be seldom
used. For example, if ten topics are sufficient to accurately model the data, then increasing the
number of topics to twenty shouldn't significantly affect inferred topic assignments. If this is the
case, then using large T should not have a significant impact on either Z or the speed of inference,
especially as recently introduced sparse sampling methods allow models with large T to be trained
efficiently [20]. Figure 4a shows the average VI distance between topic assignments (for the patent
abstracts) inferred by models with T = 25 and models with T ∈ {50, 75, 100}.
[Figure 4 appears here. Panel (a): average VI distance ("clustering distance from T = 25", y-axis:
variation of information, roughly 4.0 to 6.5) as T grows to 50, 75 and 100 topics. Panel (b): under
"AS prior" and "SS prior", the fraction (0.0 to 1.0) of tokens from the largest T = 25 topic
absorbed by the topics of models with 50, 75 and 100 topics.]
Figure 4: (a) Topic consistency measured by average VI distance from models with T = 25. As T
increases, AS (red) and AA (black) produce Zs that stay significantly closer to those obtained with
T = 25 than SA (green) and SS (blue). (b) Assignments of tokens (patent abstracts) allocated to the
largest topic in a 25-topic model, as T increases. For AS, the topic is relatively intact, even at
T = 100: 80% of tokens assigned to the topic at T = 25 are assigned to seven topics. For SS, the
topic has been subdivided across many more topics.
AS and AA, the bottom two lines, are much more stable (smaller average VI distances) than SS and SA
at 50 topics, and remain so as T increases: even at 100 topics, AS has a smaller VI distance to a
25-topic model than SS has at 50 topics.
Figure 4b provides intuition for this difference: for AS, the tokens assigned to the largest topic at
T = 25 remain within a small number of topics as T is increased, while for SS, topic usage is more
uniform and increasing T causes the tokens to be divided among many more topics. These results
suggest that for AS, new topics effectively "nibble away" at existing topics, rather than splitting
them more uniformly. We therefore argue that the risk of using too many topics is lower than the
risk of using too few, and that practitioners should be comfortable using larger values of T .
7 Discussion
The previous sections demonstrated that AS yields the best performance over AA, SA and SS,
measured in several ways. However, it is worth examining why this combination of priors results
in superior performance. The primary assumption underlying topic modeling is that a topic should
capture semantically-related word co-occurrences. Topics must also be distinct in order to convey
information: knowing only a few co-occurring words should be sufficient to resolve semantic
ambiguities. A priori, we therefore do not expect that a particular topic's distribution over words
will be like that of any other topic. An asymmetric prior over φ is therefore a bad idea: the base
measure will reflect corpus-wide word usage statistics, and a priori, all topics will exhibit those
statistics too. A symmetric prior over φ only makes a prior statement (determined by the
concentration parameter β) about whether topics will have more sparse or more uniform distributions
over words, so the topics are free to be as distinct and specialized as is necessary. However, it is
still necessary to account for power-law word usage. A natural way of doing this is to expect that
certain groups of words will occur more frequently than others in every document in a given corpus.
For example, the words "model," "data," and "algorithm" are likely to appear in every paper published
in a machine learning conference. These assumptions lead naturally to the combination of priors that
we have empirically identified as superior: an asymmetric Dirichlet prior over θ that serves to share
commonalities across documents and a symmetric Dirichlet prior over φ that serves to avoid conflicts
between topics. Since these priors can be implemented using efficient algorithms that add negligible
cost beyond standard inference techniques, we recommend them as a new standard for LDA.
Acknowledgments
This work was supported in part by the Center for Intelligent Information Retrieval, in part by
CIA, NSA and NSF under NSF grant number IIS-0326249, and in part by subcontract number
B582467 from Lawrence Livermore National Security, LLC under prime contract number
DE-AC52-07NA27344 from DOE/NNSA. Any opinions, findings and conclusions or recommendations
expressed in this material are the authors' and do not necessarily reflect those of the sponsor.
References
[1] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic
models. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, 2009.
[2] D. Blei and J. Lafferty. A correlated topic model of Science. Annals of Applied Statistics,
1(1):17-35, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine
Learning Research, 3:993-1022, January 2003.
[4] P. J. Cowans. Probabilistic Document Modelling. PhD thesis, University of Cambridge, 2006.
[5] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration
of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[6] S. Goldwater and T. L. Griffiths. A fully Bayesian approach to unsupervised part-of-speech
tagging. In Association for Computational Linguistics, 2007.
[7] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy
of Sciences, 101(suppl. 1):5228-5235, 2004.
[8] T. L. Griffiths, M. Steyvers, D. M. Blei, and J. B. Tenenbaum. Integrating topics and syntax.
In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing
Systems 17, pages 536-544. The MIT Press, 2005.
[9] D. Hall, D. Jurafsky, and C. D. Manning. Studying the history of ideas using topic models. In
Proceedings of EMNLP 2008, pages 363-371.
[10] W. Li and A. McCallum. Mixtures of hierarchical topics with pachinko allocation. In Proceedings
of the 24th International Conference on Machine Learning, pages 633-640, 2007.
[11] M. Meilă. Comparing clusterings by the variation of information. In Conference on Learning
Theory, 2003.
[12] D. Mimno and A. McCallum. Organizing the OCA: Learning faceted subjects from a library
of digital books. In Proceedings of the 7th ACM/IEEE Joint Conference on Digital Libraries,
pages 376-385, Vancouver, BC, Canada, 2007.
[13] R. M. Neal. Slice sampling. Annals of Statistics, 31:705-767, 2003.
[14] D. Newman, C. Chemudugunta, P. Smyth, and M. Steyvers. Analyzing entities and topics in
news articles using statistical topic models. In Intelligence and Security Informatics, Lecture
Notes in Computer Science, 2006.
[15] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal
of the American Statistical Association, 101:1566-1581, 2006.
[16] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm
for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 18, 2006.
[17] H. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models.
In Proceedings of the 26th International Conference on Machine Learning, 2009.
[18] H. M. Wallach. Topic modeling: Beyond bag-of-words. In Proceedings of the 23rd International
Conference on Machine Learning, pages 977-984, Pittsburgh, Pennsylvania, 2006.
[19] H. M. Wallach. Structured Topic Models for Language. PhD thesis, University of Cambridge, 2008.
[20] L. Yao, D. Mimno, and A. McCallum. Efficient methods for topic model inference on streaming
document collections. In Proceedings of KDD 2009, 2009.
Graph-based Consensus Maximization among
Multiple Supervised and Unsupervised Models
Jing Gao†, Feng Liang†, Wei Fan‡, Yizhou Sun†, and Jiawei Han†
† University of Illinois at Urbana-Champaign, IL USA
‡ IBM TJ Watson Research Center, Hawthorn, NY USA
† {jinggao3,liangf,sun22,hanj}@illinois.edu, ‡ [email protected]
Abstract
Ensemble classifiers such as bagging, boosting and model averaging are known
to have improved accuracy and robustness over a single model. Their potential,
however, is limited in applications which have no access to raw data but to the
meta-level model output. In this paper, we study ensemble learning with output
from multiple supervised and unsupervised models, a topic where little work has
been done. Although unsupervised models, such as clustering, do not directly
generate label prediction for each individual, they provide useful constraints for
the joint prediction of a set of related objects. We propose to consolidate a classification solution by maximizing the consensus among both supervised predictions
and unsupervised constraints. We cast this ensemble task as an optimization problem on a bipartite graph, where the objective function favors the smoothness of the
prediction over the graph, as well as penalizing deviations from the initial labeling
provided by supervised models. We solve this problem through iterative propagation of probability estimates among neighboring nodes. Our method can also be
interpreted as conducting a constrained embedding in a transformed space, or a
ranking on the graph. Experimental results on three real applications demonstrate
the benefits of the proposed method over existing alternatives.1
1 Introduction
We seek to integrate knowledge from multiple information sources. Traditional ensemble methods
such as bagging, boosting and model averaging are known to have improved accuracy and robustness
over a single model. Their potential, however, is limited in applications which have no access to raw
data but to the meta-level model output. For example, due to privacy, companies or agencies may
not be willing to share their raw data but their final models. So information fusion needs to be
conducted at the decision level. Furthermore, different data sources may have different formats, for
example, web video classification based on image, audio and text features. In these scenarios, we
have to combine incompatible information sources at the coarser level (predicted class labels) rather
than learn the joint model from raw data.
In this paper, we consider the general problem of combining output of multiple supervised and unsupervised models to improve prediction accuracy. Although unsupervised models, such as clustering,
do not directly generate label predictions, they provide useful constraints for the classification task.
The rationale is that objects that are in the same cluster should be more likely to receive the same
class label than the ones in different clusters. Furthermore, incorporating the unsupervised clustering
models into classification ensembles improves the base model diversity, and thus has the potential
of improving prediction accuracy.
1 More information, data and code are available at http://ews.uiuc.edu/~jinggao3/nips09bgcm.htm
[Figures 1-3 appear here. Figure 1: Groups. The groups produced by a classifier (M1) and a
clustering model (M3) on the toy objects. Figure 2: Bipartite Graph. Group nodes g1, ..., g12 on
the left linked to object nodes x1, ..., x7 on the right; groups from classifiers carry initial
label vectors such as [1 0 0] and [0 0 1]. Figure 3: Position of Consensus Maximization. A grid of
learning methods organized by goal (unsupervised, semi-supervised, supervised) and by level (single
models; ensembles at the raw data; ensembles at the output level): single models include K-means,
spectral clustering, SVM and logistic regression; raw-data ensembles include bagging, boosting,
Bayesian model averaging, multi-view learning, mixture of experts and stacked generalization;
output-level ensembles include clustering ensembles and majority voting, with consensus
maximization occupying the semi-supervised, output-level cell.]
Suppose we have a set of data points X = {x1 , x2 , . . . , xn } from c classes. There are m models
that provide information about the classification of X, where the first r of them are (supervised)
classifiers, and the remaining are (unsupervised) clustering algorithms. Consider an example where
X = {x1 , . . . , x7 }, c = 3 and m = 4. The output of the four models are:
M1 = {1, 1, 1, 2, 3, 3, 2} M2 = {1, 1, 2, 2, 2, 3, 1} M3 = {2, 2, 1, 3, 3, 1, 3} M4 = {1, 2, 3, 1, 2, 1, 1}
where M1 and M2 assign each object a class label, whereas M3 and M4 simply partition the objects
into three clusters and assign each object a cluster ID. Each model, no matter whether it is supervised or
unsupervised, partitions X into groups, and objects in the same group share either the same predicted
class label or the same cluster ID. We summarize the data, models and the corresponding output by
a bipartite graph. In the graph, nodes at the left denote the groups output by the m models with
some labeled ones from the supervised models, nodes at the right denote the n objects, and a group
and an object are connected if the object is assigned to the group by one of the models. For the
aforementioned toy example, we show the groups obtained from a classifier M1 and a clustering
model M3 in Figure 1, as well as the group-object bipartite graph in Figure 2.
The objective is to predict the class label of xi ∈ X, which agrees with the base classifiers'
predictions, and meanwhile satisfies the constraints enforced by the clustering models, as much as
possible. To reach maximum consensus among all the models, we define an optimization problem
over the bipartite graph whose objective function penalizes deviations from the base classifiers'
predictions, and discrepancies of predicted class labels among nearby nodes. In the toy example, the
consensus label predictions for X should be {1, 1, 1, 2, 2, 3, 2}.
Related Work. We summarize various learning problems in Figure 3, where one dimension represents
the goal (from unsupervised to supervised), and the other dimension represents the method (single
models, ensembles at the raw data, or ensembles at the output level). Our proposed method is
a semi-supervised ensemble working at the output level, where little work has been done.
Many efforts have been devoted to develop single-model learning algorithms, such as Support Vector
Machines and logistic regression for classification, K-means and spectral clustering for clustering.
Recent studies reveal that unsupervised information can also be utilized to improve the accuracy of
supervised learning, which leads to semi-supervised [29, 8] and transductive learning [21]. Although
our proposed algorithm works in a transductive setting, existing semi-supervised and transductive
learning methods cannot be easily applied to our problem setting and we discuss this in more detail at the end of Section 2. Note that all methods listed in Figure 3 are for single task learning.
On the contrary, multi-task learning [6, 9] deals with multiple tasks simultaneously by exploiting
dependence among tasks, which has a different problem setting and thus is not discussed here.
In Figure 3, we divide ensemble methods into two categories depending on whether they require
access to raw data. In unsupervised learning, many clustering ensemble methods [12, 17, 25, 26]
have been developed to find a consensus clustering from multiple partitionings without accessing the
features. In supervised learning, however, only majority voting type algorithms work on the model
output level, and most well-known classification ensemble approaches [2, 11, 19] (e.g., bagging,
boosting, bayesian model averaging) involve training diversified classifiers from raw data. Methods
such as mixture of experts [20] and stacked generalization [27] try to obtain a meta-learner on
top of the model output, however, they still need the labels of the raw data as feedbacks, so we
position them as an intermediate between raw data ensemble and output ensemble. In multi-view
learning [4, 13], a joint model is learnt from both labeled and unlabeled data from multiple sources.
Therefore, it can be regarded as a semi-supervised ensemble requiring access to the raw data.
Summary. The proposed consensus maximization problem is a challenging problem that cannot
be solved by simple majority voting. To achieve maximum agreement among various models, we
must seek a global optimal prediction for the target objects. In Section 2, we formally define the
graph-based consensus maximization problem and propose an iterative algorithm to solve it. The
proposed solution propagates labeled information among neighboring nodes until stabilization. We
also present two different interpretations of the proposed method in Section 3, and discuss how
to incorporate feedbacks obtained from a few labeled target objects into the framework in Section
4. An extensive experimental study is carried out in Section 5, where the benefits of the proposed
approach are illustrated on 20 Newsgroup, Cora research papers, and DBLP publication data sets.
2 Methodology
Suppose we have the output of r classification algorithms and (m − r) clustering algorithms on a
data set X. For the sake of simplicity, we assume that each point is assigned to only one class or
cluster in each of the m algorithms, and the number of clusters in each clustering algorithm is c,
the same as the number of classes. Note that cluster ID z may not be related to class z. So each base
algorithm partitions X into c groups, and there are v = mc groups in total, where the first s = rc
groups are generated by classifiers and the remaining v − s are from clustering algorithms.
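To make the construction concrete, the following sketch (our illustration, not the authors' code)
builds the group-object affinity matrix defined formally below for the toy example from the
introduction, where model t contributes groups (t − 1)c through tc − 1:

```python
import numpy as np

def build_affinity(model_outputs, c):
    """Group-object affinity A of shape (n, mc): if model t assigns
    object i label/cluster g (in 1..c), set A[i, t*c + (g-1)] = 1."""
    n = len(model_outputs[0])
    m = len(model_outputs)
    A = np.zeros((n, m * c))
    for t, out in enumerate(model_outputs):
        for i, g in enumerate(out):
            A[i, t * c + (g - 1)] = 1.0
    return A

# Toy example from the introduction: four models on seven objects.
M = [[1, 1, 1, 2, 3, 3, 2], [1, 1, 2, 2, 2, 3, 1],
     [2, 2, 1, 3, 3, 1, 3], [1, 2, 3, 1, 2, 1, 1]]
A = build_affinity(M, c=3)    # shape (7, 12), since v = mc = 12
```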
Before proceeding further, we introduce some notation that will be used in the following discussion:
$B_{n \times m}$ denotes an $n \times m$ matrix with $b_{ij}$ representing the $(i,j)$-th entry, and $\vec{b}_{i\cdot}$ and $\vec{b}_{\cdot j}$ denote the vectors
of row $i$ and column $j$, respectively. See Table 1 for a summary of important symbols.

We represent the objects and groups in a bipartite graph as shown in Figure 2, where the object nodes
$x_1, \dots, x_n$ are on the right and the group nodes $g_1, \dots, g_v$ are on the left. The affinity matrix $A_{n \times v}$ of
this graph summarizes the output of the $m$ algorithms on $X$:

$$a_{ij} = \begin{cases} 1, & \text{if } x_i \text{ is assigned to group } g_j \text{ by one of the algorithms;} \\ 0, & \text{otherwise.} \end{cases}$$

We aim at estimating the conditional probability of each object node $x_i$ belonging to the $c$ classes. As
a nuisance parameter, the conditional probabilities at each group node $g_j$ are also estimated. These
conditional probabilities are denoted by $U_{n \times c}$ for object nodes and $Q_{v \times c}$ for group nodes:

$$u_{iz} = \hat{P}(y = z \mid x_i) \quad \text{and} \quad q_{jz} = \hat{P}(y = z \mid g_j).$$

Since the first $s = rc$ groups are obtained from supervised learning models, they have some initial
class label estimates denoted by $Y_{v \times c}$ where

$$y_{jz} = \begin{cases} 1, & \text{if } g_j\text{'s predicted label is } z,\ j = 1, \dots, s; \\ 0, & \text{otherwise.} \end{cases}$$

Let $k_j = \sum_{z=1}^{c} y_{jz}$, and we formulate the consensus agreement as the following optimization
problem on the graph:

$$\min_{Q,U} f(Q, U) = \min_{Q,U} \left\{ \sum_{i=1}^{n} \sum_{j=1}^{v} a_{ij} \|\vec{u}_{i\cdot} - \vec{q}_{j\cdot}\|^2 + \alpha \sum_{j=1}^{v} k_j \|\vec{q}_{j\cdot} - \vec{y}_{j\cdot}\|^2 \right\} \qquad (1)$$

$$\text{s.t.} \quad \vec{u}_{i\cdot} \geq \vec{0},\ |\vec{u}_{i\cdot}| = 1,\ i = 1:n; \qquad \vec{q}_{j\cdot} \geq \vec{0},\ |\vec{q}_{j\cdot}| = 1,\ j = 1:v$$

where $\|\cdot\|$ and $|\cdot|$ denote a vector's $L_2$ and $L_1$ norm, respectively. The first term ensures that if an
object $x_i$ is assigned to group $g_j$ by one of the algorithms, their conditional probability estimates
must be close. When $j = 1, \dots, s$, the group node $g_j$ is from a classifier, so $k_j = 1$ and the second
term puts the constraint that a group $g_j$'s consensus class label estimate should not deviate much
from its initial class label prediction. $\alpha$ is the shadow price paid for violating the constraints.
When $j = s+1, \dots, v$, $g_j$ is a group from an unsupervised model with no such constraints, and thus
$k_j = 0$ and the weight of the constraint is 0. Finally, $\vec{u}_{i\cdot}$ and $\vec{q}_{j\cdot}$ are probability vectors, so
each component must be greater than or equal to 0 and the components must sum to 1.
We propose to solve this problem using block coordinate descent methods as shown in Algorithm 1.
At the $t$-th iteration, if we fix the value of $U$, the objective function is a summation of $v$ quadratic
components with respect to $\vec{q}_{j\cdot}$. The corresponding Hessian matrix is diagonal with entries equal to
$\sum_{i=1}^{n} a_{ij} + \alpha k_j > 0$. Therefore it is strictly convex, and $\nabla_{\vec{q}_{j\cdot}} f(Q, U^{(t-1)}) = 0$ gives the unique
global minimum of the cost function with respect to $\vec{q}_{j\cdot}$ in Eq. (2). Similarly, fixing $Q$, the unique
global minimum with respect to $\vec{u}_{i\cdot}$ is also obtained:

$$\vec{q}_{j\cdot}^{(t)} = \frac{\sum_{i=1}^{n} a_{ij} \vec{u}_{i\cdot}^{(t-1)} + \alpha k_j \vec{y}_{j\cdot}}{\sum_{i=1}^{n} a_{ij} + \alpha k_j}, \qquad \vec{u}_{i\cdot}^{(t)} = \frac{\sum_{j=1}^{v} a_{ij} \vec{q}_{j\cdot}^{(t)}}{\sum_{j=1}^{v} a_{ij}} \qquad (2)$$

Algorithm 1 BGCM algorithm
Input: group-object affinity matrix $A$, initial labeling matrix $Y$; parameters $\alpha$ and $\epsilon$;
Output: consensus matrix $U$;
Algorithm:
  Initialize $U^0$, $U^1$ randomly
  $t \leftarrow 1$
  while $\|U^t - U^{t-1}\| > \epsilon$ do
    $t \leftarrow t + 1$
    $Q^t = (D_v + \alpha K_v)^{-1}(A^T U^{t-1} + \alpha K_v Y)$
    $U^t = D_n^{-1} A Q^t$
  return $U^t$

Table 1: Important Notations
Symbol                            Definition
$1, \dots, c$                     class indexes
$1, \dots, n$                     object indexes
$1, \dots, s$                     indexes of groups from supervised models
$s+1, \dots, v$                   indexes of groups from unsupervised models
$A_{n \times v} = [a_{ij}]$       $a_{ij}$: indicator of object $i$ in group $j$
$U_{n \times c} = [u_{iz}]$       $u_{iz}$: probability of object $i$ wrt class $z$
$Q_{v \times c} = [q_{jz}]$       $q_{jz}$: probability of group $j$ wrt class $z$
$Y_{v \times c} = [y_{jz}]$       $y_{jz}$: indicator of group $j$ predicted as class $z$

The update formulas in matrix form are given in Algorithm 1. $D_v = \mathrm{diag}\big(\sum_{i=1}^{n} a_{ij}\big)_{v \times v}$ and
$D_n = \mathrm{diag}\big(\sum_{j=1}^{v} a_{ij}\big)_{n \times n}$ act as normalization factors. $K_v = \mathrm{diag}\big(\sum_{z=1}^{c} y_{jz}\big)_{v \times v}$ indicates
the existence of constraints on the group nodes. During each iteration, the probability estimate at
each group node (i.e., $Q$) receives information from its neighboring object nodes while retaining its
initial value $Y$, and in return the updated probability estimates at the group nodes propagate the
information back to their neighboring object nodes when updating $U$. It is straightforward to prove
that $(Q^{(t)}, U^{(t)})$ converges to a stationary point of the optimization problem [3].
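A compact sketch of this iteration follows. It is our illustration of Algorithm 1 rather than the
authors' released code, and it assumes Y has one-hot rows for the s classifier groups and all-zero
rows for the cluster groups, so that k_j is recovered as the row sum of Y.

```python
import numpy as np

def bgcm(A, Y, alpha=2.0, eps=1e-6, max_iter=1000):
    """Block coordinate descent of Eq. (1). A: (n, v) 0/1 affinity with
    no empty groups; Y: (v, c) initial group labels. Returns U: (n, c)."""
    n, _ = A.shape
    c = Y.shape[1]
    k = Y.sum(axis=1)                 # k_j = 1 for classifier groups, else 0
    Dv = A.sum(axis=0)                # group degrees
    Dn = A.sum(axis=1)                # object degrees
    U = np.random.rand(n, c)
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Q = (A.T @ U + alpha * k[:, None] * Y) / (Dv + alpha * k)[:, None]
        U_new = (A @ Q) / Dn[:, None]
        done = np.abs(U_new - U).max() < eps
        U = U_new
        if done:
            break
    return U
```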
In [14], we proposed a heuristic method to combine heterogeneous information sources. In this paper, we bring up the concept of consensus maximization and solve the problem over a bipartite graph
representation. Our proposed method is related to graph-based semi-supervised learning (SSL). But
existing SSL algorithms only take one supervised source (i.e., the labeled objects) and one unsupervised source (i.e., the similarity graph) [29, 8], and thus cannot be applied to combine multiple
models. Some SSL methods [16] can incorporate results from an external classifier into the graph,
but obviously they cannot handle multiple classifiers and multiple unsupervised sources. To apply
SSL algorithms on our problem, we must first fuse all supervised models into one by some ensemble approach, and fuse all unsupervised models into one by defining a similarity function. Such a
compression may lead to information loss, whereas the proposed method retains all the information
and thus consensus can be reached among all the based model output.
3 Interpretations
In this part, we explain the proposed method from two independent perspectives.
Constrained Embedding. Now we focus on the "hard" consensus solution, i.e., each point is
assigned to exactly one class. So $U$ and $Q$ are indicator matrices: $u_{iz} = 1$ if the ensemble assigns
$x_i$ to class $z$, and 0 otherwise; similarly for the $q_{jz}$'s. For group nodes from classification algorithms,
we will treat their entries in $Q$ as known since they have been assigned a class label by one of the
classifiers, that is, $q_{jz} = y_{jz}$ for $1 \leq j \leq s$.
Because $U$ represents the consensus, we should let group $g_j$ correspond to class $z$ if the majority of
the objects in group $g_j$ correspond to class $z$ in the consensus solution. The optimization is thus:

$$\min_{Q,U} \sum_{j=1}^{v} \sum_{z=1}^{c} \left| q_{jz} - \frac{\sum_{i=1}^{n} a_{ij} u_{iz}}{\sum_{i=1}^{n} a_{ij}} \right| \qquad (3)$$

$$\text{s.t.} \quad \sum_{z=1}^{c} u_{iz} = 1 \ \forall i \in \{1, \dots, n\}, \quad \sum_{z=1}^{c} q_{jz} = 1 \ \forall j \in \{s+1, \dots, v\}, \quad u_{iz} \in \{0, 1\},\ q_{jz} \in \{0, 1\} \qquad (4)$$

$$q_{jz} = 1 \ \forall j \in \{1, \dots, s\} \text{ if } g_j\text{'s label is } z, \qquad q_{jz} = 0 \ \forall j \in \{1, \dots, s\} \text{ if } g_j\text{'s label is not } z \qquad (5)$$
Here, the two indicator matrices $U$ and $Q$ can be viewed as embedding $x_1, \dots, x_n$ (object nodes)
and $g_1, \dots, g_v$ (group nodes) into a $c$-dimensional cube. Due to the constraints in Eq. (4), $\vec{u}_{i\cdot}$ and
$\vec{q}_{j\cdot}$ reside on the boundary of a $(c-1)$-dimensional hyperplane in the cube. $\vec{a}_{\cdot j}$ denotes the objects
group $g_j$ contains, $\vec{q}_{j\cdot}$ can be regarded as the group representative in this new space, and thus it
should be close to the group mean $\frac{\sum_{i=1}^{n} a_{ij} \vec{u}_{i\cdot}}{\sum_{i=1}^{n} a_{ij}}$. For the $s$ groups obtained from classification
algorithms, we know their "ideal" embedding, as represented in the constraints in Eq. (5).
We now relate this problem to the optimization framework discussed in Section 2. $a_{ij}$ can only take
the value 0 or 1, and thus Eq. (3) depends only on the cases when $a_{ij} = 1$. When $a_{ij} = 1$, no matter
whether $q_{jz}$ is 1 or 0, we have $\big|q_{jz} \sum_{i=1}^{n} a_{ij} - \sum_{i=1}^{n} a_{ij} u_{iz}\big| = \sum_{i=1}^{n} |a_{ij}(q_{jz} - u_{iz})|$. Therefore,

$$\sum_{j: a_{ij}=1} \sum_{z=1}^{c} \left| q_{jz} - \frac{\sum_{i=1}^{n} a_{ij} u_{iz}}{\sum_{i=1}^{n} a_{ij}} \right| = \sum_{j: a_{ij}=1} \sum_{z=1}^{c} \frac{\big| q_{jz} \sum_{i=1}^{n} a_{ij} - \sum_{i=1}^{n} a_{ij} u_{iz} \big|}{\sum_{i=1}^{n} a_{ij}} = \sum_{j: a_{ij}=1} \sum_{z=1}^{c} \frac{\sum_{i=1}^{n} |a_{ij}(q_{jz} - u_{iz})|}{\sum_{i=1}^{n} a_{ij}}$$

Suppose the groups found by the base models have balanced size, i.e., $\sum_{i=1}^{n} a_{ij} = \delta$ where $\delta$ is a
constant for all $j$. Then the objective function can be approximated (up to the constant factor $1/\delta$) as:

$$\sum_{j: a_{ij}=1} \sum_{z=1}^{c} \sum_{i=1}^{n} |a_{ij}(q_{jz} - u_{iz})| = \sum_{i=1}^{n} \sum_{j: a_{ij}=1} a_{ij} \sum_{z=1}^{c} |q_{jz} - u_{iz}| = \sum_{i=1}^{n} \sum_{j=1}^{v} a_{ij} \sum_{z=1}^{c} |q_{jz} - u_{iz}|$$

Therefore, when the classification and clustering algorithms generate balanced groups, with the same
set of constraints in Eq. (4) and Eq. (5), the constrained embedding problem in Eq. (3) is equivalent
to: $\min_{Q,U} \sum_{i=1}^{n} \sum_{j=1}^{v} a_{ij} \sum_{z=1}^{c} |q_{jz} - u_{iz}|$. It is obvious that this is the same as the optimization
problem we propose in Section 2 with two relaxations: 1) We transform the hard constraints in Eq. (5)
into soft constraints, where the ideal embedding is expressed in the initial labeling matrix $Y$ and
the price for violating the constraints is set to $\alpha$. 2) $u_{iz}$ and $q_{jz}$ are relaxed to take values
between 0 and 1, instead of either 0 or 1, and quadratic cost functions replace the $L_1$ norms. So they
are probability estimates rather than class membership indicators, and we can embed them anywhere
on the plane.
Though with these relaxations, we build connections between the constrained embedding framework
discussed in this section and the one proposed in Section 2. Therefore, we can view our proposed
method as embedding both object nodes and group nodes into a hyperplane so that object nodes are
close to the group nodes they link to. The constraints are put on the group nodes from supervised
models to penalize embeddings that are far from the "ideal" ones.
Ranking on Consensus Structure. Our method can also be viewed as conducting ranking with respect
to each class on the bipartite graph, where group nodes from supervised models act as queries.
Suppose we wish to know the probability of any group $g_j$ belonging to class 1, which can be regarded
as the relevance score of $g_j$ with respect to example queries from class 1. Let $w_j = \sum_{i=1}^{n} a_{ij}$. In
Algorithm 1, the relevance scores of all the groups are learnt using the following equation:

$$\vec{q}_{\cdot 1} = (D_v + \alpha K_v)^{-1}(A^T D_n^{-1} A \vec{q}_{\cdot 1} + \alpha K_v \vec{y}_{\cdot 1}) = D_\lambda (D_v^{-1} A^T D_n^{-1} A) \vec{q}_{\cdot 1} + D_{1-\lambda} \vec{y}_{\cdot 1}$$

where the $v \times v$ diagonal matrices $D_\lambda$ and $D_{1-\lambda}$ have $(j,j)$ entries $\frac{w_j}{w_j + \alpha k_j}$ and $\frac{\alpha k_j}{w_j + \alpha k_j}$.

Consider collapsing the original bipartite graph into a graph with group nodes only; then $A^T A$ is its
affinity matrix. After normalizing it to be a probability matrix, we have $p_{ij}$ in $P = D_v^{-1} A^T D_n^{-1} A$
represent the probability of jumping to node $j$ from node $i$. The groups that are predicted to be in
class 1 by one of the supervised models have 1 at the corresponding entries in $\vec{y}_{\cdot 1}$; therefore these
group nodes are "queries" and we wish to rank the group nodes according to their relevance to them.
Comparing our ranking model with the PageRank model [24], there are the following relationships:
1) In PageRank, a uniform vector with entries all equal to 1 replaces $\vec{y}_{\cdot 1}$. In our model, we use $\vec{y}_{\cdot 1}$
to show our preference towards the query nodes, so the resulting scores are biased to reflect
relevance regarding class 1. 2) In PageRank, the weights $D_\lambda$ and $D_{1-\lambda}$ are fixed constants $\lambda$ and
$1 - \lambda$, whereas in our model $D_\lambda$ and $D_{1-\lambda}$ give personalized damping factors, where each group
has a damping factor $\lambda_j = \frac{w_j}{w_j + \alpha k_j}$. 3) In PageRank, the link-votes are normalized by the number
of outlinks at each node, whereas our ranking model does not normalize $p_{ij}$ over its outlinks, and
thus can be viewed as an un-normalized version of personalized PageRank [18, 28]. When each base
model generates balanced groups, both $\lambda_j$ and the outlinks at each node become constants, and the
proposed method simulates the standard personalized PageRank.
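The quantities in this ranking view are easy to compute explicitly. The sketch below (our code, with
our names) forms the collapsed transition matrix P = D_v^{-1} A^T D_n^{-1} A and the per-group damping
factors λ_j:

```python
import numpy as np

def collapsed_transition_and_damping(A, k, alpha=2.0):
    """P[i, j]: probability of jumping from group i to group j on the
    collapsed group-only graph; lam[j] = w_j / (w_j + alpha * k_j)."""
    Dn = A.sum(axis=1)                # object degrees
    w = A.sum(axis=0)                 # group degrees w_j
    P = (A.T * (1.0 / Dn)) @ A        # A^T Dn^{-1} A
    P = P / w[:, None]                # left-multiply by Dv^{-1}
    lam = w / (w + alpha * k)
    return P, lam
```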
Table 2: Data Sets Description

Data        ID   Category Labels                                                               #target  #labeled
Newsgroup   1    comp.graphics, comp.os.ms-windows.misc, sci.crypt, sci.electronics            1408     160
            2    rec.autos, rec.motorcycles, rec.sport.baseball, rec.sport.hockey              1428     160
            3    sci.crypt, sci.electronics, sci.med, sci.space                                1413     160
            4    misc.forsale, rec.autos, rec.motorcycles, talk.politics.misc                  1324     160
            5    rec.sport.baseball, rec.sport.hockey, sci.crypt, sci.electronics              1424     160
            6    alt.atheism, rec.sport.baseball, rec.sport.hockey, soc.religion.christian     1352     160
Cora        1    Operating Systems, Programming, Data Structures, Algorithms and Theory        603      60
            2    Databases, Hardware and Architecture, Networking, Human Computer Interaction  897      80
            3    Distributed, Memory Management, Agents, Vision and Pattern Recognition        1368     100
            4    Graphics and Virtual Reality, Object Oriented, Planning, Robotics,            875      100
                 Compiler Design, Software Development
DBLP        1    Databases, Data Mining, Machine Learning, Information Retrieval               3836     400
The relevance scores with respect to class 1 for the group and object nodes will converge to

$$\vec{q}_{\cdot 1} = (I_v - D_\lambda D_v^{-1} A^T D_n^{-1} A)^{-1} D_{1-\lambda} \vec{y}_{\cdot 1} \qquad \vec{u}_{\cdot 1} = (I_n - D_n^{-1} A D_\lambda D_v^{-1} A^T)^{-1} D_n^{-1} A D_{1-\lambda} \vec{y}_{\cdot 1}$$

respectively, where $I_v$ and $I_n$ are identity matrices of size $v \times v$ and $n \times n$. The above arguments
hold for the other classes as well, and thus each column in $U$ and $Q$ represents the ranking of the
nodes with respect to one class. Because each row sums up to 1, the rows are conditional probability
estimates of the nodes belonging to each of the classes.
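For completeness, these converged scores can be computed for all classes at once by solving the
corresponding linear system; the following sketch (ours) mirrors the closed-form expression above:

```python
import numpy as np

def bgcm_closed_form(A, Y, alpha=2.0):
    """U = (I_n - Dn^{-1} A Dlam Dv^{-1} A^T)^{-1} Dn^{-1} A D(1-lam) Y,
    solved for all c classes simultaneously."""
    n, _ = A.shape
    k = Y.sum(axis=1)
    w = A.sum(axis=0)
    Dn = A.sum(axis=1)
    lam = w / (w + alpha * k)
    DnA = A / Dn[:, None]                              # Dn^{-1} A
    S = (DnA * lam[None, :]) @ (A.T / w[:, None])      # Dn^{-1} A Dlam Dv^{-1} A^T
    return np.linalg.solve(np.eye(n) - S, DnA @ ((1.0 - lam)[:, None] * Y))
```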
4 Incorporating Labeled Information

Thus far, we propose to combine the output of supervised and unsupervised models by consensus.
When the true labels of the objects are unknown, this is a reliable approach. However, incorporating
labels from even a small portion of the objects may greatly refine the final hypothesis. We assume
that the labels of the first $l$ objects are known, which is encoded in an $n \times c$ matrix $F$:

$$f_{iz} = \begin{cases} 1, & x_i\text{'s observed label is } z,\ i = 1, \dots, l; \\ 0, & \text{otherwise.} \end{cases}$$

We modify the objective function in Eq. (1) to penalize the deviation of $\vec{u}_{i\cdot}$ of labeled objects from
the observed label:

$$f(Q, U) = \sum_{i=1}^{n} \sum_{j=1}^{v} a_{ij} \|\vec{u}_{i\cdot} - \vec{q}_{j\cdot}\|^2 + \alpha \sum_{j=1}^{v} k_j \|\vec{q}_{j\cdot} - \vec{y}_{j\cdot}\|^2 + \beta \sum_{i=1}^{n} h_i \|\vec{u}_{i\cdot} - \vec{f}_{i\cdot}\|^2 \qquad (6)$$

where $h_i = \sum_{z=1}^{c} f_{iz}$. When $i = 1, \dots, l$, $h_i = 1$, so we enforce the constraint that an object $x_i$'s
consensus class label estimate should be close to its observed label, with a shadow price $\beta$. When
$i = l+1, \dots, n$, $x_i$ is unlabeled. Therefore, $h_i = 0$ and the constraint term is eliminated from the
objective function. To update the conditional probability for the objects, we incorporate their prior
labeled information:

$$\vec{u}_{i\cdot}^{t} = \frac{\sum_{j=1}^{v} a_{ij} \vec{q}_{j\cdot}^{t} + \beta h_i \vec{f}_{i\cdot}}{\sum_{j=1}^{v} a_{ij} + \beta h_i} \qquad (7)$$

In matrix form, it would be $U^t = (D_n + \beta H_n)^{-1}(A Q^t + \beta H_n F)$ with $H_n = \mathrm{diag}\big(\sum_{z=1}^{c} f_{iz}\big)_{n \times n}$.
Note that the initial conditional probability of a labeled object is 1 at its
observed class label, and 0 at all the others. However, this optimistic estimate will be changed
during the updates, with the rationale that the observed labels are just random samples from some
multinomial distribution. Thus we only use the observed labels to bias the updating procedure,
instead of totally relying on them.
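In code, only the U-update of the iteration changes; a sketch of Eq. (7) in matrix form follows
(our names, under the same assumptions as before):

```python
import numpy as np

def u_update_with_labels(A, Q, F, beta=8.0):
    """U^t = (Dn + beta*Hn)^{-1} (A Q^t + beta*Hn F), where F has one-hot
    rows for the l labeled objects and all-zero rows for unlabeled ones."""
    h = F.sum(axis=1)                 # h_i = 1 iff object i is labeled
    Dn = A.sum(axis=1)
    return (A @ Q + beta * h[:, None] * F) / (Dn + beta * h)[:, None]
```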
5 Experiments
We evaluate the proposed algorithms on eleven classification tasks from three real world applications. In each task, we have a target set on which we wish to predict class labels. Clustering
algorithms are performed on this target set to obtain the grouping results. On the other hand, we
learn classification models from some training sets that are in the same domain or a relevant domain
with respect to the target set. These classification models are applied to the target set as well. The
proposed algorithm generates a consolidated classification solution for the target set based on both
classification and clustering results. We elaborate on the details of each application in the following.
Table 3: Classification Accuracy Comparison on a Series of Data Sets

                       20 Newsgroups                                    Cora                            DBLP
Methods    1       2       3       4       5       6       1       2       3       4       1
M1         0.7967  0.8855  0.8557  0.8826  0.8765  0.8880  0.7745  0.8858  0.8671  0.8841  0.9337
M2         0.7721  0.8611  0.8134  0.8676  0.8358  0.8563  0.7797  0.8594  0.8508  0.8879  0.8766
M3         0.8056  0.8796  0.8658  0.8983  0.8716  0.9020  0.7779  0.8833  0.8646  0.8813  0.9382
M4         0.7770  0.8571  0.8149  0.8467  0.8543  0.8578  0.7476  0.8594  0.7810  0.9016  0.7949
MCLA       0.7592  0.8173  0.8253  0.8686  0.8295  0.8546  0.8703  0.8388  0.8892  0.8716  0.8953
HBGF       0.8199  0.9244  0.8811  0.9152  0.8991  0.9125  0.7834  0.9111  0.8481  0.8943  0.9357
BGCM       0.8128  0.9101  0.8608  0.9125  0.8864  0.9088  0.8687  0.9155  0.8965  0.9090  0.9417
2-L        0.7981  0.9040  0.8511  0.8728  0.8830  0.8977  0.8066  0.8798  0.8932  0.8951  0.9054
3-L        0.8188  0.9206  0.8820  0.9158  0.8989  0.9121  0.8557  0.9086  0.9202  0.9141  0.9332
BGCM-L     0.8316  0.9197  0.8859  0.9240  0.9016  0.9177  0.8891  0.9181  0.9246  0.9206  0.9480
STD        0.0040  0.0038  0.0037  0.0040  0.0027  0.0030  0.0096  0.0027  0.0052  0.0044  0.0020
20 Newsgroup categorization. We construct six learning tasks, each of which involves four classes.
The objective is to classify newsgroup messages according to topics. We used the version [1] where
the newsgroup messages are sorted by date, and separated into training and test sets. The test sets
are our target sets. We learn logistic regression [15] and SVM models [7] from the training sets, and
apply these models, as well as K-means and min-cut clustering algorithms [22] on the target sets.
Cora research paper classification. We aim at classifying a set of research papers into their areas
[23]. We extract four target sets, each of which includes papers from around four areas. The training sets contain research papers that are different from those in the target sets. Both training and
target sets have two views, the paper abstracts, and the paper citations. We apply logistic regression
classifiers and K-means clustering algorithms on the two views of the target sets.
DBLP data. We retrieve around 4,000 authors from the DBLP network [10], and try to predict their
research areas. The training sets are drawn from a different domain, i.e., the conferences in each
research field. There are also two views for both training and target sets, the publication network, and
the textual content of the publications. The amount of papers an author published in the conference
can be regarded as link feature, whereas the pool of titles that an author published is the text feature.
Logistic regression and K-means clustering algorithms are used to derive the predictions on the
target set. We manually label the target set for evaluation.
The details of each learning task are summarized in Table 2. On each target set, we apply four
models M1 to M4 , where the first two are classification models and the remaining two are clustering models. We denote the proposed method as Bipartite Graph-based Consensus Maximization
(BGCM), which combines the output of the four models. As shown in Figure 3, only clustering
ensembles, majority voting methods, and the proposed BGCM algorithm work at the meta output
level where raw data are discarded and only prediction results from multiple models are available.
However, majority voting cannot be applied when there are clustering models because the correspondence between clusters and classes is unknown. Therefore, we compare BGCM with two
clustering ensemble approaches (MCLA [26] and HBGF [12]), which ignore the label information
from supervised models, regard all the base models as unsupervised clustering, and integrate the
output of the base models. So they only give clustering solutions, not classification results.
To evaluate classification accuracy, we map the output of all the clustering algorithms (the base
models, and the ensembles) to the best possible class predictions with the help of the Hungarian method
[5], where cluster IDs are matched with class labels. Actually, it is "cheating" because the true class
labels are used to do the mapping, and thus it should be able to generate the best accuracy from
these unsupervised models. As discussed in Section 4, we can incorporate a few labeled objects,
which are drawn from the same domain of the target set, into the framework and improve accuracy.
This improved version of the BGCM algorithm is denoted as BGCM-L, and the number of labeled
objects used in each task is shown in Table 2. On each task, we repeat the experiments 50 times,
each of which has randomly chosen target and labeled objects, and report the average accuracy. Due
to space limit, we only show the standard deviation (STD) for BGCM-L method. The baselines
share very similar standard deviation with the reported one on each task.
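The evaluation protocol for the unsupervised output can be sketched as follows, using SciPy's
Hungarian solver; this is our illustration (0-based cluster IDs and labels assumed), not the authors'
evaluation script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_accuracy(cluster_ids, labels, c):
    """Map cluster IDs to class labels by maximum-overlap (Hungarian)
    matching, then score accuracy under that best-case mapping."""
    cost = np.zeros((c, c))
    for g, z in zip(cluster_ids, labels):
        cost[g, z] -= 1.0             # negate counts so we can minimize
    rows, cols = linear_sum_assignment(cost)
    mapping = dict(zip(rows, cols))
    preds = np.array([mapping[g] for g in cluster_ids])
    return float((preds == np.asarray(labels)).mean())
```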
Accuracy. In Table 3, we summarized the classification accuracy of all the baselines and the proposed approach on the target sets of eleven tasks. The two single classifiers (M1 and M2 ), and the
two clustering single models (M3 and M4 ) usually have low accuracy. By combining all the base
models, the clustering ensemble approaches (MCLA and HBGF) can improve the performance over
each single model. However, these two methods are not designed for classification, and the reported
accuracy is the upper bound of their "true" accuracy.

[Figure 4 appears here: accuracy of BGCM-L on Newsgroup1, Cora1 and DBLP1 as (a) α varies,
(b) β varies, and (c) the percentage of labeled objects varies.]
Figure 4: Sensitivity Analysis

The proposed BGCM method always outperforms the base models, and achieves better or comparable
performances compared with the upper
bound of the baseline ensembles. By incorporating a small portion (around 10%) of labeled objects,
the BGCM-L method further improves the performances. The consistent increase in accuracy can
be observed in all the tasks, where the margin between the accuracy of the best single model and
that of the BGCM-L method is from 2% to 10%. Even when taking variance into consideration, the
results demonstrate the power of consensus maximization in accuracy improvements.
Sensitivity. As shown in Figure 4 (a) and (b), the proposed BGCM-L method is not sensitive to the
parameters α and β. To make the plots clear, we just show the performance on the first task of each
application. α and β are the shadow prices paid for deviating from the estimated labels of groups
and the observed labels of objects, so they should be greater than 0. α and β represent the confidence
of our belief in the labels of the groups and objects compared with 1. The labels of group nodes are
obtained from supervised models and may not be correct; therefore, a smaller α usually achieves
better performance. On the other hand, the labels of objects can be regarded as ground truths, and
thus the larger β the better. In experiments, we find that good performance is achieved when α is
below 4 and β is greater than 4. We let α = 2 and β = 8 to get the experimental results shown
in Table 3. Also, we fix the target set as 80% of all the objects, and use 1% to 20% as the labeled
objects to see how the performance varies, and the results are summarized in Figure 4 (c). In general,
more labeled objects would help the classification task, and the improvements are more visible on the
Cora data set. When the percentage reaches 10%, BGCM-L's performance becomes stable.
Number of Models. We vary the number of base models incorporated into the consensus framework. The BGCM-L method on two models is denoted as 2-L, where we average the performance
of the combined model obtained by randomly choosing one classifier and one clustering algorithm.
Similarly, the BGCM-L method on three models is denoted as 3-L. From Table 3, we can see that
BGCM-L method using all the four models outperforms the method incorporating only two or three
models. When the base models are independent and each of them obtains reasonable accuracy,
combining more models would benefit more because the chances of reducing independent errors
increase. However, when the new model cannot provide additional information to the current pool
of models, incorporating it may not improve the performance anymore. In the future, we plan to
identify this upper bound through experiments with more input sources.
6 Conclusions
In this work, we take advantage of the complementary predictive powers of multiple supervised
and unsupervised models to derive a consolidated label assignment for a set of objects jointly. We
propose to summarize base model output in a group-object bipartite graph, and maximize the consensus by promoting smoothness of label assignment over the graph and consistency with the initial
labeling. The problem is solved by propagating labeled information between group and object nodes
through their links iteratively. The proposed method can be interpreted as conducting an embedding
of object and group nodes into a new space, as well as an un-normalized personalized PageRank.
When a few labeled objects are available, the proposed method uses them to guide the propagation
and refine the final hypothesis. In the experiments on 20 newsgroup, Cora and DBLP data, the
proposed consensus maximization method improves the best base model accuracy by 2% to 10%.
Acknowledgement The work was supported in part by the U.S. National Science Foundation grants
IIS-08-42769, IIS-09-05215 and DMS-07-32276, and the Air Force Office of Scientific Research
MURI award FA9550-08-1-0265.
References
[1] 20 Newsgroups Data Set. http://people.csail.mit.edu/jrennie/20Newsgroups/.
[2] E. Bauer and R. Kohavi. An Empirical Comparison of Voting Classification Algorithms: Bagging,
Boosting, and Variants. Machine Learning, 36:105-139, 2004.
[3] D. P. Bertsekas. Nonlinear Programming (2nd Edition). Athena Scientific, 1999.
[4] A. Blum and T. Mitchell. Combining Labeled and Unlabeled Data with Co-training. In Proc. of
COLT'98, pages 92-100, 1998.
[5] N. Borlin. Implementation of Hungarian Method. http://www.cs.umu.se/~niclas/matlab/assignprob/.
[6] R. Caruana. Multitask Learning. Machine Learning, 28:41-75, 1997.
[7] C.-C. Chang and C.-J. Lin. LibSVM: a Library for Support Vector Machines, 2001. Software
available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[8] O. Chapelle, B. Schölkopf and A. Zien (eds). Semi-Supervised Learning. MIT Press, 2006.
[9] K. Crammer, M. Kearns and J. Wortman. Learning from Multiple Sources. Journal of Machine
Learning Research, 9:1757-1774, 2008.
[10] DBLP Bibliography. http://www.informatik.uni-trier.de/~ley/db/.
[11] T. Dietterich. Ensemble Methods in Machine Learning. In Proc. of MCS'00, pages 1-15, 2000.
[12] X. Z. Fern and C. E. Brodley. Solving Cluster Ensemble Problems by Bipartite Graph
Partitioning. In Proc. of ICML'04, pages 281-288, 2004.
[13] K. Ganchev, J. Graca, J. Blitzer, and B. Taskar. Multi-view Learning over Structured and
Non-identical Outputs. In Proc. of UAI'08, pages 204-211, 2008.
[14] J. Gao, W. Fan, Y. Sun, and J. Han. Heterogeneous Source Consensus Learning via Decision
Propagation and Negotiation. In Proc. of KDD'09, pages 339-347, 2009.
[15] A. Genkin, D. D. Lewis, and D. Madigan. BBR: Bayesian Logistic Regression Software.
http://stat.rutgers.edu/~madigan/BBR/.
[16] A. Goldberg and X. Zhu. Seeing Stars when There Aren't Many Stars: Graph-based
Semi-supervised Learning for Sentiment Categorization. In HLT-NAACL 2006 Workshop on TextGraphs.
[17] A. Gionis, H. Mannila, and P. Tsaparas. Clustering Aggregation. ACM Transactions on Knowledge
Discovery from Data, 1(1), 2007.
[18] T. Haveliwala. Topic-Sensitive PageRank: A Context-Sensitive Ranking Algorithm for Web Search.
IEEE Transactions on Knowledge and Data Engineering, 15(4):1041-4347, 2003.
[19] J. Hoeting, D. Madigan, A. Raftery, and C. Volinsky. Bayesian Model Averaging: a Tutorial.
Statistical Science, 14:382-417, 1999.
[20] R. Jacobs, M. Jordan, S. Nowlan, and G. Hinton. Adaptive Mixtures of Local Experts. Neural
Computation, 3:79-87, 1991.
[21] T. Joachims. Transductive Learning via Spectral Graph Partitioning. In Proc. of ICML'03,
pages 290-297, 2003.
[22] G. Karypis. CLUTO: Family of Data Clustering Software Tools.
http://glaros.dtc.umn.edu/gkhome/views/cluto.
[23] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the Construction of Internet
Portals with Machine Learning. Information Retrieval Journal, 3:127-163, 2000.
[24] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order
to the Web. Technical Report, Stanford InfoLab, 1999.
[25] V. Singh, L. Mukherjee, J. Peng, and J. Xu. Ensemble Clustering using Semidefinite
Programming. In Proc. of NIPS'07, 2007.
[26] A. Strehl and J. Ghosh. Cluster Ensembles: a Knowledge Reuse Framework for Combining Multiple
Partitions. Journal of Machine Learning Research, 3:583-617, 2003.
[27] D. Wolpert. Stacked Generalization. Neural Networks, 5:241-259, 1992.
[28] D. Zhou, J. Weston, A. Gretton, O. Bousquet and B. Schölkopf. Ranking on Data Manifolds. In
Proc. of NIPS'03, pages 169-176, 2003.
[29] X. Zhu. Semi-supervised Learning Literature Survey. Technical Report 1530, Computer Sciences,
University of Wisconsin-Madison, 2005.
Replicated Softmax: an Undirected Topic Model
Ruslan Salakhutdinov
Brain and Cognitive Sciences and CSAIL
Massachusetts Institute of Technology
[email protected]
Geoffrey Hinton
Department of Computer Science
University of Toronto
[email protected]
Abstract
We introduce a two-layer undirected graphical model, called a "Replicated Softmax", that can be used to model and automatically extract low-dimensional latent
semantic representations from a large unstructured collection of documents. We
present efficient learning and inference algorithms for this model, and show how a
Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data.
This allows us to demonstrate that the proposed model is able to generalize much
better compared to Latent Dirichlet Allocation in terms of both the log-probability
of held-out documents and the retrieval accuracy.
1 Introduction
Probabilistic topic models [2, 9, 6] are often used to analyze and extract semantic topics from large
text collections. Many of the existing topic models are based on the assumption that each document
is represented as a mixture of topics, where each topic defines a probability distribution over words.
The mixing proportions of the topics are document specific, but the probability distribution over
words, defined by each topic, is the same across all documents.
All these models can be viewed as graphical models in which latent topic variables have directed
connections to observed variables that represent words in a document. One major drawback is that
exact inference in these models is intractable, so one has to resort to slow or inaccurate approximations to compute the posterior distribution over topics. A second major drawback, that is shared by
all mixture models, is that these models can never make predictions for words that are sharper than
the distributions predicted by any of the individual topics. They are unable to capture the essential
idea of distributed representations which is that the distributions predicted by individual active features get multiplied together (and renormalized) to give the distribution predicted by a whole set of
active features. This allows individual features to be fairly general but their intersection to be much
more precise. For example, distributed representations allow the topics "government", "mafia" and
"playboy" to combine to give very high probability to a word "Berlusconi" that is not predicted
nearly as strongly by each topic alone.
To date, there has been very little work on developing topic models using undirected graphical models. Several authors [4, 17] used two-layer undirected graphical models, called Restricted Boltzmann
Machines (RBMs), in which word-count vectors are modeled as a Poisson distribution. While these
models are able to produce distributed representations of the input and perform well in terms of retrieval accuracy, they are unable to properly deal with documents of different lengths, which makes
learning very unstable and hard. This is perhaps the main reason why these potentially powerful
models have not found their application in practice. Directed models, on the other hand, can easily handle unobserved words (by simply ignoring them), which allows them to easily deal with
different-sized documents. For undirected models marginalizing over unobserved variables is generally a non-trivial operation, which makes learning far more difficult. Recently, [13] attempted to
fix this problem by proposing a Constrained Poisson model that would ensure that the mean Poisson
rates across all words sum up to the length of the document. While the parameter learning has been
shown to be stable, the introduced model no longer defines a proper probability distribution over the
word counts.
In the next section we introduce a "Replicated Softmax" model. The model can be efficiently trained
using Contrastive Divergence, it has a better way of dealing with documents of different lengths, and
computing the posterior distribution over the latent topic values is easy. We will also demonstrate
that the proposed model is able to generalize much better compared to a popular Bayesian mixture
model, Latent Dirichlet Allocation (LDA) [2], in terms of both the log-probability on previously
unseen documents and the retrieval accuracy.
2 Replicated Softmax: A Generative Model of Word Counts
Consider modeling discrete visible units v using a restricted Boltzmann machine, that has a two-layer architecture as shown in Fig. 1. Let v ∈ {1, ..., K}^D, where K is the dictionary size and D
is the document size, and let h ∈ {0, 1}^F be binary stochastic hidden topic features. Let V be a
K × D observed binary matrix with v_i^k = 1 if visible unit i takes on the k-th value. We define the energy
of the state {V, h} as follows:
E(V, h) = −∑_{i=1}^{D} ∑_{j=1}^{F} ∑_{k=1}^{K} W_{ij}^k h_j v_i^k − ∑_{i=1}^{D} ∑_{k=1}^{K} v_i^k b_i^k − ∑_{j=1}^{F} h_j a_j,   (1)
where {W, a, b} are the model parameters: W_{ij}^k is a symmetric interaction term between visible
unit i that takes on value k, and hidden feature j, b_i^k is the bias of unit i that takes on value k, and a_j
is the bias of hidden feature j (see Fig. 1). The probability that the model assigns to a visible binary
matrix V is:
P(V) = (1/Z) ∑_h exp(−E(V, h)),   Z = ∑_V ∑_h exp(−E(V, h)),   (2)
where Z is known as the partition function or normalizing constant. The conditional distributions
are given by softmax and logistic functions:
p(v_i^k = 1 | h) = exp(b_i^k + ∑_{j=1}^{F} h_j W_{ij}^k) / ∑_{q=1}^{K} exp(b_i^q + ∑_{j=1}^{F} h_j W_{ij}^q),   (3)
p(h_j = 1 | V) = σ(a_j + ∑_{i=1}^{D} ∑_{k=1}^{K} v_i^k W_{ij}^k),   (4)
where σ(x) = 1/(1 + exp(−x)) is the logistic function.
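As a concrete illustration of Eqs. 3 and 4, the two conditionals can be computed as below. This is our own NumPy sketch, not code from the paper; the array shapes are assumptions.

import numpy as np

def p_h_given_v(V, W, a):
    # Eq. (4): p(h_j = 1 | V) = sigmoid(a_j + sum_{i,k} v_i^k W_{ij}^k)
    # V: (D, K) one-hot rows; W: (D, F, K); a: (F,)
    return 1.0 / (1.0 + np.exp(-(a + np.einsum('ik,ijk->j', V, W))))

def p_v_given_h(h, W, b):
    # Eq. (3): an independent softmax over the K values of each visible unit i
    # h: (F,) binary hidden states; b: (D, K)
    logits = b + np.einsum('j,ijk->ik', h, W)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

Alternating these two conditionals gives the Gibbs sampler used for learning below.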
Now suppose that for each document we create a separate RBM with as many softmax units as there
are words in the document. Assuming we can ignore the order of the words, all of these softmax units
can share the same set of weights, connecting them to binary hidden units. Consider a document
that contains D words. In this case, we define the energy of the state {V, h} to be:
E(V, h) = −∑_{j=1}^{F} ∑_{k=1}^{K} W_j^k h_j v̂^k − ∑_{k=1}^{K} v̂^k b^k − D ∑_{j=1}^{F} h_j a_j,   (5)
where v̂^k = ∑_{i=1}^{D} v_i^k denotes the count for the k-th word. Observe that the bias terms of the hidden
units are scaled up by the length of the document. This scaling is crucial and allows hidden topic
units to behave sensibly when dealing with documents of different lengths.
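Because the weights are shared, Eq. 5 depends on the data only through the count vector v̂ and the document length D. A minimal sketch (our notation, with W now an F × K matrix; an assumption, not the authors' code):

import numpy as np

def replicated_energy(v_hat, h, W, a, b):
    # Eq. (5): v_hat (K,) word counts, h (F,) binary topics,
    # W (F, K) shared weights, a (F,) hidden biases, b (K,) visible biases
    D = v_hat.sum()
    return -(h @ W @ v_hat) - (b @ v_hat) - D * (a @ h)

def p_h_given_counts(v_hat, W, a):
    # p(h_j = 1 | V) = sigmoid(D * a_j + sum_k W_{jk} * v_hat_k);
    # note the hidden bias scaled by the document length D
    D = v_hat.sum()
    return 1.0 / (1.0 + np.exp(-(D * a + W @ v_hat)))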
Given a collection of N documents {V_n}_{n=1}^{N}, the derivative of the log-likelihood with respect to
parameters W takes the form:
(1/N) ∑_{n=1}^{N} ∂ log P(V_n)/∂W_j^k = E_{P_data}[v̂^k h_j] − E_{P_Model}[v̂^k h_j],
where E_{P_data}[·] denotes an expectation with respect to the data distribution P_data(h, V) =
p(h|V)P_data(V), with P_data(V) = (1/N) ∑_n δ(V − V_n) representing the empirical distribution,
[Figure 1 diagram: two RBMs with shared softmax weights W1, W2 over the visible units v (left) and the equivalent single multinomial visible unit (right).]
Figure 1: Replicated Softmax model. The top layer represents a vector h of stochastic, binary topic features
and the bottom layer represents softmax visible units v. All visible units share the same set of weights,
connecting them to binary hidden units. Left: The model for a document containing two and three words.
Right: A different interpretation of the Replicated Softmax model, in which D softmax units with identical
weights are replaced by a single multinomial unit which is sampled D times.
and E_{P_Model}[·] is an expectation with respect to the distribution defined by the model. Exact maximum likelihood learning in this model is intractable because exact computation of the expectation
E_{P_Model}[·] takes time that is exponential in min{D, F}, i.e. the number of visible or hidden units. To
avoid computing this expectation, learning is done by following an approximation to the gradient of
a different objective function, called the "Contrastive Divergence" (CD) ([7]):
ΔW_j^k = α (E_{P_data}[v̂^k h_j] − E_{P_T}[v̂^k h_j]),   (6)
where α is the learning rate and P_T represents a distribution defined by running the Gibbs chain,
initialized at the data, for T full steps. The special bipartite structure of RBM's allows for quite an
efficient Gibbs sampler that alternates between sampling the states of the hidden units independently
given the states of the visible units, and vice versa (see Eqs. 3, 4). Setting T = ∞ recovers maximum
likelihood learning.
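A single CD step of Eq. 6 for one document might then look as follows. This is our simplification, not the authors' code: the D softmax draws are replaced by one multinomial unit sampled D times (as Fig. 1 suggests), and the learning rate is an arbitrary choice.

import numpy as np

def cd1_step(v_hat, W, a, b, lr=0.01, rng=None):
    rng = rng or np.random.default_rng(0)
    D = int(v_hat.sum())
    # positive phase: hidden probabilities driven by the data
    p_h_data = 1.0 / (1.0 + np.exp(-(D * a + W @ v_hat)))
    h = (rng.random(p_h_data.shape) < p_h_data).astype(float)
    # negative phase: reconstruct D words from the multinomial visible unit
    logits = b + W.T @ h
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    v_recon = rng.multinomial(D, probs).astype(float)
    p_h_recon = 1.0 / (1.0 + np.exp(-(D * a + W @ v_recon)))
    # Eq. (6): data-dependent minus reconstruction-dependent statistics
    W += lr * (np.outer(p_h_data, v_hat) - np.outer(p_h_recon, v_recon))
    b += lr * (v_hat - v_recon)
    a += lr * D * (p_h_data - p_h_recon)   # hidden bias gradient carries the factor D
    return W, a, b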
The weights can now be shared by the whole family of different-sized RBM's that are created for
documents of different lengths (see Fig. 1). We call this the "Replicated Softmax" model. A pleasing
property of this model is that computing the approximate gradients of the CD objective (Eq. 6) for a
document that contains 100 words is computationally not much more expensive than computing the
gradients for a document that contains only one word. A key observation is that using D softmax
units with identical weights is equivalent to having a single multinomial unit which is sampled D
times, as shown in Fig. 1, right panel. If instead of sampling, we use real-valued softmax probabilities multiplied by D, we exactly recover the learning algorithm of a Constrained Poisson model
[13], except for the scaling of the hidden biases with D.
3 Evaluating Replicated Softmax as a Generative Model
Assessing the generalization performance of probabilistic topic models plays an important role in
model selection. Much of the existing literature, particularly for undirected topic models [4, 17],
uses extremely indirect performance measures, such as information retrieval or document classification. More broadly, however, the ability of the model to generalize can be evaluated by computing
the probability that the model assigns to the previously unseen documents, which is independent of
any specific application.
For undirected models, computing the probability of held-out documents exactly is intractable, since
computing the global normalization constant requires enumeration over an exponential number of
terms. Evaluating the same probability for directed topic models is also difficult, because there are
an exponential number of possible topic assignments for the words.
Recently, [14] showed that a Monte Carlo based method, Annealed Importance Sampling (AIS) [12],
can be used to efficiently estimate the partition function of an RBM. We also find AIS attractive
because it not only provides a good estimate of the partition function in a reasonable amount of
computer time, but it can also just as easily be used to estimate the probability of held-out documents
for directed topic models, including Latent Dirichlet Allocation (for details see [16]). This will
allow us to properly measure and compare generalization capabilities of Replicated Softmax and
Algorithm 1 Annealed Importance Sampling (AIS) run.
1: Initialize 0 = β_0 < β_1 < ... < β_S = 1.
2: Sample V_1 from p_0.
3: for s = 1 : S − 1 do
4:    Sample V_{s+1} given V_s using T_s(V_{s+1} ← V_s).
5: end for
6: Set w_AIS = ∏_{s=1}^{S} p*_s(V_s) / p*_{s−1}(V_s).
LDA models. We now show how AIS can be used to estimate the partition function of a Replicated
Softmax model.
3.1 Annealed Importance Sampling
Suppose we have two distributions: p_A(x) = p*_A(x)/Z_A and p_B(x) = p*_B(x)/Z_B. Typically
p_A(x) is defined to be some simple proposal distribution with known Z_A, whereas p_B represents
our complex target distribution of interest. One way of estimating the ratio of normalizing constants
is to use a simple importance sampling method:
Z_B / Z_A = ∑_x (p*_B(x) / p*_A(x)) p_A(x) = E_{p_A}[p*_B(x) / p*_A(x)] ≈ (1/N) ∑_{i=1}^{N} p*_B(x^(i)) / p*_A(x^(i)),   (7)
where x^(i) ~ p_A. However, if the p_A and p_B are not close enough, the estimator will be very poor.
In high-dimensional spaces, the variance of the importance sampling estimator will be very large, or
possibly infinite, unless pA is a near-perfect approximation to pB .
Annealed Importance Sampling can be viewed as simple importance sampling defined on a much
higher dimensional state space. It uses many auxiliary variables in order to make the proposal distribution p_A be closer to the target distribution p_B. AIS starts by defining a sequence of intermediate
probability distributions: p_0, ..., p_S, with p_0 = p_A and p_S = p_B. One general way to define this
sequence is to set:
p_k(x) ∝ p*_A(x)^{1−β_k} p*_B(x)^{β_k},   (8)
with "inverse temperatures" 0 = β_0 < β_1 < ... < β_K = 1 chosen by the user. For each intermediate
distribution, a Markov chain transition operator T_k(x'; x) that leaves p_k(x) invariant must also be
defined.
Using the special bipartite structure of RBM's, we can devise a better AIS scheme [14] for estimating
the model's partition function. Let us consider a Replicated Softmax model with D words. Using
Eq. 5, the joint distribution over {V, h} is defined as¹:
p(V, h) = (1/Z) exp(∑_{j=1}^{F} ∑_{k=1}^{K} W_j^k h_j v̂^k),   (9)
where v̂^k = ∑_{i=1}^{D} v_i^k denotes the count for the k-th word. By explicitly summing out the latent topic
units h we can easily evaluate an unnormalized probability p*(V). The sequence of intermediate
distributions, parameterized by β, can now be defined as follows:
p_s(V) = (1/Z_s) p*_s(V) = (1/Z_s) ∑_h p*_s(V, h) = (1/Z_s) ∏_{j=1}^{F} (1 + exp(β_s ∑_{k=1}^{K} W_j^k v̂^k)).   (10)
Note that for s = 0, we have β_s = 0, and so p_0 represents a uniform distribution, whose partition
function evaluates to Z_0 = 2^F, where F is the number of hidden units. Similarly, when s = S, we
have β_s = 1, and so p_S represents the distribution defined by the Replicated Softmax model. For the
intermediate values of s, we will have some interpolation between uniform and target distributions.
Using Eqs. 3, 4, it is also straightforward to derive an efficient Gibbs transition operator that leaves
p_s(V) invariant.
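A single AIS run for this model (Algorithm 1 with the chain of Eq. 10) can be sketched as below. This is our own simplified version with one Gibbs sweep per temperature, and it follows the paper in omitting the bias terms.

import numpy as np

def log_p_star(v_hat, W, beta):
    # log of Eq. (10)'s unnormalized probability: sum_j log(1 + exp(beta * (W v_hat)_j))
    return np.logaddexp(0.0, beta * (W @ v_hat)).sum()

def ais_run(W, D, K, S=1000, rng=None):
    # returns log w_AIS; averaging exp(log w_AIS) over M runs estimates Z_S / Z_0
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(0.0, 1.0, S + 1)
    v_hat = rng.multinomial(D, np.ones(K) / K).astype(float)  # sample V_1 from p_0
    log_w = 0.0
    for s in range(1, S + 1):
        log_w += log_p_star(v_hat, W, betas[s]) - log_p_star(v_hat, W, betas[s - 1])
        # one Gibbs transition leaving p_s invariant: h | V, then V | h
        p_h = 1.0 / (1.0 + np.exp(-betas[s] * (W @ v_hat)))
        h = (rng.random(p_h.shape) < p_h).astype(float)
        logits = betas[s] * (W.T @ h)
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        v_hat = rng.multinomial(D, probs).astype(float)
    return log_w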
1 We have omitted the bias terms for clarity of presentation.
A single run of the AIS procedure is summarized in Algorithm 1. It starts by first sampling from a simple uniform distribution p_0(V) and then applying a series of transition operators T_1, T_2, ..., T_{S−1}
that "move" the sample through the intermediate distributions p_s(V) towards the target distribution
p_S(V). Note that there is no need to compute the normalizing constants of any intermediate distributions. After performing M runs of AIS, the importance weights w_AIS^(i) can be used to obtain an
unbiased estimate of our model's partition function Z_S:
Z_S / Z_0 ≈ (1/M) ∑_{i=1}^{M} w_AIS^(i),   (11)
where Z_0 = 2^F. Observe that the Markov transition operators do not necessarily need to be ergodic.
In particular, if we were to choose dumb transition operators that do nothing, T_s(V' ← V) = δ(V' − V) for all s, we simply recover the simple importance sampling procedure of Eq. 7.
When evaluating the probability of a collection of several documents, we need to perform a separate
AIS run per document, if those documents are of different lengths. This is because each different-sized document can be represented as a separate RBM that has its own global normalizing constant.
4 Experimental Results
In this section we present experimental results on three text datasets: NIPS proceedings papers, 20-newsgroups, and Reuters Corpus Volume I (RCV1-v2) [10], and report generalization performance of Replicated Softmax and LDA models.
4.1 Description of Datasets
The NIPS proceedings papers² contain 1740 NIPS papers. We used the first 1690 documents as
training data and the remaining 50 documents as test. The dataset was already preprocessed, where
each document was represented as a vector containing 13,649 word counts.
The 20-newsgroups corpus contains 18,845 postings taken from the Usenet newsgroup collection.
The corpus is partitioned fairly evenly into 20 different newsgroups, each corresponding to a separate topic.3 The data was split by date into 11,314 training and 7,531 test articles, so the training and
test sets were separated in time. We further preprocessed the data by removing common stopwords,
stemming, and then only considering the 2000 most frequent words in the training dataset. As a result, each posting was represented as a vector containing 2000 word counts. No other preprocessing
was done.
The Reuters Corpus Volume I is an archive of 804,414 newswire stories4 that have been manually
categorized into 103 topics. The topic classes form a tree which is typically of depth 3. For this
dataset, we define the relevance of one document to another to be the fraction of the topic labels that
agree on the two paths from the root to the two documents. The data was randomly split into 794,414
training and 10,000 test articles. The available data was already in the preprocessed format, where
common stopwords were removed and all documents were stemmed. We again only considered the
10,000 most frequent words in the training dataset.
For all datasets, each word count w_i was replaced by log(1 + w_i), rounded to the nearest integer,
which slightly improved retrieval performance of both models. Table 1 shows description of all three
datasets.
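The count transform is a one-liner; a small sketch (the variable names are ours):

import numpy as np

counts = np.array([0, 1, 3, 12])                     # raw word counts w_i
transformed = np.rint(np.log1p(counts)).astype(int)  # log(1 + w_i), rounded
# transformed == [0, 1, 1, 3]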
4.2 Details of Training
For the Replicated Softmax model, to speed-up learning, we subdivided datasets into minibatches,
each containing 100 training cases, and updated the parameters after each minibatch. Learning
was carried out using Contrastive Divergence by starting with one full Gibbs step and gradually
increasing to five steps during the course of training, as described in [14]. For all three datasets, the
total number of parameter updates was set to 100,000, which took several hours to train. For the
2 Available at http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm.
3 Available at http://people.csail.mit.edu/jrennie/20Newsgroups (20news-bydate.tar.gz).
4 Available at http://trec.nist.gov/data/reuters/reuters.html
Data set   K        D̄      St. Dev.   Number of docs         Avg. test perplexity per word (in nats)
                                      Train      Test        LDA-50   LDA-200   R. Soft-50   Unigram
NIPS       13,649   98.0    245.3     1,690      50          3576     3391      3405         4385
20-news    2,000    51.8    70.8      11,314     7,531       1091     1058      953          1335
Reuters    10,000   94.6    69.3      794,414    10,000      1437     1142      988          2208

Table 1: Results for LDA using 50 and 200 topics, and the Replicated Softmax model that uses 50 topics. K is the vocabulary size, D̄ is the mean document length, St. Dev. is the estimated standard deviation in document length.
[Figure 2: three scatter panels (NIPS Proceedings, 20-newsgroups, Reuters) plotting per-document test perplexity, Replicated Softmax on the x-axis versus LDA on the y-axis.]
Figure 2: The average test perplexity scores for each of the 50 held-out documents under the learned 50-dimensional Replicated Softmax and LDA that uses 50 topics.
LDA model, we used the Gibbs sampling implementation of the Matlab Topic Modeling Toolbox5
[5]. The hyperparameters were optimized using stochastic EM as described by [15]. For the 20newsgroups and NIPS datasets, the number of Gibbs updates was set to 100,000. For the large
Reuters dataset, it was set to 10,000, which took several days to train.
4.3 Assessing Topic Models as Generative Models
For each of the three datasets, we estimated the log-probability for 50 held-out documents.⁶ For both
the Replicated Softmax and LDA models we used 10,000 inverse temperatures β_s, spaced uniformly
from 0 to 1. For each held-out document, the estimates were averaged over 100 AIS runs. The
average test perplexity per word was then estimated as exp(−(1/N) ∑_{n=1}^{N} (1/D_n) log p(v_n)), where
N is the total number of documents, D_n and v_n are the total number of words and the observed
word-count vector for a document n.
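In code, this estimate is (a sketch; log_p holds the AIS-based estimates of log p(v_n)):

import numpy as np

def avg_test_perplexity(log_p, doc_lengths):
    # exp( -(1/N) * sum_n (1/D_n) * log p(v_n) )
    log_p = np.asarray(log_p, dtype=float)
    D = np.asarray(doc_lengths, dtype=float)
    return np.exp(-np.mean(log_p / D))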
Table 1 shows that for all three datasets the 50-dimensional Replicated Softmax consistently outperforms the LDA with 50-topics. For the NIPS dataset, the undirected model achieves the average test
perplexity of 3405, improving upon LDA's perplexity of 3576. The LDA with 200 topics performed
much better on this dataset compared to the LDA-50, but its performance only slightly improved
upon the 50-dimensional Replicated Softmax model. For the 20-newsgroups dataset, even with 200
topics, the LDA could not match the perplexity of the Replicated Softmax model with 50 topic units.
The difference in performance is particularly striking for the large Reuters dataset, whose vocabulary
size is 10,000. LDA achieves an average test perplexity of 1437, substantially reducing it from
2208, achieved by a simple smoothed unigram model. The Replicated Softmax further reduces the
perplexity down to 986, which is comparable in magnitude to the improvement produced by the LDA
over the unigram model. LDA with 200 topics does improve upon LDA-50, achieving a perplexity
of 1142. However, its performance is still considerably worse than that of the Replicated Softmax
model.
5 The code is available at http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
6 For the 20-newsgroups and Reuters datasets, the 50 held-out documents were randomly sampled from the test sets.
[Figure 3: precision-recall curves for 20-newsgroups (left) and Reuters (right); in both panels the Replicated Softmax 50-D curve lies above the LDA 50-D curve across recall levels.]
Figure 3: Precision-Recall curves for the 20-newsgroups and Reuters datasets, when a query document from the test set is used to retrieve similar documents from the training corpus. Results are averaged over all 7,531 (for 20-newsgroups) and 10,000 (for Reuters) possible queries.
Figure 2 further shows three scatter plots of the average test perplexity per document. Observe that
for almost all test documents, the Replicated Softmax achieves a better perplexity compared to the
corresponding LDA model. For the Reuters dataset, as expected, there are many documents that are
modeled much better by the undirected model than an LDA. Clearly, the Replicated Softmax is able
to generalize much better.
4.4 Document Retrieval
We used 20-newsgroup and Reuters datasets to evaluate model performance on a document retrieval
task. To decide whether a retrieved document is relevant to the query document, we simply check if
they have the same class label. This is the only time that the class labels are used. For the Replicated
Softmax, the mapping from a word-count vector to the values of the latent topic features is fast,
requiring only a single matrix multiplication followed by a componentwise sigmoid non-linearity.
For the LDA, we used 1000 Gibbs sweeps per test document in order to get an approximate posterior
over the topics. Figure 3 shows that when we use the cosine of the angle between two topic vectors to
measure their similarity, the Replicated Softmax significantly outperforms LDA, particularly when
retrieving the top few documents.
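A hedged sketch of this retrieval pipeline (the function and variable names are ours):

import numpy as np

def topic_features(v_hat, W, a):
    # one matrix multiply plus a componentwise sigmoid, as described above
    D = v_hat.sum()
    return 1.0 / (1.0 + np.exp(-(D * a + W @ v_hat)))

def retrieve(query_counts, train_counts, W, a, top_k=10):
    q = topic_features(query_counts, W, a)
    docs = np.stack([topic_features(v, W, a) for v in train_counts])
    # cosine of the angle between topic vectors
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]  # indices of the most similar training documents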
5 Conclusions and Extensions
We have presented a simple two-layer undirected topic model that can be used to model and automatically extract distributed semantic representations from large collections of text corpora. The model
can be viewed as a family of different-sized RBM's that share parameters. The proposed model has
several key advantages: the learning is easy and stable, it can model documents of different lengths,
and computing the posterior distribution over the latent topic values is easy. Furthermore, using
stochastic gradient descent, scaling up learning to billions of documents would not be particularly
difficult. This is in contrast to directed topic models, where most of the existing inference algorithms
are designed to be run in a batch mode. Therefore one would have to make further approximations,
for example by using particle filtering [3]. We have also demonstrated that the proposed model is
able to generalize much better than LDA in terms of both the log-probability on held-out documents
and the retrieval accuracy.
In this paper we have only considered the simplest possible topic model, but the proposed model can
be extended in several ways. For example, similar to supervised LDA [1], the proposed Replicated
Softmax can be easily extended to modeling the joint distribution over words and a document
label, as shown in Fig. 4, left panel. Recently, [11] introduced a Dirichlet-multinomial regression
model, where a prior on the document-specific topic distributions was modeled as a function of
observed metadata of the document. Similarly, we can define a conditional Replicated Softmax
model, where the observed document-specific metadata, such as author, references, etc., can be used
[Figure 4 diagrams: a joint word-label model (left) and a conditional model with observed metadata feeding the latent topics (right).]
Figure 4: Left: A Replicated Softmax model that models the joint distribution of words and document label.
Right: Conditional Replicated Softmax model where the observed document-specific metadata affects binary
states of the hidden topic units.
to influence the states of the latent topic units, as shown in Fig. 4, right panel. Finally, as argued by
[13], a single layer of binary features may not be the best way to capture the complex structure in the
count data. Once the Replicated Softmax has been trained, we can add more layers to create a Deep
Belief Network [8], which could potentially produce a better generative model and further improve
retrieval accuracy.
Acknowledgments
This research was supported by NSERC, CFI, and CIFAR.
References
[1] D. Blei and J. McAuliffe. Supervised topic models. In NIPS, 2007.
[2] D. Blei, A. Ng, and M. Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] K. Canini, L. Shi, and T. Griffiths. Online inference of topics with latent Dirichlet allocation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, 2009.
[4] P. Gehler, A. Holub, and M. Welling. The Rate Adapting Poisson (RAP) model for information retrieval and object recognition. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[5] T. Griffiths and M. Steyvers. Finding scientific topics. In Proceedings of the National Academy of Sciences, volume 101, pages 5228-5235, 2004.
[6] Thomas Griffiths and Mark Steyvers. Finding scientific topics. PNAS, 101(suppl. 1), 2004.
[7] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1711-1800, 2002.
[8] G. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[9] T. Hofmann. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in AI, pages 289-296, San Francisco, California, 1999. Morgan Kaufmann.
[10] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[11] D. Mimno and A. McCallum. Topic models conditioned on arbitrary features with dirichlet-multinomial regression. In UAI, pages 411-418, 2008.
[12] R. Neal. Annealed importance sampling. Statistics and Computing, 11:125-139, 2001.
[13] R. Salakhutdinov and G. Hinton. Semantic Hashing. In SIGIR workshop on Information Retrieval and applications of Graphical Models, 2007.
[14] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the International Conference on Machine Learning, volume 25, pages 872-879, 2008.
[15] H. Wallach. Topic modeling: beyond bag-of-words. In ICML, volume 148, pages 977-984, 2006.
[16] H. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), 2009.
[17] E. Xing, R. Yan, and A. Hauptmann. Mining associated text and images with dual-wing harmoniums. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI-2005), 2005.
Complexity of Decentralized Control: Special Cases
Shlomo Zilberstein
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Martin Allen
Department of Computer Science
Connecticut College
New London, CT 06320
[email protected]
Abstract
The worst-case complexity of general decentralized POMDPs, which are equivalent to partially observable stochastic games (POSGs) is very high, both for the
cooperative and competitive cases. Some reductions in complexity have been
achieved by exploiting independence relations in some models. We show that
these results are somewhat limited: when these independence assumptions are
relaxed in very small ways, complexity returns to that of the general case.
1 Introduction
Decentralized and partially observable stochastic decision and planning problems are very common,
comprising anything from strategic games of chance to robotic space exploration. In such domains,
multiple agents act under uncertainty about both their environment and the plans and actions of
others. These problems can be represented as decentralized partially observable Markov decision
processes (Dec-POMDPs), or the equivalent, partially observable stochastic games (POSGs), allowing for precise formulation of solution concepts and success criteria.
Alas, such problems are highly complex. As shown by Bernstein et al. [1, 2], the full, cooperative
problem, where all players share the same payoff and strategies can depend upon entire observed
histories, is NEXP-complete. Recently, Goldsmith and Mundhenk [3] showed that the competitive case can be worse: when teamwork is allowed among agents, complexity rises to NEXP^NP
(problems solvable by a NEXP machine employing an NP set as an oracle). Much attention has
thus been paid to restricted cases, particularly those where some parts of the system dynamics behave independently. The complexity of finite-horizon Dec-POMDPs goes down (from NEXP to
NP) when agents interact only via a joint reward structure, and are otherwise independent. Unfortunately, our new results show that further reduction, based on other combinations of fully or
partially independent system dynamics are unlikely, if not impossible.
We show that if the situation were reversed, so that rewards alone are independent, the problem remains NEXP-complete. Further, we consider two other Dec-POMDP sub-classes from the literature:
(a) domains where local agent sub-problems are independent except for a (relatively small) number
of event-based interactions, and (b) those where agents only interact by influencing the set of currently
available actions. As it turns out, both types of problem are NEXP-complete as well, facts previously unknown. (In the latter case, this is a substantial increase in the known upper bound.) These
results provide further impetus to devise new tools for the analysis and classification of problem
difficulty in decentralized problem solving.
2 Basic definitions
The cooperative, decentralized partially observable Markov decision process (Dec-POMDP) is a
highly general and powerful framework, capable of representing a wide range of real-world problem
domains. It extends the basic POMDP to multiple agents, operating in conjunction based on locally
observed information about the world, and collecting a single source of reward.
Definition 1 (Dec-POMDP). A (Dec-POMDP), D, is specified by a tuple:
M = ⟨{α_i}, S, {A_i}, P, {Ω_i}, O, R, T⟩   (1)
with individual components as follows:
• Each α_i is an agent; S is a finite set of world states with a distinguished initial state s_0; A_i
is a finite set of actions, a_i, available to α_i; Ω_i is a finite set of observations, o_i, for α_i; and
T is the (finite or infinite) time-horizon of the problem.
• P is the Markovian state-action transition function. P(s, a_1, ..., a_n, s') is the probability
of going from state s to state s', given joint action ⟨a_1, ..., a_n⟩.
• O is the joint observation function for the set of agents, given each state-action transition.
O(a_1, ..., a_n, s', o_1, ..., o_n) is the probability of observing ⟨o_1, ..., o_n⟩, if joint action
⟨a_1, ..., a_n⟩ causes a transition to global state s'.
• R is the global reward function. R(s, a_1, ..., a_n) is the reward obtained for performing
joint action ⟨a_1, ..., a_n⟩ when in global state s.
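As a data structure, the tuple translates directly; a minimal Python sketch (our own encoding, with tabular dictionaries standing in for P, O, and R):

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class DecPOMDP:
    n_agents: int
    states: List[str]              # S, with states[0] the initial state s0
    actions: List[List[str]]       # A_i, one action set per agent
    observations: List[List[str]]  # Omega_i, one observation set per agent
    P: Dict[Tuple, float]          # (s, joint_action, s') -> probability
    O: Dict[Tuple, float]          # (joint_action, s', joint_obs) -> probability
    R: Dict[Tuple, float]          # (s, joint_action) -> reward
    horizon: int                   # T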
The most important sub-instance of the Dec-POMDP model is the decentralized MDP (Dec-MDP),
where the joint observation tells us everything we need to know about the system state.
Definition 2 (Dec-MDP). A decentralized Markov decision process (Dec-MDP) is a Dec-POMDP
that is jointly fully observable. That is, there exists a functional mapping, J : Ω_1 × ··· × Ω_n → S,
such that O(a_1, ..., a_n, s', o_1, ..., o_n) ≠ 0 if and only if J(o_1, ..., o_n) = s'.
In a Dec-MDP, then, the sum total of the individual agent observations provides a complete picture of the state of the environment. It is important to note, however, that this does not mean that
any individual agent actually possesses this information. Dec-MDPs are still fully decentralized in
general, and individual agents cannot count on access to the global state when choosing actions.
Definition 3 (Policies). A local policy for an agent α_i is a mapping from sequences of that agent's
observations, ō_i = ⟨o_i^1, ..., o_i^k⟩, to its actions, π_i : Ω_i* → A_i. A joint policy for n agents is a
collection of local policies, one per agent, π = ⟨π_1, ..., π_n⟩.
A solution method for a decentralized problem seeks to find some joint policy that maximizes expected value given the starting state (or distribution over states) of the problem. For complexity
purposes, the decision version of the Dec-(PO)MDP problem is to determine whether there exists
some joint policy with value of at least k.
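Concretely, a local policy is just a lookup table from observation histories to actions; a sketch (our encoding):

from typing import Dict, Tuple

# a local policy pi_i maps observation sequences to actions
LocalPolicy = Dict[Tuple[str, ...], str]

def act(policy: LocalPolicy, obs_history: Tuple[str, ...]) -> str:
    return policy[obs_history]

# a joint policy is one local policy per agent
JointPolicy = Tuple[LocalPolicy, ...]

Note that the number of observation histories, and hence the size of such a table, grows exponentially with the horizon T, which is one source of the high complexity discussed below.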
3 Bernstein's proof of NEXP-completeness
Before establishing our new claims, we briefly review the NEXP-completeness result for finite-horizon Dec-MDPs, as given by Bernstein et al. [1, 2]. First, we note that the upper bound, namely
that finite-horizon Dec-POMDPs are in NEXP, will immediately establish the same upper bound for
all the problems that we will consider. (While we do not discuss the proof here, full details can be
found in the original, or the supplemental materials, §1.)
Theorem 1 (Upper Bound). The finite-horizon, n-agent decision problem Dec-POMDP ∈ NEXP.
More challenging (and interesting) is establishing lower bounds on these problems, which is performed via our reduction from the known NEXP-complete TILING problem [4, 5]. A TILING
problem instance consists of a board size n, given concisely in log n binary bits, a set of tile-types L = {t_0, ..., t_k}, and a collection of horizontal and vertical compatibility relations between
tiles H, V ⊆ L × L. A tiling is a mapping of board locations to tile-types, t : {0, ..., n−1} ×
{0, ..., n−1} → L; such a tiling is consistent just in case (i) the origin location of the board
receives tile-type 0 (t(0, 0) = tile_0); and (ii) all adjoint tile assignments are compatible:
(∀x, y) ⟨t(x, y), t(x+1, y)⟩ ∈ H & ⟨t(x, y), t(x, y+1)⟩ ∈ V.
The TILING problem is thus to decide, for a given instance, whether such a consistent tiling exists.
Figure 1 shows an example instance and consistent solution.
[Figure 1 diagram: a TILING instance with board size n = 5, tile-types L = {0, 1, 2}, horizontal compatibility relations H, vertical compatibility relations V, and a consistent 5 × 5 tiling shown on the right.]
Figure 1: An example of the TILING problem, and a consistent solution.
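The consistency condition itself is easy to check programmatically; the hardness comes from the fact that n is given in only log n bits, so a full tiling is exponentially large in the input. A sketch of the checker (our encoding):

def consistent(n, tiling, H, V):
    # tiling: dict mapping (x, y) -> tile type; H, V: sets of allowed pairs
    if tiling[(0, 0)] != 0:
        return False
    for x in range(n):
        for y in range(n):
            if x + 1 < n and (tiling[(x, y)], tiling[(x + 1, y)]) not in H:
                return False
            if y + 1 < n and (tiling[(x, y)], tiling[(x, y + 1)]) not in V:
                return False
    return True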
The reduction transforms a given instance of TILING into a 2-agent Dec-MDP, where each agent is
queried about some location in the grid, and must answer with a tile to be placed there. By careful
design of the query and response mechanism, it is ensured that a policy with non-negative value
exists only if the agents already have a consistent tiling, thus showing the Dec-MDP to be as hard
as TILING. Together with Theorem 1, and the fact that the finite-horizon, 2-agent Dec-MDP is a
special case of the general finite-horizon Dec-POMDP, the reduction establishes Bernstein?s main
complexity result (again, details are in the supplemental materials, ?1):
Theorem 2 (NEXP-Completeness). The finite-horizon Dec-POMDP problem is NEXP-complete.
4 Factored Dec-POMDPs and independence
In general, the state transitions, observations, and rewards in a Dec-POMDP can involve probabilistic dependencies between agents. An obvious restricted subcase is thus one in which these factors
are somehow independent. Becker et al. [6, 7] have thus studied problems in which the global state-space consists of the product of local states, so that each agent has its own individual state-space. A
Dec-POMDP can then be transition independent, observation independent, or reward independent,
as the local effects given by each corresponding function are independent of one another.
Definition 4 (Factored Dec-POMDP). A factored, n-agent Dec-POMDP is a Dec-POMDP such
that the system state can be factored into n + 1 distinct components, so that S = S_0 × S_1 × ··· × S_n,
and no state-variable appears in any S_i, S_j, i ≠ j.
As with the local (agent-specific) actions, a_i, and observations, o_i, in the general Dec-POMDP
definition, we now refer to the local state, s̄_i ∈ S_i × S_0, namely that portion of the overall state-space that is either specific to agent α_i (s_i ∈ S_i), or shared among all agents (s_0 ∈ S_0). We use the
notation s_{−i} for the sequence of all state-components except that for agent α_i:
s_{−i} = (s_0, s_1, ..., s_{i−1}, s_{i+1}, ..., s_n)
(and similarly for action- or observation-sequences, a_{−i} and o_{−i}).
Definition 5 (Transition Independence). A factored, n-agent Dec-POMDP is transition independent iff the state-transition function can be separated into n + 1 distinct transition functions
P_0, ..., P_n, where, for any next state s'_i ∈ S_i,
P(s'_i | (s_0, ..., s_n), (a_1, ..., a_n), s'_{−i}) =
    P_0(s'_0 | s_0)               if i = 0;
    P_i(s'_i | s̄_i, a_i, s'_0)    otherwise.
In other words, the next local state of each agent is independent of the local states of all others, given
its previous local state and local action, and the external system features (S_0).
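In code, transition independence simply says the joint transition probability factors into the n + 1 local pieces; a sketch (our encoding, with tabular conditionals):

def joint_transition_prob(s, a, s_next, P0, P_local):
    # s, s_next: tuples (s0, s1, ..., sn); a: (a1, ..., an)
    # P0[(s0, s0')] and P_local[i][(s_bar_i, a_i, s0', si')] are probabilities
    prob = P0[(s[0], s_next[0])]
    for i, a_i in enumerate(a, start=1):
        s_bar_i = (s[0], s[i])  # agent i's local state
        prob *= P_local[i - 1][(s_bar_i, a_i, s_next[0], s_next[i])]
    return prob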
Definition 6 (Observation Independence). A factored, n-agent Dec-POMDP is observation independent iff the joint observation function can be separated into n separate probability functions
O_1, ..., O_n, where, for any local observation o_i ∈ Ω_i,
O(o_i | (a_1, ..., a_n), (s'_0, ..., s'_n), o_{−i}) = O_i(o_i | a_i, s̄'_i)
In such cases, the probability of an agent's individual observations is a function of their own local
states and actions alone, independent of the states of others, and of what those others do or observe.
Definition 7 (Reward Independence). A factored, n-agent Dec-POMDP is reward independent iff
the joint reward function can be represented by local reward functions R_1, ..., R_n, such that:
R((s_0, ..., s_n), (a_0, ..., a_n)) = f(R_1(s̄_1, a_1), ..., R_n(s̄_n, a_n))
and
R_i(s̄_i, a_i) ≤ R_i(s̄_i, a'_i)  ⇔  f(R_1, ..., R_i(s̄_i, a_i), ..., R_n) ≤ f(R_1, ..., R_i(s̄_i, a'_i), ..., R_n).
That is, joint reward is a function of local reward, constrained so that we maximize global reward if
and only if we maximize local rewards. A typical example is the additive sum:
R((s_0, ..., s_n), (a_0, ..., a_n)) = R_1(s̄_1, a_1) + ··· + R_n(s̄_n, a_n).
It is important to note that each definition applies equally to Dec-MDPs; in such cases, joint full
observability of the overall state is often accompanied by full observability at the local level.
Definition 8 (Local Full Observability). A factored, n-agent Dec-MDP is locally fully observable
iff an agent's local observation uniquely determines its local state: ∀o_i ∈ Ω_i, ∃s̄_i : P(s̄_i | o_i) = 1.
Local full observability is not equivalent to independence of observations. In particular, a problem
may be locally fully observable without being observation independent (since agents may simply
observe outcomes of non-independent joint actions). On the other hand, it is easy to show that an
observation-independent Dec-MDP must be locally fully observable (supplementary, §2).
4.1 Shared rewards alone lead to reduced complexity
It is easy to see that if a Dec-MDP (or Dec-POMDP) has all three forms of independence given
by Definitions 5-7, it can be decomposed into n separate problems, where each agent α_i works
solely within the local sub-environment S_i × S_0. Such single-agent problems are known to be P-complete, and can generally be solved efficiently to high degrees of optimality. More interesting
results follow when only some forms of independence hold. In particular, it has been shown that
Dec-MDPs with both transition- and observation-independence, but not reward-independence, are
NP-complete [8, 7]. (This result is discussed in detail in our supplementary material, §3.)
Theorem 3. A transition- and observation-independent Dec-MDP with joint reward is NP-complete.
5 Other subclasses of interactions
As our new results will now show, there is a limit to this sort of complexity reduction: other relatively
obvious combinations of independence relationships do not bear the same fruit. That is, we show
the NP-completeness result to be specific to fully transition- and observation-independent problems.
When these properties are not fully present, worst-case complexity is once again NEXP.
5.1 Reward-independent-only models are NEXP-complete
We begin with a result that is rather simple, but has not, to the best of our knowledge, been established before. We consider the inverse of the NP-complete problem of Theorem 3: a Dec-MDP with
reward-independence (Df. 7), but without transition- or observation-independence (Dfs. 5, 6).
Theorem 4. Factored, reward-independent Dec-MDPs with n agents are NEXP-complete.
Proof Sketch. For the upper bound, we simply cite Theorem 1, immediately establishing that such
problems are in NEXP. For the lower bound, we simply modify the TILING Dec-MDP from Bernstein's reduction proof so as to ensure that the reward-function factors appropriately into strictly
local rewards. (Full details are found in [9], and the supplementary materials, §4.1.)
Thus we see that in some respects, transition and observation independence are fundamental to
the reduction of worst-case complexity from NEXP to NP. When only the rewards depend upon
the actions of both agents, the problems become easier; however, when the situation is reversed,
the general problem remains NEXP-hard. This is not entirely surprising: much of the complexity
of planning in decentralized domains stems from the necessity to take account of how one's action-outcomes are affected by the actions of others, and from the complications that ensue when observed
information about the system is tied to those actions as well. The structure of rewards, while obviously key to the nature of the optimal (or otherwise) solution, is not as vital: even if agents can
separate their individual reward-functions, making them entirely independent, other dependencies
can still make the problem extremely complex.
We therefore turn to two other interesting special-case Dec-MDP frameworks, in which independent
reward functions are accompanied by restricted degrees of transition- and observation-based interaction. While some empirical evidence has suggested that these problems may be easier on average to
solve, nothing has previously been shown about their worst-case complexity. We fill in these gaps,
showing that even under such restricted dynamics, the problems remain NEXP-hard.
5.2 Event-driven-interaction models are NEXP-complete
The first model we consider is one of Becker et al. [10], which generalizes the notion of a fully
transition-independent Dec-MDP. In this model, a set of primitive events, consisting of state-action
transitions, is defined for each agent. Such events can be thought of as occasions upon which
that agent takes the given action to generate the associated state transition. Dependencies are then
introduced in the form of relationships between one agent?s possible actions in given states and
another agent?s primitive events.
While no precise worst-case complexity results have been previously proven, the authors do point out
that the class of problems has an upper-bound deterministic complexity that is exponential in the size
of the state space, |S|, and doubly exponential in the number of defined interactions. This potentially
bad news is mitigated by noting that if the number of interactions is small, then reasonably-sized
problems can still be solved. Here, we examine this issue in detail, showing that, in fact these
problems are NEXP-hard (indeed, NEXP-complete); however, when the number of dependencies is
a log-factor of the size of the problem state-space, worst-case NP-hardness is achieved.
We begin with the formal framework of the model. Again, we give all definitions in terms of Dec-POMDPs; they apply immediately to Dec-MDPs in particular.
Definition 9 (History). A history for an agent α_i in a factored, n-agent Dec-POMDP D is a sequence
of possible local states and actions, beginning in the agent's initial state: Φ_i = [s̄_i^0, a_i^0, s̄_i^1, a_i^1, ...].
When a problem has a finite time-horizon T, all possible complete histories will be of the form
Φ_i^T = [s̄_i^0, a_i^0, s̄_i^1, a_i^1, ..., s̄_i^{T−1}, a_i^{T−1}, s̄_i^T].
Definition 10 (Events in a History). A primitive event e = (s̄_i, a_i, s̄'_i) for an agent α_i is a triple
representing a transition between two local states, given some action a_i ∈ A_i. An event E =
{e_1, e_2, ..., e_h} is a set of primitive events. A primitive event e occurs in the history Φ_i, written
Φ_i ⊨ e, if and only if the triple e is a sub-sequence of the sequence Φ_i. An event E occurs in the
history Φ_i, written Φ_i ⊨ E, if and only if some component occurs in that history: ∃e ∈ E : Φ_i ⊨ e.
Events can therefore be thought of disjunctively. That is, they specify a set of possible state-action
transitions from a Dec-POMDP, local to one of its agents. If the historical sequence of state-action
transitions that the agent encounters contains any one of those particular transitions, then the history
satisfies the overall event. Events can thus be used, for example, to represent such things as taking a
particular action in any one of a number of states over time, or taking one of several actions at some
particular state. For technical reasons, namely the use of a specialized solution algorithm, these
events are usually restricted in structure, as follows.
Definition 11 (Proper Events). A primitive event e is proper if it occurs at most once in any given history. That is, for any history Θ_i, if Θ_i = Θ_i^1 e Θ_i^2 then neither sub-history contains e: ¬(Θ_i^1 ⊢ e) ∧ ¬(Θ_i^2 ⊢ e). An event E is proper if it consists of proper primitive events that are mutually exclusive, in that no two of them both occur in any history:

∀Θ_i ¬∃x, y : (x ≠ y) ∧ (e_x ∈ E) ∧ (e_y ∈ E) ∧ (Θ_i ⊢ e_x) ∧ (Θ_i ⊢ e_y).
Proper primitive events can be used, for instance, to represent actions that take place at particular times (building the time into the local state ŝ_i ∈ e). Since any given point in time can only occur once in any history, the events involving such time-steps will be proper by default. A proper event E can then be formed by collecting all the primitive events involving some single time-step, or by taking all possible primitive events involving an unrepeatable action.
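To make Definitions 10 and 11 concrete, here is a minimal Python sketch (the paper itself contains no code; the flat-list history encoding and all names are illustrative assumptions). Properness is really a claim over all possible histories, so the last helper, which checks only a supplied collection, is an approximation:

def primitive_occurs(e, history):
    """True iff primitive event e = (s, a, s_next) occurs in the history.

    A history is encoded as a flat list [s0, a0, s1, a1, s2, ...]; e occurs
    iff it appears as a consecutive triple (s_t, a_t, s_{t+1}).
    """
    s, a, s_next = e
    return any((history[t], history[t + 1], history[t + 2]) == (s, a, s_next)
               for t in range(0, len(history) - 2, 2))

def event_occurs(E, history):
    """An event E (a set of primitive events) occurs iff some member occurs."""
    return any(primitive_occurs(e, history) for e in E)

def is_proper_primitive(e, histories):
    """e is proper over the supplied histories if it occurs at most once in each."""
    s, a, s_next = e
    def count(history):
        return sum((history[t], history[t + 1], history[t + 2]) == (s, a, s_next)
                   for t in range(0, len(history) - 2, 2))
    return all(count(h) <= 1 for h in histories)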
Our new model is then a Dec-MDP with:
1. Two (2) agents.¹
2. A factored state-space: S = S0 × S1 × S2.
3. Local full observability: each agent α_i can determine its own portion of the state-space, ŝ_i ∈ S0 × S_i, exactly.
4. Independent (additive) rewards: R(⟨s0, s1, s2⟩, a1, a2) = R1(ŝ1, a1) + R2(ŝ2, a2).
Interactions between agents are given in terms of a set of dependencies between certain state-action
transitions for one agent, and events featuring transitions involving the other agent. Thus, if a history
contains one of the primitive events from the latter set, this can have some direct effect upon the
transition-model for the first agent, introducing probabilistic transition-dependencies.
Definition 12 (Dependency). A dependency is a pair d_{ij}^k = ⟨E_i^k, D_j^k⟩, where E_i^k is a proper event defined over primitive events for agent α_i, and D_j^k is a set of state-action pairs ⟨ŝ_j, a_j⟩ for agent α_j, such that each pair occurs in at most one dependency:

¬(∃ k, k′, s_j, a_j) (k ≠ k′) ∧ ⟨s_j, a_j⟩ ∈ D_j^k ∈ d_{ij}^k ∧ ⟨s_j, a_j⟩ ∈ D_j^{k′} ∈ d_{ij}^{k′}.
Such a dependency is thus a collection of possible actions that agent α_j can take in one of its local states, each of which depends upon whether the other agent α_i has made one of the state-transitions in its own set of primitive events. Such structures can be used to model, for instance, cases where one agent cannot successfully complete some task until the other agent has completed an enabling sub-task, or where the precise outcome depends upon the groundwork laid by the other agent.
Definition 13 (Satisfying Dependencies). A dependency d_{ij}^k = ⟨E_i^k, D_j^k⟩ is satisfied when the current history for the enabling agent α_i contains the relevant event: Θ_i ⊢ E_i^k. For any state-action pair ⟨ŝ_j, a_j⟩, we define a Boolean indicator variable b_{ŝ_j a_j}, which is true if and only if some dependency that contains the pair is satisfied:

b_{ŝ_j a_j} = 1 if (∃ d_{ij}^k = ⟨E_i^k, D_j^k⟩) such that ⟨ŝ_j, a_j⟩ ∈ D_j^k and Θ_i ⊢ E_i^k, and b_{ŝ_j a_j} = 0 otherwise.
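As an illustration of Definition 13 (a hypothetical encoding that reuses event_occurs from the sketch above, not the authors' implementation), the indicator variable can be computed directly from the enabling agent's current history:

def indicator(s_j, a_j, dependencies, history_i):
    """b_{s_j a_j}: 1 iff some dependency containing (s_j, a_j) is satisfied.

    `dependencies` is a list of pairs (E, D), where E is an event (a set of
    primitive events) for the enabling agent i and D is a set of
    (state, action) pairs for agent j; a dependency is satisfied when E
    occurs in agent i's current history.
    """
    for E, D in dependencies:
        if (s_j, a_j) in D and event_occurs(E, history_i):
            return 1
    return 0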
The existence of dependencies allows us to factor the overall state-transition function into two parts, each of which depends only on an agent's local state, action, and relevant indicator variable.
Definition 14 (Local Transition Function). The transition function for our Dec-MDP is factored into two functions, P1 and P2, each defining the distribution over next possible local states: P_i(ŝ_i′ | ŝ_i, a_i, b_{ŝ_i a_i}). We can thus write P_i(ŝ_i, a_i, b_{ŝ_i a_i}, ŝ_i′) for this transition probability.
When agents take some action in a state for which dependencies exist, they observe whether or not the related events have occurred; that is, after taking any action a_j in state ŝ_j, they can observe the state of the indicator variable b_{ŝ_j a_j}.
With these definitions in place, we can now show that the worst-case complexity of the event-based
problems is the same as the general Dec-POMDP class.
Theorem 5. Factored, finite-horizon, n-agent Dec-MDPs with local full observability, independent
rewards, and event-driven interactions are NEXP-complete.
Proof Sketch. Again, the upper bound is immediate from Theorem 1, since the event-based structure
is just a specific case of general reward-dependence, and such models can always be converted into
Dec-MDPs without any events. For the lower bound, we again provide a reduction from TILING,
constrained to our special case. Local reward independence, which was not present in the original
problem, is ensured by using event dependencies to affect future rewards of the other agent. Thus,
local immediate rewards remain dependent only upon the actions of the individual agent, but the
state in which that agent finds itself (and so the options available to its reward function) can depend
upon events involving the other agent. (See [9] and supplemental materials, §4.2.)
¹ The model can be extended to n agents with little real difficulty. Since we will show that the 2-agent case is NEXP-hard, however, this will suffice for the general claim.
5.2.1 A special, NP-hard case
The prior result requires allowing the number of dependencies in the problem to grow as a factor of
log n, for a TILING grid of size (n × n). Since the size of the state-space S in the reduced Dec-MDP
is also O(log n), the number of dependencies is O(|S|). Thus, the NEXP-completeness result holds
for any event-based Dec-MDP where the number of dependencies is linear in the state-space. When
we are able to restrict the number of dependencies further, however, we can do better.
Theorem 6. Factored, finite-horizon, n-agent Dec-MDPs with local full observability, independent rewards, and event-driven interactions are solvable in nondeterministic polynomial time (NP) if the number of dependencies is O(log |S|), where S is the state-set of the problem.
Proof Sketch. As shown by Becker [10], we can use the Coverage Set algorithm to generate an
optimal policy for a problem of this type, in time that is exponential in the number of dependencies.
Clearly, if this number is logarithmic in the size of the state-set, then solution time is polynomial in
the problem size. (See [9] and supplemental materials, §4.2.1.)
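To spell out the arithmetic behind this sketch (my gloss: the running-time form p(N) · 2^{O(k)} for problem size N and k dependencies is assumed here, not quoted from [10]): if k ≤ c log_2 |S| for a constant c, then

2^{k} \le 2^{c \log_2 |S|} = |S|^{c},

so the exponential-in-dependencies factor collapses to a polynomial in |S|, and the overall bound is polynomial in the problem size.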
5.2.2 Discussion of the results
These results are interesting for two reasons. First, NEXP-completeness of the event-based case,
even with independent rewards and local full observability (Theorem 5), means that many interesting problems are potentially intractable. Becker et al. [10] show how to use event-dependencies
to represent common structures in the TAEMS task modeling language, used in many real-world
domains [11, 12, 13]; our complexity analysis thus extends to such practical problems. Second,
isolating where complexity is lower can help determine what task structures and agent interrelationships lead to intractability. In domains where the dependency structure can be kept relatively simple,
it may be possible to derive optimal solutions feasibly. Both subjects are worth further study.
5.3 State-dependent-action models are NEXP-complete
Guo and Lesser [14, 15, 16] consider another specialized Dec-MDP subclass, with apparently even
more restricted types of interaction. Agent state-spaces are again separate, and all action-transitions
and rewards are independent. Such problems are not wholly decoupled, however, as the actions
available to each agent at any point depend upon the global system state. Thus, agents interact by
making choices that restrict or broaden the range of actions available to others.
Definition 15 (Dec-MDP with State-Dependent Actions). An n-agent Dec-MDP with state-dependent actions is a tuple D = ⟨S0, {S_i}, {A_i}, {B_i}, {P_i}, {R_i}, T⟩, where:
• S0 is a set of shared states, and S_i is the state-space of agent α_i, with global state space S = S0 × S1 × ⋯ × S_n, and initial state s^0 ∈ S; each A_i is the action-set for α_i; T ∈ ℕ is the finite time-horizon of the problem.
• Each B_i : S → 2^{A_i} is a mapping from global states of the system to some set of available actions for each agent α_i. For all s ∈ S, B_i(s) ≠ ∅.
• P_i : (S0 × S_i) × A_i × (S0 × S_i) → [0, 1] is the state-transition function over local states for α_i. The global transition function is simply the product of the individual P_i.
• R_i : (S0 × S_i) → ℝ is a local reward function for agent α_i. We let the global reward function be the sum of the local rewards.
Note that there need be no observations in such a problem; given local full observability, each agent
observes only its local states. Furthermore, it is presumed that each agent can observe its own
available actions in any state; a local policy is thus a mapping from local states to available actions.
For such cases, Guo presents a planning algorithm based on heuristic action-set pruning, along
with a learning algorithm. While empirical results show that these methods are capable of solving
potentially large instances, we again know very little about the analytical worst-case difficulty of
problems with state-dependent actions. An NP-hardness lower bound is given [14] for the overall
class, by reducing a normal-form game to the state-dependent model, but this is potentially quite
weak, since no upper bound has been established, and even the operative algorithmic complexity
of the given solution method is not well understood. We address this situation, showing that the
problem is also just as hard as the general case.
Theorem 7. Factored, finite-horizon, n-agent Dec-MDPs with local full observability, independent
rewards, and state-dependent action-sets are NEXP-complete.
Proof Sketch. Once more, we rely upon the general upper bound on the complexity of Dec-POMDPs
(Theorem 1). The lower bound is by another TILING reduction. Again, we ?record? actions of each
agent in the state-space of the other, ensuring purely local rewards and local full observability. This
time, however, we use the fact that action-sets depend upon the global state (rather than events) to
enforce the desired dynamics. That is, we add special state-dependent actions that, based on their
availability (or lack thereof), affect each agent's local reward. (See [9], and supplemental §4.3.)
5.3.1 Discussion of the result
Guo and Lesser [16, 14] were able to show that deciding whether a decentralized problem with
state-based actions had an equilibrium solution with value greater than k was NP-hard. It was not
ascertained whether or not this lower bound was tight, however; this remained a significant open
question. Our results show that this bound was indeed too low. Since an optimal joint policy will be
an equilibrium for the special case of additive rewards, the general problem can be no easier.
This is interesting, for reasons beyond the formal. Such decentralized problems indeed appear to be
quite simple in structure, requiring wholly independent rewards and action-transitions, so that agents
can only interact with one another via choices that affect which actions are available. (A typical
example involves two persons acting completely regardless of one another, except for the existence
of a single rowboat, used for crossing a stream; if either agent uses the rowboat to get to the other
side, then that action is no longer available to the other.) Such problems are intuitive, and common,
and not all of them are hard to solve, obviously. At the same time, however, our results show that
the same structures can be intractable in the worst case, establishing that even seemingly simple
interactions between agents can lead to prohibitively high complexity in decentralized problems.
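To make this style of interaction concrete, the rowboat scenario can be encoded as a state-dependent action-set mapping B_i. The following Python sketch is purely illustrative (the encoding and all names are hypothetical, not taken from [14, 15, 16]):

BOAT_AT_1, BOAT_AT_2 = "boat_at_1", "boat_at_2"

def B(i, global_state):
    """Available action-set for agent i as a function of the *global* state.

    Both agents can always 'work' locally; 'cross' (taking the rowboat) is
    available to an agent only while the boat sits on that agent's bank.
    Once either agent crosses, the shared state component changes and
    'cross' disappears from the other agent's action-set, which is the only
    coupling in the problem.  (global_state[0] is assumed to be the shared
    component S0.)
    """
    s0 = global_state[0]
    actions = {"work"}
    if (i == 1 and s0 == BOAT_AT_1) or (i == 2 and s0 == BOAT_AT_2):
        actions.add("cross")
    return actions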
6 Conclusions
This work addresses a number of existing models for decentralized problem-solving. In each case,
the models restrict agent interaction in some way, in order to produce a special sub-case of the
general Dec-POMDP problem. It has been known for some time that systems where agents act
entirely independently, but share rewards, have reduced worst-case complexity. We have shown that
this does not apply to other variants, where we relax the independence requirements even only a
little. In all of the cases addressed, the new problem variants are as hard as the general case. This
fact, combined with results showing many other decentralized problem models to be equivalent to
the general Dec-POMDP model, or strictly harder [17], reveals the essential difficulty of optimal
planning in decentralized settings. Together, these results begin to suggest that optimal solutions to
many common multiagent problems must remain out of reach; in turn, this indicates that we must
look to approximate or heuristic methods, since such problems are so prevalent in practice.
At the same time, it must be stressed that the NEXP-complexity demonstrated here is a worst-case
measure. Not all decentralized domains are going to be intractable, and indeed the event-based
and action-set models have been shown to yield to specialized solution methods in many cases,
enabling us to solve interesting instances in reasonable amounts of time. When the number of action-dependencies is small, or there are few ways that agents can affect available action-sets, it may well
be possible to provide optimal solutions effectively. That is, the high worst-case complexity is no
guarantee that average-case difficulty is likewise high. This remains a vital open problem in the field.
While establishing the average case is often difficult, if not impossible (given that the notion of an "average" planning or decision problem is often ill-defined), it is still worth serious consideration.
Acknowledgments
This material is based upon work supported by the Air Force Office of Scientific Research
under Award No. FA9550-05-1-0254. Any opinions, findings, and conclusions or recommendations
expressed in this publication are those of the authors and do not necessarily reflect the views of
AFOSR. The first author also acknowledges the support of the Andrew W. Mellon Foundation CTW
Computer Science Consortium Fellowship.
References
[1] Daniel S. Bernstein, Shlomo Zilberstein, and Neil Immerman. The complexity of decentralized control of Markov decision processes. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 32-37, Stanford, California, 2000.
[2] Daniel S. Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4):819-840, 2002.
[3] Judy Goldsmith and Martin Mundhenk. Competition adds complexity. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 561-568. MIT Press, Cambridge, MA, 2008.
[4] Harry R. Lewis. Complexity of solvable cases of the decision problem for predicate calculus. In Proceedings of the Nineteenth Symposium on the Foundations of Computer Science, pages 35-47, Ann Arbor, Michigan, 1978.
[5] Christos H. Papadimitriou. Computational Complexity. Addison-Wesley, Reading, Massachusetts, 1994.
[6] Raphen Becker, Shlomo Zilberstein, Victor Lesser, and Claudia V. Goldman. Transition-independent decentralized Markov decision processes. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 41-48, Melbourne, Australia, 2003.
[7] Raphen Becker, Shlomo Zilberstein, Victor Lesser, and Claudia V. Goldman. Solving transition independent decentralized MDPs. Journal of Artificial Intelligence Research, 22:423-455, November 2004.
[8] Claudia V. Goldman and Shlomo Zilberstein. Decentralized control of cooperative systems: Categorization and complexity analysis. Journal of Artificial Intelligence Research, 22:143-174, 2004.
[9] Martin Allen. Agent Interactions in Decentralized Environments. PhD thesis, University of Massachusetts, Amherst, Massachusetts, 2009. Available at http://scholarworks.umass.edu/open_access_dissertations/1/.
[10] Raphen Becker, Victor Lesser, and Shlomo Zilberstein. Decentralized Markov decision processes with event-driven interactions. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 302-309, New York, New York, 2004.
[11] Keith S. Decker and Victor R. Lesser. Quantitative modeling of complex environments. International Journal of Intelligent Systems in Accounting, Finance and Management, 2:215-234, 1993.
[12] V. Lesser, K. Decker, T. Wagner, N. Carver, A. Garvey, B. Horling, D. Neiman, R. Podorozhny, M. Nagendra Prasad, A. Raja, R. Vincent, P. Xuan, and X. Q. Zhang. Evolution of the GPGP/TAEMS domain-independent coordination framework. Autonomous Agents and Multi-Agent Systems, 9(1):87-143, 2004.
[13] Tom Wagner, Valerie Guralnik, and John Phelps. TAEMS agents: Enabling dynamic distributed supply chain management. Journal of Electronic Commerce Research and Applications, 2:114-132, 2003.
[14] AnYuan Guo. Planning and Learning for Weakly-Coupled Distributed Agents. PhD thesis, University of Massachusetts, Amherst, 2006.
[15] AnYuan Guo and Victor Lesser. Planning for weakly-coupled partially observable stochastic games. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1715-1716, Edinburgh, Scotland, 2005.
[16] AnYuan Guo and Victor Lesser. Stochastic planning for weakly-coupled distributed agents. In Proceedings of the Fifth Joint Conference on Autonomous Agents and Multiagent Systems, pages 326-328, Hakodate, Japan, 2006.
[17] Sven Seuken and Shlomo Zilberstein. Formal models and algorithms for decentralized decision making under uncertainty. Autonomous Agents and Multi-Agent Systems, 17(2):190-250, 2008.
3,154 | 3,858 | Efficient Moments-based Permutation Tests
Chunxiao Zhou
Dept. of Electrical and Computer Eng.
University of Illinois at Urbana-Champaign
Champaign, IL 61820
[email protected]
Huixia Judy Wang
Dept. of Statistics
North Carolina State University
Raleigh, NC 27695
[email protected]
Yongmei Michelle Wang
Depts. of Statistics, Psychology, and Bioengineering
University of Illinois at Urbana-Champaign
Champaign, IL 61820
[email protected]
Abstract
In this paper, we develop an efficient moments-based permutation test
approach to improve the test's computational efficiency by approximating
the permutation distribution of the test statistic with Pearson distribution
series. This approach involves the calculation of the first four moments of
the permutation distribution. We propose a novel recursive method to derive
these moments theoretically and analytically without any permutation.
Experimental results using different test statistics are demonstrated using
simulated data and real data. The proposed strategy takes advantage of
nonparametric permutation tests and parametric Pearson distribution
approximation to achieve both accuracy and efficiency.
1 Introduction
Permutation tests are flexible nonparametric alternatives to parametric tests in small
samples, or when the distribution of a test statistic is unknown or mathematically intractable.
In permutation tests, no statistical assumptions other than exchangeability are required.
The p-values can be obtained by using the permutation distribution. Permutation tests are
appealing in many biomedical studies, which often have limited observations with unknown
distribution. They have been used successfully in structural MR image analysis [1, 2, 3], in
functional MR image analysis [4], and in 3D face analysis [5].
There are three common approaches to construct the permutation distribution [6, 7, 8]: (1)
exact permutation enumerating all possible arrangements; (2) approximate permutation
based on random sampling from all possible permutations; (3) approximate permutation
using the analytical moments of the exact permutation distribution under the null hypothesis.
The main disadvantage of the exact permutation is the computational cost, due to the
factorial increase in the number of permutations with the increasing number of subjects. The
second technique often gives inflated type I errors caused by random sampling. When a large
number of repeated tests are needed, the random permutation strategy is also
computationally expensive to achieve satisfactory accuracy. Regarding the third approach,
the exact permutation distribution may not have moments or moments with tractability. In
most applications, it is not the existence but the derivation of moments that limits the third
approach.
To the best of our knowledge, there is no systematic and efficient way to derive the moments
of the permutation distribution. Recently, Zhou [3] proposed a solution by converting the
permutation of data to that of the statistic coefficients that are symmetric to the permutation.
Since the test statistic coefficients usually have simple presentations, it is easier to track the
permutation of the test statistic coefficients than that of data. However, this method requires
the derivation of the permutation for each specific test statistic, which is not accessible to
practical users.
In this paper, we propose a novel strategy by employing a general theoretical method to
derive the moments of the permutation distribution of any weighted v-statistics, for both
univariate and multivariate data. We note that any moments of the permutation distribution
for weighted v-statistics [9] can be considered as a summation of the product of data
function term and index function term over a high dimensional index set and all possible
permutations. Our key idea is to divide the whole index set into several permutation
equivalent (see Definition 2) index subsets such that the summation of the data/index
function term over all permutations is invariant within each subset and can be calculated
without conducting any permutation. Then we can obtain the moments by summing up
several subtotals. The proposed method can be extended to equivalent weighted v-statistics
by replacing them with monotonic weighted v-statistics. This is due to the fact that only the
order of test statistics of all permutations matters for obtaining the p-values, so that the
monotonic weighted v-statistics shares the same p-value with the original test statistic. Given
the first four moments, the permutation distribution can be well fitted by Pearson
distribution series. The p-values are then obtained without conducting any real permutation.
For multiple comparison of two-group difference, given the sample sizes n1 = 21 and n2 = 21 and the number of tests m = 2,000, we need to conduct m · (n1 + n2)!/(n1! · n2!) ≈ 1.1 × 10^15 permutations for the exact permutation test. Even for 20,000 random permutations per test, we still need m · 20,000 = 4 × 10^7 permutations. Alternatively, our moments-based permutation method using Pearson distribution approximation only involves the calculation of the first four analytically-derived moments of the exact permutation distributions to achieve high accuracy (see Section 3). Instead of calculating test statistics in factorial scale with exact permutation, our moments-based permutation only requires computation of polynomial order. For example, the computational costs for the univariate mean difference test statistic and the modified multivariate Hotelling's T² test statistic [8] are O(n) and O(n³), respectively, where n = n1 + n2.
2 Methodology
In this section, we shall mainly discuss how to calculate the moments of the permutation
distribution for weighted v-statistics. For other test statistics, a possible solution is to
replace them with their equivalent weighted v-statistics by monotonic transforms. The
detailed discussion about equivalent test statistics can be found in [7, 8, 10].
2.1 Computational challenge
Let us first look at a toy example. Suppose we have two-group univariate data x = (x_1, …, x_{n1}, x_{n1+1}, …, x_{n1+n2}), where the first n1 elements are in group A and the rest, n2, are in group B. For comparison of the two groups, the hypothesis is typically constructed as H_0: μ_A = μ_B vs. H_a: μ_A ≠ μ_B, where μ_A, μ_B are the population means of the groups A and B, respectively. Define x̄_A = Σ_{i=1}^{n1} x_i / n1 and x̄_B = Σ_{i=n1+1}^{n} x_i / n2 as the sample means of the two groups, where n = n1 + n2. We choose the univariate group mean difference statistic as the test statistic, i.e.,

T(x) = x̄_A − x̄_B = Σ_{i=1}^{n} w(i) x_i,

where the index function w(i) = 1/n1 if i ∈ {1, …, n1} and w(i) = −1/n2 if i ∈ {n1+1, …, n}. Then the total number of all possible permutations of {1, …, n} is n!. To calculate the fourth moment of the permutation distribution,

E_π(T^4(x)) = (1/n!) Σ_{π∈S_n} ( Σ_{i=1}^{n} w(i) x_{π(i)} )^4
            = (1/n!) Σ_{π∈S_n} Σ_{i1=1}^{n} Σ_{i2=1}^{n} Σ_{i3=1}^{n} Σ_{i4=1}^{n} w(i1) w(i2) w(i3) w(i4) x_{π(i1)} x_{π(i2)} x_{π(i3)} x_{π(i4)},

where π is the permutation operator and the symmetric group S_n [11] includes all distinct permutations. The above example shows that the moment calculation can be considered as a summation over all possible permutations and a large index set. It is noticeable that the computational challenge here is to go through the factorial level permutations and polynomial level indices.
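As a deliberately brute-force reference point, the permutation moments of this toy statistic can be computed by enumerating all n! permutations for tiny n. The helper below is my own illustration (in Python, not from the paper) of exactly the factorial cost that the method developed next avoids:

from itertools import permutations

def perm_moment(x, n1, r):
    """r-th moment of T = mean(group A) - mean(group B) over all n! relabelings."""
    n = len(x)
    w = [1.0 / n1] * n1 + [-1.0 / (n - n1)] * (n - n1)
    total, count = 0.0, 0
    for p in permutations(range(n)):
        T = sum(w[i] * x[p[i]] for i in range(n))
        total += T ** r
        count += 1
    return total / count

# Example: E_pi(T^4) for a 3-vs-3 split (720 permutations).
print(perm_moment([0.2, 1.1, -0.5, 0.9, 1.4, 0.3], n1=3, r=4))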
2.2 Partition the index set
In this paper, we assume that the test statistic T can be expressed as a weighted v-statistic of degree d [9], that is,

T(x) = Σ_{i_1=1}^{n} ⋯ Σ_{i_d=1}^{n} w(i_1, …, i_d) h(x_{i_1}, …, x_{i_d}),

where x = (x_1, x_2, …, x_n)^T is a data vector with n observations, w is a symmetric index function, and h is a symmetric data function, i.e., invariant under permutation of (i_1, …, i_d). Though the symmetry property is not required for our method, it helps reduce the computational cost. Here, each observation x_k can be either univariate or multivariate. In the above toy example, d = 1 and h is the identity function. Therefore, the r-th moment of the test statistic from the permuted data is:

E_π(T^r(x)) = E_π( Σ_{i_1,…,i_d} w(i_1, …, i_d) h(x_{π(i_1)}, …, x_{π(i_d)}) )^r
            = E_π[ Σ_{i_1^(1),…,i_d^(1),…,i_1^(r),…,i_d^(r)} { Π_{k=1}^{r} w(i_1^(k), …, i_d^(k)) · Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) } ].

Then we can exchange the summation order of permutations and that of indices,

E_π(T^r(x)) = Σ_{i_1^(1),…,i_d^(1),…,i_1^(r),…,i_d^(r)} { ( Π_{k=1}^{r} w(i_1^(k), …, i_d^(k)) ) · E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) }.

Thus any moment of the permutation distribution can be considered as a summation of the product of a data function term and an index function term over a high dimensional index set and all possible permutations.

Since all possible permutations map any index value between 1 and n to all possible index values from 1 to n with equal probability, the summation of the data function over all permutations, E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ), is only related to the equal/unequal relationships among indices. It is natural to divide the whole index set U = {i_1, …, i_d}^r = {(i_1^(1), …, i_d^(1)), …, (i_1^(r), …, i_d^(r))} into the union of disjoint index subsets within which E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) is invariant.
Definition 1. Since h is a symmetric function, two index elements (i_1, …, i_d) and (j_1, …, j_d) are said to be equivalent if they are the same up to the order. For example, for d = 3, (1, 4, 5) = (1, 5, 4) = (4, 1, 5) = (4, 5, 1) = (5, 1, 4) = (5, 4, 1).
Definition 2. Two indices {(i_1^(1), …, i_d^(1)), …, (i_1^(r), …, i_d^(r))} and {(j_1^(1), …, j_d^(1)), …, (j_1^(r), …, j_d^(r))} are said to be permutation equivalent, written ≅, if there exists a permutation π ∈ S_n such that {(π(i_1^(1)), …, π(i_d^(1))), …, (π(i_1^(r)), …, π(i_d^(r)))} = {(j_1^(1), …, j_d^(1)), …, (j_1^(r), …, j_d^(r))}. Here "=" means they have the same index elements by Definition 1. For example, for d = 2, n = 4, r = 2, {(1, 2), (2, 3)} ≅ {(2, 4), (1, 4)}, since we can apply π: 1→1, 2→4, 3→2, 4→3, such that {(π(1), π(2)), (π(2), π(3))} = {(1, 4), (4, 2)} = {(2, 4), (1, 4)}. As a result, the whole index set for d = 2, r = 2, can be divided into seven permutation equivalent subsets, [{(1, 1), (1, 1)}], [{(1, 1), (1, 2)}], [{(1, 1), (2, 2)}], [{(1, 2), (1, 2)}], [{(1, 1), (2, 3)}], [{(1, 2), (1, 3)}], [{(1, 2), (3, 4)}],
where [ ] denotes the equivalence class. Note that the number of the permutation equivalent
subsets is only related to the order of the weighted v-test statistic d and the order of moment r,
but not related to the data size n, and it is small for the first several moments calculation
(small r) with low order test statistics (small d).
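The classes can also be enumerated mechanically. The sketch below (my own illustration; it brute-forces the relabeling, which is feasible only for small d and r) canonicalizes each index collection and confirms the count of seven for d = 2, r = 2:

from itertools import permutations, product

def canonical(groups):
    """Canonical form of a collection of d-tuples under index relabeling.

    Each d-tuple is unordered (h is symmetric) and the collection itself is
    unordered, so we sort within and across tuples, minimizing over all
    relabelings of the index values that appear.
    """
    labels = sorted({i for g in groups for i in g})
    best = None
    for perm in permutations(range(1, len(labels) + 1)):
        m = dict(zip(labels, perm))
        cand = tuple(sorted(tuple(sorted(m[i] for i in g)) for g in groups))
        if best is None or cand < best:
            best = cand
    return best

# All pairs {(i1, i2), (j1, j2)} with indices in 1..4 fall into seven classes.
U = product(product(range(1, 5), repeat=2), repeat=2)
print(len({canonical(g) for g in U}))   # prints 7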
Using the permutation equivalent relationship defined in Definition 2, the whole index set U
can be partitioned into several permutation equivalent index subsets. Then we can calculate
the r-th moment by summing up subtotals of all index subsets. This procedure can be done
without any real permutations based on Proposition 1 and Proposition 2 below.
Proposition 1. For brevity, write I = {(i_1^(1), …, i_d^(1)), …, (i_1^(r), …, i_d^(r))} for an index, and J = {(j_1^(1), …, j_d^(1)), …, (j_1^(r), …, j_d^(r))} likewise. We claim that the data function sum E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) is invariant within each equivalent index subset, and

E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) = ( Σ_{J ∈ [I]} Π_{k=1}^{r} h(x_{j_1^(k)}, …, x_{j_d^(k)}) ) / card([I]),

where card([I]) is the number of indices falling into the permutation equivalent index subset [I].
Proof sketch: Since all indices in the same permutation equivalent subset are equivalent with respect to the symmetric group S_n,

E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) = (1/n!) Σ_{π∈S_n} Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))})
= ( Σ_{J ∈ [I]} ( Π_{k=1}^{r} h(x_{j_1^(k)}, …, x_{j_d^(k)}) ) · n! / card([I]) ) / n!
= ( Σ_{J ∈ [I]} Π_{k=1}^{r} h(x_{j_1^(k)}, …, x_{j_d^(k)}) ) / card([I]).
Proposition 2. Thus we can obtain the r-th moment by summing up the products of the index partition sum w_λ and the data partition sum h_λ over all permutation equivalent subsets, i.e.,

E_π(T^r(x)) = Σ_{λ∈[U]} w_λ h_λ,

where λ = [I] is any permutation equivalent subset of the whole index set U, and [U] denotes the set of all distinct permutation equivalent classes of U. The data partition sum is

h_λ = ( Σ_{J ∈ λ} Π_{k=1}^{r} h(x_{j_1^(k)}, …, x_{j_d^(k)}) ) / card(λ),

and the index partition sum is

w_λ = Σ_{J ∈ λ} Π_{k=1}^{r} w(j_1^(k), …, j_d^(k)).
Proof sketch: With Proposition 1, E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) is invariant within each equivalent index subset; therefore,

E_π(T^r(x)) = Σ_{I} { ( Π_{k=1}^{r} w(i_1^(k), …, i_d^(k)) ) E_π( Π_{k=1}^{r} h(x_{π(i_1^(k))}, …, x_{π(i_d^(k))}) ) }
= Σ_{λ∈[U]} Σ_{J ∈ λ} { ( Π_{k=1}^{r} w(j_1^(k), …, j_d^(k)) ) E_π( Π_{k=1}^{r} h(x_{π(j_1^(k))}, …, x_{π(j_d^(k))}) ) }
= Σ_{λ∈[U]} Σ_{J ∈ λ} { ( Π_{k=1}^{r} w(j_1^(k), …, j_d^(k)) ) h_λ } = Σ_{λ∈[U]} w_λ h_λ.

Since both the data partition sum h_λ and the index partition sum w_λ can be calculated by summation over all distinct indices within each permutation equivalent index subset, no real permutation is needed for computing the moments.
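For the toy statistic (d = 1, r = 2) there are only two permutation equivalent classes, [{(1), (1)}] and [{(1), (2)}]. The sketch below is my own worked instance of Proposition 2 (not code from the paper); it evaluates E_pi(T^2) = sum over classes of w_lambda * h_lambda in O(n) time and agrees with the brute-force perm_moment above:

def second_moment_partition(x, n1):
    """E_pi(T^2) via Proposition 2 for the d = 1 mean-difference statistic."""
    n = len(x)
    w = [1.0 / n1] * n1 + [-1.0 / (n - n1)] * (n - n1)
    sw, sw2 = sum(w), sum(v * v for v in w)
    sx, sx2 = sum(x), sum(v * v for v in x)
    # Class [{(1),(1)}]: card = n; w-sum = sum_i w(i)^2; h = sum_j x_j^2 / n.
    term_eq = sw2 * (sx2 / n)
    # Class [{(1),(2)}]: card = n(n-1); unequal-index sums via inclusion-exclusion.
    w_neq = sw * sw - sw2
    h_neq = (sx * sx - sx2) / (n * (n - 1))
    return term_eq + w_neq * h_neq

x = [0.2, 1.1, -0.5, 0.9, 1.4, 0.3]
print(second_moment_partition(x, n1=3))   # matches perm_moment(x, 3, 2)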
2.3 Recursive calculation
Direct calculation of the data partition sum and the index partition sum leads to traversing the whole index set, so the computational cost is O(n^{dr}). In the following, we shall discuss how to reduce the cost by a recursive calculation algorithm.
Definition 3. Let λ = [{(i_1^(1), …, i_d^(1)), …, (i_1^(r), …, i_d^(r))}] and ν = [{(j_1^(1), …, j_d^(1)), …, (j_1^(r), …, j_d^(r))}] be two different permutation equivalent subsets of the whole index set U. We say that the partition order of ν is less than that of λ, i.e., ν ≺ λ, if λ can be converted to ν by merging two or more index elements. For instance, ν = [{(1, 1), (2, 3)}] ≺ λ = [{(1, 2), (3, 4)}], since by merging 1 and 2, λ is converted to [{(1, 1), (3, 4)}] = [{(1, 1), (2, 3)}]. [{(1, 1), (3, 4)}] and [{(1, 1), (2, 3)}] are the same permutation equivalent index subset because we can apply the permutation π: 1→1, 2→4, 3→3, 4→2 to [{(1, 1), (3, 4)}]. Note that the merging operation may not be unique; for example, ν can also be obtained from λ by merging 3 and 4. To clarify the concept of partition order, we list the order of all partitions for d = 2 and r = 2 in Figure 1. The partition order of a permutation equivalent subset ν is said to be lower than that of another permutation equivalent subset λ if there is a directed path from λ to ν.
[Figure 1 here: a directed graph on the seven permutation equivalent subsets for d = 2, r = 2, with edges from higher-order to lower-order partitions.]
Figure 1: Order of all permutation equivalent subsets when d = 2 and r = 2.
The difficulty in computing the data partition sum and the index partition sum comes from two constraints: the equal constraint and the unequal constraint. For example, in the permutation
equivalent subset [{(1, 1), (2, 2)}], the equal constraint is that the first and the second index
number are equal and the third and fourth index are also equal. On the other hand, the
unequal constraint requires that the first two index numbers are different from those of the
last two. Due to the difficulties mentioned, we solve this problem by first relaxing the
unequal constraint and then applying the principle of inclusion and exclusion. Thus, the
calculation of a partition sum can be separated into two parts: the relaxed partition sum
without unequal constraint, and lower order partition sums. For example,
w_{λ=[{(1,1),(2,2)}]} = Σ_{i≠j} w(i, i) w(j, j) = w*_{λ=[{(1,1),(2,2)}]} − w_{λ=[{(1,1),(1,1)}]}
= Σ_{i,j} w(i, i) w(j, j) − Σ_{i=j} w(i, i) w(j, j) = ( Σ_i w(i, i) )² − Σ_i w(i, i)²,

as the relaxed index partition sum w*_{λ=[{(1,1),(2,2)}]} = Σ_{i,j} w(i, i) w(j, j) = ( Σ_i w(i, i) )².
Proposition 3. The index partition sum w_λ can be calculated by subtracting all lower order partition sums from the corresponding relaxed index partition sum, i.e.,

w_λ = w*_λ − Σ_{ν≺λ} ( #(λ) / #(ν) ) #(λ→ν) w_ν,

where #(λ) is the number of distinct order-sensitive permutation equivalent subsets. For example, there are 2!2!2!/2!/2! = 2 order-sensitive index partition types for λ = [{(1, 1), (2, 3)}]: they are [(1, 1), (2, 3)] and [(2, 3), (1, 1)]. Note that [(1, 1), (2, 3)] and [(1, 1), (3, 2)] are the same type. #(λ→ν) is the number of different ways of merging a higher order permutation equivalent subset λ into a lower order permutation equivalent subset ν.
The calculation of the data partition sum is similar. Therefore, the computational cost mainly depends on the calculation of the relaxed partition sums and the lowest order partition sum. Since the computational cost of the lowest order term is O(n), we mainly discuss the calculation of relaxed partition sums in the following paragraphs.
To reduce the computational cost, we develop a greedy graph search algorithm. For
demonstration, we use the following example.
w*_{λ=[{(1,1),(1,2),(1,2),(1,3),(2,3),(1,4)}]} = #(λ) Σ_{i,j,k,l} w(i, i) w(i, j) w(i, j) w(i, k) w(j, k) w(i, l). The permutation equivalent index subset is represented by an undirected graph. Every node denotes an index number. We connect two different nodes if the two corresponding index numbers are in the same index element, i.e., in the same small bracket. In Figure 2, the number 2 on the edge ij denotes that the pair (i, j) is used twice. Self-connected nodes are also allowed. We assume there is no isolated subgraph in the following discussion; if any isolated subgraph exists, we only need to repeat the same procedure for all isolated subgraphs.

Now we shall discuss the steps to compute w*_{λ=[{(1,1),(1,2),(1,2),(1,3),(2,3),(1,4)}]}. Firstly, we get rid of the weights of edges and self-connections, i.e., Σ_{i,j,k,l} w(i, i) w(i, j) w(i, j) w(i, k) w(j, k) w(i, l) = Σ_{i,j,k,l} a(i, j) w(i, k) w(j, k) w(i, l), as a(i, j) = w(i, i) w(i, j) w(i, j). Then we search for a node with the lowest degree and sum over all indices connected to the chosen node, i.e., Σ_{i,j,k,l} a(i, j) w(i, k) w(j, k) w(i, l) = Σ_{i,j,k} b(i, j) w(i, k) w(j, k), as b(i, j) = Σ_l a(i, j) w(i, l). The chosen node and its connected edges are deleted after the above computation. We repeat the same step until a symmetric graph occurs. Since every node in the symmetric graph has the same degree, we randomly choose any node, for example k, for summation; then Σ_{i,j,k} b(i, j) w(i, k) w(j, k) = Σ_{i,j} b(i, j) c(i, j), as c(i, j) = Σ_k w(i, k) w(j, k). Finally, we clear the whole graph and obtain the relaxed index partition sum.
[Figure 2 here: the running example drawn as an undirected graph on nodes i, j, k, l, with the lowest-degree node eliminated at each step.]
Figure 2: Greedy search algorithm for computing the relaxed index partition sum w*_{λ=[{(1,1),(1,2),(1,2),(1,3),(2,3),(1,4)}]}.
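The elimination steps above amount to a few array contractions. The following NumPy sketch is my own illustration (the combinatorial factor #(λ) is omitted); the triangle contraction dominates, so the total work is O(n³) instead of the naive O(n⁴) quadruple loop:

import numpy as np

def relaxed_sum(w):
    """Sum_{i,j,k,l} w(i,i) w(i,j)^2 w(i,k) w(j,k) w(i,l) by node elimination."""
    a = np.diag(w)[:, None] * w ** 2        # a(i,j) = w(i,i) * w(i,j)^2
    b = a * w.sum(axis=1)[:, None]          # b(i,j) = a(i,j) * sum_l w(i,l)
    c = w @ w.T                             # c(i,j) = sum_k w(i,k) * w(j,k)
    return float((b * c).sum())             # sum_{i,j} b(i,j) c(i,j)

rng = np.random.default_rng(0)
w = rng.random((5, 5))
w = (w + w.T) / 2                           # the index function is symmetric
print(relaxed_sum(w))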
The most computationally expensive case is the complete graph, in which every pair of nodes is connected. Hence, the computational cost of w*_λ is determined by the subtotal that has the largest symmetric subgraph in its graph representation. For example, the most expensive relaxed index partition sum for d = 2 and r = 3 is Σ_{i,j,k} w(i, j) w(i, k) w(j, k), which is a triangle in the graph representation.
Proposition 4. For d ≥ 2, let m(m−1)/2 ≤ r(d−1)d/2 < (m+1)m/2, where r is the order of the moment and m is an integer. For a d-th order test statistic, the computational cost of the partition sum for the r-th moment is bounded by O(n^m). When d = 1, the computational complexity of the partition sum is O(n).
Specifically, the computational cost of the 3rd and 4th moments for a second order test statistic is O(n³). The computational cost for the 1st and 2nd moments is O(n²).
2.4 Fitting
The Pearson distribution series (Pearson I ~ VII) is a family of probability distributions that is more general than the normal distribution [12]. It covers all distributions in the (β1, β2) plane, including the normal, beta, gamma, log-normal, etc., where the distribution shape parameters β1 and β2 are the squared standardized skewness and the standardized kurtosis, respectively. Given the first four moments, the Pearson distribution series can be utilized to approximate the permutation distribution of the test statistic without conducting real permutation.
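Concretely, given the first four raw moments m1, …, m4 of the permutation distribution (obtained as in Sections 2.2-2.3), the Pearson shape coordinates follow from the central moments. A minimal sketch (illustration only; selecting the concrete Pearson type from (β1, β2) and its parameters is omitted here):

def pearson_shape(m1, m2, m3, m4):
    """Map raw moments E[T], E[T^2], E[T^3], E[T^4] to (beta1, beta2).

    mu2..mu4 are central moments; beta1 is the squared standardized skewness
    and beta2 the standardized kurtosis, the coordinates used to pick a
    member of the Pearson family.
    """
    mu2 = m2 - m1 ** 2
    mu3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3
    mu4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4
    beta1 = mu3 ** 2 / mu2 ** 3
    beta2 = mu4 / mu2 ** 2
    return beta1, beta2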
3 Experimental results
To evaluate the accuracy and efficiency of our moments-based permutation tests, we generate
simulated data and conduct permutation tests for both linear and quadratic test statistics. We
consider six simulated cases in the first experiment for testing the difference between two groups,
A and B. We use mean difference statistics here. For group A, n1 observations are generated
independently from Normal(0,1) in Cases 1-2, from Gamma(3,3) in Cases 3-4, and from Beta(0.8,
0.8) in Cases 5-6. For group B, n2 independent observations are generated from Normal(1, 0.5) in
Cases 1-2, from Gamma (3,2) in Cases 3-4, and from Beta(0.1, 0.1) in Cases 5-6. The design is
balanced in Cases 1, 3, and 5 with n1 = n2 = 10, and unbalanced in Cases 2, 4, and 6 with n1 = 6, n2
= 18.
Table 1 illustrates the high accuracy of our moments-based permutation technique. Furthermore,
comparing with exact permutation or 10,000 random permutations, the moments-based
permutation tests reduce more than 99.8% of the computation cost, and this efficiency gain
increases with sample size. Table 1 shows the computation time and p-values of three permutation
methods from one simulation. In order to demonstrate the robustness of our method, we repeated
the simulation for 10 times in each case, and calculated the mean and variance of the absolute
biases of p-values of both moments-based permutation and random permutation, treating the p-values of exact permutation as gold standard. In most cases, our moments-based permutation is
less biased and more stable than random permutation (Table 2), which demonstrates the
robustness and accuracy of our method.
Table 1: Comparison of computation costs and p-values of three permutation methods: moments-based permutation (MP), random permutation (RP), and exact permutation (EP). The t_MP, t_RP,
and t_EP denote the computation time (in seconds), and p_MP, p_RP, and p_EP are the p-values
of the three permutation methods.
        Case 1    Case 2    Case 3    Case 4    Case 5    Case 6
t_MP    6.79e-4   5.37e-4   5.54e-4   5.16e-4   5.79e-4   6.53e-4
t_RP    5.07e-1   5.15e-1   5.06e-1   1.30e-1   2.78e-1   5.99e-1
t_EP    3.99e-0   1.21e-0   3.71e-0   1.21e-0   3.71e-0   1.22e-0
p_MP    1.19e-1   2.45e-2   1.34e-1   1.19e-1   3.58e-2   5.07e-5
p_RP    1.21e-1   2.56e-2   1.36e-1   1.20e-1   3.53e-2   5.09e-2
p_EP    1.19e-1   2.39e-2   1.34e-1   1.15e-1   3.55e-2   5.11e-2
We consider three simulated cases in the second experiment for testing the difference among three
groups D, E, and F. We use modified F statistics [7] here. For group D, n1 observations are
generated independently from Normal(0,1) in Case 7, from Gamma(3,2) in Case 8, and from
Beta(0.8, 0.8) in Case 9. For group E, n2 independent observations are generated from Normal(0,1)
in Case 7, from Gamma(3,2) in Case 8, and from Beta(0.8, 0.8) in Case 9. For group F, n3
independent observations are generated from Normal(0.1,1) in Case 7, from Gamma(3,1) in Case
8, and from Beta(0.1, 0.1) in Case 9. The design is unbalanced with n1 = 6, n2 = 8, and n3 = 12.
Since the exact permutation is too expensive here, we consider the p-values of 200,000 random
permutations (EP) as gold standard. Our methods are more than one hundred times faster than
2,000 random permutations (RP) and also more accurate and robust (Table 3).
We applied the method to the MRI hippocampi belonging to 2 groups, with 21 subjects in
group A and 15 in group B. The surface shapes of different objects are represented by the
same number of location vectors (with each location vector consisting of the spatial x, y, and
z coordinates of the corresponding vertex) for our subsequent statistical shape analysis.
There is no shape difference at a location if the corresponding location vector has an equal
mean between two groups. Evaluation of the hypothesis test using our moments-based
permutation with the modified Hotelling's T² test statistic [8] is shown in Fig. 3(a) and 3(b).
It can be seen that the Pearson distribution approximation leads to ignorable discrepancy
with the raw p-value map from real permutation. The false positive error control results are
shown in Fig. 3(c).
Table 2: Robustness and accuracy comparison of moments-based permutation and random
permutation across 10 simulations, considering the p-values of exact permutation as gold standard.
Mean_ABias_MP and VAR_MP are the mean of the absolute biases and the variance of the biases
of p-values of moments-based permutation; Mean_ABias_RP and VAR_RP are the mean of the
absolute biases and the variance of the biases of p-values of random permutation. Mean difference
statistic is used.
                 Case 1    Case 2    Case 3    Case 4    Case 5    Case 6
Mean_ABias_MP    1.62e-4   3.04e-4   6.36e-4   8.41e-4   1.30e-3   3.50e-3
Mean_ABias_RP    7.54e-4   3.39e-4   9.59e-4   8.39e-4   1.30e-3   2.00e-3
VAR_MP           6.42e-8   2.74e-7   1.54e-6   1.90e-6   3.76e-6   2.77e-5
VAR_RP           7.85e-7   1.86e-7   1.69e-6   3.03e-6   4.24e-5   1.88e-5
Table 3: Computation cost, robustness, and accuracy comparison of moments-based permutation
and random permutation across 10 simulations. Modified F statistic is used.
       Case 7    Case 8    Case 9                    Case 7    Case 8    Case 9
t_MP   1.03e-3   1.42e-3   1.64e-3   Mean_ABias_MP   9.23e-4   2.37e-4   2.11e-3
t_RP   1.51e-1   1.48e-1   1.38e-1   Mean_ABias_RP   3.94e-3   2.79e-3   3.42e-3
t_EP   1.76e+1   1.86e+1   2.37e+1   VAR_MP          1.10e-6   8.74e-8   1.23e-5
                                     VAR_RP          2.27e-5   1.48e-5   1.85e-5
[Figure 3 here: five panels (a)-(e); the color scale runs from p = 0.05 down to p = 0.0, with p-values > 0.05 masked.]
Figure 3. (a) and (b): Comparison of techniques in raw p-value measurement at α = 0.05 (without correction), through real permutation ((a); number of permutations = 10,000) and using the present moments-based permutation (b). (c) p-map after BH's FDR correction of (b). (e) Facial differences between Asian males and white males. Locations in red on the 3D surface denote significant face shape differences (significance level α = 0.01 with false discovery rate control).
We also applied our method to the 3D face comparison between Asian males and white
males. We choose 10 Asian males and 10 white males out of the USF face database to
calculate their differences with the modified Hotelling's T² test statistic. Each face surface
is represented by 4,000 voxels. All surfaces are well aligned. Results from our algorithm in
Fig. 3(e) show that significant differences occur at eye edge, nose, lip corners, and cheeks.
They are consistent with anthropology findings and suggest the discriminant surface regions
for ethnic group recognition.
4 Conclusion
We present and develop novel moments-based permutation tests where the permutation
distributions are accurately approximated through Pearson distributions for considerably reduced
computation cost. Comparing with regular random permutation, the proposed method
considerably reduces computation cost without loss of accuracy. General and analytical
formulations for the moments of permutation distribution are derived for weighted v-test statistics.
The proposed strategy takes advantage of nonparametric permutation tests and parametric Pearson
distribution approximation to achieve both accuracy/flexibility and efficiency.
References
[1] Nichols, T. E., and A. P. Holmes (2001), Nonparametric permutation tests for functional neuroimaging: A primer with examples, Human Brain Mapping, 15, 1-25.
[2] Zhou, C., D. C. Park, M. Styner, and Y. M. Wang (2007), ROI constrained statistical surface morphometry, IEEE International Symposium on Biomedical Imaging, Washington, D.C., 1212-1215.
[3] Zhou, C., and Y. M. Wang (2008), Hybrid permutation test with application to surface shape analysis, Statistica Sinica, 18, 1553-1568.
[4] Pantazis, D., R. M. Leahy, T. E. Nichols, and M. Styner (2004), Statistical surface-based morphometry using a non-parametric approach, IEEE International Symposium on Biomedical Imaging, 2, 1283-1286.
[5] Zhou, C., Y. Hu, Y. Fu, H. Wang, Y. M. Wang, and T. S. Huang (2008), 3D face analysis for distinct features using statistical randomization, IEEE International Conference on Acoustics, Speech, and Signal Processing, Las Vegas, Nevada, 981-984.
[6] Hubert, L. (1987), Assignment Methods in Combinatorial Data Analysis, Marcel Dekker, New York.
[7] Mielke, P. W., and K. J. Berry (2001), Permutation Methods: A Distance Function Approach, Springer, New York.
[8] Good, P. (2005), Permutation, Parametric and Bootstrap Tests of Hypotheses, 3rd ed., Springer, New York.
[9] Serfling, R. J. (1980), Approximation Theorems of Mathematical Statistics, Wiley, New York.
[10] Edgington, E., and P. Onghena (2007), Randomization Tests, 4th ed., Chapman & Hall, London.
[11] Nicholson, W. K. (2006), Introduction to Abstract Algebra, 3rd ed., Wiley, New York.
[12] Hahn, G. J., and S. S. Shapiro (1967), Statistical Models in Engineering, John Wiley and Sons, Chichester, England.
3,155 | 3,859 | Maximum likelihood trajectories for continuous-time
Markov chains
Theodore J. Perkins
Ottawa Hospital Research Institute
Ottawa, Ontario, Canada
[email protected]
Abstract
Continuous-time Markov chains are used to model systems in which transitions
between states as well as the time the system spends in each state are random.
Many computational problems related to such chains have been solved, including
determining state distributions as a function of time, parameter estimation, and
control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state,
appears unsolved. We study three versions of this problem: (i) an initial value
problem, in which an initial state is given and we seek the most likely trajectory
until a given final time, (ii) a boundary value problem, in which initial and final
states and times are given, and we seek the most likely trajectory connecting them,
and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum
likelihood trajectories are not always well-defined, and describe a polynomial time
test for well-definedness. When well-definedness holds, we show that each of the
three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.
1 Introduction
A continuous-time Markov chain (CTMC) is a model of a dynamical system which, upon entering
some state, remains in that state for a random real-valued amount of time (called the dwell time or
occupancy time) and then transitions randomly to a new state. CTMCs are used in a wide variety of
domains. In stochastic chemical kinetics, states may correspond to the conformation of a molecule
such as a protein, peptide or nucleic acid polymer, and transitions correspond to conformational
changes (e.g., [1]). Or, the state may correspond to the numbers of different types of molecules in
an interacting system, and transitions are the result of chemical reactions between molecules [2].
In phylogenetics, the states may correspond to the genomes of different organisms, and transitions
to the evolutionary events (mutations) that separate those organisms [3]. Other application domains
include queueing theory, process control and manufacturing, quality control, formal verification, and
robot nagivation.
Many computational problems associated with CTMCs have been solved, often by generalizing
methods developed for discrete-time Markov chains (DTMCs). For example, stationary distributions for CTMCs can be computed in a manner very similar to that for DTMCs [4]. Estimating the
parameters of a CTMC from fully observed data involves estimating state transition probabilities,
just as for DTMCs, but adds estimation of the state dwell time distributions. Estimating parameters
from partially observed data can be done by a generalization of the well-known Baum-Welch algorithm for parameter estimation for hidden Markov models [5] or by Bayesian methods [6, 7]. When
the state of a CTMC is observed periodically through time, but some transitions between observation times may go unseen, the parameter estimation problem can also be solved through embedding
techniques [8]. In scenarios such as manufacturing or robot navigation, one may assume that the
state transitions or dwell times are under at least partial control. When control choices are made
once for each state entered, dynamic programming and related methods can be used to develop optimal control strategies [9]. When control choices are made continuously in time, methods for hybrid
system control are more appropriate [10].
Another fundamental and well-studied problem for CTMCs is to compute, given an initial state and
time, the state distribution or most likely state at a later time. These problems are readily solved for
DTMCs by dynamic programming [11], but for the CTMCs, solutions have a somewhat different
flavor. One approach is based on the forward Chapman-Kolmogorov equations [4], called the Master equation in the stochastic chemical kinetics literature [12]. These specify a system of ordinary
differential equations the describe how the probabilities of being in each state change over time.
Solving the equations, sometimes analytically but more often numerically, yields the entire state
distribution as a function of time. Alternatively, one can uniformize the CTMC, which produces
a DTMC along with a probability distribution for a number of transitions to perform. The process
obtained by choosing the number of transitions, and then producing a trajectory with that many transitions from the DTMC, has the same state distribution as the original CTMC. This representation
allows particularly efficient computation of the state distribution if that distribution is only required
at one or a smaller number of different times. Finally, especially in the chemical kinetics community, stochastic simulation algorithms are popular [13]. These approaches act by simply simulating
trajectories from the CTMC to produce empirical, numerical estimates of state distributions or other
features of the dynamics.
Despite the extensive work on a variety of problems related to to CTMCs, to the best of our knowledge, the problem of finding most likely trajectories has not been addressed. With this paper, we
attempt to fill that gap. We propose dynamic programming solutions to three variants of the problem:
(i) an initial value problem, where a starting state and final time are given, and we seek the most
likely sequence of states and dwell times occurring up until the final time, (ii) a boundary value
problem, where initial and final states and times are given, and we seek the most likely intervening
trajectory, and (iii) a problem involving partial observability, where we have a sequence of ?observations? that may not give full state information, and we want to infer the most likely trajectory that
the system followed in producing the observations.
2 Definitions
A CTMC is defined by four things: (i) a finite state set S, (ii) initial state probabilities, P_s for s ∈ S,
(iii) state transition probabilities P_{ss′} for s, s′ ∈ S, and (iv) state dwell time parameters λ_s for each
s ∈ S. Let S_t ∈ S denote the state of the system at time t ∈ [0, +∞). The rules for the evolution
of the system are that it starts in state S_0, which is chosen according to the distribution P_s. At any
time t, when the system is in state S_t = s, the system stays in state s for a random amount of time
that is exponentially distributed with parameter λ_s. When the system finally leaves state s, the next
state of the system is s′ ≠ s with probability P_{ss′}.
A trajectory of the CTMC is a sequence of states along with the dwell times in all but the
last state U = (s_0, t_0, s_1, t_1, …, s_{k−1}, t_{k−1}, s_k). The meaning of this trajectory is that the
system started in state s_0, where it stayed for time t_0, then transitioned to state s_1, where it
stayed for time t_1, and so on. Eventually, the system reaches state s_k, where it remains. Let
U_t = (s_0, t_0, s_1, t_1, …, s_{k_t−1}, t_{k_t−1}, s_{k_t}) be a random variable describing the trajectory of the
system up until time t. In particular, this means that there are k_t state transitions up until time t
(where k_t is itself a random variable), the system enters state s_{k_t} sometime at or before time t, and
remains in state s_{k_t} until sometime after time t.
Given the initial state, S_0, and a time t, the likelihood of a particular trajectory U is

l(U_t = U | S_0) = { 0                                                                 if s_0 ≠ S_0 or Σ_{i=0}^{k−1} t_i > t
                   { ( ∏_{i=0}^{k−1} λ_{s_i} e^{−λ_{s_i} t_i} P_{s_i s_{i+1}} ) e^{−λ_{s_k}(t − Σ_i t_i)}   otherwise   (1)

When Σ_i t_i > t, the likelihood is zero, because it means that the specified transitions have not
completed by time t. Otherwise, the terms inside the first parentheses account for the likelihood of
the dwell times and the state transitions in the sequence, and the term inside the second parentheses
accounts for the probability that the dwell time in the final state does not complete before time t.
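To make Equation (1) concrete, here is a minimal Python sketch of the trajectory likelihood. The flat-list encoding of U and the `lam` / `P` dictionaries are our own conventions for illustration, not notation from the paper.

```python
import math

def trajectory_likelihood(U, lam, P, S0, t):
    """Likelihood of U = [s0, t0, s1, t1, ..., sk] up to time t (Eq. 1).

    lam[s] is the dwell parameter of state s; P[s][s2] is the transition
    probability from s to s2.
    """
    states, dwells = U[::2], U[1::2]
    if states[0] != S0 or sum(dwells) > t:
        return 0.0
    like = 1.0
    for i, ti in enumerate(dwells):
        s, s_next = states[i], states[i + 1]
        like *= lam[s] * math.exp(-lam[s] * ti) * P[s][s_next]
    # probability that the dwell in the final state lasts past time t
    return like * math.exp(-lam[states[-1]] * (t - sum(dwells)))
```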
With this notation, the initial value problem we study is easily stated as
arg max_U l(U_t = U | S_0 = s) ,   (2)

where s ∈ S and t > 0 are both given. The boundary value problem we study is

arg max_U l(U_t = U | S_0 = s, S_t = s′) .   (3)
Here, the given s and s′ are any states in S, possibly the same state, and t > 0 is also given.

A hidden continuous-time Markov chain (HCTMC) adds an observation model to the CTMC.
In particular, we assume a finite set of possible observations O. When the system is observed
and it is in state s ∈ S, the observer sees observation o ∈ O with probability P_{so}. Let O =
(o_1, τ_1, o_2, τ_2, …, o_m, τ_m) denote a sequence of observations and the times at which they are made.
We assume that the observation times are fixed, being chosen ahead of time, and depend in no way on
the evolution of the chain itself. Given a trajectory of the system U = (s_0, t_0, s_1, t_1, …, t_{k−1}, s_k),
let U(t) denote the state of the system at time t implied by that sequence. Then, the probability of
an observation sequence O given the trajectory U can be written as
P(O | U_{τ_m} = U) = ∏_{i=1}^{m} P_{U(τ_i) o_i}   (4)
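The observation model of Equation (4) is just as direct to compute; the sketch below reuses the hypothetical trajectory encoding from the earlier snippet, with `P_obs[s][o]` standing in for P_{so}.

```python
def state_at(U, t):
    """State U(t) implied by trajectory U = [s0, t0, s1, t1, ..., sk]."""
    states, dwells = U[::2], U[1::2]
    elapsed = 0.0
    for s, d in zip(states, dwells):
        elapsed += d
        if t < elapsed:
            return s
    return states[-1]

def observation_probability(U, obs, P_obs):
    """P(O | U) of Eq. (4), with obs a list of (o_i, tau_i) pairs."""
    prob = 1.0
    for o, tau in obs:
        prob *= P_obs[state_at(U, tau)][o]
    return prob
```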
The final problem we study in this paper is that of finding the most likely trajectory given an observation sequence:
arg max_U l(U_{τ_m} = U | O) = arg max_U P(O | U_{τ_m} = U) l(U_{τ_m} = U)   (5)

3 Solving the initial and boundary value problems
In this section we develop solutions to problems (2) and (3). The first step in this development is
to show that we can analytically optimize the dwell times if we are given the state sequence. This
is covered in the next subsection. Following that, we develop a dynamic program to find optimal
state sequences, assuming that the dwell times are set to their optimal values relative to the state
sequence.
3.1 Maximum likelihood dwell times
Consider a particular trajectory U = (s_0, t_0, s_1, t_1, …, s_{k−1}, t_{k−1}, s_k). Given S_0 and a time t, the
likelihood of that particular trajectory, l(U_t = U | S_0), is given above by Equation (1). Let us assume
that S_0 = s_0, as we have no need to consider U starting from the wrong state, and let us maximize
l(U_t = U | S_0) with respect to the dwell times. To be concise, let T_t^k = {(t_0, t_1, …, t_{k−1}) : t_i ≥ 0
for all 0 ≤ i < k and Σ_i t_i ≤ t}. This is the set of all feasible dwell times for the states up until
state s_k. Then we can write the desired optimization as

arg max_{(t_0,…,t_{k−1}) ∈ T_t^k} ( ∏_{i=0}^{k−1} λ_{s_i} e^{−λ_{s_i} t_i} P_{s_i s_{i+1}} ) e^{−λ_{s_k}(t − Σ_i t_i)} .   (6)
It is more convenient to maximize the logarithm, which gives us

arg max_{(t_0,…,t_{k−1}) ∈ T_t^k} ( Σ_{i=0}^{k−1} [ log λ_{s_i} − λ_{s_i} t_i + log P_{s_i s_{i+1}} ] ) − λ_{s_k}(t − Σ_j t_j)   (7)
Dropping the terms that do not depend on any of the t_i and rearranging, we find the equivalent
problem

arg max_{(t_0,…,t_{k−1}) ∈ T_t^k} Σ_{i=0}^{k−1} (λ_{s_k} − λ_{s_i}) t_i   (8)
The solution can be obtained by inspection. If λ_{s_k} ≤ λ_{s_i} for all 0 ≤ i < k, then we must have all
t_i = 0. That is, the system transitions instantaneously through the states s_0, s_1, …, s_{k−1} and then
dwells in state s_k for (at least) time t.¹ Otherwise, let j be such that λ_{s_j} is minimal for 0 ≤ j < k.
Then an optimal solution has t_j = t, and all other t_i = 0. Intuitively, this says that if state s_j has the
largest expected dwell time (corresponding to the smallest λ parameter), then the most likely setting
of dwell times is obtained by assuming all of the time t is spent in state s_j, and all other transitions
happen instantaneously. This is not unintuitive, although it is dissatisfying in the sense that the most
likely set of dwell times are not typical in some sense. For example, none are near their expected
value. Moreover, the basic character of the solution (that all the time t goes into waiting at the
slowest state) is independent of t. Nevertheless, being able to solve explicitly for the most likely
dwell times for a given state sequence makes it much easier to find the most likely U_t. So, let us
press onwards.
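This closed-form solution is a few lines of code; a sketch under the same assumed conventions, returning the most likely dwell times t_0, …, t_{k−1} for a fixed state sequence:

```python
def optimal_dwell_times(states, lam, t):
    """Maximum likelihood dwell times for a fixed state sequence (Section 3.1).

    All dwells are zero when the final state has the smallest parameter;
    otherwise the whole budget t is spent in the state with the smallest
    dwell parameter (largest expected dwell time).
    """
    k = len(states) - 1  # number of transitions
    dwells = [0.0] * k
    if k > 0:
        j = min(range(k), key=lambda i: lam[states[i]])
        if lam[states[j]] < lam[states[-1]]:
            dwells[j] = t
    return dwells
```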
3.2 Dynamic programming for the most likely state sequence
Substituting back our solution for the t_i into Equation (1), and continuing our assumption that s_0 =
S_0, we obtain

max_{(t_0,…,t_{k−1}) ∈ T_t^k} l(U_t = U | S_0)
  = { ∏_{i=0}^{k−1} λ_{s_i} P_{s_i s_{i+1}} e^{−λ_{s_k} t}                      if λ_{s_k} ≤ λ_{s_i} for all 0 ≤ i < k
    { ∏_{i=0}^{k−1} λ_{s_i} P_{s_i s_{i+1}} e^{−(min_{i=0,…,k−1} λ_{s_i}) t}   otherwise
  = ∏_{i=0}^{k−1} λ_{s_i} P_{s_i s_{i+1}} e^{−(min_{i=0,…,k} λ_{s_i}) t}   (9)
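Equation (9) makes scoring any candidate state sequence trivial once the dwell times have been optimized out; a minimal sketch:

```python
import math

def sequence_score(states, lam, P, t):
    """Right-hand side of Eq. (9) for the state sequence s_0, ..., s_k."""
    prod = 1.0
    for s, s_next in zip(states[:-1], states[1:]):
        prod *= lam[s] * P[s][s_next]
    return prod * math.exp(-min(lam[s] for s in states) * t)
```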
This leads to a dynamic program for finding the state sequence that maximizes the likelihood. As is
typical, we build maximum likelihood paths of increasing length by finding the best ways of extending shorter paths. The main difference with a more typical scenario is that to score an extension we
need to know not just the score and final state of the shorter path, but also the smallest dwell time
parameter along that path. Define a (k, s, λ)-trajectory to be one that includes k ∈ {0, 1, 2, …} state
transitions, ends at state s_k = s, and for which the smallest dwell time parameter of any state along
the trajectory is λ. Then define F_k(s, λ) to be the maximum achievable l(U_t = U | S_0), where we
restrict attention to U that are (k, s, λ)-trajectories. We initialize the dynamic program as:

F_0(S_0, λ_{S_0}) = e^{−t λ_{S_0}}
F_0(s, λ) = 0 for all (s, λ) ≠ (S_0, λ_{S_0})

To compute F_k(s, λ) for larger k, we first observe that F_k(s, λ) is undefined if λ > λ_s. This is
because there are no (k, s, λ)-trajectories if λ > λ_s. The fact that a trajectory ends at state s implies
that the minimum dwell time parameter along the trajectory can be no greater than λ_s. So, we only
compute F_k(s, λ) for λ ≤ λ_s.
To determine F_{k+1}(s, λ), we must consider two cases. If λ < λ_s, then the best (k + 1, s, λ)-trajectory must come from some (k, s′, λ)-trajectory. That is, the length-k trajectory must already
have a dwell time parameter of λ along it. The state s′ can be any state other than s. If λ = λ_s, then
the best (k + 1, s, λ)-trajectory may be an extension of any (k, s′, λ′)-trajectory with λ′ ≥ λ and
s ≠ s′. To be more concise, define

G(s, λ) = { {λ}                     if λ < λ_s
          { {λ_{s′} : λ_{s′} ≥ λ}   if λ = λ_s   (10)

We then compute F for increasing k as:

F_{k+1}(s, λ) = max_{s′ ≠ s, λ′ ∈ G(s, λ)} F_k(s′, λ′) λ_{s′} P_{s′s} e^{−t(λ − λ′)}

The first term on the right hand side accounts for the likelihood of the best (k, s′, λ′)-trajectory. The
next two terms account for the dwell in s′ and the transition probability to s. The final term accounts
for any difference between the smallest dwell time parameters along the k and k + 1 transition
trajectories.
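A sketch of tabulating F_k(s, λ) follows. It uses the forward form of the recursion (extending every (k, s′, λ′)-trajectory by one transition, which is equivalent to the maximization over G(s, λ) above) and sparse dictionaries keyed by (s, λ) in place of dense tables; the parameter conventions are ours.

```python
import math

def tabulate_F(S, lam, P, S0, t, K):
    """Tables F[k][(s, m)] of Section 3.2 for k = 0, ..., K (sketch).

    F[k][(s, m)] is the best likelihood of any (k, s, m)-trajectory, where
    m is the smallest dwell parameter along the trajectory.
    """
    F = [{(S0, lam[S0]): math.exp(-t * lam[S0])}]
    for k in range(K):
        nxt = {}
        for (s_prev, m_prev), val in F[k].items():
            for s in S:
                p = P.get(s_prev, {}).get(s, 0.0)
                if s == s_prev or p == 0.0:
                    continue
                m = min(m_prev, lam[s])  # new smallest dwell parameter
                cand = val * lam[s_prev] * p * math.exp(-t * (m - m_prev))
                if cand > nxt.get((s, m), 0.0):
                    nxt[(s, m)] = cand
        F.append(nxt)
    return F
```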
¹ If the reader is not comfortable with a dwell time exactly equal to zero, one may instead take t_i = 0 as a
shorthand for an infinitesimal but positive dwell time. Alternatively, the optimization problem can be modified
to explicitly require t_i > 0. However, this does nothing to change the fundamental nature of the solution, while
resulting in a significantly more laborious exposition.
[Figure 1: A continuous-time Markov chain used as a demonstration domain. The five circles correspond to states, and the arrows to transitions between states. States are also labeled with their dwell
time parameters. The chain has states x, y, z, a and b, with λ_x = λ_z = 1 and λ_y = 1/10; transitions
x → y (P_xy = 2/3) and x → z (P_xz = 1/3), and y → z, z → a, a → b, b → a each with probability 1
(P_yz = P_za = P_ab = P_ba = 1).]
Because the set of possible states, S, is finite, so is the set of possible dwell time parameters, λ_s for
s ∈ S. The size of the table F_k for each k is thus at most |S|². If we limit k to some maximum value
K, then the total size of all the tables is at most K|S|², and the total computational effort O(K|S|³).
To solve the initial value problem (2), we scan over all values of k, s and λ to find the maximum
value of F_k(s, λ). Such a value implies that the most likely state sequence ends at state s after k
state transitions. We can use a traceback to reconstitute the full sequence of states, and the result of
the previous section to obtain the most likely dwell times. To solve the boundary value problem (3),
we do the same, except that we only scan over values of k and λ, looking for the maximum value of
F_k(S_t, λ).
3.3 Examples
In this section, we use the toy chain depicted in Figure 1 to demonstrate the algorithm of the previous
section, and to highlight some properties of maximum likelihood trajectories. First, suppose that we
know the system is in state x at time zero and in state z at time t. There are two different paths,
(x, z) and (x, y, z), that lead from x to z. If we ignore the issue of dwell times and consider
only the transition probabilities, then the path (x, y, z) seems more probable. Its probability is
P_xy P_yz = 2/3 · 1 = 2/3, whereas the direct path (x, z) simply has probability P_xz = 1/3. However, if
we consider the dwell times as well, the story can change. For example, suppose that t = 1. Note
that λ_y = 1/10, so that the expected dwell time in state y is 10. If the chain enters state y, the chance
of it leaving y before time t = 1 is quite small. If we run the dynamic programming algorithm of
the previous section to find the most likely trajectory, it finds (s_0 = x, t_0 = 0, s_1 = z) to be most
likely, with a score of 0.1226. Along the way, it computes the likelihood of the most likely path
going through y, which is (s_0 = x, t_0 = 0, s_1 = y, t_1 = t, s_2 = z). It prefers to place all the dwell
time t in state y, because that state is most likely to have a long dwell time. However, the total score
of this trajectory is still only 0.0603, making the direct path the more likely one. On the other hand,
if t = 2, then the path through y becomes more likely by a score of 0.0546 to 0.0451. If t = 10, then
the path through y still has a likelihood of 0.0245, whereas the direct path has a likelihood below
2 × 10⁻⁵, because it is highly unlikely to remain in x and/or z for so long.
Next, suppose that we know S0 = a and that we are interested in knowing the most likely trajectory
out until time t, regardless of the final state of that trajectory. For simplicity, suppose also that
λ_a = λ_b = λ. There is only one possible state sequence containing k transitions for each k = 0, 1, 2, …,
and the likelihood of any such sequence turns out to be independent of the dwell times (assuming
the dwell times total no more than time t):

( ∏_{i=0}^{k−1} λ e^{−λ t_i} ) e^{−λ(t − Σ_i t_i)} = e^{−λt} λ^k   (11)

If λ < 1, this implies the optimal trajectory has the system remaining at state a. However, if λ = 1
then all trajectories of all lengths have the same likelihood. If λ > 1, then there are trajectories of
arbitrarily large likelihood, but no maximum likelihood trajectory. Intuitively, because the likelihood
of a dwell time can be greater than one, the likelihood of a trajectory can be increased by including
short dwells in states with high dwell parameters λ.
In general, if a continuous-time Markov chain has a cycle of states (s_0, s_1, …, s_k = s_0), such
that ∏_{i=0}^{k−1} P_{s_i s_{i+1}} λ_{s_i} > 1, then maximum likelihood trajectories do not exist. Rather, a sequence of
[Figure 2: Abstract example of a continuous-time trajectory of a chain (through states s_0, s_1, s_2, s_3),
along with observations o_1, …, o_4 taken at fixed time intervals.]
trajectories with ever-increasing likelihood can be found starting from any state from which the cycle
is reachable. One should, thus, always check the chain for this property before seeking maximum
likelihood trajectories. This can be easily done in polynomial time. For example, one can label the
edges of the transition graph with the weights log(P_{ss′} λ_s) for the edge from s to s′, and then check
the graph for the existence of a positive-weight cycle, a well-known polynomial-time computation.
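The cycle test is a standard Bellman-Ford computation on negated edge weights; a sketch, again with assumed dictionary conventions for the chain parameters:

```python
import math

def likelihood_unbounded(S, lam, P):
    """True iff some cycle has positive total weight log(P[s][s2] * lam[s])."""
    edges = [(s, s2, -math.log(p * lam[s]))
             for s in S for s2, p in P.get(s, {}).items()
             if p > 0.0 and s2 != s]
    dist = {s: 0.0 for s in S}  # implicit source connected to every state
    for _ in range(len(S) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # a further improving relaxation certifies a negative cycle in the negated
    # graph, i.e. a positive-weight cycle in the original chain
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```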
4 Solving the partially observable problem
We now turn to problem (5), where we are given an observation sequence O =
(o_1, τ_1, o_2, τ_2, …, o_m, τ_m) and want to find the most likely trajectory U. For simplicity, we assume that τ_1 = 0. The following can be straightforwardly generalized to allow the first observation
to take place sometime after the trajectory begins. Similarly, we restrict attention to trajectories
U = (s_0, t_0, s_1, t_1, …, t_{k−1}, s_k) where Σ_i t_i ≤ τ_m, so that we do not concern ourselves with
extrapolating the trajectory beyond the final observation time. The conditional likelihood of such a
trajectory can be written as

l(U_{τ_m} = U | O) ∝ P(O | U_{τ_m} = U) l(U_{τ_m} = U)   (12)
  = ( ∏_{i=1}^{m} P_{U(τ_i) o_i} ) ( P_{s_0} ( ∏_{i=0}^{k−1} λ_{s_i} e^{−λ_{s_i} t_i} P_{s_i s_{i+1}} ) e^{−λ_{s_k}(τ_m − Σ_i t_i)} )   (13)
The term in the first parentheses is P(O | U_{τ_m} = U), and the term in the second parentheses is
l(U_{τ_m} = U). The only differences between the second parentheses and Equation (1) are that we now
include the probability of starting in state s_0, and we have implicitly assumed that Σ_i t_i ≤ τ_m, as
mentioned above. This form, however, is not convenient for optimizing U. To do this, we need to
rewrite l(U_{τ_m} = U) in a way that separates the likelihood into events happening in each interval of
time between observations.
4.1 Decomposing trajectory likelihood by observation intervals
For simplicity, let us further restrict attention to trajectories U that do not include a transition
into a state s_i precisely at any observation time τ_j. We do not have space here to show that
this restriction does not affect the value of the optimization problem; this will be addressed in
the full paper. The likelihood of the trajectory can be written in terms of the events in each observation interval. For example, consider the trajectory and observations depicted in Figure 2.
In the first interval, the system starts in state s_0 and transitions to s_1, where it stays until time
τ_2. The likelihood of this happening is P_{s_0} λ_{s_0} e^{−λ_{s_0} t_0} P_{s_0 s_1} e^{−λ_{s_1}(τ_2 − t_0)}. In the second observation interval, the system never leaves state s_1. The probability of this happening is e^{−λ_{s_1}(τ_3 − τ_2)}.
Finally, in the third interval, the system continues in state s_1 before transitioning to state s_2
and then s_3, where it remains until the final observation. The likelihood of this happening is
λ_{s_1} e^{−λ_{s_1}(t_0 + t_1 − τ_3)} P_{s_1 s_2} λ_{s_2} e^{−λ_{s_2} t_2} P_{s_2 s_3} e^{−λ_{s_3}(τ_4 − t_0 − t_1 − t_2)}. If we multiply these together, we obtain the full likelihood of the trajectory, P_{s_0} ( ∏_{i=0}^{2} λ_{s_i} e^{−λ_{s_i} t_i} P_{s_i s_{i+1}} ) e^{−λ_{s_3}(τ_4 − Σ_j t_j)}.
In general, let U_i = (s_{i0}, t_{i0}, s_{i1}, t_{i1}, …, s_{ik_i}) denote the sequence of states and dwell times of
trajectory U during the time interval [τ_i, τ_{i+1}). The first dwell time t_{i0}, if any, is measured with
respect to the start of the time interval. The component of the likelihood of the whole trajectory U
attributable to the ith time interval is nothing other than l(U_{τ_{i+1} − τ_i} = U_i | S_0 = s_{i0}). Thus, the
likelihood of the whole trajectory can be written as
l(U_{τ_m} = U) = P_{s_0} ∏_{i=1}^{m−1} l(U_{τ_{i+1} − τ_i} = U_i | S_0 = s_{i0})   (14)
4.2 Dynamic programming for the optimal trajectory
Combining Equations (12) and (14), we find

l(U_{τ_m} = U | O) ∝ P_{U(0)} P_{U(0) o_1} ∏_{i=1}^{m−1} l(U_{τ_{i+1} − τ_i} = U_i | S_0 = U(τ_i)) P_{U(τ_{i+1}) o_{i+1}}   (15)

The first two terms account for the probability of the initial state and the probability of the first
observation given the initial state. The terms inside the product account for the likelihood of the ith
interval of the trajectory, and the probability of the (i + 1)st observation, given the state at the end
of the ith interval of the trajectory.
One immediate implication of this rewriting of the conditional likelihood is the following. At times
τ_i and τ_{i+1}, the system is in states U(τ_i) and U(τ_{i+1}). If U is to maximize the conditional likelihood, it had better be that the fragment of the trajectory between those two times, U_i, is a maximum
likelihood trajectory from state U(τ_i) to state U(τ_{i+1}) in time τ_{i+1} − τ_i. If it is not, then an alternative, higher likelihood trajectory fragment could be swapped into U, resulting in a higher conditional
likelihood. Let us define

H_t(s, s′) = max_{U′} l(U_t = U′ | S_0 = s, S_t = s′)   (16)

to be the maximum achievable likelihood by any trajectory from state s to state s′ in time t. Then a
necessary condition for U to maximize the conditional likelihood is

l(U_{τ_{i+1} − τ_i} = U_i | S_0 = U(τ_i)) = H_{τ_{i+1} − τ_i}(U(τ_i), U(τ_{i+1})) .   (17)
Moreover, to find an optimal U, we can simply assume that the above condition holds, and concern ourselves only with finding the best endpoints for each time interval, U(τ_i) and U(τ_{i+1}).
(Of course, the endpoint of one interval must be the same as the initial point of the next interval.)
Specifically, define J_i(s) to be the likelihood of the most likely trajectory covering the time interval
[τ_1, τ_i], accounting for the first i observations, and ending at state s. Then we can compute J as
follows. To initialize, we set

J_1(s) = P_s P_{s o_1} .   (18)

Then, for i = 1, 2, …, m − 1,

J_{i+1}(s) = max_{s′} J_i(s′) H_{τ_{i+1} − τ_i}(s′, s) P_{s o_{i+1}} .   (19)
We can then reconstruct the most likely trajectory by finding s that maximizes J_m(s) and tracing
back to the beginning. This algorithm is identical to the Viterbi algorithm for finding most likely
state sequences for hidden Markov models, with the exception that the state transition probabilities
in the Viterbi algorithm are replaced by the H_{τ_{i+1} − τ_i}(s′, s) terms above, which can, of course, be
computed based on the results of the previous section.
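A sketch of this modified Viterbi recursion (Equations 18 and 19). The table `H[i][(s2, s)]`, holding H_{τ_{i+1}−τ_i}(s2, s), is assumed to have been precomputed with the boundary value algorithm of Section 3; `P_init` and `P_obs` are our names for the initial-state and observation models.

```python
def most_likely_endpoints(S, P_init, P_obs, obs, H):
    """States of the most likely trajectory at the observation times."""
    J = {s: P_init[s] * P_obs[s][obs[0]] for s in S}
    back = []
    for i in range(len(obs) - 1):
        J_next, ptr = {}, {}
        for s in S:
            best = max(S, key=lambda s2: J[s2] * H[i][(s2, s)])
            J_next[s] = J[best] * H[i][(best, s)] * P_obs[s][obs[i + 1]]
            ptr[s] = best
        J = J_next
        back.append(ptr)
    seq = [max(S, key=J.get)]   # argmax of J_m
    for ptr in reversed(back):  # trace back through the pointers
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))
```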
4.3 Examples
To demonstrate this algorithm, let us return to the CTMC depicted in Figure 1. We assume that
λ_a = λ_b = 1, that the system always starts in state x, and that when we observe the system, we
get a real-valued Gaussian observation with standard deviation 1 and means 0, 10, 3, 100 and 100
for states x, y, z, a and b respectively.² The left side of Figure 3 shows three sample sequences
of 20 observations. The right side of the figure shows the most likely trajectories inferred under
different assumptions. First, if we assume the time interval between observations is t = 1, and we
consider observations OA , then the most likely trajectory has the system in state x up through the
10th observation, after which it instantly transitions to state z and remains there. This makes sense,
as the lower observations at the start of the series are more likely in state x. If we consider instead
observations OB , which has a high observation at time t = 11, the procedure infers that the system
was in state y at that time. Moreover, it predicts that the system switches into y immediately after the
10th observation, and says there until just before the 12th observation, taking advantage of the fact
that longer dwell times are more likely in state y than in the other states. If we consider observations
OC , which have a spike at t = 5, the transit to state y is moved earlier, and state z is used to explain
observations at t = 6 onward, even though the first few are relatively unlikely in that state. If we
² Although our derivations above assume the observation set O is finite, the same approach goes through if
O is continuous and individual observations have likelihoods instead of probabilities.
[Figure 3: Left: three length-20 observation sequences, OA, OB, and OC. All three are the same
at most points, but the 11th observation of OB is 10, and the 5th observation of OC is 10. Right:
most likely trajectories inferred by our algorithm, assuming the underlying CTMC is the one given
in Figure 1, with parameters given in the text.]
return to observations OA , but we assume that the time interval between observations is t = 2, then
the most likely trajectory is different than it is for t = 1. Although the same states are used to explain
the observations, the most likely trajectory has the system transitioning from x to y immediately
after the 10th observation and dwelling there until just before the 11th observation, where the state
becomes z. This is because, as explained previously, this is the more likely trajectory from x to z
given t = 2. If we assume the time interval between observations is t = 20, then a wider range of
observations during the trajectory are attributed to state y. Intuitively, this is because, although the
observations are somewhat unlikely under state y, it is extremely unlikely for the system to dwell
for so long in state z as to account for all of the observations from the 11th onward.
5 Discussion
We have provided correct, efficient algorithms for inferring most likely trajectories of CTMCs given
either initial or initial and final states of the chain, or given noisy/partial observations of the chain.
Given the enormous practical import of the analogous problems for discrete-time chains, we are
hopeful that our methods will prove useful additions to the toolkit of methods available for analyzing
continuous-time chains. An alternative, existing approach to the problems we have addressed here is
to discretize time, producing a DTMC which is then analyzed by standard methods [14]. A problem
with this approach, however, is that if the time step is taken too large, the discretized chain can
collapse a whole set of transition sequences of the CTMC into a single "pseudotransition", obscuring
the real behavior of the system in continuous time. If the time step is taken to be sufficiently small,
then the DTMC should produce substantially the same solutions as our approach. However, the
time complexity of the calculations increases as the time step shrinks, which can be a problem if
we are interested in long time intervals and/or there are states with very short expected dwell times,
necessitating very small time steps.
A related problem on which we are working is to find the most probable state sequence of a
continuous-time chain under similar informational assumptions. By this, we mean that the dwell
times, rather than being optimized, are marginalized out, so that we are left with only the sequence
of states and not the particular times they occurred. In many applications, this state sequence may
be of greater interest than the dwell times, especially since, as we have shown, maximum likelihood dwell times are often infinitesimal and hence non-representative of typical system behavior.
Moreover, this version of the problem has the advantage of always being well-defined. Because state
sequences have probabilities rather than likelihoods, a most probable state sequence will always
exist.
Acknowledgments
Funding for this work was provided in part by the National Sciences and Engineering Research
Council of Canada and by the Ottawa Hospital Research Institute.
References
[1] F.G. Ball and J.A. Rice. Stochastic models for ion channels: introduction and bibliography.
Mathematical Biosciences, 112(2):189, 1992.
[2] D.J. Wilkinson. Stochastic Modelling for Systems Biology. Chapman & Hall/CRC, 2006.
[3] M. Holder and P.O. Lewis. Phylogeny estimation: traditional and Bayesian approaches. Nature
Reviews Genetics, 4(4):275-284, 2003.
[4] H.M. Taylor and S. Karlin. An Introduction to Stochastic Modeling. Academic Press, 1998.
[5] D.R. Fredkin and J.A. Rice. Maximum likelihood estimation and identification directly from
single-channel recordings. Proceedings: Biological Sciences, pages 125-132, 1992.
[6] R. Rosales, J.A. Stark, W.J. Fitzgerald, and S.B. Hladky. Bayesian restoration of ion channel
records using hidden Markov models. Biophysical Journal, 80(3):1088-1103, 2001.
[7] M.A. Suchard, R.E. Weiss, and J.S. Sinsheimer. Bayesian selection of continuous-time Markov
chain evolutionary models. Molecular Biology and Evolution, 18(6):1001-1013, 2001.
[8] D.T. Crommelin and E. Vanden-Eijnden. Fitting timeseries by continuous-time Markov chains:
A quadratic programming approach. Journal of Computational Physics, 217(2):782-805, 2006.
[9] M.L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming.
John Wiley and Sons, New York, 1994.
[10] S. Hedlund and A. Rantzer. Optimal control of hybrid systems. In Proceedings of the 38th
IEEE Conference on Decision and Control, volume 4, 1999.
[11] D.P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont, Mass.,
1995.
[12] N.G. Van Kampen. Stochastic Processes in Physics and Chemistry. North-Holland, 2007.
[13] D.T. Gillespie. Exact stochastic simulation of coupled chemical reactions. Journal of Physical
Chemistry, 81:2340-2361, 1977.
[14] A. Hordijk, D.L. Iglehart, and R. Schassberger. Discrete time methods for simulating continuous time Markov chains. Advances in Applied Probability, pages 772-788, 1976.
3,156 | 386 | ε-Entropy and the Complexity of Feedforward Neural Networks
Robert C. Williamson
Department of Systems Engineering
Research School of Physical Sciences and Engineering
Australian National University
GPO Box 4, Canberra, 2601, Australia
Abstract
We develop a new feedforward neural network representation of Lipschitz
functions from [0, p]ⁿ into [0, 1] based on the level sets of the function. We
show that

nL/(2ε_r) + n/2 + (1 + 1/√2) (pL/(4ε_r))ⁿ

is an upper bound on the number of nodes needed to represent f to within
uniform error ε_r, where L is the Lipschitz constant. We also show that the
number of bits needed to represent the weights in the network in order to
achieve this approximation is given by

O( log₂(pL/ε_r) (pL/(4ε_r))ⁿ ).

We compare this bound with the ε-entropy of the functional class under
consideration.
1 INTRODUCTION
We are concerned with the problem of the number of nodes needed in a feedforward
neural network in order to represent a function to within a specified accuracy.
All results to date (e.g. [7,10,15]) have been in the form of existence theorems,
stating that there does exist a neural network which achieves a certain accuracy of
representation, but no indication is given of the number of nodes necessary in order
to achieve this. The two techniques we use are the notion of ε-entropy (also known
Table 1: Hierarchy of theoretical problems to be solved.

ABSTRACT
1. Determination of the general approximation properties of feedforward
neural networks. (Non-constructive results of the form mentioned
above [15].)
2. Explicit constructive approximation theorems for feedforward neural
networks, indicating the number (or bounds on the number) of nodes
needed to approximate a function from a given class to within a given
accuracy. (This is the subject of the present paper. We are unaware of
any other work along these lines apart from [6].)
3. Learning in general. That is, results on learning that are not dependent
on the particular representation chosen. The exciting new results using
the Vapnik-Chervonenkis dimension [4,9] fit into this category, as do
studies on the use of Shortest Description Length principles [2].
4. Specific results on capabilities of learning in a given architecture [11].
5. Specific algorithms for learning in a specific architecture [14].
CONCRETE
as metric entropy) originally introduced by Kolmogorov [16] and a representation
of a function in terms of its level sets, which was used by Arnold [1]. The place of
the current paper with respect to other works in the literature can be judged from
table 1.

We study the question of representing a function f in the class F_{L,C}^{(p_1,…,p_n),n}, which
is the space of real valued functions defined on the n-dimensional closed interval
×_{i=1}^{n} [0, p_i] with a Lipschitz constant L and bounded in absolute value by C. If
p_i = p for i = 1, …, n we denote the space F_{L,C}^{p,n}. The error measure we use is the
uniform or sup metric:

ε := sup_{x ∈ [0,p]ⁿ} |f̂(x) − f(x)| ,   (1)

where f̂ is the approximation of f.

2 ε-ENTROPY OF FUNCTIONAL CLASSES
The ε-entropy H_ε gives an indication of the number of bits required to represent
with accuracy ε an arbitrary function f in some functional class. It is defined as
the logarithm to base 2 of the number of elements in the smallest ε-cover of the
functional class. Kolmogorov [16] has proved that

H_ε(F_{L,C}^{p,n}) ~ B(n) (pL/ε)ⁿ as ε → 0 ,   (2)

where B(n) is a constant which depends only on n. We use this result as a yardstick
for our neural network representation. A more powerful result is [18, p.86]:
[Figure 1: Illustration of some level sets α_i, α_{i−1}, …, α_{i−4} of a function on R².]
Theorem 1 Let p be a non-negative integer and let α ∈ (0, 1]. Set s = p + α. Let
F_{L,C}^{s,n} denote the space of real functions f defined on [0, p]ⁿ all of whose partial
derivatives of order p satisfy a Lipschitz condition with constant L and index α,
and are such that

|∂^{k_1 + ⋯ + k_n} f / ∂x_1^{k_1} ⋯ ∂x_n^{k_n}| ≤ C   for Σ_{i=1}^{n} k_i ≤ p.   (3)

Then for sufficiently small ε,

A(s, n) (1/ε)^{n/s} ≤ H_ε(F_{L,C}^{s,n}) ≤ B(s, n) (1/ε)^{n/s} ,   (4)

where A(s, n) and B(s, n) are positive constants depending only on s and n.
We discuss the implication of this below.
3 A NEURAL NETWORK REPRESENTATION BASED ON LEVEL SETS
We develop a new neural network architecture for representing functions from [0, p]ⁿ
onto [0, 1] (the restriction of the range to [0, 1] is just a convenience and can be
easily dropped). The basic idea is to represent approximations f̂ of the function
f in terms of the level sets of f (see figure 1). Then neural networks are used to
approximate the above-sets l̄_α(f) of f, where l̄_α(f) := {x : f(x) ≥ α} = ⋃_{β ≥ α} l_β(f)
and l_α(f) is the αth level set: l_α(f) := {x : f(x) = α}. The approximations l̂_{α_i}(f) can
be implemented using three-layer neural nets with threshold logic neurons. These
approximations are of the form
l̂_{α_i}(f) = ⋃_{m=1}^{M_{α_i}} ⋃_{λ_m=1}^{Λ_m} ⋂_{j=1}^{n} [ S(h_{u_j}, θ_j^{λ_m}) ∩ S(h_{−u_j}, −(θ_j^{λ_m} + ψ_j^{λ_m})) ] ,   (5)

where the inner union over λ_m forms the approximation to the mth component of l̂_{α_i}(f), and each
intersection over j is an n-rectangle of dimensions ψ_1^{λ_m} × ⋯ × ψ_n^{λ_m}.
Here ψ_j^{λ_m} is the width in the jth dimension of the λ_m-th rectangular part of the
mth component (disjoint connected subset) C_m^{(i)} of the ith approximate above-set
l̂_{α_i}; M_{α_i} is the number of components of the above-set l̄_{α_i}(f); Λ_m is the number of
n-rectangles (parts) that are required to form an ε′-cover for C_m^{(i)}; and u_j := (u_j^{(1)}, …, u_j^{(n)}) with u_j^{(l)} = δ_{jl}. S(h_{w,θ}) is the n-half-space defined by the hyperplane
h_{w,θ}:

S(h_{w,θ}) := {x : h_{w,θ}(x) ≥ 0} ,   (6)

where h_{w,θ}(x) = w·x − θ and w = (w_1, …, w_n).
f is then approximat.ed by
.f~S-ua,.,( ,I). -~ -.1- r
2.\
N
+ ---; L 17
1
/",1=1
'"
(f)
(:t.,),
(7)
=
where OJ
iA
.l, i
L. .. " .V and I..., is the indicator function of a set S. The
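Assuming the reconstruction of Equation (7) above is right, the level-set averaging itself is immediate to compute pointwise, here with exact above-sets in place of the half-space covers:

```python
def level_set_approximation(f, N):
    """f_hat of Eq. (7): average of indicators of the above-sets of f.

    Quantizes f(x) in [0, 1] to the midpoints of N bins, so the uniform
    error of the averaging step is at most 1/(2N).
    """
    def f_hat(x):
        above = sum(1 for i in range(1, N + 1) if f(x) >= i / N)
        return 1.0 / (2 * N) + above / N
    return f_hat
```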
The approximation f̂_as(x) is then further approximated by implementing (5) using
N three-layer neural nets in parallel:
f_NN(x) = 1/(2N) + (1/N) Σ_{i=1}^{N} NN_i(x) ,   x ∈ ×_{l=1}^{n} [0, p_l],   (8)

where each subnetwork NN_i implements (5) with first-layer threshold units of the form
sgn( Σ_{l=1}^{n} w_{kl}^{(i)} x_l − θ_k^{(i)} ), x = (x_1, …, x_n)ᵀ, and V^{(i)} is the number of nodes in the second
layer. The last layer combines the above-sets in the manner of (7). The general
architecture of the network is shown in figure 2.
4 NUMBER OF BITS NEEDED TO REPRESENT THE WEIGHTS OF THE NETWORK
The two main results of this paper are bounds on the number of nodes needed in such
a neural network in order to represent f ∈ F_{L,C}^{p,n} with uniform error ε_r, and bounds
on the number of bits needed to represent the weights in such an approximation.

Theorem 2 The number of nodes needed in a neural network of the above architecture in order to represent any f ∈ F_{L,C}^{p,n} to within ε_r in the sup-metric is given by

nL/(2ε_r) + n/2 + (1 + 1/√2) (pL/(4ε_r))ⁿ .   (9)
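Taking the reconstructed bound (9) at face value, it is simple to evaluate; since the formula is our reconstruction of OCR-damaged text, treat the numbers as indicative only.

```python
import math

def node_bound(n, L, p, eps_r):
    """Node count of Theorem 2 (as reconstructed above)."""
    return (n * L / (2 * eps_r) + n / 2
            + (1 + 1 / math.sqrt(2)) * (p * L / (4 * eps_r)) ** n)

# Example: a Lipschitz-1 function on the unit square to uniform error 0.05.
# node_bound(2, 1.0, 1.0, 0.05) ~= 63.7
```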
[Figure 2: The neural network architecture we adopt: the n-dimensional input x feeds N three-layer
subnetworks NN_1, …, NN_N in parallel, where NN_i approximates I_{l̂_{α_i}(f)}, and their outputs are
combined as in (7).]
This theorem is proved in a straightforward manner by taking account of all the
errors incurred in the approximation of a worst-case function in F_{L,C}^{p,n}.

Since comparing the number of nodes alone is inadequate for comparing the complexity of neural nets (because the nodes themselves could implement quite complex
functions) we have also calculated the number of bits needed to represent all of the
weights (including zero weights which denote no connection) in order to achieve an
ε_r-approximation:¹

Theorem 3 The number of bits needed to specify the weights in a neural network
with the above architecture in order to represent an arbitrary function f ∈ F_{L,C}^{p,n}
with accuracy ε_r in the sup-metric is bounded above by

O( log₂(pL/ε_r) (pL/(4ε_r))ⁿ ) .   (10)

Equation 10 can be compared with (2) to see that the neural net representation is
close to optimal. It is suboptimal by a factor of O(log₂(pL/ε_r)); the (1/4)ⁿ term is
considered subsumed into the B(n) term in (2).
¹ The idea of using the number of bits as a measure of network complexity has also
recently been adopted in [5].
5 FURTHER WORK
Theorem 3 shows that the complexity of representing an arbitrary f ∈ F_{L,C}^{p,n} is
exponential in n. This is not so much a limitation of the neural network as an
indication that our problem is too hard. Theorem 1 shows that if smoothness
constraints are imposed, then the complexity can be considerably reduced. It is an
open problem to determine whether the construction of the network presented in
this paper can be extended to make good use of smoothness constraints.

Of course the most important question is whether functions can be learned using
neural networks. Apropos of this is Stone's result on rates of convergence in nonparametric regression [17]. Although we do not have space to give details here,
suffice it to say that he shows that the gains suggested by theorem 1 by imposing
smoothness constraints in the representation problem are also achievable in the
learning problem. A more general statement of this type of result, making explicit
the connexion with ε-entropy, is given by Yatracos [19]:
Theorem 4 Let M be an L₁-totally bounded set of measures on a probability space.
Let the metric defined on the space be the L₁-distance between measures. Then there
exists a uniformly consistent estimator θ̂_i for some parameter θ from a possibly
infinite dimensional family of measures Θ ⊂ M whose rate of convergence a_i
asymptotically satisfies the equation

a_i = [ H_{a_i}(Θ) / i ]^{1/2} ,   (11)

where H_ε(Θ) is the ε-entropy of Θ.
Similar results have been discussed by Ben-David et al. [3] (who have made use of
Dudley's (loose) relationships between ε-entropy and Vapnik-Chervonenkis dimension [8]) and others [12,13]. There remain many open problems in this field. One of
the main difficulties however is the calculation of H_ε for non-trivial function classes.
One of the most significant results would be a complete and tight determination of
the ε-entropy for a feedforward neural network.
Acknowledgements

This work was supported in part by a grant from ATERB. I thank Andrew Paice
for many useful discussions.
References
[1] V. I. Arnold, Representation of Continuous Functions of Three Variables by the Superposition of Continuous Functions of Two Variables. Matematicheskii Sbornik (N.S.), 48 (1959), pp. 3-74. Translation in American Mathematical Society Translations, Series 2, 28 (1963), pp. 61-147.
[2] A. R. Barron, Statistical Properties of Artificial Neural Networks, in Proceedings of the 28th Conference on Decision and Control, 1989, pp. 280-285.
[3] S. Ben-David, A. Itai and E. Kushilevitz, Learning by Distances, in Proceedings of the Third Annual Workshop on Computational Learning Theory, M. Fulk and J. Case, eds., Morgan Kaufmann, San Mateo, 1990, pp. 232-245.
[4] A. Blumer, A. Ehrenfeucht, D. Haussler and M. K. Warmuth, Learnability and the Vapnik-Chervonenkis Dimension, Journal of the Association for Computing Machinery, 36 (1989), pp. 929-965.
[5] J. Bruck and J. W. Goodman, On the Power of Neural Networks for Solving Hard Problems, Journal of Complexity, 6 (1990), pp. 129-135.
[6] S. M. Carroll and B. W. Dickinson, Construction of Neural Nets using the Radon Transform, in Proceedings of the International Joint Conference on Neural Networks, 1989, pp. 607-611 (Volume I).
[7] G. Cybenko, Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control, Signals, and Systems, 2 (1989), pp. 303-314.
[8] R. M. Dudley, A Course on Empirical Processes, in Ecole d'Ete de Probabilites de Saint-Flour XII-1982, R. M. Dudley, H. Kunita and F. Ledrappier, eds., Springer-Verlag, Berlin, 1984, pp. 1-142, Lecture Notes in Mathematics 1097.
[9] A. Ehrenfeucht, D. Haussler, M. Kearns and L. Valiant, A General Lower Bound on the Number of Examples Needed for Learning, Information and Computation, 82 (1989), pp. 247-261.
[10] K.-I. Funahashi, On the Approximate Realization of Continuous Mappings by Neural Networks, Neural Networks, 2 (1989), pp. 183-192.
[11] S. I. Gallant, A Connectionist Learning Algorithm with Provable Generalization and Scaling Bounds, Neural Networks, 3 (1990), pp. 191-201.
[12] S. van de Geer, A New Approach to Least-Squares Estimation with Applications, The Annals of Statistics, 15 (1987), pp. 587-602.
[13] R. Hasminskii and I. Ibragimov, On Density Estimation in the View of Kolmogorov's Ideas in Approximation Theory, The Annals of Statistics, 18 (1990), pp. 999-1010.
[14] R. Hecht-Nielsen, Theory of the Backpropagation Neural Network, in Proceedings of the International Joint Conference on Neural Networks, 1989, pp. 593-605, Volume 1.
[15] K. Hornik, M. Stinchcombe and H. White, Multilayer Feedforward Networks are Universal Approximators, Neural Networks, 2 (1989), pp. 359-366.
[16] A. N. Kolmogorov and V. M. Tihomirov, ε-Entropy and ε-Capacity of Sets in Functional Spaces, Uspekhi Mat. Nauk (N.S.), 14 (1959), pp. 3-86. Translation in American Mathematical Society Translations, Series 2, 17 (1961), pp. 277-364.
[17] C. J. Stone, Optimal Global Rates of Convergence for Nonparametric Regression, The Annals of Statistics, 10 (1982), pp. 1040-1053.
[18] A. G. Vitushkin, Theory of the Transmission and Processing of Information, Pergamon Press, Oxford, 1961. Originally published as Otsenka slozhnosti zadachi tabulirovaniya (Estimation of the Complexity of the Tabulation Problem), Fizmatgiz, Moscow, 1959.
[19] Y. G. Yatracos, Rates of Convergence of Minimum Distance Estimators and Kolmogorov's Entropy, The Annals of Statistics, 13 (1985), pp. 768-774.
3,157 | 3,860 | Filtering Abstract Senses From Image Search Results
Kate Saenko1,2 and Trevor Darrell2
1
MIT CSAIL, Cambridge, MA
2
UC Berkeley EECS and ICSI, Berkeley, CA
[email protected], [email protected]
Abstract
We propose an unsupervised method that, given a word, automatically selects
non-abstract senses of that word from an online ontology and generates images
depicting the corresponding entities. When faced with the task of learning a visual model based only on the name of an object, a common approach is to find
images on the web that are associated with the object name and train a visual classifier from the search result. As words are generally polysemous, this approach
can lead to relatively noisy models if many examples due to outlier senses are
added to the model. We argue that images associated with an abstract word sense
should be excluded when training a visual classifier to learn a model of a physical
object. While image clustering can group together visually coherent sets of returned images, it can be difficult to distinguish whether an image cluster relates to
a desired object or to an abstract sense of the word. We propose a method that uses
both image features and the text associated with the images to relate latent topics
to particular senses. Our model does not require any human supervision, and
takes as input only the name of an object category. We show results of retrieving
concrete-sense images in two available multimodal, multi-sense databases, as well
as experiment with object classifiers trained on concrete-sense images returned by
our method for a set of ten common office objects.
1 Introduction
Many practical scenarios call for robots or agents which can learn a visual model on the fly given
only a spoken or textual definition of an object category. A prominent example is the Semantic
Robot Vision Challenge (SRVC)1 , which provides robot entrants with a text-file list of categories to
be detected shortly before the competition begins. More generally, we would like a robot or agent
to be able to engage in situated dialog with a human user and to understand what objects the user
is referring to. It is generally unreasonable to expect users to refer only to objects covered by static,
manually annotated image databases. We therefore need a way to find images for an arbitrary object
in an unsupervised manner.
A common approach to learning a visual model based solely on the name of an object is to find
images on the web that co-occur with the object name by using popular web search services, and
train a visual classifier from the search results. As words are generally polysemous (e.g. mouse)
and are often used in different contexts (e.g. mouse pad), this approach can lead to relatively noisy
models. Early methods used manual intervention to identify clusters corresponding to the desired
sense [2], or grouped together visually coherent sets of images using automatic image clustering
(e.g. [9]). However, image clusters rarely exactly align with object senses because of the large
variation in appearance within most categories. Also, clutter from abstract senses of the word that
1 http://www.semantic-robot-vision-challenge.org
Figure 1: WISDOM separates the concrete (physical) senses from the abstract ones. [Figure: the input word "cup" is looked up in an online dictionary. WISDOM keeps the object senses, drink container ("a small open container usually used for drinking; usually has a handle": 'he put the cup back in the saucer'; 'the handle of the cup was missing') and trophy ("cup, loving cup: a large metal vessel with two handles that is awarded as a trophy to the winner of a competition": 'the school kept the cups in a special glass case'), and filters out the abstract sense, sporting event ("cup: a contest in which a cup is awarded": 'the World Cup is the world's most widely watched sporting event').]
are not associated with a physical object can further complicate matters (e.g. mouse as in "a timid person").2
To address these issues, we propose an unsupervised Web Image Sense DisambiguatiOn Model
(WISDOM), illustrated in Figure 1. Given a word, WISDOM automatically selects concrete senses
of that word from a semantic dictionary and generates images depicting the corresponding entities,
first finding coherent topics in both text and image domains, and then grounding the learned topics
using the selected word senses. Images corresponding to different visual manifestations of a single
physical sense are linked together based on the likelihood of their image content and surrounding
text (words in close proximity to the image link) being associated with the given sense.
We make use of a well-known semantic dictionary (WordNet [8]), which has been previously used
together with a text-only latent topic model to construct a probabilistic model of individual word
senses for use with online images [17]. We build on this work by incorporating a visual term, and
by using the Wordnet semantic hierarchy to automatically infer whether a particular sense describes
a physical entity or a non-physical concept. We show results of detecting such concrete senses in
two available multimodal (text and image), multi-sense databases: the MIT-ISD dataset [17], and
the UIUC-ISD dataset [14]. We also experiment with object classification in novel images, using
classifiers trained on the images collected via our method for a set of ten common objects.
2 Related Work
Several approaches to building object models from image search results have been proposed. Some
have relied on visual similarity, either selecting a single inlier image cluster based on a small validation set [9], or bootstrapping object classifiers from existing labeled images [13]. In [18] a classifier
based on text features (such as whether the keyword appears in the URL) was used re-rank the images before bootstrapping the image model. However, the text ranker was category-independent and
thus unable to learn words predictive of a specific word sense. An approach most similar to ours [2]
discovered topics in the textual context of images using Latent Dirichlet Allocation (LDA), however,
manual intervention by the user was required to sort the topics into positive and negative for each
category. Also, the combination of image and text features is used in some web retrieval methods
(e.g. [7]), however, our work is focused not on instance-based image retrieval, but on category-level
modeling.
Two recent papers have specifically addressed polysemous words. In Saenko and Darrell [17], the
use of dictionary definitions to train an unsupervised visual sense model was proposed. However, the
user was required to manually select the definition for which to build the model. Furthermore, the
2 While the first few pages of image search results returned by modern search engines generally have very few abstract examples, possibly due to the success of reranking based on previous users' click-through history, results from farther down the list are much less uniform, as our experimental results show.
sense model did not incorporate visual features, but rather used the text contexts to re-rank images,
after which an image classifier was built on the top-ranked results. Loeff et al. [14] performed
spectral clustering in both the text and image domain and evaluated how well the clusters matched
different senses. However, as a pure clustering approach, this method cannot assign sense labels.
In the text domain, Yarowsky [20] proposed an unsupervised method for traditional word sense
disambiguation (WSD), and suggested the use of dictionary definitions as an initial seed. Also, Boiy
et al. [4] determined which words are related to a visual domain using hypothesis testing on a target
(visual) corpus, compared to a general (non-visual) corpus.
A related problem is modeling word senses in images manually annotated with words, such as the
caption "sky, airplane" [1]. Models of annotated images assume that there is a correspondence
between each image region and a word in the caption (e.g. Corr-LDA, [5]). Such models predict
words, which serve as category labels, based on image content. In contrast, our model predicts a
category label based on all of the words in the web image's text context, where a particular word
does not necessarily have a corresponding image region, and vice versa. In work closely related to
Corr-LDA, a People-LDA [11] model is used to guide topic formation in news photos and captions,
using a specialized face recognizer. The caption data is less constrained than annotations, including
non-category words, but still far more constrained than generic webpage text.
3 Sense-Grounding with a Dictionary Model
We wish to estimate the probability that an image search result embedded in a web page is one of
a concrete or abstract concept. First, we determine whether the web image is related to a particular
word sense, as defined by a dictionary. The dictionary model presented in [17] provides an estimate
of word sense based on the text associated with the web image. We will first describe this model,
and then extend it to include both an image component and an adaptation step to better reflect word
senses present in images.
The dictionary model [17] uses LDA on a large collection of text related to the query word to learn
latent senses/uses of the word. LDA [6] discovers hidden topics, i.e. distributions over discrete
observations (such as words), in the data. Each document is modeled as a mixture of topics z ∈ {1, ..., K}. A given collection of M documents, each containing a bag of N_d words, is assumed to be generated by the following process: First, we sample the parameters φ_j of a multinomial distribution over words from a Dirichlet prior with parameter β, for each topic j = 1, ..., K. For each document d, we sample the parameters θ_d of a multinomial distribution over topics from a Dirichlet prior with parameter α. Finally, for each word token i, we choose a topic z_i from the multinomial θ_d, and then choose a word w_i from the multinomial φ_{z_i}.
Since learning LDA topics directly from the images' text contexts can lead to poor results due to
the low quantity and irregular quality of such data, an additional dataset of text-only web pages
is created for learning, using regular web search. The dictionary model then uses the limited text
available in the WordNet entries to relate dictionary sense to latent text topics. For example, sense
1 of ?bass? contains the definition ?the lowest part of the musical range,? as well as the hypernym
(?pitch?) and other semantic relations. The bag-of-words extracted from such a semantic entry for
sense s ? {1, 2, ..., S} is denoted by the variable es = (e1 , e2 , ..., eEs ), where Es is the total number
of words. The dictionary model assumes that the sense is independent of the words conditioned on
the distribution of topics in the document. For a web image with an associated text document dt , the
conditional probability of sense is given by
P(s|d_t) = \sum_{j=1}^{K} P(s|z = j) P(z = j|d_t),    (1)
where the distribution of latent topics in the text context, P(z|d_t), is given by the θ_{d_t} variable,
computed by generalizing the learned LDA model to the (unseen) text contexts. The likelihood of
a sense given latent topic z = j is defined as the normalized average likelihood of words in the
dictionary entry e_s,3

P(s|z) ∝ (1/E_s) \sum_{i=1}^{E_s} P(e_i|z),    (2)
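To make Eqs. (1) and (2) concrete, both estimates reduce to a few matrix operations once the LDA posteriors are available. The following sketch is ours, not the authors' implementation; it assumes phi (the K x V topic-word distributions), theta_d (the topic proportions of one text context) and tokenized dictionary entries are already computed.

```python
import numpy as np

def sense_given_topic(phi, entries):
    """Eq. (2): P(s|z) as the normalized average likelihood of each
    dictionary entry's words under every topic.
    phi: (K, V) topic-word distributions; entries: list of S word-id lists."""
    p_s_z = np.stack([phi[:, e].mean(axis=1) for e in entries])  # (S, K)
    return p_s_z / p_s_z.sum(axis=0, keepdims=True)  # normalize over senses

def sense_given_text(theta_d, p_s_z):
    """Eq. (1): P(s|d_t) = sum_j P(s|z=j) P(z=j|d_t)."""
    return p_s_z @ theta_d  # (S, K) @ (K,) -> (S,)
```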
Incorporating Image Features. The dictionary model (1) does not take into account the image part
of the image/text pair. Here, we extend it to include an image term, which can potentially provide
complementary information. First, we estimate P (s|di ), or the probability of a sense given an image
d_i. Similar to the text-only case, we learn an LDA model consisting of latent topics v ∈ {1, ..., L}, using the visual bag-of-words extracted from the unlabeled images. The estimated θ variables give P(v|d_i). To compute the conditional probability of a sense given a visual topic, we marginalize the joint P(s, v) across all image and associated text documents {d_i, d_t} in the collection
P(s|v) ∝ \sum_{k=1}^{M} P(s|d_t = k) P(v|d_i = k)    (3)
Note that the above assumes conditional independence of the sense and the visual topic given the
observations. Intuitively, this provides us with an estimate of the collocation of senses with visual
topic. We can now compute the probability of a dictionary sense for a novel image d_{i*} as:
P(s|d_{i*}) = \sum_{j=1}^{L} P(s|v = j) P(v = j|d_{i*})    (4)
Finally, the joint text and image model is defined as the combination of the text-space and image-space models via the sum rule,
P(s|d_i, d_t) = α P(s|d_i) + (1 − α) P(s|d_t)    (5)
Our assumption in using the sum rule is that the combination can be modelled as a mixture of
experts, where the features of one modality are independent of sense given the other modality [3].
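Continuing the sketch above (our notation, not the released code), Eqs. (3)-(5) become simple products over the M paired image/text documents; theta_img holds the visual-topic proportions of the training images and p_s_dt the per-document text estimates of Eq. (1), with alpha = 0.5 as used in the experiments.

```python
def sense_given_visual_topic(p_s_dt, theta_img):
    """Eq. (3): P(s|v), marginalized over the M paired documents.
    p_s_dt: (M, S) text-based sense estimates; theta_img: (M, L) NumPy arrays."""
    p_s_v = p_s_dt.T @ theta_img                     # (S, L)
    return p_s_v / p_s_v.sum(axis=0, keepdims=True)  # normalize over senses

def sense_multimodal(theta_img_new, p_s_dt_new, p_s_v, alpha=0.5):
    """Eq. (4) for a novel image, combined with the text-side estimate
    via the sum rule of Eq. (5)."""
    p_s_di = p_s_v @ theta_img_new                   # (S,)
    return alpha * p_s_di + (1 - alpha) * p_s_dt_new
```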
Adaptation. Recall that we can estimate θ_{d_t} for the unseen web image contexts by generalizing the web-text LDA model using Gibbs sampling. However, web topics can be a poor match to image search data (e.g. the "genome research" topic of mouse). Our solution is to adapt the web topics to the image search data. We do this by fixing the z assignments of the web documents and sampling the z's of the image contexts for a few iterations. This procedure updates the topics to better reflect
the latent dimensions present in the image search data, without the overfitting effect mentioned
earlier.
4 Filtering out Abstract Senses
To our knowledge, no previous work has considered the task of detecting concrete vs. abstract senses
in general web images. We can do so by virtue of the multimodal sense grounding method presented
in the previous section. Given a set of senses for a particular word, our task is to classify each sense
as being abstract or concrete. Fortunately, WordNet contains relatively direct metadata related to
the physicality of a word sense. In particular, one of the main functions of WordNet is to put words
in semantic relation to each other using the concepts of hyponym and hypernym. For example,
"scarlet" and "crimson" are hyponyms of "red", while "color" is a hypernym of "red". One can follow the chain of direct hypernyms all the way to the top of the tree, "entity". Thus, we can detect a concrete sense by examining its hypernym tree to see if it contains one of the following nodes: "article", "instrumentality", "article of clothing", "animal", or "body part". What's more, we can thus
restrict the model to specific types of physical entities: living things, artifacts, clothing, etc.
In addition, WordNet contains lexical file information for each sense, marking it as a state, an animal, etc. For example, the sense "mouse, computer mouse" is marked <artifact>. In this paper, we
classify a WordNet sense as being due to a concrete object when the lexical tag is one of <animal>,
<artifact>, <body>, <plant> and <act>. We exclude people and proper nouns in the experiments
in this paper, as well as prune away infrequent senses.
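The WordNet tests above can be reproduced with, for example, NLTK's WordNet interface. The sketch below is our own; the root-node and lexical-tag lists mirror the ones named in this section, and the pruning of infrequent senses and proper nouns is omitted.

```python
from nltk.corpus import wordnet as wn

CONCRETE_ROOTS = {"article", "instrumentality", "article of clothing",
                  "animal", "body part"}
CONCRETE_TAGS = {"noun.animal", "noun.artifact", "noun.body",
                 "noun.plant", "noun.act"}

def concrete_senses(word):
    """Return the noun synsets of `word` judged concrete by either the
    hypernym-tree test or the lexical-file test."""
    keep = []
    for syn in wn.synsets(word, pos=wn.NOUN):
        hypers = {lemma.name().replace("_", " ")
                  for anc in syn.closure(lambda s: s.hypernyms())
                  for lemma in anc.lemmas()}
        if hypers & CONCRETE_ROOTS or syn.lexname() in CONCRETE_TAGS:
            keep.append(syn)
    return keep

print([s.name() for s in concrete_senses("cup")])  # keeps the physical senses of "cup"
```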
3 The average word likelihood was found to be a good indicator of how relevant a topic is to a sense. The total word likelihood could be used, but it would allow senses with longer entries to dominate.
5 Data
We evaluated the outlined algorithms on three datasets: the five-word MIT-ISD dataset [17], the
three-word UIUC-ISD dataset [14], and the OFFICE dataset of ten common office objects that we collected for the classification experiment.4 All datasets had been collected automatically by issuing
queries to the Yahoo Image SearchTM engine and downloading the returned images and corresponding HTML web pages. For the MIT-ISD dataset, the query terms used were: BASS, FACE, MOUSE,
SPEAKER and WATCH. For the UIUC-ISD dataset, three basic query terms were used: BASS,
CRANE and SQUASH. To increase corpus size, the authors also used supplemental query terms
for each word. The search terms selected were those related to the concrete senses (e.g. "construction cranes", "whooping crane", etc.). Since these human-selected search terms require human
input, while our method only requires a list of words, we exclude them from our experiments. The
OFFICE dataset queries were: CELLPHONE, FORK, HAMMER, KEYBOARD, MUG, PLIERS,
SCISSORS, STAPLER, TELEPHONE, WATCH.
The images were labeled by a human annotator with all concrete senses for each word. The annotator
saw only the images, and not the surrounding text or any dictionary definitions. For the MIT-ISD
dataset, each concrete sense was labeled as core, related, and unrelated. Images where the object
was too small or too occluded were labeled as related. For the UIUC-ISD dataset, the labels for
each concrete sense were similarly core, related and unrelated. In addition, a people label was
used for unrelated images depicting faces or a crowd. 5 The OFFICE dataset was only labeled with
core and unrelated labels. We evaluated our models on two retrieval tasks: retrieval of only core
images of each sense, and retrieval of both core and related images. In the former case, core labels
were used as positive labels for each sense, with related, unrelated and people images labeled as
negative. In the latter case, core and related images were labeled as positive, and unrelated and
people as negative. Note that the labels were only used in testing, and not in training.
To provide training data for the web text topic model, we also collected an unlabeled corpus of textonly webpages for each word. These additional webpages were collected via regular web search for
the single-word search term (e.g. CRANE), and were not labeled.
6 Features
When extracting words from web pages, all HTML tags are removed, and the remaining text is
tokenized. A standard stop-word list of common English words, plus a few domain-specific words
like "jpg", is applied, followed by a Porter stemmer [16]. Words that appear only once and the
actual word used as the query are pruned. To extract text context words for an image, the image
link is located automatically in the corresponding HTML page. All word tokens in a 100-token
window surrounding the location of the image link are extracted. The text vocabulary size used for
the dictionary model ranges between 12K-20K words for the different search words.
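A minimal version of this text pipeline, using NLTK's English stop list and Porter stemmer, might look as follows (our sketch; HTML stripping, tokenization and locating the image link are assumed done, and the corpus-level pruning of words that occur only once is omitted).

```python
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

STOP = set(stopwords.words("english")) | {"jpg"}  # standard list plus domain words
stem = PorterStemmer().stem

def context_words(tokens, img_pos, query, window=100):
    """Stemmed tokens in a `window`-token span around the image link position."""
    lo = max(0, img_pos - window // 2)
    out = []
    for tok in tokens[lo:img_pos + window // 2]:
        tok = tok.lower()
        if tok.isalpha() and tok not in STOP and tok != query:
            out.append(stem(tok))
    return out
```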
To extract image features, all images are resized to 300 pixels in width and converted to grayscale.
Two types of local feature points are detected in the image: edge features [9] and scale-invariant
salient points. To detect edge points, we first perform Canny edge detection, and then sample a fixed
number of points along the edges from a distribution proportional to edge strength. The scales of the
local regions around points are sampled uniformly from the range of 10-50 pixels. To detect scaleinvariant salient points, we use the Harris-Laplace [15] detector with the lowest strength threshold
set to 10. Altogether, 400 edge points and approximately the same number of Harris-Laplace points
are detected per image. A 128-dimensional SIFT descriptor is used to describe the patch surrounding each interest point. After extracting a bag of interest point descriptors for each image, vector
quantization is performed. A codebook of size 800 is constructed by k-means clustering a randomly
chosen subset of the database (300 images per keyword), and all images are converted to bags of the
resulting visual words (cluster centers of the codebook.) No spatial information is included in the
image representation, rather it is treated as a bag-of-words.
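The vector-quantization step can be sketched with scikit-learn's k-means (our code; the paper extracts descriptors with the VLFeat toolbox, which is assumed done here).

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_sets, k=800, n_images=300, seed=0):
    """Fit a k-means codebook on SIFT descriptors pooled from a random
    subset of images. descriptor_sets: list of (n_i, 128) arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptor_sets),
                     size=min(n_images, len(descriptor_sets)), replace=False)
    pool = np.vstack([descriptor_sets[i] for i in idx])
    return KMeans(n_clusters=k, n_init=4, random_state=seed).fit(pool)

def to_visual_words(codebook, descriptors, k=800):
    """Quantize one image's descriptors into a visual-word histogram."""
    return np.bincount(codebook.predict(descriptors), minlength=k)
```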
4 The MIT-ISD and OFFICE datasets are available at http://people.csail.mit.edu/saenko
5 The UIUC-ISD dataset and its complete description can be obtained at http://visionpc.cs.uiuc.edu/isd/index.html
Figure 2: The top 25 images returned by (a) the text model and (b) the image model for mouse-4 (device).
7 Retrieval Experiments
In this section, we evaluate WISDOM on the task of retrieving concrete sense images from search
results. Below are the actual concrete senses that were automatically selected from WordNet by our
model for each word in the datasets:
MIT-ISD: bass-7 (instrument), bass-8 (fish), face-1 (human face), face-13 (surface), mouse-1 (rodent), mouse-4 (device), speaker-2 (loudspeaker), watch-1 (timepiece)
UIUC-ISD: bass-7 (instrument), bass-8 (fish), crane-4 (machine), crane-5 (bird), squash-1 (plant),
squash-3 (game)
OFFICE: cellphone-1 (mobile phone), fork-1 (utensil), hammer-2 (hand tool), keyboard-1 (any keyboard), mug-1 (drinking vessel), pliers-1 (tool), scissors-1 (cutting tool), stapler-1 (stapling device), telephone-1 (landline phone), watch-1 (timepiece)
We train a separate web text LDA model and a separate image LDA model for each word in the
dataset. The number of topics K is a parameter to the model that represents the dimensionality of
the latent space used by the model. We set K = 8 for all LDA models in the following experiments.
This was done so that the number of latent text topics is roughly equal to the number of senses. In the
image domain, it is less clear what the number of topics should be. Ideally, each topic would coincide
with a visually coherent class of images all belonging to the same sense. In practice, because images
of an object class on the web are extremely varied, multiple visual clusters are needed to encompass
a single visual category. Our experiments have shown that the model is relatively insensitive to
values of this parameter in the range of 8-32. To perform inference in LDA, we used the Gibbs
sampling approach of [10], implemented in the Matlab Topic Modeling Toolbox [19]. We used
symmetric Dirichlet priors with scalar hyperparameters α = 50/K and β = 0.01, which have the
effect of smoothing the empirical topic distribution, and 1000 iterations of Gibbs sampling.
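For completeness, a bare-bones collapsed Gibbs sampler in the style of [10] is sketched below; this is our own minimal version, not the Matlab Topic Modeling Toolbox actually used.

```python
import numpy as np

def lda_gibbs(docs, V, K=8, alpha=None, beta=0.01, n_iter=1000, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of word-id lists."""
    alpha = 50.0 / K if alpha is None else alpha
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))   # document-topic counts
    n_kw = np.zeros((K, V))           # topic-word counts
    n_k = np.zeros(K)                 # words assigned to each topic
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]           # remove the token's current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k           # resample from the full conditional
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    theta = (n_dk + alpha) / (n_dk.sum(1, keepdims=True) + K * alpha)
    phi = (n_kw + beta) / (n_kw.sum(1, keepdims=True) + V * beta)
    return theta, phi
```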
Figure 2 shows the images that were assigned the highest probability for mouse-4 (computer device)
sense by the text-only model P (s|dt ) (Figure 2(a)), and by the image-only model P (s|di ) (Figure
2(b)). Both models return high-precision results, but somewhat different and complementary images. As we expected, the image model's results are more visually coherent, while the text model's results are more visually varied.
Next, we evaluate retrieval of individual senses using the multimodal model (Eq. 5, with α = 0.5)
and compare it to the Yahoo search engine baseline. This is somewhat unfair to the baseline, as here
we assume that our model knows which sense to retrieve (we will remove this assumption later.)
The recall-precision curves (RPCs) are shown in Figure 3. The figure shows the RPCs for each
word in the MIT-ISD (top row) and UIUC-ISD (bottom row) datasets, computed by thresholding
P(s|d_i, d_t). WISDOM's RPCs are shown as the green curves. The blue curves are the RPCs obtained
by the original Yahoo image search retrieval order. For example, the top leftmost plot shows retrieval
of bass-7 (musical instrument). These results demonstrate that we are able to greatly improve the
retrieval of each concrete sense compared to the search engine.
Figure 3: Recall-precision of each concrete sense (core labels) using the multimodal dictionary model (green) and the search engine (blue), evaluated on two datasets: (a) MIT-ISD data, (b) UIUC-ISD data.
Figure 4: Recall-precision of all concrete senses using WISDOM (green) and the search engine (blue): (a) core senses, MIT-ISD; (b) core senses, UIUC-ISD; (c) core+related senses, MIT-ISD; (d) core+related senses, UIUC-ISD.
WISDOM does fail to retrieve one sense, face-13, defined as "a vertical surface". This is a highly
ambiguous sense visually, although it has an <artifact> lexical tag. One possibility for the future
is to exclude senses that are descendants of "surface" as being too ambiguous. Also, preliminary
investigation indicates that weighting the text and image components of the model differently can
result in improved results; model weighting is therefore an important topic for future work.
Next, we evaluate the ability of WISDOM to filter out abstract senses. Here we no longer assume that
the correct senses are known. Figure 4 shows the result of filtering out the abstract senses, which is
done by evaluating the probability of any of the concrete senses in a given search result. The ground
truth labels used to compute these RPCs are positive if an image was labeled either with any core
sense (Fig.4 (a,b)), or with any core or related sense (Fig.4 (c,d)), and negative otherwise. These
results demonstrate that our model improves the retrieval of images of concrete (i.e. physical) senses
of words, without any user input except for the word itself. Figure 5 shows how the model filters out
certain images, including illustrations by an artist named Crane, from search results for CRANE.
8 Classification Experiments
We have shown that our method can improve retrieval of concrete senses, therefore providing higherprecision image training data for object recognition algorithms. We have conjectured that this leads
to better classification results; in this section, we provide some initial experiments to support this
claim. We collected a dataset of ten office objects, and trained ten-way SVM classifiers using
the vocabulary-guided pyramid match kernel over bags of local SIFT features implemented in the
LIBPMK library [12]. The training data for the SVM was either the first 100 images returned from
the search engine, or the top 100 images ranked by our model. Since we're interested in objects, we keep only the <artifact> senses that descend from "instrumentality" or "article". Figure 6 shows
classification results on held-out test data, averaged over 10 runs on random 80% subsets of the
Figure 5: The top images returned for CRANE by (a) the Yahoo image search engine and (b) WISDOM.
Figure 6: Classification accuracy of ten objects in the OFFICE dataset.
data. Our method improves accuracy for most of the objects; in particular, classification of "mug"
improves greatly due to the non-object senses being filtered out. This is a very difficult task, as
evidenced by the baseline performance; the average baseline accuracy is 27%. Training with our
method achieves 35% accuracy, a 25% relative improvement. We believe that this relative improvement is due to the higher precision of the training images and will persist even if the overall accuracy
were improved due to a better classifier.
9 Conclusion
We presented WISDOM, an architecture for clustering image search results for polysemous words
based on image and text co-occurrences and grounding latent topics according to dictionary word
senses. Our method distinguishes which senses are abstract from those that are concrete, allowing
it to filter out the abstract senses when constructing a classifier for a particular object of interest to
a situated agent. This can be of particular utility to a mobile robot faced with the task of learning
a visual model based only on the name of an object provided on a target list or spoken by a human
user. Our method uses both image features and the text associated with the images to relate estimated
latent topics to particular senses in a semantic database. WISDOM does not require any human
supervision, and takes as input only an English noun. It estimates the probability that a search result
is associated with an abstract word sense, rather than a sense that is tied to a physical object. We
have carried out experiments with image and text-based models to form estimates of abstract vs.
concrete senses, and have shown results detecting concrete-sense images in two multimodal, multisense databases. We also demonstrated a 25% relative improvement in accuracy when classifiers are
trained with our method as opposed to the raw search results.
Acknowledgments
This work was supported in part by DARPA, Google, and NSF grants IIS-0905647 and IIS-0819984.
References
[1] K. Barnard, K. Yanai, M. Johnson, and P. Gabbur. Cross modal disambiguation. In Toward Category-Level
Object Recognition, J. Ponce, M. Hebert, C. Schmidt, eds., Springer-Verlag LNCS Vol. 4170, 2006.
[2] T. Berg and D. Forsyth. Animals on the web. In Proc. CVPR, 2006.
[3] J. Bilmes and K. Kirchhoff. Directed graphical models of classifier combination: application to phone
recognition. In Proc. ICSLP, 2000.
[4] E. Boiy, K. Deschacht, and M.-F. Moens. Learning Visual Entities and Their Visual Attributes from Text
Corpora. In Proc. DEXA, 2008.
[5] D. Blei and M. Jordan. Modeling annotated data. In Proc. International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127-134. ACM Press, 2003.
[6] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. Machine Learning Research, 3:993-1022,
2003.
[7] Z. Chen, L. Wenyin, F. Zhang and M. Li. Web mining for web image retrieval. J. of the American Society
for Information Science and Technology, 51:10, pages 831-839, 2001.
[8] C. Fellbaum. Wordnet: An Electronic Lexical Database. Bradford Books, 1998.
[9] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning Object Categories from Google?s Image
Search. In Proc. ICCV 2005.
[10] T. Griffiths and M. Steyvers. Finding Scientific Topics. In Proc. of the National Academy of Sciences,
101 (suppl. 1), pages 5228-5235, 2004.
[11] V. Jain, E. Learned-Miller, A. McCallum. People-LDA: Anchoring Topics to People using Face Recognition. In Proc. ICCV, 2007.
[12] J. Lee. LIBPMK: A Pyramid Match Toolkit. MIT Tech Report MIT-CSAIL-TR-2008-17, available online
at http://hdl.handle.net/1721.1/41070. 2008
[13] J. Li, G. Wang, and L. Fei-Fei. OPTIMOL: automatic Object Picture collecTion via Incremental MOdel
Learning. In Proc. CVPR, 2007.
[14] N. Loeff, C. Ovesdotter Alm, D. Forsyth. Discriminating Image Senses by Clustering with Multimodal
Features. In Proc. ACL 2006.
[15] K. Mikolajczyk and C. Schmid. Scale and affine invariant interest point detectors. In Proc. IJCV, 2004.
[16] M. Porter, An algorithm for suffix stripping, Program, 14(3) pp 130-137, 1980.
[17] K, Saenko and T. Darrell. Unsupervised Learning of Visual Sense Models for Polysemous Words. In Proc.
NIPS, 2008.
[18] F. Schroff, A. Criminisi and A. Zisserman. Harvesting image databases from the web. In Proc. ICCV,
2007.
[19] M. Steyvers and T. Griffiths. Matlab Topic Modeling Toolbox.
http://psiexp.ss.uci.edu/research/software.htm
[20] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. ACL, 1995.
3,158 | 3,861 | Constructing Topological Maps using Markov
Random Fields and Loop-Closure Detection
Roy Anati Kostas Daniilidis
GRASP Laboratory
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
{royanati,kostas}@cis.upenn.edu
Abstract
We present a system which constructs a topological map of an environment given
a sequence of images. This system includes a novel image similarity score which
uses dynamic programming to match images using both the appearance and relative positions of local features simultaneously. Additionally, an MRF is constructed to model the probability of loop-closures. A locally optimal labeling is
found using Loopy-BP. Finally we outline a method to generate a topological map
from loop closure data. Results, presented on four urban sequences and one indoor
sequence, outperform the state of the art.
1 Introduction
The task of generating a topological map from video data has gained prominence in recent years.
Topological representations of routes spanning multiple kilometers are robuster than metric and
cognitively more plausible for use by humans. They are used to perform path planning, providing
waypoints, and defining reachability of places. Topological maps can correct for the drift in visual
odometry systems and can be part of hybrid representations where the environment is represented
metrically locally but topologically globally.
We identify two challenges in constructing a topological map from video: how can we say whether
two images have been taken from the same place; and how can we reduce the original set of thousands of video frames to a reduced representative set of keyframes for path planning. We take into
advantage the fact that our input is video as opposed to an unorganized set of pictures. Video guarantees that keyframes will be reachable to each other but it also provides temporal ordering constraints
on deciding about loop closures. The paper has three innovations: We define a novel image similarity
score which uses dynamic programming to match images using both the appearance and the layout
of the features in the environment. Second, graphical models are used to detect loop-closures which
are locally consistent with neighboring images. Finally, we show how the temporal assumption can
be used to generate compact topological maps using minimum dominating sets.
We formally define a topological map T as a graph T = (K, E_T), where K is a set of keyframes and E_T is a set of edges describing connectivity between keyframes. We will see later that keyframes are
representatives of locations. We desire the following properties of T :
Loop closure For any two locations i, j ∈ K, E_T contains the edge (i, j) if and only if it is possible to reach location j from location i without passing through any other location k ∈ K.
Compactness Two images taken at the "same location" should be represented by the same keyframe.
Spatial distinctiveness Two images from "different locations" cannot be represented by the same keyframe.
Note that spatial distinctiveness requires that we distinguish between separate locations, whereas
compactness encourages agglomeration of geographically similar images. This distinction is important, as lack of compactness does not lead to errors in either path planning or visual odometry
while breaking spatial distinctiveness does. Our approach to building topological maps is divided
into three modules: calculating image similarity, detecting loop closures, and map construction. As
defined it is possible to implement each module independently, providing great flexibility in the
algorithm selection. We now define the interfaces between each pair of modules.
Starting with I, a sequence of n images, the result of calculating image similarity scores is a matrix
M_{n×n}, where M_{ij} represents a relative similarity between images i and j. In Section 2 we describe how we use local image features to compute the matrix M. To detect loop-closures we have to discretize M into a binary decision matrix D_{n×n}, where D_{ij} = 1 indicates that images i and j
are geographically equivalent and form a loop closure. Section 3 describes the construction of
D by defining a Markov Random Field (MRF) on M and performing approximate inference using
Loopy Belief Propagation (Loopy-BP). In the final step, the topological map T is generated from
D. We calculate the set of keyframes K and their associated connectivity ET using the minimum
dominating set of the graph represented by D (Section 4).
Related Work The state of the art in topological mapping of images is the FAB-MAP [8] algorithm. FAB-MAP uses bag of words to model locations using a generative appearance approach
that models dependencies and correlations between visual words rendering FAB-MAP extremely
successful in dealing with the challenge of perceptual aliasing (different locations sharing common
visual characteristics). Its implementation outperforms any other in speed averaging an intra-image
comparison of less than 1ms. Bayesian inference is also used in [1] where bags of words on local
image descriptors model locations whose consistency is validated with epipolar geometry. Ranganathan et al. [14] incorporate both odometry and appearance and maintain several hypotheses of
topological maps. Older approaches like ATLAS [5] and Tomatis et al. [17] define maps on two
levels, creating global (topological) maps by matching independent local (metric) data and combining loop-closure detection with visual SLAM (Simultaneous Localization and Mapping). The ATLAS
framework [5] matches local maps through the geometric structures defined by their 2D schematics
whose correspondences define loop-closures. Tomatis et al [17] detect loop closures by examining
the modality of the robot position's density function (PDF). A PDF with two modes traveling in sync
is the result of a missed loop-closure, which is identified and merged through backtracking.
Approaches like [3] [19] [18] and [9] represent the environment using only an image similarity
matrix. Booij et al [3] use the similarity matrix to define a weighted graph for robot navigation.
Navigation is conducted on a node by node basis, using new observations and epipolar geometry
to estimate the direction of the next node. Valgren et al [19] avoid exhaustively computing the
similarity matrix by searching for and sampling cells which are more likely to describe existing
loop-closures. In [18], they employ exhaustive search, but use spectral clustering to reduce the
search space incrementally when new images are processed. Fraundoerfer et al [9] use hierarchical
vocabulary trees [13] to quickly compute image similarity scores. They show improved results by
using feature distances to weigh the similarity score. In [15] a novel image feature is constructed
from patches centered around vertical lines from the scene (radial lines in the image). These are
then used to track the bearing of landmarks and localize the robot in the environment. Goedeme
[10] proposes "invariant column segments" combined with color information to compare images. This is followed by agglomerative clustering of images into locations. Potential loop-closures are identified within clusters and confirmed using Dempster-Shafer probabilities.
Our approach advances the state of the art by using a powerful image alignment score without employing full epipolar geometry, and more robust loop-closure detection by applying MRF inference on the similarity matrix. Together with [4], it is the only video-based approach that provides a greatly reduced set of nodes for the final topological representation, thus making path planning tractable.
2 Image similarity score
For any two images i and j, we calculate the similarity score M_{ij} in three steps: generate image features, sort image features into sequences, calculate the optimal alignment between both sequences. To
detect and generate image features we use Scale Invariant Feature Transform (SIFT) [12]. SIFT was
selected as it is invariant to rotation and scale, and partially immune to other affine transformations.
Feature sequences Simply matching the SIFT features by value [12] yields satisfactory results
(see later in figure 2). However, to mitigate perceptual aliasing, we take advantage of the fact that
features represent real world structures with fixed spatial arrangements and therefore the similarity
score should take their relative positions into account. A popular approach, employed in [16], is to
enforce scene rigidity by validating the epipolar geometry between two images. This process, although extremely accurate, is expensive and very time-consuming. Instead, we make the assumption
that the gravity vector is known so that we can split image position into bearing and elevation and
we take into account only the bearing of each feature. Sorting the features by their bearing, results
in ordered sequences of SIFT features. We then search for an optimal alignment between pairs of
sequences, incorporating both the value and ordering of SIFT features into our similarity score.
Sequence alignment To solve for the optimal alignment between two ordered sequences of features we employ dynamic programming. Here a match between two features, fa and fb , occurs if
their L1 norm is below a threshold: Score(a, b) = 1 if |f_a − f_b|_1 < t_match. A key aspect of dynamic
programming is the enforcement of the ordering constraint. This ensures that the relative order of
features matched is consistent in both sequences, exactly the property desired to ensure consistency
between two scene appearances. Since bearing is not given with respect to an absolute orientation,
ordering is meant only cyclically, which can be handled easily in dynamic programming by replicating one of the input sequences. Modifying the first and last rows of the score matrix to allow for
arbitrary start and end locations yields the optimal cyclical alignment in most cases. This comes at
the cost of allowing one-to-many matches which can result in incorrect alignment scores. The score
of the optimal alignment between both sequences of features provides the basis for the similarity
score between two images and the entries of the matrix M. We calculate the values of M_{ij} for all i < j − w, where w is a window used to ignore images taken immediately before/after our query.
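A compact version of this dynamic program is sketched below (our code, with the parameter names used in the experiments section). Doubling one sequence and leaving its start/end gaps free approximates the cyclic alignment, with the same caveat about occasional one-to-many matches noted above.

```python
import numpy as np

def align_score(A, B, t_match=1000, s_match=1.0, s_gap=-0.1, s_miss=0.0):
    """Best cyclic alignment score of two bearing-sorted SIFT sequences.
    A: (n, 128), B: (m, 128). B is doubled to allow an arbitrary cyclic
    start; gaps at the ends of the doubled B are free."""
    B2 = np.vstack([B, B])
    n, m = len(A), len(B2)
    S = np.zeros((n + 1, m + 1))
    S[1:, 0] = s_gap * np.arange(1, n + 1)      # skipping features of A is paid
    for i in range(1, n + 1):
        d = np.abs(B2 - A[i - 1]).sum(axis=1)   # L1 distances to A[i-1]
        sub = np.where(d < t_match, s_match, s_miss)
        for j in range(1, m + 1):
            S[i, j] = max(S[i - 1, j - 1] + sub[j - 1],  # match / miss
                          S[i - 1, j] + s_gap,           # skip a feature of A
                          S[i, j - 1] + s_gap)           # skip a feature of B
    return S[n].max()                           # free end anywhere in B2
```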
3 Loop-closure detection using MRF
Using the image similarity matrix M, we use Markov Random Fields to detect loop-closures. A lattice H is defined as an n × n lattice of binary nodes, where a node v_{i,j} represents the probability of images i and j forming a loop-closure. The matrix M provides an initial estimate of this value. We define the factor φ_{i,j} over the node v_{i,j} as follows: φ_{i,j}(1) = M_{ij}/F and φ_{i,j}(0) = 1 − φ_{i,j}(1), where F = max(M) is used to normalize the values in M to the range [0, 1].
Loops closures in the score matrix M appear as one of three possible shapes. In an intersection the
score matrix contains an ellipse. A parallel traversal, when a vehicle repeats part of its trajectory,
is seen as a diagonal band. An inverse traversal, when a vehicle repeats a part of its trajectory in
the opposite direction, is an inverted diagonal band. The length and thickness of these shapes vary
with the speed of the vehicle (see figure 1 for examples of these shapes). Therefore we define the lattice H with eight-way connectivity, as it better captures the structure of possible loop closures. As
adjacent nodes in H represent sequential images in the sequence, we expect significant overlap in
their content. So two neighboring nodes (in any orientation), are expected to have similar scores.
Sudden changes occur when either a loop is just closed (sudden increase) or when a loop closure
is complete (sudden decrease) or due to noise caused by a sudden occlusion in one of the scenes.
By imposing smoothness on the labeling we capture loop closures while discarding noise. Edge
potentials are therefore defined as Gaussians of differences in M. Letting G(x, y) = e^{−(x−y)²/σ²}, k ∈ {i − 1, i, i + 1} and l ∈ {j − 1, j, j + 1}, then

ψ_{i,j,k,l}(0, 0) = ψ_{i,j,k,l}(1, 1) = λ · G(M_{ij}, M_{kl}),
ψ_{i,j,k,l}(0, 1) = ψ_{i,j,k,l}(1, 0) = 1,
Figure 1: A small ellipse resulting from (a) an intersection, and two diagonal bands from (b) a parallel and (c) an inverse traversal, all extracted from a score matrix M.
where 1 ≤ λ (we ignore the case when both k = i and j = l). Overall, H models a probability
distribution over a labeling v ∈ {0, 1}^{n×n}, where

P(v) = \frac{1}{Z} \prod_{i,j∈[1,n]} φ_{i,j}(v_{i,j}) \prod_{i,j∈[1,n]} \prod_{k∈[i−1,i+1]} \prod_{l∈[j−1,j+1]} ψ_{i,j,k,l}(v_{i,j}, v_{k,l})
In order to solve for the MAP labeling of H, v* = arg max_v P(v), the lattice must first be transformed into a cluster graph C. This transformation allows us to model the beliefs of all factors in the
graph and the messages being passed during inference. We model every node and every edge in H as
a node in the cluster graph C. An edge exists between two nodes in the cluster graph if the relevant
factors share variables. In addition, this construction presents a two-step update schedule, alternating between "node" clusters and "edge" clusters, as each class only connects to instances of the other.
Once defined, a straightforward implementation of the generalized max-product belief propagation
algorithm (described in both [2] and [11]) serves to approximate the final labeling. We initialize the cluster graph directly from the lattice H, with initial beliefs φ_{i,j} for the node clusters and ψ_{i,j,k,l} for the edge clusters.
The MAP labeling found here defines our matrix D determining whether two images i and j close
a loop. Note, that the above MAP labeling is guaranteed to be locally optimal, but is not necessarily
consistent across the entire lattice. Generally, finding the globally consistent optimal assignment is
NP-hard [11]. Instead, we rely on our definition of D, which specifies which pairs of images are
equivalent, and our construction in section 4 to generate consistent results.
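As a minimal sketch (our code), the factors above can be assembled as follows, with σ = 0.05·max(M) and λ = 2 as reported in the experiments section; the damped max-product message passing itself is omitted here.

```python
import numpy as np

def build_potentials(M, sigma_frac=0.05, lam=2.0):
    """Node potentials phi and pairwise potentials psi for the
    loop-closure MRF defined above."""
    F = M.max()
    sigma = sigma_frac * F
    phi1 = M / F                  # phi_{i,j}(1); phi_{i,j}(0) = 1 - phi1

    def psi(m_ij, m_kl):
        """2x2 table over (v_ij, v_kl): lam*G on agreement, 1 otherwise."""
        g = lam * np.exp(-((m_ij - m_kl) ** 2) / sigma ** 2)
        return np.array([[g, 1.0], [1.0, g]])

    return phi1, psi
```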
4 Constructing the topological map
Finally, the decision matrix D is used to define the keyframes K and determine the map connectivity E_T. D can be viewed as an adjacency matrix of an undirected graph. Since there is no guarantee that the D found through belief propagation is symmetric, we initially treat D as an adjacency matrix for a directed graph, and then remove the direction from all the edges, resulting in a symmetric graph D′ = D ∨ D^T. It is possible to use the graph defined by D′ as a topological map. However, this representation is practically useless because multiple nodes represent the same location. To achieve compactness, D′ needs to be pruned while remaining faithful to the overall structure of the environment. Booij [4] achieves this by approximating the minimum connected dominating set. By using the temporal assumption we can remove the connectedness requirement and use a minimum dominating set to prune D′. We find the keyframes K by finding the minimum dominating set of D′. Finding
the optimal solution is NP-Complete; however, Algorithm 1 provides a greedy approximation. This approximation has a guaranteed bound of H(d_max) (the harmonic function of the maximal degree d_max in the graph) [6].
The dominating set itself serves as our keyframes K. Each dominating node k ∈ K is also associated with the set of nodes it dominates, N_k. Each set N_k represents images taken at the "same location". The sets {N_k : k ∈ K}, in conjunction with our underlying temporal assumption, are used to connect the map T. An edge (k, j) is added if N_k and N_j contain two consecutive images from our sequence, i.e. (k, j) ∈ E_T if ∃i such that i ∈ N_k and i + 1 ∈ N_j. This yields our final topological map T.
Algorithm 1: Approximate Minimum Dominating Set
Input: adjacency matrix D′
Output: K, {N_k : k ∈ K}
K ← ∅
while D′ is not empty do
    k ← node with largest degree
    K ← K ∪ {k}
    N_k ← {k} ∪ Nb(k)
    remove all nodes in N_k from D′
end
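In code, Algorithm 1 and the subsequent edge construction might look like this (our sketch, using networkx; D′ is assumed to be a 0/1 NumPy array with a zero diagonal).

```python
import networkx as nx

def topological_map(D_sym):
    """Greedy minimum dominating set (Algorithm 1) plus temporal edges."""
    n = D_sym.shape[0]
    G = nx.from_numpy_array(D_sym)
    K, rep = [], {}                # rep[i] = keyframe dominating image i
    while G.number_of_nodes() > 0:
        k = max(G.degree, key=lambda kv: kv[1])[0]  # largest-degree node
        N_k = {k} | set(G.neighbors(k))
        K.append(k)
        rep.update({i: k for i in N_k})
        G.remove_nodes_from(N_k)
    # an edge joins keyframes whose dominated sets hold consecutive images
    E_T = {tuple(sorted((rep[i], rep[i + 1])))
           for i in range(n - 1) if rep[i] != rep[i + 1]}
    return K, E_T
```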
5 Experiments
The system was applied to five image sequences. Results are shown for the system as described, as
well as for FAB-MAP ([8]) and for different methods of calculating image similarity scores.
Image sets Three image sequences, indoors, Philadelphia and Pittsburgh1 were captured with a
Point Grey Research Ladybug camera. The Ladybug is composed of five wide-angle-lens cameras arranged in a circle around the base and one camera on top facing upwards. The resulting output is
a sequence of frames each containing a set of images captured by the six cameras. For the outdoor
sequences the camera was mounted on top of a vehicle which was driven around an urban setting, in
this case the cities of Philadelphia and Pittsburgh. In the indoor sequence, the camera was mounted
on a tripod set on a cart and moved inside the building covering the ground and 1st floors. Ladybug
images were processed independently for each camera using the SIFT detector and extractor provided in the VLFeat toolbox [20]. The resulting features for every camera were merged into a single
set and sorted by their spherical coordinates. The two remaining sequences, City Centre and New
College were captured in an outdoor setting by Cummins [7] from a limited field of view camera
mounted on a mobile robot. Table 1 summarizes some basic properties of the sequences we use.
All the outdoor sequences were provided with GPS location of the vehicle / robot. For Philadelphia
Data Set
Indoors
Philadelphia[16]
Pittsburgh
New College[7]
City Centre[7]
Length
Not available
2.5km
12.5km
1.9km
2km
No. of frames
852
1,266
1,256
1,237
1,073
Camera Type
spherical
spherical
spherical
limited field of view
limited field of view
Format
raw Ladybug stream file
raw Ladybug stream file
rectified images
standard images
standard images
Table 1: Summary of image sequences processed.
and Pittsburgh, these were used to generate ground truth decision matrices using a threshold of 10
meters. Ground truth matrices were provided for New College and City Centre. For the indoor
sequence the position of the camera was manually determined using building schematics at an arbitrary scale. A ground truth decision matrix was generated using a manually determined threshold.
The entire system was implemented in Matlab, with the exception of the SIFT detector and extractor implemented by [20].
Parameters Both the image similarity scores and the MRF contain a number of parameters that need to be set. When calculating the image similarity score, there are five parameters. The first, tmatch, is the threshold on the L1 norm at which two SIFT features are considered matched. In addition, dynamic programming requires three parameters to define the score of an optimal alignment: smatch, sgap and smiss. smatch is the value by which the score of an alignment is improved by including correctly matched pairs of features, sgap is the cost of ignoring a feature in the optimal alignment (insertion and deletion), and smiss is the cost of including incorrectly matched pairs (substitution). We use tmatch = 1000, smatch = 1, sgap = −0.1 and smiss = 0. Finally, we use w = 30 as our window size, to avoid calculating similarity scores for images taken within a very short time of each other.
¹ The Pittsburgh dataset has been provided by Google for research purposes.
            Indoors   Philadelphia   Pittsburgh   City Centre   New College
Precision   91.67%    91.72%         63.85%       97.42%        91.57%
Recall      79.31%    51.46%         54.60%       40.04%        84.35%

Table 2: Precision and recall after performing inference.
Constructing the MRF requires three parameters: F, σ and γ. The normalization factor, F, has already been defined as max(M). The σ used in defining edge potentials is σ = 0.05F, where F is again used to rescale the data into the interval [0, 1]. Finally, we set γ = 2 to rescale the Gaussian to favor edges between similarly valued nodes. Inference using loopy belief propagation features two parameters: a dampening factor α = 0.5, used to mitigate the effect of cyclic inference, and n = 20, the number of iterations over which to perform inference.
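To make the alignment parameters concrete, the following Python sketch computes a standard dynamic-programming alignment score with the values above. The paper's actual score uses a cyclical alignment over sorted feature sequences; the linear (Needleman-Wunsch-style) variant below is only illustrative, and each feature is assumed to be a NumPy descriptor vector.

import numpy as np

def alignment_score(feats_i, feats_j,
                    t_match=1000.0, s_match=1.0, s_gap=-0.1, s_miss=0.0):
    """Global alignment score between two sorted sequences of descriptors.
    Two features count as matched when their L1 distance is below t_match."""
    n, m = len(feats_i), len(feats_j)
    S = np.zeros((n + 1, m + 1))
    S[1:, 0] = s_gap * np.arange(1, n + 1)         # leading gaps
    S[0, 1:] = s_gap * np.arange(1, m + 1)
    for a in range(1, n + 1):
        for b in range(1, m + 1):
            l1 = np.abs(feats_i[a - 1] - feats_j[b - 1]).sum()
            pair = s_match if l1 < t_match else s_miss
            S[a, b] = max(S[a - 1, b - 1] + pair,  # match / substitution
                          S[a - 1, b] + s_gap,     # deletion (skip a feature)
                          S[a, b - 1] + s_gap)     # insertion
    return S[n, m]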
Results In addition to the image similarity score defined above, we also processed the image sequences using alternative similarity measures. We show results for M_ij^SIFT = number of SIFT matches, and M_ij^REC = number of reciprocal SIFT matches (the intersection of matches from image i to image j and from j to i). We also show results using FAB-MAP [8]. To process spherical images using FAB-MAP we limited ourselves to using images captured by camera 0 (directly forwards / backwards). Figure 2 shows precision-recall curves for all sequences and similarity measures. The curves were generated by thresholding the similarity scores. Our method outperforms the state of the art in terms of precision and recall in all sequences. The gain from using our system is most pronounced in the Philadelphia sequence, where FAB-MAP yields extremely low recall rates. Table 2 shows the results of performing inference on the image similarity matrices. Finally, Figure 3 shows the topological map resulting from running dominating sets on the decision matrices D. We use the ground-truth GPS positions for display purposes only. The blue dots represent the locations of the keyframes K, with the edges ET drawn in blue. Red dots mark keyframes which are also loop-closures. For reference, Figure 4 provides ground-truth maps and loop-closures.
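The thresholding used to generate the precision-recall curves amounts to a few lines; a minimal sketch (our own helper, assuming a similarity matrix M and a binary ground-truth matrix G of the same shape) is:

import numpy as np

def precision_recall(M, G, thresholds):
    """Precision and recall of thresholded similarity scores against ground truth.
    Only the strict upper triangle is scored, so each image pair counts once."""
    iu = np.triu_indices_from(M, k=1)
    scores, truth = M[iu], G[iu].astype(bool)
    curve = []
    for t in thresholds:
        pred = scores >= t
        tp = np.count_nonzero(pred & truth)
        precision = tp / max(np.count_nonzero(pred), 1)
        recall = tp / max(np.count_nonzero(truth), 1)
        curve.append((t, precision, recall))
    return curve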
6 Outlook
We presented a system that constructs purely topological maps from video sequences captured from moving vehicles. Our main assumption is that the images are presented in a temporally consistent manner. A highly accurate image similarity score is found by a cyclical alignment of sorted feature sequences. This score is then refined via loopy belief propagation to detect loop-closures. Finally, we constructed a topological map for the sequence in question. This map can be used for either path planning or for bundle adjustment in visual SLAM systems. The bottleneck of the system is computing the image similarity score: in some instances it takes over 166 hours to process a single sequence, while FAB-MAP [8] accomplishes the same task in 20 minutes. In addition to implementing score calculation with a parallel algorithm (either on a multicore machine or using graphics hardware), we plan to construct approximations to our image similarity score. These include using visual bags of words in a hierarchical fashion [13] and building the score matrix M incrementally [19, 18].
Acknowledgments
Financial support by the grants NSF-IIS-0713260, NSF-IIP-0742304, NSF-IIP-0835714, and
ARL/CTA DAAD19-01-2-0012 is gratefully acknowledged.
[Figure 2: Precision-recall curves for different thresholds on image similarity scores. Panels (a) Indoors, (b) Philadelphia, (c) Pittsburgh, (d) City Centre, (e) New College; each panel plots precision against recall for Dynamic Programming, FAB-MAP, No. SIFT and Symmetric SIFT.]

[Figure 3: Loop-closures generated using the minimum dominating set approximation, for (a) Indoors, (b) Philadelphia, (c) Pittsburgh, (d) City Centre, (e) New College. Blue dots represent positions of keyframes K with edges ET drawn in blue. Red dots mark keyframes with loop-closures.]

[Figure 4: Ground truth maps and loop-closures, for (a) Indoors, (b) Philadelphia, (c) Pittsburgh, (d) City Centre, (e) New College. Blue dots represent positions of keyframes K with edges ET drawn in blue. Red dots mark keyframes with loop-closures.]
References
[1] A. Angeli, D. Filliat, S. Doncieux, and J.-A. Meyer. Fast and incremental method for loop-closure detection using bags of visual words. IEEE Transactions on Robotics, 24(5):1027–1037, Oct. 2008.
[2] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, August 2006.
[3] O. Booij, B. Terwijn, Z. Zivkovic, and B. Krose. Navigation using an appearance based topological map. In 2007 IEEE International Conference on Robotics and Automation, pages 3927–3932, 2007.
[4] O. Booij, Z. Zivkovic, and B. Krose. Pruning the image set for appearance based robot localization. In Proceedings of the Annual Conference of the Advanced School for Computing and Imaging, 2005.
[5] M. Bosse, P. Newman, J. Leonard, M. Soika, W. Feiten, and S. Teller. An atlas framework for scalable mapping. In IEEE International Conference on Robotics and Automation (ICRA'03), volume 2, 2003.
[6] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979.
[7] M. Cummins and P. Newman. Accelerated appearance-only SLAM. In Proc. IEEE International Conference on Robotics and Automation (ICRA'08), Pasadena, California, April 2008.
[8] M. Cummins and P. Newman. FAB-MAP: Probabilistic localization and mapping in the space of appearance. The International Journal of Robotics Research, 27(6):647–665, 2008.
[9] F. Fraundorfer, C. Wu, J.-M. Frahm, and M. Pollefeys. Visual word based location recognition in 3D models using distance augmented weighting. In Fourth International Symposium on 3D Data Processing, Visualization and Transmission, 2008.
[10] T. Goedemé, M. Nuttin, T. Tuytelaars, and L. Van Gool. Omnidirectional vision based topological navigation. Int. J. Comput. Vision, 74(3):219–236, 2007.
[11] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[12] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60:91–110, 2004.
[13] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In CVPR, volume 2, pages 2161–2168, 2006.
[14] A. Ranganathan, E. Menegatti, and F. Dellaert. Bayesian inference in the space of topological maps. IEEE Transactions on Robotics, 22(1):92–107, 2006.
[15] D. Scaramuzza, N. Criblez, A. Martinelli, and R. Siegwart. Robust feature extraction and matching for omnidirectional images. Springer Tracts in Advanced Robotics, Field and Service Robotics, 2008.
[16] J.-P. Tardif, Y. Pavlidis, and K. Daniilidis. Monocular visual odometry in urban environments using an omnidirectional camera. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2531–2538, Sept. 2008.
[17] N. Tomatis, I. Nourbakhsh, and R. Siegwart. Hybrid simultaneous localization and map building: a natural integration of topological and metric. Robotics and Autonomous Systems, 44(1):3–14, 2003.
[18] C. Valgren, T. Duckett, and A. J. Lilienthal. Incremental spectral clustering and its application to topological mapping. In Proc. IEEE Int. Conf. on Robotics and Automation, pages 4283–4288, 2007.
[19] C. Valgren, A. J. Lilienthal, and T. Duckett. Incremental topological mapping using omnidirectional vision. In Proc. IEEE Int. Conf. on Intelligent Robots and Systems, pages 3441–3447, 2006.
[20] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.
A Data-Driven Approach to Modeling Choice
Vivek F. Farias
Srikanth Jagabathula
Devavrat Shah∗
Abstract
We visit the following fundamental problem: For a "generic" model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices? This problem is central to areas within operations research, marketing and econometrics. We present a framework to answer such questions and design a number of tractable algorithms (from a data and computational standpoint) for the same.
1 Introduction
Consider a seller who must pick from a universe of N products, N, a subset M of products to offer to his customers. The ith product has price p_i. Given a probabilistic model of how customers make choices, P(·|·), where P(i|M) is the probability that a potential customer purchases product i when faced with options M, the seller may solve

    max_{M⊆N} Σ_{i∈M} p_i P(i|M).    (1)

In addition to being a potentially non-trivial optimization problem, one faces a far more fundamental obstacle here: specifying the "choice" model P(·|·) is a difficult task, and it is unlikely that a seller will have sufficient data to estimate a generic such model. Thus, simply predicting expected revenues, R(M) = Σ_{i∈M} p_i P(i|M), for a given offer set, M, is a difficult task. This problem, and variants thereof, are central in the fields of marketing,
operations research and econometrics. With a few exceptions, the typical approach to dealing with the challenge of specifying a choice model with limited data has been to make parametric assumptions on the choice model that allow for its estimation from a limited amount of data. This approach has a natural deficiency: the implicit assumptions made in specifying a parametric model of choice may not hold. Indeed, for one of the most commonly used parametric models in modern practice (the multinomial logit), it is a simple task to come up with a list of deficiencies, ranging from serious economic fallacies presumed by the model ([5]) to a lack of statistical fit to observed data for real-world problems ([1, 8]). These issues have led to a proliferation of increasingly arcane parametric choice models.
The present work considers the following question: given a limited amount of data on customer preferences, and assuming only a "generic" model of customer choice, what can one predict about expected revenues from a given set of products? We take as our "generic" model of customer choice the set of distributions over all possible customer preference lists (i.e. all possible permutations of N). We will subsequently see that essentially all extant models of customer choice can be viewed as a special case of this generic model. We view "data" as some linear transformation of the distribution specifying the choice model, yielding marginal information. Again, we will see that this view is consistent with reality.
∗ VF and DS are affiliated with ORC; VF with the Sloan School of Management; SJ and DS with LIDS and the Department of EECS at MIT. Emails: vfarias, jskanth, devavrat@mit.edu. The work was supported in part by NSF CAREER CNS 0546590.
Given these views, we first consider finding the "simplest" choice model consistent with the observed marginal data on customer preferences. Here we take as our notion of simple a distribution over permutations of N with the sparsest support. We present two simple abstract properties that, if satisfied by the "true" choice model, allow us to solve the sparsest fit problem exactly via a simple combinatorial procedure (Theorem 2). In fact, the sparsest fit in this case coincides with the true model (Theorem 1). We present a generative family of choice models that illustrates when the two properties we posit may be expected to hold (see Theorem 3). More generally, when we may not anticipate the above abstract properties, we seek to find a "worst-case" distribution consistent with the observed data, in the sense that this distribution yields minimum expected revenues for a given offer set M while remaining consistent with the observed marginal data. This entails solving mathematical programs with as many variables as there are permutations (N!). In spite of this, we present a simple efficient procedure to solve this problem that is exact for certain interesting types of data and produces approximations (and computable error bounds) in general. Finally, we present a computational study illustrating the efficacy of our approach relative to a parametric technique on a real-world data set.
Our main contribution is thus a novel approach to modeling customer choice given limited
data. The approach we propose is complemented with efficient, implementable algorithms.
These algorithms yield subroutines that make non-parametric revenue predictions for any
given offer set (i.e. predict R(M) for any M) given limited data. Such subroutines could
then be used in conjunction with generic set-function optimization heuristics to solve (1).
Relevant Literature: There is a vast body of literature on the parametric modeling
of customer choice; a seminal paper in this regard is [10]. See also [14] and references
therein for an overview of the area with an emphasis on applications. There is a stream of
research (e.g. [6]) on estimating and optimizing (parametric) choice models when products
possess measurable attributes that are the sole influencers of choice; we do not assume the
availability of such attributes and thus do not consider this situation here. A non-parametric
approach to choice modeling is considered by [12]; that work studies a somewhat distinct
pricing problem, and assumes the availability of a specific type of rich observable data.
Fitting a sparsest model to observable data has recently become of great interest in the area
of compressive sensing in signal processing [3, 7], and in the design of sketches for streaming
algorithms [2, 4]. This work focuses on deriving precise conditions on the support size of
the true model, which, when satisfied, guarantee that the sparsest solution is indeed the
true solution. However, these prior methods do not apply in the present context (see [9]);
therefore, we take a distinct approach to the problem in this paper.
2 The Choice Model and Problem Formulations
We consider a universe of N products, N = {0, 1, 2, . . . , N − 1}. We assume that the 0th product in N corresponds to the "outside" or "no-purchase" option. A consumer is associated with a permutation σ of the elements of N; the customer prefers product i to product j iff σ(i) < σ(j). Given that the customer is faced with a set of alternatives M ⊆ N, she chooses to purchase her single most preferred product among those in M. In particular, she purchases argmin_{i∈M} σ(i).

Choice Model: We take as our model of customer choice a distribution, λ : S_N → [0, 1], over all possible permutations (i.e. the set of all permutations S_N). Define the set

    S_j(M) = {σ ∈ S_N : σ(j) < σ(i), ∀ i ∈ M, i ≠ j}

as the set of all customer types that would result in a purchase of j when the offer set is M. Our choice model is thus

    P(j|M) = Σ_{σ∈S_j(M)} λ(σ) ≜ λ_j(M).
This model subsumes a vast body of extant parametric choice models.
Revenues: We associate every product in N with a retail price p_j. Of course, p_0 = 0. The expected revenue to a retailer from offering a set of products M to his customers under our choice model is thus given by R(M) = Σ_{j∈M} p_j λ_j(M).
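To make these definitions concrete, the short Python sketch below (helper names are our own) computes λ_j(M) and R(M) for a choice model given explicitly as a dictionary mapping permutations σ, encoded as tuples with σ[i] the rank of product i, to probabilities:

def choice_prob(lam, j, M):
    """lambda_j(M): probability that a customer purchases j from offer set M."""
    return sum(p for sigma, p in lam.items()
               if all(sigma[j] < sigma[i] for i in M if i != j))

def expected_revenue(lam, prices, M):
    """R(M) = sum over j in M of p_j * lambda_j(M), with prices[0] = 0."""
    return sum(prices[j] * choice_prob(lam, j, M) for j in M)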
Data: A seller will have limited data with which to estimate λ. We simply assume that the data observed by the seller is given by an m-dimensional "partial information" vector y = Aλ, where A ∈ {0, 1}^{m×N!} makes precise the relationship between the observed data and the underlying choice model. For the purposes of illustration, we consider the following concrete examples of data vectors y:
Ranking Data: This data represents the fraction of customers that rank a given product i as their rth choice. Here the partial information vector y is indexed by i, r with 0 ≤ i, r ≤ N − 1. For each i, r, y_{ri} denotes the probability that product i is ranked at position r. The matrix A is thus in {0, 1}^{N²×N!}. For a column of A corresponding to the permutation σ, A(σ), we will thus have A(σ)_{ri} = 1 iff σ(i) = r.

Comparison Data: This data represents the fraction of customers that prefer a given product i to a product j. The partial information vector y is indexed by i, j with 0 ≤ i, j ≤ N − 1, i ≠ j. For each i, j, y_{ij} denotes the probability that product i is preferred to product j. The matrix A is thus in {0, 1}^{N(N−1)×N!}. A column of A, A(σ), will thus have A(σ)_{ij} = 1 if and only if σ(i) < σ(j).

Top Set Data: This data refers to a concatenation of the "Comparison Data" and information on the fraction of customers who have a given product i as their topmost choice, for each i. Thus A⊤ = [A₁⊤ A₂⊤], where A₁ is simply the A matrix for comparison data, and A₂ ∈ {0, 1}^{N×N!} has A₂(σ)_i = 1 iff σ(i) = 1.
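To fix ideas, here is a sketch of the columns A(σ) for the first two data types. For simplicity the comparison columns below are laid out as a full N × N array with an unused diagonal, rather than the N(N − 1) coordinates used in the text; this is a presentational choice of ours.

import numpy as np

def ranking_column(sigma, N):
    """A(sigma) for ranking data: entry (r, i) equals 1 iff sigma(i) = r."""
    col = np.zeros((N, N))
    for i in range(N):
        col[sigma[i], i] = 1.0
    return col.reshape(-1)

def comparison_column(sigma, N):
    """A(sigma) for comparison data: entry (i, j) equals 1 iff sigma(i) < sigma(j)."""
    col = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j and sigma[i] < sigma[j]:
                col[i, j] = 1.0
    return col.reshape(-1)

def observed_data(lam, N, column_fn):
    """y = A lambda, accumulated over the support of the choice model lam."""
    y = np.zeros(N * N)
    for sigma, p in lam.items():
        y += p * column_fn(sigma, N)
    return y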
Many other types of data vectors consistent with the above view are possible; all we anticipate is that the dimension m of the observed data is substantially smaller than N!. We are now in a position to formulate the questions broached in the previous section precisely:
"Simplest" Model: In finding the simplest choice model consistent with the observed data, we attempt to solve:

    minimize  ‖λ‖₀    subject to  Aλ = y,  1⊤λ = 1,  λ ≥ 0.    (2)
Robust Approach: For a given offer set M and data vector y, what are the minimal expected revenues we might expect from M consistent with the observed data? To answer this question, we attempt to solve:

    minimize_λ  Σ_{j∈M} p_j λ_j(M)    subject to  Aλ = y,  1⊤λ = 1,  λ ≥ 0.    (3)
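For very small N, problem (3) can be solved directly by enumerating all N! permutations and handing the resulting LP to a generic solver; the sketch below does exactly that, reusing the column constructors above and assuming the offer set M contains the no-purchase option 0 with prices[0] = 0. The machinery of Section 4 exists precisely because this enumeration is hopeless for realistic N.

import numpy as np
from itertools import permutations
from scipy.optimize import linprog

def worst_case_revenue(y, column_fn, prices, M, N):
    """Direct solution of problem (3) over all N! permutations (tiny N only)."""
    sigmas = list(permutations(range(N)))
    A = np.stack([column_fn(s, N) for s in sigmas], axis=1)
    # Each permutation contributes the price of the product it buys from M.
    c = np.array([prices[min(M, key=lambda i: s[i])] for s in sigmas])
    A_eq = np.vstack([A, np.ones((1, len(sigmas)))])   # A lambda = y, 1'lambda = 1
    b_eq = np.concatenate([y, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun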
3 Estimating Sparse Choice Models
Here we consider finding the sparsest model consistent with the observed data (i.e. problem (2)). We face two questions: (a) Why is sparsity an interesting criterion? (b) Is there an efficient procedure to solve the program in (2)? We begin by identifying two simple conditions that define a class of choice models (i.e. a class of distributions λ). Assuming that the "true" underlying model λ belongs to this class, we prove that the sparsest model (i.e. the solution to (2)) is in fact this true model. This answers the first question. We then propose a simple procedure inspired by [9] that correctly solves the program in (2) assuming these conditions. It is difficult to expect the program in (2) to recover the true solution in general (see [9] for a justification). Nonetheless, we show that the conditions we impose are not overly restrictive: we prove that a "sufficiently" sparse model generated uniformly at random from the set of all possible choice models satisfies the two conditions with high probability.

Before we describe the conditions we impose on the true underlying distribution, we introduce some notation. Let λ denote the true underlying distribution, and let K denote the support size, ‖λ‖₀. Let σ₁, σ₂, . . . , σ_K denote the permutations in the support, i.e., λ(σᵢ) ≠ 0 for 1 ≤ i ≤ K, and λ(σ) = 0 for all σ ≠ σᵢ, 1 ≤ i ≤ K. Recall that y is of dimension m and we index its elements by d. The two conditions we impose are as follows:
Signature Condition: For every permutation σᵢ in the support, there exists a d(i) ∈ {1, 2, . . . , m} such that A(σᵢ)_{d(i)} = 1 and A(σⱼ)_{d(i)} = 0 for every j ≠ i, 1 ≤ i, j ≤ K. In other words, for each permutation σᵢ in the support, y_{d(i)} serves as its "signature".

Linear Independence Condition: Σ_{i=1}^{K} cᵢ λ(σᵢ) ≠ 0 for any integers cᵢ, not all zero, with |cᵢ| ≤ C, where C is a sufficiently large number ≥ K. This condition is satisfied with probability 1 if [λ₁ λ₂ . . . λ_K]⊤ is drawn uniformly from the K-dimensional simplex.
When the two conditions are satisfied, the sparsest solution is indeed the true solution, as stated in the following theorem:

Theorem 1. Suppose we are given y = Aλ and λ satisfies the "Signature" condition and the "Linear Independence" condition. Then, λ is the unique solution to the program in (2).
The proof of Theorem 1 is given in the appendix. Next we describe the algorithm we propose for recovery. The algorithm takes y and A as input and outputs λᵢ (which denotes λ(σᵢ)) and A(σᵢ) for every permutation σᵢ in the support. The algorithm assumes the observed values y_d are sorted. Therefore, without loss of generality, assume that y₁ < y₂ < . . . < y_m. Then, the algorithm is as follows:
Algorithm:
Initialization: λ₀ = 0, k(0) = 0 and A(σᵢ)_d = 0 for 1 ≤ i ≤ K, 1 ≤ d ≤ m.
for d = 1 to m
    if y_d = Σ_{i∈T} λᵢ for some T ⊆ {1, . . . , k(d − 1)}
        k(d) = k(d − 1),
        A(σᵢ)_d = 1 for all i ∈ T
    else
        k(d) = k(d − 1) + 1,
        λ_{k(d)} = y_d,
        A(σ_{k(d)})_d = 1,
    end if
end for
Output K = k(m) and (λᵢ, A(σᵢ)), 1 ≤ i ≤ K.
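Under the two conditions, the test in the loop reduces to a subset-sum check against the λ's recovered so far. A direct, brute-force Python rendering (helper names are our own) is:

from itertools import combinations

def recover_sparse_model(y, tol=1e-9):
    """Greedy recovery of (lambda_i, support columns) from the data y,
    assuming the signature and linear-independence conditions hold."""
    lams = []                  # recovered lambda_1, ..., lambda_K
    rows = []                  # rows[d] = set T with A(sigma_i)_d = 1 for i in T
    for yd in sorted(y):
        T = subset_with_sum(lams, yd, tol)
        if T is not None:
            rows.append(T)                     # yd explained by existing support
        else:
            lams.append(yd)                    # a new permutation with lambda = yd
            rows.append({len(lams) - 1})
    return lams, rows

def subset_with_sum(vals, target, tol):
    """Return indices T with sum(vals[i] for i in T) = target, else None."""
    for r in range(len(vals) + 1):             # r = 0 handles target = 0
        for T in combinations(range(len(vals)), r):
            if abs(sum(vals[i] for i in T) - target) < tol:
                return set(T)
    return None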
Now, we have the following theorem:

Theorem 2. Suppose we are given y = Aλ and λ satisfies the "signature" and the "linear independence" conditions. Then, the algorithm described above recovers λ.
Theorem 2 is proved in the appendix. The algorithm we have described either succeeds in finding a valid λ or else determines that the two properties are not satisfied. We now show that the conditions we have imposed do not restrict the class of plausible models severely. For this, we show that models drawn from the following generative model satisfy the conditions with high probability.
Generative Model. Given K and an interval [a, b] on the positive real line, we generate a choice model λ as follows: choose K permutations, σ₁, σ₂, . . . , σ_K, uniformly at random with replacement; choose K numbers uniformly at random from the interval [a, b]; normalize the numbers so that they sum to 1; and assign them to the permutations σᵢ, 1 ≤ i ≤ K. For all other permutations σ ≠ σᵢ, λ(σ) = 0. Note that, since we are choosing permutations in the support with replacement, there could be repetitions. However, for large N and K ≪ N!, this happens with vanishing probability.
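A direct sampler for this generative model takes a few lines of NumPy (the function name is our own):

import numpy as np

def sample_choice_model(N, K, a, b, seed=0):
    """Draw a sparse choice model from the generative model above."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(a, b, size=K)
    weights /= weights.sum()                   # normalize so the masses sum to 1
    lam = {}
    for w in weights:
        sigma = tuple(int(v) for v in rng.permutation(N))   # uniform permutation
        lam[sigma] = lam.get(sigma, 0.0) + w   # repetitions are vanishingly rare
    return lam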
Depending on the observed data, we characterize values of the sparsity K for which distributions generated by the above generative model can be recovered with high probability. Specifically, we have the following theorem for the three forms of observed data mentioned in Section 2. The proof may be found in the appendix.

Theorem 3. Suppose λ is a choice model of support size K drawn from the generative model. Then, λ satisfies the "signature" and "linear independence" conditions with probability 1 − o(1) as N → ∞, provided K = O(N) for ranking data, K = o(log N) for comparison data, and K = o(√N) for the top set data.
Of course, in general, the underlying choice model may not satisfy the two conditions we have posited, or may not be exactly recoverable from the observed data. In order to deal with this more general scenario, we next propose an approach that implicitly identifies a "worst-case" distribution consistent with the observed data.
4 Robust Revenue Estimates Consistent with Data
In this section, we propose a general algorithm for the solution of program (3). This LP has N! variables and is clearly not amenable to direct solution; hence we consider its dual. In preparation for taking the dual, let A_j(M) ≜ {A(σ) : σ ∈ S_j(M)}, where, recall that, S_j(M) denotes the set of all permutations that result in the purchase of j ∈ M when offered the assortment M. Since S_N = ∪_{j∈M} S_j(M), we have implicitly specified a partition of the columns of the matrix A. Armed with this notation, the dual of (3) is:
    maximize_{α,ν}  α⊤y + ν    subject to  max_{x^j ∈ A_j(M)} (α⊤x^j + ν) ≤ p_j,  for each j ∈ M.    (4)

Our solution procedure will rely on an effective representation of the sets A_j(M).
4.1 A Canonical Representation of A_j(M) and its Application
We assume that every set S_j(M) can be expressed as a disjoint union of D_j sets. We denote the dth such set by S_j^d(M) and let A_j^d(M) be the corresponding set of columns. Consider the convex hull of the set A_j^d(M), conv{A_j^d(M)} ≜ Ā_j^d(M). Ā_j^d(M) is by definition a polytope contained in the m-dimensional unit cube, [0, 1]^m. In other words,

    Ā_j^d(M) = {x^{jd} : A₁^{jd} x^{jd} ≤ b₁^{jd},  A₂^{jd} x^{jd} = b₂^{jd},  A₃^{jd} x^{jd} ≥ b₃^{jd},  x^{jd} ≥ 0}    (5)

for appropriately defined A_•^{jd}, b_•^{jd}. By a canonical representation of A_j(M), we will thus understand a partition of S_j(M) and a polyhedral representation of the columns corresponding to every set in the partition, as given by (5). Ignoring the problem of actually obtaining this representation for now, we assume access to a canonical representation and present a simple program, whose size is polynomial in the size of this representation, that is equivalent to (3) and (4). For simplicity of notation, we assume that each of the polytopes Ā_j^d(M) is in standard form, i.e. Ā_j^d(M) = {x^{jd} : A^{jd} x^{jd} = b^{jd}, x^{jd} ≥ 0}. Now since an affine function is always optimized at the vertices of a polytope, we know:
    max_{x^j ∈ A_j(M)} (α⊤x^j + ν)  =  max_{d, x^{jd} ∈ Ā_j^d(M)} (α⊤x^{jd} + ν).

We have thus reduced (4) to a "robust" LP. By strong duality we have:
    max_{x^{jd} ∈ Ā_j^d(M)} (α⊤x^{jd} + ν)
      =  { maximize_{x^{jd}}  α⊤x^{jd} + ν   subject to  A^{jd} x^{jd} = b^{jd},  x^{jd} ≥ 0 }
      =  { minimize_{γ^{jd}}  b^{jd⊤} γ^{jd} + ν   subject to  γ^{jd⊤} A^{jd} ≥ α⊤ }.    (6)

We have thus established the following useful equality:
    {α, ν : max_{x^j ∈ Ā_j(M)} α⊤x^j + ν ≤ p_j}
      =  {α, ν : ∃ γ^{jd} such that b^{jd⊤} γ^{jd} + ν ≤ p_j,  γ^{jd⊤} A^{jd} ≥ α⊤,  d = 1, 2, . . . , D_j}.

It follows that solving (3) is equivalent to the following LP, whose complexity is polynomial in the description of our canonical representation:
    maximize_{α,ν}  α⊤y + ν
    subject to      b^{jd⊤} γ^{jd} + ν ≤ p_j    for all j ∈ M, d = 1, 2, . . . , D_j,    (7)
                    γ^{jd⊤} A^{jd} ≥ α⊤        for all j ∈ M, d = 1, 2, . . . , D_j.
Our ability to solve (7) relies on our ability to produce an efficient canonical representation of S_j(M). In what follows, we first consider an example where such a representation is readily available, and then consider the general case.

Canonical Representation for Ranking Data: Recall the definition of ranking data from Section 2. Consider partitioning S_j(M) into N sets wherein the dth set is given by S_j^d(M) = {σ ∈ S_j(M) : σ(j) = d}. It is not difficult to show that the set A_j^d(M) is equal
to the set of all vectors x^{jd} in {0, 1}^{N²} satisfying:

    Σ_{r=0}^{N−1} x^{jd}_{ri} = 1    for 0 ≤ i ≤ N − 1,
    Σ_{i=0}^{N−1} x^{jd}_{ri} = 1    for 0 ≤ r ≤ N − 1,
    x^{jd}_{ri} ∈ {0, 1}             for 0 ≤ i, r ≤ N − 1,    (8)
    x^{jd}_{dj} = 1,
    x^{jd}_{d′i} = 0                 for all i ∈ M, i ≠ j, and 0 ≤ d′ < d.
Our goal is, of course, to find a description for Ā_j^d(M) of the type (5). Now consider replacing the third (integrality) constraint in (8) with simply the non-negativity constraint x^{jd}_{ri} ≥ 0. It is clear that the resulting polytope contains A_j^d(M). In addition, one may show that the resulting polytope has integral vertices, since it is simply a matching polytope with some variables forced to be integers, so that in fact the polytope is precisely Ā_j^d(M), and we have our canonical representation. Further, notice that this representation yields an efficient algorithm to solve (3) via (7)!
4.2 Computing a Canonical Representation: Comparison Data
Recall the definition of comparison data from Section 2. We use this data as an example to illustrate a general procedure for computing a canonical representation. Consider S_j(M). It is not difficult to see that the corresponding set of columns A_j(M) is equal to the set of vectors in {0, 1}^{(N−1)N} satisfying the following constraints:

    x^j_{il} ≥ x^j_{ik} + x^j_{kl} − 1    for all i, k, l ∈ N, i ≠ k ≠ l,
    x^j_{ik} + x^j_{ki} = 1               for all i, k ∈ N, i ≠ k,    (9)
    x^j_{ji} = 1                          for all i ∈ M, i ≠ j,
    x^j_{ik} ∈ {0, 1}                     for all i, k ∈ N, i ≠ k.
Briefly, the second constraint follows since for any i, k, i ≠ k, either σ(i) > σ(k) or else σ(i) < σ(k). The first constraint enforces transitivity: σ(i) < σ(k) and σ(k) < σ(l) together imply σ(i) < σ(l). The third constraint enforces that all σ ∈ S_j(M) must satisfy σ(j) < σ(i) for all i ∈ M. Now consider the polytope obtained by relaxing the fourth (integrality) constraint to simply x^j_{ik} ≥ 0. Call this polytope Ā_j^o(M). Of course, we must have Ā_j^o(M) ⊇ Ā_j(M). Unlike the case of ranking data, however, Ā_j^o(M) can in fact be shown to be non-integral, so that Ā_j^o(M) ≠ Ā_j(M) in general. In this case we resort to the following procedure.
[1.] Solve (7) using the representation of Ā_j^o(M) in place of Ā_j(M). This yields a lower bound on (3), since Ā_j^o(M) ⊇ Ā_j(M). Call the corresponding solution α^(1), ν^(1).

[2.] Solve the optimization problem max α^(1)⊤ x^j subject to x^j ∈ Ā_j^o(M) for each j. If the optimal solution x̄^j is integral for each j, then stop; the solution computed in the first step is in fact optimal.

[3.] Let x̄^j_{ik} be a non-integral variable. Partition S_j(M) on this variable, i.e. define S_j^1(M) = {σ : σ ∈ S_j(M), σ(i) < σ(k)} and S_j^2(M) = {σ : σ ∈ S_j(M), σ(i) > σ(k)}. Define outer approximations to Ā_j^1(M) and Ā_j^2(M) as the projections of Ā_j^o(M) onto x^j_{ik} = 1 and x^j_{ik} = 0, respectively. Go to step 1.
The above procedure is finite, but the size of the LP we solve at each iteration doubles. Nonetheless, each iteration produces a lower bound on (3) whose quality is easily measured (for instance, by solving the maximization version of (3) using the same procedure), and this quality improves with each iteration. In our computational experiments with a related type of data, it sufficed to stop after a single iteration.
5 An Empirical Evaluation of the Approach
We have presented simple subroutines to estimate the revenues R(M) from a particular offer set M, given marginal preference data y. These subroutines are effectively "non-parametric" and can form the basis of a procedure that solves the revenue optimization problem posed in the introduction. Here we seek to contrast this approach with a commonly used parametric approach. We consider two types of observable data: ranking data, and a "censored" version of the comparison data which gives us, for every pair of products i, j ≠ 0, the fraction of customers that prefer i to j and in addition prefer i to 0 (i.e. to not buying). The latter type of data is quite realistic.
The parametric recipe we consider is the following: one fits a Multinomial Logit (MNL) model to the observable data and picks an optimal offer set by evaluating R(M) = Σ_{j∈M} p_j P(j|M), assuming P(·|M) follows the estimated model. The MNL is a commonly used parametric model that associates with each product i in N a positive scalar w_i; w₀ = 1 by convention. The model assumes P(i|M) = w_i / Σ_{j∈M} w_j. In place of making this parametric assumption, we could instead evaluate R(M) using the robust subroutine developed in the previous section and pick M to maximize this conservative estimate. It is clear that if the MNL model is a poor fit to the true choice model P, our robust approach is likely to outperform the parametric approach substantially. Instead, what we focus on here is what happens if the MNL model is a perfect fit to the true choice model. In this case, the parametric approach is the best possible. How sub-optimal is our non-parametric approach here?
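For reference, the MNL quantities used in this comparison amount to a few lines; as in the text, the convention below is that the offer set includes the no-purchase option 0 with w[0] = 1 and prices[0] = 0:

def mnl_prob(w, i, M):
    """MNL choice probability P(i|M) = w_i / (sum of w_j over j in M)."""
    return w[i] / sum(w[j] for j in M)

def mnl_revenue(w, prices, M):
    """Expected revenue under the MNL model for offer set M."""
    return sum(prices[j] * mnl_prob(w, j, M) for j in M)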
We consider an MNL model on N = 25 products. The model and prices were specified using customer utilities for Amazon.com's highest selling DVDs (and their prices) during a 3-month period from 1 July 2005 to 30 September 2005, estimated by [13].¹ We generate synthetic observed data (of both the ranking type and the comparison type) according to this fitted MNL model. This represents a scenario where the fitted MNL is a perfect descriptor of reality. We conduct the following experiments:
Quality of Revenue Predictions: For each type of observable data, we compute our estimate of the minimum value that R(M) can take on, consistent with that data, by solving (3). We compare this with the value of R(M) predicted under the MNL model (which in this case is exact). Figures 1(b) and 1(d) compare these two quantities for a set of randomly chosen subsets M of the 25 potential DVDs, assuming ranking data and the censored comparison data respectively. In both cases, our procedure produces excellent predictions of expected revenue without making the assumptions on P(·|·) inherent in the MNL model.

Quality of Optimal Solutions to Revenue Maximization Problems: For each type of observable data, we compute optimal offer sets M of varying capacities assuming the fitted MNL model and an optimization procedure described in [13]. We then evaluate the revenue predictions for these optimal offer sets by solving (3). Figures 1(a) and 1(c) plot these estimates for the two types of observable data. The gap between the "MNL" and the "MIN" curves is thus an upper bound on the expected revenue loss if one used our non-parametric procedure to pick an optimal offer set M instead of the parametric procedure (which in this setting is optimal). Again, we see that the revenue loss is surprisingly small.
6 Conclusion and Potential Future Directions
We have presented a general framework that allows us to answer questions related to how consumers choose among alternatives, using limited observable data and without making additional parametric assumptions. The approaches we have proposed are feasible from a data availability standpoint as well as a computational standpoint, and provide a much needed non-parametric "subroutine" for the revenue optimization problems described at the outset. This paper also opens up the potential for a stream of future work.
¹ The problem of optimizing over M is particularly relevant to Amazon.com given limited screen real estate and cannibalization effects.
[Figure 1: Expected revenue (dollars) predicted by the MNL model ("MNL Expected Revenue") versus the robust minimum estimate from (3) ("MIN Expected Revenue"). (a) Ranking data, optimal MNL assortments of varying size; (b) ranking data, random assortments; (c) comparison data, optimal MNL assortments; (d) comparison data, random assortments.]
References
[1] K. Bartels, Y. Boztug, and M. M. Muller. Testing the multinomial logit model. Working paper, 1999.
[2] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. Preprint, 2008.
[3] E. J. Candes, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8), 2006.
[4] G. Cormode and S. Muthukrishnan. Combinatorial algorithms for compressed sensing. Lecture Notes in Computer Science, 4056:280, 2006.
[5] G. Debreu. Review of R. D. Luce, "Individual choice behavior: A theoretical analysis". American Economic Review, 50:186–188, 1960.
[6] G. Dobson and S. Kalish. Positioning and pricing a product line. Marketing Science, 7(2):107–125, 1988.
[7] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[8] J. L. Horowitz. Semiparametric estimation of a work-trip mode choice model. Journal of Econometrics, 58:49–70, 1993.
[9] S. Jagabathula and D. Shah. Inferring rankings under constrained sensing. In NIPS, 2008.
[10] D. McFadden. Econometric models for probabilistic choice among products. The Journal of Business, 53(3):S13–S29, 1980.
[11] E. H. McKinney. Generalized birthday problem. American Mathematical Monthly, pages 385–387, 1966.
[12] P. Rusmevichientong, B. Van Roy, and P. Glynn. A nonparametric approach to multiproduct pricing. Operations Research, 54(1), 2006.
[13] P. Rusmevichientong, Z. J. Shen, and D. B. Shmoys. Dynamic assortment optimization with a multinomial logit choice model and capacity constraint. Working paper, 2008.
[14] Kalyan T. Talluri and Garrett J. van Ryzin. The Theory and Practice of Revenue Management. Springer Science+Business Media, 2004.
Modelling Relational Data using Bayesian Clustered
Tensor Factorization
Ilya Sutskever
University of Toronto
[email protected]
Ruslan Salakhutdinov
MIT
[email protected]
Joshua B. Tenenbaum
MIT
[email protected]
Abstract
We consider the problem of learning probabilistic models for complex relational
structures between various types of objects. A model can help us ?understand? a
dataset of relational facts in at least two ways, by finding interpretable structure
in the data, and by supporting predictions, or inferences about whether particular
unobserved relations are likely to be true. Often there is a tradeoff between these
two aims: cluster-based models yield more easily interpretable representations,
while factorization-based approaches have given better predictive performance on
large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF)
model, which embeds a factorized representation of relations in a nonparametric
Bayesian clustering framework. Inference is fully Bayesian but scales well to
large data sets. The model simultaneously discovers interpretable clusters and
yields predictive performance that matches or beats previous probabilistic models
for relational data.
1 Introduction
Learning with relational data, or sets of propositions of the form (object, relation, object), has been
important in a number of areas of AI and statistical data analysis. AI researchers have proposed that
by storing enough everyday relational facts and generalizing appropriately to unobserved propositions, we might capture the essence of human common sense. For instance, given propositions such
as (cup, used-for, drinking), (cup, can-contain, juice), (cup, can-contain, water), (cup, can-contain,
coffee), (glass, can-contain, juice), (glass, can-contain, water), (glass, can-contain, wine), and so
on, we might also infer the propositions (glass, used-for, drinking), (glass, can-contain, coffee), and
(cup, can-contain, wine). Modelling relational data is also important for more immediate applications, including problems arising in social networks [2], bioinformatics [16], and collaborative
filtering [18].
We approach these problems using probabilistic models that define a joint distribution over the truth values of all conceivable relations. Such a model defines a joint distribution over the binary variables T(a, r, b) ∈ {0, 1}, where a and b are objects, r is a relation, and the variable T(a, r, b) determines whether the relation (a, r, b) is true. Given a set of true relations S = {(a, r, b)}, the model predicts that a new relation (a, r, b) is true with probability P(T(a, r, b) = 1 | S).
In addition to making predictions on new relations, we also want to understand the data; that is, to
find a small set of interpretable laws that explains a large fraction of the observations. By introducing
hidden variables over simple hypotheses, the posterior distribution over the hidden variables will
concentrate on the laws the data is likely to obey, while the nature of the laws depends on the model.
For example, the Infinite Relational Model (IRM) [8] represents simple laws consisting of partitions
of objects and partitions of relations. To decide whether the relation (a, r, b) is valid, the IRM simply
checks that the clusters to which a, r, and b belong are compatible. The main advantage of the IRM
is its ability to extract meaningful partitions of objects and relations from the observational data,
which greatly facilitates exploratory data analysis. More elaborate proposals consider models over
more powerful laws (e.g., first order formulas with noise models or multiple clusterings), which are
currently less practical due to the computational difficulty of their inference problems [7, 6, 9].
Models based on matrix or tensor factorization [18, 19, 3] have the potential of making better predictions than interpretable models of similar complexity, as we demonstrate in our experimental results
section. Factorization models learn a distributed representation for each object and each relation,
and make predictions by taking appropriate inner products. Their strength lies in the relative ease of
their continuous (rather than discrete) optimization, and in their excellent predictive performance.
However, it is often hard to understand and analyze the learned latent structure.
The tension between interpretability and predictive power is unfortunate: it is clearly better to have
a model that has both strong predictive power and interpretability. We address this problem by
introducing the Bayesian Clustered Tensor Factorization (BCTF) model, which combines good interpretability with excellent predictive power. Specifically, similarly to the IRM, the BCTF model
learns a partition of the objects and a partition of the relations, so that the truth-value of a relation
(a, r, b) depends primarily on the compatibility of the clusters to which a, r, and b belong. At the
same time, every entity has a distributed representation: each object a is assigned the two vectors
aL , aR (one for a being a left argument in a relation and one for it being a right argument), and
a relation r is assigned the matrix R. Given the distributed representations, the truth of a relation
(a, r, b) is determined by the value of a_L⊤ R b_R, while the object partition encourages the objects within a cluster to have similar distributed representations (and similarly for relations).
The experiments show that the BCTF model achieves better predictive performance than a number
of related probabilistic relational models, including the IRM, on several datasets. The model is scalable, and we apply it on the Movielens [15] and the Conceptnet [10] datasets. We also examine
the structure found in BCTF's clusters and learned vectors. Finally, our results provide an example where the performance of a Bayesian model substantially outperforms a corresponding MAP
estimate for large sparse datasets with minimal manual hyperparameter selection.
2 The Bayesian Clustered Tensor Factorization (BCTF)
We begin with a simple tensor factorization model. Suppose that we have a fixed finite set of objects
O and a fixed finite set of relations R. For each object a ∈ O the model maintains two vectors
a_L, a_R ∈ ℝ^d (the left and the right arguments of the relation), and for each relation r ∈ R it
maintains a matrix R ∈ ℝ^{d×d}, where d is the dimensionality of the model. Given a setting of
these parameters (collectively denoted by θ), the model independently chooses the truth-value of
each relation (a, r, b) from the distribution P(T(a, r, b) = 1 | θ) = 1/(1 + exp(−a_L^T R b_R)). In
particular, given a set of known relations S, we can learn the parameters by maximizing a penalized
log-likelihood log P(S | θ) − Reg(θ). The necessity of having a pair of parameters a_L, a_R, instead
of a single distributed representation a, will become clear later.
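To make the bilinear form concrete, here is a minimal sketch of the likelihood computation in Python/NumPy; the array shapes and function names are ours, not from the authors' code, and the L2 penalty stands in for the unspecified Reg(θ):

    import numpy as np

    def relation_prob(aL, R, bR):
        """P(T(a, r, b) = 1 | theta) = 1 / (1 + exp(-aL^T R bR))."""
        return 1.0 / (1.0 + np.exp(-(aL @ R @ bR)))

    def penalized_log_lik(triples, aL, aR, R, reg=0.1):
        """log P(S | theta) - Reg(theta) over observed triples [(a, r, b, t), ...];
        aL, aR have shape (num_objects, d), R has shape (num_relations, d, d)."""
        ll = 0.0
        for a, r, b, t in triples:
            p = relation_prob(aL[a], R[r], aR[b])
            ll += t * np.log(p) + (1.0 - t) * np.log(1.0 - p)
        # simple L2 penalty as one possible choice of the regularizer
        return ll - reg * (np.sum(aL ** 2) + np.sum(aR ** 2) + np.sum(R ** 2))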
Next, we define a prior over the vectors {a_L}, {a_R}, and {R}. Specifically, the model defines a
prior distribution over partitions of objects and partitions of relations using the Chinese Restaurant
Process. Once the partitions are chosen, each cluster C samples its own prior mean and prior diagonal covariance, which are then used to independently sample the vectors {a_L, a_R : a ∈ C} that
belong to cluster C (and similarly for the relations, where we treat R as a d²-dimensional vector).
As a result, objects within a cluster have similar distributed representations. When the clusters are
sufficiently tight, the value of a_L^T R b_R is mainly determined by the clusters to which a, r, and b
belong. At the same time, the distributed representations help generalization, because they can represent graded similarities between clusters and fine differences between objects in the same cluster.
Thus, given a set of relations, we expect the model to find both meaningful clusters of objects and
relations, as well as predictive distributed representations.
More formally, assume that O = {a_1, . . . , a_N} and R = {r_1, . . . , r_M}. The model is defined as
follows:
    P(obs, θ, c, α, α_DP) = P(obs | θ, σ²) P(θ | c, α) P(c | α_DP) P(α_DP, α, σ²)    (1)
where the observed data obs is a set of triples and their truth values {((a, r, b), t)}; the variable c =
{c_obj, c_rel} contains the cluster assignments (partitions) of the objects and the relations; the variable
θ = {a_L, a_R, R} consists of the distributed representations of the objects and the relations, and
Figure 1: A schematic diagram of the model, where the arcs represent the object clusters and the
vectors within each cluster are similar. The model predicts T(a, r, b) with a_L^T R b_R.
{σ², α, α_DP} are the model hyperparameters. Two of the above terms are given by

    P(obs | θ) = ∏_{((a,r,b),t) ∈ obs} N(t | a_L^T R b_R, σ²)    (2)

    P(c | α_DP) = CRP(c_obj | α_DP) CRP(c_rel | α_DP)    (3)
where N(t | μ, σ²) denotes the Gaussian distribution with mean μ and variance σ², and CRP(c | α)
denotes the probability of the partition induced by c under the Chinese Restaurant Process with
concentration parameter α. The Gaussian likelihood in Eq. 2 is far from ideal for modelling binary
data, but, similarly to [19, 18], we use it instead of the logistic function because it makes the model
conjugate and Gibbs sampling easier.
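For reference, the CRP term in Eq. 3 can be evaluated sequentially; a small sketch (our own helper, assuming cluster assignments are given in arrival order):

    import numpy as np
    from collections import Counter

    def crp_log_prob(assignments, alpha):
        """log CRP(c | alpha): customer i joins an existing table with
        probability count/(i + alpha) and a new table with alpha/(i + alpha)."""
        counts = Counter()
        logp = 0.0
        for i, c in enumerate(assignments):
            if counts[c] == 0:
                logp += np.log(alpha / (i + alpha))
            else:
                logp += np.log(counts[c] / (i + alpha))
            counts[c] += 1
        return logp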
Defining P(θ | c, α) takes a little more work. Given the partitions, the sets of parameters {a_L}, {a_R},
and {R} become independent, so

    P(θ | c, α) = P({a_L} | c_obj, α_obj) P({a_R} | c_obj, α_obj) P({R} | c_rel, α_rel)    (4)
The distribution over the relation-vectors is given by

    P({R} | c_rel, α_rel) = ∏_{k=1}^{|c_rel|} ∫ [ ∏_{i : c_rel,i = k} N(R_i | μ, Σ) ] dP(μ, Σ | α_rel)    (5)
where |c_rel| is the number of clusters in the partition c_rel. This is precisely a Dirichlet process
mixture model [13]. We further place a Gaussian-Inverse-Gamma prior over (μ, Σ):

    P(μ, Σ | α_rel) = P(μ | Σ) P(Σ | α_rel) = N(μ | 0, Σ) ∏_{d'} IG(σ²_{d'} | α_rel, 1)    (6)

                    ∝ exp( −∑_{d'} (μ²_{d'}/2 + 1) / σ²_{d'} ) ∏_{d'} (σ²_{d'})^{−0.5−α_rel−1}    (7)
where Σ is a diagonal matrix whose entries are σ²_{d'}, the variable d' ranges over the dimensions of
R_i (so 1 ≤ d' ≤ d²), and IG(x | a, b) denotes the inverse-Gamma distribution with shape parameter
a and scale parameter b. This prior makes many useful expectations analytically computable. The
terms P({a_L} | c_obj, α_obj) and P({a_R} | c_obj, α_obj) are defined analogously to Eq. 5.
Finally, we place an improper P(x) ∝ x⁻¹ scale-uniform prior over each hyperparameter independently.
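This conjugacy is what makes the later Gibbs updates tractable; a sketch of the per-dimension Normal-Inverse-Gamma posterior draw under the stated prior (mean 0, prior strength 1, shape α_rel, scale 1 — the function name and shapes are ours):

    import numpy as np

    def sample_cluster_mean_var(X, alpha_rel, rng):
        """Sample per-dimension (mu, sigma^2) from the Normal-Inverse-Gamma
        posterior given cluster members X of shape (n, D); prior:
        mu_d ~ N(0, sigma_d^2), sigma_d^2 ~ IG(alpha_rel, 1)."""
        n = X.shape[0]
        xbar = X.mean(axis=0)
        ss = ((X - xbar) ** 2).sum(axis=0)
        kappa_n = 1.0 + n                       # prior strength 1 plus n points
        mu_n = n * xbar / kappa_n
        alpha_n = alpha_rel + 0.5 * n
        beta_n = 1.0 + 0.5 * ss + 0.5 * n * xbar ** 2 / kappa_n
        # inverse-gamma draw: beta_n / Gamma(alpha_n, 1) ~ IG(alpha_n, beta_n)
        sigma2 = beta_n / rng.gamma(alpha_n, 1.0, size=beta_n.shape)
        mu = rng.normal(mu_n, np.sqrt(sigma2 / kappa_n))
        return mu, sigma2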
Inference
We now briefly describe the MCMC algorithm used for inference. Before starting the Markov chain,
we find a MAP estimate of the model parameters using the method of conjugate gradient (but we
do not optimize over the partitions). The MAP estimate is then used to initialize the Markov chain.
Each step of the Markov chain consists of a number of internal steps. First, given the parameters
θ, the chain updates c = (c_rel, c_obj) using a collapsed Gibbs sampling sweep and a step of the
split-and-merge algorithm (where the launch state was obtained with two sweeps of Gibbs sampling
starting from a uniformly random cluster assignment) [5]. Next, it samples from the posterior mean
and covariance of each cluster, which is the distribution proportional to the term being integrated in
Eq. 5.
Next, the Markov chain samples the parameters {a_L} given {a_R}, {R}, and the cluster posterior
means and covariances. This step is tractable since the conditional distribution over the object vectors {a_L} is Gaussian and factorizes into the product of conditional distributions over the individual
object vectors. This conditional independence is important, since it tends to make the Markov chain
mix faster, and is a direct consequence of each object a having two vectors, a_L and a_R. If each
object a were only associated with a single vector a (and not a_L, a_R), the conditional distribution
over {a} would not factorize, which in turn would require the use of a slower sequential Gibbs
sampler. In the current setting, we can further speed up the inference by sampling from conditional
distributions in parallel. The speedup could be substantial, particularly when the number of objects
is large. The disadvantage of using two vectors for each object is that the model cannot as easily
capture the "position-independent" properties of the object, especially in the sparse regime.
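A sketch of the conditional Gaussian update for a single object's a_L (our own notation; obs lists the (r, b, t) triples in which the object is the left argument, and prior_mean/prior_var come from its cluster):

    import numpy as np

    def sample_aL_for_object(obs, aR, R, prior_mean, prior_var, sigma2, rng):
        """Sample aL | everything else for one object; the prior is diagonal."""
        prec = np.diag(1.0 / prior_var)      # prior precision
        lin = prior_mean / prior_var         # precision-weighted prior mean
        for r, b, t in obs:
            x = R[r] @ aR[b]                 # effective regressor: t ~ N(aL . x, sigma2)
            prec = prec + np.outer(x, x) / sigma2
            lin = lin + t * x / sigma2
        cov = np.linalg.inv(prec)
        return rng.multivariate_normal(cov @ lin, cov)

Since the conditionals factorize over objects, calls like this can be issued for all objects in parallel.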
Sampling {a_L} from the Gaussian takes time proportional to d³·N, where N is the number of
objects. While we do the same for {a_R}, we run a standard hybrid Monte Carlo to update the
matrices {R} using 10 leapfrog steps of size 10⁻⁵ [12]. Each matrix, which we treat as a vector,
has d² dimensions, so direct sampling from the Gaussian distribution scales as d⁶·M, which is slow
even for small values of d (e.g. 20). Finally, we make a small symmetric multiplicative change to
each hyperparameter and accept or reject its new value according to the Metropolis-Hastings rule.
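A sketch of one such move: under the improper p(x) ∝ 1/x prior, a symmetric random walk on log x has a uniform (improper) prior in log-space, so the acceptance ratio reduces to a likelihood ratio (log_lik is a placeholder for the model's log likelihood as a function of the hyperparameter):

    import numpy as np

    def mh_scale_update(h, log_lik, rng, step=0.1):
        """One MH update of a positive hyperparameter h under p(h) ~ 1/h:
        random walk on log h, accepted with min(1, likelihood ratio)."""
        h_new = h * np.exp(step * rng.normal())   # symmetric multiplicative proposal
        if np.log(rng.uniform()) < log_lik(h_new) - log_lik(h):
            return h_new
        return h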
3 Evaluation
In this section, we show that the BCTF model has excellent predictive power and that it finds interpretable clusters by applying it to five datasets and comparing its performance to the IRM [8] and
the Multiple Relational Clustering (MRC) model [9]. We also compare BCTF to its simpler counterpart: a Bayesian Tensor Factorization (BTF) model, where all the objects and the relations belong to
a single cluster. The Bayesian Tensor Factorization model is a generalization of the Bayesian probabilistic matrix factorization [17], and is closely related to many other existing tensor-factorization
methods [3, 14, 1]. In what follows, we will describe the datasets, report the predictive performance
of our and of the competing algorithms, and examine the structure discovered by BCTF.
3.1 Description of the Datasets
We use three of the four datasets used by [8] and [9], namely, the Animals, the UML, and the Kinship
dataset, as well the Movielens [15] and the Conceptnet datasets [10].
1. The animals dataset consists of 50 animals and 85 binary attributes. The dataset is a fully
observed matrix?so there is only one relation.
2. The kinship dataset consists of kinship relationships among the members of the Alyawarra
tribe [4]. The dataset contains 104 people and 26 relations. This dataset is dense and has
104×26×104 = 281216 observations, most of which are 0.
3. The UML dataset [11] consists of 135 medical terms and 49 relations. The dataset is also
fully observed and has 135×49×135 = 893025 (mostly 0) observations.
4. The Movielens [15] dataset consists of 1000209 observed integer ratings of 6041 movies
on a scale from 1 to 5, rated by 3953 users. The dataset is 95.8% sparse.
5. The Conceptnet dataset [10] is a collection of common-sense assertions collected from the
web. It consists of about 112135 ?common-sense? assertions such as (hockey, is-a, sport).
There are 19 relations and 17571 objects. To make our experiments faster, we used only
the 7000 most frequent objects, which resulted in 82062 true facts. For the negative data,
we sampled twice as many random object-relation-object triples and used them as the false
facts. As a result, there were 246186 binary observations in this dataset. The dataset is
99.9% sparse.
3.2 Experimental Protocol
To facilitate comparison with [9], we conducted our experiments the following way. First, we normalized each dataset so the mean of its observations was 0. Next, we created 10 random train/test
algorithm    | animals      | kinship      | UML          | movielens    | conceptnet
             | RMSE   AUC   | RMSE   AUC   | RMSE   AUC   | RMSE   AUC   | RMSE   AUC
MAP20        | 0.467  0.78  | 0.122  0.82  | 0.033  0.96  | 0.899  --    | 0.536  0.57
MAP40        | 0.528  0.68  | 0.110  0.90  | 0.024  0.98  | 0.933  --    | 0.614  0.48
BTF20        | 0.337  0.85  | 0.122  0.82  | 0.033  0.96  | 0.835  --    | 0.275  0.93
BCTF20       | 0.331  0.86  | 0.122  0.82  | 0.033  0.96  | 0.836  --    | 0.278  0.93
BTF40        | 0.338  0.86  | 0.108  0.90  | 0.024  0.98  | 0.834  --    | 0.267  0.94
BCTF40       | 0.336  0.86  | 0.108  0.90  | 0.024  0.98  | 0.836  --    | 0.260  0.94
IRM [8]      | 0.382  0.75  | 0.140  0.66  | 0.054  0.70  | --     --    | --     --
MRC [9]      | --     0.81  | --     0.85  | --     0.98  | --     --    | --     --

Table 1: A quantitative evaluation of the algorithms using 20- and 40-dimensional vectors. We report the
performance of the following algorithms: the MAP-based Tensor Factorization (MAP), the Bayesian Tensor Factorization (BTF) with MCMC (where all objects belong to a single cluster), the full Bayesian Clustered Tensor
Factorization (BCTF), the IRM [8], and the MRC [9].
Object clusters:
O1: killer whale, blue whale, humpback whale, seal, walrus, dolphin
O2: antelope, dalmatian, horse, giraffe, zebra, deer
O3: mole, hamster, rabbit, mouse
O4: hippopotamus, elephant, rhinoceros
O5: spider monkey, gorilla, chimpanzee
O6: moose, ox, sheep, buffalo, pig, cow
O7: beaver, squirrel, otter
O8: Persian cat, skunk, chihuahua, collie
O9: grizzly bear, polar bear

Feature clusters:
F1: flippers, strainteeth, swims, fish, arctic, coastal, ocean, water
F2: hooves, vegetation, grazer, plains, fields
F3: paws, claws, solitary
F4: bulbous, slow, inactive
F5: jungle, tree
F6: big, strong, group
F7: walks, quadrapedal, ground
F8: small, weak, nocturnal, hibernate, nestspot
F9: tail, newworld, oldworld, timid

Figure 2: Results on the Animals dataset. Left: The discovered clusters (listed above). Middle: The biclustering of the
features. Right: The covariance of the distributed representations of the animals (bottom) and their attributes
(top).
splits, where 10% of the data was used for testing. For the Conceptnet and the Movielens datasets,
we used only two train/test splits and at most 30 clusters, which made our experiments faster. We
report test root mean squared error (RMSE) and the area under the precision recall curve (AUC) [9].
For the IRM¹ we make predictions as follows. The IRM partitions the data into blocks; we compute
the smoothed mean of the observed entries of each block and use it to predict the test entries in the
same block.
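A sketch of this block-mean predictor (the pseudo-count smoothing toward a prior value is our reading of "smoothed"):

    import numpy as np

    def irm_block_predict(train, test, z_obj, z_rel, smooth=1.0, prior=0.0):
        """Predict each test triple by the smoothed mean of the observed
        training entries in the same (cluster_a, cluster_r, cluster_b) block."""
        sums, counts = {}, {}
        for (a, r, b), t in train:
            key = (z_obj[a], z_rel[r], z_obj[b])
            sums[key] = sums.get(key, 0.0) + t
            counts[key] = counts.get(key, 0) + 1
        preds = []
        for a, r, b in test:
            key = (z_obj[a], z_rel[r], z_obj[b])
            s, n = sums.get(key, 0.0), counts.get(key, 0)
            preds.append((s + smooth * prior) / (n + smooth))
        return np.array(preds)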
3.3 Results
We first applied BCTF to the Animals, Kinship, and the UML datasets using 20 and 40-dimensional
vectors. Table 1 shows that BCTF substantially outperforms IRM and MRC in terms of both RMSE
and AUC. In fact, for the Kinship and the UML datasets, the simple tensor factorization model
trained by MAP performs as well as BTF and BCTF. This happens because for these datasets the
number of observations is much larger than the number of parameters, so there is little uncertainty
about the true parameter values. However, the Animals dataset is considerably smaller, so BTF
performs better, and BCTF performs even better than the BTF model.
We then applied BCTF to the Movielens and the Conceptnet datasets. We found that the MAP estimates suffered from significant overfitting, and that the fully Bayesian models performed much
better. This is important because both datasets are sparse, which makes overfitting difficult to combat. For the extremely sparse Conceptnet dataset, the BCTF model further improved upon simpler
¹ The code is available at http://www.psy.cmu.edu/~ckemp/code/irm.html
[Figure 3: panels (a)-(g).]
Figure 3: Results on the Kinship dataset. Left (panel (a)): The covariance of the distributed representations {a_L} learned
for each person. Right (panels (b)-(g)): The biclustering of a subset of the relations.
Inferred object clusters (numbered as in the figure):
2: Amino Acid, Peptide, or Protein, Biomedical or Dental Material, Carbohydrate, . . .
3: Amphibian, Animal, Archaeon, Bird, Fish, Human, . . .
4: Antibiotic, Biologically Active Substance, Enzyme, Hazardous or Poisonous Substance, Hormone, . . .
5: Biologic Function, Cell Function, Genetic Function, Mental Process, . . .
6: Classification, Drug Delivery Device, Intellectual Product, Manufactured Object, . . .
7: Body Part, Organ, Cell, Cell Component, . . .
8: Alga, Bacterium, Fungus, Plant, Rickettsia or Chlamydia, Virus
9: Age Group, Family Group, Group, Patient or Disabled Group, . . .
10: Cell / Molecular Dysfunction, Disease or Syndrome, Model of Disease, Mental Dysfunction, . . .
11: Daily or Recreational Activity, Educational Activity, Governmental Activity, . . .
12: Environmental Effect of Humans, Human-caused Phenomenon or Process, . . .
13: Acquired Abnormality, Anatomical Abnormality, Congenital Abnormality, Injury or Poisoning
14: Health Care Related Organization, Organization, Professional Society, . . .

Figure 4: Results on the medical UML dataset. Left: The covariance of the distributed representations {a_L}
learned for each object. Right: The inferred clusters, along with the biclustering of a subset of the relations
(Affects, interacts with, causes).
BTF model. We do not report results for the IRM, because the existing off-the-shelf implementation
could not handle these large datasets.
We now examine the latent structure discovered by the BCTF model by inspecting a sample produced by the Markov chain. Figure 2 shows some of the clusters learned by the model on the
Animals dataset. It also shows the biclustering, as well as the covariance of the distributed representations of the animals and their attributes, sorted by their clusters. By inspecting the covariance,
we can determine the clusters that are tight and the affinities between the clusters. Indeed, the cluster structure is reflected in the block-diagonal structure of the covariance matrix. For example, the
covariance of the attributes (see Fig. 2, top-right panel) shows that cluster F1, containing {flippers,
strainteeth, swims}, is similar to cluster F4, containing {bulbous, slow, inactive}, but is very dissimilar
to F2, containing {hooves, vegetation, grazer}.
Figure 3 displays the learned representation for the Kinship dataset. The kinship dataset has 104
people with complex relationships between them: each person belongs to one of four sections,
which strongly constrains the other relations. For example, a person in section 1 has a father in
section 3 and a mother in section 4 (see [8, 4] for more details). After learning, each cluster was
almost completely localized in gender, section, and age. For clarity of presentation, we sort the
clusters first by their section, then by their gender, and finally by their age, as done in [8]. Figure 3
(panels (b-g)) displays some of the relations according to this clustering, and panel (a) shows the
covariance between the vectors {aL } learned for each person. The four sections are clearly visible
in the covariance structure of the distributed representations.
Figure 4 shows the inferred clusters for the medical UML dataset. For example, the model discovers
that {Amino Acid, Peptide, Protein} Affects {Biologic Function, Cell Function, Genetic Function},
1: Independence Day; Lost World: Jurassic Park The; Stargate; Twister; Air Force One; . . .
2: Star Wars: Episode IV - A New Hope; Silence of the Lambs The; Raiders of the Lost Ark; . . .
3: Shakespeare in Love; Shawshank Redemption The; Good Will Hunting; As Good As It Gets; . . .
4: Fargo; Being John Malkovich; Annie Hall; Talented Mr. Ripley The; Taxi Driver; . . .
5: E.T. the Extra-Terrestrial; Ghostbusters; Babe; Bug's Life A; Toy Story 2; . . .
6: Jurassic Park; Saving Private Ryan; Matrix The; Back to the Future; Forrest Gump; . . .
7: Dick Tracy; Space Jam; Teenage Mutant Ninja Turtles; Superman III; Last Action Hero; . . .
8: Monty Python and the Holy Grail; Twelve Monkeys; Beetlejuice; Ferris Bueller's Day Off; . . .
9: Lawnmower Man The; Event Horizon; Howard the Duck; Beach The; Rocky III; Bird on a Wire; . . .
10: Terminator 2: Judgment Day; Terminator The; Alien; Total Recall; Aliens; Jaws; Predator; . . .
11: Groundhog Day; Who Framed Roger Rabbit?; Usual Suspects The; Airplane!; Election; . . .
12: Back to the Future Part III; Honey I Shrunk the Kids; Crocodile Dundee; Rocketeer The; . . .
13: Sixth Sense The; Braveheart; Princess Bride The; Batman; Willy Wonka and the Chocolate Factory; . . .
14: Men in Black; Galaxy Quest; Clueless; Chicken Run; Mask The; Pleasantville; Mars Attacks!; . . .
15: Austin Powers: The Spy Who Shagged Me; There's Something About Mary; Austin Powers: . . .
16: Breakfast Club The; American Pie; Blues Brothers The; Animal House; Rocky; Blazing Saddles; . . .
17: American Beauty; Pulp Fiction; GoodFellas; Fight Club; South Park: Bigger Longer and Uncut; . . .
18: Star Wars: Episode V - The Empire Strikes Back; Star Wars: Episode VI - Return of the Jedi; . . .
19: Edward Scissorhands; Blair Witch Project The; Nightmare Before Christmas The; James and the Giant Peach; . . .
20: Mighty Peking Man

Figure 5: Results on the Movielens dataset. Left: The covariance between the movie vectors. Right: The
inferred clusters.
1: feel good; make money; make music; sweat; earn money; check your mind; pass time;
2: weasel; Apple trees; Ferrets; heifer; beaver; ficus; anemone; blowfish; koala; triangle;
3: boredom; anger; cry; buy ticket; laughter; fatigue; joy; panic; turn on tv; patience;
4: enjoy; danger; hurt; bad; competition; cold; recreate; bored; health; excited;
5: car; book; home; build; store; school; table; office; music; desk; cabinet; pleasure;
6: library; New York; shelf; cupboard; living room; pocket; a countryside; utah; basement;
7: city; bathroom; kitchen; restaurant; bed; park; refrigerate; closet; street; bedroom;
8: think; sleep; sit; play games; examine; listen music; read books; buy; wait; play sport;
9: Housework; attend class; go jogging; chat with friends; visit museums; ride bikes;
10: fox; small dogs; wiener dog; bald eagle; crab; boy; bee; monkey; shark; sloth; marmot;
11: fun; relax; entertain; learn; eat; exercise; sex; food; work; talk; play; party; travel;
12: state; a large city; act; big city; Europe; maryland; colour; corner; need; pennsylvania;
13: play music; go; look; drink water; cut; plan; rope; fair; chew; wear; body part; fail;
14: green; lawyer; recycle; globe; Rat; sharp points; silver; empty; Bob Dylan; dead fish;
15: potato; comfort; knowledge; move; inform; burn; men; vegetate; fear; accident; murder;
16: garbage; thought; orange; handle; penis; diamond; wing; queen; nose; sidewalk; pad;
17: sand; bacteria; robot; hall; basketball court; support; Milky Way; chef; sheet of paper;
18: dessert; pub; extinguish fire; fuel; symbol; cleanliness; lock the door; shelter; sphere;

Figure 6: Results on the Conceptnet dataset. Left: The covariance of the learned {a_L} vectors for each object.
Right: The inferred clusters.
which is also similar, according to the covariance, to {Cell Dysfunction, Disease, Mental Dysfunction}. Qualitatively, the clustering appears to be on par with that of the IRM on all the datasets, but
the BCTF model is able to predict held-out relations much better.
Figures 5 and 6 display the learned clusters for the Movielens and the Conceptnet datasets. For the
Movielens dataset, we show the most frequently-rated movies in each cluster where the clusters are
sorted by size. We also show the covariance between the movie vectors which are sorted by the
clusters, where we display only the 100 most frequently-rated movies per cluster. The covariance
matrix is aligned with the table on the right, making it easy to see how the clusters relate to each
other. For example, according to the covariance structure, clusters 7 and 9, containing Hollywood
action/adventure movies are similar to each other but are dissimilar to cluster 8, which consists of
comedy/horror movies.
For the Conceptnet dataset, Fig. 6 displays the 100 most frequent objects per category. From the covariance matrix, we can infer that clusters 8, 9, and 11, containing concepts associated with humans
taking actions, are very similar to each other, and are very dissimilar to cluster 10, which contains
animals. Observe that some clusters (e.g., clusters 2-6) are not crisp, which is reflected in the smaller
covariances between vectors in each of these clusters.
4 Discussions and Conclusions
We introduced a new method for modelling relational data which is able to both discover meaningful
structure and generalize well. In particular, our results illustrate the predictive power of distributed
representations when applied to modelling relational data, since even simple tensor factorization
models can sometimes outperform the more complex models. Indeed, for the kinship and the UML
datasets, the performance of the MAP-based tensor factorization was as good as the performance
of the BCTF model, which is due to the density of these datasets: the number of observations was
much larger than the number of parameters. On the other hand, for large sparse datasets, the BCTF
model significantly outperformed its MAP counterpart, and in particular, it noticeably outperformed
BTF on the Conceptnet dataset.
A surprising aspect of the Bayesian model is the ease with which it worked after automatic hyperparameter selection was implemented. Furthermore, the model performs well even when the
initial MAP estimate is very poor, as was the case for the 40-dimensional models on the Conceptnet
dataset. This is particularly important for large sparse datasets, since finding a good MAP estimate
requires careful cross-validation to select the regularization hyperparameters. Careful hyperparameter selection can be very labour-intensive because it requires careful training of a large number of
models.
Acknowledgments
The authors acknowledge the financial support from NSERC, Shell, NTT Communication Sciences
Laboratory, AFOSR FA9550-07-1-0075, and AFOSR MURI.
References
[1] Edoardo Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. In NIPS, pages 33-40. MIT Press, 2008.
[2] P.J. Carrington, J. Scott, and S. Wasserman. Models and methods in social network analysis. Cambridge University Press, 2005.
[3] W. Chu and Z. Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, 2009.
[4] W. Denham. The Detection of Patterns in Alyawarra Nonverbal Behavior. PhD thesis, Department of Anthropology, University of Washington, 1973.
[5] S. Jain and R.M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158-182, 2004.
[6] Y. Katz, N.D. Goodman, K. Kersting, C. Kemp, and J.B. Tenenbaum. Modeling semantic cognition as logical dimensionality reduction. In Proceedings of the Thirtieth Annual Meeting of the Cognitive Science Society, 2008.
[7] C. Kemp, N.D. Goodman, and J.B. Tenenbaum. Theory acquisition and the language of thought. In Proceedings of the Thirtieth Annual Meeting of the Cognitive Science Society, 2008.
[8] C. Kemp, J.B. Tenenbaum, T.L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the National Conference on Artificial Intelligence, volume 21, page 381. AAAI Press, 2006.
[9] S. Kok and P. Domingos. Statistical predicate invention. In Proceedings of the 24th International Conference on Machine Learning, pages 433-440. ACM, New York, NY, USA, 2007.
[10] H. Liu and P. Singh. ConceptNet: a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211-226, 2004.
[11] A.T. McCray. An upper-level ontology for the biomedical domain. Comparative and Functional Genomics, 4(1):80-84, 2003.
[12] R.M. Neal. Probabilistic inference using Markov chain Monte Carlo methods, 1993.
[13] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, pages 249-265, 2000.
[14] Ian Porteous, Evgeniy Bart, and Max Welling. Multi-HDP: A nonparametric Bayesian model for tensor factorization. In Dieter Fox and Carla P. Gomes, editors, AAAI, pages 1487-1490. AAAI Press, 2008.
[15] J. Riedl, J. Konstan, S. Lam, and J. Herlocker. MovieLens collaborative filtering data set, 2006.
[16] J.F. Rual, K. Venkatesan, T. Hao, T. Hirozane-Kishikawa, A. Dricot, N. Li, G.F. Berriz, F.D. Gibbons, M. Dreze, N. Ayivi-Guedehoussou, et al. Towards a proteome-scale map of the human protein-protein interaction network. Nature, 437(7062):1173-1178, 2005.
[17] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, pages 880-887. ACM, New York, NY, USA, 2008.
[18] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, 20, 2008.
[19] R. Speer, C. Havasi, and H. Lieberman. AnalogySpace: Reducing the dimensionality of common sense knowledge. In Proceedings of AAAI, 2008.
Retaining Meaning and Speed in High Dimensions
Parikshit Ram, Dongryeol Lee, Hua Ouyang and Alexander G. Gray
Computational Science and Engineering, Georgia Institute of Technology
Atlanta, GA 30332
{p.ram@,dongryel@cc.,houyang@,agray@cc.}gatech.edu
Abstract
The long-standing problem of efficient nearest-neighbor (NN) search has ubiquitous applications ranging from astrophysics to MP3 fingerprinting to bioinformatics to movie recommendations. As the dimensionality of the dataset increases, exact NN search becomes computationally prohibitive; (1+ε) distance-approximate
NN search can provide large speedups but risks losing the meaning of NN search
present in the ranks (ordering) of the distances. This paper presents a simple,
practical algorithm allowing the user to, for the first time, directly control the
true accuracy of NN search (in terms of ranks) while still achieving the large
speedups over exact NN. Experiments on high-dimensional datasets show that
our algorithm often achieves faster and more accurate results than the best-known
distance-approximate method, with much more stable behavior.
1 Introduction
In this paper, we address the problem of nearest-neighbor (NN) search in large datasets of high
dimensionality. It is used for classification (k-NN classifier [1]), categorizing a test point on the basis of the classes in its close neighborhood. Non-parametric density estimation uses NN algorithms
when the bandwidth at any point depends on the k-th NN distance (NN kernel density estimation [2]).
NN algorithms are present in and often the main cost of most non-linear dimensionality reduction
techniques (manifold learning [3, 4]) to obtain the neighborhood of every point which is then preserved during the dimension reduction. NN search has extensive applications in databases [5] and
computer vision for image search. Further applications abound in machine learning.
Tree data structures such as kd-trees are used for efficient exact NN search but do not scale better
than the naive linear search in sufficiently high dimensions. Distance-approximate NN (DANN)
search, introduced to increase the scalability of NN search, approximates the distance to the NN and
any neighbor found within that distance is considered to be "good enough". Numerous techniques
exist to achieve this form of approximation and are fairly scalable to higher dimensions under certain
assumptions.
Although the DANN search places bounds on the numerical values of the distance to NN, in NN
search, distances themselves are not essential; rather the order of the distances of the query to the
points in the dataset captures the necessary and sufficient information [6, 7]. For example, consider
the two-dimensional dataset (1, 1), (2, 2), (3, 3), (4, 4), . . . with a query at the origin. Appending
non-informative dimensions to each of the reference points produces higher dimensional datasets
of the form (1, 1, 1, 1, 1, ....), (2, 2, 1, 1, 1, ...), (3, 3, 1, 1, 1, ...), (4, 4, 1, 1, 1, ...), . . .. For a fixed distance approximation, raising the dimension increases the number of points for which the distance to
the query (i.e. the origin) satisfies the approximation condition. However, the ordering (and hence
the ranks) of those distances remains the same. The proposed framework, rank-approximate nearest-neighbor (RANN) search, approximates the NN in its rank rather than in its distance, thereby making
the approximation independent of the distance distribution and only dependent on the ordering of
the distances.
This paper is organized as follows: Section 2 describes the existing methods for exact NN and
DANN search and the challenges they face in high dimensions. Section 3 introduces the proposed
approach and provides a practical algorithm using stratified sampling with a tree data structure to
obtain a user-specified level of rank approximation in Euclidean NN search. Section 4 reports the
experiments comparing RANN with exact search and DANN. Finally, Section 5 concludes with
discussion of the road ahead.
2 Related Work
The problem of NN search is formalized as the following:
Problem. Given a dataset S ⊂ X of size N in a metric space (X, d) and a query q ∈ X, efficiently
find a point p ∈ S such that

    d(p, q) = min_{r ∈ S} d(r, q).    (1)
2.1 Exact Search
The simplest approach of linear search over S to find the NN is easy to implement, but requires
O(N) computations for a single NN query, making it unscalable for moderately large N.
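For reference, the O(N) baseline is a one-liner in NumPy (a sketch, with S an N×d array):

    import numpy as np

    def linear_search_nn(S, q):
        """Exact NN of q in S by brute force: O(N) distance computations."""
        dists = np.linalg.norm(S - q, axis=1)
        i = int(np.argmin(dists))
        return i, float(dists[i])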
Hashing the dataset into buckets is an efficient technique, but scales only to very low-dimensional
X. Hence data structures are used to answer queries efficiently. Binary spatial partitioning trees,
like kd-trees [9], ball trees [10] and metric trees [11], utilize the triangle inequality of the distance
metric d (commonly the Euclidean distance metric) to prune away parts of the dataset from the computation and answer queries in expected O(log N) computations [9]. Non-binary cover trees [12]
answer queries in theoretically bounded O(log N) time using the same property under certain mild
assumptions on the dataset.
Finding NNs for O(N) queries would then require at least O(N log N) computations using the
trees. The dual-tree algorithm [13] for NN search also builds a tree on the queries instead of going
through them linearly, hence amortizing the cost of search over the queries. This algorithm shows
orders of magnitude improvement in efficiency and is conjectured to be O(N) for answering O(N)
queries using the cover trees [12].
2.2 Nearest Neighbors in High Dimensions
The frontier of research in NN methods is high-dimensional problems, stemming from common
datasets ranging from images and documents to microarray data. But high-dimensional data poses an inherent
problem for Euclidean NN search as described in the following theorem:
Theorem 2.1. [8] Let B be a D-dimensional hypersphere with radius a. Let p and q be any two
points chosen at random in B, the distributions of p and q being independent and uniform over the
interior of B. Let w be the Euclidean distance between p and q (w ∈ [0, 2a]). Then the asymptotic
distribution of w is N(a√2, a²/2D).
This implies that in high dimensions, the Euclidean distances between uniformly distributed points
lie in a small range of continuous values. This suggests that the tree-based algorithms perform
no better than linear search, since these data structures would be unable to employ sufficiently tight
bounds in high dimensions. This turns out to be true in practice [14, 15, 16]. This prompted interest
in approximation of the NN search problem.
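Theorem 2.1 is easy to check empirically; the following sketch samples random pairs uniformly from a D-ball and reports the mean and standard deviation of their pairwise distances (the spread shrinks like a/√(2D)):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_in_ball(n, D, a=1.0):
        """n points uniform in a D-ball of radius a: random directions
        scaled by U^(1/D)."""
        x = rng.normal(size=(n, D))
        x /= np.linalg.norm(x, axis=1, keepdims=True)
        return a * x * rng.uniform(size=(n, 1)) ** (1.0 / D)

    for D in (2, 20, 200, 2000):
        w = np.linalg.norm(sample_in_ball(10000, D) - sample_in_ball(10000, D), axis=1)
        print(D, w.mean(), w.std())   # mean -> a*sqrt(2), std -> a/sqrt(2D)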
2.3 Distance-Approximate Nearest Neighbors
The problem of NN search is relaxed in the following form to make it more scalable:
Problem. Given a dataset S ⊂ X of size N in some metric space (X, d) and a query q ∈ X,
efficiently find any point p′ ∈ S such that

    d(p′, q) ≤ (1 + ε) min_{r ∈ S} d(r, q)    (2)

for a low value of ε ∈ ℝ⁺ with high probability.
This approximation can be achieved with kd-trees, ball trees, and cover trees by modifying the
search algorithm to prune more aggressively. This introduces the allowed error while providing
some speedup over the exact algorithm [12]. Another approach modifies the tree data structures to
bound error with just one root-to-leaf traversal of the tree, i.e. to eliminate backtracking. Sibling
nodes in kd-trees or ball-trees are modified to share points near their boundaries, forming spill
trees [14]. These obtain significant speedup over the exact methods. The idea of approximately
correct (satisfying Eq. 2) NN is further extended to a formulation where the (1 + ε) bound can be
exceeded with a low probability δ, thus forming the PAC-NN search algorithms [17]. They provide
1-2 orders of magnitude speedup in moderately large datasets with suitable ε and δ.
These methods are still unable to scale to high dimensions. However, they can be used in combination with the assumption that high dimensional data actually lies on a lower dimensional subspace.
There are a number of fast DANN methods that preprocess data with randomized projections to
reduce dimensionality. Hybrid spill trees [14] build spill trees on the randomly projected data to
obtain significant speedups. Locality sensitive hashing [18, 19] hashes the data into lower-dimensional buckets using hash functions which guarantee that "close" points are hashed into the same
bucket with high probability and "farther apart" points are hashed into the same bucket with low
probability. This method has significant improvements in running times over traditional methods in
high dimensional data and is shown to be highly scalable.
However, the DANN methods assume that the distances are well behaved and not concentrated in a
small range. For example, if all the pairwise distances are within the range (100.0, 101.0),
any distance approximation ε ≥ 0.01 will return an arbitrary point to a NN query. The exact tree-based algorithms failed to be efficient because many datasets encountered in practice suffered the
same concentration of pairwise distances. Using DANN in such a situation leads to the loss of the
ordering information of the pairwise distances which is essential for NN search [6]. This is too
large of a loss in accuracy for increased efficiency. In order to address this issue, we propose a
model of approximation for NN search which preserves the information present in the ordering of
the distances by controlling the error in the ordering itself irrespective of the dimensionality or the
distribution of the pairwise distances in the dataset. We also provide a scalable algorithm to obtain
this form of approximation.
3 Rank Approximation
To approximate the NN rank, we formulate and relax NN search in the following way:
Problem. Given a dataset S ⊂ X of size N in a metric space (X, d) and a query q ∈ X, let
D = {d_1, . . . , d_N} be the set of distances between the query and all the points in the dataset S,
such that d_i = d(p_i, q), p_i ∈ S, i = 1, . . . , N. Let D_(k) be the k-th order statistic of D. Then the
p ∈ S : d(p, q) = D_(1) is the NN of q in S. The rank-approximation of NN search would then be to
efficiently find a point p′ ∈ S such that

    d(p′, q) ≤ D_(1+τ)    (3)

with high probability for a given value of τ ∈ ℤ⁺.
RANN search may use any order statistic of the population D, bounded above by the (1 + τ)-th
order statistic, to answer a NN query. Sedransk et al. [20] provide a probability bound relating the
sample order statistics to the order statistics of the whole set.
Theorem 3.1. For a population of size N with N values ordered as X_(1) ≤ X_(2) ≤ · · · ≤ X_(N), let
Y_(1) ≤ Y_(2) ≤ · · · ≤ Y_(n) be an ordered sample of size n drawn from the population uniformly without
replacement. For 1 ≤ k ≤ n and 1 ≤ t ≤ N,

    P(Y_(k) ≤ X_(t)) = Σ_{β=0}^{t−k} C(t−β−1, k−1) C(N−t+β, n−k) / C(N, n),    (4)

where C(·, ·) denotes the binomial coefficient.
We may find a p′ ∈ S satisfying Eq. 3 with high probability by sampling enough points {r_1, . . . , r_n}
from S such that for some 1 ≤ k ≤ n, rank error bound τ, and a success probability α,

    P(d(p′, q) = Y_(k) ≤ D_(1+τ)) ≥ α.    (5)

Sample order statistic k = 1 minimizes the required number of samples; hence we substitute the
values of k = 1 and t = 1 + τ in Eq. 4, obtaining the following expression, which can be computed
in O(τ) time:
    P(Y_(1) ≤ D_(1+τ)) = Σ_{β=0}^{τ} C(N−τ+β−1, n−1) / C(N, n).    (6)
The required sample size n for a particular error τ with success probability α is computed using
binary search over the range (1 + τ, N]. This makes RANN search O(n) (since now we only need
to compute the first order statistic of a sample of size n), giving an O(N/n) speedup.
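A sketch of this computation, using Eq. 6 directly and exploiting the fact that the success probability is monotone increasing in n:

    from math import comb

    def success_prob(n, N, tau):
        """P(Y_(1) <= D_(1+tau)) from Eq. 6: the best of n uniform samples
        (without replacement) falls within rank 1 + tau out of N."""
        return sum(comb(N - tau + b - 1, n - 1) for b in range(tau + 1)) / comb(N, n)

    def required_sample_size(N, tau, alpha):
        """Smallest n with success_prob(n, N, tau) >= alpha, by binary search."""
        lo, hi = 1, N
        while lo < hi:
            mid = (lo + hi) // 2
            if success_prob(mid, N, tau) >= alpha:
                hi = mid
            else:
                lo = mid + 1
        return lo

For instance, required_sample_size(10**6, 10**4, 0.95) gives the sample size for a million-point dataset at a rank error of 1% with 95% confidence.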
3.1 Stratified Sampling with a Tree
For a required sample size of n, we randomly sample n points from S and compute the RANN for a
query q by going through the sampled set linearly. But for a tree built on S, parts of the tree would
be pruned away for the query q during the tree traversal. Hence we can ignore the random samples
from the pruned part of the tree, saving us some more computation.
Hence let S be in the form of a binary tree (say a kd-tree) rooted at R_root. The root node has N
points. Let the left and right children have N_l and N_r points respectively. For a random query q ∈ X,
the population D is the set of distances of q to all the N points in R_root. The tree stratifies the
population D into D_l = {d_{l1}, . . . , d_{lN_l}} and D_r = {d_{r1}, . . . , d_{rN_r}}, where D_l and D_r are the
sets of distances of q to the N_l and N_r points respectively in the left and right children of the root
node R_root. The following theorem provides a way to decide how much to sample from a particular
node, subsequently providing a lower bound on the number of samples required from the unpruned
part of the tree without violating Eq. 5.
Theorem 3.2. Let n_l and n_r be the number of random samples from the strata D_l and D_r respectively in a stratified sampling of the population D of size N = N_l + N_r. Let n samples be
required for Eq. 5 to hold in the population D for a given value of α. Then Eq. 5 holds for D with the
same value of α with the random samples of sizes n_l and n_r from the random strata D_l and D_r of
D respectively if n_l + n_r = n and n_l : n_r = N_l : N_r.
Proof. Eq. 5 simply requires n uniformly sampled points, i.e. for each distance in D to have
probability n/N of inclusion. For n_l + n_r = n and n_l : n_r = N_l : N_r, we have n_l = (n/N)·N_l
and similarly n_r = (n/N)·N_r, and thus samples in both D_l and D_r are included at the proper rate.
Since the ratio of the sample size to the population size is a constant β = n/N, Theorem 3.2 is
generalizable to any level of the tree.
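In practice this means each node is assigned samples at the global rate β; a one-line helper (our naming), where the ceiling keeps the guarantee of Eq. 5 conservative:

    from math import ceil

    def node_sample_count(beta, node_size):
        """Samples to draw from a node of the given size at rate beta = n / N."""
        return ceil(beta * node_size)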
3.2 The Algorithm
The proposed algorithm introduces the intended approximation in the unpruned portion of the kd-tree, since the pruned part does not add to the computation in the exact tree-based algorithms. The
algorithm starts at the root of the tree. While searching for the NN of a query q in a tree, most of
the computation in the traversal involves computing the distance of the query q to a tree node
R (DISTTONODE(q, R)). If the current upper bound on the NN distance (UB(q)) for the query q is
greater than DISTTONODE(q, R), the node is traversed and UB(q) is updated. Otherwise node R is
pruned. The computation of distances of q to points in the dataset S occurs only when q reaches
a leaf node it cannot prune. The NN candidate in that leaf is computed using linear search
(the COMPUTEBRUTENN subroutine in Fig. 2). The traversal of the exact algorithm in the tree is illustrated in Fig. 1.
To approximate the computation by sampling, traversal down the tree is stopped at a node which can
be summarized with a small number of samples (below a certain threshold MAXSAMPLES). This is
illustrated in Fig. 1. The value of MAXSAMPLES giving maximum speedup can be obtained by cross-validation. If a node is summarizable within the desired error bounds (decided by the CANAPPROXIMATE subroutine in Fig. 2), the required number of points is sampled from such a node and the nearest
neighbor candidate is computed from among them using linear search (the COMPUTEAPPROXNN subroutine of Fig. 2).
Single Tree. The search algorithm is presented in Fig. 2. The dataset S is stored as a binary tree
rooted at R_root. The algorithm starts as STRANKAPPROXNN(q, S, τ, α). During the search, if a
leaf node is reached (since the tree is rarely balanced), the exact NN candidate is computed. In case
a non-leaf node cannot be approximated, the child node closer to the query is always traversed first.
The following theorem proves the correctness of the algorithm.
Theorem 3.3. For a query q and specified values of τ and α, STRANKAPPROXNN(q, S, τ, α)
computes a neighbor in S within (1 + τ) rank with probability at least α.
Figure 1: The traversal paths of the exact and the rank-approximate algorithms in a kd-tree.
Proof. By Eq. 6, a query requires at least n samples from a dataset of size N to compute a neighbor
within (1 + τ) rank with probability α. Let β = (n/N). Let a node R contain |R| points. In the
algorithm, sampling occurs when a base case of the recursion is reached. There are three base cases:
• Case 1 - Exact Pruning (if UB(q) ≤ DISTTONODE(q, R)): The number of points required
to be sampled from the node is at least ⌈β·|R|⌉. However, since this node is pruned, we
ignore these points. Hence nothing is done in the algorithm.
• Case 2 - Exact Computation (COMPUTEBRUTENN(q, R)): In this subroutine, linear search
is used to find the NN candidate. Hence the number of points actually sampled is |R| ≥
⌈β·|R|⌉.
• Case 3 - Approximate Computation (COMPUTEAPPROXNN(q, R, β)): In this subroutine,
exactly ⌈β·|R|⌉ samples are made and linear search is performed over them.
Let the total number of points effectively sampled from S be n′. From the three base cases of the
algorithm, it is confirmed that n′ ≥ ⌈β·N⌉ = n. Hence the algorithm computes a NN within (1 + τ)
rank with probability at least α.
Dual Tree. The single tree algorithm in Fig. 2 can be extended to the dual tree algorithm in the case
of O(N) queries. The dual tree RANN algorithm (DTRANKAPPROXNN(T, S, τ, α)) is given in
Fig. 2. The only difference is that for every query q ∈ T, the minimum required amount of sampling
is done, and the random sampling is done separately for each of the queries. Even though the queries
do not share samples from the reference set, when a query node of the query tree prunes a reference
node, that reference node is pruned for all the queries in that query node simultaneously. This
work-sharing is a key feature of all dual-tree algorithms [13].
4 Experiments and Results
A meaningful value for the rank error τ should be relative to the size of the reference dataset N.
Hence for the experiments, the (1 + τ)-RANN is modified to (1 + ⌈ε·N⌉)-RANN where ε ∈ (0, 1].
The Euclidean metric is used in all the experiments. Although the value of MAXSAMPLES
for maximum speedup can be obtained by cross-validation, for practical purposes, any low value (≈
20-30) suffices well, and this is what is used in the experiments.
4.1 Comparisons with Exact Search
The speedups of the exact dual-tree NN algorithm and the approximate tree-based algorithm over
the linear search algorithm are computed and compared. Different levels of approximation ranging
from 0.001% to 10% are used to show how the speedup increases with increase in approximation.
[The two-column pseudocode listing for Figure 2 did not survive extraction and is omitted; it comprised the routines STRANKAPPROXNN, DTRANKAPPROXNN, STRANN, DTRANN, COMPUTESAMPLESIZE, COMPUTEBRUTENN, COMPUTEAPPROXNN, and CANAPPROXIMATE.]
Figure 2: Single tree (STRANKAPPROXNN) and dual tree (DTRANKAPPROXNN) algorithms and subroutines for RANN search for a query q (or a query set Q) in a dataset T with rank approximation τ and success probability α. R_c and R_f are the closer and farther child, respectively, of a reference node R from the query q (or a query node Q).
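Since the pseudocode did not survive extraction, the following Python sketch reconstructs the control flow of the single-tree recursion from the three base cases in the proof above. The ball-tree construction, the split rule, and the variable names are our own simplifications for illustration, not the authors' exact pseudocode; β would come from a bound such as the compute_sample_size sketch above, divided by N.

```python
import numpy as np

MAX_SAMPLES = 30  # cap on per-node samples, matching the experimental setting

class Node:
    """Minimal ball-tree node: points, a center/radius bound, two children."""
    def __init__(self, pts, leaf_size=25):
        self.pts = pts
        self.center = pts.mean(axis=0)
        self.radius = np.linalg.norm(pts - self.center, axis=1).max()
        if len(pts) > leaf_size:
            proj = pts @ (pts[0] - pts[-1])        # crude split direction
            order = np.argsort(proj)
            mid = len(pts) // 2
            self.left = Node(pts[order[:mid]], leaf_size)
            self.right = Node(pts[order[mid:]], leaf_size)
        else:
            self.left = self.right = None

    def min_dist(self, q):
        """Lower bound on the distance from q to any point in this node."""
        return max(0.0, np.linalg.norm(q - self.center) - self.radius)

def strann(q, node, beta, best, rng):
    """Single-tree RANN recursion; best = [ub] holds the current NN bound."""
    if node.min_dist(q) > best[0]:                 # Case 1: exact pruning
        return
    need = int(np.ceil(beta * len(node.pts)))      # samples required here
    if node.left is None:                          # Case 2: brute-force leaf
        best[0] = min(best[0], np.linalg.norm(node.pts - q, axis=1).min())
    elif need <= MAX_SAMPLES:                      # Case 3: sample + search
        idx = rng.choice(len(node.pts), size=need, replace=False)
        best[0] = min(best[0], np.linalg.norm(node.pts[idx] - q, axis=1).min())
    else:                                          # recurse, closer child first
        for child in sorted((node.left, node.right),
                            key=lambda c: c.min_dist(q)):
            strann(q, child, beta, best, rng)

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 20))
root, best = Node(X), [np.inf]
strann(rng.standard_normal(20), root, beta=0.01, best=best, rng=rng)
print("approximate NN distance:", best[0])
```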
Different datasets drawn from the UCI repository (Bio dataset 300k×74, Corel dataset 40k×32, Covertype dataset 600k×55, Phy dataset 150k×78) [21], the MNIST handwritten digit recognition dataset (60k×784) [22] and the Isomap "images" dataset (700×4096) [3] are used. The final dataset "urand" is a synthetic dataset of points uniform randomly sampled from a unit ball (1m×20). This dataset is used to show that even in the absence of a lower-dimensional subspace, RANN is able to get significant speedups over exact methods for relatively low errors. For each dataset, the NN of every point in the dataset is found in the exact case, and the (1 + ⌈ε · N⌉)-rank-approximate NN of every point in the dataset is found in the approximate case. These results are summarized in Fig. 3.
The results show that even for low values of ε (the high-accuracy setting), the RANN algorithm is significantly more scalable than the exact algorithms for all the datasets. Note that for some of the datasets, the low values of approximation used in the experiments are equivalent to zero rank error (which is the exact case), and hence those runs are exactly as efficient as the exact algorithm.
Figure 3: Speedups (log scale on the Y-axis) over the linear search algorithm while finding the NN in the exact case (ε = 0%) or the (1 + ⌈ε · N⌉)-RANN in the approximate case with ε = 0.001%, 0.01%, 0.1%, 1.0%, 10.0% and a fixed success probability α = 0.95 for every point in the dataset; the X-axis spans the datasets bio, corel, covtype, images, mnist, phy, and urand. The first (white) bar for each dataset is the speedup of the exact dual-tree NN algorithm, and the subsequent (dark) bars are the speedups of the approximate algorithm with increasing approximation.
4.2 Comparison with Distance-Approximate Search
In the case of the different forms of approximation, the average rank errors and the maximum rank
errors achieved in comparable retrieval times are considered for comparison. The rank errors are
compared since any method with relatively lower rank error will obviously have relatively lower
distance error. For DANN, Locality Sensitive Hashing (LSH) [19, 18] is used.
Subsets of two datasets known to have a lower-dimensional embedding are used for this experiment: Layout Histogram (10k×30) [21] and MNIST (10k×784) [22]. The approximate NN of every point in the dataset is found with different levels of approximation for both algorithms. The average rank error and maximum rank error are computed for each of the approximation levels.
For our algorithm, we increased the rank error and observed a corresponding decrease in the retrieval
time. LSH has three parameters. To obtain the best retrieval times with low rank error, we fixed one
parameter and changed the other two to obtain a decrease in runtime and did this for many values of
the first parameter. The results are summarized in Fig. 4 and Fig. 5.
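For concreteness, the rank-error metric used in this comparison can be computed as below. This is a straightforward restatement of the definition (the position of the returned candidate in the true distance ordering), not code from the paper.

```python
import numpy as np

def rank_error(X, q, returned_idx):
    """Rank error of a returned candidate: 0 means the exact NN was
    returned, k means the candidate is the (k+1)-th true neighbor."""
    d = np.linalg.norm(X - q, axis=1)
    order = np.argsort(d)                 # indices sorted by distance to q
    return int(np.where(order == returned_idx)[0][0])

def rank_error_stats(X, queries, candidates):
    """Average and maximum rank error over (query, candidate) pairs."""
    errs = [rank_error(X, q, c) for q, c in zip(queries, candidates)]
    return float(np.mean(errs)), int(np.max(errs))
```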
The results show that even in the presence of a lower-dimensional embedding of the data, the rank
errors for a given retrieval time are comparable in both the approximate algorithms. The advantage
of the rank-approximate algorithm is that the rank error can be directly controlled, whereas in LSH,
tweaking in the cross-product of its three parameters is typically required to obtain the best ranks for
a particular retrieval time. Another advantage of the tree-based algorithm for RANN is the fact that
even though the maximum error is bounded only with a probability, the actual maximum error is not
much worse than the allowed maximum rank error since a tree is used. In the case of LSH, at times,
the actual maximum rank error is extremely large, corresponding to LSH returning points which
are very far from being the NN. This makes the proposed algorithm for RANN much more stable
than LSH for Euclidean NN search. Of course, the reported times highly depend on implementation details and optimization tricks, and should be considered carefully.
Figure 4: Query times (in sec.) vs. the Average Rank Error for (a) Layout Histogram and (b) MNIST, comparing RANN and LSH on random samples of size 10000.
Figure 5: Query times (in sec.) vs. the Maximum Rank Error for (a) Layout Histogram and (b) MNIST, comparing RANN and LSH on random samples of size 10000.
5 Conclusion
We have proposed a new form of approximate algorithm for unscalable NN search instances by controlling the true error of NN search (i.e. the ranks). This allows approximate NN search to retain
meaning in high dimensional datasets even in the absence of a lower-dimensional embedding. The
proposed algorithm for approximate Euclidean NN has been shown to scale much better than the
exact algorithm even for low levels of approximation even when the true dimension of the data is
relatively high. When compared with the popular DANN method (LSH), it is shown to be comparably efficient in terms of the average rank error even in the presence of a lower dimensional subspace
of the data (a fact which is crucial for the performance of the distance-approximate method). Moreover, the use of spatial-partitioning tree in the algorithm provides stability to the method by clamping
the actual maximum error to be within a reasonable rank threshold unlike the distance-approximate
method.
However, note that the proposed algorithm still benefits from the ability of the underlying tree data
structure to bound distances. Therefore, our method is still not necessarily immune to the curse of
dimensionality. Regardless, RANN provides a new paradigm for NN search which is comparably
efficient to the existing methods of distance-approximation and allows the user to directly control
the true accuracy which is present in ordering of the neighbors.
References
[1] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
[2] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall/CRC, 1986.
[3] J. B. Tenenbaum, V. Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290(5500):2319-2323, 2000.
[4] S. T. Roweis and L. K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290(5500):2323-2326, December 2000.
[5] A. N. Papadopoulos and Y. Manolopoulos. Nearest Neighbor Search: A Database Perspective. Springer, 2005.
[6] N. Alon, M. Badoiu, E. D. Demaine, M. Farach-Colton, and M. T. Hajiaghayi. Ordinal Embeddings of Minimum Relaxation: General Properties, Trees, and Ultrametrics. 2008.
[7] K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When Is "Nearest Neighbor" Meaningful? Lecture Notes in Computer Science, pages 217-235, 1999.
[8] J. M. Hammersley. The Distribution of Distance in a Hypersphere. Annals of Mathematical Statistics, 21:447-452, 1950.
[9] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Trans. Math. Softw., 3(3):209-226, September 1977.
[10] S. M. Omohundro. Five Balltree Construction Algorithms. Technical Report TR-89-063, International Computer Science Institute, December 1989.
[11] F. P. Preparata and M. I. Shamos. Computational Geometry: An Introduction. Springer, 1985.
[12] A. Beygelzimer, S. Kakade, and J. C. Langford. Cover Trees for Nearest Neighbor. Proceedings of the 23rd International Conference on Machine Learning, pages 97-104, 2006.
[13] A. G. Gray and A. W. Moore. "N-Body" Problems in Statistical Learning. In NIPS, volume 4, pages 521-527, 2000.
[14] T. Liu, A. W. Moore, A. G. Gray, and K. Yang. An Investigation of Practical Approximate Nearest Neighbor Algorithms. In Advances in Neural Information Processing Systems 17, pages 825-832, 2005.
[15] L. Cayton. Fast Nearest Neighbor Retrieval for Bregman Divergences. Proceedings of the 25th International Conference on Machine Learning, pages 112-119, 2008.
[16] T. Liu, A. W. Moore, and A. G. Gray. Efficient Exact k-NN and Nonparametric Classification in High Dimensions. 2004.
[17] P. Ciaccia and M. Patella. PAC Nearest Neighbor Queries: Approximate and Controlled Search in High-dimensional and Metric Spaces. Data Engineering, 2000. Proceedings. 16th International Conference on, pages 244-255, 2000.
[18] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. pages 518-529, 1999.
[19] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In STOC, pages 604-613, 1998.
[20] J. Sedransk and J. Meyer. Confidence Intervals for the Quantiles of a Finite Population: Simple Random and Stratified Simple Random Sampling. Journal of the Royal Statistical Society, pages 239-252, 1978.
[21] C. L. Blake and C. J. Merz. UCI Machine Learning Repository. http://archive.ics.uci.edu/ml/, 1998.
[22] Y. LeCun. MNIST dataset, 2000. http://yann.lecun.com/exdb/mnist/.
Posterior vs. Parameter Sparsity in Latent Variable Models
João V. Graça
L2F INESC-ID
Lisboa, Portugal
Kuzman Ganchev
Ben Taskar
University of Pennsylvania
Philadelphia, PA, USA
Fernando Pereira
Google Research
Mountain View, CA, USA
Abstract
We address the problem of learning structured unsupervised models with moment
sparsity typical in many natural language induction tasks. For example, in unsupervised part-of-speech (POS) induction using hidden Markov models, we introduce a bias for words to be labeled by a small number of tags. In order to express
this bias of posterior sparsity as opposed to parametric sparsity, we extend the posterior regularization framework [7]. We evaluate our methods on three languages (English, Bulgarian and Portuguese), showing consistent and significant accuracy improvement over EM-trained HMMs, and HMMs with sparsity-inducing
Dirichlet priors trained by variational EM. We increase accuracy with respect
to EM by 2.3%-6.5% in a purely unsupervised setting as well as in a weakly-supervised setting where the closed-class words are provided. Finally, we show
improvements using our method when using the induced clusters as features of a
discriminative model in a semi-supervised setting.
1 Introduction
Latent variable generative models are widely used in inducing meaningful representations from unlabeled data. Maximum likelihood estimation is a standard method for fitting such models, but in
most cases we are not so interested in the likelihood of the data as in the distribution of the latent
variables, which we hope will capture regularities of interest without direct supervision. In this paper we explore the problem of biasing such unsupervised models to favor a novel kind of sparsity
that expresses our expectations about the role of the latent variables. Many important language processing tasks (tagging, parsing, named-entity classification) involve classifying events into a large
number of possible classes, where each event type can have just a few classes. We extend the posterior regularization framework [7] to achieve that kind of posterior sparsity on the unlabeled training
data. In unsupervised part-of-speech (POS) tagging, a well studied yet challenging problem, the new
method consistently and significantly improves performance over a non-sparse baseline and over a
variational Bayes baseline with a Dirichlet prior used to encourage sparsity [9, 4].
A common approach to unsupervised POS tagging is to train a hidden Markov model where the
hidden states are the possible tags and the observations are word sequences. The model is typically trained with the expectation-maximization (EM) algorithm to maximize the likelihood of the
observed sentences. Unfortunately, while supervised training of HMMs achieves relatively high
accuracy, the unsupervised models tend to perform poorly. One well-known reason for this is that
EM tends to allow each word to be generated by most hidden states some of the time. In reality,
we would like most words to have a small number of possible tags. To solve this problem, several
studies [14, 17, 6] investigated weakly-supervised approaches where the model is given the list of
possible tags for each word. The task is then to disambiguate among the possible tags for each word
type. Recent work has made use of smaller dictionaries, trying to model the set of possible tags for
each word [18, 5], or use a small number of "prototypes" for each tag [8]. All these approaches
initialize the model in a way that encourages sparsity by zeroing out impossible tags. Although this
has worked extremely well for the weakly-supervised case, we are interested in the setting where
we have only high-level information about the model: we know that the distribution over the latent variables (such as POS tags) should be sparse. This has been explored in a Bayesian setting,
where a prior is used to encourage sparsity in the model parameters [4, 9, 6]. This sparse prior,
which prefers each tag to have few word types associated with it, indirectly achieves sparsity over
the posteriors, meaning each word type should have few possible tags. Our method differs in that
it encourages sparsity in the model posteriors, more directly encoding the desiderata. Additionally
our method can be applied to log-linear models where sparsity in the parameters leads to dense
posteriors. Sparsity at this level has already been suggested before under a very different model [18].
We use a first-order HMM as our model to compare the different training conditions: classical
expectation-maximization (EM) training without modifications to encourage sparsity, the sparse
prior used by [9] with variational Bayes EM (VEM), and our sparse posterior regularization (Sparse).
We evaluate these methods on three languages, English, Bulgarian and Portuguese. We find that our
method consistently improves performance with respect to both baselines in a completely unsupervised scenario, as well as in a weakly-supervised scenario where the tags of closed-class words are
supplied. Interestingly, while VEM achieves a state size distribution (number of words assigned
to hidden states) that is closer to the empirical tag distribution than EM and Sparse, its state-token
distribution is a worse match to the empirical tag-token distribution than the competing methods.
Finally, we show that states assigned by the model are useful as features for a supervised POS
tagger.
2 Posterior Regularization
In order to express the desired preference for posterior sparsity, we use the posterior regularization
(PR) framework [7], which incorporates side information into parameter estimation in the form of
linear constraints on posterior expectations. This allows tractable learning and inference even when
the constraints would be intractable to encode directly in the model, for instance to enforce that
each hidden state in an HMM is used only once in expectation. Moreover, PR can represent prior
knowledge that cannot be easily expressed as priors over model parameters, like the constraint used
in this paper. PR can be seen as a penalty on the standard marginal likelihood objective, which we
define first:
Marginal Likelihood: L(θ) = Ê[−log p_θ(x)] = Ê[−log Σ_z p_θ(z, x)]
over the parameters θ, where Ê is the empirical expectation over the unlabeled sample x, and z are the hidden states. This standard objective may be regularized with a parameter prior −log p(θ) = C(θ), for example a Dirichlet.
Posterior information in PR is specified with sets Q_x of distributions over the hidden variables z defined by linear constraints on feature expectations:
Q_x = {q(z | x) : E_q[f(x, z)] ≤ b}.   (1)
The marginal log-likelihood of a model is then penalized with the KL-divergence between the desired distributions Q_x and the model, KL(Q_x ‖ p_θ(z|x)) = min_{q ∈ Q_x} KL(q(z) ‖ p_θ(z|x)). The revised learning objective minimizes:
PR Objective: L(θ) + C(θ) + Ê[KL(Q_x ‖ p_θ(z|x))].   (2)
Since the objective above is not convex in θ, PR estimation relies on an EM-like lower-bounding scheme for model fitting, where the E step computes a distribution q(z|x) over the latent variables and the M step minimizes negative marginal likelihood under q(z|x) plus parameter regularization:
M-Step: min_θ Ê[E_q[−log p_θ(x, z)]] + C(θ)   (3)
In a standard E step, q is the posterior over the model hidden variables given current θ: q(z|x) = p_θ(z|x). However, in PR, q is a projection of the posteriors onto the constraint set Q_x for each example x:
arg min_q KL(q(z|x) ‖ p_θ(z|x))  s.t.  E_q[f(x, z)] ≤ b.   (4)
[Figure 1 panels omitted: bar plots of the tag posteriors (DT, JJ, VB, NN) over 15 instances of a word under p_θ, the dual parameters λ, and the projected posteriors q_ti ∝ p_θ e^{−λ_ti}.]
Figure 1: An illustration of ℓ1/ℓ∞ regularization. Left panel: initial tag distributions (columns) for 15 instances of a word. Middle panel: optimal regularization parameters λ, each row sums to σ = 20. Right panel: q concentrates the posteriors for all instances on the NN tag, reducing the ℓ1/ℓ∞ norm from just under 4 to a little over 1.
The new posteriors q(z|x) are used to compute sufficient statistics for this instance and hence to update the model's parameters in the M step. The optimization problem in Equation 4 can be solved efficiently in dual form:
arg min_{λ≥0} b⊤λ + log Σ_z p_θ(z|x) exp{−λ⊤f(x, z)}.   (5)
Given λ, the primal solution is q(z|x) = p_θ(z|x) exp{−λ⊤f(x, z)}/Z, where Z is a normalization constant. There is one dual variable per expectation constraint, which can be optimized by projected gradient descent, where the gradient for λ is b − E_q[f(x, z)]. Gradient computation involves an expectation under q(z|x) that can be computed efficiently if the features f(x, z) factor in the same way as the model p_θ(z|x) [7].
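A minimal sketch of this dual optimization for a toy model is given below; the z-space is enumerated explicitly, which is only feasible for tiny problems (real models would push the expectations through forward-backward or similar dynamic programs). The function name and step-size schedule are our own choices.

```python
import numpy as np

def pr_project(p, F, b, step=0.1, iters=300):
    """KL-project p onto {q : E_q[f] <= b} by gradient descent on the
    dual (Eq. 5). p: (Z,) probabilities over configurations; F: (Z, D)
    feature values f(x, z); b: (D,) bounds. Returns (q, lam)."""
    lam = np.zeros(F.shape[1])
    for _ in range(iters):
        logq = np.log(p) - F @ lam          # q(z) prop. to p(z) exp(-lam.f(z))
        q = np.exp(logq - logq.max())
        q /= q.sum()
        lam = np.maximum(0.0, lam - step * (b - F.T @ q))  # grad = b - E_q[f]
    return q, lam

p = np.array([0.5, 0.3, 0.2])               # model posterior over 3 configs
F = np.eye(3)                               # f_k(z) = 1[z = k]
q, lam = pr_project(p, F, b=np.array([0.2, 1.0, 1.0]))
print(q)                                    # q[0] is pushed down to ~0.2
```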
3 Relaxing Posterior Regularization
In this work, we modify PR so that instead of hard constraints on q(z | x), it allows the constraints
to be relaxed at a cost specified by a penalty. This relaxation can allow combining multiple constraints without having to explicitly ensure that the constraint set remains non-empty. Additionally,
it will be useful in dealing with the ℓ1/ℓ∞ constraints we need. If those were incorporated as hard
constraints, the dual objective would become non-differentiable, making the optimization (somewhat) more complicated. Using soft constraints, the non-differentiable portion of the dual objective
turns into simplex constraints on the dual variables, allowing us to use an efficient projected gradient
method. For soft constraints, Equation 4 is replaced by
arg min_{q,b} KL(q ‖ p) + R(b)  s.t.  E_q[f(x, z)] ≤ b   (6)
where b is the constraint vector, and R(b) penalizes overly lax constraints. For POS tagging, we will design R(b) to encourage each word type to be observed with a small number of POS tags in the projected posteriors q. The overall objective minimized can be shown to be:
Soft PR Objective: arg min_{θ,q,b} L(θ) + C(θ) + Ê[KL(q ‖ p_θ) + R(b)]  s.t.  E_q[f(x, z)] ≤ b.   (7)
3.1 ℓ1/ℓ∞ regularization
We now choose the posterior constraint regularizer R(b) to encourage each word to be associated
with only a few parts of speech. Let feature f_wti have value 1 whenever the i-th occurrence of word w has part of speech tag t. For every word w, we would like there to be only a few POS tags t such that there are occurrences i where t has nonzero probability. This can be achieved if it "costs" a lot to allow an occurrence of a word to take a tag, but once that happens, it should be "free" for
other occurrences of the word to receive that same tag. More precisely, we would like the sum (ℓ1 norm) over tags t and word types w of the maxima (ℓ∞ norm) of the expectation of taking tag t
over all occurrences of w to be small. Table 1 shows the value of the `1 /`? sparsity measure for
three different corpora, comparing fully supervised HMM and fully unsupervised HMM learned
with standard EM, with standard EM having a 3-4 times larger value of ℓ1/ℓ∞ than the supervised.
This discrepancy is what our PR objective is attempting to eliminate.
Formally, the E-step of our approach is expressed by the objective:
min_{q, c_wt} KL(q ‖ p_θ) + σ Σ_{wt} c_wt  s.t.  E_q[f_wti] ≤ c_wt   (8)
where σ is the strength of the regularization. Note that setting σ = 0 we are back to normal EM, where q is the model posterior distribution. As σ → ∞, the constraints force each occurrence of a word type to have the same posterior distribution, effectively reducing the model to a 0th-order Markov chain in the E step.
The dual of this objective has a very simple form (see supplementary material for derivation):
max_{λ≥0} −log Σ_z p_θ(z) exp(−λ · f(z))  s.t.  Σ_i λ_wti ≤ σ   (9)
where z ranges over assignments to the hidden tag variables for all of the occurrences in the training data, f(z) is the vector of f_wti feature values for assignment z, λ is the vector of dual parameters λ_wti, and the primal parameters are q(z) ∝ p_θ(z) exp(−λ · f(z)). This can be computed by projected gradient, as described by Bertsekas [3].
Figure 1 illustrates how the ℓ1/ℓ∞ norm operates on a toy example. For simplicity suppose we are only regularizing one word and our model p_θ is just a product distribution over 15 instances of the word. The left panel in Figure 1 shows the posteriors under p_θ. We would like to concentrate the posteriors on a small subset of rows. The center panel of the figure shows the λ values determined by Equation 9, and the right panel shows the projected distribution q, which concentrates most of the
posterior on the bottom row. Note that we are not requiring the posteriors to be sparse, which would
be equivalent to preferring that the distribution is peaked; rather, we want a word to concentrate its
tag posterior on a few tags across all instances of the word. Indeed, most of the instances (columns)
become less peaked than in the original posterior to allow posterior mass to be redistributed away
from the outlier tags. Since they are more numerous than the outliers, they moved less. This also
justifies only regularizing relatively frequent events in our model.
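The toy example of Figure 1 can be reproduced with a few lines of projected gradient ascent on the dual of Eq. 9; for a single word and a product distribution the objective decomposes over instances, and the only coupling is the per-tag budget Σ_i λ_ti ≤ σ. The simplex-projection routine and the random posteriors below are our own choices for illustration.

```python
import numpy as np

def project_capped_simplex(v, s):
    """Euclidean projection of v onto {x >= 0, sum(x) <= s}."""
    w = np.maximum(v, 0.0)
    if w.sum() <= s:
        return w
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - s)[0][-1]
    return np.maximum(w - (css[rho] - s) / (rho + 1.0), 0.0)

def l1linf_project(P, sigma, step=0.5, iters=1000):
    """Dual of Eq. 9 for one word: P is (instances x tags); the dual
    gradient w.r.t. lam_it is just the current posterior q_it."""
    lam = np.zeros_like(P)
    for _ in range(iters):
        Q = P * np.exp(-lam)
        Q /= Q.sum(axis=1, keepdims=True)
        lam += step * Q                     # gradient ascent
        for t in range(lam.shape[1]):       # enforce sum_i lam_it <= sigma
            lam[:, t] = project_capped_simplex(lam[:, t], sigma)
    Q = P * np.exp(-lam)
    return Q / Q.sum(axis=1, keepdims=True), lam

rng = np.random.default_rng(1)
P = rng.dirichlet([1.0, 1.0, 1.0, 8.0], size=15)   # 15 instances, NN-heavy
Q, lam = l1linf_project(P, sigma=20.0)
l1linf = lambda A: A.max(axis=0).sum()             # sum_t max_i A_it
print(l1linf(P), "->", l1linf(Q))                  # the l1/linf norm drops
```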
4 Bayesian Estimators
Recent advances in inference methods for sparsifying Bayesian estimation have been applied to
unsupervised POS tagging [4, 9, 6]. In the Bayesian setting, preference for sparsity is expressed
as a prior distribution over model structures and parameters, rather than as constraints on feature
posteriors. To compare these two approaches, in Section 5 we compare our method to a Bayesian
approach proposed by Johnson [9], which relies on a Dirichlet prior to encourage sparsity in a first-order HMM for POS tagging. The complete description of the model is:
θ_i ∼ Dir(α_i),   P(t_i | t_{i−1} = tag) ∼ Multi(θ_i)
φ_i ∼ Dir(β_i),   P(w_i | t_i = tag) ∼ Multi(φ_i)
Here, α_i controls sparsity over the state transition matrix and β_i controls the sparsity of state emission probabilities. Johnson [9] notes that α_i does not influence the model that much. In contrast, as β_i approaches zero, it encourages the model to have highly skewed P(w_i | t_i = tag) distributions,
low probability. This is not exactly the constraint we would like to enforce: there are some POS
tags that generate many different words with relatively high probability (for example, nouns and
verbs), while each word is associated with a small number of tags. This difference is one possible
explanation for the relatively worse performance of this prior compared to our method.
Johnson [9] describes two approaches to learn the model parameters: a component-wise Gibbs
sampling scheme (GS) and a variational Bayes (VB) approximation using a mean field. Since Johnson [9] found VB worked much better than GS, we use VB in our experiments. Additionally, VB is
particularly simple to implement, consisting of only a small modification to the M-step of the EM algorithm. The Dirichlet prior hyper-parameters are added to the expected counts and passed through a squashing function (exponential of the Digamma function) before being normalized. We refer the reader to the original paper for more detail (see also http://www.cog.brown.edu/~mj/Publications.htm for a bug fix in the Digamma function implementation).
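For reference, that M-step modification is tiny; the sketch below (assuming SciPy for the digamma function) shows the squashed normalization for one multinomial's expected counts.

```python
import numpy as np
from scipy.special import digamma

def vb_mstep(expected_counts, prior):
    """Variational-Bayes M-step for a Dirichlet-multinomial: add the prior
    to the expected counts and squash them with exp(digamma(.)) before
    normalizing. A prior < 1 drives small counts toward zero."""
    c = expected_counts + prior
    return np.exp(digamma(c)) / np.exp(digamma(c.sum()))

counts = np.array([10.0, 1.0, 0.1])
print(vb_mstep(counts, 1e-3))   # the 0.1 count is squashed much harder
print(counts / counts.sum())    # compare: the plain EM M-step
```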
5 Experiments
We now compare first-order HMMs trained using the three methods described earlier: the classical EM algorithm (EM), our ℓ1/ℓ∞ posterior regularization based method (Sparse), and the model presented in Section 4 (VEM). Models were trained and tested on all available data of three corpora: the Wall Street Journal portion of the Penn treebank [13] using the reduced tag set of 17 tags [17] (PTB17); the Bosque subset of the Portuguese Floresta Sintá(c)tica Treebank [1] used for the CoNLL X shared task on dependency parsing (PT-CoNLL); and the Bulgarian BulTreeBank [16] (BulTree) with the 12 coarse tags. We also report results on the full Penn treebank tag set in the supplementary materials. All words that occurred only once were replaced by the token "unk". To measure model sparsity, we compute the average ℓ1/ℓ∞ norm over words occurring more than 10 times (denoted "L1LMax" in our figures). Table 1 gives statistics for each corpus as well as the sparsity for a first-order HMM trained using the labeled data and using standard EM with unlabeled data.
            Types   Tokens   Unk    Tags   Sup. ℓ1/ℓ∞   EM ℓ1/ℓ∞
PT-CoNLL    11293   206678   8.5%   22     1.14         4.57
BulTree     12177   174160   10%    12     1.04         3.51
PTB17       23768   950028   2%     17     1.23         3.97
Table 1: Corpus statistics. All words with only one occurrence were replaced by the "unk" token. The Unk column shows the percentage of tokens replaced. Sup. ℓ1/ℓ∞ is the value of the sparsity measure for a fully supervised HMM trained on all available data and EM ℓ1/ℓ∞ is the value of the sparsity measure for a fully unsupervised HMM trained using standard EM on all available data.
Following Gao and Johnson [4], the parameters were initialized with a "pseudo E step" as follows: we filled the expected count matrices with numbers 1 + X · U(0, 1), where U(0, 1) is a random number between 0 and 1 and X is a parameter. These matrices are then fed to the M step; the resulting "random" transition and emission probabilities are used for the first real E step. For VEM, X was set to 0.0001 (almost uniform) since this showed a significant improvement in performance. On the other hand, EM showed less sensitivity to initialization, and we used X = 1 which resulted in the best results. The models were trained for 200 iterations as longer runs did not significantly change the results (models converge before 100 iterations). For VEM we tested 4 different prior combinations (all combinations of 10^-1 and 10^-3 for emission prior and transition prior), based on Johnson's results [9]. As previously noted, changing the transition priors does not affect the
              PT-CoNLL               BG                      PTB17
Estimator     1-Many      1-1        1-Many      1-1         1-Many      1-1
EM            64.0(1.2)   40.4(3.0)  59.4(2.2)   42.0(3.0)   67.5(1.3)   46.4(2.6)
VEM(10^-1)    60.4(0.6)   51.1(2.3)  54.9(3.1)   46.4(3.0)   68.2(0.8)*  52.8(3.5)
VEM(10^-4)    63.2(1.0)*  48.1(2.2)  56.1(2.8)   43.3(1.7)*  67.3(0.8)*  49.6(4.3)
Sparse(10)    68.5(1.3)   43.3(2.2)  65.1(1.0)   48.0(3.3)   69.5(1.6)   50.0(3.5)
Sparse(32)    69.2(0.9)   43.2(2.9)  66.0(1.8)   48.7(2.2)   70.2(2.2)   49.5(2.0)
Sparse(100)   68.3(2.1)   44.5(2.4)  65.9(1.6)   48.9(2.8)   68.7(1.1)   47.8(1.5)*
Table 2: Average accuracy (standard deviation in parentheses) over 10 different runs (random seeds identical across models) for 200 iterations. 1-Many and 1-1 are the two hidden-state to POS mappings described in the text. All models are first-order HMMs: EM trained using expectation maximization, VEM trained using variational EM with the observation priors shown in parentheses, Sparse trained using PR with the constraint strength (σ) in parentheses. Bold in the original table indicates the best value for each column. All results except those starred are significant (p=0.005) on a paired t-test against the EM model.
[Figure 2 panels omitted: scatter plots of accuracy vs. ℓ1/ℓ∞ with points for EM, VEM 10^-1, VEM 10^-3, and Sparse 10/32/100; token-per-state rank curves for EM, VEM, Sparse 32, and the true tag distribution; and a mutual-information bar plot.]
Figure 2: Detailed visualizations of the results on the PT-CoNLL corpus. (a) 1-many accuracy vs ℓ1/ℓ∞, (b) 1-1 accuracy vs ℓ1/ℓ∞, (c) tens of thousands of tokens assigned to hidden state vs rank, (d) mutual information in bits between gold tag distribution and hidden state distribution.
results, so we only report results for different emission priors. Later work [4] considered a wider
range of values but did not identify definitively better choices. Sparse was initialized with the parameters obtained by running EM for 30 iterations, followed by 170 iterations of the new training
procedure. Predictions were obtained using posterior decoding since this consistently showed small
improvements over Viterbi decoding.
We evaluate the accuracy of the models using two established mappings between hidden states and
POS tags: 1-Many maps each hidden state to the tag with which it co-occurs the most; 1-1 [8]
greedily picks a tag for each state under the constraint of never using the same tag twice. This
results in an approximation of the optimal 1-1 mapping. If the numbers of hidden states and tags
are not the same, some hidden states will be unassigned (and hence always wrong) or some tags not
used. In all our experiments the number of hidden states is the same as the number of POS tags.
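Both mappings are simple to state in code given the co-occurrence counts between hidden states and gold tags; the greedy 1-1 routine below follows the description above (our own transcription).

```python
import numpy as np

def one_to_many(C):
    """C[s, t] = # tokens with hidden state s and gold tag t; map each
    state to the tag it co-occurs with most."""
    return C.argmax(axis=1)

def one_to_one(C):
    """Greedy 1-1 mapping: repeatedly take the largest remaining cell,
    never reusing a state or a tag; -1 marks unassigned states."""
    C = C.astype(float).copy()
    mapping = np.full(C.shape[0], -1)
    for _ in range(min(C.shape)):
        s, t = np.unravel_index(np.argmax(C), C.shape)
        mapping[s] = t
        C[s, :] = C[:, t] = -1.0
    return mapping
```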
Table 2 shows the accuracy of the different methods averaged over 15 different random parameter initializations. Comparing the methods for each of the initialization points individually, our ℓ1/ℓ∞ regularization always outperforms the EM baseline model on both metrics, and always outperforms
VEM using 1-Many mapping, while for the 1-1 mapping our method outperforms VEM roughly
half the time. The improvements are consistent for different constraint strength values.
Figure 2 shows detailed visualizations of the behavior of the different methods on the PT-Conll corpus. The results for the other corpora are qualitatively similar, and are reported in the supplemental
material. The left two plots show scatter graphs of accuracy with respect to the ℓ1/ℓ∞ value, where accuracy is measured with either the 1-many mapping (left) or 1-1 mapping (center). We see that Sparse is much better using the 1-many mapping and worse using the 1-1 mapping than VEM, even though they achieve similar ℓ1/ℓ∞. The third plot shows the number of tokens assigned to each
by the gold labels. This difference explains the improvement on the 1-1 mapping, where VEM is
assigning larger size states to the most frequent tags. However, VEM achieves this power law distribution at the expense of the mutual information with the gold labels as we see in the rightmost plot.
Of all the methods, VEM has the lowest mutual information, while Sparse has the highest.
5.1 Closed-class words
We now consider the case where some supervision has been given in the form of a list of the closed-class words for the language, along with POS tags. Example closed classes are punctuation, pronouns, possessive markers, while open classes would include nouns, verbs, and adjectives. (See the
supplemental materials for details.) We assume that we are given the POS tags of closed classes
along with the words in each closed class. In the models, we set the emission probability from a
closed-class tag to any word not in its class to zero. Also, any word appearing in a closed class is assumed to have zero probability of being generated by an open-class tag. This improves performance
significantly for all languages, but our sparse training procedure is still able to outperform EM training significantly as shown in Table 3. Note, for these experiments we do not use an unknown word,
since doing so for closed-class words would allow closed class tags to generate unknown words.
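In implementation terms this is just a mask on the emission matrix before renormalization; a minimal sketch under assumed data structures:

```python
import numpy as np

def apply_closed_class_mask(B, closed_class, num_words):
    """B[t, w] = P(word w | tag t); closed_class maps a closed tag id to
    the set of word ids in its class. Closed tags emit only their own
    words; open tags never emit a word belonging to any closed class."""
    B = B.copy()
    closed_words = sorted(set().union(*closed_class.values()))
    for t in range(B.shape[0]):
        if t in closed_class:
            mask = np.ones(num_words, dtype=bool)
            mask[list(closed_class[t])] = False
            B[t, mask] = 0.0                # closed tag: only its own words
        else:
            B[t, closed_words] = 0.0        # open tag: no closed-class words
    return B / B.sum(axis=1, keepdims=True)
```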
              PT-CoNLL               BulTree                PTB-17
Estimator     1-Many      1-1        1-Many      1-1        1-Many      1-1
EM            72.5(1.7)   52.6(4.2)  77.9(1.7)   65.4(2.8)  76.7(0.9)   61.1(1.8)
Sparse(32)    75.3(1.2)   57.5(5.0)  82.4(1.2)   69.5(1.3)  78.0(1.6)   62.2(2.0)
Table 3: Results with given closed-class tags, using posterior decoding, and projection at test time.
[Figure 3 panels omitted: accuracy curves on PT-CoNLL, BulTree, and PTB-17 for training sets of 10-100 labeled sentences, comparing Sparse 32, EM, VEM, and no unsupervised features ("none").]
Figure 3: Accuracy of a supervised classifier when trained using the output of various unsupervised models as features. Vertical axis: accuracy; horizontal axis: number of labeled sentences.
5.2 Supervised POS tagging
As a further comparison of the models trained using the different methods, we use them to generate
features for a supervised POS tagger. The basic supervised model has features for the identity of the
current token as well as suffixes of length 2 and 3. We augment these features with the state identity
for the current token, based on the automatically generated models. We train the supervised model
using averaged perceptron for 20 iterations.
For each unsupervised training procedure (EM, Sparse, VEM) we train 10 models using different
random initializations, obtaining 10 state identities per training method for each token. We then add
these cluster identities as features to the supervised model. Figure 3 shows the average accuracy of
the supervised model as we vary the type of unsupervised features. The average is taken over 10
random samples for the training set at each training set size. We can see from Figure 3 that using
our method or EM always improves performance relative to the baseline features (labeled "none" in the figure). VEM always underperforms EM, and for larger amounts of training data, the VEM
features appear not to be useful. This should not be surprising given that VEM has very low mutual
information with the gold labeling.
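The feature templates involved are minimal; as a sketch (names are ours), the per-token feature extraction looks like:

```python
def token_features(word, cluster_ids):
    """Features for one token: identity, 2- and 3-character suffixes, and
    the hidden-state id assigned by each of the 10 unsupervised models."""
    feats = [f"w={word}", f"suf2={word[-2:]}", f"suf3={word[-3:]}"]
    feats += [f"model{m}_state={s}" for m, s in enumerate(cluster_ids)]
    return feats

print(token_features("walking", cluster_ids=[3, 3, 7, 3, 1, 3, 3, 2, 3, 3]))
```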
6 Related Work
Our learning method is very closely related to the work of Mann and McCallum [11, 12], who
concurrently developed the idea of using penalties based on posterior expectations of features to
guide learning. They call their method generalized expectation (GE) constraints or alternatively
expectation regularization. In the original GE framework, the posteriors of the model are regularized
directly. For equality constraints, our objective would become:
arg max_θ L(θ) − E_D[R(E_θ[f])].   (10)
Notice that there is no intermediate distribution q. For some kinds of constraints this objective is
difficult to optimize in θ and in order to improve efficiency Bellare et al. [2] propose interpreting
the PR framework as an approximation to the GE objective in Equation 10. They compare the
two frameworks on several datasets and find that performance is similar, and we suspect that this
would be true for the sparsity constraints also. Liang et al. [10] cast the problem of incorporating
partial information about latent variables into a Bayesian framework using "measurements," and
they propose active learning for acquiring measurements to reduce uncertainty.
Recently, Ravi et al. [15] show promising results in weakly-supervised POS tagging, where a tag
dictionary is provided. This method first searches, using integer programming, for the smallest
grammar (in terms of unique transitions between tags) that explains the data. This sparse grammar
and the dictionary are provided as input for training an unsupervised HMM. Results show that using
a sparse grammar, hence enforcing sparsity over possible sparsity transitions leads to better results.
This method is different from ours in the sense that our method focuses on learning the sparsity pattern that their method uses as input.
7 Conclusion
We presented a new regularization method for unsupervised training of probabilistic models that
favors a kind of sparsity that is pervasive in natural language processing. In the case of part-of-speech induction, the preference can be summarized as "each word occurs as only a few different parts-of-speech," but the approach is more general and could be applied to other tasks. For example,
in grammar induction, we could favor models where only a small number of production rules have
non-zero probability for each child non-terminal.
Our method uses the posterior regularization framework to specify preferences about model posteriors directly, without having to say how these should be encoded in model parameters. This means
that the sparse regularization penalty could be used for a log-linear model, where sparse parameters
do not correspond to posterior sparsity.
We evaluated the new regularization method on the task of unsupervised POS tagging, encoding
the prior knowledge that each word should have a small set of tags as a mixed-norm penalty. We
compared our method to a previously proposed Bayesian method (VEM) for encouraging sparsity of
model parameters [9] and found that ours performs better in practice. We explain this advantage by
noting that VEM encodes a preference that each POS tag should generate a few words, which goes
in the wrong direction. In reality, in POS tagging (as in several other language processing task), a
few event types (tags) (such the NN for POS tagging) generate the bulk of the word occurrences,
but each word is only associated with a few tags. Even when some supervision was provided with
through closed class lists, our regularizer still improved performance over the other methods.
An analysis of sparsity shows that both VEM and Sparse achieve a similar posterior sparsity as measured by the ℓ1/ℓ∞ metric. While VEM models better the empirical sizes of states (tags), the states it assigns have lower mutual information to the true tags, suggesting that parameter sparsity is not as good at generating good tag assignments. In contrast, Sparse's sparsity seems to help build a
model that contains more information about the correct tag assignments.
Finally, we evaluated the worth of states assigned by unsupervised learning as features for supervised
tagger training with small training sets. These features are shown to be useful in most conditions,
especially those created by Sparse. The exceptions are some of the annotations provided by VEM
which actually hinder the performance, confirming that its lower mutual information states are not
so informative.
In future work, we would like to evaluate the usefulness of these sparser annotations for downstream tasks, for example determining whether Sparse POS tags are better for unsupervised parsing.
Finally, we would like to apply the ℓ1/ℓ∞ posterior regularizer to other applications such as unsupervised grammar induction where we would like sparsity in production rules. Similarly, it would
be interesting to use this to regularize a log-linear model, where parameter sparsity does not achieve
the same goal.
Acknowledgments
J. V. Graça was supported by a fellowship from Fundação para a Ciência e Tecnologia (SFRH/BD/27528/2006). K. Ganchev was supported by ARO MURI SUBTLE W911NF-07-1-0216. The authors would like to thank Mark Johnson and Jianfeng Gao for their help in reproducing the VEM results.
References
[1] S. Afonso, E. Bick, R. Haber, and D. Santos. Floresta Sintá(c)tica: a treebank for Portuguese. In Proc. LREC, pages 1698-1703, 2002.
[2] K. Bellare, G. Druck, and A. McCallum. Alternating projections for learning with expectation constraints. In Proc. UAI, 2009.
[3] D. P. Bertsekas, M. L. Homer, D. A. Logan, and S. D. Patek. Nonlinear Programming. Athena Scientific, 1995.
[4] Jianfeng Gao and Mark Johnson. A comparison of Bayesian estimators for unsupervised Hidden Markov Model POS taggers. In Proc. EMNLP, pages 344-352, Honolulu, Hawaii, October 2008. ACL.
[5] Y. Goldberg, M. Adler, and M. Elhadad. EM can find pretty good HMM POS-taggers (when given a good start). In Proc. ACL, pages 746-754, 2008.
[6] S. Goldwater and T. Griffiths. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proc. ACL, volume 45, page 744, 2007.
[7] J. Graça, K. Ganchev, and B. Taskar. Expectation maximization and posterior constraints. In Proc. NIPS. MIT Press, 2008.
[8] A. Haghighi and D. Klein. Prototype-driven learning for sequence models. In Proc. NAACL, pages 320-327, 2006.
[9] M. Johnson. Why doesn't EM find good HMM POS-taggers. In Proc. EMNLP-CoNLL, 2007.
[10] P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In Proc. ICML, 2009.
[11] G. Mann and A. McCallum. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proc. ICML, 2007.
[12] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proc. ACL, pages 870-878, 2008.
[13] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[14] B. Merialdo. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-171, 1994.
[15] Sujith Ravi and Kevin Knight. Minimized models for unsupervised part-of-speech tagging. In Proc. ACL, 2009.
[16] Kiril Simov, Petya Osenova, Milena Slavcheva, Sia Kolkovska, Elisaveta Balabanova, Dimitar Doikoff, Krassimira Ivanova, Alexander Simov, Er Simov, and Milen Kouylekov. Building a linguistically interpreted corpus of Bulgarian: the BulTreeBank. In Proc. LREC, 2002.
[17] N. A. Smith and J. Eisner. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. ACL, pages 354-362, 2005.
[18] K. Toutanova and M. Johnson. A Bayesian LDA-based model for semi-supervised part-of-speech tagging. In Proc. NIPS, 20, 2007.
Learning with Compressible Priors
Volkan Cevher
Rice University
[email protected]
Abstract
We describe a set of probability distributions, dubbed compressible priors, whose
independent and identically distributed (iid) realizations result in p-compressible
signals. A signal $x \in \mathbb{R}^N$ is called p-compressible with magnitude R if its sorted
coefficients exhibit a power-law decay as $|x|_{(i)} \lesssim R \cdot i^{-d}$, where the decay rate d
is equal to $1/p$. p-compressible signals live close to K-sparse signals ($K \ll N$)
in the $\ell_r$-norm ($r > p$) since their best K-sparse approximation error decreases
with $O(R \cdot K^{1/r - 1/p})$. We show that the membership of generalized Pareto, Student's t, log-normal, Fréchet, and log-logistic distributions to the set of compressible priors depends only on the distribution parameters and is independent of N.
In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and the GGD parameters:
the expected decay rate of N-sample iid realizations from the GGD with the shape
parameter q is given by $1/[q \log(N/q)]$. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible
whereas their pixel gradients are $0.95 \log(N/0.95)$-compressible, on the average.
We also leverage the connections between compressible priors and sparse signals
to develop new iterative re-weighted sparse signal recovery algorithms that outperform the standard $\ell_1$-norm minimization. Finally, we describe how to learn the
hyperparameters of compressible priors in underdetermined regression problems
by exploiting the geometry of their order statistics during signal recovery.
1 Introduction
Many problems in signal processing, machine learning, and communications can be cast as a linear
regression problem where an unknown signal $x \in \mathbb{R}^N$ is related to its observations $y \in \mathbb{R}^M$ via
$$y = \Phi x + n. \qquad (1)$$
In (1), the observation matrix $\Phi \in \mathbb{R}^{M \times N}$ is a non-adaptive measurement matrix with random
entries in compressive sensing (CS), an over-complete dictionary of features in sparse Bayesian
learning (SBL), or a code matrix in communications [1, 2]. The vector $n \in \mathbb{R}^M$ usually accounts
for physical noise with partially or fully known distribution, or it models bounded perturbations in
the measurement matrix or the signal.
Because of its theoretical and practical interest, we focus on the instances of (1) where there are
more unknowns than equations, i.e., $M < N$. Hence, determining x from y in (1) is ill-posed: $\forall v \in \mathrm{kernel}(\Phi)$, $x + v$ defines a solution space that produces the same observations y. Prior information
is therefore necessary to distinguish the true x among the infinitely many possible solutions. For
instance, CS and SBL frameworks assume that the signal x belongs to the set of sparse signals. By
sparse, we mean that at most K out of the N signal coefficients are nonzero, where $K \ll N$. CS
and SBL algorithms then regularize the solution space by signal priors that promote sparseness, and
they have been extremely successful in practice in a number of applications even if $M \ll N$ [1–3].
Unfortunately, prior information by itself is not sufficient to recover x from noisy y. Two more
key ingredients are required: (i) the observation matrix $\Phi$ must stably embed (or encode) the set of
signals x into the space of y, and (ii) a tractable decoding algorithm must exist to map y back to
x. By stable embedding, we mean that $\Phi$ is bi-Lipschitz, where the encoding $x \to \Phi x$ is one to
one and the inverse mapping $\Delta = \{\Delta(\Phi x) \to x\}$ is smooth. The bi-Lipschitz property of $\Phi$ is
crucial to ensure the stability in decoding x by controlling the amount by which perturbations of the
observations are amplified [1, 4]. Tractable decoding is important for practical reasons as we have
limited time and resources, and it can clearly restrict the class of usable signal priors.
In this paper, we describe compressible prior distributions whose independent and identically distributed (iid) realizations result in compressible signals. A signal is compressible when sorted magnitudes of its coefficients exhibit a power-law decay. For certain decay rates, compressible signals
live close to the sparse signals, i.e., they can be well-approximated by sparse signals. It is well-known that the set of K-sparse signals has stable and tractable encoder-decoder pairs $(\Phi, \Delta)$ for M
as small as $O(K \log(N/K))$ [1, 5]. Hence, an N-dimensional compressible signal with the proper
decay rate inherits the encoder-decoder pairs of its K-sparse approximation for a given approximation error, and can be stably embedded into dimensions logarithmic in N.
Compressible priors analytically summarize the set of compressible signals and shed new light on
underdetermined linear regression problems by building upon the literature on sparse signal recovery. Our main results are summarized as follows:
1) By using order statistics, we show that the compressibility of the iid realizations of generalized
Pareto, Student's t, Fréchet, and log-logistic distributions is independent of the signals' dimension.
These distributions are natural members of compressible priors: they truly support logarithmic dimensionality reduction and have important parameter learning guarantees from finite sample sizes.
We demonstrate that probabilistic models for the wavelet coefficients of natural images must also be
a natural member of compressible priors.
2) We point out a common misconception about the generalized Gaussian distribution (GGD): GGD
generates signals that lose their compressibility as N grows. For instance, special cases of the GGD
distribution, e.g., Laplacian distribution, are commonly used as sparsity promoting priors in CS and
SBL problems where M is assumed to grow logarithmically with N [1?3, 6]. We show that signals
generated from Laplacian distribution can only be stably embedded into lower dimensions that grow
proportional to N . Hence, we identify an inconsistency between the decoding algorithms motivated
by the GGD distribution and their sparse solutions.
3) We use compressible priors as a scaffold to build new decoding algorithms based on Bayesian
inference arguments. The objective of these algorithms is to approximate the signal realization from
a compressible prior as opposed to pragmatically producing sparse solutions. Some of these new
algorithms are variants of the popular iterative re-weighting schemes [3, 6–8]. We show how the tuning of these algorithms explicitly depends on the compressible prior parameters, and how to learn
the parameters of the signal's compressible prior on the fly while recovering the signal.
The paper is organized as follows. Section 2 provides the necessary background on sparse signal
recovery. Section 3 mathematically describes the compressible signals and ties them with the order
statistics of distributions to introduce compressible priors. Section 4 defines compressible priors,
identifies common misconceptions about the GGD distribution, and examines natural images as instances of compressible priors. Section 5 derives new decoding algorithms for underdetermined
linear regression problems. Section 6 describes an algorithm for learning the parameters of compressible priors. Section 7 provides simulations results and is followed by our conclusions.
2 Background on Sparse Signals
Any signal $x \in \mathbb{R}^N$ can be represented in terms of N coefficients $\alpha \in \mathbb{R}^{N \times 1}$ in a basis $\Psi \in \mathbb{R}^{N \times N}$ via
$x = \Psi\alpha$. Signal x has a sparse representation if only $K \ll N$ entries of $\alpha$ are nonzero. To account
for sparse signals in an appropriate basis, (1) should be modified as $y = \Phi x + n = \Phi\Psi\alpha + n$.
Let $\Sigma_K$ denote the set of all K-sparse signals. When $\Phi$ in (1) satisfies the so-called restricted
isometry property (RIP), it can be shown that $\Phi\Psi$ defines a bi-Lipschitz embedding of $\Sigma_K$ into
$\mathbb{R}^M$ [1, 4, 5]. Moreover, RIP implies the recovery of K-sparse signals to within a given error bound,
and the best attainable lower bounds for M are related to the Gelfand width of $\Sigma_K$, which is logarithmic in the signal dimension, i.e., $M = O(K \log(N/K))$ [5]. Without loss of generality, we
restrict our attention in the sequel to canonically sparse signals and assume that $\Psi = I$ (the $N \times N$
identity matrix) so that $x = \alpha$.
With the sparsity prior and RIP assumptions, inverse maps can be obtained by solving the following
convex problems:
$$\Delta_1(y) = \arg\min \|x'\|_1 \;\text{ s.t. }\; y = \Phi x', \quad
\Delta_2(y) = \arg\min \|x'\|_1 \;\text{ s.t. }\; \|y - \Phi x'\|_2 \le \epsilon, \quad
\Delta_3(y) = \arg\min \|x'\|_1 + \tau\|y - \Phi x'\|_2^2, \qquad (2)$$
where $\epsilon$ and $\tau$ are constants, and $\|x\|_r \triangleq \left(\sum_i |x_i|^r\right)^{1/r}$. The decoders $\Delta_i$ ($i = 1, 2$) are known as
basis pursuit (BP) and basis pursuit denoising (BPDN), respectively; and $\Delta_3$ is a scalarization of
BPDN [1, 9]. They also have the following deterministic worst-case guarantee when $\Phi$ has RIP:
$$\|x - \Delta(y)\|_2 \le C_1 \frac{\|x - x_K\|_1}{\sqrt{K}} + C_2\|n\|_2, \qquad (3)$$
where $C_{1,2}$ are constants, $x_K$ is the best K-term approximation, i.e., $x_K = \arg\min_{\|x'\|_0 \le K} \|x - x'\|_r$ for $r \ge 1$, and $\|x\|_0$ is a pseudo-norm that counts the number of nonzeros of x [1, 4, 5].
Note that the error guarantee (3) is adaptive to each given signal x because of the definition of xK .
Moreover, the guarantee does not assume that the signal is sparse.
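As a concrete instance of the decoders in (2), here is a minimal BPDN sketch built on a generic convex solver; we assume the cvxpy package and an arbitrary noise budget, so this is illustrative only and not the setup used later in the experiments:

```python
# Minimal BPDN sketch (decoder Delta_2 in (2)) using cvxpy; illustrative only.
import numpy as np
import cvxpy as cp

np.random.seed(0)
N, M, K = 256, 128, 10
x = np.zeros(N)
x[np.random.choice(N, K, replace=False)] = np.random.randn(K)  # K-sparse signal
Phi = np.random.randn(M, N) / np.sqrt(M)                       # random measurement matrix
y = Phi @ x + 0.01 * np.random.randn(M)                        # noisy observations

eps = 0.02 * np.sqrt(M)                       # noise budget (assumed known here)
xp = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm(xp, 1)),
                  [cp.norm(y - Phi @ xp, 2) <= eps])
prob.solve()
print("relative error:", np.linalg.norm(x - xp.value) / np.linalg.norm(x))
```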
3 Compressible Signals, Order Statistics and Quantile Approximations
We define a signal x as p-compressible if it lives close to the shell of the weak-$\ell_p$ ball of radius
R ($\mathrm{sw}\ell_p(R)$, pronounced as swell p). Defining $\bar{x}_i = |x_i|$, we arrange the signal coefficients $x_i$ in
decreasing order of magnitude as
$$\bar{x}_{(1)} \ge \bar{x}_{(2)} \ge \ldots \ge \bar{x}_{(N)}. \qquad (4)$$
Then, when $x \in \mathrm{sw}\ell_p(R)$, the i-th ordered entry $\bar{x}_{(i)}$ in (4) obeys
$$\bar{x}_{(i)} \lesssim R \cdot i^{-1/p}, \qquad (5)$$
where $\lesssim$ means "less than or approximately equal to." We deliberately substitute $\lesssim$ for $\le$ in the
p-compressibility definition of [1] to reduce the ambiguity of multiple feasible R and p values. In
Section 6, we describe a geometric approach to learn R and p so that $R \cdot i^{-1/p} \approx \bar{x}_{(i)}$.
Signals in $\mathrm{sw}\ell_p(R)$ can be well-approximated by sparse signals as the best K-term approximation
error decays rapidly to zero as
$$\|x - x_K\|_r \lesssim (r/p - 1)^{-1/r} R K^{1/r - 1/p}, \quad \text{when } p < r. \qquad (6)$$
Given M, a good rule of thumb is to set $K = M/[C \log(N/M)]$ ($C \approx 4$ or 5) and use (6) to predict
the approximation error for the decoders $\Delta_i$ in Section 2. Since the decoding guarantees are bounded
by the best K-term approximation error in $\ell_1$ (i.e., $r = 1$; cf. (3)), we will restrict our attention to
$x \in \mathrm{sw}\ell_p$ where $p < 1$. Including $p = 1$ adds a logarithmic error factor to the approximation errors,
which is not severe; however, it is not considered in this paper to avoid a messy discussion.
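To make the decay model (5) and the bound (6) concrete, here is a minimal numpy sketch (ours, not from the paper) that builds an idealized sw$\ell_p$(R) sequence and compares its best K-term $\ell_1$ tail error with the right-hand side of (6):

```python
# Sketch: empirical check of the best K-term approximation bound (6) for r = 1.
import numpy as np

N, R, p = 10_000, 1.0, 0.5
i = np.arange(1, N + 1)
x = R * i ** (-1.0 / p)               # idealized sw\ell_p(R) magnitudes, already sorted

for K in (10, 100, 1000):
    err = x[K:].sum()                 # best K-term error in \ell_1 (tail sum)
    bound = (1.0 / p - 1.0) ** (-1.0) * R * K ** (1.0 - 1.0 / p)   # (6) with r = 1
    print(f"K={K:5d}  empirical={err:.4f}  bound={bound:.4f}")
```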
Suppose now the individual entries $x_i$ of the signal x are random variables (RV) drawn iid with
respect to a probability density function (pdf) $f(x)$, i.e., $x_i \sim f(x)$ for $i = 1, \ldots, N$. Then, the $\bar{x}_{(i)}$'s
in (4) are also RVs and are known as the order statistics (OS) of yet another pdf $\bar{f}(\bar{x})$, which can
be related to $f(x)$ in a straightforward manner: $\bar{f}(\bar{x}) = f(\bar{x}) + f(-\bar{x})$. Note that even though the
RVs $x_i$ (hence, $\bar{x}_i$) are iid, the RVs $\bar{x}_{(i)}$ are statistically dependent.
The concept of OS enables us to create a link between signals summarized by pdfs and their compressibility, which is a deterministic property after the signals are realized. The key to establishing this link turns out to be the parameterized form of the quantile function of the pdf $\bar{f}(\bar{x})$. Let
$\bar{F}(\bar{x}) = \int_0^{\bar{x}} \bar{f}(v)\,dv$ be the cumulative distribution function (CDF) and $u = \bar{F}(\bar{x})$. The quantile
function $\bar{F}^{\star}(u)$ of $\bar{f}(\bar{x})$ is then given by the inverse of its CDF: $\bar{F}^{\star}(u) = \bar{F}^{-1}(u)$. We will refer to
$\bar{F}^{\star}(u)$ as the magnitude quantile function (MQF) of $f(x)$.
A well-known quantile approximation to the expected OS of a pdf is given by [10]:
$$\mathbb{E}[\bar{x}_{(i)}] = \bar{F}^{\star}\!\left(1 - \frac{i}{N+1}\right), \qquad (7)$$
where $\mathbb{E}[\cdot]$ is the expected value. Moreover, we have the following moment matching approximation
$$\bar{x}_{(i)} \sim \mathcal{N}\!\left(\mathbb{E}[\bar{x}_{(i)}],\; \frac{\frac{i}{N}\left(1 - \frac{i}{N}\right)}{N\,\bar{f}\!\left(\mathbb{E}[\bar{x}_{(i)}]\right)^2}\right), \qquad (8)$$
which can be used to quantify how much the actual realizations $\bar{x}_{(i)}$ deviate from $\mathbb{E}[\bar{x}_{(i)}]$. For
instance, these deviations for $i > K$ can be used to bound the statistical variations of the best K-term approximation error. In practice, the deviations are relatively small for compressible priors.
In Sections 4–6, we will use the quantile approximation in (7) as our basis to motivate the set of
compressible priors, derive recovery algorithms for x, and learn the parameters of compressible
priors during recovery.
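As an illustration of (7), the following sketch (our code; the GPD parameters are arbitrary) compares the MQF-based expected order statistics of a generalized Pareto prior with the sorted magnitudes of one iid sample:

```python
# Sketch: expected order statistics via the MQF (7) for a GPD(q, lambda) prior.
import numpy as np

rng = np.random.default_rng(0)
N, q, lam = 10_000, 1.0, 1.0

# iid GPD magnitudes via inverse-CDF sampling: |x| = lam * (U^{-1/q} - 1)
u = rng.uniform(size=N)
x_sorted = np.sort(lam * (u ** (-1.0 / q) - 1.0))[::-1]

# MQF-based expectation, eq. (7): E[x_(i)] = lam * ((N+1)^{1/q} i^{-1/q} - 1)
i = np.arange(1, N + 1)
expected = lam * ((N + 1.0) ** (1.0 / q) * i ** (-1.0 / q) - 1.0)

for idx in (0, 9, 99, 999):
    print(f"i={idx+1:5d}  sample={x_sorted[idx]:10.3f}  E[x_(i)]={expected[idx]:10.3f}")
```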
Table 1: Example distributions and the sw$\ell_p$(R) parameters of their iid realizations

Distribution | pdf | R | p
Generalized Pareto | $\frac{q}{2\lambda}\left(1 + \frac{|x|}{\lambda}\right)^{-(q+1)}$ | $\lambda N^{1/q}$ | $q$
Student's t | $\frac{\Gamma((q+1)/2)}{\sqrt{q\pi}\,\Gamma(q/2)}\left(1 + \frac{x^2}{q}\right)^{-(q+1)/2}$ | $\left[\frac{2\Gamma((q+1)/2)}{\sqrt{\pi}\,q\,\Gamma(q/2)}\right]^{1/q}\!\sqrt{q}\,N^{1/q}$ | $q$
Fréchet | $\frac{q}{\lambda}\left(\frac{x}{\lambda}\right)^{-(q+1)} e^{-(x/\lambda)^{-q}}$ | $\lambda N^{1/q}$ | $q$
Log-Logistic | $\frac{(q/\lambda)(x/\lambda)^{q-1}}{\left[1 + (x/\lambda)^q\right]^2}$ | $\lambda N^{1/q}$ | $q$
Generalized Gaussian | $\frac{q\,e^{-(|x|/\lambda)^q}}{2\lambda\Gamma(1/q)}$ | $\lambda \max\{1, \Gamma(1 + 1/q)\}\,\log^{1/q}(N/q)$ | $q \log(N/q)$
Weibull | $\frac{q}{\lambda}\left(\frac{x}{\lambda}\right)^{q-1} e^{-(x/\lambda)^q}$ | $\lambda \log^{1/q} N$ | $q \log N$
Gamma | $\frac{1}{\lambda\Gamma(q)}\left(\frac{x}{\lambda}\right)^{q-1} e^{-x/\lambda}$ | $\lambda \max\{1, \Gamma(1 + 1/q)^q\}\,\log(qN)$ | $\log(qN)$
Log-Normal | $\frac{q\,e^{-(q\log(x/\lambda))^2/2}}{\sqrt{2\pi}\,x}$ | $\lambda e^{\sqrt{2\log N}/q}$ | $\sqrt{2\log N}\,q$

4 Compressible Priors
A compressible prior $f(x; \theta)$ in $\ell_r$ is a pdf with parameters $\theta$ whose MQF satisfies
$$\bar{F}^{\star}\!\left(1 - \frac{i}{N+1}\right) \lesssim R(N, \theta) \cdot i^{-1/p(N,\theta)}, \quad \text{where } R > 0 \text{ and } p < r. \qquad (9)$$
Table 1 lists example pdfs, parameterized by $\theta = (q, \lambda) \succ 0$, and the sw$\ell_p$(R) parameters of their
N-sample iid realizations. In this paper, we fix $r = 1$ (cf. Section 3); hence, the example pdfs are
compressible priors whenever $p < 1$. In (9), we make it explicit that the sw$\ell_p$(R) parameters can
depend on the parameters $\theta$ of the specific compressible prior as well as the signal dimension N.
The dependence of the parameter p on N is of particular interest since it has important implications
in signal recovery as well as parameter learning from finite sample sizes, as discussed below.
We define natural p-compressible priors as the set $\mathcal{N}_p$ of compressible priors such that $p = p(\theta) < 1$
is independent of N, $\forall f(x; \theta) \in \mathcal{N}_p$. It is possible to prove that we can capture most of the $\ell_1$ energy in an N-sample iid realization from a natural p-compressible prior by using a constant K,
i.e., $\|x - x_K\|_1 \le \epsilon\|x\|_1$ for any desired $0 < \epsilon \ll 1$, by choosing $K = \lceil (p/\epsilon)^{\frac{p}{1-p}} \rceil$. Hence,
N-sample iid signal realizations from the compressible priors in $\mathcal{N}_p$ can be truly embedded into
dimensions M that grow logarithmically with N with tractable decoding guarantees due to (3). $\mathcal{N}_p$
members include the generalized Pareto (GPD), Fréchet (FD), and log-logistic distributions (LLD).
It then only comes as a surprise that the generalized Gaussian distribution (GGD) is not a natural p-compressible prior since its iid realizations lose their compressibility as N grows (cf. Table 1). While
it is common practice to use a GGD prior with $q \le 1$ for sparse signal recovery, we have no recovery guarantees for signals generated from GGD when M grows logarithmically with N in (1).¹ In
fact, to be p-compressible, the shape parameter of a GGD prior should satisfy $q = N e^{W_{-1}(-p/N)}$,
where $W_{-1}(\cdot)$ is the Lambert W-function with the alternate branch. As a result, the learned GGD
parameters from dimensionality-reduced data will in general depend on the dimension and may not
generalize to other dimensions. Along with GGD, Table 1 shows how the Weibull, gamma, and log-normal distributions are dimension-restricted in their membership to the set of compressible priors.
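The shape parameter condition above is easy to evaluate numerically; the sketch below (assuming scipy's lambertw for the alternate branch $W_{-1}$) shows how the q required for p-compressibility shrinks as N grows, and sanity-checks that $q \log(N/q) \approx 1/p$:

```python
# Sketch: GGD shape parameter needed for p-compressibility, q = N * exp(W_{-1}(-p/N)).
import numpy as np
from scipy.special import lambertw

p = 1.0  # target compressibility
for N in (100, 1_000, 10_000, 100_000):
    q = N * np.exp(np.real(lambertw(-p / N, k=-1)))   # alternate branch W_{-1}
    # sanity check: the expected decay parameter q * log(N/q) should be close to 1/p
    print(f"N={N:7d}  q={q:.4f}  q*log(N/q)={q * np.log(N / q):.4f}")
```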
Wavelet coefficients of natural images provide a stylized example to demonstrate why we should
care about the dimensional independence of the parameter p.² As a brief background, we first note
that research in natural image modeling to date has had two distinct approaches, with one focusing on deterministic explanations and the other pursuing probabilistic models [12]. Deterministic
approaches operate under the assumption that the natural images belong to Besov spaces, having a
bounded number of derivatives between edges. Unsurprisingly, wavelet thresholding is proven near-optimal for representing and denoising Besov space images. As the simplest example, the magnitude
sorted discrete wavelet coefficients $\bar{w}_{(i)}$ of a Besov q-image should satisfy $\bar{w}_{(i)} = R \cdot i^{-1/q}$. The
probabilistic approaches, on the other hand, exploit the power-law decay of the power spectra of images and fit various pdfs, such as GGD and the Gaussian scale mixtures, to the histograms of wavelet
¹ To illustrate the issues with the compressibility of GGD, consider the Laplacian distribution (LD: GGD
with $q = 1$), which is the conventional convex prior for promoting sparsity. Via order statistics, it is possible
to show that $\bar{x}_{(i)} \approx \lambda \log\frac{N}{i}$ for $x_i \sim \mathrm{GGD}(1, \lambda)$. Without loss of generality, let us judiciously pick $\lambda = 1/\log N$ so that $R = 1$. Then, we have $\|x\|_1 \approx N - 1$ and $\|x - x_K\|_1 \approx N - K\log(N/K) - K$. When
we only have K terms to capture $(1 - \epsilon)$ of the $\ell_1$ energy ($\epsilon \ll 1$) in the signal x, we need $K \gtrsim (1 - \epsilon)N$.
² Here, we assume that the reader is familiar with the discrete wavelet transform and its properties [11].
coefficients while trying to simultaneously capture the dependencies observed in the marginal and
joint distributions of natural image wavelet coefficients. Probabilistic approaches are quite important
in image compression because optimal compressors quantize the wavelet coefficients according to
the estimated distributions, dictating the image compression limits via Shannon's coding theorem.
We conjecture that probabilistic models that summarize the wavelet coefficients of natural images
belong to the set of natural (non-iid) p-compressible priors. We base our claim on two observations:
1) Due to the multiscale nature of the wavelet transform, the decay profile of the magnitude sorted
wavelet coefficients are scale-invariant, i.e., preserved at different resolutions, where lower resolutions inherit the highest resolution. Hence, probabilistic models that explain the wavelet transform of
any signals should exhibit this decay profile inheritance property. 2) The magnitude sorted wavelet
coefficients of natural images exhibit a constant decay rate, as expected of Besov space images.
Section 7.2 demonstrates the ideas using natural images from the Berkeley natural images database.
5 Signal Decoding Algorithms
Convex problems to recover sparse or compressible signals in (2) are usually motivated by Bayesian
inference. In a similar fashion, we formalize two new decoding algorithms below by assuming prior
distributions on the signal x and the noise n, and then asking inference questions given y in (1).
5.1 Fixed point continuation for a non-iid compressible prior
The multivariate Lomax distribution (MLD) provides an elementary example of a non-iid compressible prior. The pdf of the distribution is given by $\mathrm{MLD}(x; q, \lambda) \propto \left(1 + \sum_{i=1}^{N} \lambda_i^{-1}|x_i|\right)^{-q-N}$ [13].
For MLD, the marginal distribution of the signal coefficients is GPD, i.e., $x_i \sim \mathrm{GPD}(x; q, \lambda_i)$.
Moreover, given n realizations $x_{1:n}$ of MLD ($n \le N$), the joint marginal distribution of $x_{n+1:N}$ is
$\mathrm{MLD}\!\left(x_{n+1:N};\, q + n,\, \lambda_{n+1:N}\left(1 + \sum_{i=1}^{n} \lambda_i^{-1}|x_i|\right)\right)$. In the sequel, we assume $\lambda_i = \lambda\ \forall i$, for which
it can be proved that MLD is compressible with $p = 1$ [14]. For now, we will only demonstrate
this property via simulations in Section 7.1. With the MLD prior on x, we focus on only two optimization problems below, one based on BP and the other based on maximum a posteriori (MAP)
estimation. Other convex formulations, such as BPDN ($\Delta_2$ in (2)) and LASSO [15], trivially follow.
1) BP Decoder: When there is no noise, the observations are given by $y = \Phi x$, which has infinitely
many solutions, as discussed in Section 1. In this case, we can exploit the MLD likelihood function
to regularize the solution space. For instance, when we ask for the solution that maximizes the MLD
likelihood given y, it is easy to see that we obtain the BP decoder formulation, i.e., $\Delta_1(y)$ in (2).
2) MAP Decoder: Suppose that the noise coefficients ($n_i$'s in (1)) are iid Gaussian with zero mean
and variance $\sigma^2$, $n_i \sim \mathcal{N}(n; 0, \sigma^2)$. Although many inference questions are possible, here we seek
the mode of the posterior distribution to obtain a point estimate, also known as the MAP estimate.
Since we have $f(y|x) = \mathcal{N}(y - \Phi x; 0, \sigma^2 I_{M \times M})$ and $f(x) = \mathrm{MLD}(x; q, \lambda)$, the MAP estimate can
be derived using the Bayes rule as $\hat{x}_{\mathrm{MAP}} = \arg\max_{x'} f(y|x')f(x')$, which is explicitly given by
$$\hat{x}_{\mathrm{MAP}} = \arg\min_{x'} \|y - \Phi x'\|_2^2 + 2\sigma^2(q + N)\log\left(1 + \lambda^{-1}\|x'\|_1\right). \qquad (10)$$
Unfortunately, we stumble upon a non-convex problem in (10) during our quest for the MAP estimate. We circumvent the non-convexity in (10) using a majorization-minimization idea where
we iteratively obtain a tractable upper bound on the log-term in (10) using the following inequality:
$\forall u, v \in (0, \infty),\ \log u \le \log v + u/v - 1$. After some straightforward calculus, we obtain the
iterative decoder below, indexed by k, where $\hat{x}_{\{k\}}$ is the k-th iteration estimate ($\hat{x}_{\{0\}} = 0$):
$$\hat{x}_{\{k\}} = \arg\min_{x'} \|y - \Phi x'\|_2^2 + \eta_k\|x'\|_1, \quad \text{where } \eta_k = \frac{2\sigma^2(q + N)}{\lambda + \|\hat{x}_{\{k-1\}}\|_1}. \qquad (11)$$
The decoding approach in (11) can be viewed as a continuation (or a homotopy) algorithm where a
fixed point is obtained at each iteration, similar to [16]. This decoding scheme has provable, linear
convergence guarantees when $\|\hat{x}_{\{k\}}\|_1$ is strictly increasing (equivalently, $\eta_k$ decreasing) [16].
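Here is a minimal sketch of the continuation decoder (11), with an inner proximal-gradient (ISTA) loop standing in for the exact $\ell_1$-regularized solve; the function name and iteration counts are our own choices, not the paper's implementation:

```python
# Sketch of the MLD MAP decoder (11): outer continuation updating eta_k,
# inner ISTA (proximal gradient) for the l1-regularized least squares.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mld_map_decode(Phi, y, q, lam, sigma2, outer=5, inner=200):
    M, N = Phi.shape
    L = np.linalg.norm(Phi, 2) ** 2          # ||Phi||^2; gradient Lipschitz const is 2L
    x = np.zeros(N)
    for _ in range(outer):
        eta = 2.0 * sigma2 * (q + N) / (lam + np.abs(x).sum())   # eta_k in (11)
        for _ in range(inner):               # ISTA on ||y - Phi x||_2^2 + eta ||x||_1
            x = soft(x - (Phi.T @ (Phi @ x - y)) / L, eta / (2.0 * L))
    return x
```

With the experiment of Section 7.3 in mind, one would call, e.g., `mld_map_decode(Phi, y, q=0.4, lam=..., sigma2=...)`; all names here are ours.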
5.2 Iterative $\ell_s$-decoding for iid scale mixtures of GGD
We consider a generalization of GPD and the Student's t distribution, which we will denote as
the generalized Gaussian gamma scale mixture distribution (SMD, in short), whose pdf is given
by $\mathrm{SMD}(x; q, \lambda, s) \propto \left(1 + |x|^s/\lambda^s\right)^{-(q+1)/s}$. The additional parameter s of SMD modulates its
OS near the origin. It can be proved that SMD is p-compressible with $p = q$ [14]. SMD, for
instance, arises through the following interaction of the gamma distribution and GGD: $x = a^{-1/s}b$,
$a \sim \mathrm{Gamma}(a; q/s, \lambda^{-s})$, and $b \sim \mathrm{GGD}(b; s, 1)$. Given a, the distribution of x is a scaled GGD:
$f(x|a) \propto \mathrm{GGD}(x; s, a^{-1})$. Marginalizing a from $f(x|a)$, we reach the SMD as the true underlying
distribution of x. SMDs arise in multiple contexts, such as the SBL framework that exploits Student's
t (i.e., $s = 2$) for learning problems [2], and the Laplacian and Gaussian scale mixtures (i.e., $s = 1$
and 2, respectively) that model natural images [17, 18].
Due to lack of space, we only focus on noiseless observations in (1). We assume that x is an N-sample iid realization from $\mathrm{SMD}(x; q, \lambda, s)$ with known parameters $(q, \lambda, s) \succ 0$ and choose a solution $\hat{x}$ that maximizes the SMD likelihood to find the true vector x among the kernel of $\Phi$:
$$\hat{x} = \max_{x'} \mathrm{SMD}(x'; q, \lambda, s) = \min_{x'} \sum_i \log\left(1 + \lambda^{-s}|x_i'|^s\right), \;\text{ s.t. }\; y = \Phi x'. \qquad (12)$$
The majorization-minimization trick in Section 5.1 also circumvents the non-convexity in (12):
$$\hat{x}_{\{k\}} = \min_{x'} \sum_i w_{i,\{k\}}|x_i'|^s, \;\text{ s.t. }\; y = \Phi x'; \quad \text{where } w_{i,\{k\}} = \left(\lambda^s + |\hat{x}_{i,\{k-1\}}|^s\right)^{-1}. \qquad (13)$$
The decoding scheme in (13) is well-known as the iterative re-weighted $\ell_s$ algorithm [7, 19–21].
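The reweighted scheme (13) with $s = 1$ can be sketched in a few lines on top of a generic convex solver; we assume the cvxpy package here (this is our illustration, not the authors' code), and the weight update follows (13):

```python
# Sketch of iterative re-weighted l1 decoding (13) with s = 1.
import numpy as np
import cvxpy as cp

def reweighted_l1(Phi, y, lam, iters=5):
    M, N = Phi.shape
    w = np.ones(N)
    x_hat = np.zeros(N)
    for _ in range(iters):
        xp = cp.Variable(N)
        prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, xp), 1)),
                          [Phi @ xp == y])
        prob.solve()
        x_hat = xp.value
        w = 1.0 / (lam + np.abs(x_hat))      # w_{i,{k}} = (lam + |x_i|)^{-1}, s = 1
    return x_hat
```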
6 Parameter Learning for Compressible Distributions
While deriving decoding algorithms in Section 5, we assumed that the signal coefficients $x_i$ are
generated from a compressible prior $f(x; \theta)$ and that $\theta$ is known. We now relax the latter assumption
and discuss how to simultaneously estimate x and learn the parameters $\theta$.
When we visualize the joint estimation of x and $\theta$ from y in (1) as a graphical model, we immediately realize that x creates a Markov blanket for $\theta$. Hence, to determine $\theta$, we have to estimate
the signal coefficients. When $\Phi$ has the stable embedding property, we know that the decoding algorithms can obtain x with approximation guarantees, such as (3). Then, given x, we can choose
an estimator for $\theta$ via standard Bayesian inference arguments. Unfortunately, this argument leads to
one important road block: estimation of the signal x without knowing the prior parameters $\theta$.
A naïve approach to overcoming this road block is to split the optimization space and alternate
on x and $\theta$ while optimizing the Bayesian objective. Unfortunately, there is one important and
unrecognized bug in this argument: the estimated signal values are in general not iid, hence we
would be minimizing the wrong Bayesian objective to determine $\theta$. To see this, we first note that the
recovered signals $\hat{x}$ in general consist of $M \ll N$ non-zero coefficients that mimic the best K-term
approximation of the signal $x_K$ and some other coefficients that explain the small tail energy. We
then recall from Section 3 that the coefficients of $x_K$ are statistically dependent. Hence, at least
partially, the significant coefficients of $\hat{x}$ are also dependent. One way to overcome this dependency
issue is to treat the recovered signals as if they are drawn iid from a censored GPD. However, the
optimization becomes complicated and the approach does not provide any additional guarantees.
As an alternative, we propose to exploit geometry and use the consensus among the coefficients
in fitting the sw$\ell_p$(R) parameters via the auxiliary signal estimates $\hat{x}_{\{k\}}$ during iterative recovery.
To do this, we employ Fischler and Bolles' probabilistic random sample consensus (RANSAC)
algorithm [22] to fit a line, whose y-intercept is $\log R(N, \theta)$ and whose slope is $-1/p(N, \theta)$:
$$\log \hat{x}_{i,\{k\}} = \log R(N, \theta) - \frac{1}{p(N, \theta)}\log i, \quad \text{for } i = 1, \ldots, K; \text{ where } K = M/[C \log(N/M)], \qquad (14)$$
where $C \approx 4, 5$ as discussed in Section 3. RANSAC provides excellent results with high probability
even if the data contains significant outliers. Because of its probabilistic nature, it is computationally
efficient. The RANSAC algorithm requires a threshold to gate the observations and count how much
a proposed solution is supported by the observations [22]. We determine this threshold by bounding
the tail probability that the OS of a compressible prior will be out of bounds. For the pseudo-code
and further details of the RANSAC algorithm, cf. [22].
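For concreteness, here is a compact RANSAC sketch for the line fit (14); the threshold log 2 and 500 trials mirror the settings quoted later in Section 7.3, while the function and variable names are ours:

```python
# Sketch: RANSAC line fit of eq. (14) on the top-K sorted coefficient magnitudes.
import numpy as np

def ransac_swlp(x_hat, M, C=4, thresh=np.log(2.0), trials=500, seed=0):
    rng = np.random.default_rng(seed)
    N = x_hat.size
    K = max(2, int(M / (C * np.log(N / M))))
    mags = np.sort(np.abs(x_hat))[::-1][:K]
    t = np.log(np.arange(1, K + 1))          # log i
    v = np.log(mags + 1e-12)                 # log x_(i)
    best = (0, 0.0, 1.0)                     # (n_inliers, logR, inv_p)
    for _ in range(trials):
        i1, i2 = rng.choice(K, size=2, replace=False)
        slope = (v[i2] - v[i1]) / (t[i2] - t[i1])
        intercept = v[i1] - slope * t[i1]
        inliers = np.abs(v - (intercept + slope * t)) < thresh
        if inliers.sum() > best[0]:
            best = (inliers.sum(), intercept, -slope)
    _, logR, inv_p = best
    return np.exp(logR), 1.0 / max(inv_p, 1e-12)   # estimates of (R, p)
```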
7 Experiments
7.1 Order Statistics
To demonstrate the sw$\ell_p$(R) decay profile of p-compressible priors, we generated iid realizations
of GGD with $q = 1$ (LD) and GPD with $q = 1$, and (non-iid) realizations of MLD with $q = 1$ of
varying signal dimensions $N = 10^j$, where $j = 2, 3, 4, 5$. We sorted the magnitudes of the signal
coefficients and normalized them by their corresponding value of R. We then plotted the results on a
log-log scale in Fig. 1. At http://dsp.rice.edu/randcs, we provide a MATLAB routine (randcs.m) so
that it is easy to repeat the same experiment for the rest of the distributions in Table 1.
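In the same spirit as randcs.m, a small Python sketch (ours, not the distributed MATLAB routine) that reproduces the normalized sorted-magnitude curves for LD and GPD and estimates the top-decade slope:

```python
# Sketch: normalized sorted magnitudes for iid LD and GPD realizations (cf. Fig. 1).
import numpy as np

rng = np.random.default_rng(0)
q, lam = 1.0, 1.0
for j in (2, 3, 4, 5):
    N = 10 ** j
    ld = rng.laplace(scale=lam, size=N)                        # GGD with q = 1
    u = rng.uniform(size=N)
    gpd = lam * (u ** (-1.0 / q) - 1.0) * rng.choice([-1, 1], size=N)
    for name, x, R in (("LD", ld, lam * np.log(N)),
                       ("GPD", gpd, lam * N ** (1.0 / q))):
        y = np.sort(np.abs(x))[::-1] / R                       # normalize by R
        i = np.arange(1, N + 1)
        # slope of log y vs log i over the first decade of indices
        s = np.polyfit(np.log(i[:N // 10]), np.log(y[:N // 10] + 1e-300), 1)[0]
        print(f"N=10^{j} {name:>3}: top-decade slope ~ {s:+.2f}")
```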
[Figure 1: Numerical illustration of the sw$\ell_p$(R) decay profile of three different pdfs. Panels: (a) LD (iid), (b) GPD (iid), (c) MLD. Each panel plots normalized values against ordered index (powers of 10) on a log-log scale, showing the average of 100 realizations and a reference line of slope −1.]
To live in sw$\ell_p$(1) with $0 < p \le 1$, the slope of the resulting curve must be less than or equal to −1.
Figure 1(a) illustrates that the iid LD slope is much greater than −1 and moreover logarithmically
grows with N. In contrast, Fig. 1(b) shows that iid GPD with $q = 1$ exhibits the constant slope of
−1 that is independent of N. MLD with $q = 1$ also delivers such a slope (Fig. 1(c)). The latter two
distributions thus produce compressible signal realizations, while the Laplacian does not.
7.2 Natural Images
We investigate the images from the Berkeley natural images database in the context of p-compressible priors. We randomly sample 100 image patches of varying sizes $N = 2^j \times 2^j$
(j = 3, . . . , 8), take their wavelet transforms (scaling filter: daub2), and plot the average of their
magnitude ordered wavelet coefficients in Figs. 2(a) and (b) (solid lines). Figure 2(c) also illustrates
the OS of the pixel gradients, which are of particular interest in many applications.
Along with the wavelet coefficients, Fig. 2(a) superposes the expected OS of GPD with $q = 1.67$
and $\lambda = 10$ (dashed line), given by $\bar{x}_{(i)}\{\mathrm{GPD}(q, \lambda)\} = \lambda\left((N + 1)^{1/q} i^{-1/q} - 1\right)$ ($i = 1, \ldots, N$).
Although wavelet coefficients of natural images do not follow an iid distribution, they exhibit a
constant decay rate, which can be well-approximated by an iid GPD distribution. This apparent
constant decay rate is well-explained by the decay profile inheritance of the wavelet transform across
different resolutions and supports the Besov space assumption used in the deterministic approaches.
The GPD rate of $q = 1.67$ implies a disappointing $O(K^{-0.1})$ approximation rate in the $\ell_2$-norm
vs. the theoretical $O(K^{-0.5})$ rate [23]. Moreover, we lose all the guarantees in the $\ell_1$-norm.
[Figure 2: Approximation of the order statistics and histograms of natural images with GPD and GGD. Panels (log-log, coefficient amplitudes vs. ordered index): (a) wavelet coefficients with GPD($q = 1/0.6$, $\lambda = 10$) overlay, (b) wavelet coefficients with a histogram-fit GGD overlay, (c) pixel gradients with GGD($q = 0.95$, $\lambda = 25$) overlay; solid lines are averages over 100 images.]
In contrast, Fig. 2(b) demonstrates the GGD histogram fits to the wavelet coefficients, where the
GGD exponent $q \in [0.5, 1]$ depends on the particular dimension and decreases as N increases. The
histogram matching is common practice in the existing probabilistic approaches (e.g., [18]) to determine pdfs that explain the statistics of natural images. Typically, least square error metrics or
Kullback-Leibler (KL) divergence measures are used. Although the GGD fit via histogram matching
in Fig. 2(b) deceptively appears to fit a small number of coefficients, we emphasize the log-log scale
of the plots and mention that there is a significant number of coefficients in the narrow space where
the GGD distribution is a good fit. Unfortunately, these approaches approximate the wavelet
[Figure 3: Improvements afforded by re-weighted $\ell_1$-decoding (a) with known parameters $\theta$ and (b) with
learning. (c) The learned sw$\ell_p$ exponent of the GPD distribution with $q = 0.4$ via the RANSAC algorithm.]
coefficients of natural images that have almost no approximation power of the overall image. Moreover,
the learned GGD distribution is dimension dependent, assigns lower probability to the large coefficients that explain the image well, and predicts a mismatched OS of natural images (cf. Fig. 2(b)).
Figure 2(c) compares the magnitude ordered pixel gradients of the images (solid lines) with the
expected OS of GGD (dashed line). From the figure, it appears that the natural image pixel gradients
lose their compressibility as the image dimensions grow, similar to the GGD, Weibull, gamma, and
log-normal distributions. In the figure, the GGD parameters are given as $(q, \lambda) = (0.95, 25)$.
7.3 Iterative $\ell_1$ Decoding
We repeat the compressible signal recovery experiment in Section 3.2 of [7] to demonstrate the
performance of our iterative $\ell_s$ decoder with $s = 1$ in (13). We first randomly sample a signal
$x \in \mathbb{R}^N$ ($N = 256$) where the signal coefficients are iid from the GPD distribution with $q = 0.4$
and $\lambda = (N + 1)^{-1/q}$ so that $\mathbb{E}[\bar{x}_{(1)}] \approx 1$. We set $M = 128$ and draw a random $M \times N$ matrix
with iid Gaussian entries to obtain $y = \Phi x$. We then decode signals via (13), where the maximum
number of iterations is set to 5, both with knowledge of the signal parameters and with learning. During the
learning phase, we use log(2) as the threshold for the RANSAC algorithm. We set the maximum
iteration count of RANSAC to 500.
The results of a Monte Carlo run with 100 independent realizations are illustrated in Fig. 3. In
Figs. 3(a) and (b), the plots summarize the average improvement over the standard decoder $\Delta_1(y)$
via the histograms of $\|x - \hat{x}_{\{4\}}\|_2 / \|x - \Delta_1(y)\|_2$, which have mean and standard deviation
(0.7062, 0.1380) when we know the parameters of the GPD (a) and (0.7101, 0.1364) when we learn
the parameters of the GPD via RANSAC (b). The learned sw$\ell_p$ exponent is summarized by the histogram in Fig. 3(c), which has mean and standard deviation (0.3757, 0.0539). Hence, we conclude
that our alternative learning approach via the RANSAC algorithm is competitive with knowing
the actual prior parameters that generated the signal. Moreover, the computational time of learning
is insignificant compared to the time required by the state-of-the-art linear SPGL1 algorithm [24].
8 Conclusions³
Compressible priors create a connection between probabilistic and deterministic models for signal
compressibility. The bridge between these two seemingly different modeling frameworks turns out
to be the concept of order statistics. We demonstrated that when the p-parameter of a compressible
prior is independent of the ambient dimension N , it is possible to have truly logarithmic embedding
of its iid signal realizations. Moreover, the learned parameters of such compressible priors are dimension agnostic. In contrast, we showed that when the p-parameter depends on N , we have many
restrictions in signal embedding and recovery as well as in parameter learning. We illustrated that
wavelet coefficients of natural images can be well approximated by the generalized Pareto prior,
which in turn predicts a disappointing approximation rate for image coding with the naïve sparse
model and for CS image recovery from measurements that grow only logarithmically with the image dimension. We motivated many of the existing sparse signal recovery algorithms as instances of
a corresponding compressible prior and discussed parameter learning for these priors from dimensionality reduced data. We hope that the iid compressibility view taken in this paper will pave the
way for a better understanding of probabilistic non-iid and structured compressibility models.
³ We thank R. G. Baraniuk, M. Wakin, M. Davies, J. Haupt, and J. P. Slavinksy for useful discussions.
Supported by ONR N00014-08-1-1112, DARPA N66001-08-1-2065, ARO W911NF-09-1-0383 grants.
References
[1] E. J. Candès. Compressive sampling. In Proc. International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[2] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research, 1:211–244, 2001.
[3] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing, 52(8):2153–2164, 2004.
[4] T. Blumensath and M. E. Davies. Sampling theorems for signals from the union of linear subspaces. IEEE Trans. Info. Theory, 2009.
[5] A. Cohen, W. Dahmen, and R. DeVore. Compressed sensing and best k-term approximation. American Mathematical Society, 22(1):211–231, 2009.
[6] I. F. Gorodnitsky, J. S. George, and B. D. Rao. Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroenceph. and Clin. Neurophys., 95(4):231–251, 1995.
[7] E. J. Candès, M. B. Wakin, and S. P. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.
[8] D. P. Wipf and S. Nagarajan. Iterative reweighted ℓ1 and ℓ2 methods for finding sparse solutions. In SPARS09, Rennes, France, 2009.
[9] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, pages 129–159, 2001.
[10] H. A. David and H. N. Nagaraja. Order Statistics. Wiley-Interscience, 2004.
[11] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
[12] H. Choi and R. G. Baraniuk. Wavelet statistical models and Besov spaces. Lecture Notes in Statistics, pages 9–30, 2003.
[13] T. K. Nayak. Multivariate Lomax distribution: properties and usefulness in reliability theory. Journal of Applied Probability, pages 170–177, 1987.
[14] V. Cevher. Compressible priors. IEEE Trans. on Information Theory, in preparation, 2010.
[15] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, pages 267–288, 1996.
[16] E. T. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19:1107, 2008.
[17] P. J. Garrigues. Sparse Coding Models of Natural Images: Algorithms for Efficient Inference and Learning of Higher-Order Structure. PhD thesis, EECS Department, University of California, Berkeley, May 2009.
[18] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In NIPS, 2000.
[19] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS, volume 20, 2008.
[20] I. Daubechies, R. DeVore, M. Fornasier, and S. Gunturk. Iteratively re-weighted least squares minimization for sparse recovery. Commun. Pure Appl. Math, 2009.
[21] R. Chartrand and W. Yin. Iteratively reweighted algorithms for compressive sensing. In ICASSP, pages 3869–3872, 2008.
[22] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
[23] E. J. Candès and D. L. Donoho. Curvelets and curvilinear integrals. Journal of Approximation Theory, 113(1):59–90, 2001.
[24] E. van den Berg and M. P. Friedlander. Probing the Pareto frontier for basis pursuit solutions. SIAM Journal on Scientific Computing, 31(2):890–912, 2008.
Bayesian Sparse Factor Models and DAGs
Inference and Comparison
Ole Winther
DTU Informatics
Technical University of Denmark
2800 Lyngby, Denmark
Bioinformatics Centre
University of Copenhagen
2200 Copenhagen, Denmark
[email protected]
Ricardo Henao
DTU Informatics
Technical University of Denmark
2800 Lyngby, Denmark
Bioinformatics Centre
University of Copenhagen
2200 Copenhagen, Denmark
[email protected]
Abstract
In this paper we present a novel approach to learn directed acyclic graphs (DAGs)
and factor models within the same framework while also allowing for model comparison between them. For this purpose, we exploit the connection between factor
models and DAGs to propose Bayesian hierarchies based on spike and slab priors to promote sparsity, heavy-tailed priors to ensure identifiability and predictive
densities to perform the model comparison. We require identifiability to be able to
produce variable orderings leading to valid DAGs and sparsity to learn the structures. The effectiveness of our approach is demonstrated through extensive experiments on artificial and biological data showing that our approach outperforms a
number of state-of-the-art methods.
1 Introduction
Sparse factor models have proven to be a very versatile tool for detailed modeling and interpretation
of multivariate data, for example in the context of gene expression data analysis [1, 2]. A sparse
factor model encodes the prior knowledge that the latent factors only affect a limited number of the
observed variables. An alternative way of modeling the data is through linear regression between
the measured quantities. This multiple regression model is a well-defined multivariate probabilistic
model if the connectivity (non-zero weights) defines a directed acyclic graph (DAG). What usually
is done in practice is to consider either factor or DAG models. Modeling the data with both types
of models at the same time and then perform model comparison should provide additional insight
as these models are complementary and often closely related. Unfortunately, existing off-the-shelf
models are specified in such a way that makes direct comparison difficult. A more principled idea
that can be phrased in Bayesian terms is for example to find an equivalence between both models, then
represent them using a common/comparable hierarchy, and finally use a marginal likelihood or a
predictive density to select one of them. Although a formal connection between factor models and
DAGs has been already established in [3], this paper makes important extensions such as explicitly
modeling sparsity, stochastic search over the order of the variables and model comparison.
It is well known that learning the structure of graphical models, in particular DAGs, is a very difficult
task because it turns out to be a combinatorial optimization problem known to be NP-hard [4]. A
commonly used approach for structure learning is to split the problem into two stages using the
fact that the space of variable orderings is far smaller than the space of all possible structures,
e.g. by first attempting to learn a suitable permutation of the variables and then the skeleton of the
structure given the already found ordering, or vice versa. Most of the work so far for continuous
data assumes linearity and Gaussian variables; hence they can only recover the DAG structure up
to Markov equivalence [5, 6, 7, 8], which means that some subset of links can be reversed without
changing the likelihood [9]. To break the Markov equivalence usually experimental (interventional)
data in addition to the observational (non-interventional) data is required [10]. In order to obtain
identifiability from purely observational data, strong assumptions have to be made [11, 3, 12]. In
this work we follow the line of [3] by starting from a linear factor model and ensure identifiability by
using non-normal heavy-tailed latent variables. As a byproduct we find a set of candidate orderings
compatible with a linear DAG, i.e. a mixing matrix which is "close to" triangular. Finally, we may
perform model comparison between the factor and DAG models inferred with fixed orderings taken
from the candidate set.
The rest of the paper is organized as follows. In Sections 2 to 5 we motivate and describe the different
ingredients in our method, in Section 6 we discuss existing work, in Section 7 experiments on both
artificial and real data are presented, and Section 8 concludes with a discussion and perspectives for
future work.
2 From DAGs to factor models
We will assume that an ordered d-dimensional data vector Px can be represented as a directed
acyclic graph with only observed nodes, where P is the usually unknown true permutation matrix. We will focus entirely on linear models such that the value of each variable is a linear weighted
combination of parent nodes plus a driving signal z,
$$x = P^{-1}BPx + z, \qquad (1)$$
where B is a strictly lower triangular square matrix. In this setting, each non-zero element of B
corresponds to a link in the DAG. Solving for x we can rewrite the problem as
$$x = P^{-1}APz = P^{-1}(I - B)^{-1}Pz, \qquad (2)$$
which corresponds to a noise-free linear factor model with the restriction that $P^{-1}AP$ must have a
sparsity pattern that can be permuted to a triangular form since $(I - B)^{-1}$ is triangular. This requirement alone is not enough to ensure identifiability (up to scaling and permutation of columns $P_f$).¹
We further have to use prior knowledge about the distribution of the factors z. A necessary condition
is that these must be a set of non-Gaussian independent variables [11]. For heavy-tailed data it is
often sufficient in practice to use a model with heavier tails than Gaussian [13]. If the requirements
for A and for the distribution of z are met, we can first estimate $P^{-1}AP$ and subsequently find P by
searching over the space of all possible orderings. Recently, [3] applied the fastICA algorithm to
solve for the inverse mixing matrix $P^{-1}A^{-1}P$. To find a candidate solution for B, P is set such
that B found from the direct relation equation (1), $B = I - A^{-1}$ (according to a magnitude-based
criterion), is as close as possible to lower triangular. In the final step the Wald statistic is used for
pruning B and the chi-square test is used for model selection.
In our work we also exploit the relation between the factor models and linear DAGs. We apply a
Bayesian approach to learning sparse factor models and DAGs, and the stochastic search for P
is performed as an integrated part of inference of the sparse factor model. The inference of factor
model (including order) and DAG parameters are performed as two separate inferences such that the
only input that comes from the first part is a set of candidate orders.
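A small numerical check (our own illustration, with P fixed to the identity ordering) of the identity behind (1) and (2): a strictly lower triangular B yields a factor model with mixing matrix $A = (I - B)^{-1}$, and B is recovered as $I - A^{-1}$:

```python
# Sketch: equivalence x = B x + z  <=>  x = (I - B)^{-1} z for a lower-triangular B.
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = np.tril(rng.normal(size=(d, d)), k=-1)        # strictly lower triangular DAG weights
B[rng.uniform(size=(d, d)) < 0.5] = 0.0           # sparsify some links
B = np.tril(B, k=-1)
z = rng.laplace(size=d)                           # heavy-tailed driving signal

x_struct = np.linalg.solve(np.eye(d) - B, z)      # solve (I - B) x = z
A = np.linalg.inv(np.eye(d) - B)                  # factor-model mixing matrix
print(np.allclose(x_struct, A @ z))               # True: both representations agree
print(np.allclose(B, np.eye(d) - np.linalg.inv(A)))  # recover B = I - A^{-1}
```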
3 From factor models to DAGs
Our first goal is to perform model inference in the families of factor and linear DAG models. We
specify the joint distribution or probability of everything, e.g. for the factor model, as
$$p(X, A, Z, \Psi, P, \cdot) = p(X|A, Z, P, \Psi)\,p(A|\cdot)\,p(Z|\cdot)\,p(\Psi|\cdot)\,p(P|\cdot)\,p(\cdot),$$
where $X = [x_1, \ldots, x_N]$, $Z = [z_1, \ldots, z_N]$, N is the number of observations and $(\cdot)$ indicates
additional parameters in the hierarchical models. The prior over permutations $p(P|\cdot)$ will always
be chosen to be uniform over the d! possible values. The actual sampling based inference for P is
discussed in the next section and the standard Gibbs sampling components are provided in the supplementary material. Model comparison should ideally be performed using the marginal likelihood.
This is more difficult to calculate with sampling than obtaining samples from the posterior so we
use the predictive densities on a test set as a yardstick.
¹ These ambiguities do not affect our ability to find the correct permutation P of the rows.
Factor model  Instead of using the noise-free factor model of equation (2) we allow for additive
noise, $x = P_r^{-1}AP_c z + \varepsilon$, where $\varepsilon$ is an additional Gaussian noise term with diagonal covariance
matrix $\Psi$, i.e. uncorrelated noise, to account for independent measurement noise, $P_r = P$ is the
permutation matrix for the rows of A and $P_c = P_f P_r$ another permutation for the columns with
$P_f$ accounting for the permutation freedom of the factors. We will not restrict the mixing matrix
A to be triangular. Instead we infer $P_r$ and $P_c$ using a stochastic search based upon closeness to
triangular as measured by a masked likelihood, see below. Now we can specify a hierarchy for the
Bayesian model as follows
$$X|P_r, A, P_c, Z, \Psi \sim \mathcal{N}(X|P_r^{-1}AP_cZ, \Psi), \quad
Z \sim \pi(Z|\cdot), \quad
A \sim \pi(A|\cdot), \quad
\psi_i^{-1}|s_s, s_r \sim \mathrm{Gamma}(\psi_i^{-1}|s_s, s_r), \qquad (3)$$
where $\psi_i$ are elements of $\Psi$. For convenience, to exploit conjugate exponential families we are
placing a gamma prior on the precision of $\varepsilon$ with shape $s_s$ and rate $s_r$. Given that the data is
standardized, the selection of hyperparameters for $\psi_i$ is not very critical as long as both "signal and
noise" are supported. The prior should favor small values of $\psi_i$ as well as providing support for
$\psi_i = 1$ such that certain variables can be explained solely by noise (we set $s_s = 2$ and $s_r = 0.05$ in
the experiments).
For the factors we use a heavy-tailed prior $\pi(Z|\cdot)$ in the form of a Laplace distribution parameterized
for convenience as a scale mixture of Gaussians [14]
$$z_{jn}|\mu, \lambda \sim \mathrm{Laplace}(z_{jn}|\mu, \lambda) = \int_0^{\infty} \mathcal{N}(z_{jn}|\mu, \upsilon_{jn})\,\mathrm{Exponential}(\upsilon_{jn}|\lambda^2)\,d\upsilon_{jn}, \qquad (4)$$
$$\lambda^2|k_s, k_r \sim \mathrm{Gamma}(\lambda^2|k_s, k_r), \qquad (5)$$
where $z_{jn}$ is an element of Z, $\lambda$ is the rate and $\upsilon_{jn}$
has an exponential distribution acting as mixing density. Furthermore, we place a gamma distribution on
$\lambda^2$ to get conditionals for $\lambda$ and $\lambda^2$ in standard conjugate families. We let the components of Z have
on average unit variance. This is achieved by setting
$k_s/k_r = 2$ (we set $k_s = 4$ and $k_r = 2$). Alternatively
one may use a t distribution, again as a scale mixture
of Gaussians, which can interpolate between very
heavy-tailed (power law) and very light tails, i.e. becoming Gaussian when the degrees of freedom approach
infinity. However, such flexibility comes at the price of
being more difficult to select its hyperparameters, because the model could become unidentified for some
settings.
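The scale-mixture construction (4) is easy to verify by simulation; note that the exponential rate convention below (rate $\lambda^2/2$ on the mixing variance) is our assumption for matching a rate-$\lambda$ Laplace, since the parameterization in (4) is only sketched here:

```python
# Sketch: Laplace via an exponential scale mixture of Gaussians (cf. eqs. (4)-(5)).
# Convention assumed here: v ~ Exponential(rate = lam^2 / 2) gives a rate-lam Laplace.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 2.0, 200_000

v = rng.exponential(scale=2.0 / lam**2, size=n)   # mixing variances
z = rng.normal(loc=0.0, scale=np.sqrt(v))         # z | v ~ N(0, v)

z_direct = rng.laplace(loc=0.0, scale=1.0 / lam, size=n)  # Laplace, rate lam
print("mixture var:", z.var(), " direct var:", z_direct.var(), " theory:", 2.0 / lam**2)
```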
[Figure 1: Graphical model for the Bayesian hierarchy in equation (3). Nodes: $\nu_j$, $\lambda$, $\upsilon_{jn}$, $\eta_{ij}$, $q_{ij}$, $a_{ij}$, $z_{jn}$, $r_{ij}$, $\tau_{ij}$, $\psi_i$, $x_{in}$; plates over $j = 1{:}d$, $n = 1{:}N$, $i = 1{:}d$.]
The prior $\pi(A|\cdot)$ for the mixing matrix should be biased towards sparsity because we want to infer
something close to a triangular matrix. Here we adopt a two-layer discrete spike and slab prior for
the elements $a_{ij}$ of A similar to the one in [2]. The first layer in the prior controls the sparsity of
each element $a_{ij}$ individually, whereas the second layer imposes a per-factor sparsity level to allow
elements within the same factor to share information. The hierarchy can be written as
$$a_{ij}|r_{ij}, \psi_i, \tau_{ij} \sim (1 - r_{ij})\delta(a_{ij}) + r_{ij}\,\mathcal{N}(a_{ij}|0, \psi_i\tau_{ij}),$$
$$\tau_{ij}^{-1}|t_s, t_r \sim \mathrm{Gamma}(\tau_{ij}^{-1}|t_s, t_r),$$
$$r_{ij}|\eta_{ij} \sim \mathrm{Bernoulli}(r_{ij}|\eta_{ij}),$$
$$\eta_{ij}|q_{ij}, \alpha_p, \alpha_m \sim (1 - q_{ij})\delta(\eta_{ij}) + q_{ij}\,\mathrm{Beta}(\eta_{ij}|\alpha_p\alpha_m, \alpha_p(1 - \alpha_m)),$$
$$q_{ij}|\nu_j \sim \mathrm{Bernoulli}(q_{ij}|\nu_j),$$
$$\nu_j|\beta_m, \beta_p \sim \mathrm{Beta}(\nu_j|\beta_p\beta_m, \beta_p(1 - \beta_m)), \qquad (6)$$
where $\delta(\cdot)$ is a Dirac $\delta$-function. The prior above specifies a point mass mixture over $a_{ij}$ with mask
$r_{ij}$. The expected probability of $a_{ij}$ being non-zero is $\eta_{ij}$ and is controlled through a beta hyperprior
with mean $\alpha_m$ and precision $\alpha_p$. Besides, each factor has a common sparsity rate $\nu_j$ that lets the
elements $\eta_{ij}$ be exactly zero with probability $1 - \nu_j$ through a beta distribution with mean $\beta_m$ and
precision ?p , turning the distribution of ?ij bimodal over the unit interval. The magnitude of nonzero elements in A is specified through the slab distribution depending on ?ij . The parameters for
?ij should be specified in the same fashion as ?i but putting more probability mass around aij = 1,
for instance ts = 4 and tr = 10. Note that we scale the variances with ?i since it makes the
model easier to specify and tend to have better mixing properties [15]. The masking matrix rij with
parameters ?ij should be somewhat diffuse while favoring relatively large masking probabilities,
e.g. ?p = 10 and ?m = 0.9. Additionally, qj and should favor very small values with low variance,
this is for example ?p = 1000 and ?m = 0.005. The graphical model for the entire hierarchy in (3)
omitting parameters is shown in Figure 1.
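To make the hierarchy concrete, the following sketch draws one mixing matrix from the prior in (6), using the hyperparameter values quoted above; the square d × d shape and all variable names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # assume a square d x d mixing matrix
ap, am = 10.0, 0.9                     # beta hyperprior on eta: precision, mean
bp, bm = 1000.0, 0.005                 # beta hyperprior on nu: precision, mean
ts, tr = 4.0, 10.0                     # gamma prior on 1/tau: shape, rate
psi = np.ones(d)                       # noise variances (drawn elsewhere)

nu = rng.beta(bp * bm, bp * (1.0 - bm), size=d)            # per-factor rates
q = rng.random((d, d)) < nu[None, :]                       # q_ij ~ Bern(nu_j)
eta = np.where(q, rng.beta(ap * am, ap * (1.0 - am), size=(d, d)), 0.0)
r = rng.random((d, d)) < eta                               # masks r_ij
tau = 1.0 / rng.gamma(ts, 1.0 / tr, size=(d, d))           # 1/tau ~ Gamma(ts, tr)
A = np.where(r, rng.normal(0.0, np.sqrt(psi[:, None] * tau)), 0.0)
print(f"fraction of exact zeros in A: {np.mean(A == 0):.2f}")
```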
DAG We make the following Bayesian specification of the linear DAG model of equation (1):

$$X \mid P_r, B, \cdot \sim \pi\!\left(X - P_r^{-1} B\, P_r X \mid \cdot\right), \qquad B \sim \pi(B \mid \cdot), \tag{7}$$
where the priors are given by equations (4) and (6). The Bayesian specification for the DAG has a similar graphical model to the one in Figure 1, but without the noise variances Ψ. The factor model needs only the shared variance parameter λ for the Laplace-distributed z_jn, because a change of scale in A is equivalent to a change of variance in z_jn. The DAG, on the other hand, needs individual variance parameters because it has no scaling freedom. Given that we know that B is strictly lower triangular, it should in general be less sparse than A; thus we use a different setting for the sparsity prior, i.e. β_p = 100 and β_m = 0.01.
4 Sampling based inference
For a given permutation P, Gibbs sampling can be used for inference of the remaining parameters. Details of the Gibbs sampler are given in the supplementary material; here we focus on the non-standard inference corresponding to the sampling over permutations. There are basically two approaches to find P. One is to perform the inference for the parameters and P jointly, with B restricted to be triangular. The other is to let the factor model be unrestricted and search for P according to a criterion that does not affect parameter inference. Here we prefer the latter, for two reasons. First, joint combinatorial and parameter inference in this model will probably have poor mixing with slow convergence. Second, we are also interested in comparing the factor model against the DAG for cases when we cannot really assume that the data is well approximated by a DAG. In our approach the proposal P* corresponds to picking two of the elements in the order vector at random and exchanging them. Other approaches, such as restricting the proposal to two adjacent elements, have been suggested as well [16, 7]. For the linear DAG model we are not performing joint inference of P and the model parameters. Rather we use a set of Ps found for the factor model to be good candidates for the DAG.
The stochastic search for P = P_c goes as follows: we make inference for the unrestricted factor model, then propose P_r* and P_c* independently according to q(P_r*|P_r) q(P_c*|P_c), which is the uniform two-variable random exchange. With this proposal and the flat prior over P, we use a Metropolis-Hastings acceptance probability that is simply the ratio of likelihoods with A masked to have zeros above its diagonal (through the masking matrix M):

$$\xi_{\to\star} = \frac{\mathcal{N}\!\left(X \mid (P_r^\star)^{-1}\big(M \circ P_r^\star A (P_c^\star)^{-1}\big) P_c^\star Z,\ \Psi\right)}{\mathcal{N}\!\left(X \mid P_r^{-1}\big(M \circ P_r A P_c^{-1}\big) P_c Z,\ \Psi\right)}.$$

The procedure can be seen as a simple approach for generating hypotheses about good (close to triangular A) orderings in a model where the spike and slab prior provides a bias towards sparsity.
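As a concrete sketch of this search step, the Python below implements the uniform two-element exchange proposal and a Metropolis-Hastings acceptance based on a lower-triangular masking of the permuted mixing matrix. The function names are ours, and evaluating the masked likelihood in the permuted frame is one plausible reading of the expression above, not necessarily the authors' exact implementation:

```python
import numpy as np

def propose_swap(order, rng):
    """Uniform two-element exchange: pick two positions at random and swap."""
    new = order.copy()
    i, j = rng.choice(len(order), size=2, replace=False)
    new[i], new[j] = new[j], new[i]
    return new

def masked_loglik(X, A, Z, psi, pr, pc):
    """Gaussian log-likelihood with the permuted mixing matrix masked to be
    lower triangular.  Working in the permuted frame (rows of X and psi
    reordered by pr, rows of Z by pc) leaves the value unchanged; additive
    constants cancel in the acceptance ratio."""
    Am = np.tril(A[pr][:, pc])            # M o (permuted A)
    resid = X[pr] - Am @ Z[pc]
    return -0.5 * np.sum(resid ** 2 / psi[pr][:, None])

def mh_order_step(X, A, Z, psi, pr, pc, rng):
    """One Metropolis-Hastings update of (pr, pc) under a flat prior."""
    pr_new, pc_new = propose_swap(pr, rng), propose_swap(pc, rng)
    log_ratio = (masked_loglik(X, A, Z, psi, pr_new, pc_new)
                 - masked_loglik(X, A, Z, psi, pr, pc))
    accept = np.log(rng.random()) < log_ratio
    return (pr_new, pc_new) if accept else (pr, pc)
```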
To learn DAGs we first perform inference on the factor model specified by the hierarchy in (3) to obtain a set of ordering candidates, sorted according to their usage during sampling after the burn-in period. It is possible that the estimate of A contains errors, e.g. a false zero entry in A allowing several orderings and therefore several lower triangular versions of A, only one of them being actually correct. Thus, we propose to use not only the best candidate but a set of top candidates of size m_top = 10. Then we perform inference on the DAG model corresponding to the structure search hierarchy in (7) for each one of the permutation candidates being considered, P_r^(1), ..., P_r^(m_top). Finally, we select the DAG model among candidates using the predictive distribution for the DAG when a test set is available, or just the likelihood if not.
5 Predictive distributions and model comparison
Given that our model produces both DAG and factor model estimates at the same time, it can be interesting to estimate whether one option is better than the other given the observed data, for example in exploratory analysis when the DAG assumption is just one reasonable option. In order to perform the model comparison, we use predictive densities p(X*|X, M) with M = {M_FA, M_DAG}, instead of marginal likelihoods, because the latter are difficult and expensive to compute by sampling, requiring for example thermodynamic integration. With Gibbs sampling, we draw samples from the posterior distributions p(A, Ψ, λ|X, ·) and p(B, λ_1, ..., λ_m|X, ·). The average over the extensive variables associated with the test points, p(Z*|·), is a bit more complicated, because naively drawing samples from p(Z*|·) gives an estimator with high variance for ψ_i ≪ τ_jn. In the following we describe how to do it for each model, omitting the permutation matrices for clarity.
Factor model We can compute the predictive distribution by taking the likelihood in equation (3) and marginalizing Z. Since the integral has no closed form, we approximate it using the Gaussian distribution from the scale mixture representation as

$$p(X^\star \mid A, \Psi, \lambda) = \int p(X^\star \mid A, Z, \Psi)\, p(Z \mid \lambda)\, dZ \approx \prod_n \frac{1}{\mathrm{rep}} \sum_{r=1}^{\mathrm{rep}} \mathcal{N}\!\left(x^\star_n \mid 0,\ A U_n A^\top + \Psi\right),$$

where U_n = diag(τ_1n, ..., τ_dn), the τ_jn are sampled from the prior, and rep is the number of samples generated to approximate the intractable integral (rep = 500 in the experiments). Then we can average over p(A, Ψ, λ|X, ·) to obtain p(X*|X, M_FA).
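A Monte Carlo evaluation of this approximation might look as follows in Python. This is a sketch: the function name is ours, the exponential-rate convention for τ must be matched to equation (4), and for brevity one draw of the mixing variances is shared by all test points within a repetition, whereas the formula above draws τ_jn per point:

```python
import numpy as np

def predictive_fa(X_star, A, Psi, lam, rep=500, rng=None):
    """Monte Carlo approximation of p(x*_n | A, Psi, lam): draw the mixing
    variances tau from their exponential prior and average the resulting
    Gaussian densities N(x*_n | 0, A U A^T + Psi)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d, n = X_star.shape
    dens = np.zeros(n)
    for _ in range(rep):
        tau = rng.exponential(scale=1.0 / lam**2, size=A.shape[1])
        S = A @ np.diag(tau) @ A.T + np.diag(Psi)          # covariance of x*
        L = np.linalg.cholesky(S)
        sol = np.linalg.solve(L, X_star)                   # L^{-1} x*, columnwise
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        dens += np.exp(-0.5 * (np.sum(sol**2, axis=0) + logdet
                               + d * np.log(2.0 * np.pi)))
    return dens / rep
```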
DAG In this case the predictive distribution is rather easy, because the marginal over Z in equation (4) is just a Laplace distribution with mean B X*:

$$p(X^\star \mid B, \lambda) = \int p(X^\star \mid B, Z)\, p(Z \mid \lambda)\, dZ = \prod_{i,n} \mathrm{Laplace}\!\left(x^\star_{in} \mid [B X^\star]_{in}, \lambda_i\right),$$

where [B X*]_in is the element indexed by the i-th row and n-th column of B X*. In practice we compute the predictive densities for a particular X* during sampling and then select the model based on their ratio. Note that both predictive distributions depend directly on λ, the rate of the Laplace distribution, making the estimates highly dependent on its value. This is why it is important to have the hyperprior on λ of equation (5) instead of just fixing its value.
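The corresponding DAG predictive is a short exact computation; a minimal sketch (names ours, assuming the Laplace density (λ/2)·exp(−λ|x − m|) with per-row rates λ_i):

```python
import numpy as np

def log_predictive_dag(X_star, B, lam):
    """log p(X*|B, lam) for the linear DAG: each residual entry is Laplace
    with mean [B X*]_{in} and per-row rate lam_i."""
    lam = np.asarray(lam)[:, None]                  # (d, 1) rates
    resid = X_star - B @ X_star                     # x - Bx
    return float(np.sum(np.log(lam / 2.0) - lam * np.abs(resid)))
```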
6 Existing work
Among the existing approaches to DAG learning, our work is most closely related to LiNGAM (Linear Non-Gaussian Acyclic Model for causal discovery) [3], with several important differences. Since LiNGAM relies on fastICA, the learned mixing matrix is not inherently sparse; hence a pruning procedure based on a Wald statistic and second-order model-fit information has to be applied after obtaining an ordering for the variables. The order search in LiNGAM assumes that there are no estimation errors during fastICA model inference, so a single ordering candidate is produced. LiNGAM produces and selects a final model among several candidates, but in contrast to our method such candidates are not different DAGs with different variable orderings but DAGs with different sparsity levels. The factor model inference in LiNGAM, namely fastICA, is very efficient; however, their structure search involves repeated inversions of matrices of size d² × d², which can make it prohibitive for large problems. More explicitly, the computational complexity of LiNGAM is roughly O(N_fit d⁶), where N_fit is the number of model fit evaluations. In contrast, the complexity in our case is O(N_ite d² N), where N_ite is the total number of samples, including burn-in periods, for both the factor model and DAG inferences. Finally, our model is more principled in the sense that the whole approach is within the same Bayesian framework; as a result it can be extended to, for example, binary data or time series by selecting suitable prior distributions.
Much work on Bayesian models for DAG learning already exists. For example, the approach presented in [16] is a Gaussian Bayesian network and therefore suffers from lack of identifiability. Besides, order search is performed directly on the DAG model, making necessary the use of longer sampler runs with a number of computational tricks when the problem is large (d > 10), i.e. when exhaustive order enumeration is not an option.
7 Experiments
We consider four sets of experiments in the following. The first two consist of extensive experiments using artificial data, the third addresses the model comparison scenario, and the last one uses real data previously published in [17]. In every case we ran 2000 samples after a burn-in period of 4000 iterations and three independent chains for the factor model, and a single chain with 1000 samples and 2000 as burn-in for the DAG.² Hyperparameter settings are discussed in Section 3.
LiNGAM suite We evaluate the performance of our model against LiNGAM³ using the artificial model generator presented in [3]. The generator produces both dense and sparse networks with different degrees of sparsity; Z is generated from a non-Gaussian heavy-tailed distribution, X is generated using equation (1) and then randomly permuted to hide the correct order P. For the experiment we generated 1000 different datasets/models using d = {5, 10}, N = {200, 500, 1000, 2000}, and the DAG was selected using the (training set) likelihood in equation (7). Results are summarized in Figure 2 using several performance measures. For the particular case of the area under the ROC curve (AUC), we use the conditional posterior of the masking matrix, i.e. p(R|X, ·), where R is the matrix with elements r_ij. AUC is an important measure because it quantifies how the model accounts for the uncertainty about the presence or absence of links in the DAG. Such uncertainty assessment is not possible in LiNGAM, where the probability of having a link is simply zero or one; however, the AUC can still be computed.
[Figure 2 here: panels (a)-(d) plot the performance measures listed in the caption against N ∈ {200, 500, 1000, 2000}, for d = 5 and d = 10, ours vs. LiNGAM.]
Figure 2: Performance measures for LiNGAM suite. Symbols are: square for 5 variables, star for 10
variables, solid line for sFA and dashed line for LiNGAM. (a) True positive rate. (b) True negative
rate. (c) Frequency of AUC being greater than 0.9. (d) Number of estimated correct orderings.
In terms of true negative rate, AUC, and ordering error rate, our approach is significantly better than LiNGAM. The true positive rate results in Figure 2(a) show that LiNGAM outperforms our approach only for N = 2000. However, comparing this to the true negative rate, it seems that LiNGAM prefers denser models, which could be an indication of overfitting. Looking at the ordering errors, our model is clearly superior. It is important to mention that being able to compute a probability for a link in the DAG to be zero, p(b_ij ≠ 0|X, ·), turns out to be very useful in practice, for example to reject links with high uncertainty or to rank them. To give an idea of running times on a regular two-core 2.5GHz machine, for d = 10 and N = 500: LiNGAM took on average 10 seconds and our method 170 seconds. However, when doubling the number of variables the times were 730 and 550 seconds for LiNGAM and our method, respectively, which is in agreement with our complexity estimates.
² Source code available upon request (C with Matlab interface).
³ Matlab package available at http://www.cs.helsinki.fi/group/neuroinf/lingam/.
Bayesian networks repository Next we compare against some of the state-of-the-art (Gaussian) approaches to DAG learning on 7 well-known structures⁴, namely alarm, barley, carpo, hailfinder, insurance, mildew and water (d = 37, 48, 61, 56, 27, 35, 32, respectively). A single dataset of size 1000 per structure was generated using a procedure similar to the one used before. Apart from ours (sFA), we considered the following methods⁵: standard DAG search (DS), order-search (OS), sparse-candidate pruning then DAG-search (DSC) [6], L1MB then DAG-search (DSL) [8], and sparse-candidate pruning then order-search (OSC) [7]. Results are shown in Figure 3, including the number of reversed links found due to ordering errors.
[Figure 3 here: panels (a)-(d) show, per structure (alarm, barley, carpo, hailfinder, insurance, mildew, water) and per method (DS, OS, OSC, DSC, DSL, sFA), the false positive rate, false negative rate, AUC and fraction of reversed links.]
Figure 3: Performance measures for Bayesian networks repository experiments.
In this case, our approach obtained slightly better results when looking at the false positive rate, Figure 3(a). The true negative rate is comparable to the other methods, suggesting that our model is in some cases sparser than the others. AUC estimates are significantly better because we have continuous probabilities for links being zero (in the other methods we had to use a binary value). From Figure 3(d), the number of reversed links in the other methods is quite high, as expected due to lack of identifiability. Our model produced a small number of reversed links because it was not able to find any of the true orderings, but indeed something quite close. These results could be improved by running the sampler for a longer time or by considering more candidates. We also tried to run the other approaches with data generated from Gaussian distributions, but the results were approximately equal to those shown in Figure 3. On the other hand, our approach performs similarly, but the number of reversed links increases significantly since the model is no longer identifiable. The most important advantage of the (Gaussian) methods used in this experiment is their speed. In all cases they are considerably faster than sampling-based methods. Their speed makes them very suitable for large scale problems, regardless of their identifiability issues.
Model comparison For this experiment we generated 1000 different datasets/models with d = 5 and N = {500, 1000} in a similar way to the first experiment, but this time we selected the true model to be a factor model or a DAG uniformly at random. In order to generate a factor model, we basically just need to make sure that A cannot be permuted to a triangular form. We kept 20% of the data to compute the predictive densities and then select between all estimated DAG candidates and the factor model. We found that for N = 500 our approach was able to select true DAGs 91.5% of the time and true factor models 89.2% of the time, corresponding to an overall error of 9.6%. For N = 1000 the true DAG and true factor model rates increased to 98.5% and 94.6%, respectively. These results demonstrate that our approach is very effective at selecting the true underlying structure in the data between the two proposed hypotheses.
Protein-signaling network The dataset introduced in [17] consists of flow cytometry measurements of 11 phosphorylated proteins and phospholipids (Raf, Erk, p38, Jnk, Akt, Mek, PKA, PKC, PIP2, PIP3, PLCγ). Each observation is a vector of quantitative amounts measured from single cells, generated from a series of stimulatory cues and inhibitory interventions. The dataset contains both observational and experimental data. Here we are only using the 1755 samples corresponding to pure observational data, and we randomly selected 20% of the data to compute the predictive densities.
⁴ http://compbio.cs.huji.ac.il/Repository/.
⁵ Parameters: 10000 iterations, 5 candidates (SC, DSC), max fan-in of 5 (OS, OSC), and Or strategy and MDL penalty (DSL).
[Figure 4 here: panels (a)-(c) are network diagrams over Raf, Mek, Erk, p38, Jnk, PKC, PKA, Akt, PLCγ, PIP2 and PIP3, with link probabilities annotated in (c); panels (d) and (e) show predictive-density curves and likelihood ratio/accuracy per ordering candidate.]
Figure 4: Results for the protein-signaling network. (a) Textbook signaling network as reported in [17]. (b) Estimated structure using Bayesian networks [17]. (c) Estimated structure using our model. (d) Test likelihoods for the best ordering DAG (dashed) and the factor model (solid). (e) Likelihood ratios (solid) and structure errors (dashed) for all candidates considered by our method, and their usage. The Bayesian network is not able to identify the direction of the links with only observational data.
Using the entire set would produce a richer model; however, interventions are out of the scope of this paper. The textbook ground truth and results are presented in Figure 4. Of the 21 possible links in Figure 4(a), the model from [17] was able to find 9, but also one falsely added link. In 4(b), a marginal likelihood equivalent prior is used, and therefore no inferences about directionality can be made from observational data alone. Our model in Figure 4(c) was able to find 10 true links, one falsely added link, and only two reversed links (RL); one of them is PIP2 → PIP3, which according to the ground truth is bidirectional, and the other one, PLCγ → PIP3, was also found reversed using experimental data in [17]. Note from Figure 4(e) that the predictive density ratios correlate quite well with the structural accuracy. The predictive densities for the best candidate (sixth in Figure 4(e)) are shown in Figure 4(d) and suggest that the factor model is the better option, which makes sense considering that the estimated DAG in Figure 4(c) is a substructure of the ground truth. We also examined the estimated factor model and found that three factors could correspond to unmeasured proteins (PI3K, MKK and IP3); see Figure 2 and Table 3 in [17]. We also tried the methods above. Results were very similar to our method in terms of true positives (≈ 9) and true negatives (≈ 32); however, none of them was able to produce fewer than 6 reversed links, which corresponds to approximately two-thirds of the total true positives.
8 Discussion
We have proposed a novel approach to perform inference and model comparison of sparse factor models and DAGs within the same framework. The key ingredients for both Bayesian models are spike and slab priors to promote sparsity, heavy-tailed priors to ensure identifiability, and predictive densities to perform the comparison. A set of candidate orderings is produced by the factor model. Subsequently, a linear DAG is learned for each of the candidates. To the authors' knowledge, this is the first time that a method for comparing such closely related linear models has been proposed. This setting can be very beneficial in situations where the prior evidence suggests both DAG structure and/or unmeasured variables in the data. For example, in the protein signaling network [17], the textbook ground truth suggests both DAG structure and a number of unmeasured proteins. The previous approach [17] only performed structure learning in DAGs, but our results suggest that the data is better explained by the factor model. For further exploration of this data set, we would need to modify our approach to handle hybrid models, i.e. graphs with directed/undirected links and observed/latent nodes, as well as being able to use experimental data. Our Bayesian hierarchical approach is very flexible; we are currently investigating extensions to other source distributions (non-parametric Dirichlet processes, temporal Gaussian processes, and discrete distributions).
References
[1] M. West. Bayesian factor regression models in the "large p, small n" paradigm. In J. Bernardo, M. Bayarri, J. Berger, A. Dawid, D. Heckerman, A. Smith, and M. West, editors, Bayesian Statistics 7, pages 723-732. Oxford University Press, 2003.
[2] J. Lucas, C. Carvalho, Q. Wang, A. Bild, J. R. Nevins, and M. West. Bayesian Inference for Gene Expression and Proteomics, chapter Sparse Statistical Modeling in Gene Expression Genomics, pages 155-176. Cambridge University Press, 2006.
[3] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, October 2006.
[4] D. M. Chickering. Learning Bayesian networks is NP-complete. In D. Fisher and H.-J. Lenz, editors, Learning from Data: AI and Statistics, pages 121-130. Springer-Verlag, 1996.
[5] I. Tsamardinos, L. E. Brown, and C. F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, October 2006.
[6] N. Friedman, I. Nachman, and D. Pe'er. Learning Bayesian network structure from massive datasets: The "sparse candidate" algorithm. In K. B. Laskey and H. Prade, editors, UAI, pages 206-215, 1999.
[7] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In UAI, pages 548-549, 2005.
[8] M. W. Schmidt, A. Niculescu-Mizil, and K. P. Murphy. Learning graphical model structure using L1-regularization paths. In AAAI, pages 1278-1283, 2007.
[9] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197-243, January 1995.
[10] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, March 2000.
[11] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3):287-314, December 1994.
[12] C. M. Carvalho, J. Chang, J. E. Lucas, J. R. Nevins, Q. Wang, and M. West. High-dimensional sparse factor modeling: Applications in gene expression genomics. Journal of the American Statistical Association, 103(484):1438-1456, December 2008.
[13] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley-Interscience, May 2001.
[14] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society: Series B (Methodology), 36(1):99-102, 1974.
[15] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681-686, June 2008.
[16] N. Friedman and D. Koller. Being Bayesian about network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 50(1-2):95-125, January 2003.
[17] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger, and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, April 2005.
3,165 | 3,868 | Slow, Decorrelated Features for
Pretraining Complex Cell-like Networks
Yoshua Bengio
University of Montreal
[email protected]
James Bergstra
University of Montreal
[email protected]
Abstract
We introduce a new type of neural network activation function based on recent
physiological rate models for complex cells in visual area V1. A single-hiddenlayer neural network of this kind of model achieves 1.50% error on MNIST.
We also introduce an existing criterion for learning slow, decorrelated features
as a pretraining strategy for image models. This pretraining strategy results in
orientation-selective features, similar to the receptive fields of complex cells. With
this pretraining, the same single-hidden-layer model achieves 1.34% error, even
though the pretraining sample distribution is very different from the fine-tuning
distribution. To implement this pretraining strategy, we derive a fast algorithm for
online learning of decorrelated features such that each iteration of the algorithm
runs in linear time with respect to the number of features.
1 Introduction
Visual area V1 is the first area of cortex devoted to handling visual input in the human visual system (Dayan & Abbott, 2001). One convenient simplification in the study of cell behaviour is to
ignore the timing of individual spikes, and to look instead at their frequency. Some cells in V1
are described well by a linear filter that has been rectified to be non-negative and perhaps bounded.
These so-called simple cells are similar to sigmoidal activation functions: their activity (firing frequency) is greater as an image stimulus looks more like some particular linear filter. However, these
simple cells are a minority in visual area V1 and the characterization of the remaining cells there
(and even beyond in visual areas V2, V4, MT, and so on) is a very active area of ongoing research.
Complex cells are the next-simplest kind of cell. They are characterized by an ability to respond to
narrow bars of light with particular orientations in some region (translation invariance) but to turn off
when all those overlapping bars are presented at once. This non-linear response has been modeled
by quadrature pairs (Adelson & Bergen, 1985; Dayan & Abbott, 2001): pairs of linear filters with
the property that the sum of their squared responses is constant for an input image with particular
spatial frequency and orientation (i.e. edges). It has also been modeled by max-pooling across two
or more linear filters (Riesenhuber & Poggio, 1999). More recently, it has been argued that V1 cells
exhibit a range of behaviour that blurs distinctions between simple and complex cells and between
energy models and max-pooling models (Rust et al., 2005; Kouh & Poggio, 2008; Finn & Ferster,
2007).
Another theme in neural modeling is that cells do not react to single images, they react to image
sequences. It is a gross approximation to suppose that each cell implements a function from image
to activity level. Furthermore, the temporal sequence of images in a video sequence contains a lot
of information about the invariances that we would like our models to learn. Throwing away that
temporal structure makes learning about objects from images much more difficult. The principle
of identifying slowly moving/changing factors in temporal/spatial data has been investigated by
many (Becker & Hinton, 1993; Wiskott & Sejnowski, 2002; Hurri & Hyvärinen, 2003; Körding et al., 2004; Cadieu & Olshausen, 2009) as a principle for finding useful representations of images, and as an explanation for why V1 simple and complex cells behave the way they do. A good
overview can be found in (Berkes & Wiskott, 2005).
This work follows the pattern of initializing neural networks with unsupervised learning (pretraining) before fine-tuning with a supervised learning criterion. Supervised gradient descent explores the
parameter space sufficiently to get low training error on smaller training sets (tens of thousands of
examples, like MNIST). However, models that have been pretrained with appropriate unsupervised
learning procedures (such as RBMs and various forms of auto-encoders) generalize better (Hinton
et al., 2006; Larochelle et al., 2007; Lee et al., 2008; Ranzato et al., 2008; Vincent et al., 2008).
See Bengio (2009) for a comprehensive review and Erhan et al. (2009) for a thorough experimental
analysis of the improvements obtained. It appears that unsupervised pretraining guides the learning
dynamics in better regions of parameter space associated with basins of attraction of the supervised
gradient procedure corresponding to local minima with lower generalization error, even for very
large training sets (unlike other regularizers whose effects tend to quickly vanish on large training
sets) with millions of examples.
Recent work in the pretraining of neural networks has taken a generative modeling perspective. For
example, the Restricted Boltzmann Machine is an undirected graphical model, and training it (by
maximum likelihood) as such has been demonstrated to also be a good initialization. However, it is
an interesting open question whether a better generative model is necessarily (or even typically) a
better point of departure for fine-tuning. Contrastive divergence (CD) is not maximum likelihood,
and works just fine as pretraining. Reconstruction error is an even poorer approximation of the
maximum likelihood gradient, and sometimes works better than CD (with additional twists like
sparsity or the denoising of (Vincent et al., 2008)).
The temporal coherence and decorrelation criterion is an alternative to training generative models
such as RBMs or auto-encoder variants. Recently (Mobahi et al., 2009) demonstrated that a slowness
criterion regularizing the top-most internal layer of a deep convolutional network during supervised
learning helps their model to generalize better. Our model is similar in spirit to pre-training with
the semi-supervised embedding criterion at each level (Weston et al., 2008; Mobahi et al., 2009),
but differs in the use of decorrelation as a mechanism for preventing trivial solutions to a slowness
criterion. Whereas RBMs and denoising autoencoders are defined for general input distributions,
the temporal coherence and decorrelation criterion makes sense only in the context of data with
slowly-changing temporal or spatial structure, such as images, video, and sound.
In the same way that simple cell models were the inspiration for sigmoidal activation units in artificial neural networks and validated simple cell models, we investigate in artificial neural network
classifiers the value of complex cell models. This paper builds on these results by showing that
the principle of temporal coherence is useful for finding initial conditions for the hidden layer of
a neural network that biases it towards better generalization in object recognition. We introduce
temporal coherence and decorrelation as a pretraining algorithm. Hidden units are initialized so that
they are invariant to irrelevant transformations of the image, and sensitive to relevant ones. In order
for this criterion to be useful in the context of large models, we derive a fast online algorithm for
decorrelating units and maximizing temporal coherence.
2 Algorithm
2.1 Slow, decorrelated feature learning algorithm
(Körding et al., 2004) introduced a principle (and training criterion) to explain the formation of
complex cell receptive fields. They based their analysis on the complex-cell model of (Adelson &
Bergen, 1985), which describes a complex cell as a pair of half-rectified linear filters whose outputs
are squared and added together and then a square root is applied to that sum.
Suppose x is an input image and we have F complex cells h_1, ..., h_F such that h_i = √((u_i · x)² + (v_i · x)²). (Körding et al., 2004) showed that by minimizing the following cost,

$$L_{K2004} = \lambda \sum_{i \neq j} \frac{\mathrm{Cov}_t(h_i, h_j)^2}{\mathrm{Var}(h_i)\,\mathrm{Var}(h_j)} + \sum_t \sum_i \frac{(h_{i,t} - h_{i,t-1})^2}{\mathrm{Var}(h_i)} \tag{1}$$
over consecutive natural movie frames (with respect to model parameters), the filters ui and vi of
each complex cell form local Gabor filters whose phases are offset by about 90 degrees, like the sine
and cosine curves that implement a Fourier transform.
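The quadrature-pair intuition can be checked directly: for sine/cosine filters at a given frequency, the energy response is constant under phase shifts (translations) of a matching sinusoidal input. A small sketch with our own toy filters:

```python
import numpy as np

# A quadrature pair: sine and cosine filters at one spatial frequency.
n = 64
t = np.arange(n)
freq = 2 * np.pi * 4 / n        # exactly 4 cycles over the window
u = np.sin(freq * t)
v = np.cos(freq * t)

# The energy response sqrt((u.x)^2 + (v.x)^2) is invariant to the phase
# (translation) of a matching sinusoidal input.
for phase in (0.0, 0.7, 1.9):
    x = np.cos(freq * t + phase)
    h = np.sqrt((u @ x) ** 2 + (v @ x) ** 2)
    print(round(float(h), 6))   # identical value for every phase
```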
The criterion in Equation 1 requires a batch minimization algorithm because of the variance and
covariance statistics that must be collected. This makes the criterion too slow for use with large
datasets. At the same time, the size of the covariance matrix is quadratic in the number of features, so
it is computationally expensive (perhaps prohibitively) to apply the criterion to train large numbers
of features.
2.1.1 Online Stochastic Estimation of Covariance
This section presents an algorithm for approximately minimizing LK2004 using an online algorithm
whose iterations run in linear time with respect to the number of features. One way to apply the
criterion to large or infinite datasets is by estimating the covariance (and variance) from consecutive
minibatches of N movie frames. Then the cost can be minimized by stochastic gradient descent.
We used an exponentially-decaying moving average to track the mean of each feature over time.
$$\bar{h}_i(t) = \gamma\,\bar{h}_i(t-1) + (1-\gamma)\,h_i(t)$$

For good results, γ should be chosen so that the estimates change very slowly. We used a value of γ = 1.0 − 5.0 × 10⁻⁵.
Then we estimated the variance of each feature over a minibatch like this:
$$\mathrm{Var}(h_i) \approx \frac{1}{N-1} \sum_{\tau=t}^{t+N-1} \left(h_i(\tau) - \bar{h}_i(t)\right)^2$$
With this mean and variance, we computed normalized features for each minibatch:
$$z_i(t) = \frac{h_i(t) - \bar{h}_i(t)}{\sqrt{\mathrm{Var}(h_i) + 10^{-10}}}$$
Letting Z denote an F × N matrix with N columns of F normalized feature values, we estimate the correlation between features h_i by the covariance of these normalized features: C(t) = (1/N) Z(t) Z(t)ᵀ.
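A compact way to realize this bookkeeping in Python; this is a sketch (the class name is ours, the 1e-10 floor and γ mirror the constants above, and the variance is re-estimated around the mean obtained after processing the whole minibatch, a minor simplification):

```python
import numpy as np

class RunningNormalizer:
    """Per-feature normalization for minibatches of frames: a slow exponential
    moving average tracks the mean, and the variance is re-estimated on each
    minibatch around that mean (a small floor keeps the division safe)."""
    def __init__(self, n_features, gamma=1.0 - 5e-5):
        self.gamma = gamma
        self.mean = np.zeros(n_features)

    def __call__(self, H):
        # H: (F, N) activations for one minibatch of N consecutive frames.
        for t in range(H.shape[1]):                    # update the slow mean
            self.mean = self.gamma * self.mean + (1.0 - self.gamma) * H[:, t]
        var = np.sum((H - self.mean[:, None]) ** 2, axis=1) / (H.shape[1] - 1)
        return (H - self.mean[:, None]) / np.sqrt(var[:, None] + 1e-10)
```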
We can now write down L(t), a minibatch-wise approximation to Eq. 1:
$$L(t) = \lambda \sum_{i \neq j} C_{ij}^2(t) + \sum_{\tau=0}^{N-1} \sum_i \left(z_i(t+\tau) - z_i(t+\tau-1)\right)^2 \tag{2}$$
The time complexity of evaluating L(t) from Z using this expression is O(F²N + NF). In practice we use small minibatches and our model has lots of features, so the fact that the time complexity of the algorithm is quadratic in F is troublesome.
There is, however, a way to compute this value exactly in time linear in F . The key observation
is that the sum of the squared elements of C can be computed from the N ? N Gram matrix
G(t) = Z(t)ᵀ Z(t).
$$\begin{aligned}
\sum_{i=1}^F \sum_{j=1}^F C_{ij}^2(t) &= \mathrm{Tr}(C(t)C(t)) = \frac{1}{N^2}\mathrm{Tr}\!\left(Z(t)Z(t)^\top Z(t)Z(t)^\top\right) = \frac{1}{N^2}\mathrm{Tr}\!\left(Z(t)^\top Z(t) Z(t)^\top Z(t)\right) \\
&= \frac{1}{N^2}\mathrm{Tr}(G(t)G(t)) = \frac{1}{N^2}\mathrm{Tr}(G(t)G(t)^\top) = \frac{1}{N^2}\sum_{k=1}^N \sum_{l=1}^N G_{kl}^2(t) \doteq \frac{1}{N^2}\left|Z(t)^\top Z(t)\right|^2
\end{aligned}$$
Subtracting the C_ii² terms from the sum of all squared elements lets us rewrite Equation 2 in a way that suggests the linear-time implementation:
$$L(t) = \frac{\lambda}{N^2}\left(\left|Z(t)^\top Z(t)\right|^2 - \sum_{i=1}^{F}\Big(\sum_{\tau=1}^{N} z_i(\tau)^2\Big)^{2}\right) + \sum_{i=1}^{F} \sum_{\tau=1}^{N-1} \left(z_i(\tau) - z_i(\tau-1)\right)^2 \tag{3}$$
The time complexity of computing L(t) using Equation 3 from Z(t) is O(N²F). The sum of squared correlations is still the most expensive term, but for the case where N ≪ F, this expression makes the computation of L(t) linear in F. Considering that each iteration treats N training examples, the per-training-example cost of this algorithm can be seen as O(NF). In implementation, an additional factor of two in runtime can be obtained by computing only half of the Gram matrix G, which is symmetric.
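The two evaluations of the decorrelation term agree exactly, which is easy to check numerically. The sketch below (function names and test data ours; lam is the decorrelation weight λ) implements the direct form of eq. (2) and the Gram-matrix form of eq. (3), using the within-minibatch slowness differences in both:

```python
import numpy as np

def cost_quadratic(Z, lam=1.0):
    """Direct O(F^2 N) evaluation of the minibatch cost on normalized Z (F x N).
    The slowness term uses the N - 1 differences inside the minibatch."""
    N = Z.shape[1]
    C = Z @ Z.T / N
    decor = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)       # sum over i != j
    slow = np.sum((Z[:, 1:] - Z[:, :-1]) ** 2)
    return lam * decor + slow

def cost_linear(Z, lam=1.0):
    """O(N^2 F) evaluation of eq. (3) via the N x N Gram matrix G = Z^T Z."""
    N = Z.shape[1]
    G = Z.T @ Z
    decor = (np.sum(G ** 2) - np.sum(np.sum(Z ** 2, axis=1) ** 2)) / N ** 2
    slow = np.sum((Z[:, 1:] - Z[:, :-1]) ** 2)
    return lam * decor + slow

Z = np.random.default_rng(0).normal(size=(300, 40))        # F = 300, N = 40
print(np.allclose(cost_quadratic(Z), cost_linear(Z)))      # True
```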
2.2 Complex-cell activation function
Recently, (Rust et al., 2005) have argued that existing models, such as that of (Adelson & Bergen,
1985) cannot account for the variety of behaviour found in visual area V1. Some complex cells
behave like simple cells to some extent and vice versa; there is a continuous range of simple to complex cells. They put forward a similar but more involved expression that can capture the simple and
complex cells as special cases, but ultimately parameterizes a larger class of cell-response functions
(Eq. 4).
$$a + \frac{\left(\alpha \max(0, wx)^2 + \sum_{i=1}^{I} (u^{(i)}x)^2\right)^{\zeta} - \beta \left(\sum_{j=1}^{J} (v^{(j)}x)^2\right)^{\zeta}}{1 + \gamma \left(\alpha \max(0, wx)^2 + \sum_{i=1}^{I} (u^{(i)}x)^2\right)^{\zeta} + \epsilon \left(\sum_{j=1}^{J} (v^{(j)}x)^2\right)^{\zeta}} \tag{4}$$
The numerator in Eq. 4 describes the difference between an excitation term and a shunting inhibition term. The denominator acts to normalize this difference. Parameters w, u^(i), v^(j) have the same shape as the input image x, and can be thought of as image filters like the first layer of a neural network or the codebook of a sparse-coding model. The parameters a, α, β, γ, ε, ζ are scalars that control the range and shape of the activation function, given all the filter responses. The numbers I and J of quadratic filters required to explain a particular cellular response were on the order of 2-16.
We introduce the approximation in Equation 5 because it is easier to learn by gradient descent. We replaced the max operation with a softplus(x) = log(1 + eˣ) function so that there is always a gradient on w and b, even when wx + b is negative. We fixed the scalar parameters to prevent the system from entering regimes of extreme non-linearity. We fixed α, β, γ, ε to 1, and a to 0. We chose to fix the exponent ζ to 0.5 because (Rust et al., 2005) found that values close to 0.5 offered good fits to cell firing-rate data. Future work might look at choosing these constants in a principled way or adapting them; we found that these values worked well. The range of this activation function (as a function of x) is a connected subset of the (−1, 1) interval. However, the whole (−1, 1) range is not always available, depending on the parameters. If the inhibition term is always 0, for example, then the activation function will be non-negative.
$$\frac{\sqrt{\log(1 + e^{wx+b})^2 + \sum_{i=1}^{I} (u^{(i)}x)^2} \;-\; \sqrt{\sum_{j=1}^{J} (v^{(j)}x)^2}}{1.0 + \sqrt{\log(1 + e^{wx+b})^2 + \sum_{i=1}^{I} (u^{(i)}x)^2} \;+\; \sqrt{\sum_{j=1}^{J} (v^{(j)}x)^2}} \tag{5}$$
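A direct implementation of this activation for a single input vector might look as follows (a sketch; the function names and array-shape conventions are ours):

```python
import numpy as np

def softplus(a):
    return np.logaddexp(0.0, a)          # log(1 + exp(a)), numerically stable

def complex_cell(x, w, b, U, V):
    """Eq. (5) for a single input vector x: U holds the I excitatory quadratic
    filters as rows, V the J inhibitory ones; the output lies in (-1, 1)."""
    excite = np.sqrt(softplus(w @ x + b) ** 2 + np.sum((U @ x) ** 2))
    inhibit = np.sqrt(np.sum((V @ x) ** 2))
    return (excite - inhibit) / (1.0 + excite + inhibit)
```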
3 Results
Classification results were obtained by adding a logistic regression model on top of the features
learned, and treating the resulting model as a single-hidden-layer neural network. The weights of
the logistic regression were always initialized to zero.
All work was done on 28x28 images (MNIST-sized), using a model with 300 hidden units. Each
hidden unit had one linear filter w, a bias b, two quadratic excitatory filters u1 , u2 and two quadratic
inhibitory filters v1 , v2 . The computational cost of evaluating each unit was thus five times the cost
of evaluating a normal sigmoidal activation function of the form tanh(wᵀx + b).
3.1 Random initialization
As a baseline, our model parameters were initialized to small random weights and used as the hidden
layer of a neural network. Training this randomly-initialized model by stochastic gradient descent
yielded test-set performance of 1.56% on MNIST.
The filters learned by this procedure looked somewhat noisy for the most part, but had low-frequency
trends. For example, some of the quadratic filters had small local Gabor-like filters. We believe that
these phase-offset pairs of Gabor-like functions allow the units to implement some shift-invariant
response to edges with a specific orientation (Fig. 1).
Figure 1: Four of the three hundred activation functions learned by training our model from random
initialization to perform classification. Top row: the red and blue channels are the two quadratic
filters of the excitation term. Bottom row: the red and blue channels are the two quadratic filters of
the shunting inhibition term. Training approximately yields locally orientation-selective edge filters,
opposite-orientation edges are inhibitory.
3.2 Pretraining with natural movies
Under the hypothesis that the matched Gabor functions (see Fig. 1) allowed our model to generalize
better across slight translations of the image, we appealed to a pretraining process to initialize our
model with values better than random noise.
We pretrained the hidden layer according to the online version of the cost in Eq. 3, using movies
(MIXED-movies) made by sliding a 28 x 28 pixel window across large photographs. Each of these
movies was short (just four frames long) and ten movies were used in each minibatch (N = 40). The
sliding speed was sampled uniformly between 0.5 and 2 pixels per frame. The sliding direction was
sampled uniformly from 0 to 2?. The sliding initial position was sampled uniformly from image
coordinates. Any sampled movie that slid off of the underlying image was rejected. We used two
photographs to generate the movies. The first photograph was a grey-scale forest scene (resolution
1744x1308). The second photograph was a tiling of 100x100 MNIST digits (resolution 2800x2800).
As a result of this procedure, digits are not at all centered in MIXED-movies: there might be part of a '3' in the upper-left part of a frame, and part of a '7' in the lower right.
The shunting inhibition filters (v1 , v2 ) learned after five hundred thousand movies (fifty thousand
iterations of stochastic gradient descent) are shown in Figure 2. The filters learn to implement
orientation-selective, shift-invariant filters at different spatial frequencies. The filters shown in figure 2 have fairly global receptive fields, but smaller more local receptive fields were obtained by
applying ℓ1 weight-penalization during pretraining. The λ parameter that balances decorrelation
and slowness was chosen manually on the basis of the trained filters. We were looking for a diversity of filters with relatively low spatial frequency. The excitatory filters learned similar Gabor pairs
but the receptive fields tended to be both smaller (more localized) and lower-frequency. Fine-tuning
this pre-trained model with a learning rate of 0.003 and L1 weight decay of 10⁻⁵ yielded a test
error rate of 1.34% on MNIST.
3.3 Pretraining with MNIST movies
We also tried pretraining with videos whose frames follow a similar distribution to the images used
for fine-tuning and testing. We created MNIST movies by sampling an image from the training set,
and moving around (translating it) according to a Brownian motion. The initial velocity was sampled
from a zero-mean normal distribution with std-deviation 0.2. Changes in that velocity between each frame were also sampled from a zero-mean normal distribution with std-deviation 0.2.
Figure 2: Filters from some of the units of the model, pretrained on small sliding image patches from
two large images. The features learn to be direction-selective for moving edges by approximately
implementing windowed Fourier transforms. These features have global receptive field, but become
more local when an `1 weight penalization is applied during pretraining. Excitatory filters looked
similar, but tended to be more localized and with lower spatial frequency (fewer, shorter, broader
stripes). Columns of the figure are arranged in triples: linear filter w in grey, u(1) , u(2) in red and
green, v (1) , v (2) in blue and green.
Furthermore, the
digit image in each frame was modified according to a randomly chosen elastic deformation, as
in (Loosli et al., 2007). As before, movies of four frames were created in this way and training
was conducted on minibatches of ten movies (N = 4 × 10 = 40). Unlike the MNIST frames in
MIXED-movies, the frames of MNIST-movies contain a single digit that is approximately centered.
The activation functions learned by minimizing Equation 3 on these MNIST movies were qualitatively different from the activation functions learned from the MIXED movies. The inhibitory
weights (v1, v2) learned from MNIST movies are shown in Figure 3. Once again, the inhibitory weights
exhibit the narrow red and green stripes that indicate edge-orientation selectivity. But this time they
are not parallel straight stripes, they follow contours that are adapted to digit edges. The excitation filters u1 , u2 were also qualitatively different. Instead of forming localized Gabor pairs, some
formed large smooth blob-like shapes but most converged toward zero. Fine-tuning this pre-trained
model with a learning rate of 0.003 and L1 weight decay of 10⁻⁵ yielded a test error rate of 1.37
% on MNIST.
Figure 3: Filters of our model, pretrained on movies of centered MNIST training images subjected
to Brownian translation. The features learn to be direction-selective for moving edges by approximately implementing windowed Fourier transforms. The filters are tuned to the higher spatial
frequency in MNIST digits, as compared with the natural scene. Columns of the figure are arranged
in triples: linear filter w in grey, u(1) , u(2) in red and green, v (1) , v (2) in blue and green.
6
Table 1: Generalization error (% error) from 100 labeled MNIST examples after pretraining on MIXED-movies and MNIST-movies.

Pre-training Dataset    Number of pretraining iterations (×10⁴)
                           0      1      2      3      4      5
MIXED-movies            23.1   21.2   20.8   20.8   20.6   20.6
MNIST-movies            23.1   19.0   18.7   18.8   18.4   18.6
Discussion
The results on MNIST compare well with many results in the literature. A single-hidden layer neural
network of sigmoidal units can achieve 1.8% error by training from random initial conditions, and
our model achieves 1.5% from random initial conditions. A single-hidden layer sigmoidal neural
network pretrained as a denoising auto-encoder (and then fine-tuned) can achieve 1.4% error on
average, and our model is able to achieve 1.34% error from many different fine-tuned models (Erhan
et al., 2009). Gaussian SVMs trained just on the original MNIST data achieve 1.4%; our pretraining
strategy allows our single-layer model be better than Gaussian SVMs (Decoste & Sch?olkopf, 2002).
Deep learning algorithms based on denoising auto-encoders and RBMs are typically able to achieve
slightly lower scores in the range of 1.2 ? 1.3% (Hinton et al., 2006; Erhan et al., 2009). The
best convolutional architectures and models that have access to enriched datasets for fine-tuning can
achieve classification accuriacies under 0.4% (Ranzato et al., 2007). In future work, we will explore
strategies for combining these methods and with our decorrelation criterion to train deep networks
of models with quadratic input interactions. We will also look at comparative performance on a
wider variety of tasks.
4.1 Transfer learning, the value of pretraining
To evaluate our unsupervised criterion of slow, decorrelated features as a pretraining step for classification by a neural network, we fine-tuned the weights obtained after ten, twenty, thirty, forty, and fifty thousand iterations of unsupervised learning. We used only a small subset (the first 100 training examples) from the MNIST data to magnify the importance of pre-training. The results are listed in Table 1. Training from random initial weights led to 23.1% error. The value of pretraining is evident right away: after two unsupervised passes over the MNIST training data (100K movies and 10K iterations), the weights have been initialized better. Fine-tuning the weights learned on the MIXED-movies led to a test error rate of 21.2%, and fine-tuning the weights learned on the MNIST-movies led to a test error rate of 19.0%. Further pretraining offers a diminishing marginal return, although after ten unsupervised passes through the training data (500K movies) there is no evidence of over-pretraining. The best score (20.6%) on MIXED-movies occurs at both eight and ten unsupervised passes, and the best score on MNIST-movies (18.4%) occurs after eight. A larger test set would be required to make a strong conclusion about a downward trend in test set scores for larger numbers of pretraining iterations. The results with MNIST-movies pretraining are slightly better than with MIXED-movies, but these results suggest strong transfer learning: videos featuring digits in random locations and natural image patches are almost as good for pretraining as videos featuring images very similar to those in the test set.
4.2 Slowness in normalized features encourages binary activations
Somewhat counter-intuitively, the slowness criterion requires movement in the features h. Suppose
a feature hi has activation levels that are normally distributed around 0.1 and 0.2, but the activation
at each frame of a movie is independent of previous frames. Since the features has a small variance,
then the normalized feature zi will oscillate in the same way, but with unit variance. This will cause
zi (t) ? zi (t ? 1) to be relatively high, and for our slowness criterion not to be well satisfied. In this
way the lack of variance in hi can actually make for a relatively fast normalized feature zi rather
than a slow one.
However, if hi has activation levels that are normally distributed around .1 and .2 for some image
sequences and around .8 and .9 for other image sequences, the marginal variance in hi will be larger.
7
The larger marginal variance will make the oscillations between .1 and .2 lead to much smaller
changes in the normalized feature zi (t). In this sense, the slowness objective can be maximally
satisfied by features hi (t) that take near-minimum and near-maximum values for most movies, and
never transition from a near-minimum to a near-maximum value during a movie.
When training on multiple short videos instead of one continuous one, it is possible for large changes
in normalized-feature-activation never [or rarely] to occur during a video. Perhaps this is one of the
roles of saccades in the visual system: to suspend the normal objective of temporal coherence during
a rapid widespread change of activation levels.
4.3 Eigenvalue interpretation of decorrelation term
What does our unsupervised cost mean? One way of thinking about the decorrelation term (the first term in Eq. 1), which helped us to design an efficient algorithm for computing it, is as flattening the eigen-spectrum of the correlation matrix of our features h (over time). It is helpful to rewrite this cost in terms of normalized features, $z_i = (h_i - \mu_{h_i})/\sigma_{h_i}$, and to consider that we sum over all the elements of the correlation matrix, including the diagonal.
$$\sum_{i \ne j} \mathrm{Cov}_t(z_i, z_j)^2 \;=\; 2\sum_{i=1}^{F-1}\sum_{j=i+1}^{F} \mathrm{Cov}_t(z_i, z_j)^2 \;=\; \left(\sum_{i=1}^{F}\sum_{j=1}^{F} \frac{\mathrm{Cov}_t(h_i, h_j)^2}{\mathrm{Var}(h_i)\,\mathrm{Var}(h_j)}\right) - F$$
If we use $C$ to denote the matrix whose $(i, j)$ entry is $\mathrm{Cov}_t(z_i, z_j)$, and we write the eigen-decomposition of $C$ as $C = U^{\top}\Lambda U$, then we can transform this sum over $i \ne j$ further.
$$\left(\sum_{i=1}^{F}\sum_{j=1}^{F} \mathrm{Cov}_t(z_i, z_j)^2\right) - F = \mathrm{Tr}(C^{\top}C) - F = \mathrm{Tr}(CC) - F = \mathrm{Tr}(U^{\top}\Lambda U\,U^{\top}\Lambda U) - F = \mathrm{Tr}(UU^{\top}\Lambda\,UU^{\top}\Lambda) - F = \sum_{k=1}^{F} \lambda_k^2 - F$$
We can interpret the first term of Eq. 1 as penalizing the squared eigenvalues of the covariance
matrix between features in a normalized feature space (z as opposed to h), or as minimizing the
squared eigenvalues of the correlation matrix between features h.
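The identity is easy to check numerically; the following sketch (ours) verifies that the off-diagonal sum of squared correlations equals the sum of squared eigenvalues of the correlation matrix minus F:

```python
import numpy as np

rng = np.random.default_rng(1)
F, T = 5, 2000
h = rng.standard_normal((T, F)) @ rng.standard_normal((F, F))  # correlated features

C = np.corrcoef(h, rowvar=False)   # F x F correlation matrix (unit diagonal)
off_diag = (C ** 2).sum() - F      # sum over i != j of squared correlations

lam = np.linalg.eigvalsh(C)        # eigenvalues of the symmetric matrix C
print(off_diag, (lam ** 2).sum() - F)  # the two quantities agree
```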
5 Conclusion
We have presented an activation function for use in neural networks that is a simplification of a recent rate model of visual area V1 complex cells. This model learns shift-invariant, orientation-selective edge filters from purely supervised training on MNIST and achieves lower generalization error than conventional neural nets.
Temporal coherence and decorrelation have been put forward as principles for explaining the functional behaviour of visual area V1 complex cells. We have described an online algorithm for minimizing correlation that has linear time complexity in the number of hidden units. Pretraining our model with this unsupervised criterion yields even lower generalization error: better than Gaussian SVMs, and competitive with deep denoising auto-encoders and 3-layer deep belief networks. The good performance of our model compared with poorer approximations of V1 is encouraging for machine learning research inspired by neural information processing in the brain. It also helps to validate the corresponding computational neuroscience theories by showing that these neuron activations and unsupervised criteria have value in terms of learning.
Acknowledgments
This research was performed thanks to funding from NSERC, MITACS, and the Canada Research
Chairs.
3,166 | 3,869 | Conditional Neural Fields
Liefeng Bo
Toyota Technological Institute at Chicago
6045 S. Kenwood Ave.
Chicago, IL 60637
[email protected]
Jian Peng
Toyota Technological Institute at Chicago
6045 S. Kenwood Ave.
Chicago, IL 60637
[email protected]
Jinbo Xu
Toyota Technological Institute at Chicago
6045 S. Kenwood Ave.
Chicago, IL 60637
[email protected]
Abstract
Conditional random fields (CRF) are widely used for sequence labeling such as
natural language processing and biological sequence analysis. Most CRF models
use a linear potential function to represent the relationship between input features
and output. However, in many real-world applications such as protein structure
prediction and handwriting recognition, the relationship between input features
and output is highly complex and nonlinear, which cannot be accurately modeled
by a linear function. To model the nonlinear relationship between input and output
we propose a new conditional probabilistic graphical model, Conditional Neural
Fields (CNF), for sequence labeling. CNF extends CRF by adding one (or possibly more) middle layer between input and output. The middle layer consists of a
number of gate functions, each acting as a local neuron or feature extractor to capture the nonlinear relationship between input and output. Therefore, conceptually
CNF is much more expressive than CRF. Experiments on two widely-used benchmarks indicate that CNF performs significantly better than a number of popular
methods. In particular, CNF is the best among approximately 10 machine learning
methods for protein secondary structure prediction and also among a few of the
best methods for handwriting recognition.
1 Introduction
Sequence labeling is a ubiquitous problem arising in many areas, including natural language processing [1], bioinformatics [2, 3, 4] and computer vision [5]. Given an input/observation sequence,
the goal of sequence labeling is to infer the state sequence (also called output sequence), where a
state may be some type of labeling or segmentation. For example, in protein secondary structure
prediction, the observation is a protein sequence consisting of a collection of residues. The output
is a sequence of secondary structure types. Hidden Markov model (HMM) [6] is one of the popular
methods for sequence labeling. HMM is a generative learning model since it generates output from
a joint distribution between input and output. In the past decade, several discriminative learning
models such as conditional random fields (CRF) have emerged as the mainstream methods for sequence labeling. Conditional random fields, introduced by Lafferty [7], is an undirected graphical
model. It defines the conditional probability of the output given the input. CRF is also a special case
of the log-linear model since its potential function is defined as a linear combination of features. Another approach for sequence labeling is max margin structured learning such as max margin Markov
networks (MMMN) [8] and SVM-struct [9]. These models generalize the large margin and kernel
methods to structured learning.
In this work, we present a new probabilistic graphical model, called conditional neural fields (CNF),
for sequence labeling. CNF combines the advantages of both CRF and neural networks. First, CNF
preserves the globally consistent prediction, i.e. exploiting the structural correlation between outputs, and the strength of CRF as a rigorous probabilistic model. Within the probabilistic framework,
posterior probability can be derived to evaluate confidence on predictions. This property is particularly valuable in applications that require multiple cascade predictors. Second, CNF automatically
learns an implicit nonlinear representation of features and thus, can capture more complicated relationship between input and output. Finally, CNF is much more efficient than kernel-based methods
such as MMMN and SVM-struct. The learning and inference procedures in CNF adopt efficient
dynamic programming algorithm, which makes CNF applicable to large scale tasks.
2 Conditional Random Fields
Assume the input and output sequences are $X$ and $Y$, respectively, where $Y = (y_1, y_2, \ldots, y_N) \in \Sigma^N$, $\Sigma$ is the alphabet of all possible output states, and $|\Sigma| = M$.
CRF uses two types of features given a pair of input and output sequences. The first type of features
describes the dependency between the neighboring output labels.
$$f_{y,y'}(Y, X, t) = \mathbf{1}[y_t = y]\,\mathbf{1}[y_{t-1} = y'] \qquad (1)$$
where $\mathbf{1}[y_t = y]$ is an indicator function: it is equal to 1 if and only if the state at position $t$ is $y$.
The second type of features describes the dependency between the label at one position and the
observations around this position.
$$f_y(Y, X, t) = f(X, t)\,\mathbf{1}[y_t = y] \qquad (2)$$
where f(X,t) is the local observation or feature vector at position t.
In a linear chain CRF model [7], the conditional probability of the output sequence Y given the
input sequence X is the normalized product of the exponentials of potential functions on all edges
and vertices in the chain.
$$P(Y|X) = \frac{1}{Z(X)} \exp\Big(\sum_{t=1}^{N} \big(\Phi(Y, X, t) + \Psi(Y, X, t)\big)\Big) \qquad (3)$$
where
$$\Phi(Y, X, t) = \sum_{y} w_y^{\top} f_y(Y, X, t) \qquad (4)$$
is the potential function defined on vertex at the tth position, which measures the compatibility
between the local observations around the tth position and the output label yt ; and
$$\Psi(Y, X, t) = \sum_{y, y'} u_{y,y'}\, f_{y,y'}(Y, X, t) \qquad (5)$$
is the potential function defined on an edge connecting two labels $y_t$ and $y_{t+1}$. This potential measures the compatibility between two neighboring output labels.
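As an illustration, here is a minimal sketch (our own naming, not the authors' code) of the unnormalized log-score $\sum_t (\Phi + \Psi)$ that Equation (3) exponentiates and normalizes:

```python
import numpy as np

def crf_score(w, u, feats, labels):
    """Unnormalized log-score sum_t (Phi + Psi) of a linear-chain CRF.

    w:      (M, D) vertex weights, one row w_y per label
    u:      (M, M) edge weights u[y_prev, y]
    feats:  (N, D) local feature vectors f(X, t)
    labels: (N,)   label index y_t at each position
    """
    score = 0.0
    for t in range(len(labels)):
        score += w[labels[t]] @ feats[t]          # vertex potential Phi
        if t > 0:
            score += u[labels[t - 1], labels[t]]  # edge potential Psi
    return score

# Toy usage: 3 labels, 4-dimensional features, length-5 sequence.
rng = np.random.default_rng(0)
M, D, N = 3, 4, 5
print(crf_score(rng.standard_normal((M, D)), rng.standard_normal((M, M)),
                rng.standard_normal((N, D)), rng.integers(0, M, size=N)))
```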
Although CRF is a very powerful model for sequence labeling, CRF does not work very well on
the tasks in which the input features and output labels have a complex relationship. For example,
in computer vision or bioinformatics, many problems require the modeling of complex/nonlinear
relationship between input and output [10, 11]. To model complex/nonlinear relationship between
input and output, CRF has to explicitly enumerate all possible combinations of input features and
output labels. Nevertheless, even assisted with domain knowledge, it is not always possible for CRF
to capture all the important nonlinear relationship by explicit enumeration.
3 Conditional Neural Fields
Here we propose a new probabilistic graphical model, conditional neural fields (CNF), for sequence labeling. Figure 1 shows the structural difference between CNF and CRF. CNF not only
can parametrize the conditional probability in a log-linear-like formulation, but is also able to implicitly model complex/nonlinear relationships between input features and output labels. In a linear
chain CNF, the edge potential function is similar to that of a linear chain CRF. That is, the edge function describes only the interdependency between the neighbor output labels. However, the potential
function of CNF at each vertex is different from that of CRF. The function is defined as follows.
$$\Phi(Y, X, t) = \sum_{y} \sum_{g=1}^{K} w_{y,g}\, h\!\left(\theta_g^{\top} f(X, t)\right) \mathbf{1}[y_t = y] \qquad (6)$$
where h is a gate function. In this work, we use the logistic function as the gate function. The
major difference between CRF and CNF is the definition of the potential function at each vertex. In
CRF, the local potential function (see Equation (4)) is defined as a linear combination of features. In
CNF, there is an extra hidden layer between the input and output, which consists of K gate functions
(see Figure 1 and Equation (6)). The K gate functions extract a K-dimensional implicit nonlinear
representation of input features. Therefore, CNF can be viewed as a CRF with its inputs being K
homogeneous hidden feature-extractors at each position. Similar to CRF, CNF can also be defined
on a general graph structure or a high-order Markov chain. This paper mainly focuses on a linear
chain CNF model for sequence labeling.
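A sketch of the vertex potential of Equation (6) (ours; the gate function h is the logistic, as stated in the text):

```python
import numpy as np

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

def cnf_vertex_potentials(w, theta, feats):
    """Phi(y, t) for all labels y and positions t, per Equation (6).

    w:     (M, K) output weights w[y, g]
    theta: (K, D) gate parameters, one row theta_g per gate
    feats: (N, D) local feature vectors f(X, t)
    Returns an (N, M) array of vertex potentials.
    """
    gates = logistic(feats @ theta.T)  # (N, K): h(theta_g^T f(X, t))
    return gates @ w.T                 # (N, M): sum_g w[y, g] * gate value

rng = np.random.default_rng(0)
M, K, D, N = 3, 8, 16, 10
phi = cnf_vertex_potentials(rng.standard_normal((M, K)),
                            rng.standard_normal((K, D)),
                            rng.standard_normal((N, D)))
print(phi.shape)  # (10, 3)
```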
[Figure 1 residue omitted: the figure shows, side by side, the graphical structures of CRF (left) and CNF (right). In both, a local input window $x_{i-2}, \ldots, x_{i+2}$ feeds the output chain $y_{i-2}, \ldots, y_{i+2}$ with edge weights $u_{y,y'}$; the CNF panel inserts a gates level of $K$ hidden units $\theta_1, \ldots, \theta_K$ between the input window and each output node, with output weights $w_{y,g}$.]
Figure 1: Structures of CRF and CNF
CNF can also be viewed as a natural combination of neural networks and log-linear models. In the
hidden layer, there are a set of neurons that extract implicit features from input. Then the log-linear
model in the output layer utilizes the implicit features as its input. The parameters in the hidden
neurons and the log-linear model can be jointly optimized. After learning the parameters, we can first
compute all the hidden neuron values from the input and then use an inference algorithm to predict
the output. Any inference algorithm used by CRF, such as Viterbi [7], can be used by CNF. Assume
that the dimension of the feature vector at each vertex is $D$. The computational complexity for the $K$ neurons is $O(NKD)$. Supposing Viterbi is used as the inference algorithm, the total computational complexity of CNF inference is $O(NMK + NKD)$. Empirically the number of hidden neurons
In our experiments, CNF shows superior predictive performance over two baseline methods: neural
networks and CRF.
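For concreteness, here is a hedged sketch of Viterbi decoding over precomputed potentials (ours; it assumes an (N, M) array of vertex potentials such as the CNF potentials sketched earlier and an (M, M) edge-weight matrix u):

```python
import numpy as np

def viterbi(phi, u):
    """Most likely label sequence for a linear-chain model.

    phi: (N, M) vertex potentials
    u:   (M, M) edge potentials u[y_prev, y]
    """
    N, M = phi.shape
    delta = phi[0].copy()                 # best score ending in each label
    back = np.zeros((N, M), dtype=int)    # backpointers
    for t in range(1, N):
        cand = delta[:, None] + u         # (M, M): previous label x current label
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + phi[t]
    path = [int(delta.argmax())]
    for t in range(N - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```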
4 Parameter Optimization
Similar to CRF, we can use the maximum likelihood method to train the model parameters such that
the log-likelihood is maximized. For CNF, the log-likelihood is as follows.
$$\log P(Y|X) = \sum_{t=1}^{N} \big(\Phi(Y, X, t) + \Psi(Y, X, t)\big) - \log Z(X) \qquad (7)$$
Since CNF contains a hidden layer of gate functions h, the log-likelihood function is no longer convex. Therefore, it is very likely that we can only obtain a locally optimal solution of the parameters. Although both the output and hidden layers contain model parameters, all the parameters can be learned together by gradient-based optimization. We use LBFGS [12] as the optimization routine to search for the optimal model parameters because 1) LBFGS is very efficient and robust; and 2) LBFGS provides us an approximation of the inverse Hessian for hyperparameter learning [13], which will be described in the next section. The gradient of the log-likelihood with respect to the parameters is given by
$$\frac{\partial \log P}{\partial u_{y,y'}} = \sum_{t=1}^{N} \mathbf{1}[y_t = y]\,\mathbf{1}[y_{t-1} = y'] \;-\; E_{P(\tilde{Y}|X,w,u,\theta)}\!\left[\sum_{t=1}^{N} \mathbf{1}[\tilde{y}_t = y]\,\mathbf{1}[\tilde{y}_{t-1} = y']\right] \qquad (8)$$

$$\frac{\partial \log P}{\partial w_{y,g}} = \sum_{t=1}^{N} \mathbf{1}[y_t = y]\, h(\theta_g^{\top} f(X,t)) \;-\; E_{P(\tilde{Y}|X,w,u,\theta)}\!\left[\sum_{t=1}^{N} \mathbf{1}[\tilde{y}_t = y]\, h(\theta_g^{\top} f(X,t))\right] \qquad (9)$$

$$\frac{\partial \log P}{\partial \theta_g} = \sum_{t=1}^{N} w_{y_t,g}\, \frac{\partial h(\theta_g^{\top} f(X,t))}{\partial \theta_g} \;-\; E_{P(\tilde{Y}|X,w,u,\theta)}\!\left[\sum_{t=1}^{N} w_{\tilde{y}_t,g}\, \frac{\partial h(\theta_g^{\top} f(X,t))}{\partial \theta_g}\right] \qquad (10)$$

where $\mathbf{1}[\cdot]$ is the indicator function.
Just like CRF, we can calculate the expectations in these gradients efficiently using the forward-backward algorithm. Assume that the dimension of the feature vector at each vertex is $D$. Since the $K$ gate functions can be computed in advance, the computational complexity of the gradient computation is $O(NKD + NM^2K)$ for a single input-output pair with length $N$. If $K$ is smaller than $D$, it is very possible that the computation of the gradient in CNF is faster than in CRF, where the complexity of gradient computation is $O(NM^2D)$. In our experiments, $K$ is usually much smaller than $D$. For example, in protein secondary structure prediction, $K = 30$ and $D = 260$. In handwriting recognition, $K = 40$ and $D = 128$. As a result, although the optimization problem is non-convex, the training time of CNF is acceptable. Our experiments show that the training time of CNF is about 2 or 3 times that of CRF.
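As an illustration of Equation (9): given the posterior marginals $P(\tilde{y}_t = y \mid X)$ produced by forward-backward, the gradient with respect to w is the difference between observed and expected gate-weighted counts. A minimal sketch (ours):

```python
import numpy as np

def grad_w(labels, gates, marginals, M):
    """Gradient of log P(Y|X) with respect to w[y, g], per Equation (9).

    labels:    (N,)   observed label indices y_t
    gates:     (N, K) gate activations h(theta_g^T f(X, t))
    marginals: (N, M) posterior marginals P(y_t = y | X) from forward-backward
    """
    N, K = gates.shape
    observed = np.zeros((M, K))
    for t in range(N):
        observed[labels[t]] += gates[t]   # sum_t 1[y_t = y] h(...)
    expected = marginals.T @ gates        # sum_t P(y_t = y | X) h(...)
    return observed - expected
```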
5 Regularization and Hyperparameter Optimization
Because a hidden layer is added to CNF to introduce more expressive power than CRF, it is crucial to control the model complexity of CNF to avoid overfitting. Similar to CRF, we can enforce regularization on the model parameters to avoid overfitting. We assume that the parameters have a Gaussian prior and constrain the inverse covariance matrix (of the Gaussian distribution) by a small number of hyperparameters. To simplify the problem, we divide the model parameter vector into three different groups $w$, $u$ and $\theta$ (see Figure 1) and assume that the parameters among different groups are independent of each other. Furthermore, we assume the parameters in each group share the same Gaussian prior with a diagonal covariance matrix. Let $\alpha = [\alpha_w, \alpha_u, \alpha_\theta]^{\top}$ denote the vector of the three regularizations/hyperparameters for these three groups of parameters, respectively. While grid search provides a practical way to determine the best value at low resolution for a single hyperparameter, we need a more sophisticated method to determine three hyperparameters simultaneously. In this section, we discuss hyperparameter learning in the evidence framework.
5.1 Laplace?s Approximation
The evidence framework [14] assumes that the posterior of $\alpha$ is sharply peaked around the maximum $\alpha_{max}$. Since no prior knowledge of $\alpha$ is available, the prior of each $\alpha_i$, $i \in \{w, u, \theta\}$, $P(\alpha_i)$ is chosen to be constant on a log scale, i.e., flat. Thus, the value of $\alpha$ maximizing the posterior $P(\alpha|Y, X)$ can be found by maximizing
$$P(Y|X, \alpha) = \int_{\theta} P(Y|X, \theta)\, P(\theta|\alpha)\, d\theta \qquad (11)$$
By Laplace's approximation [14], this integral is approximated around the MAP estimate of the weights. We have
$$\log P(Y|X, \alpha) = \log P(Y|X, \theta_{MAP}) + \log P(\theta_{MAP}|\alpha) - \frac{1}{2}\log\det(A) + \mathrm{const} \qquad (12)$$
where $A$ is the Hessian of $\log P(Y|X, \theta) + \log P(\theta|\alpha)$ with respect to $\theta$, evaluated at $\theta_{MAP}$.
In order to maximize the approximation, we take the derivative of the right-hand side of Equation (12) with respect to $\alpha$. The optimal $\alpha$ value can be derived by the following update formula.
$$\alpha_i^{new} = \frac{1}{\theta_{MAP}^{\top}\theta_{MAP}}\left(W_i - \alpha_i^{old}\,\mathrm{Tr}(A^{-1})\right) \qquad (13)$$
where $W_i$ is the number of parameters in group $i \in \{w, u, \theta\}$.
5.2 Approximation of the Trace of Inverse Hessian
When there is a large number of model parameters, accurate computation of $\mathrm{Tr}(A^{-1})$ is very expensive. All model parameters are coupled together by the normalization factor, so the diagonal approximation of the Hessian and the outer-product approximation are not appropriate. In this work, we approximate the inverse Hessian using information available in the parameter optimization procedure. The LBFGS algorithm is used to optimize the parameters iteratively, so we can approximate the inverse Hessian at $\theta_{MAP}$ using the update information generated in the past several iterations. This approach is also employed in [15, 14]. From the LBFGS update formula [13], we can compute the approximation of the trace of the inverse Hessian very efficiently. The computational complexity of this approximation is only $O(m^3 + nm^2)$, while the accurate computation has complexity $O(n^3)$, where $n$ is the number of parameters and $m$ is the size of the history budget used by LBFGS. Since $m$ is usually much smaller than $n$, the computational complexity is only $O(nm^2)$. See Theorem 2.2 in [13] for a more detailed account of this approximation method.
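The paper relies on the compact L-BFGS representation of [13]; as a loose illustration only (our assumption, not the authors' implementation), one can combine scipy's LbfgsInvHessProduct operator with a Hutchinson-style estimator of the trace:

```python
import numpy as np
from scipy.optimize import LbfgsInvHessProduct

rng = np.random.default_rng(0)
n, m = 50, 10  # number of parameters, L-BFGS history budget

# Curvature pairs (s_k, y_k) as an L-BFGS run would produce; random here,
# but constructed so that s_k^T y_k > 0, as L-BFGS requires.
sk = rng.standard_normal((m, n))
yk = sk + 0.1 * rng.standard_normal((m, n))

Hinv = LbfgsInvHessProduct(sk, yk)  # linear operator approximating A^{-1}

# Hutchinson estimator: E[v^T A^{-1} v] = Tr(A^{-1}) for random +/-1 probes v.
probes = rng.choice([-1.0, 1.0], size=(200, n))
trace_est = np.mean([v @ Hinv.dot(v) for v in probes])
print(trace_est, np.trace(Hinv.todense()))  # estimate vs. exact trace
```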
5.3 Hyperparameter Update
The hyperparameter $\alpha$ is iteratively updated by a two-step procedure. In the first step we fix the hyperparameter $\alpha$ and optimize the model parameters by maximizing the log-likelihood in Equation (7) using LBFGS. In the second step, we fix the model parameters and update $\alpha$ using Equation (13). This two-step procedure is carried out iteratively until the norm of $\alpha$ changes by less than a threshold. Figure 2 shows the learning curve of the hyperparameter on a protein secondary structure prediction benchmark. In our experiments, the update usually converges in fewer than 15 iterations. We also found that this method achieves almost the same test performance as the grid search approach on two public benchmarks.
search approach on two public benchmarks.
Hyperparameter Training
80.6
80.4
Accuracy
80.2
80
79.8
79.6
79.4
79.2
1
2
3
4
5
6
Iterations
7
8
9
10
Figure 2: Learning curve of hyperparameter ?.
6 Related Work
Most existing methods for sequence labeling are built under the framework of graphical models such
as HMM and CRF. Since these approaches are incapable of capturing highly complex relationship
between observations and labels, many structured models are proposed for nonlinear modeling of
label-observation dependency. For example, kernelized max margin Markov networks [8], SVMstruct [9] and kernel CRF [16] use nonlinear kernels to model the complex relationship between
observations and labels. Although these kernelized models are convex, it is still too expensive to train and test them when the observations are of very high dimension. Furthermore, the number of resultant support vectors for these kernel methods is also very large. Instead, CNF has computational complexity comparable to CRF. Although CNF is non-convex and usually only a local-minimum solution can be obtained, CNF still achieves very good performance in real-world applications. Very recently, the probabilistic neural language model [17] and the recurrent temporal restricted Boltzmann machine [18] were proposed for natural language and time series modeling. These two methods model sequential data using a directed graph structure, so they are essentially generative models. By contrast, our CNF is a discriminative model, which is mainly used for discriminative prediction of sequence data. The hierarchical recurrent neural networks [19, 20] can be viewed as a hybrid of HMM and neural networks (HMM/NN), building on a directed linear chain. Similarly, CNF can be viewed as a hybrid of CRF and neural networks, which has a global normalization factor and alleviates the label-bias problem.
7 Experiments
7.1 Protein Secondary Structure Prediction
Protein secondary structure (SS) prediction is a fundamental problem in computational biology as
well as a typical problem used to evaluate sequence labeling methods. Given a protein sequence
consisting of a collection of residues, the problem of protein SS prediction is to predict the secondary
structure type at each residue. A variety of methods have been described in literature for protein SS
prediction.
Given a protein sequence, we first run PSI-BLAST [21] to generate a sequence profile and then use this profile as input to predict SS. A sequence profile is a position-specific scoring matrix $X$ with $n \times 20$ elements, where $n$ is the number of residues in a protein. Formally, $X = [x_1, x_2, x_3, \ldots, x_n]$ where each $x_i$ is a vector of 20 elements, each a position-specific score corresponding to one of the 20 amino acids in nature. The output we want to predict is $Y = [y_1, y_2, \ldots, y_n]$ where $y_i \in \{H, E, C\}$ represents the secondary structure type at the $i$-th residue.
We evaluate all the SS prediction methods using the CB513 benchmark [22], which consists of 513 non-homologous proteins. The true secondary structure for each protein is calculated using DSSP [23], which generates eight possible secondary structure states. We then convert these 8 states into three SS types as follows: H and G to H (Helix), B and E to E (Sheet), and all other states to C (Coil). Q3 is used to measure the accuracy over the three SS types averaged over all positions. To obtain good performance, we also linearly transform $X$ into values in $[0, 1]$ as suggested by Kim et al. [24]:
$$S(x) = \begin{cases} 0 & \text{if } x < -5;\\ 0.1x + 0.5 & \text{if } -5 \le x \le 5;\\ 1 & \text{if } x > 5. \end{cases}$$
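A vectorized version of this transform (ours) for an n × 20 PSSM:

```python
import numpy as np

def squash_pssm(X):
    """Map raw PSSM scores to [0, 1]: 0 below -5, linear on [-5, 5], 1 above 5."""
    return np.clip(0.1 * np.asarray(X, dtype=float) + 0.5, 0.0, 1.0)

print(squash_pssm([-8, -5, 0, 5, 9]))  # [0.  0.  0.5 1.  1. ]
```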
To determine the number of gate functions for CNF, we enumerate this number over the set {10, 20, 30, 40, 60, 100}. We also enumerate the window size for CNF over the set {7, 9, 11, 13, 15, 17} and find that the best evidence is achieved when the window size is 13 and $K = 30$. Two baseline methods are used for comparison: conditional random fields and neural networks. All the parameters of these methods are carefully tuned. The best window sizes for neural networks and CRF are 15 and 13, respectively. We also compared our methods with other popular secondary structure prediction programs. CRF, neural networks, Semi-Markov HMM [25], SVMpsi [24], PSIPRED [2] and CNF use the sequence profile generated by PSI-BLAST as described above. SVMpro [26] uses the position-specific frequency as input features. YASSPP [27] and SPINE [28] also use other residue-specific features in addition to the sequence profile.
Table 1 lists the overall performance of a variety of methods on the CB513 data set. As shown in
this table, there are two types of gains on accuracy. First, by using one hidden layer to model the
nonlinear relationship between input and output, CNF achieves a very significant gain over linear
chain CRF. This also confirms that strong nonlinear relationship exists between sequence profile and
secondary structure type. Second, by modeling interdependency between neighbor residues, CNF
also obtains much better prediction accuracy than neural networks. We also tested the hybrid of HMM/NN on this dataset. The predicted accuracy of HMM/NN is about three percent less than
Table 1: Performance of various methods for protein secondary structure prediction on the CB513 dataset. Semi-Markov HMM is a segmental semi-Markov model for sequence labeling. SVMpro and SVMpsi are jury methods with SVMs (Gaussian kernel) as the base classifiers. YASSPP uses an SVM with a specifically designed profile kernel function. PSIPRED is a two-stage double-hidden-layer neural network. SPINE is a voting system with multiple coupled neural networks. YASSPP, PSIPRED and SPINE also use other features besides the PSSM scores. An * symbol indicates that the method is tested over a 10-fold cross-validation on CB513, while the others are tested over a 7-fold cross-validation.
Methods                               Q3 (%)
Conditional Random Fields             72.9
SVM-struct (Linear Kernel)            73.1
Neural Networks (one hidden layer)    72
Neural Networks (two hidden layers)   74
Semi-Markov HMM                       72.8
SVMpro                                73.5
SVMpsi                                76.6
PSIPRED                               76
YASSPP                                77.8
SPINE*                                76.8
Conditional Neural Fields             80.1 ±0.3
Conditional Neural Fields*            80.5 ±0.3
that of CNF. By seamlessly integrating neural networks and CRF, CNF outperforms all other state-of-the-art prediction methods on this dataset. We also tried the max-margin Markov network [8] and SVM-struct¹ with an RBF kernel on this dataset. However, because the dataset is large and the feature space is of high dimension, it is impossible for these kernel-based methods to finish training within a reasonable amount of time. Both of them failed to converge within 120 hours. The running time of CNF learning and inference is about twice that of CRF.
7.2 Handwriting Recognition
Handwriting recognition(OCR) is another widely-used benchmark for sequence labeling algorithms.
We use the subset of OCR dataset chosen by Taskar [8], which contains 6876 sequences. In this
dataset, each word consists of a sequence of characters and each character is represented by an
image with 16 ? 8 binary pixels. In addition to using the vector of pixel values as input features, we
do not use any higher-level features. Formally, the input X = [x1 , x2 , x3 , ..., xn ] is a sequence of
128-dimensional binary vectors. The output we want to predict is a sequence of labels. Each label yi
for image xi is one of the 26 classes {a, b, c, ..., z}. The accuracy is defined as the average accuracy
over all characters.
The number of gate functions for CNF is selected from the set {10, 20, 30, 40, 60, 100} and we find that the best evidence is achieved when $K = 40$. Window sizes for all methods are fixed to 1. All the methods are tested using 10-fold cross-validation and their performance is shown in Table 2. As shown in this table, CNF achieves superior performance over the log-linear methods, SVM, CRF and neural networks. CNF is also comparable with two slightly different max-margin Markov network models.
8 Discussion
We present a probabilistic graphical model conditional neural fields (CNF) for sequence labeling
tasks which require accurate account of nonlinear relationship between input and output. CNF is
a very natural integration of conditional graphical models and neural networks and thus, inherits
advantages from both of them. On one hand, by neural networks, CNF can model nonlinear relationship between input and output. On the other hand, by using graphical representation, CNF
¹ http://svmlight.joachims.org/svm_struct.html
Table 2: Performance of various methods on handwriting recognition. The results of logistic regression, SVM and max-margin Markov networks are taken from [8]. Both CNF and neural networks use 40 neurons in the hidden layer. The CRF performance (78.9%) we obtained is a bit better than the 76% reported in [8].
Methods                        Accuracy (%)
Logistic Regression            71
SVM (linear)                   71
SVM (quadratic)                80
SVM (cubic)                    81
SVM-struct                     80
Conditional Random Fields      78.9
Neural Networks                79.8
MMMN (linear)                  80
MMMN (quadratic)               87
MMMN (cubic)                   87
Conditional Neural Fields      86.9 ±0.4
can model the interdependency between output labels. While CNF is more sophisticated and expressive than CRF, the computational complexity of learning and inference is not necessarily higher. Our experimental results on large-scale datasets indicate that CNF can be trained and tested almost as efficiently as CRF and much faster than kernel-based methods. Although CNF is not convex, it can still be trained using a quasi-Newton method to obtain a locally optimal solution, which usually works very well in real-world applications.
In two real-world applications, CNF significantly outperforms two baseline methods, CRF and neural networks. On protein secondary structure prediction, CNF achieves the best performance among all methods we tested. On handwriting recognition, CNF also compares favorably with the best method, max-margin Markov networks. We are currently generalizing our CNF model to a second-order Markov chain and a more general graph structure, and also studying whether interposing more than one hidden layer between input and output will improve the predictive power of CNF.
Acknowledgements
We thank Nathan Srebro and David McAllester for insightful discussions.
References
[1] Fei Sha and Fernando Pereira. Shallow parsing with conditional random fields. In Proceedings of Human Language Technology-NAACL 2003.
[2] D. T. Jones. Protein secondary structure prediction based on position-specific scoring matrices. Journal of Molecular Biology, 292(2):195–202, September 1999.
[3] Feng Zhao, Shuaicheng Li, Beckett W. Sterner, and Jinbo Xu. Discriminative learning for protein conformation sampling. Proteins, 73(1):228–240, October 2008.
[4] Feng Zhao, Jian Peng, Joe Debartolo, Karl F. Freed, Tobin R. Sosnick, and Jinbo Xu. A probabilistic graphical model for ab initio folding. In RECOMB '09: Proceedings of the 13th Annual International Conference on Research in Computational Molecular Biology, pages 59–73, Berlin, Heidelberg, 2009. Springer-Verlag.
[5] Sy Bor Wang, Ariadna Quattoni, Louis-Philippe Morency, and David Demirdjian. Hidden conditional random fields for gesture recognition. In CVPR 2006.
[6] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. In Proceedings of the IEEE, 1989.
[7] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML 2001.
[8] Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In NIPS 2003.
[9] Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. Support vector machine learning for interdependent and structured output spaces. In ICML 2004.
[10] Nam Nguyen and Yunsong Guo. Comparisons of sequence labeling algorithms and extensions. In ICML 2007.
[11] Yan Liu, Jaime Carbonell, Judith Klein-Seetharaman, and Vanathi Gopalakrishnan. Comparison of probabilistic combination methods for protein secondary structure prediction. Bioinformatics, 20(17), November 2004.
[12] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3), 1989.
[13] Richard H. Byrd, Jorge Nocedal, and Robert B. Schnabel. Representations of quasi-Newton matrices and their use in limited memory methods. Mathematical Programming, 63(2), 1994.
[14] David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4:448–472, 1992.
[15] Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, November 1995.
[16] John Lafferty, Xiaojin Zhu, and Yan Liu. Kernel conditional random fields: representation and clique selection. In ICML 2004.
[17] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
[18] Ilya Sutskever, Geoffrey E. Hinton, and Graham Taylor. The recurrent temporal restricted Boltzmann machine. In NIPS 2009.
[19] Barbara Hammer. Recurrent networks for structured data – a unifying approach and its properties. Cognitive Systems Research, 2002.
[20] Alex Graves and Juergen Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In NIPS 2009.
[21] S. F. Altschul, T. L. Madden, A. A. Schäffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Research, 25, September 1997.
[22] James A. Cuff and Geoffrey J. Barton. Evaluation and improvement of multiple sequence methods for protein secondary structure prediction. Proteins: Structure, Function, and Genetics, 34, 1999.
[23] Wolfgang Kabsch and Christian Sander. Dictionary of protein secondary structure: Pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22(12):2577–2637, December 1983.
[24] H. Kim and H. Park. Protein secondary structure prediction based on an improved support vector machines approach. Protein Engineering, 16(8), August 2003.
[25] Wei Chu, Zoubin Ghahramani, and David Wild. A graphical model for protein secondary structure prediction. In ICML 2004.
[26] Sujun Hua and Zhirong Sun. A novel method of protein secondary structure prediction with high segment overlap measure: Support vector machine approach. Journal of Molecular Biology, 308, 2001.
[27] George Karypis. YASSPP: Better kernels and coding schemes lead to improvements in protein secondary structure prediction. Proteins: Structure, Function, and Bioinformatics, 64(3):575–586, 2006.
[28] O. Dor and Y. Zhou. Achieving 80% ten-fold cross-validated accuracy for secondary structure prediction by large-scale training. Proteins: Structure, Function, and Bioinformatics, 66, March 2007.
3,167 | 387 | Training Knowledge-Based Neural Networks to Recognize Genes in DNA Sequences
Michiel O. Noordewier
Computer Science
Rutgers University
New Brunswick, NJ 08903
Geoffrey G. Towell
Computer Sciences
University of Wisconsin
Madison, WI 53706
Jude W. Shavlik
Computer Sciences
University of Wisconsin
Madison, WI 53706
Abstract
We describe the application of a hybrid symbolic/connectionist machine
learning algorithm to the task of recognizing important genetic sequences.
The symbolic portion of the KBANN system utilizes inference rules that
provide a roughly-correct method for recognizing a class of DNA sequences
known as eukaryotic splice-junctions. We then map this "domain theory"
into a neural network and provide training examples. Using the samples,
the neural network's learning algorithm adjusts the domain theory so that
it properly classifies these DNA sequences. Our procedure constitutes
a general method for incorporating preexisting knowledge into artificial
neural networks. We present an experiment in molecular genetics that
demonstrates the value of doing so.
1 Introduction
Often one has some preconceived notions about how to perform some classification task. It would be useful to incorporate this knowledge into a neural network, and then use some training examples to refine these approximately-correct
rules of thumb. This paper describes the KBANN (Knowledge-Based Artificial Neural Networks) hybrid learning system and demonstrates its ability to learn in the
complex domain of molecular genetics. Briefly, KBANN uses a knowledge base of
hierarchically-structured rules (which may be both incomplete and incorrect) to
form an artificial neural network (ANN). In so doing, KBANN makes it possible to
apply neural learning techniques to the empirical improvement of knowledge bases.
The task to be learned is the recognition of certain DNA (deoxyribonucleic acid)
subsequences important in the expression of genes. A large governmental research
[Figure 1 residue omitted: the figure depicts DNA being transcribed into precursor mRNA, the introns being spliced out to leave the mRNA (exons drawn as boxes), and the mRNA being translated into a protein that then folds.]
Figure 1: Steps in the Expression of Genes
program, called the Human Genome Initiative, has recently been undertaken to
determine the sequence of DNA in humans, estimated to be 3 × 10⁹ characters of
information. This provides a strong impetus to develop genetic-analysis techniques
based solely on the information contained in the sequence, rather than in combination with other chemical, physical, or genetic techniques. DNA contains the
information by which a cell constructs protein molecules. The cellular expression
of proteins proceeds by the creation of a "message" ribonucleic acid (mRNA) copy
from the DNA template (Figure 1). This mRNA is then translated into a protein.
One of the most unexpected findings in molecular biology is that large pieces of the
mRNA are removed before it is translated further [1].
The utilized sequences (represented by boxes in Figure 1) are known as "exons",
while the removed sequences are known as "introns", or intervening sequences.
Since the discovery of such "split genes" over a decade ago, the nature of the
splicing event has been the subject of intense research. The points at which DNA
is removed (the boundaries of the boxes in Figure 1) are known as splice-junctions.
The splice-junctions of eukaryotic¹ mRNA precursors contain patterns similar to
those in Figure 2.
exon   (A/C) A G | G T (A/G) A G T   intron
intron   (C/T)6 X (C/T) A G | G (G/T)   exon
Figure 2: Canonical Splice-Junctions
DNA is represented by a string of characters from the set {A,G,C,T}.
In this figure, X represents any character, slashes represent disjunctive
options, and subscripts indicate repetitions of a pattern.
However, numerous other locations can resemble these canonical patterns. As a
result, these patterns do not by themselves reliably imply the presence of a splice-junction. Evidently, if junctions are to be recognized on the basis of sequence
information alone, longer-range sequence information will have to be included in
¹ Eukaryotic cells contain nuclei, unlike prokaryotic cells such as bacteria and viruses.
the decision-making criteria. A central problem is therefore to determine the extent
to which sequences surrounding splice-junctions differ from sequences surrounding
spurious analogues.
We have recently described a method [9, 12] that combines empirical and symbolic learning algorithms to recognize another class of genetic sequences known as
bacterial promoters. Our hybrid KBANN system was demonstrated to be superior
to other empirical learning systems including decision trees and nearest-neighbor
algorithms. In addition, it was shown to more accurately classify promoters than
the methods currently reported in the biological literature. In this manuscript we
describe the application of KBANN to the recognition of splice-junctions, and show
that it significantly increases generalization ability when compared to randomly-initialized, single-hidden-layer networks (i.e., networks configured in the "usual" way). The paper concludes with a discussion of related research and the areas that our research is currently pursuing.
2 The KBANN Algorithm
KBANN uses a knowledge base of domain-specific inference rules in the form of PROLOG-like clauses to define what is initially known about a topic. The knowledge base need be neither complete nor correct; it need only support approximately correct reasoning. KBANN translates knowledge bases into ANNs in which units and links correspond to parts of knowledge bases. A detailed explanation of the procedure used by KBANN to translate rules into an ANN can be found in [12].
As an example of the KBANN method, consider the artificial knowledge base in
Figure 3a which defines membership in category A. Figure 3b represents the hierarchical structure of these rules: solid and dotted lines represent necessary and
prohibitory dependencies, respectively. Figure 3c represents the ANN that results
from the translation into a neural network of this knowledge base. Units X and Y in
Figure 3c are introduced into the ANN to handle the disjunction in the knowledge
base. Otherwise, units in the ANN correspond to consequents or antecedents in
the knowledge base. The thick lines in Figure 3c represent the links in the ANN
that correspond to dependencies in the explanation. The weight on thick solid lines
is 3, while the weight on thick dotted lines is -3. The lighter solid lines represent
the links added to the network to allow refinement of the initial rules. At present,
KBANN is restricted to non-recursive, propositional (i.e., variable-free) sets of rules.
Numbers beside the unit names in Figure 3c are the biases of the units. These
biases are set so that the unit is active if and only if the corresponding consequent
in the knowledge base is true.
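The translation scheme just described can be sketched in a few lines of Python. This is our
simplified reading of the text above, not the KBANN implementation itself: weight +3 for
positive antecedents, -3 for negated ones, and a bias of 3p - 1.5 (p = number of positive
antecedents) as one consistent choice that makes a unit active exactly when its rule is
satisfied; the precise bias-setting rule in [12] may differ.

```python
def conjunct_to_unit(pos, neg):
    """Translate one conjunctive rule into weights and a bias for an ANN unit.

    pos, neg: names of positive and negated antecedents. With the convention
    that a unit is active when its weighted input sum exceeds the bias, the
    unit fires iff all of pos are on and none of neg are on.
    """
    weights = {a: 3.0 for a in pos}
    weights.update({a: -3.0 for a in neg})
    bias = 3.0 * len(pos) - 1.5  # one consistent choice; see the note above
    return weights, bias

# The rules of Figure 3a. The disjunctive consequent B gets one unit per rule
# (units X and Y in Figure 3c), OR-ed together with a low threshold.
unit_X = conjunct_to_unit(pos=["G"], neg=["F"])    # B :- not F, G.
unit_Y = conjunct_to_unit(pos=[], neg=["H"])       # B :- not H.
unit_A = conjunct_to_unit(pos=["B", "C"], neg=[])  # A :- B, C.
unit_C = conjunct_to_unit(pos=["I", "J"], neg=[])  # C :- I, J.
```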
As this example illustrates, the use of KBANN to initialize ANNs has two principal
benefits. First, it indicates the features believed to be important to an example's
classification. Second, it specifies important derived features; through their deduction the complexity of an ANN's final decision is reduced.
A :- B, C.
B :- not F, G.
B :- not H.
C :- I, J.
[Figure 3 diagram omitted: (a) the rules above; (b) their hierarchical structure; (c) the
resulting ANN over units A, B, C, X, Y and inputs F through K.]
Figure 3: Translation of a Knowledge Base into an ANN
3
Problem Definition
The splice-junction problem is to determine into which of the following three categories a specified location in a DNA sequence falls: (1) exon/intron borders, referred
to as donors, (2) intron/exon borders, referred to as acceptors, and (3) neither. To
address this problem we provide KBANN with two sets of information: a set of DNA
sequences 60 nucleotides long that are classified as to the category membership of
their center and a domain theory that describes when the center of a sequence
corresponds to one of these three categories.
Table 1 contains the initial domain theory used in the splice-junction recognition
task. A special notation is used to specify locations in the DNA sequence. When a
rule's antecedents refer to input features, they first state a relative location in the
sequence vector, then the DNA symbol that must occur (e.g., @3=A). Positions
are numbered negatively or positively depending on whether they occur before or
after the possible junction location. By biological convention, position numbers of
zero are not used. The set of rules was derived in a straightforward fashion from
the biological literature [13]. Briefly, these rules state that a donor or acceptor
sequence is present if characters from the canonical sequence (Figure 2) are present
and triplets known as stop codons are absent in the appropriate positions.
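To make the position notation concrete, here is a sketch (ours) of how the donor rule and
its don-stop exceptions from Table 1 below could be evaluated on a sequence window;
positions skip zero by the biological convention just described.

```python
M, R = "AC", "AG"  # ambiguity codes, as defined in the last line of Table 1

# (start position, codon) pairs for the nine don-stop rules in Table 1
DON_STOPS = [(-3, "TAA"), (-4, "TAG"), (-4, "TGA"), (-3, "TAG"), (-3, "TGA"),
             (-5, "TAA"), (-4, "TAA"), (-5, "TAG"), (-5, "TGA")]

def at(seq, pos, junction):
    """Nucleotide at rule position pos (no position 0); junction is the string
    index of position +1."""
    return seq[junction + pos - 1] if pos > 0 else seq[junction + pos]

def is_donor(seq, junction):
    """Evaluate the donor rule of Table 1 (our sketch, not the KBANN encoding)."""
    consensus = (at(seq, -3, junction) in M and at(seq, -2, junction) == "A" and
                 at(seq, -1, junction) == "G" and at(seq, 1, junction) == "G" and
                 at(seq, 2, junction) == "T" and at(seq, 3, junction) in R and
                 at(seq, 4, junction) == "A" and at(seq, 5, junction) == "G" and
                 at(seq, 6, junction) == "T")
    stop = any(all(at(seq, start + i, junction) == c for i, c in enumerate(codon))
               for start, codon in DON_STOPS)
    return consensus and not stop

print(is_donor("AACAGGTAAGTAA", junction=5))  # True: CAG|GTAAGT, no stop codon
```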
The examples were obtained by taking the documented split genes from all primate
gene entries in Genbank release 64.1 [1] that are described as complete. Each
training example consists of a window that covers 30 nucleotides before and after
each donor and acceptor site. This procedure resulted in 751 examples of acceptors
and 745 examples of donors. Negative examples are derived from similarly-sized
windows, which did not cross an intron/exon boundary, sampled at random from
these sequences. Note that this differs from the usual practice of generating random sequences with base-frequency composition the same as the positive instances.
However, we feel that this provides a more realistic training set, since DNA is known
to be highly non-random [3]. Although many more negative examples were available, we used approximately as many negative examples as there were donors
and acceptors combined. Thus, the total data set we used had 3190 examples.
Table 1: Knowledge Base for Splice-Junctions
donor :- @-3=M, @-2=A, @-1=G, @1=G, @2=T, @3=R,
         @4=A, @5=G, @6=T, not(don-stop).
don-stop :- @-3=T, @-2=A, @-1=A.
don-stop :- @-4=T, @-3=A, @-2=G.
don-stop :- @-4=T, @-3=G, @-2=A.
don-stop :- @-3=T, @-2=A, @-1=G.
don-stop :- @-3=T, @-2=G, @-1=A.
don-stop :- @-5=T, @-4=A, @-3=A.
don-stop :- @-4=T, @-3=A, @-2=A.
don-stop :- @-5=T, @-4=A, @-3=G.
don-stop :- @-5=T, @-4=G, @-3=A.
acceptor :- pyr-rich, @-3=Y, @-2=A, @-1=G, @1=G, @2=K, not(acc-stop).
pyr-rich :- 6 of (@-15=Y, @-14=Y, @-13=Y, @-12=Y, @-11=Y,
                  @-10=Y, @-9=Y, @-8=Y, @-7=Y, @-6=Y).
acc-stop :- @2=T, @3=A, @4=A.
acc-stop :- @1=T, @2=A, @3=A.
acc-stop :- @1=T, @2=A, @3=G.
acc-stop :- @2=T, @3=A, @4=G.
acc-stop :- @1=T, @2=G, @3=A.
acc-stop :- @2=T, @3=G, @4=A.
acc-stop :- @3=T, @4=A, @5=A.
acc-stop :- @3=T, @4=A, @5=G.
acc-stop :- @3=T, @4=G, @5=A.
R :- A.   R :- G.   Y :- C.   Y :- T.   M :- C.   M :- A.   K :- G.   K :- T.
The network created by KBANN for the splice-junction problem has one output
unit for each category to be learned, and four input units for each nucleotide in
the DNA training sequences, one for each of the four values in the DNA alphabet. In
addition, the rules for acc-stop, don-stop, R, Y, and M are considered definitional.
Thus, the weights on the links and biases into these units were frozen. Also, the
pyr-rich rule only requires that six of its ten antecedents be true. Finally, there are
no rules in Table 1 for recognizing negative examples. So we added four unassigned
hidden units and connected them to all of the inputs and to the output for the
neither category. The final result is that the network created by KBANN has 286
units: 3 output units, 240 input units, 31 fixed-weight hidden units, and 12 tunable
hidden units.
4
Experimental Results
Figure 4 contains a learning curve plotting the percentage of errors made on a set
of "testing" examples by KBANN-initialized networks, as a function of the number
of training examples. Training examples were obtained by randomly selecting examples from the population of 3190 examples described above. Testing examples
consisted of all examples in the population that were not used for training. Each
data point represents the average of 20 repetitions of this procedure.
For comparison, the error rate for a randomly-initialized, fully-connected, two-layer
ANN with 24 hidden units is also plotted in Figure 4. (This curve is expected to have
an error rate of 67% for zero training examples. Test results were slightly better due
to statistical fluctuations.) Clearly, the KBANN-initialized networks learned faster
than randomly-initialized ANNs, making less than half the errors of the randomly-initialized ANNs when there were 100 or fewer training examples. However, when
large numbers of training examples were provided, the randomly-initialized ANNs
had a slightly lower error rate (5.5% vs. 6.4% for KBANN). All of the differences in
the figure are statistically significant.

[Figure 4 plot omitted: test-set error rate (%, from 0 to 60) versus number of training
examples (0 to 2000) for the KBANN network and a randomly-weighted network.]
Figure 4: Learning Curve for Splice Junctions
5
Related and Future Research
Several others have investigated predicting splice-junctions. Staden [10] has devised
a weight-matrix method that uses a perceptron-like algorithm to find a weighting
function that discriminates two sets (true and false) of boundary patterns in known
sequences. Nakata et al. [7] employ a combination of methods to distinguish between exons and introns, including Fickett's statistical method [5]. When applied to
human sequences in the Genbank database, this approach correctly identified 81%
of true splice-junctions. Finally, Lapedes et al. [6] also applied neural networks and
decision-tree builders to the splice-junction task. They reported neural-network accuracies of 92% and claimed their neural-network approach performed significantly
better than the other approaches in the literature at that time. The accuracy we report in this paper represents an improvement over these results. However, it should
be noted that these experiments were not all performed under the same conditions.
One weakness of neural networks is that it is hard to understand what they have
learned. We are investigating methods for the automatic translation into symbolic
rules of trained KBANN-initialized networks [11]. These techniques take advantage of
the human-comprehensible starting configuration of KBANN's networks to create a
small set of hierarchically-structured rules that accurately reflect what the network
learned during training. We are also currently investigating the use of richer splice-junction domain theories, which we hope will improve KBANN's accuracy.
6
Conclusion
The KBANN approach allows ANNs to refine preexisting knowledge, generating ANN
topologies that are well-suited to the task they are intended to learn. KBANN does
this by using a knowledge base of approximately correct, domain-specific rules to
determine the ANN's structure and initial weights. This provides an alternative to
techniques that either shrink [2] or grow [4] networks to the "right" size. Our experiments on splice-junctions, and previously on bacterial promoters [12], demonstrate
that the KBANN approach can substantially reduce the number of training examples
needed to reach a given level of accuracy on future examples.
This research was partially supported by Office of Naval Research Grant N00014-90-J-1941, National
Science Foundation Grant IRI-9002413, and Department of Energy Grant DE-FG02-91ER61129.
References
[1] R. J. Breathnach, J. L. Mandel, and P. Chambon. Ovalbumin gene is split in chicken
DNA. Nature, 270:314-319, 1977.
[2] Y. Le Cun, J. Denker, and S. Solla. Optimal brain damage. Advances in Neural
Information Processing Systems 2, pages 598-605, 1990.
[3] G. Dykes, R. Bambara, K. Marians, and R. Wu. On the statistical significance of
primary structural features found in DNA-protein interaction sites. Nucleic Acids
Research, 2:327-345, 1975.
[4] S. Fahlman and C. Lebiere. The cascade-correlation learning architecture. Advances
in Neural Information Processing Systems 2, pages 524-532, 1990.
[5] J. W. Fickett. Recognition of protein coding regions in DNA sequences. Nucleic Acids
Research, 10:5303-5318, 1982.
[6] A. Lapedes, D. Barnes, C. Burks, R. Farber, and K. Sirotkin. Application of neural networks and other machine learning algorithms to DNA sequence analysis. In
Computers and DNA, pages 157-182. Addison-Wesley, 1989.
[7] K. Nakata, M. Kanehisa, and C. DeLisi. Prediction of splice junctions in mRNA sequences. Nucleic Acids Research, 13:5327-5340, 1985.
[8] M. C. O'Neill. Escherichia coli promoters: 1. Consensus as it relates to spacing
class, specificity, repeat substructure, and three-dimensional organization. Journal of
Biological Chemistry, 264:5522-5530, 1989.
[9] J. W. Shavlik and G. G. Towell. An approach to combining explanation-based and
neural learning algorithms. Connection Science, 1:233-255, 1989.
[10] R. Staden. Computer methods to locate signals in DNA sequences. Nucleic Acids
Research, 12:505-519, 1984.
[11] G. G. Towell, M. Craven, and J. W. Shavlik. Automated interpretation of knowledge
based neural networks. Technical report, University of Wisconsin, Computer Sciences
Department, Madison, WI, 1991.
[12] G. G. Towell, J. W. Shavlik, and M. O. Noordewier. Refinement of approximately
correct domain theories by knowledge-based neural networks. In Proc. of the Eighth
National Conf. on Artificial Intelligence, pages 861-866, Boston, MA, 1990.
[13] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. A. Steitz, and A. M. Weiner. Molecular
Biology of the Gene, pages 634-647, 1987.
Sequential effects reflect parallel learning of multiple
environmental regularities
Matthew H. Wilder*, Matt Jones†, & Michael C. Mozer*
* Dept. of Computer Science    † Dept. of Psychology
University of Colorado
Boulder, CO 80309
<[email protected], [email protected], [email protected]>
Abstract
Across a wide range of cognitive tasks, recent experience influences behavior. For
example, when individuals repeatedly perform a simple two-alternative forcedchoice task (2AFC), response latencies vary dramatically based on the immediately preceding trial sequence. These sequential effects have been interpreted
as adaptation to the statistical structure of an uncertain, changing environment
(e.g., Jones and Sieck, 2003; Mozer, Kinoshita, and Shettel, 2007; Yu and Cohen, 2008). The Dynamic Belief Model (DBM) (Yu and Cohen, 2008) explains
sequential effects in 2AFC tasks as a rational consequence of a dynamic internal
representation that tracks second-order statistics of the trial sequence (repetition
rates) and predicts whether the upcoming trial will be a repetition or an alternation of the previous trial. Experimental results suggest that first-order statistics
(base rates) also influence sequential effects. We propose a model that learns both
first- and second-order sequence properties, each according to the basic principles of the DBM but under a unified inferential framework. This model, the Dynamic Belief Mixture Model (DBM2), obtains precise, parsimonious fits to data.
Furthermore, the model predicts dissociations in behavioral (Maloney, Martello,
Sahm, and Spillmann, 2005) and electrophysiological studies (Jentzsch and Sommer, 2002), supporting the psychological and neurobiological reality of its two
components.
1
Introduction
Picture an intense match point at the Wimbledon tennis championship, Nadal on the defense from
Federer's powerful shots. Nadal returns three straight hits to his forehand side. In the split second
before the ball is back in his court, he forms an expectation about where Federer will hit the ball
next: will the streak of forehands continue, or will there be a switch to his backhand? As the point
continues, Nadal gains the upper ground and begins making Federer alternate from forehand to
backhand to forehand. Now Federer finds himself trying to predict whether or not this alternating
pattern will be continued with the next shot. These two are caught up in a high-stakes game of
sequential effects: their actions and expectations for the current shot have a strong dependence on
the past few shots. Sequential effects play a ubiquitous role in our lives: our actions are constantly
affected by our recent experiences.
In controlled environments, sequential effects have been observed across a wide range of tasks and
experimental paradigms, and aspects of cognition ranging from perception to memory to language
to decision making. Sequential effects often occur without awareness and cannot be overridden by
instructions, suggesting a robust cognitive inclination to adapt behavior in an ongoing manner. Surprisingly, people exhibit sequential effects even when they are aware that there is no dependence
structure to the environment. Progress toward understanding the intricate complexities of sequential
effects will no doubt provide important insights into the ways in which individuals adapt to their
environment and make predictions about future outcomes.

[Figure 1 plots omitted: mean response time for each four-trial R/A history, RRRR through AAAA,
for the data of Cho et al. (2002), shown with (a) the DBM fit and (b) the DBM2 fit.]
Figure 1: (a) DBM fit to the behavioral data from Cho et al. (2002). Predictions within each of the four
groups are monotonically increasing or decreasing. Thus the model is unable to account for the two circled
relationships. This fit accounts for 95.8% of the variance in the data. (p0 = Beta(2.6155, 2.4547), α =
0.4899) (b) The fit to the same data obtained from DBM2, in which probability estimates are derived from both
first-order and second-order trial statistics. 99.2% of the data variance is explained by this fit. (α = 0.3427,
w = 0.4763)
One classic domain where reliable sequential effects have been observed is in two-alternative forced-choice (2AFC) tasks (e.g., Jentzsch and Sommer, 2002; Hale, 1967; Soetens et al., 1985; Cho et al.,
2002). In this type of task, participants are shown one of two different stimuli, which we denote
as X and Y, and are instructed to respond as quickly as possible by mapping the stimulus to a
corresponding response, say pressing the left button for X and the right button for Y. Response time
(RT) is recorded, and the task is repeated several hundred or thousand times. To measure sequential
effects, the RT is conditioned on the recent trial history. (In 2AFC tasks, stimuli and responses are
confounded; as a result, it is common to refer to the "trial" instead of the "stimulus" or "response". In
this paper, "trial" will be synonymous with the stimulus-response pair.) Consider a sequence such
as XYYXX, where the rightmost symbol is the current trial (X), and the symbols to the left are
successively earlier trials. Such a four-back trial history can be represented in a manner that focuses
not on the trial identities, but on whether trials are repeated or alternated. With R and A denoting
repetitions and alternations, respectively, the trial sequence XYYXX can be encoded as ARAR.
Note that this R/A encoding collapses across isomorphic sequences XYYXX and YXXYY.
The small blue circles in Figure 1a show the RTs from Cho et al. (2002) conditioned on the recent
trial history. Along the abscissa in Figure 1a are all four-back sequence histories ordered according
to the R/A encoding. The left half of the graph represents cases where the current trial is a repetition
of the previous, and the right half represents cases where the current trial is an alternation. The
general pattern we see in the data is a triangular shape that can be understood by comparing the
two extreme points on each half, RRRR vs. AAAR and RRRA vs. AAAA. It seems logical that
the response to the current trial in RRRR will be significantly faster than in AAAR (RT_RRRR <
RT_AAAR) because in the RRRR case, the current trial matches the expectation built up over the past
few trials, whereas in the AAAR case, the current trial violates the expectation of an alternation. The
same argument applies to RRRA vs. AAAA, leading to the intuition that RT_RRRA > RT_AAAA.
The trial histories are ordered along the abscissa so that the left half is monotonically increasing
and the right half is monotonically decreasing following the same line of intuition, i.e., many recent
repetitions to many recent alternations.
2
Toward A Rational Model Of Sequential Effects
Many models have been proposed to capture sequential effects, including Estes (1950), Anderson
(1960), Laming (1969), and Cho et al. (2002). Other models have interpreted sequential effects as
adaptation to the statistical structure of a dynamic environment (e.g., Jones and Sieck, 2003; Mozer,
Kinoshita, and Shettel, 2007). In this same vein, Yu and Cohen (2008) recently suggested a rational
[Figure 2 diagrams omitted: three two-slice graphical models. Each has a change variable C_t and
a rate variable γ_t generating the observation (R_t in panel (a), S_t in panels (b) and (c)); panel (c)
adds the baserate variable β_t alongside γ_t.]
Figure 2: Three graphical models that capture sequential dependencies. (a) Dynamic Belief Model (DBM) of
Yu and Cohen (2008). (b) A reformulation of DBM in which the output variable, S_t, is the actual stimulus
identity instead of the repetition/alternation representation used in DBM. (c) Our proposed Dynamic Belief
Mixture Model (DBM2). Models are explained in more detail in the text.
explanation for sequential effects such as those observed in Cho et al. (2002). According to their
Dynamic Belief Model (DBM), individuals estimate the statistics of a nonstationary environment.
The key contribution of this work is that it provides a rational justification for sequential effects that
have been previously viewed as resulting from low-level brain mechanisms such as residual neural
activation.
DBM describes performance in 2AFC tasks as Bayesian inference over whether the next trial in the
sequence will be a repetition or an alternation of the previous trial, conditioned on the trial history. If
R_t is the Bernoulli random variable that denotes whether trial t is a repetition (R_t = 1) or alternation
(R_t = 0) of the previous trial, DBM determines P(R_t | R_{1:t-1}), where R_{1:t-1} denotes the trial sequence
preceding trial t, i.e., R_{1:t-1} = (R_1, R_2, ..., R_{t-1}).
DBM assumes a generative model, shown in Figure 2a, in which R_t = 1 with probability γ_t and
R_t = 0 with probability 1 − γ_t. The random variable γ_t describes a characteristic of the environment.
According to the generative model, the environment is nonstationary and γ_t can either retain the
same value as on trial t − 1 or it can change. Specifically, C_t denotes whether the environment
has changed between t − 1 and t (C_t = 1) or not (C_t = 0). C_t is a Bernoulli random variable
with success probability α. If the environment does not change, γ_t = γ_{t−1}. If the environment
changes, γ_t is drawn from a prior distribution, which we refer to as the reset prior, denoted
p0(γ) ∼ Beta(a, b).
Before each trial t of a 2AFC task, DBM computes the probability of the upcoming stimulus conditioned on the trial history. The model assumes that the perceptual and motor system is tuned based
on this expectation, so that RT will be a linearly decreasing function of the probability assigned to
the event that actually occurs, i.e., of P(R_t = R | R_{1:t−1}) on repetition trials and of P(R_t = A | R_{1:t−1})
= 1 − P(R_t = R | R_{1:t−1}) on alternation trials.
The red plusses in Figure 1 show DBM's fit to the data from Cho et al. (2002). DBM has five
free parameters that were optimized to fit the data. The parameters are: the change probability,
α; the imaginary counts of the reset prior, a and b; and two additional parameters to map model
probabilities to RTs via an affine transform.
2.1
Intuiting DBM predictions
Another contribution of Yu and Cohen (2008) is the mathematical demonstration that DBM is approximately equivalent to an exponential filter over trial histories. That is, the probability that the
current stimulus is a repetition is a weighted sum of past observations, with repetitions being scored
as 1 and alternations as 0, and with weights decaying exponentially as a function of lag. The exponential filter gives insight into how DBM probabilities will vary as a function of trial history.
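That equivalence is easy to state in code. The sketch below is ours, and the decay rate is
illustrative rather than fitted; it reproduces the ordering arguments used in the next paragraph.

```python
def p_repetition(history, decay=0.6):
    """Exponentially weighted estimate that the next trial is a repetition.

    history: string over {'R', 'A'}, oldest trial first; decay in (0, 1).
    """
    num = den = 0.0
    for lag, trial in enumerate(reversed(history), start=1):
        w = decay ** lag          # more recent trials get larger weights
        num += w * (trial == 'R')
        den += w
    return num / den

print(p_repetition("ARR") > p_repetition("AAR"))  # True: ARR? predicts R more strongly
```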
Consider two 4-back trial histories: an alternation followed by two repetitions (ARR?) and two
alternations followed by a repetition (AAR?), where the ? indicates that the current trial type is
unknown. An exponential filter predicts that ARR? will always create a stronger expectation for
an R on the current trial than AAR? will, because the former includes an additional past repetition.
Thus, if the current trial is in fact a repetition, the model predicts a faster RT for ARR? compared
to AAR? (i.e., RT_ARRR < RT_AARR). Conversely, if the current trial is an alternation, the model
predicts RT_ARRA > RT_AARA. Similarly, if two sequences with the same number of Rs and As
are compared, for example RAR? and ARR?, the model predicts RT_RARR > RT_ARRR and
RT_RARA < RT_ARRA because more recent trials have a stronger influence.
Comparing the exponential filter predictions for adjacent sequences in Figure 1 yields the expectation that the RTs will be monotonically increasing in the left two groups of four and monotonically
decreasing in the two right groups. The data are divided into groups of 4 because the relationships
between histories like AARR and RRAR depend on the specific parameters of the exponential filter, which determine whether one recent A will outweigh two earlier As. It is clear in Figure 1 that
the DBM predictions follow this pattern.
2.2
What's missing in DBM
DBM offers an impressive fit to the overall pattern of the behavioral data. Circled in Figure 1,
however, we see two significant pairs of sequence histories for which the monotonicity prediction
does not hold. These are reliable aspects of the data and are not measurement error. Consider
the circle on the left, in which RT_ARAR > RT_RAAR for the human data. Because DBM functions
approximately as an exponential filter, and the repetition in the trial history is more recent for ARAR
than for RAAR, DBM predicts RT_ARAR < RT_RAAR. An exponential filter, and thus DBM, is
unable to account for this deviation in the data.
To understand this mismatch, we consider an alternative representation of the trial history: the first-order sequence, i.e., the sequence of actual stimulus values. The two R/A sequences ARAR and
RAAR correspond to stimulus sequences XYYXX and XXYXX. If we consider an exponential filter on the actual stimulus sequence, we obtain the opposite prediction from that of DBM:
RT_XYYXX > RT_XXYXX because there are more recent occurrences of X in the latter sequence.
The other circled data in Figure 1a correspond to an analogous situation. Again, DBM also makes
a prediction inconsistent with the data, that RT_ARAA > RT_RAAA, whereas an exponential filter on
stimulus values predicts the opposite outcome: RT_XYYXY < RT_XXYXY. Of course this analysis
leads to predictions for other pairs of points where DBM is consistent with the data and a stimulus-based exponential filter is inconsistent. Nevertheless, the variations in the data suggest that more
importance should be given to the actual stimulus values.
In general, we can divide the sequential effects observed in the data into two classes: first- and
second-order effects. First-order sequential effects result from the priming of specific stimulus or
response values. We refer to this as a first-order effect because it depends only on the stimulus
values rather than a higher-order representation such as the repetition/alternation nature of a trial.
These effects correspond to the estimation of the baserate of each stimulus or response value. They
are observed in a wide range of experimental paradigms and are referred to as stimulus priming
or response priming. The effects captured by DBM, i.e. the triangular pattern in RT data, can be
thought of as a second-order effect because it reflects learning of the correlation structure between
the current trial and the previous trial. In second-order effects, the actual stimulus value is irrelevant
and all that matters is whether the stimulus was a repetition of the previous trial. As DBM proposes,
these effects essentially arise from an attempt to estimate the repetition rate of the sequence.
DBM naturally produces second-order sequential effects because it abstracts over the stimulus level
of description: observations in the model are R and A instead of the actual stimuli X and Y . Because
of this abstraction, DBM is inherently unable to exhibit first-order effects. To gain an understanding
of how first-order effects could be integrated into this type of Bayesian framework, we reformulate
the DBM architecture. Figure 2b shows an equivalent depiction of DBM in which the generative
process on trial t produces the actual stimulus value, denoted S_t. S_t is conditioned on both the
repetition probability, γ_t, and the previous stimulus value, S_{t−1}. Under this formulation, S_t = S_{t−1}
with probability γ_t, and S_t equals the opposite of S_{t−1} (i.e., XY or YX) with probability 1 − γ_t.
An additional benefit of this reformulated architecture is that it can represent first-order effects if we
switch the meaning of γ. In particular, we can treat γ as the probability of the stimulus taking on a
specific value (X or Y) instead of the probability of a repetition. S_t is then simply a draw from a
Bernoulli process with rate γ. Note that for modeling a first-order effect with this architecture, the
conditional dependence of S_t on S_{t−1} becomes unnecessary. The nonstationarity of the environment, as represented by the change variable C, behaves in the same way regardless of whether we
use the model to represent first- or second-order structure.
3
Dynamic Belief Mixture Model
The complex contributions of first- and second-order effects to the full pattern of observed sequential
effects suggest the need for a model with more explanatory power than DBM. It seems clear that
individuals are performing a more sophisticated inference about the statistics of the environment
than proposed by DBM. We have shown that the DBM architecture can be reformulated to generate
first-order effects by having it infer the baserate instead of the repetition rate of the sequence, but the
empirical data suggest both mechanisms are present simultaneously. Thus the challenge is to merge
these two effects into one model that performs joint inference over both environmental statistics.
Here we propose a Bayesian model that captures both first- and second-order effects, building on the
basic principles of DBM. According to this new model, which we call the Dynamic Belief Mixture
Model (DBM2), the learner assumes that the stimulus on a given trial is probabilistically affected
by two factors: the random variable β, which represents the sequence baserate, and the random
variable γ, which represents the repetition rate. The combination of these two factors is governed
by a mixture weight w that represents the relative weight of the β component. As in DBM, the
environment is assumed to be nonstationary, meaning that on each trial, with probability α, β and γ
are jointly resampled from the reset prior, p0(β, γ), which is uniform over [0, 1]². Figure 2c shows
the graphical architecture for this model. This architecture is an extension of our reformulation of
the DBM architecture in Figure 2b. Importantly, the observed variable, S, is the actual stimulus
value instead of the repetition/alternation representation used in DBM. This architecture allows for
explicit representation of the baserate, through the direct influence of β_t on the physical stimulus
value S_t, as well as representation of the repetition rate through the joint influence of γ_t and the
previous stimulus S_{t−1} on S_t. Formally, we express the probability of S_t given β, γ, and S_{t−1} as
shown in Equation 1.
P(S_t = X | β_t, γ_t, S_{t−1} = X) = w·β_t + (1 − w)·γ_t
P(S_t = X | β_t, γ_t, S_{t−1} = Y) = w·β_t + (1 − w)·(1 − γ_t)        (1)
DBM2 operates by maintaining the iterative prior over β and γ, p(β_t, γ_t | S_{1:t−1}). After each observation, the joint posterior, p(β_t, γ_t | S_{1:t}), is computed using Bayes' rule from the iterative prior and the
likelihood of the most recent observation, as shown in Equation 2.

p(β_t, γ_t | S_{1:t}) ∝ P(S_t | β_t, γ_t, S_{t−1}) p(β_t, γ_t | S_{1:t−1})        (2)

The iterative prior for the next trial is then a mixture of the posterior from the current trial, weighted
by 1 − α, and the reset prior, weighted by α (the probability of change in β and γ):

p(β_{t+1}, γ_{t+1} | S_{1:t}) = (1 − α) p(β_t, γ_t | S_{1:t}) + α p0(β_{t+1}, γ_{t+1})        (3)
The model generates predictions, P(S_t | S_{1:t−1}), by integrating Equation 1 over the iterative prior on
β_t and γ_t. In our simulations, we maintain a discrete approximation to the continuous joint iterative
prior with the interval [0,1] divided into 100 equally spaced sections. Expectations are computed by
summing over the discrete probability mass function.
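Here is a minimal sketch of that grid-based inference, written in NumPy with the notation used
above. The variable names and parameter values are ours and merely illustrative (roughly the
Cho et al. fit); stimuli are coded as 1 for X and 0 for Y.

```python
import numpy as np

N = 100                      # grid resolution per dimension, as in the text
alpha, w = 0.34, 0.48        # change probability and mixture weight (illustrative)
axis = np.linspace(0.005, 0.995, N)
B, G = np.meshgrid(axis, axis, indexing="ij")  # baserate beta, repetition rate gamma

reset_prior = np.full((N, N), 1.0 / N**2)      # uniform reset prior over [0, 1]^2
prior = reset_prior.copy()                     # iterative prior p(beta_t, gamma_t | S_1:t-1)

def likelihood_x(prev_stim):
    """P(S_t = X | beta, gamma, S_{t-1}) on the grid, per Equation 1."""
    rep = G if prev_stim == 1 else 1 - G
    return w * B + (1 - w) * rep

def predict_x(prior, prev_stim):
    """P(S_t = X | S_1:t-1): integrate Equation 1 over the iterative prior."""
    return float(np.sum(likelihood_x(prev_stim) * prior))

def update(prior, prev_stim, stim):
    """Bayes' rule (Equation 2), then mix in the reset prior (Equation 3)."""
    lik_x = likelihood_x(prev_stim)
    post = (lik_x if stim == 1 else 1 - lik_x) * prior
    post /= post.sum()
    return (1 - alpha) * post + alpha * reset_prior
```

An RT prediction then follows from the probability assigned to the stimulus that actually
occurred, via the affine transform described in the text.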
Figure 1b shows that DBM2 provides an excellent fit to the Cho et al. data, explaining the combination of both first- and second-order effects. To account for the overall advantage of repetition trials
over alternation trials in the data, a repetition bias had to be built into the reset prior in DBM. In
DBM2, the first-order component naturally introduces an advantage for repetition trials. This occurs
because the estimate of β_t is shifted toward the value of the previous stimulus, S_{t−1}, thus leading
to a greater expectation that the same value will appear on the current trial. This fact eliminates the
need for a nonuniform reset prior in DBM2. We use a uniform reset prior in all DBM2 simulations,
thus allowing the model to operate with only four free parameters: α, w, and the two parameters for
the affine transform from model probabilities to RTs.
[Figure 3 plots omitted: (a) response time for each four-trial R/A history from Jentzsch and
Sommer (2002) Experiment 1 with the DBM2 fit; (b) PSI, from P bias through neutral to N bias,
for each four-trial stimulus history from Maloney et al. (2005) Experiment 1 with the DBM2 fit.]
Figure 3: DBM2 fits for the behavioral data from (a) Jentzsch and Sommer (2002) Experiment 1, which accounts
for 96.5% of the data variance (α = 0.2828, w = 0.3950), and (b) Maloney et al. (2005) Experiment 1, which
accounts for 97.7% of the data variance (α = 0.0283, w = 0.3591).

The nonuniform reset prior in DBM allows it to be biased either for repetition or alternation. This
flexibility is important in a model, because different experiments show different biases, and the
biases are difficult to predict. For example, the Jentzsch and Sommer experiment showed little
bias, but a replication we performed (with the same stimuli and same responses) obtained a strong
alternation bias. It is our hunch that the bias should not be cast as part of the computational theory
(specifically, the prior); rather, the bias reflects attentional and perceptual mechanisms at play, which
can introduce varying degrees of an alternation bias. Specifically, four classic effects have been
reported in the literature that make it difficult for individuals to process the same stimulus two times
in a row at a short lag: attentional blink (Raymond et al., 1992), inhibition of return (Posner and
Cohen, 1984), repetition blindness (Kanwisher, 1987), and the Ranschburg effect (Jahnke, 1969).
For example, with repetition blindness, processing of an item is impaired if it occurs within 500 ms
of another instance of the same item in a rapid serial stream; this condition is often satisfied with
2AFC. In support of our view that fast-acting secondary mechanisms are at play in 2AFC, Jentzsch
and Sommer (Experiment 2) found that using a very short lag between each response and the next
stimulus modulated sequential effects in a difficult-to-interpret manner. Explaining this finding via
a rational theory would be challenging. To allow for various patterns of bias across experiments, we
introduced an additional parameter to our model, an offset specifically for repetition trials, which
can serve as a means of removing the influence of the effects listed above. This parameter plays
much the same role as DBM's priors. Although it is not as elegant, we believe it is more correct,
because the bias should be considered as part of the neural implementation, not the computational
theory.
4
Other Tests of DBM2
With its ability to represent both first- and second-order effects, DBM2 offers a robust model for a
range of sequential effects. In Figure 3a, we see that DBM2 provides a close fit to the data from
Experiment 1 of Jentzsch and Sommer (2002). The general design of this 2AFC task is similar to
the design in Cho et al. (2002), though some details vary. Notably, we see a slight advantage on
alternation trials, as opposed to the repetition bias seen in Cho et al.
Surprisingly, DBM2 is able to account for the sequential effects in other binary decision tasks that
do not fit into the 2AFC paradigm. In Experiment 1 of Maloney et al. (2005), subjects observed
a rotation of two points on a circle and reported whether the direction of rotation was positive
(clockwise) or negative (counterclockwise). The stimuli were constructed so that the direction of
motion was ambiguous, but a particular variable related to the angle of motion could be manipulated
to make subjects more likely to perceive one direction or the other. Psychophysical techniques were
used to estimate the Point of Subjective Indifference (PSI), the angle at which the observer was
equally likely to make either response. PSI measures the subject?s bias toward perceiving a positive
as opposed to a negative rotation. Maloney et. al. found that this bias in perceiving rotation was
influenced by the recent trial history. Figure 3b shows the data for this experiment rearranged to be
consistent with the R/A orderings used elsewhere (the sequences on the abscissa show the physical
stimulus values, ending with Trial t ? 1). The bias, conditioned on the 4-back trial history, follows
a similar pattern to that seen with RTs in Cho et al. (2002) and Jentzsch and Sommer (2002).
6
Table 1: A comparison between the % of data variance explained by DBM and DBM2.
DBM
DBM2
Cho
95.8
99.2
Jentzsch 1
95.5
96.5
Maloney 1
96.1
97.7
In modeling Experiment 1, we assumed that PSI reflects the subject's probabilistic expectation about
the upcoming stimulus. Before each trial, we computed the model's probability that the next stimulus would be P, and then converted this probability to the PSI bias measure using an affine transform
To assess the value of DBM2, we also fit DBM to these two experiments. Table 1 shows the comparison between DBM and DBM2 for both datasets as well as Cho et al. The percentage of variance
explained by the models is used as a measure for comparison. Across all three experiments, DBM2
captures a greater proportion of the variance in the data.
5
EEG evidence for first-order and second-order predictions
DBM2 proposes that subjects in binary choice tasks track both the baserate and the repetition rate
in the sequence. Therefore an important source of support for the model would be evidence for the
psychological separability of these two mechanisms. One such line of evidence comes from Jentzsch
and Sommer (2002), who used electroencephalogram (EEG) recordings to provide additional insight
into the mechanisms involved in the 2AFC task. The EEG was used to record subjects' lateralized
readiness potential (LRP) during performance of the task. LRP essentially provides a way to identify
the moment of response selection: a negative spike in the LRP signal in motor cortex reflects initiation of a response command in the corresponding hand. Jentzsch and Sommer present two different
ways of analyzing the LRP data: stimulus-locked LRP (S-LRP) and response-locked LRP (LRP-R).
The S-LRP interval measures the time from stimulus onset to response activation on each trial. The
LRP-R interval measures the time elapsed between response activation and the actual response. Together, these two measures provide a way to divide the total RT into a stimulus-processing stage and
a response-execution stage.
Interestingly, the S-LRP and LRP-R data exhibit different patterns of sequential effects when conditioned on the 4-back trial histories, as shown in Figure 4. DBM2 offers a natural explanation for the
different patterns observed in the two stages of processing, because they align well with the division
between first- and second-order sequential effects. In the S-LRP data, the pattern is predominantly
second-order, i.e. RT on repetition trials increases as more alternations appear in the recent history,
and RT on alternation trials shows the opposite dependence. In contrast, the LRP-R results exhibit
an effect that is mostly first-order (which could be easily seen if the histories were reordered under
an X/Y representation). Thus we can model the LRP data by extracting the separate contributions
of β and γ in Equation 1. We use the γ component (i.e., the second term on the RHS of Eq. 1) to
predict the S-LRP results and the β component (i.e., the first term on the RHS of Eq. 1) to predict
the LRP-R results. This decomposition is consistent with the model of overall RT, because the sum
of these components provides the model's RT prediction, just as the sum of the S-LRP and LRP-R
measures equals the subject's actual RT (up to an additive constant explained below).
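Continuing the sketch from Section 3 (same grid and names; our decomposition of Equation 1,
consistent with the mapping just described), the two predictions can be read off as the expected
values of the two terms:

```python
def component_predictions(prior, prev_stim):
    """Split the Equation 1 prediction into its first- and second-order parts.

    w * E[beta] follows the first-order (LRP-R-like) pattern, while
    (1 - w) * E[rep term] follows the second-order (S-LRP-like) pattern;
    their sum is the full probability used for the overall RT prediction.
    """
    rep = G if prev_stim == 1 else 1 - G
    first_order = w * float(np.sum(B * prior))
    second_order = (1 - w) * float(np.sum(rep * prior))
    return first_order, second_order
```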
Figure 4 shows the model fits to the LRP data. The parameters of the model were constrained to
be the same as those used for fitting the behavioral results shown in Figure 3a. To convert the
probabilities in DBM2 to durations, we used the same scaling factor used to fit the behavioral data
but allowed for new offsets for the R and A groups for both S-LRP and LRP-R. The offset terms
need to be free because the difference in procedures for estimating S-LRP and LRP-R (i.e., aligning
trials on the stimulus vs. the response) allows the sum of S-LRP and LRP-R to differ from total
RT by an additive constant related to the random variability in RT across trials. Other than these
offset terms, the fits to the LRP measures constitute parameter-free predictions of EEG data from
behavioral data.
[Figure 4 plots omitted: (a) S-LRP interval and (b) LRP-R interval for each four-trial R/A history
from Jentzsch and Sommer (2002) Experiment 1, each with the DBM2 fit; (c) PSI, from P bias
through neutral to N bias, for each three-trial stimulus history from Maloney et al. (2005)
Experiment 2, with the DBM2 fit.]
Figure 4: (a) and (b) show DBM2 fits to the S-LRP and LRP-R results of Jentzsch and Sommer (2002) Experiment 1. Model
parameters are the same as those used for the behavioral fit shown in Figure 3a, except for offset parameters.
DBM2 explains 73.4% of the variance in the S-LRP data and 87.0% of the variance in the LRP-R data. (c)
Behavioral results and DBM2 fits for Experiment 2 of Maloney et al. (2005). The model fit explains 91.9% of
the variance in the data (α = 0.0283, w = 0).
6
More evidence for the two components of DBM2
In the second experiment reported in Maloney et al. (2005), participants only responded on every
fourth trial. The goal of this manipulation was to test whether the sequential effect occurred in
the absence of prior responses. Each ambiguous test stimulus followed three stimuli for which the
direction of rotation was unambiguous and to which the subject made no response. The responses
to the test stimuli were grouped according to the 3-back stimulus history, and a PSI value was
computed for each of the eight histories to measure subjects? bias toward perceiving positive vs.
negative rotation. The results are shown in Figure 4c. As in Figure 3b, the histories on the abscissa
show the physical stimulus values, ending with Trial t ? 1, and the arrangement of these histories is
consistent with the R/A orderings used elsewhere in this paper.
DBM2?s explanation of Jentzsch and Sommer?s EEG results indicates that first-order sequential
effects arise in response processing and second-order effects arise in stimulus processing. Therefore,
the model predicts that, in the absence of prior responses, sequential effects will follow a pure
second-order pattern. The results of Maloney et al.?s Experiment 2 confirm this prediction. Just as
in the S-LRP data of Jentzsch and Sommer (2002), the first-order effects have mostly disappeared,
and the data are well explained by a pure second-order effect (i.e., a stronger bias for alternation
when there are more alternations in the history, and vice versa). We simulated this experiment with
DBM2 using the same value of the change parameter (α) from the fit of Maloney et al.'s Experiment
1. Additionally, we set the mixture parameter, w, to 0, which removes the first-order component of
the model. For this experiment we used different affine transformation values than in Experiment
1 because the modifications in the experimental design led to a generally weaker sequential effect,
which we speculate to have been due to lesser engagement by subjects when fewer responses were
needed. Figure 4c shows the fit obtained by DBM2, which explains 91.9% data variance.
7
Discussion
Our approach highlights the power of modeling simultaneously at the levels of rational analysis and
psychological mechanism. The details of the behavioral data (i.e. the systematic discrepancies from
DBM) pointed to an improved rational analysis and an elaborated generative model (DBM2) that is
grounded in both first- and second-order sequential statistics. In turn, the conceptual organization
of the new rational model suggested a psychological architecture (i.e., separate representation of
baserates and repetition rates) that was borne out in further data. The details of these latter findings
now turn back to further inform the rational model. Specifically, the fits to Jentzsch and Sommer's
EEG data and to Maloney et al.'s intermittent-response experiment suggest that the statistics individuals track are differentially tied to the stimuli and responses in the task. That is, rather than learning
statistics of the abstract trial sequence, individuals learn the baserates (i.e., marginal probabilities) of
responses and the repetition rates (i.e., transition probabilities) of stimulus sequences. This division
suggests further hypotheses about both the empirical nature and the psychological representation
of stimulus sequences and of response sequences, which future experiments and statistical analyses
will hopefully shed light on.
References
M. Jones and W. Sieck. Learning myopia: An adaptive recency effect in category learning. Journal
of Experimental Psychology: Learning, Memory, & Cognition, 29:626–640, 2003.
M. Mozer, S. Kinoshita, and M. Shettel. Sequential dependencies offer insight into cognitive control.
In W. Gray, editor, Integrated Models of Cognitive Systems, pages 180–193. Oxford University
Press, 2007.
A. Yu and J. Cohen. Sequential effects: Superstition or rational behavior? NIPS, pages 1873–1880,
2008.
L. Maloney, M. Dal Martello, C. Sahm, and L. Spillmann. Past trials influence perception of ambiguous motion quartets through pattern completion. Proceedings of the National Academy of
Sciences, 102:3164–3169, 2005.
I. Jentzsch and W. Sommer. Functional localization and mechanisms of sequential effects in serial
reaction time tasks. Perception and Psychophysics, 64(7):1169–1188, 2002.
D. Hale. Sequential effects in a two-choice reaction task. Quarterly Journal of Experimental Psychology, 19:133–141, 1967.
E. Soetens, L. Boer, and J. Hueting. Expectancy or automatic facilitation? Separating sequential
effects in two-choice reaction time. Journal of Experimental Psychology: Human Perception and
Performance, 11:598–616, 1985.
R. Cho, L. Nystrom, E. Brown, A. Jones, T. Braver, P. Holmes, and J. Cohen. Mechanisms underlying dependencies of performance on stimulus history in a two-alternative forced-choice task.
Cognitive, Affective, & Behavioral Neuroscience, 4:283–299, 2002.
W. Estes. Toward a statistical theory of learning. Psychological Review, 57:94–107, 1950.
N. Anderson. Effect of first-order conditional probability in a two-choice learning situation. Journal
of Experimental Psychology, 59(2):73–83, 1960.
D. Laming. Subjective probability in choice-reaction experiments. Journal of Mathematical Psychology, 6:81–120, 1969.
J. Raymond, K. Shapiro, and K. Arnell. Temporary suppression of visual processing in an RSVP task:
An attentional blink? Journal of Experimental Psychology: Human Perception and Performance,
18:849–860, 1992.
M. Posner and Y. Cohen. Components of visual orienting. In H. Bouma and D. G. Bouwhuis,
editors, Attention and Performance X: Control of Language Processes, pages 531–556. Erlbaum,
Hillsdale, NJ, 1984.
N. Kanwisher. Repetition blindness: Type recognition without token individuation. Cognition, 27:
117–143, 1987.
J. Jahnke. The Ranschburg effect. Psychological Review, 76:592–605, 1969.
Online Submodular Minimization
Elad Hazan
IBM Almaden Research Center
650 Harry Rd, San Jose, CA 95120
[email protected]
Satyen Kale
Yahoo! Research
4301 Great America Parkway, Santa Clara, CA 95054
[email protected]
Abstract
We consider an online decision problem over a discrete space in which the loss
function is submodular. We give algorithms which are computationally efficient
and are Hannan-consistent in both the full information and bandit settings.
1 Introduction
Online decision-making is a learning problem in which one needs to choose a decision repeatedly
from a given set of decisions, in an effort to minimize costs over the long run, even in the face of
complete uncertainty about future outcomes. The performance of an online learning algorithm is
measured in terms of its regret, which is the difference between the total cost of the decisions it
chooses, and the cost of the optimal decision chosen in hindsight. A Hannan-consistent algorithm is
one that achieves sublinear regret (as a function of the number of decision-making rounds). Hannan-consistency implies that the average per round cost of the algorithm converges to that of the optimal
decision in hindsight.
In the past few decades, a variety of Hannan-consistent algorithms have been devised for a wide
range of decision spaces and cost functions, including well-known settings such as prediction from
expert advice [10], online convex optimization [15], etc. Most of these algorithms are based on
an online version of convex optimization algorithms. Despite this success, many online decision-making problems still remain open, especially when the decision space is discrete and large (say,
exponential size in the problem parameters) and the cost functions are non-linear.
In this paper, we consider just such a scenario. Our decision space is now the set of all subsets of
a ground set of n elements, and the cost functions are assumed to be submodular. This property
is widely seen as the discrete analogue of convexity, and has proven to be a ubiquitous property in
various machine learning tasks (see [4] for references). A crucial component in these latter results
are the celebrated polynomial time algorithms for submodular function minimization [7].
To motivate the online decision-making problem with submodular cost functions, here is an example
from [11]. Consider a factory capable of producing any subset from a given set of n products E.
Let f : 2^E → R be the cost function for producing any such subset (here, 2^E stands for the set of
all subsets of E). Economics tells us that this cost function should satisfy the law of diminishing
returns: i.e., the additional cost of producing an additional item is lower the more we produce.
Mathematically stated, for all sets S, T ⊆ E such that T ⊆ S, and for all elements i ∈ E, we have
f(T ∪ {i}) − f(T) ≥ f(S ∪ {i}) − f(S).
Such cost functions are called submodular, and frequently arise in real-world economic and other
scenarios. Now, for every item i, let p_i be the market price of the item, which is only determined in
the future based on supply and demand. Thus, the profit from producing a subset S of the items is
P(S) = Σ_{i∈S} p_i − f(S). Maximizing profit is equivalent to minimizing the function −P, which
is easily seen to be submodular as well.
The online decision problem which arises is now to decide which set of products to produce, to maximize profits in the long run, without knowing in advance the cost function or the market prices. A
more difficult version of this problem, perhaps more realistic, is when the only information obtained
is the actual profit of the chosen subset of items, and no information on the profit possible for other
subsets.
In general, the Online Submodular Minimization problem is the following. In each iteration, we
choose a subset of a ground set of n elements, and then observe a submodular cost function which
gives the cost of the subset we chose. The goal is to minimize the regret, which is the difference
between the total cost of the subsets we chose, and the cost of the best subset in hindsight. Depending
on the feedback obtained, we distinguish between two settings, full-information and bandit. In the
full-information setting, we can query each cost function at as many points as we like. In the bandit
setting, we only get to observe the cost of the subset we chose, and no other information is revealed.
Obviously, if we ignore the special structure of these problems, standard algorithms for learning
with expert advice and/or with bandit feedback can be applied to this setting. However, the computational complexity of these algorithms would be proportional to the number of subsets, which is
2^n. In addition, for the submodular bandits problem, even the regret bounds have an exponential
dependence on n. It is hence of interest to design efficient algorithms for these problems. For the
bandit version an even more basic question arises: does there exist an algorithm with regret which
depends only polynomially on n?
In this paper, we answer these questions in the affirmative. We give efficient algorithms for both
problems, with regret which is bounded by a polynomial in n (the underlying dimension) and
sublinearly in the number of iterations. For the full information setting, we give two different randomized algorithms with expected regret O(n√T). One of these algorithms is based on the follow-the-perturbed-leader approach [5, 9]. We give a new way of analyzing such an algorithm. This
analysis technique should have applications for other problems with large decision spaces as well.
This algorithm is combinatorial, strongly polynomial, and can be easily generalized to arbitrary distributive lattices, rather than just all subsets of a given set. The second algorithm is based on convex
analysis. We make crucial use of a continuous extension of a submodular function known as the
Lovász extension. We obtain our regret bounds by running a (sub)gradient descent algorithm in the
style of Zinkevich [15].
For the bandit setting, we give a randomized algorithm with expected regret O(nT^{2/3}). This algorithm also makes use of the Lovász extension and gradient descent. The algorithm folds exploration
and exploitation steps into a single sample and obtains the stated regret bound. We also show that
these regret bounds hold with high probability. Note that the technique of Flaxman, Kalai and
McMahan [1], when applied to the Lovász extension, gives a worse regret bound of O(nT^{3/4}).
2 Preliminaries and Problem Statement
Submodular functions. The decision space is the set of all subsets of a universe of n elements,
[n] = {1, 2, . . . , n}. The set of all subsets of [n] is denoted 2^{[n]}. For a set S ⊆ [n], denote by χ_S its
characteristic vector in {0, 1}^n, i.e. χ_S(i) = 1 if i ∈ S, and 0 otherwise.
A function f : 2^{[n]} → R is called submodular if for all sets S, T ⊆ [n] such that T ⊆ S, and for all
elements i ∈ [n], we have
f(T + i) − f(T) ≥ f(S + i) − f(S).
Here, we use the shorthand notation S + i to indicate S ∪ {i}. An explicit description of f would
take exponential space. We assume therefore that the only way to access f is via a value oracle, i.e.
an oracle that returns the value of f at any given set S ⊆ [n].
Given access to a value oracle for a submodular function, it is possible to minimize it in polynomial
time [3], and indeed, even in strongly polynomial time [3, 7, 13, 6, 12, 8]. The current fastest strongly
polynomial algorithms are those of Orlin [12] and Iwata-Orlin [8], which take time O(n^5·EO + n^6),
where EO is the time taken to run the value oracle. The fastest weakly polynomial algorithm is that
of Iwata [6] and Iwata-Orlin [8], which runs in time Õ(n^4·EO + n^5).
Online Submodular Minimization. In the Online Submodular Minimization problem, over a
sequence of iterations t = 1, 2, . . ., an online decision maker has to repeatedly choose a subset
S_t ⊆ [n]. In each iteration, after choosing the set S_t, the cost of the decision is specified by a
submodular function f_t : 2^{[n]} → [−1, 1]. The decision maker incurs cost f_t(S_t). The regret of the
decision maker is defined to be
Regret_T := Σ_{t=1}^{T} f_t(S_t) − min_{S⊆[n]} Σ_{t=1}^{T} f_t(S).
If the sets S_t are chosen by a randomized algorithm, then we consider the expected regret over the
randomness in the algorithm.
An online algorithm to choose the sets S_t will be said to be Hannan-consistent if it ensures that
Regret_T = o(T). The algorithm will be called efficient if it computes each decision S_t in poly(n, t)
time. Depending on the kind of feedback the decision maker receives, we distinguish between two
settings of the problem:
• Full information setting. In this case, in each round t, the decision maker has unlimited
access to the value oracles of the previously seen cost functions f_1, f_2, . . . , f_{t−1}.
• Bandit setting. In this case, in each round t, the decision maker only observes the cost of
her decision S_t, viz. f_t(S_t), and receives no other information.
Main Results.
In the setting of Online Submodular Minimization, we have the following results:
Theorem 1. In the full information setting of Online Submodular Minimization, there is an efficient
randomized algorithm that attains the following regret bound:
E[Regret_T] ≤ O(n√T).
Furthermore, Regret_T ≤ O((n + √log(1/ε)) · √T) with probability at least 1 − ε.
Theorem 2. In the bandit setting of Online Submodular Minimization, there is an efficient randomized algorithm that attains the following regret bound:
E[Regret_T] ≤ O(nT^{2/3}).
Furthermore, Regret_T ≤ O(nT^{2/3} · √log(1/ε)) with probability at least 1 − ε.
Both of the theorems above hold against both oblivious as well as adaptive adversaries.
The Lovász Extension. A major technical construction we need for the algorithms is the Lovász
extension f̂ of the submodular function f. This is defined on the unit hypercube K = [0, 1]^n and
takes real values. Before defining the Lovász extension, we need the concept of a chain of subsets
of [n]:
Definition 1. A chain of subsets of [n] is a collection of sets A_0, A_1, . . . , A_p such that
A_0 ⊂ A_1 ⊂ A_2 ⊂ · · · ⊂ A_p.
A maximal chain is one where p = n. For a maximal chain, we have A_0 = ∅, A_n = [n], and there is
a unique associated permutation π : [n] → [n] such that for all i ∈ [n], we have A_{π(i)} = A_{π(i)−1} + i.
Now let x ∈ K. There is a unique chain A_0 ⊂ A_1 ⊂ · · · ⊂ A_p such that x can be expressed as a
convex combination x = Σ_{i=0}^{p} μ_i χ_{A_i} where μ_i > 0 and Σ_{i=0}^{p} μ_i = 1. A nice way to construct this
combination is the following random process: choose a threshold τ ∈ [0, 1] uniformly at random,
and consider the level set S_τ = {i : x_i > τ}. The sets in the required chain are exactly the level
sets which are obtained with positive probability, and for any such set A_i, μ_i = Pr[S_τ = A_i]. In
other words, we have x = E_τ[χ_{S_τ}]. This follows immediately by noting that for any i, we have
Pr_τ[i ∈ S_τ] = x_i. Of course, the chain and the weights μ_i can also be constructed deterministically
simply by sorting the coordinates of x.
Now, we are ready to define¹ the Lovász extension f̂:
¹ Note that this is not the standard definition of the Lovász extension, but an equivalent characterization.
Definition 2. Let x ∈ K. Let A_0 ⊂ A_1 ⊂ · · · ⊂ A_p be the unique chain such that x can be expressed
as a convex combination x = Σ_{i=0}^{p} μ_i χ_{A_i} where μ_i > 0 and Σ_{i=0}^{p} μ_i = 1. Then the value of the Lovász
extension f̂ at x is defined to be
f̂(x) := Σ_{i=0}^{p} μ_i f(A_i).
The preceding discussion gives an equivalent way of defining the Lovász extension: choose a threshold τ ∈ [0, 1] uniformly at random, and consider the level set S_τ = {i : x_i > τ}. Then we have
f̂(x) = E_τ[f(S_τ)].
Note that the definition immediately implies that for all sets S ⊆ [n], we have f̂(χ_S) = f(S).
We will also need the notion of a maximal chain associated to a point x ∈ K in order to define
subgradients of the Lovász extension:
Definition 3. Let x ∈ K, and let A_0 ⊂ A_1 ⊂ · · · ⊂ A_p be the unique chain such that x = Σ_{i=0}^{p} μ_i χ_{A_i}
where μ_i > 0 and Σ_{i=0}^{p} μ_i = 1. A maximal chain associated with x is any maximal completion of
the A_i chain, i.e. a maximal chain ∅ = B_0 ⊂ B_1 ⊂ B_2 ⊂ · · · ⊂ B_n = [n] such that all sets A_i appear
in the B_j chain.
We have the following key properties of the Lovász extension. For proofs, refer to Fujishige [2],
chapter IV.
Proposition 3. The following properties of the Lovász extension f̂ : K → R hold:
1. f̂ is convex.
2. Let x ∈ K. Let ∅ = B_0 ⊂ B_1 ⊂ B_2 ⊂ · · · ⊂ B_n = [n] be an arbitrary maximal chain
associated with x, and let π : [n] → [n] be the corresponding permutation. Then, a
subgradient g of f̂ at x is given as follows:
g_i = f(B_{π(i)}) − f(B_{π(i)−1}).
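To make Definition 2 and Proposition 3 concrete, here is a minimal Python sketch (our own illustration, not part of the paper; the helper name lovasz_value_and_subgradient and the frozenset-based value oracle are assumptions). It sorts the coordinates of x to build one maximal chain, which is exactly the freedom Definition 3 allows, and returns both f̂(x) and a subgradient.

import numpy as np

def lovasz_value_and_subgradient(f, x):
    # f: value oracle mapping a frozenset of elements of [n] to a real cost.
    # x: point in K = [0,1]^n. Returns (f_hat(x), subgradient g).
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(-x)  # add elements in decreasing order of x_i
    # Maximal chain: emptyset = B_0 < B_1 < ... < B_n = [n]; B_j adds order[j-1].
    chain_vals = [f(frozenset())]
    members = set()
    for j in range(n):
        members.add(int(order[j]))
        chain_vals.append(f(frozenset(members)))
    g = np.zeros(n)
    for j in range(n):
        g[order[j]] = chain_vals[j + 1] - chain_vals[j]  # g_i = f(B_pi(i)) - f(B_pi(i)-1)
    # By Abel summation, f_hat(x) = E_tau[f(S_tau)] = f(emptyset) + <g, x>.
    return chain_vals[0] + float(np.dot(g, x)), g

Ties in x are broken arbitrarily by the sort; this just picks one of the maximal chains that Definition 3 permits.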
3 The Full Information Setting
In this section we give two algorithms for regret minimization in the full information setting, both of
which attain the same regret bound of O(n√T). The first is a randomized combinatorial algorithm,
based on the "follow the leader" approach of Hannan [5] and Kalai-Vempala [9], and the second is
an analytical algorithm based on (sub)gradient descent on the Lovász extension.
Both algorithms have pros and cons: while the second algorithm is much simpler and more efficient,
we do not know how to extend it to distributive lattices, for which the first algorithm readily applies.
3.1 A Combinatorial Algorithm
In this section we analyze a combinatorial, strongly polynomial, algorithm for minimizing regret in
the full information Online Submodular Minimization setting:
Algorithm 1 Submodular Follow-The-Perturbed-Leader
1: Input: parameter η > 0.
2: Initialization: For every i ∈ [n], choose a random number r_i ∈ [−1/η, 1/η] uniformly at
   random. Define R : 2^{[n]} → R as R(S) = Σ_{i∈S} r_i.
3: for t = 1 to T do
4:   Use the set S_t = arg min_{S⊆[n]} Σ_{τ=1}^{t−1} f_τ(S) + R(S), and obtain cost f_t(S_t).
5: end for
Define Φ_t : 2^{[n]} → R as Φ_t(S) = Σ_{τ=1}^{t−1} f_τ(S) + R(S). Note that R is a submodular function, and
Φ_t, being the sum of submodular functions, is itself submodular. Furthermore, it is easy to construct
a value oracle for Φ_t simply by using the value oracles for the f_τ. Thus, the optimization in step 4
is poly-time solvable given oracle access to Φ_t.
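As an illustration only, the toy Python sketch below instantiates Algorithm 1. The brute-force argmin over all 2^n subsets stands in for a polynomial-time submodular minimizer (any of the algorithms cited in Section 2 could be used instead), so this sketch is only runnable for very small n; the function name and oracle interface are assumptions.

import itertools, math, random

def ftpl_submodular(cost_oracles, n, T, seed=0):
    # cost_oracles: list of T value oracles, each mapping frozenset -> [-1, 1].
    random.seed(seed)
    eta = 1.0 / math.sqrt(T)
    r = [random.uniform(-1.0 / eta, 1.0 / eta) for _ in range(n)]
    all_sets = [frozenset(c) for k in range(n + 1)
                for c in itertools.combinations(range(n), k)]
    past, played = [], []
    for t in range(T):
        def phi(S):  # Phi_t(S) = sum_{tau < t} f_tau(S) + R(S)
            return sum(f(S) for f in past) + sum(r[i] for i in S)
        S_t = min(all_sets, key=phi)
        played.append(S_t)
        past.append(cost_oracles[t])  # full information: f_t is revealed after playing
    return played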
While the algorithm itself is a simple extension of Hannan's [5] follow-the-perturbed-leader algorithm, previous analyses (such as Kalai and Vempala [9]), which rely on linearity of the cost functions, cannot be made to work here. Instead, we introduce a new analysis technique: we divide the
decision space using n different cuts so that any two decisions are separated by at least one cut, and
then we give an upper bound on the probability that the chosen decision switches sides over each
such cut. This new technique may have applications to other problems as well. We now prove the
regret bound of Theorem 1:
Theorem 4. Algorithm 1 run with parameter η = 1/√T achieves the following regret bound:
E[Regret_T] ≤ 6n√T.
Proof. We note that the algorithm is essentially running a "follow-the-leader" algorithm on the
cost functions f_0, f_1, . . . , f_{t−1}, where f_0 = R is a fictitious "period 0" cost function used for
regularization. The first step to analyzing this algorithm is to use a stability lemma, essentially
proved in Theorem 1.1 of [9], which bounds the regret as follows:
Regret_T ≤ Σ_{t=1}^{T} [f_t(S_t) − f_t(S_{t+1})] + R(S*) − R(S_1).
Here, S* = arg min_{S⊆[n]} Σ_{t=1}^{T} f_t(S).
To bound the expected regret, by linearity of expectation, it suffices to bound E[f_t(S_t) − f_t(S_{t+1})],
where for the purpose of analysis, we assume that we re-randomize in every round (i.e. choose a
fresh random function R : 2^{[n]} → R). Naturally, the expectation E[f_t(S_t) − f_t(S_{t+1})] is the same
regardless of when R is chosen.
To bound this, we need the following lemma:
Lemma 5. Pr[S_t ≠ S_{t+1}] ≤ 2nη.
Proof. First, we note the following simple union bound:
Pr[S_t ≠ S_{t+1}] ≤ Σ_{i∈[n]} Pr[i ∈ S_t and i ∉ S_{t+1}] + Pr[i ∉ S_t and i ∈ S_{t+1}].   (1)
Now, fix any i, and we aim to bound Pr[i ∈ S_t and i ∉ S_{t+1}]. For this, we condition on the
randomness in choosing r_j for all j ≠ i. Define R′ : 2^{[n]} → R as R′(S) = Σ_{j∈S, j≠i} r_j, and
Φ′_t : 2^{[n]} → R as Φ′_t(S) = Σ_{τ=1}^{t−1} f_τ(S) + R′(S). Note that if i ∉ S, then R′(S) = R(S) and
Φ′_t(S) = Φ_t(S). Let
A = arg min_{S⊆[n]: i∈S} Φ′_t(S)   and   B = arg min_{S⊆[n]: i∉S} Φ′_t(S).
Now, we note that the event i ∈ S_t happens only if Φ′_t(A) + r_i < Φ′_t(B), and S_t = A. But if
Φ′_t(A) + r_i < Φ′_t(B) − 2, then we must have i ∈ S_{t+1}, since for any C such that i ∉ C,
Φ_{t+1}(A) = Φ′_t(A) + r_i + f_t(A) < Φ′_t(B) − 1 ≤ Φ′_t(C) + f_t(C) = Φ_{t+1}(C).
The inequalities above use the fact that f_t(S) ∈ [−1, 1] for all S ⊆ [n]. Thus, if v := Φ′_t(B) −
Φ′_t(A), we have
Pr[i ∈ S_t and i ∉ S_{t+1} | r_j, j ≠ i] ≤ Pr[r_i ∈ [v − 2, v] | r_j, j ≠ i] ≤ η,
since r_i is chosen uniformly from [−1/η, 1/η]. We can now remove the conditioning on r_j for
j ≠ i, and conclude that
Pr[i ∈ S_t and i ∉ S_{t+1}] ≤ η.
Similarly, we can bound Pr[i ∉ S_t and i ∈ S_{t+1}] ≤ η. Finally, the union bound (1) over all choices
of i yields the required bound on Pr[S_t ≠ S_{t+1}].
Continuing the proof, we have
E[f_t(S_t) − f_t(S_{t+1})] = E[f_t(S_t) − f_t(S_{t+1}) | S_t ≠ S_{t+1}] · Pr[S_t ≠ S_{t+1}]
≤ 0 + 2 · Pr[S_t ≠ S_{t+1}]
≤ 4nη.
The last inequality follows from Lemma 5. Now, we have R(S*) − R(S_1) ≤ 2n/η, and so
E[Regret_T] ≤ Σ_{t=1}^{T} E[f_t(S_t) − f_t(S_{t+1})] + E[R(S*) − R(S_1)]
≤ 4nηT + 2n/η
≤ 6n√T,
since η = 1/√T.
3.2 An Analytical Algorithm
In this section, we give a different algorithm based on the Online Gradient Descent method of
Zinkevich [15]. We apply this technique to the Lovász extension of the cost function coupled with a
simple randomized construction of the subgradient, as given in Definition 2. This algorithm requires
the concept of a Euclidean projection of a point in R^n on to the set K, which is a function Π_K :
R^n → K defined by
Π_K(y) := arg min_{x∈K} ‖x − y‖.
Since K = [0, 1]^n, it is easy to implement this projection: indeed, for a point y ∈ R^n, the projection
x = Π_K(y) is given coordinate-wise by x_i = y_i if y_i ∈ [0, 1], x_i = 0 if y_i < 0, and x_i = 1 if y_i > 1.
Algorithm 2 Submodular Subgradient Descent
1: Input: parameter η > 0. Let x_1 ∈ K be an arbitrary initial point.
2: for t = 1 to T do
3:   Choose a threshold τ ∈ [0, 1] uniformly at random, and use the set S_t = {i : x_t(i) > τ} and
     obtain cost f_t(S_t).
4:   Find a maximal chain associated with x_t, ∅ = B_0 ⊂ B_1 ⊂ B_2 ⊂ · · · ⊂ B_n = [n], and use
     f_t(B_0), f_t(B_1), . . . , f_t(B_n) to compute a subgradient g_t of f̂_t at x_t as in part 2 of Proposition 3.
5:   Update: set x_{t+1} = Π_K(x_t − η g_t).
6: end for
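A Python sketch of Algorithm 2 follows (again our own illustration; it reuses the hypothetical lovasz_value_and_subgradient helper from Section 2, and Π_K is just coordinate-wise clipping):

import numpy as np

def submodular_ogd(cost_oracles, n, T, seed=0):
    rng = np.random.default_rng(seed)
    eta = 1.0 / np.sqrt(T)
    x = np.full(n, 0.5)  # any initial point of K = [0,1]^n
    total_cost = 0.0
    for t in range(T):
        f = cost_oracles[t]
        tau = rng.uniform()  # play the level set S_t = {i : x_t(i) > tau}
        S_t = frozenset(i for i in range(n) if x[i] > tau)
        total_cost += f(S_t)
        _, g = lovasz_value_and_subgradient(f, x)  # subgradient via Proposition 3
        x = np.clip(x - eta * g, 0.0, 1.0)  # projection onto [0,1]^n
    return total_cost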
In the analysis of the algorithm, we need the following regret bound. It is a simple extension of
Zinkevich's analysis of Online Gradient Descent to vector-valued random variables whose expectation is the subgradient of the cost function (the generality to random variables is not required for
this section, but it will be useful in the next section):
Lemma 6. Let f̂_1, f̂_2, . . . , f̂_T : K → [−1, 1] be a sequence of convex cost functions over the cube
K. Let x_1, x_2, . . . , x_T ∈ K be defined by x_1 = 0 and x_{t+1} = Π_K(x_t − η ĝ_t), where ĝ_1, ĝ_2, . . . , ĝ_T
are vector-valued random variables such that E[ĝ_t | x_t] = g_t, where g_t is a subgradient of f̂_t at x_t.
Then the expected regret of playing x_1, x_2, . . . , x_T is bounded by
Σ_{t=1}^{T} E[f̂_t(x_t)] − min_{x∈K} Σ_{t=1}^{T} f̂_t(x) ≤ n/(2η) + (η/2) Σ_{t=1}^{T} E[‖ĝ_t‖²].
Since this Lemma follows rather easily from [15], we omit the proof in this extended abstract.
We can now prove the following regret bound:
Theorem 7. Algorithm 2, run with parameter η = 1/√T, achieves the following regret bound:
E[Regret_T] ≤ 3n√T.
Furthermore, with probability at least 1 − ε, Regret_T ≤ (3n + √(2 log(1/ε)))√T.
Proof. Note that by Definition 2, we have that E[f_t(S_t)] = f̂_t(x_t). Since the algorithm runs Online
Gradient Descent (from Lemma 6) with ĝ_t = g_t (i.e. no randomness), we get the following bound
on the regret. Here, we use the bound ‖ĝ_t‖² = ‖g_t‖² ≤ 4n.
E[Regret_T] = Σ_{t=1}^{T} E[f_t(S_t)] − min_{S⊆[n]} Σ_{t=1}^{T} f_t(S) ≤ Σ_{t=1}^{T} f̂_t(x_t) − min_{x∈K} Σ_{t=1}^{T} f̂_t(x) ≤ n/(2η) + 2ηnT.
Since η = 1/√T, we get the required regret bound. Furthermore, by a simple Hoeffding bound, we
also get that with probability at least 1 − ε,
Σ_{t=1}^{T} f_t(S_t) ≤ Σ_{t=1}^{T} E[f_t(S_t)] + √(2T log(1/ε)),
which implies the high probability regret bound.
4 The Bandit Setting
We now present an algorithm for the Bandit Online Submodular Minimization problem. The algorithm is based on the Online Gradient Descent algorithm of Zinkevich [15]. The main idea is to use
just one sample for both exploration (to construct an unbiased estimator for the subgradient) and
exploitation (to construct an unbiased estimator for the point chosen by the Online Gradient Descent
algorithm).
Algorithm 3 Bandit Submodular Subgradient Descent
1: Input: parameters η, δ > 0. Let x_1 ∈ K be arbitrary.
2: for t = 1 to T do
3:   Find a maximal chain associated with x_t, ∅ = B_0 ⊂ B_1 ⊂ B_2 ⊂ · · · ⊂ B_n = [n], and let
     π be the associated permutation as in part 2 of Proposition 3. Then x_t can be written as
     x_t = Σ_{i=0}^{n} μ_i χ_{B_i}, where μ_i = 0 for the extra sets B_i that were added to complete the
     maximal chain for x_t.
4:   Choose the set S_t as follows:
     S_t = B_i with probability ρ_i = (1 − δ)μ_i + δ/(n + 1).
5:   Use the set S_t and obtain cost f_t(S_t).
     If S_t = B_0, then set ĝ_t = −(1/ρ_0) f_t(S_t) e_{π(1)}, and if S_t = B_n then set ĝ_t = (1/ρ_n) f_t(S_t) e_{π(n)}.
     Otherwise, S_t = B_i for some i ∈ [1, n − 1]. Choose ε_t ∈ {+1, −1} uniformly at random,
     and set:
     ĝ_t = (2/ρ_i) f_t(S_t) e_{π(i)}      if ε_t = 1
     ĝ_t = −(2/ρ_i) f_t(S_t) e_{π(i+1)}   if ε_t = −1
6:   Update: set x_{t+1} = Π_K(x_t − η ĝ_t).
7: end for
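The sketch below implements one round of Algorithm 3. It fixes one concrete reading of the permutation in Proposition 3 (chain position j corresponds to the j-th largest coordinate of x_t), so treat the indexing, the function name, and the oracle interface as illustrative assumptions; the single queried value f_t(S_t) serves both exploration and exploitation.

import numpy as np

def bandit_round(f_t, x, eta, delta, rng):
    # One round of bandit submodular subgradient descent; f_t may be queried
    # only at the single played set. x is the current point in [0,1]^n.
    n = len(x)
    order = np.argsort(-x)  # order[j-1] is the element added at chain position j
    xs = np.concatenate(([1.0], x[order], [0.0]))
    mu = xs[:-1] - xs[1:]  # mu_0, ..., mu_n: weights of B_0, ..., B_n
    rho = (1.0 - delta) * mu + delta / (n + 1)
    i = rng.choice(n + 1, p=rho / rho.sum())  # play S_t = B_i
    cost = f_t(frozenset(int(e) for e in order[:i]))
    g_hat = np.zeros(n)
    if i == 0:
        g_hat[order[0]] = -cost / rho[0]  # f(B_0) enters the subgradient only negatively
    elif i == n:
        g_hat[order[n - 1]] = cost / rho[n]  # f(B_n) enters only positively
    elif rng.uniform() < 0.5:  # eps_t = +1
        g_hat[order[i - 1]] = 2.0 * cost / rho[i]
    else:  # eps_t = -1
        g_hat[order[i]] = -2.0 * cost / rho[i]
    return np.clip(x - eta * g_hat, 0.0, 1.0), cost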
Before launching into the analysis, we define some convenient notation first. For a random variable
X_t defined in round t of the algorithm, define E_t[X_t] (resp. VAR_t[X_t]) to be the expectation (resp.
variance) of X_t conditioned on all the randomness chosen by the algorithm until round t.
A first observation is that in expectation, the regret of the algorithm above is almost the same
as if it had played x_t all along and the loss functions were replaced by the Lovász extensions of the
actual loss functions.
Lemma 8. For all t, we have E[f_t(S_t)] ≤ E[f̂_t(x_t)] + 2δ.
Proof. From Definition 2 we have that f̂_t(x_t) = Σ_i μ_i f_t(B_i). On the other hand, E_t[f_t(S_t)] =
Σ_i ρ_i f_t(B_i), and hence:
E_t[f_t(S_t)] − f̂_t(x_t) = Σ_{i=0}^{n} (ρ_i − μ_i) f_t(B_i) ≤ δ Σ_{i=0}^{n} (1/(n + 1) + μ_i) |f_t(B_i)| ≤ 2δ.
The lemma now follows by taking the expectation of both sides of this inequality with respect to the
randomness chosen in the first t − 1 rounds.
Next, by Proposition 3, the subgradient of the Lovász extension of f_t at point x_t corresponding to
the maximal chain B_0 ⊂ B_1 ⊂ · · · ⊂ B_n is given by g_t(i) = f_t(B_{π(i)}) − f_t(B_{π(i)−1}). Using this
fact, it is easy to check that the random vector ĝ_t is constructed in such a way that E[ĝ_t | x_t] = g_t.
Furthermore, we can bound the norm of this estimator as follows:
E_t[‖ĝ_t‖²] ≤ Σ_{i=0}^{n} (4/ρ_i²) f_t(B_i)² · ρ_i ≤ 4(n + 1)²/δ ≤ 16n²/δ.   (2)
We can now remove the conditioning, and conclude that E[‖ĝ_t‖²] ≤ 16n²/δ.
Theorem 9. Algorithm 3, run with parameters δ = n/T^{1/3}, η = 1/T^{2/3}, achieves the following regret
bound:
E[Regret_T] ≤ 12nT^{2/3}.
Proof. We bound the expected regret as follows:
Σ_{t=1}^{T} E[f_t(S_t)] − min_{S⊆[n]} Σ_{t=1}^{T} f_t(S) ≤ 2δT + Σ_{t=1}^{T} E[f̂_t(x_t)] − min_{x∈K} Σ_{t=1}^{T} f̂_t(x)   (by Lemma 8)
≤ 2δT + n/(2η) + (η/2) Σ_{t=1}^{T} E[‖ĝ_t‖²]   (by Lemma 6)
≤ 2δT + n/(2η) + 8n²ηT/δ.   (by (2))
The bound is now obtained for δ = n/T^{1/3}, η = 1/T^{2/3}.
4.1 High probability bounds on the regret
The theorem of the previous section gave a bound on the expected regret. However, a much stronger
claim can be made: essentially the same regret bound holds with very high probability (with an
exponential tail). In addition, the previous theorem (which only bounds expected regret) holds against an
oblivious adversary, but not necessarily against a more powerful adaptive adversary. The following
gives high probability bounds against an adaptive adversary.
Theorem 10. With probability 1 − 4ε, Algorithm 3, run with parameters δ = n/T^{1/3}, η = 1/T^{2/3},
achieves the following regret bound:
Regret_T ≤ O(nT^{2/3} √log(1/ε)).
The proof of this theorem is deferred to the full version of this paper.
5 Conclusions and Open Questions
We have described efficient regret minimization algorithms for submodular cost functions, in both
the bandit and full information settings. This parallels the work of Streeter and Golovin [14] who
study two specific instances of online submodular maximization (for which the offline problem is
NP-hard), and give (approximate) regret minimizing algorithms. An open question is whether it is
possible to attain O(√T) regret bounds for online submodular minimization in the bandit setting.
References
[1] A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit
setting: gradient descent without a gradient. SODA, 2005, pp. 385-394.
[2] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
[3] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer Verlag, 1988.
[4] Carlos Guestrin and Andreas Krause. Beyond convexity: submodularity in machine learning.
Tutorial given at the 25th International Conference on Machine Learning (ICML), 2008.
[5] J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, and P.
Wolfe, editors, Contributions to the Theory of Games, volume III (1957), 97-139.
[6] Satoru Iwata. A faster scaling algorithm for minimizing submodular functions. SIAM J. Comput. 32 (2003), no. 4, 833-840.
[7] Satoru Iwata, Lisa Fleischer, and Satoru Fujishige. A combinatorial strongly polynomial algorithm for minimizing submodular functions. J. ACM 48 (2001), 761-777.
[8] Satoru Iwata and James B. Orlin. A simple combinatorial algorithm for submodular function
minimization. SODA '09: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on
Discrete Algorithms (Philadelphia, PA, USA), Society for Industrial and Applied Mathematics,
2009, pp. 1230-1237.
[9] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal
of Computer and System Sciences 71(3) (2005), 291-307.
[10] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Proceedings of the 30th
Annual Symposium on the Foundations of Computer Science, 1989, pp. 256-261.
[11] S. T. McCormick. Submodular function minimization. Chapter 7 in the Handbook on Discrete
Optimization (K. Aardal, G. Nemhauser, and R. Weismantel, eds.), Elsevier, 2006, pp. 321-391.
[12] James B. Orlin. A faster strongly polynomial time algorithm for submodular function minimization. Math. Program. 118 (2009), no. 2, 237-251.
[13] Alexander Schrijver. A combinatorial algorithm minimizing submodular functions in strongly
polynomial time, 1999.
[14] Matthew J. Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. NIPS, 2008, pp. 1577-1584.
[15] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent.
Proceedings of the Twentieth International Conference on Machine Learning (ICML), 2003, pp. 928-936.
3D Object Recognition with Deep Belief Nets
Vinod Nair and Geoffrey E. Hinton
Department of Computer Science, University of Toronto
10 King?s College Road, Toronto, M5S 3G5 Canada
{vnair,hinton}@cs.toronto.edu
Abstract
We introduce a new type of top-level model for Deep Belief Nets and evaluate it on a 3D object recognition task. The top-level model is a third-order
Boltzmann machine, trained using a hybrid algorithm that combines both
generative and discriminative gradients. Performance is evaluated on the
NORB database (normalized-uniform version), which contains stereo-pair
images of objects under different lighting conditions and viewpoints. Our
model achieves 6.5% error on the test set, which is close to the best published result for NORB (5.9%) using a convolutional neural net that has
built-in knowledge of translation invariance. It substantially outperforms
shallow models such as SVMs (11.6%). DBNs are especially suited for
semi-supervised learning, and to demonstrate this we consider a modified
version of the NORB recognition task in which additional unlabeled images
are created by applying small translations to the images in the database.
With the extra unlabeled data (and the same amount of labeled data as
before), our model achieves 5.2% error.
1 Introduction
Recent work on deep belief nets (DBNs) [10], [13] has shown that it is possible to learn
multiple layers of non-linear features that are useful for object classification without requiring labeled data. The features are trained one layer at a time as a restricted Boltzmann
machine (RBM) using contrastive divergence (CD) [4], or as some form of autoencoder [20],
[16], and the feature activations learned by one module become the data for training the
next module. After a pre-training phase that learns layers of features which are good at
modeling the statistical structure in a set of unlabeled images, supervised backpropagation
can be used to fine-tune the features for classification [7]. Alternatively, classification can
be performed by learning a top layer of features that models the joint density of the class
labels and the highest layer of unsupervised features [6]. These unsupervised features (plus
the class labels) then become the penultimate layer of the deep belief net [6].
Early work on deep belief nets was evaluated using the MNIST dataset of handwritten digits
[6] which has the advantage that a few million parameters are adequate for modeling most of
the structure in the domain. For 3D object classification, however, many more parameters
are probably required to allow a deep belief net with no prior knowledge of spatial structure
to capture all of the variations caused by lighting and viewpoint. It is not yet clear how well
deep belief nets perform at 3D object classification when compared with shallow techniques
such as SVMs [19], [3] or deep discriminative techniques like convolutional neural networks
[11].
In this paper, we describe a better type of top-level model for deep belief nets that is trained
using a combination of generative and discriminative gradients [5], [8], [9]. We evaluate the
model on NORB [12], which is a carefully designed object recognition task that requires
[Figure 1 panels (a)-(e) not reproduced in this extraction; only the caption is recoverable.]
Figure 1: The Third-Order Restricted Boltzmann Machine. (a) Every clique in the model contains
a visible unit, hidden unit, and label unit. (b) Our shorthand notation for representing the clique
in (a). (c) A model with two of each unit type. There is one clique for every possible triplet of
units created by selecting one of each type. The "restricted" architecture precludes cliques with
multiple units of the same type. (d) Our shorthand notation for representing the model in (c).
(e) The 3D tensor of parameters for the model in (c). The architecture is the same as that of an
implicit mixture of RBMs [14], but the inference and learning algorithms have changed.
generalization to novel object instances under varying lighting conditions and viewpoints.
Our model significantly outperforms SVMs, and it also outperforms convolutional neural
nets when given additional unlabeled data produced by small translations of the training
images. We use restricted Boltzmann machines trained with one-step contrastive divergence
as our basic module for learning layers of features. These are fully described elsewhere [6],
[1] and the reader is referred to those sources for details.
2 A Third-Order RBM as the Top-Level Model
Until now, the only top-level model that has been considered for a DBN is an RBM with
two types of observed units (one for the label, another for the penultimate feature vector).
We now consider an alternative model for the top-level joint distribution in which the class
label multiplicatively interacts with both the penultimate layer units and the hidden units
to determine the energy of a full configuration. It is a Boltzmann machine with three-way
cliques [17], each containing a penultimate layer unit v_i, a hidden unit h_j, and a label unit
l_k. See figure 1 for a summary of the architecture. Note that the parameters now form a
3D tensor, instead of a matrix as in the earlier, bipartite model.
Consider the case where the components of v and h are stochastic binary units, and l is a
discrete variable with K states represented by 1-of-K encoding. The model can be defined
in terms of its energy function
E(v, h, l) = − Σ_{i,j,k} W_{ijk} v_i h_j l_k,   (1)
where W_{ijk} is a learnable scalar parameter. (We omit bias terms from all expressions for
clarity.) The probability of a full configuration {v, h, l} is then
P(v, h, l) = exp(−E(v, h, l)) / Z,   (2)
where Z = Σ_{v′,h′,l′} exp(−E(v′, h′, l′)) is the partition function. Marginalizing over h gives
the distribution over v and l alone.
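For concreteness, equation 1 is a single tensor contraction. A minimal sketch (our own, assuming W is stored as an N_v × N_h × N_l array, with bias terms omitted as in the text):

import numpy as np

def energy(W, v, h, l):
    # E(v, h, l) = -sum_{i,j,k} W_ijk v_i h_j l_k, for binary v, h and 1-of-K l.
    # exp(-energy(...)) / Z gives equation 2, but Z itself is intractable.
    return -np.einsum('ijk,i,j,k->', W, v, h, l)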
The main difference between the new top-level model and the earlier one is that now the
class label multiplicatively modulates how the visible and hidden units contribute to the
energy of a full configuration. If the label's k-th unit is 1 (and the rest are 0), then the k-th
slice of the tensor determines the energy function. In the case of soft activations (i.e. more
than one label has non-zero probability), a weighted blend of the tensor's slices specifies
the energy function. The earlier top-level (RBM) model limits the label's effect to changing
the biases into the hidden units, which modifies only how the hidden units contribute to
the energy of a full configuration. There is no direct interaction between the label and the
visible units. Introducing direct interactions among all three sets of variables allows the
model to learn features that are dedicated to each class. This is a useful property when the
object classes have substantially different appearances that require very different features
to describe. Unlike an RBM, the model structure is not bipartite, but it is still "restricted"
in the sense that there are no direct connections between two units of the same type.
2.1 Inference
The distributions that we would like to be able to infer are P(l|v) (to classify an input), and
P(v, l|h) and P(h|v, l) (for CD learning). Fortunately, all three distributions are tractable
to sample from exactly. The simplest case is P(h|v, l). Once l is observed, the model
reduces to an RBM whose parameters are the k-th slice of the 3D parameter tensor. As a
result P(h|v, l) is a factorized distribution that can be sampled exactly.
For a restricted third-order model with N_v visible units, N_h hidden units and N_l class labels,
the distribution P(l|v) can be exactly computed in O(N_v N_h N_l) time. This result follows
from two observations: 1) setting l_k = 1 reduces the model to an RBM defined by the k-th
slice of the tensor, and 2) the negative log probability of v, up to an additive constant,
under this RBM is the free energy:
F_k(v) = − Σ_{j=1}^{N_h} log(1 + exp(Σ_{i=1}^{N_v} W_{ijk} v_i)).   (3)
The idea is to first compute F_k(v) for each setting of the label, and then convert them to a
discrete distribution by taking the softmax of the negative free energies:
P(l_k = 1|v) = exp(−F_k(v)) / Σ_{k′=1}^{N_l} exp(−F_{k′}(v)).   (4)
Equation 3 requires O(N_v N_h) computation, which is repeated N_l times for a total of
O(N_v N_h N_l) computation.
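A short sketch of this classification rule (equations 3 and 4); it uses log(1 + exp(z)) = logaddexp(0, z) and a max-shift for numerical stability, and the function name is a placeholder:

import numpy as np

def label_posterior(W, v):
    # W: (Nv, Nh, Nl) tensor, v: binary visible vector. Returns P(l | v).
    hidden_in = np.einsum('ijk,i->jk', W, v)  # inputs to hidden units, per label slice
    neg_F = np.sum(np.logaddexp(0.0, hidden_in), axis=0)  # -F_k(v), equation (3)
    p = np.exp(neg_F - neg_F.max())  # softmax of negative free energies, equation (4)
    return p / p.sum()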
We can use the same method to compute P(l|h). Simply switch the role of v and h in
equation 3 to compute the free energy of h under the k-th RBM. (This is possible since the
model is symmetric with respect to v and h.) Then convert the resulting N_l free energies
to the probabilities P(l_k = 1|h) with the softmax function.
Now it becomes possible to exactly sample P(v, l|h) by first sampling l̂ ~ P(l|h). Suppose
l̂_k = 1. Then the model reduces to its k-th slice RBM from which v̂ ~ P(v|h, l̂_k = 1) can be
easily sampled. The final result {v̂, l̂} is an unbiased sample from P(v, l|h).
2.2 Learning
Given a set of N labeled training cases {(v_1, l_1), (v_2, l_2), ..., (v_N, l_N)}, we want to learn the
3D parameter tensor W for the restricted third-order model. When trained as the top-level
model of a DBN, the visible vector v is a penultimate layer feature vector. We can also
train the model directly on images as a shallow model, in which case v is an image (in row
vector form). In both cases the label l represents the N_l object categories using 1-of-N_l
encoding. For the same reasons as in the case of an RBM, maximum likelihood learning
is intractable here as well, so we rely on Contrastive Divergence learning instead. CD was
originally formulated in the context of the RBM and its bipartite architecture, but here we
extend it to the non-bipartite architecture of the third-order model.
An unbiased estimate of the maximum likelihood gradient can be computed by running a
Markov chain that alternately samples P(h|v, l) and P(v, l|h) until it reaches equilibrium.
Contrastive divergence uses the parameter updates given by three half-steps of this chain,
with the chain initialized from a training case (rather than a random state). As explained
in section 2.1, both of these distributions are easy to sample from. The steps for computing
the CD parameter updates are summarized below:
Contrastive divergence learning of P(v, l):
1. Given a labeled training pair {v+, l+_k = 1}, sample h+ ~ P(h | v+, l+_k = 1).
2. Compute the outer product D+_k = v+ (h+)^T.
3. Sample {v−, l−} ~ P(v, l | h+). Let m be the index of the component of l− set to 1.
4. Sample h− ~ P(h | v−, l−_m = 1).
5. Compute the outer product D−_m = v− (h−)^T.
Let W_{·,·,k} denote the N_h × N_v matrix of parameters corresponding to the k-th slice along the
label dimension of the 3D tensor. Then the CD update for W_{·,·,k} is:
ΔW_{·,·,k} = D+_k − D−_k,   (5)
W_{·,·,k} ← W_{·,·,k} + ηΔW_{·,·,k},   (6)
where η is a learning rate parameter. Typically, the updates computed from a "mini-batch"
of training cases (a small subset of the entire training set) are averaged together into one
update and then applied to the parameters.
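A sketch of one such CD update for a single labeled case (steps 1-5 plus equations 5 and 6). The two conditional samplers are assumed to implement P(h|v, l) and P(v, l|h) exactly, as derived in section 2.1; their names are placeholders, and W is stored here as an N_v × N_h × N_l array:

import numpy as np

def cd_update(W, v_pos, k, eta, sample_h_given_vl, sample_vl_given_h):
    # Positive phase: clamp the true label k.
    h_pos = sample_h_given_vl(W, v_pos, k)
    D_pos = np.outer(v_pos, h_pos)  # D+_k
    # Negative phase: one full step of the chain, label included.
    v_neg, m = sample_vl_given_h(W, h_pos)
    h_neg = sample_h_given_vl(W, v_neg, m)
    D_neg = np.outer(v_neg, h_neg)  # D-_m
    # Equations (5)-(6); the negative statistics land on slice m, which may differ from k.
    W[:, :, k] += eta * D_pos
    W[:, :, m] -= eta * D_neg
    return W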
3 Combining Gradients for Generative and Discriminative Models
In practice the Markov chain used in the learning of P(v, l) can suffer from slow mixing. In
particular, the label l− generated in step 3 above is unlikely to be different from the true
label l+ of the training case used in step 1. Empirically, the chain has a tendency to stay
"stuck" on the same state for the label variable because in the positive phase the hidden
activities are inferred with the label clamped to its true value. So the hidden activities
contain information about the true label, which gives it an advantage over the other labels.
Consider the extreme case where we initialize the Markov chain with a training pair
{v+, l+_k = 1} and the label variable never changes from its initial state during the chain's
entire run. In effect, the model that ends up being learned is a class-conditional generative
distribution P(v|l_k = 1), represented by the k-th slice RBM. The parameter updates are
identical to those for training N_l independent RBMs, one per class, with only the training
cases of each class being used to learn the RBM for that class. Note that this is very different
from the model in section 2: here the energy functions implemented by the class-conditional
RBMs are learned independently and their energy units are not commensurate with each
other.
Alternatively, we can optimize the same set of parameters to represent yet another distribution, P(l|v). The advantage in this case is that the exact gradient needed for maximum
likelihood learning, ∂ log P(l|v)/∂W, can be computed in O(N_v N_h N_l) time. The gradient
expression can be derived with some straightforward differentiation of equation 4. The disadvantage is that it cannot make use of unlabeled data. Also, as the results show, learning
a purely discriminative model at the top level of a DBN gives much worse performance.
However, now a new way of learning P (v, l) becomes apparent: we can optimize the
parameters by using a weighted sum of the gradients for log P (v|l) and log P (l|v). As
explained below, this approach 1) avoids the slow mixing of the CD learning for P (v, l), and
2) allows learning with both labeled and unlabeled data. It resembles pseudo-likelihood in
how it optimizes the two conditional distributions in place of the joint distribution, except
here one of the conditionals (P (v|l)) is still learned only approximately. In our experiments,
a model trained with this hybrid learning algorithm has the highest classification accuracy,
beating both a generative model trained using CD as well as a purely discriminative model.
The main steps of the algorithm are listed below.
Hybrid learning algorithm for P(v, l):
Let {v+, l+_k = 1} be a labeled training case.
Generative update: CD learning of P(v|l)
1. Sample h+ ~ P(h | v+, l+_k = 1).
2. Compute the outer product D+_k = v+ (h+)^T.
3. Sample v− ~ P(v | h+, l+_k = 1).
4. Sample h− ~ P(h | v−, l+_k = 1).
5. Compute the outer product D−_k = v− (h−)^T.
6. Compute update ΔW^g_{·,·,k} = D+_k − D−_k.
Discriminative update: ML learning of P(l|v)
1. Compute log P(l_c = 1 | v+) for c ∈ {1, ..., N_l}.
2. Using the result from step 1 and the true label l+_k = 1, compute the update
ΔW^d_{·,·,c} = ∂ log P(l|v)/∂W_{·,·,c} for c ∈ {1, ..., N_l}.
The two types of update for the c-th slice of the tensor W_{·,·,c} are then combined by a weighted
sum:
W_{·,·,c} ← W_{·,·,c} + η(ΔW^g_{·,·,c} + λΔW^d_{·,·,c}),   (7)
where λ is a parameter that sets the relative weighting of the generative and discriminative
updates, and η is the learning rate. As before, the updates from a mini-batch of training
cases can be averaged together and applied as a single update to the parameters. In experiments, we set λ by trying different values and evaluating classification accuracy on a
validation set.
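The sketch below combines the two updates as in equation 7 for one labeled case. The discriminative gradient is written out directly from equations 3 and 4, ∂ log P(l_k = 1|v)/∂W_{ijc} = v_i q_{jc}([c = k] − p_c) with q_{jc} = σ(Σ_i W_{ijc} v_i); the generative term is assumed to come from a CD routine like the one above.

import numpy as np

def hybrid_update(W, v, k, eta, lam, generative_grad):
    # generative_grad(W, v, k) -> (Nv, Nh, Nl) tensor DeltaW^g (assumed CD helper).
    hidden_in = np.einsum('ijk,i->jk', W, v)  # (Nh, Nl)
    neg_F = np.sum(np.logaddexp(0.0, hidden_in), axis=0)
    p = np.exp(neg_F - neg_F.max())
    p /= p.sum()  # p_c = P(l_c = 1 | v)
    q = 1.0 / (1.0 + np.exp(-hidden_in))  # q_jc = P(h_j = 1 | v, l_c = 1)
    coeff = -p
    coeff[k] += 1.0  # [c == k] - p_c
    dW_disc = np.einsum('i,jc,c->ijc', v, q, coeff)  # exact gradient of log P(l|v)
    W += eta * (generative_grad(W, v, k) + lam * dW_disc)
    return W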
Note that the generative part in the above algorithm is simply CD learning of the RBM for
the k-th class. The earlier problem of slow mixing does not appear in the hybrid algorithm
because the chain in the generative part does not involve sampling the label.
Semi-supervised learning: The hybrid learning algorithm can also make use of unlabeled
training cases by treating their labels as missing inputs. The model first infers the missing
label by sampling P(l|v_u) for an unlabeled training case v_u. The generative update is then
computed by treating the inferred label as the true label. (The discriminative update will
always be zero in this case.) Therefore the unlabeled training cases contribute an extra
generative term to the parameter update.
4 Sparsity
Discriminative performance is improved by using binary features that are only rarely active.
Sparse activities are achieved by specifying a desired probability of being active, p << 1, and
then adding an additional penalty term that encourages an exponentially decaying average,
q, of the actual probability of being active to be close to p. The natural error measure to use
is the cross entropy between the desired and actual distributions: p log q + (1 − p) log(1 − q).
For logistic units this has a simple derivative of p − q with respect to the total input to a unit.
This derivative is used to adjust both the bias and the incoming weights of each hidden unit.
We tried various values for p and 0.1 worked well. In addition to specifying p it is necessary
to specify how fast the estimate of q decays. We used q_new = 0.9 · q_old + 0.1 · q_current where
q_current is the average probability of activation for the current mini-batch of 100 training
cases. It is also necessary to specify how strong the penalty term should be, but this is easy
to set empirically. We multiply the penalty gradient by a coefficient that is chosen to ensure
that, on average, q is close to p but there is still significant variation among the q values for
different hidden units. This prevents the penalty term from dominating the learning. One
added advantage of this sparseness penalty is that it revives any hidden units whose average
activities are much lower than p.
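A minimal sketch of this penalty for one mini-batch; the penalty coefficient value below is an assumption (the text only says the coefficient is set empirically):

import numpy as np

def sparsity_gradient(q_avg, h_probs, p=0.1, penalty_coeff=0.01):
    # h_probs: (batch, Nh) hidden activation probabilities for the current mini-batch.
    q_avg = 0.9 * q_avg + 0.1 * h_probs.mean(axis=0)  # q_new = 0.9*q_old + 0.1*q_current
    grad = penalty_coeff * (p - q_avg)  # cross-entropy derivative w.r.t. each unit's total input
    # grad is added to the gradient of each hidden unit's bias and incoming weights.
    return q_avg, grad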
5 Evaluating DBNs on the NORB Object Recognition Task
5.1 NORB Database
For a detailed description see [12]. The five object classes in NORB are animals, humans,
planes, trucks, and cars. The dataset comes in two different versions, normalized-uniform
and jittered-cluttered. In this paper we use the normalized-uniform version, which has
objects centred in the images with a uniform background. There are 10 instances of each
object class, imaged under 6 illuminations and 162 viewpoints (18 azimuths × 9 elevations).
The instances are split into two disjoint sets (pre-specified in the database) of five each to
define the training and test sets, both containing 24,300 cases. So at test time a trained
model has to recognize unseen instances of the same object classes.
Pre-processing: A single training (and test) case is a stereo-pair of grayscale images, each
of size 96×96. To speed up experiments, we reduce dimensionality by using a "foveal" image
representation. The central 64×64 portion of an image is kept at its original resolution.
The remaining 16 pixel-wide ring around it is compressed by replacing non-overlapping
square blocks of pixels with the average value of a block. We split the ring into four smaller
ones: the outermost ring has 8×8 blocks, followed by a ring of 4×4 blocks, and finally
two innermost rings of 2×2 blocks. The foveal representation reduces the dimensionality
of a stereo-pair from 18432 to 8976. All our models treat the stereo-pair images as 8976-dimensional vectors¹.
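The sketch below implements one reading of this foveal encoding for a single 96×96 image. The ring insets (8, 4, 2 and 2 pixels, matching the block sizes) are our inference from the description, but they do reproduce the stated dimensionality: 4096 + 44 + 76 + 140 + 132 = 4488 features per image, i.e. 8976 per stereo pair.

import numpy as np

def ring_blocks(img, a, b):
    # Average the b x b blocks in the one-block-thick ring at inset a of a 96x96 image.
    band = img[a:96 - a, a:96 - a].astype(float)
    band[b:-b, b:-b] = np.nan  # mask everything strictly inside the ring
    L = band.shape[0]
    blocks = band.reshape(L // b, b, L // b, b).mean(axis=(1, 3))
    return blocks[~np.isnan(blocks)]  # keep only the blocks lying on the ring itself

def foveal_encode(img96):
    # 96x96 image -> 4488-dimensional foveal vector (x2 for a stereo pair = 8976).
    parts = [img96[16:80, 16:80].astype(float).ravel()]  # central 64x64, full resolution
    for a, b in [(0, 8), (8, 4), (12, 2), (14, 2)]:  # (inset, block size) per ring
        parts.append(ring_blocks(img96, a, b))
    return np.concatenate(parts)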
5.2 Training Details
Model architecture: The two main decisions to make when training DBNs are the number
of hidden layers to greedily pre-train and the number of hidden units to use in each layer.
To simplify the experiments we constrain the number of hidden units to be the same at
all layers (including the top-level model). We have tried hidden layer sizes of 2000, 4000,
and 8000 units. We have also tried models with two, one, or no greedily pre-trained hidden
layers. To avoid clutter, only the results for the best settings of these two parameters are
given. The best classification results are given by the DBN with one greedily pre-trained
sparse hidden layer of 4000 units (regardless of the type of top-level model).
A DBN trained on the pre-processed input with one greedily pre-trained layer of 4000
hidden units and a third-order model on top of it, also with 4000 hidden units, has roughly
116 million learnable parameters in total. This is roughly two orders of magnitude more
parameters than some of the early DBNs trained on the MNIST images [6], [10]. Training
such a model in Matlab on an Intel Xeon 3GHz machine takes almost two weeks. See a
recent paper by Raina et al. [15] that uses GPUs to train a deep model with roughly the
same number of parameters much more quickly.
We put Gaussian units at the lowest (pixel) layer of the DBN, which have been shown to be
effective for modelling grayscale images [7]. See [7], [21] for details about Gaussian units.
6 Results
The results are presented in three parts: part 1 compares deep models to shallow ones,
all trained using CD. Part 2 compares CD to the hybrid learning algorithm for training
the top-level model of a DBN. Part 3 compares DBNs trained with and without unlabeled
data, using either CD or the hybrid algorithm at the top level. For comparison, here are
some published results for discriminative models on normalized-uniform NORB (without
any pre-processing) [2], [12]: logistic regression 19.6%, kNN (k=1) 18.4%, Gaussian kernel
SVM 11.6%, convolutional neural net 6.0%, convolutional net + SVM hybrid 5.9%.
¹ Knowledge about image topology is used only along the (mostly empty) borders, and not in the central portion that actually contains the object.
6.1 Deep vs. Shallow Models Trained with CD
We consider here DBNs with one greedily pre-trained layer and a top-level model that
contains the greedily pre-trained features as its "visible" layer. The corresponding shallow
version trains the top-level model directly on the pixels (using Gaussian visible units), with
no pre-trained layers in between. Using CD as the learning algorithm (for both greedy pretraining and at the top-level) with the two types of top-level models gives us four possibilities
to compare. The test error rates for these four models(see table 1) show that one greedily
pre-trained layer reduces the error substantially, even without any subsequent fine-tuning
of the pre-trained layer.
Model     RBM with label unit   Third-order RBM
Shallow   22.8%                 20.8%
Deep      11.9%                 7.6%
Table 1: NORB test set error rates for deep and shallow models trained using CD with two
types of top-level models.
The third-order RBM outperforms the standard RBM top-level model when they both have
the same number of hidden units, but a better comparison might be to match the number
of parameters by increasing the hidden layer size of the standard RBM model by five times
(i.e. 20000 hidden units). We have tried training such an RBM, but the error rate is worse
than the RBM with 4000 hidden units.
6.2 Hybrid vs. CD Learning for the Top-level Model
We now compare the two alternatives for training the top-level model of a DBN. There are
four possible combinations of top-level models and learning algorithms, and table 2 lists
their error rates. All these DBNs share the same greedily pre-trained first layer ? only the
top-level model differs among them.
Learning algorithm   RBM with label unit   Third-order RBM
CD                   11.9%                 7.6%
Hybrid               10.4%                 6.5%
Table 2: NORB test set error rates for top-level models trained using CD and the hybrid
learning algorithms.
The lower error rates of hybrid learning are partly due to its ability to avoid the poor mixing
of the label variable when CD is used to learn the joint density P (v, l) and partly due to its
greater emphasis on discrimination (but with strong regularization provided by also learning
P (v|l)).
6.3 Semi-supervised vs. Supervised Learning
In this final part, we create additional images from the original NORB training set by
applying global translations of 2, 4, and 6 pixels in eight directions (two horizontal, two
vertical and four diagonal directions) to the original stereo-pair images². These "jittered"
images are treated as extra unlabeled training cases that are combined with the original
labeled cases to form a much larger training set. Note that we could have assigned the
jittered images the same class label as their source images. By treating them as unlabeled,
the goal is to test whether improving the unsupervised, generative part of the learning alone
can improve discriminative performance.
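For concreteness, the jittering scheme above can be sketched in a few lines of NumPy. The variable names are ours, and we assume vacated border pixels are zero-filled, which the text does not specify:

import numpy as np

def translate(image, dy, dx):
    """Shift a 2-D image by (dy, dx), filling vacated pixels with zeros."""
    out = np.zeros_like(image)
    h, w = image.shape
    ys = slice(max(dy, 0), min(h + dy, h))
    xs = slice(max(dx, 0), min(w + dx, w))
    ys_src = slice(max(-dy, 0), min(h - dy, h))
    xs_src = slice(max(-dx, 0), min(w - dx, w))
    out[ys, xs] = image[ys_src, xs_src]
    return out

def jitter_stereo_pair(left, right, offsets=(2, 4, 6)):
    """Yield translated copies of a stereo pair: two horizontal, two
    vertical and four diagonal directions, with the same shift applied
    to both images of the pair (24 copies; with the original, a 25x set)."""
    directions = [(0, 1), (0, -1), (1, 0), (-1, 0),
                  (1, 1), (1, -1), (-1, 1), (-1, -1)]
    for d in offsets:
        for dy, dx in directions:
            yield translate(left, d * dy, d * dx), translate(right, d * dy, d * dx)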
There are two ways to use unlabeled data:
1. Use it for greedy pre-training of the lower layers only, and then train the top-level model as before, with only labeled data and the hybrid algorithm.
2. Use it for learning the top-level model as well, now with the semi-supervised variant of the hybrid algorithm at the top-level.
Footnote 2: The same translation is applied to both images in the stereo-pair.
Table 3 lists the results for both options.
Top-level model (hybrid learning only) | Unlabeled jitter for pre-training lower layer? | Unlabeled jitter at the top-level? | Error
RBM with label unit | No  | No  | 10.4%
RBM with label unit | Yes | No  | 9.0%
Third-order model   | No  | No  | 6.5%
Third-order model   | Yes | No  | 5.3%
Third-order model   | Yes | Yes | 5.2%
Table 3: NORB test set error rates for DBNs trained with and without unlabeled data, and using the hybrid learning algorithm at the top-level.
The key conclusion from table 3 is that simply using more unlabeled training data in the
unsupervised, greedy pre-training phase alone can significantly improve the classification
accuracy of the DBN. It allows a third-order top-level model to reduce its error from 6.5%
to 5.3%, which beats the current best published result for normalized-uniform NORB without
using any extra labeled data. Using more unlabeled data also at the top level further improves
accuracy, but only slightly, to 5.2%.
Now consider a discriminative model at the top, representing the distribution P(l|v). Unlike in the generative case, the exact gradient of the log-likelihood is tractable to compute. Table 4 shows the results of some discriminative models. These models use the same greedily pre-trained lower layer, learned with unlabeled jitter. They differ in how the top-level parameters are initialized, and whether they use the jittered images as extra labeled cases for learning P(l|v).
We compare training the discriminative top-level model "from scratch" (random initialization) versus initializing its parameters to those of a generative model learned by the hybrid algorithm. We also compare the effect of using the jittered images as extra labeled cases. As mentioned before, it is possible to assign the jittered images the same labels as the original NORB images they are generated from, which expands the labeled training set by 25 times. The bottom two rows of Table 4 compare a discriminative third-order model initialized with and without pre-training. Pre-trained initialization (5.0%) significantly improves accuracy over random initialization (7.1%). But note that discriminative training only makes a small additional improvement (5.2% to 5.0%) over the accuracy of the pre-trained model itself.

Initialization of top-level parameters | Use jittered images as labeled? | Error
Random                                 | No  | 13.4%
Random                                 | Yes | 7.1%
Model with 5.2% error from Table 3     | Yes | 5.0%
Table 4: NORB test set error rates for discriminative third-order models at the top level.
7 Conclusions
Our results make a strong case for the use of generative modeling in object recognition.
The main two points are: 1) Unsupervised, greedy, generative learning can extract an
image representation that supports more accurate object recognition than the raw pixel
representation. 2) Including P(v|l) in the objective function for training the top-level model results in better classification accuracy than using P(l|v) alone. In future work we plan to
factorize the third-order Boltzmann machine as described in [18] so that some of the top-level
features can be shared across classes.
References
[1] Y. Bengio, P. Lamblin, P. Popovici, and H. Larochelle. Greedy Layer-Wise Training of
Deep Networks. In NIPS, 2006.
[2] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In Large-Scale Kernel
Machines, 2007.
[3] D. DeCoste and B. Scholkopf. Training Invariant Support Vector Machines. Machine
Learning, 46:161-190, 2002.
[4] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural
Computation, 14(8):1771-1800, 2002.
[5] G. E. Hinton. To Recognize Shapes, First Learn to Generate Images. Technical Report
UTML TR 2006-04, Dept. of Computer Science, University of Toronto, 2006.
[6] G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets.
Neural Computation, 18:1527-1554, 2006.
[7] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural
networks. Science, 313:504-507, 2006.
[8] M. Kelm, C. Pal, and A. McCallum. Combining Generative and Discriminative Methods
for Pixel Classification with Multi-Conditional Learning. In ICPR, 2006.
[9] H. Larochelle and Y. Bengio. Classification Using Discriminative Restricted Boltzmann
Machines. In ICML, pages 536-543, 2008.
[10] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, pages
473-480, 2007.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[12] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition
with invariance to pose and lighting. In CVPR, Washington, D.C., 2004.
[13] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional Deep Belief Networks for
Scalable Unsupervised Learning of Hierarchical Representations. In ICML, 2009.
[14] V. Nair and G. E. Hinton. Implicit mixtures of restricted boltzmann machines. In
Neural information processing systems, 2008.
[15] R. Raina, A. Madhavan, and A. Ng. Large-scale Deep Unsupervised Learning using
Graphics Processors. In ICML, 2009.
[16] Marc'Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised
learning of invariant feature hierarchies with applications to object recognition. In Proc.
Computer Vision and Pattern Recognition Conference (CVPR?07). IEEE Press, 2007.
[17] T. J. Sejnowski. Higher-order Boltzmann Machines. In AIP Conference Proceedings,
pages 398-403, 1987.
[18] G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for
Modeling Motion Style. In ICML, 2009.
[19] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, 1998.
[20] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and Composing
Robust Features with Denoising Autoencoders. In ICML, 2008.
[21] M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an
application to information retrieval. In NIPS 17, 2005.
Factor Modeling for Advertisement Targeting
Ye Chen*
eBay Inc.
[email protected]
Michael Kapralov
Stanford University
[email protected]
Dmitry Pavlov*
Yandex Labs
[email protected]
John F. Canny
University of California, Berkeley
[email protected]
Abstract
We adapt a probabilistic latent variable model, namely GaP (Gamma-Poisson) [6],
to ad targeting in the contexts of sponsored search (SS) and behaviorally targeted
(BT) display advertising. We also approach the important problem of ad positional bias by formulating a one-latent-dimension GaP factorization. Learning
from click-through data is intrinsically large scale, even more so for ads. We scale
up the algorithm to terabytes of real-world SS and BT data that contains hundreds
of millions of users and hundreds of thousands of features, by leveraging the scalability characteristics of the algorithm and the inherent structure of the problem
including data sparsity and locality. Specifically, we demonstrate two somewhat
orthogonal philosophies of scaling algorithms to large-scale problems, through
the SS and BT implementations, respectively. Finally, we report experimental results using Yahoo's vast datasets, and show that our approach substantially outperforms the state-of-the-art methods in prediction accuracy. For BT in particular, the ROC area achieved by GaP exceeds 0.95, while one prior approach using Poisson regression [11] yielded 0.83. For computational performance, we compare a single-node sparse implementation with a parallel implementation using Hadoop MapReduce; the results are counterintuitive yet quite interesting. We
therefore provide insights into the underlying principles of large-scale learning.
1 Introduction
Online advertising has become the cornerstone of many sustainable business models in today's Internet, including search engines (e.g., Google), content providers (e.g., Yahoo!), and social networks
(e.g., Facebook). One essential competitive advantage, over traditional channels, of online advertising is that it allows for targeting. The objective of ad targeting is to select most relevant ads to
present to a user based on contextual and prior knowledge about this user. The relevance measure or
response variable is typically click-through rate (CTR), while explanatory variables vary in different
application domains. For instance, sponsored search (SS) [17] uses query, content match [5] relies
on page content, and behavioral targeting (BT) [11] leverages historical user behavior. Nevertheless,
the training data can be generally formed as a user-feature matrix of event counts, where the feature
dimension contains various events such as queries, ad clicks and views. This characterization of data
naturally leads to our adoption of the family of latent variable models [20, 19, 16, 18, 4, 6], which
have been quite successfully applied to text and image corpora. In general, the goal of latent variable
models is to discover statistical structures (factors) latent in the data, often with dimensionality reduction, and thus to generalize well to unseen examples. In particular, our choice of Gamma-Poisson
(GaP) is theoretically as well as empirically motivated, as we elaborate in Section 2.2.
* This work was conducted when the authors were at Yahoo! Labs, 701 First Ave, Sunnyvale, CA 94089.
Sponsored search involves placing textual ads related to the user query alongside the algorithmic
search results. To estimate ad relevance, previous approaches include similarity search [5], logistic
regression [25, 8], classification and online learning with perceptron [13], while primarily in the
original term space. We consider the problem of estimating CTR of the form p(click|ad, user, query),
through a factorization of the user-feature matrix into a latent factor space, as derived in Section 2.1.
SS adopts the keyword-based pay-per-click (PPC) advertising model [23]; hence the accuracy of
CTR prediction is essential in determining the ad?s ranking, placement, pricing, and filtering [21].
Behavioral targeting leverages historical user behavior to select relevant ads to display. Since BT does not primarily rely on contextual information such as query and page content, it is an enabling technology for display (banner) advertising where such contextual data is typically unavailable, such as when a user is reading an email, watching a movie, or instant messaging, at least from the ad's side.
We consider the problem of predicting CTR of the form p(click|ad, user). The question addressed
by the state-of-the-art BT is instead that of predicting the CTR of an ad in a given category (e.g., Finance and Technology) or p(click|ad-category, user), by fitting a sign-constrained linear regression
with categorized features [12] or a non-negative Poisson regression with granular features [11,10,7].
Ad categorization is done by human labeling and thus expensive and error-prone. One of the major
advantages of GaP is the ability to perform granular or per-ad prediction, which is infeasible by the
previous BT technologies due to scalability issues (e.g., a regression model for each category).
2 GaP model
GaP is a generative probabilistic model, as graphically represented in Figure 1. Let F be an n × m data matrix whose element f_ij is the observed count of event (or feature) i by user j. Y is a matrix of expected counts with the same dimensions as F. F, element-wise, is naturally assumed to follow Poisson distributions with mean parameters in Y respectively, i.e., F ∼ Poisson(Y). Let X be a d × m matrix where the column vector x_j is a low-dimensional representation of user j in a latent space of "topics". The element x_kj encodes the "affinity" of user j to topic k as the total number of occurrences of all events contributing to topic k. Λ is an n × d matrix where the column Λ_k represents the kth topic as a vector of event probabilities p(i|k), that is, a multinomial distribution of event counts conditioned on topic k. Therefore, the Poisson mean matrix Y has a linear parameterization with Λ and X, i.e., Y = ΛX. GaP essentially yields an approximate factorization of the data matrix into two matrices with a low inner dimension, F ≈ ΛX. The approximation has an appealing interpretation column-wise, f ≈ Λx; that is, each user vector f in event space is approximated by a linear combination of the column vectors of Λ, weighted by the topical mixture x for that user. Since by design d ≪ n, m, the model matrix Λ shall capture significant statistical (topical) structure hidden in the data. Finally, x_kj is given a gamma distribution as an empirical prior. The generative process of an observed event-user count f_ij is as follows:
1. Generate x_kj ∼ Gamma(α_k, β_k), for all k.
2. Generate y_ij occurrences of event i from a mixture over k of Multinomial(p(i|k)) with outcome i, i.e., y_ij = Λ_i x_j, where Λ_i is the ith row vector of Λ.
3. Generate f_ij ∼ Poisson(y_ij).
The starting point of the generative process is a gamma distribution of x, with pdf

$p(x) = \frac{x^{\alpha-1} \exp(-x/\beta)}{\beta^{\alpha} \Gamma(\alpha)}$ for $x > 0$ and $\alpha, \beta > 0$. (1)

It has a shape parameter α and a scale parameter β. Next, from the latent random vector characterizing a user x, we derive the expected count vector y for the user as follows:

$y = \Lambda x$. (2)

The last stochastic process is a Poisson distribution of the observed count f with the mean value y,

$p(f) = \frac{y^{f} \exp(-y)}{f!}$ for $f \geq 0$. (3)

The data likelihood for a user generated as described above is

$p(f, x) = \prod_{i=1}^{n} \frac{y_i^{f_i} \exp(-y_i)}{f_i!} \prod_{k=1}^{d} \frac{(x_k/\beta_k)^{\alpha_k - 1} \exp(-x_k/\beta_k)}{\beta_k \Gamma(\alpha_k)}$, (4)
[Figure 1: GaP graphical model. F is factorized as F ≈ ΛX: f_ij ∼ Poisson(y_ij), y_ij arises from a mixture of Multinomial(p(i|k)) draws, and x_kj ∼ Gamma(α_k, β_k).]
[Figure 2: GaP online prediction. A cookie hashmap (inverted index) maps the user cookie to its column x_j of X; a query-ad hashmap (inverted index) maps each candidate query-ad string to its row Λ_i of Λ; the predicted CTR is z_ij = Λ_i x_j.]
where y_i = Λ_i x, and the log likelihood reads

$\ell = \sum_{i} (f_i \log y_i - y_i - \log f_i!) + \sum_{k} [(\alpha_k - 1) \log x_k - x_k/\beta_k - \alpha_k \log(\beta_k) - \log \Gamma(\alpha_k)]$. (5)

Given a corpus of user data F = (f_1, ..., f_j, ..., f_m), we wish to find the maximum likelihood estimates (MLE) of the model parameters (Λ, X). Based on an elegant multiplicative recurrence developed by Lee and Seung [22] for NMF, the following EM algorithm was derived in [6]:

E-step: $x_{kj} \leftarrow x_{kj} \frac{\sum_i (f_{ij} \Lambda_{ik} / y_{ij}) + (\alpha_k - 1)/x_{kj}}{\sum_i \Lambda_{ik} + 1/\beta_k}$. (6)

M-step: $\Lambda_{ik} \leftarrow \Lambda_{ik} \frac{\sum_j f_{ij} x_{kj} / y_{ij}}{\sum_j x_{kj}}$. (7)
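The recurrences in Eqs. (6)-(7) map directly onto matrix operations. The following is a dense, toy-scale NumPy sketch of one EM iteration, not the authors' production code; the variable names are ours (F is the n x m count matrix, Lam is Λ, and alpha, beta hold the per-topic gamma priors):

def gap_em_step(F, Lam, X, alpha, beta, eps=1e-10):
    """One EM iteration of standard GaP; all arguments are NumPy arrays."""
    # E-step, Eq. (6): update every user column of X at once.
    Y = Lam @ X + eps                                     # n x m expected counts
    num = Lam.T @ (F / Y) + (alpha[:, None] - 1.0) / (X + eps)
    den = Lam.sum(axis=0)[:, None] + 1.0 / beta[:, None]
    X = X * num / den
    # M-step, Eq. (7): update the topic (model) matrix.
    Y = Lam @ X + eps
    Lam = Lam * ((F / Y) @ X.T) / (X.sum(axis=1)[None, :] + eps)
    return Lam, X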
2.1 Two variants for CTR prediction
The standard GaP model fits discrete count data. We now describe two variant derivations for predicting CTR. The first approach is to predict clicks and views independently, and then to construct the unbiased estimator of CTR, typically with Laplacian smoothing:

$\widehat{\mathrm{CTR}}_{ad(i)j} = \frac{\Lambda_{click(i)} x_j + \alpha}{\Lambda_{view(i)} x_j + \beta}$, (8)

where click(i) and view(i) are the indices corresponding to the click/view pair of ad feature i, respectively, by user j; α and β are smoothing constants.
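As a sketch of Eq. (8) (names are ours; click_idx and view_idx are assumed lookup tables from an ad feature to its click and view rows of Λ, and the smoothing constants are placeholder values):

def predict_ctr(Lam, x_j, ad, click_idx, view_idx, a=1.0, b=100.0):
    """Smoothed per-(ad, user) CTR from separately predicted clicks/views."""
    clicks = Lam[click_idx[ad]] @ x_j   # expected click count for this ad
    views = Lam[view_idx[ad]] @ x_j     # expected view count for this ad
    return (clicks + a) / (views + b)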
The second idea is to consider the relative frequency of counts, particularly the number of clicks relative to the number of views for the events of interest. Formally, let F be a matrix of observed click counts and Y be a matrix of the corresponding expected click counts. We further introduce a matrix of observed views V and a matrix of click probabilities Z, and define the link function:

$F \sim \mathrm{Poisson}(Y), \quad Y = V.Z = V.(\Lambda X)$, (9)

where "." denotes element-wise matrix multiplication. The linear predictor Z = ΛX now estimates CTR directly, and is scaled by the observed view counts V to obtain the expected number of clicks Y. The Poisson assumption is only given to the click events F with the mean parameters Y. Given a number of views v and the probability of click for a single view, or CTR, a more natural stochastic model for click counts is Binomial(v, CTR). But since in ad data the number of views is sufficiently large and CTR is typically very small, the binomial converges to Poisson(v · CTR).
Given the same form of log likelihood in Eq. (5) but with the extended link function in Eq. (9), we derive the following EM recurrence:

E-step: $x_{kj} \leftarrow x_{kj} \frac{\sum_i (f_{ij} \Lambda_{ik} / z_{ij}) + (\alpha_k - 1)/x_{kj}}{\sum_i (v_{ij} \Lambda_{ik}) + 1/\beta_k}$. (10)

M-step: $\Lambda_{ik} \leftarrow \Lambda_{ik} \frac{\sum_j (f_{ij} x_{kj} / z_{ij})}{\sum_j (v_{ij} x_{kj})}$. (11)
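Relative to the sketch after Eq. (7), only the view-weighted denominators change. A sketch of Eqs. (10)-(11), with V the n x m matrix of observed view counts (names ours, toy-scale dense arrays):

def gap_ctr_em_step(F, V, Lam, X, alpha, beta, eps=1e-10):
    """One EM iteration of the CTR-based variant: Z = Lam @ X predicts CTR."""
    Z = Lam @ X + eps
    num = Lam.T @ (F / Z) + (alpha[:, None] - 1.0) / (X + eps)
    den = Lam.T @ V + 1.0 / beta[:, None]            # Eq. (10) denominator
    X = X * num / den
    Z = Lam @ X + eps
    Lam = Lam * ((F / Z) @ X.T) / (V @ X.T + eps)    # Eq. (11)
    return Lam, X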
2.2 Rationale for GaP model
GaP is a generative probabilistic model for discrete data (such as texts). Similar to LDA (latent
Dirichlet allocation) [4], GaP represents each sample (document or in this case a user) as a mixture of topics or interests. The latent factors in these models are non-negative, which has proved
to have several practical advantages. First of all, texts arguably do comprise passages of prose on
specific topics, whereas negative factors have no clear interpretation. Similarly, users have occasional interests in particular products or groups of products and their click-through propensity will
dramatically increase for those products. On the other hand ?temporary avoidance? of a product
line is less plausible, and one clearly cannot have negative click-through counts which would be a
consequence of allowing negative factors. A more practical aspect of non-negative factor models is
that weak factor coefficients are driven to zero, especially when the input data is itself sparse; and
hence the non-zeros will be much more stable, and cross-validation error much lower. This helps to
avoid overfitting, and a typical LDA or GaP model can be run with high latent dimensions without
overfitting, e.g., with 100 data measurements per user; one factor of a 100-dimensional PCA model
will essentially be a (reversible) linear transformation of the input data. On the choice of GaP vs.
LDA, the models are very similar, however there is a key difference. In LDA, the choice of latent
factor is made independently word-by-word, or in the BT case, ad view by ad view. In GaP however,
it is assumed that several items are chosen from each latent factor, i.e., that interests are locally related. Hence GaP uses gamma priors which include both shape and scale factors. The scale factors
provide an estimated count of the number of items drawn from each latent factor. Another reason for
our preference for GaP in this application is its simplicity. While LDA requires application of transcendental functions across the models with each iteration (e.g., the digamma function Ψ in Equation (8) of [4]),
GaP requires only basic arithmetic. Apart from transcendentals, the numbers of arithmetic operations of the two methods on same-sized data are identical. While we did not have the resources to
implement LDA at this scale in addition to GaP, small-scale experiments showed identical accuracy.
So we chose GaP for its speed and simplicity.
3 Sponsored search
We apply the second variant of GaP or the CTR-based formulation to SS CTR prediction, where the
factorization will directly yield a linear predictor of CTR or p(click|ad, user, query), as in Eq. (9).
Based on the structure of the SS click-through data, specifically the dimensionality and the user data
locality, the deployment of GaP for SS involves three processes: (1) offline training, (2) offline user
profile updating, and (3) online CTR prediction, as elaborated below.
3.1 The GaP deployment for SS
Offline training. First, given the observed click counts F and view counts V obtained from a
corpus of historical user data, we derive ? and X using the CTR-based GaP algorithm in Eqs. (10)
and (11). Counts are aggregated over a certain period of time (e.g., one month) and for a feature
space to be considered in the model. In SS, the primary feature type is the query-ad pair (noted as
QL for query-linead, where linead refers to a textual ad) since it is the response variable of which
the CTR is predicted. Other features can also be added based on their predicting capabilities, such
as query term, linead term, ad group, and match type. This will effectively change the per-topic
feature mixture in Λ and possibly the per-user topic mixture in X, with the objective of improving CTR prediction by adding more contextual information. In prediction though, one only focuses on the blocks of QL features in Λ and Z. In order for the model matrix Λ to capture the corpus-wide
topical structure, the entire user corpus should be used as training set.
Offline user profile updating. Second, given the derived model matrix Λ, we update the user profiles X in a distributed and data-local fashion. This updating step is necessary for two reasons. (1) User space is more volatile relative to feature space, due to cookie churn (fast turnover) and users' interests changing over time. To ensure the model captures the latest user behavioral patterns
and to have high coverage of users, one needs to refresh the model often, e.g., on a daily basis. (2)
Retraining the model from scratch is relatively expensive, and thus impractical for frequent model
refresh. However, partial model refresh, i.e., updating X, has a very efficient and scalable solution
which works as follows. Once a model is trained on a full corpus of user data, it suffices to keep only Λ, the model matrix so named. Λ contains the global information of latent topics in the form of feature mixtures. We then distribute Λ across servers, each randomly bucketized for a subset of users. Note that this bucketization is exactly how production ad serving works. With the global Λ and the user-local data F and V, X can be computed using the E-step recurrence only. According to Eq. (10), the update rule for a given user x_j only involves the data for that user and the global Λ. Moreover, since Λ and a local X usually fit in memory, we can perform successive E-steps to converge X in an order of magnitude less time compared with a global E-step. Notice that the multiplicative factor in the E-step depends on x_kj, the parameter being updated, so consecutive E-steps will indeed advance convergence.
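A sketch of the server-local refresh (names ours; F_loc, V_loc, X_loc are the slices for one bucket of users, and Lam is the fixed global model; note the E-step denominator of Eq. (10) is constant while Lam is frozen, so it is computed once):

def refresh_profiles(F_loc, V_loc, Lam, X_loc, alpha, beta,
                     n_steps=5, eps=1e-10):
    """Repeated E-steps of Eq. (10) with the global Lam held fixed;
    runs independently on each server/bucket."""
    den = Lam.T @ V_loc + 1.0 / beta[:, None]   # fixed while Lam is fixed
    for _ in range(n_steps):
        Z = Lam @ X_loc + eps
        num = Lam.T @ (F_loc / Z) + (alpha[:, None] - 1.0) / (X_loc + eps)
        X_loc = X_loc * num / den
    return X_loc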
Online CTR prediction. Finally, given the global Λ and a local X learned and stored in each server, the expected CTR for a user given a QL pair, or p(click|QL, user), is computed online as follows. Suppose a user issues a query; a candidate set of lineads is retrieved by applying various matching algorithms. Taking the product of these lineads with the query gives a set of QLs to be scored. One then extracts the row vectors from Λ corresponding to the candidate QL set to form a smaller block Λ^mat, and looks up the column vector x_j for that user from X. The predicted CTRs are obtained by a matrix-vector multiplication z_j^mat = Λ^mat x_j. The online prediction deployment is schematically shown in Figure 2.
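As a sketch, the online path is two hashmap lookups and one small matrix-vector product (the structure names are ours, following Figure 2):

def score_candidates(cookie, qls, user_index, ql_index, Lam, X):
    """Predicted CTR for each candidate QL of one user."""
    j = user_index[cookie]                # cookie hashmap (inverted index)
    rows = [ql_index[ql] for ql in qls]   # query-ad hashmap (inverted index)
    return Lam[rows] @ X[:, j]            # z_j^mat = Lam_mat @ x_j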
3.2 Positional normalization
Our analysis so far has been abstracted from another essential factor, that is, the position of an ad
impression on a search result page. It is known intuitively and empirically that ad position has a
significant effect on CTR [24, 14]. In this section we treat the positional effect in a statistically
sound manner.
The observed CTR actually represents a conditional probability p(click|position). We wish to learn
a CTR normalized by position, i.e., "scaled" to the same presentation position, in order to capture
the probability of click regardless of where the impression is shown. To achieve positional normalization, we assume the following Markov chain: (1) viewing an ad given its position, and then (2)
clicking the ad given a user actually views the ad; thus
p(click|position) = p(click|view) p(view|position), (12)

where "view" is the event of a user voluntarily examining an ad, instead of an ad impression itself.
Eq. (12) suggests a factorization of a matrix of observed CTRs into two vectors. As it turns out,
to estimate the positional prior p(view|position) we can apply a special GaP factorization with one
inner dimension. The data matrices F and V are now feature-by-position matrices, and the inner
dimension can be interpreted as the topic of physically viewing.
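A sketch of this one-inner-dimension factorization, using the CTR-based updates of Eqs. (10)-(11) (names ours; the gamma-prior terms are dropped to reflect the near-zero prior mentioned in Section 3.4):

import numpy as np

def positional_priors(F_pos, V_pos, n_iter=20, eps=1e-10):
    """Rank-1 GaP on feature-by-position click/view counts; the row
    vector x_pos is read off as p(view|position) up to scale."""
    n, p = F_pos.shape
    lam = np.ones((n, 1))
    x_pos = np.ones((1, p))
    for _ in range(n_iter):
        Z = lam @ x_pos + eps
        x_pos = x_pos * (lam.T @ (F_pos / Z)) / (lam.T @ V_pos + eps)
        Z = lam @ x_pos + eps
        lam = lam * ((F_pos / Z) @ x_pos.T) / (V_pos @ x_pos.T + eps)
    return x_pos / x_pos.max()   # scale so the most visible position has prior 1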
In both training and evaluation, one shall use the position-normalized CTR, i.e., p(click|view). First,
the GaP algorithm for estimating positional priors is run on the observed click and view counts of
(feature, position) pairs. This yields a row vector of positional priors x^pos. In model training, each
ad view occurrence is then normalized (multiplied) by the prior p(view|position) for the position
where the ad is presented. For example, the a priori CTR of a noticeable position (e.g., ov-top+1
in Yahoo's terminology, meaning the North 1 position in sponsored results) is typically higher than
that of an obscure position (e.g., ov-bottom+2) by a factor of up to 10. An observed count of views
placed in ov-top+1 thus has a greater normalized count than that in ov-bottom+2. This normalization
effectively asserts that, given a same observed (unnormalized) CTR, an ad shown in an inferior
position has a higher click probability per se than the one placed in a more obvious position. The
same view count normalization should also be applied during offline evaluation. In online prediction,
however, we need CTR estimates unbiased by positional effect in order for the matching ads to
be ranked based on their qualities (clickabilities). The linear predictor Z = ?X learned from a
position-normalized training dataset gives exactly the position-unbiased CTR estimation. In other
words, we are hypothesizing that all candidate ads are to be presented in a same imaginary position.
For an intuitive interpretation, if we scale positional priors so that the top position has a prior of 1,
i.e., the entry of x^pos for ov-top+1 is 1, all ads are normalized to that top position.
Another view of the positional prior model we use is an examination model [25], that is, the probability of clicking on an ad is the product of a positional probability and a relevance-based probability
which is independent of position. This model is simple and easy to solve for using maximum likelihood as explained above. This model is not dependent on the content of ads higher up on the
search page, as are for example the cascade [14] or DBN models [9]. These models are appropriate for search results, where users have a high probability of clicking on one of the links. However, for ads, the probability of clicking on ad links is extremely low, usually a fraction of a percent. Thus the effect of higher ads is a product of factors which are all extremely close to one. In this case, the
DBN positional prior reduces to a negative exponential function which is a good fit to the empirical
distribution found from the examination model.
3.3 Large-scale implementation
Data locality. Recall that updating X after a global training is distributed and only involves E-steps
using user-local data. In fact, this data locality can also be leveraged in training. More precisely,
Eq. (10) suggests that updating a user profile vector x_j via the E-step only requires that user's data f_j and v_j as well as the model matrix Λ. This computation has a very small memory footprint and typically fits in L1 cache. On the other hand, updating each single value in Λ as in Eq. (11) for the M-step requires a full pass over the corpus (all users' data) and hence is more expensive. To better exploit the data locality present in the E-step, we alternate 3 to 10 successive E-steps with one M-step. We also observe that the M-step involves summations over the j = 1, ..., m users, for both the numerator and the denominator in Eq. (11). Both summing terms (f_ij x_kj / z_ij and v_ij x_kj) only require data that is available locally (in memory) right after the E-step for user j. Thus the summations for the M-step can be computed incrementally along with the E-step recurrence for each user. As thus arranged, an iteration of 3-10 E-steps combined with one M-step only requires a single pass over the user corpus.
Data sparsity. The multiplicative recurrence exploits data sparsity very well. Note that the inner loops of both E-step and M-step involve calculating the ratio f_ij/z_ij. Since f is a count of very rare click events, one only needs to compute z when the corresponding f is non-zero. Let N_c be the total number of non-zero f terms, or distinct click events over all users. For each non-zero f_ij, computing the dot-product z_ij = Λ_i x_j takes d multiplications. Thus the numerators of both E-step and M-step have a complexity of O(N_c d). Both denominators have a complexity of O(N_v), where N_v is the total number of non-zero v terms. The final divisions to compute the multiplicative factors in one outer loop over topics take O(d) time (the other outer loop over m or n has already been accounted for by both N_c and N_v). Typically, we have N_v ≫ N_c ≫ m > n ≫ d. Thus the smoothed complexity [26] of offline training is O(N_v d r), where r is the number of EM iterations, and r = 20 suffices for convergence.
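Putting locality and sparsity together, the per-user E-step touches only that user's non-zero entries. A sketch for one user j (names ours; f_j and v_j are sparse column vectors, e.g., scipy.sparse columns exposing .indices and .data, and Lam is the dense n x d model):

def e_step_user(f_j, v_j, Lam, x_j, alpha, beta, eps=1e-10):
    """One E-step of Eq. (10) for user j: z is evaluated only at the
    non-zero click entries; the view term uses the non-zero v entries."""
    rows_c, vals_c = f_j.indices, f_j.data        # non-zero clicks
    rows_v, vals_v = v_j.indices, v_j.data        # non-zero views
    z = Lam[rows_c] @ x_j + eps                   # d mults per non-zero f
    num = Lam[rows_c].T @ (vals_c / z) + (alpha - 1.0) / (x_j + eps)
    den = Lam[rows_v].T @ vals_v + 1.0 / beta
    return x_j * num / den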
Scalability. Now that we have reached an algorithm of linear complexity O(Nv dr) with various
implementation tricks as just described. We now illustrate the scalability of our algorithm by the
following run-time analysis. The constant factor of the complexity is 4, the number of division
terms in the recurrence formulae. Suppose Yahoo's entire SS user base contains about 200 million users. A 1/16 sample (32 out of 512 buckets) gives around 10 million users. Further assume 100 distinct ad views on average per user and an inner dimension of 10; thus the total number of operations is 4 × 10^10 for one iteration. The model converges after 15-20 iterations. Our single-machine implementation with sparse matrix operations (which are readily available in MATLAB [2]
and LAPACK [3]) gives above 100 Mflops, hence it takes 1.6-2.2 hours to train a model.
So far, we have demonstrated one paradigm of scaling up, which focuses on optimizing arithmetic
operations, such as using sparse matrix multiplication in the innermost loop. Another paradigm is
through large-scale parallelization, such as using a Hadoop [1] cluster, as we illustrate in the BT
implementation in Section 4.1.
3.4 Experiments
We have experimented with different feature types, and found empirically the best combination
is query-linead (QL), query term (QT), and linead term (LT). A QL feature is a product of query
and linead. For QTs, queries are tokenized with stemming and stopwords removed. For LTs, we
first concatenate the title, short description, and description of a linead text, and then extract up to
8 foremost terms. The dataset was obtained from 32 buckets of users and covering a one-month
period, where the first three weeks form the training set and the last week was held out for testing.
For feature selection, we set the minimum frequency to 30 to be included for all three feature types,
which yielded slightly above 1M features comprised of 700K QLs, 175K QTs, and 135K LTs. We
also filtered out users with a total event count below 10, which gave 1.6M users. We used a latent dimension of 10, which was empirically among the best while computationally favorable. For the gamma prior on X, we fixed the shape parameter α to 1.45 and the scale parameter β to 0.2 across all latent topics for model training; and used a near-zero prior for positional prior estimation.
We benchmarked our GaP model with two simple baseline predictors: (1) Panama score (historical
COEC defined as the ratio of the observed clicks to the expected clicks [9]), and (2) historical QL
CTR normalized by position. The experimental results are plotted in Figure 3, and numerically
summarized in Tables 1 and 2. A click-view ROC curve plots the click recall vs. the view recall,
from the testing examples ranked in descending order of predicted CTR. A CTR lift curve plots
the relative CTR lift vs. the view recall. As the results show, historical QL CTR is a fair predictor
relative to Panama score. The GaP model yielded a ROC area of 0.82 or 2% improvement over
historical QL CTR, and a 68% average CTR lift over Panama score at the 5-20% view recall range.
[Figure 3 plots: (a) click-view ROC curves (click recall vs. view recall) for GaP, Panama, and QL-CTR; (b) pairwise CTR lift vs. view recall.]
Figure 3: Model performance comparison among (1) GaP using QL-QT-LT, (2) Panama score predictor, and (3) historical QL-CTR predictor.
Table 1: Areas under ROC curves
GaP  | Panama | QL-CTR
0.82 | 0.72   | 0.80
Table 2: CTR lift of GaP over Panama
View recall | 1%   | 1-5% avg. | 5%   | 5-20% avg.
CTR lift    | 0.96 | 0.86      | 0.93 | 0.68

4 Behavioral targeting
For the BT application, we adopt the first approach to CTR prediction as described in Section 2.1.
The number of clicks and views for a given ad are predicted separately and a CTR estimator is
constructed as in Eq. (8). Moreover, the granular nature of GaP allows for significant flexibility in
the way prediction can be done, as we describe next.
4.1 Prediction with different granularity
We form the data matrix F from historical user behavioral data at the granular level, including
click and view counts for individual ads, as well as other explanatory variable features such as page
views. This setup allows for per-ad CTR prediction, i.e., p(click|ad, user), given by Eq. (8). Per-category CTR prediction, as done in previous BT systems, i.e., p(click|ad-category, user), can also be performed in this setup by marginalizing Λ over categories:

$\widehat{\mathrm{CTR}}_{cj} = \Big(\sum_{i \in c} \Lambda_{click(i)} x_j + \alpha\Big) \Big/ \Big(\sum_{i \in c} \Lambda_{view(i)} x_j + \beta\Big)$, (13)

where c denotes a category and i ∈ c is defined by ad categorization.
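Eq. (13) only requires summing the relevant rows of Λ before the division, so per-category scores come almost for free once the per-ad model is trained. A sketch (names ours; cat_clicks and cat_views are the lists of click/view row indices for the ads labeled with category c, and the smoothing constants are placeholders):

def category_ctr(Lam, x_j, cat_clicks, cat_views, a=1.0, b=100.0):
    """Per-category CTR by marginalizing rows of Lam over the ads in c;
    the index lists come from the (human-labeled) ad categorization."""
    clicks = Lam[cat_clicks].sum(axis=0) @ x_j
    views = Lam[cat_views].sum(axis=0) @ x_j
    return (clicks + a) / (views + b)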
The modeling was implemented in a distributed fashion using Hadoop. As discussed in Section 3.3,
the EM algorithm can be parallelized efficiently by exploiting user data locality, particularly in the
MapReduce [15] framework. However, compared with the scaling approach adopted by the SS implementation, the large-scale parallelization paradigm typically cannot support complex operations as efficiently; for example, sparse matrix multiplication is performed via three-level nested loops in Java.
4.2
Experiments
The data matrix F was formed to contain rows for all ad clicks and views, as well as page views with
frequency above a threshold of 100. The counts were aggregated over a two-week period of time
and from 32 buckets of users. This setup resulted in 170K features comprised of 120K ad clicks or
views, and 50K page views, which allows the model matrix ? to fit well in memory. The number
of users was about 40M. We set the latent inner dimension d = 20. We ran 13 EM iterations where
each iteration alternated 3 E-steps with one M-step. Prediction accuracy was evaluated using data
from the next day following the training period, and measured by the area under the ROC curve.
We first compared per-ad prediction (Eq. (8)) with per-category prediction (Eq. (13)), and obtained
the ROC areas of 95% and 70%, respectively. One latest technology used Poisson regression for
per-category modeling and yielded an average ROC area of 83% [11]. This shows that capturing
intra-category structure by factor modeling can result in substantial improvement over the state-ofthe-art of BT. We also measured the effect of the latent dimension on the model performance by
varying d = 10 to 100, and observed that per-ad prediction is insensitive to the latent dimension
with all ROC areas in the range of [95%, 96%], whereas per-category prediction benefits from larger
inner dimensions. Finally, to verify the scalability of our parallel implementation, we increased the
size of training data from 32 to 512 user buckets. The experiments were run on a 250-node Hadoop
cluster. As shown in Table 3, the running time scales sub-linearly with the number of users.
Table 3: Run-time vs. number of user buckets
Number of buckets | 32   | 64   | 128  | 512
Run-time (hours)  | 11.2 | 18.6 | 31.7 | 79.8
Surprisingly though, the running time for 32 buckets with a 250-node cluster is no less than that of a single-node yet highly efficient implementation as analyzed in Section 3.3 (after accounting for the different factors of users 4×, latent dimension 2×, and EM iterations 13/15), with a similar 100 Mflops. Actually, the same pattern has been found in one previous large-scale learning task [11]. We argue that large-scale parallelization is not necessarily the best way, nor the only way, to deal with scaling; in fact, implementation issues (such as cache efficiency, number of references, data encapsulation) still cause orders-of-magnitude differences in performance and can more than overwhelm the additional nodes. The right principle of scaling up is to start with a single node and achieve above 100 Mflops with sparse arithmetic operations.
5 Discussion
GaP is a dimensionality reduction algorithm. The low-dimensional latent space allows scalable
and efficient learning and prediction, and hence making the algorithm practically appealing for
web-scale data like in SS and BT. GaP is also a smoothing algorithm, which yields smoothed click
prediction. This addresses the data sparseness issue that is typically present in click-through data.
Moreover, GaP builds personalization into ad targeting, by profiling a user as a vector of latent
variables. The latent dimensions are inferred purely from data, with the objective to maximize the
data likelihood or the capability to predict target events. Furthermore, position of ad impression
has a significant impact on CTR. GaP factorization with one inner dimension gives a statistically
sound approach to estimating the positional prior. Finally, the GaP-derived latent low-dimensional
representation of user can be used as a valuable input to other applications and products, such as
user clustering, collaborative filtering, content match, and algorithmic search.
References
[1] http://hadoop.apache.org/.
[2] http://www.mathworks.com/products/matlab/.
[3] http://www.netlib.org/lapack/.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022, 2003.
[5] A. Broder, M. Fontoura, V. Josifovski, and L. Riedel. A semantic approach to contextual advertising. ACM Conference on Information Retrieval (SIGIR 2007), pages 559-566, 2007.
[6] J. F. Canny. GaP: a factor model for discrete data. ACM Conference on Information Retrieval (SIGIR 2004), pages 122-129, 2004.
[7] J. F. Canny, S. Zhong, S. Gaffney, C. Brower, P. Berkhin, and G. H. John. Granular data for behavioral targeting. U.S. Patent Application 20090006363.
[8] D. Chakrabarti, D. Agarwal, and V. Josifovski. Contextual advertising by combining relevance with click feedback. International World Wide Web Conference (WWW 2008), pages 417-426, 2008.
[9] O. Chapelle and Y. Zhang. A dynamic Bayesian network click model for web search ranking. International World Wide Web Conference (WWW 2009), pages 1-10, 2009.
[10] Y. Chen, D. Pavlov, P. Berkhin, and J. F. Canny. Large-scale behavioral targeting for advertising over a network. U.S. Patent Application 12/351,749, filed: Jan 09, 2009.
[11] Y. Chen, D. Pavlov, and J. F. Canny. Large-scale behavioral targeting. ACM Conference on Knowledge Discovery and Data Mining (KDD 2009), 2009.
[12] C. Y. Chung, J. M. Koran, L.-J. Lin, and H. Yin. Model for generating user profiles in a behavioral targeting system. U.S. Patent 11/394,374, filed: Mar 29, 2006.
[13] M. Ciaramita, V. Murdock, and V. Plachouras. Online learning from click data for sponsored search. International World Wide Web Conference (WWW 2008), pages 227-236, 2008.
[14] N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. Web Search and Web Data Mining (WSDM 2008), pages 87-94, 2008.
[15] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[16] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
[17] D. C. Fain and J. O. Pedersen. Sponsored search: a brief history. Bulletin of the American Society for Information Science and Technology, 32(2):12-13, 2006.
[18] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1-2):177-196, 2001.
[19] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
[20] I. T. Jolliffe. Principal Component Analysis. Springer, 2002.
[21] A. Lacerda, M. Cristo, M. A. Gonçalves, W. Fan, N. Ziviani, and B. Ribeiro-Neto. Learning to advertise. ACM Conference on Information Retrieval (SIGIR 2006), pages 549-556, 2006.
[22] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems (NIPS 2000), 13:556-562, 2000.
[23] S. Pandey and C. Olston. Handling advertisements of unknown quality in search advertising. Advances in Neural Information Processing Systems (NIPS 2006), 19:1065-1072, 2006.
[24] F. Radlinski and T. Joachims. Minimally invasive randomization for collecting unbiased preferences from clickthrough logs. National Conference on Artificial Intelligence (AAAI 2006), pages 1406-1412, 2006.
[25] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: estimating the click-through rate for new ads. International World Wide Web Conference (WWW 2007), pages 521-530, 2007.
[26] D. A. Spielman and S.-H. Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385-463, 2004.
3,172 | 3,874 | Efficient Match Kernels between Sets of Features
for Visual Recognition
Cristian Sminchisescu
University of Bonn
sminchisescu.ins.uni-bonn.de
Liefeng Bo
Toyota Technological Institute at Chicago
[email protected]
Abstract
In visual recognition, the images are frequently modeled as unordered collections
of local features (bags). We show that bag-of-words representations commonly
used in conjunction with linear classifiers can be viewed as special match kernels,
which count 1 if two local features fall into the same regions partitioned by visual words and 0 otherwise. Despite its simplicity, this quantization is too coarse,
motivating research into the design of match kernels that more accurately measure the similarity between local features. However, it is impractical to use such
kernels for large datasets due to their significant computational cost. To address
this problem, we propose efficient match kernels (EMK) that map local features
to a low dimensional feature space and average the resulting vectors to form a setlevel feature. The local feature maps are learned so their inner products preserve,
to the best possible, the values of the specified kernel function. Classifiers based
on EMK are linear both in the number of images and in the number of local features. We demonstrate that EMK are extremely efficient and achieve the current
state of the art in three difficult computer vision datasets: Scene-15, Caltech-101
and Caltech-256.
1 Introduction
Models based on local features have achieved state-of-the-art results in many visual object recognition tasks. For example, an image can be described by a set of local features extracted from patches
around salient interest points or regular grids, or a shape can be described by a set of local features
defined at edge points. This raises the question of how one should measure the similarity between
two images represented as sets of local features. The problem is non-trivial because the cardinality
of the set varies with each image and the elements are unordered.
Bag of words (BOW) [27] is probably one of the most popular image representations, due to both
its conceptual simplicity and its computational efficiency. BOW represents each local feature with
the closest visual word and counts the occurrence frequencies in the image. The resulting histogram
is used as an image descriptor for object recognition, often in conjunction with linear classifiers.
The length of the histogram is given by the number of visual words, being the same for all images.
Various methods for creating vocabularies exist [10], the most common being k-means clustering of
all (or a subsample of) the local features to obtain visual words.
An even better approach to recognition is to define kernels over sets of local features. One way is to
exploit closure rules. The sum match kernel of Haussler [7] is obtained by adding local kernels over
all combinations of local features from two different sets. In [17], the authors modify the sum kernel
by introducing an integer exponent on local kernels. Neighborhood kernels [20] integrate the spatial
location of local features into a sum match kernel. Pyramid match kernels [5, 14, 13] map local
features to multi-resolution histograms and compute a weighted histogram intersection. Algebraic
set kernels [26] exploit tensor products to aggregate local kernels, whereas principal angle kernels
[29] measure similarities based on angles between linear subspaces spanned by local features in the
two sets. Other approaches estimate a probability distribution on sets of local features, then derive
their similarity using distribution-based comparison measures [12, 18, 2]. All of the above methods
need to explicitly evaluate the full kernel matrix, hence they require space and time complexity that
is quadratic in the number of images. This is impractical for large datasets (see Section 4).
In this paper we present efficient match kernels (EMK) that combine the strengths of both bag of
words and set kernels. We map local features to a low dimensional feature space and construct
set-level features by averaging the resulting feature vectors. This feature extraction procedure is not
significantly different than BOW. Hence EMK can be used in conjunction with linear classifiers and
do not require the explicit computation of a full kernel matrix?this leads to both space and time
complexity that is linear in the number of images. Experiments on three image categorization tasks
show that EMK are effective computational tools.
2 Bag of Words and Match Kernels
In supervised image classification, we are given a training set of images and their corresponding
labels. The goal is to learn a classifier to label unseen images. We adopt a bag of features method,
which represents an image as a set of local features. Let $X = \{x_1, \ldots, x_p\}$ be a set of local features in an image and $V = \{v_1, \ldots, v_D\}$ the dictionary, a set of visual words. In BOW, each local feature is quantized into a $D$-dimensional binary indicator vector $\mu(x) = [\mu_1(x), \ldots, \mu_D(x)]^\top$, where $\mu_i(x) = 1$ if $x \in R(v_i)$ and $0$ otherwise, with $R(v_i) = \{x : \|x - v_i\| \le \|x - v\|, \forall v \in V\}$.
The feature vectors for one image form a normalized histogram $\bar{\mu}(X) = \frac{1}{|X|}\sum_{x \in X} \mu(x)$, where $|\cdot|$ is the cardinality of a set. BOW features can be used in conjunction with either a linear or a kernel classifier, albeit the latter often leads to expensive training and testing (see Section 4). When a linear classifier is used, the resulting kernel function is:
$$K_B(X, Y) = \bar{\mu}(X)^\top \bar{\mu}(Y) = \frac{1}{|X||Y|}\sum_{x \in X}\sum_{y \in Y} \mu(x)^\top \mu(y) = \frac{1}{|X||Y|}\sum_{x \in X}\sum_{y \in Y} \delta(x, y) \qquad (1)$$
with
$$\delta(x, y) = \begin{cases} 1, & x, y \in R(v_i),\ \exists\, i \in \{1, \ldots, D\} \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$
$\delta(x, y)$ is obviously a positive definite kernel, measuring the similarity between two local features $x$ and $y$: $\delta(x, y) = 1$ if $x$ and $y$ belong to the same region $R(v_i)$, and $0$ otherwise. However, this type of quantization can be too coarse when measuring the similarity of two local features (see also fig. 1 in [21]), risking a significant decrease in classification performance. Better would be to replace $\delta(x, y)$ with a continuous kernel function that more accurately measures the similarity between $x$ and $y$:
$$K_S(X, Y) = \frac{1}{|X||Y|}\sum_{x \in X}\sum_{y \in Y} k(x, y) \qquad (3)$$
In fact, this is related to the normalized sum match kernel [7, 17]. Based on closure properties, $K_S(X, Y)$ is a positive definite kernel, as long as the components $k(x, y)$ are positive definite. For convenience, we refer to $k(x, y)$ as the local kernel. A negative impact of kernelization is the high computational cost required to compute the summation match function, which takes $O(|X||Y|)$ for a single kernel value rather than $O(1)$, the cost of evaluating a single kernel function defined on vectors. When used in conjunction with kernel machines, it takes $O(n^2)$ and $O(n^2 m^2 d)$ to store and compute the entire kernel matrix, respectively, where $n$ is the number of images in the training set, and $m$ is the average cardinality of all sets. For image classification, $m$ can be in the thousands of units, so the computational cost rapidly becomes quartic as $n$ approaches (or increases beyond) $m$. In addition to expensive training, the match kernel function also has a fairly high testing cost: for a test input, evaluating the discriminant $f(X) = \sum_{i=1}^n \alpha_i K_S(X_i, X)$ takes $O(nm^2 d)$. This is, again, unacceptably slow for large $n$. For sparse kernel machines, such as SVMs, the cost can decrease to some extent, as some of the $\alpha_i$ are zero. However, this does not change the order of complexity, as the level of sparsity usually grows linearly in $n$.
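To make the quadratic pairwise cost concrete, here is a minimal sketch of the normalized sum match kernel of eq. (3), assuming a Gaussian local kernel; the data sizes and names are illustrative, not taken from the authors' code.

```python
import numpy as np

def sum_match_kernel(X, Y, gamma=1.0):
    # Naive normalized sum match kernel K_S(X, Y) of eq. (3): averages the
    # local kernel k(x, y) = exp(-gamma ||x - y||^2) over all feature pairs,
    # hence O(|X||Y|) kernel evaluations per entry of the kernel matrix.
    total = 0.0
    for x in X:
        for y in Y:
            total += np.exp(-gamma * np.sum((x - y) ** 2))
    return total / (len(X) * len(Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 128))   # e.g., 50 SIFT-like local descriptors
Y = rng.normal(size=(60, 128))
print(sum_match_kernel(X, Y))    # a single entry of the n x n kernel matrix
```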
Definition 1. The kernel function $k(x, y) = \phi(x)^\top \phi(y)$ is called finite dimensional if the feature map $\phi(\cdot)$ is finite dimensional.
Method             | Train             | Test
Sum [7]            | O(n^2 m^2 d)      | O(n m^2 d)
Bhattacharyya [12] | O(n^2 m^3 d)      | O(n m^3 d)
PMK [6]            | O(n^2 m log(T) d) | O(n m log(T) d)
EMK-CKSVD          | O(nmDd + nD^2)    | O(mDd + D^2)
EMK-Fourier        | O(nmDd)           | O(mDd)

Table 1: Computational complexity for five types of "set kernels". "Test" means the computational cost per image. "Sum" is the sum match kernel used in [7]. "Bhattacharyya" is the Bhattacharyya kernel in [12]. PMK is the pyramid match kernel of [6], with T in PMK giving the value of the maximal feature range. d is the dimensionality of local features. D in EMK is the dimensionality of feature maps and does not change with the training set size. Our experiments suggest that a value of D in the order of thousands of units is sufficient for good accuracy. Thus, O(nmDd) will dominate the computational cost for training, and O(mDd) the one for testing, since m is usually in the thousands, and d in the hundreds of units. EMK uses linear classifiers and does not require the evaluation of the kernel matrix. The other four methods are used in conjunction with kernel classifiers, hence they all need to evaluate the entire kernel matrix. In the case of nearest neighbor classifiers, there is no training cost, but testing costs remain unchanged.
$\delta(x, y)$ is a special type of finite dimensional kernel. With a finite dimensional kernel, the match kernel can be simplified as:
$$K_S(X, Y) = \bar{\phi}(X)^\top \bar{\phi}(Y) \qquad (4)$$
where $\bar{\phi}(X) = \frac{1}{|X|}\sum_{x \in X} \phi(x)$ is the feature map on the set of vectors. Since $\bar{\phi}(X)$ is finite and can be computed explicitly, we can extract feature vectors on the set $X$, then apply a linear classifier on the resulting representation. We call (4) an efficient match kernel (EMK). The feature extraction in EMK is not significantly different from the bag of words method. The training and testing costs are $O(nmDd)$ and $O(mDd)$ respectively, where $D$ is the dimensionality of the feature map $\phi(x)$. If the feature map $\phi(x)$ is low dimensional, the computational cost of EMK can be much lower than the one required to evaluate the match kernel by computing the kernel functions $k(x, y)$. For example, the cost is lower by a factor of $n$ when $D$ has the same order as $m$ (this is the case in our experiments). Notice that we only need the feature vectors $\bar{\phi}(X)$ in EMK, hence it is not necessary to compute the entire kernel matrix. Since recent developments have shown that linear SVMs can be trained in linear complexity [25], there is no substantial cost added in the training phase. The complexity of EMK and of several other well-known set kernels is reviewed in Table 1.
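The identity behind eq. (4) is easy to check numerically: for any finite dimensional feature map, the inner product of the averaged per-feature maps equals the full double sum. A small sketch, using the BOW indicator map as the (purely illustrative) finite dimensional feature map:

```python
import numpy as np

def bow_indicator(X, V):
    # mu(x): binary indicator of the nearest visual word, one row per feature.
    dists = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # (p, D)
    mu = np.zeros_like(dists)
    mu[np.arange(len(X)), dists.argmin(1)] = 1.0
    return mu

def emk_feature(X, V):
    # Set-level feature: average of the per-feature maps (eq. 4).
    return bow_indicator(X, V).mean(0)

rng = np.random.default_rng(1)
V = rng.normal(size=(10, 5))             # D = 10 visual words in R^5
X, Y = rng.normal(size=(30, 5)), rng.normal(size=(40, 5))

# K_S via the averaged feature maps ...
k_emk = emk_feature(X, V) @ emk_feature(Y, V)
# ... equals the double sum over all feature pairs (eqs. 1-3).
mu_X, mu_Y = bow_indicator(X, V), bow_indicator(Y, V)
k_sum = (mu_X @ mu_Y.T).sum() / (len(X) * len(Y))
assert np.isclose(k_emk, k_sum)
```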
If necessary, location information can be incorporated into EMK, using a spatial pyramid [14, 13]:
$$K_P(X, Y) = \sum_{l=0}^{L-1}\sum_{t=1}^{2^l} 2^{-l}\, K_S\big(X^{(l,t)}, Y^{(l,t)}\big) = \bar{\phi}_P(X)^\top \bar{\phi}_P(Y),$$
where $L$ is the number of pyramid levels, $2^l$ is the number of spatial cells in the $l$-th pyramid level, $X^{(l,s)}$ are the local features falling within the spatial cell $(l, s)$, and $\bar{\phi}_P(X) = [\bar{\phi}(X^{(1,1)})^\top, \ldots, \bar{\phi}(X^{(l,s)})^\top]^\top$.
While there can be many choices for the local feature maps $\phi(x)$ (the positive definiteness of $k(x, y) = \phi(x)^\top \phi(y)$ can always be guaranteed), most do not necessarily lead to a meaningful similarity measure. In this paper, we give two principled methods to create meaningful local feature maps $\phi(x)$, by arranging for their inner products to approximate a given kernel function.
3 Efficient Match Kernels
In this section we present two kernel approximations, one based on low-dimensional projections (Section 3.1), and one based on random Fourier set features (Section 3.2).
3.1 Learning Low Dimensional Set Features
Our approach is to project the high dimensional feature vectors $\phi(x)$ induced by the kernel $k(x, y) = \phi(x)^\top \phi(y)$ to a low dimensional space spanned by $D$ basis vectors, then construct a local kernel from inner products, based on low-dimensional representations. Given $\{\phi(z_i)\}_{i=1}^D$, a set of basis vectors $z_i$, we can approximate the feature vector $\phi(x)$:
$$v_x = \arg\min_{v_x} \|\phi(x) - H v_x\|^2 \qquad (5)$$
[Figure 1 plots omitted]
Figure 1: Low-dimensional approximations for a Gaussian kernel. Left: approximated Gaussian
kernel with 20 learned feature maps. Center: the training objective (12) as a function of stochastic
gradient descent iterations. Right: approximated Gaussian kernel based on 200 random Fourier
features. The feature maps are learned from 200 samples, uniformly drawn from [-10,10].
where $H = [\phi(z_1), \ldots, \phi(z_D)]$ and $v_x$ are low-dimensional (projection) coefficients. This is a convex quadratic program with analytic solution:
$$v_x = (H^\top H)^{-1}(H^\top \phi(x)) \qquad (6)$$
The local kernel derived from the projected vectors is:
$$k_l(x, y) = [H v_x]^\top [H v_y] = k_Z(x)^\top K_{ZZ}^{-1} k_Z(y) \qquad (7)$$
where $k_Z$ is a $D \times 1$ vector with $\{k_Z\}_i = k(x, z_i)$ and $K_{ZZ}$ is a $D \times D$ matrix with $\{K_{ZZ}\}_{ij} = k(z_i, z_j)$. For $G^\top G = K_{ZZ}^{-1}$ (notice that $K_{ZZ}$ is positive definite), the local feature maps are:
$$\phi(x) = G\, k_Z(x) \qquad (8)$$
The resulting full feature map is $\bar{\phi}(X) = \frac{1}{|X|}\, G \left[\sum_{x \in X} k_Z(x)\right]$, with computational complexity $O(mDd + D^2)$ for a set of local features. A related method is the kernel codebook [28], where a set-level feature is also extracted based on a local kernel, but with a different feature map $\phi(\cdot)$. An essential difference is that inner products of our set-level features $\bar{\phi}(X)$ formally approximate the sum-match kernel, whereas the ones induced by the kernel codebook do not. Therefore EMK only requires a linear classifier, whereas a kernel codebook would require a non-linear classifier for comparable performance. As explained, this can be prohibitively expensive to both train and test in large datasets. Our experiments, shown in Table 3, further suggest that EMK outperforms the kernel codebook, even in the non-linear case.
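For concreteness, a sketch of the resulting feature extraction given a basis Z, assuming a Gaussian local kernel. G is taken here as the inverse of the lower Cholesky factor of K_ZZ, one valid choice satisfying G^T G = K_ZZ^{-1}; the small diagonal jitter is our addition for numerical stability.

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    # Pairwise Gaussian kernel values k(a, b) = exp(-gamma * ||a - b||^2).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def emk_cksvd_feature(X, Z, gamma=1.0, jitter=1e-8):
    # Low-dimensional EMK feature of a set X given basis Z (eqs. 6-8):
    # phi(x) = G k_Z(x) with G^T G = K_ZZ^{-1}, averaged over the set.
    K_ZZ = gaussian_gram(Z, Z, gamma) + jitter * np.eye(len(Z))
    L = np.linalg.cholesky(K_ZZ)          # K_ZZ = L L^T, so G = L^{-1} works
    k_ZX = gaussian_gram(Z, X, gamma)     # (D, |X|), columns are k_Z(x)
    phi = np.linalg.solve(L, k_ZX)        # each column is G k_Z(x)
    return phi.mean(axis=1)               # set-level feature for a linear SVM

rng = np.random.default_rng(2)
Z = rng.normal(size=(100, 16))            # D = 100 basis vectors in R^16
X = rng.normal(size=(40, 16))             # one bag of 40 local features
feature = emk_cksvd_feature(X, Z)
```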
How can we learn the basis vectors? One way is kernel principal component analysis (KPCA) [24] on a randomly selected pool of $F$ local features, with the basis set to the topmost $D$ eigenvectors. This faces two difficulties, however: (i) KPCA scales cubically in the number of selected local features, $F$; (ii) $O(Fmd)$ work is required to extract the set-level feature vector for one image, because the eigenvectors are linear combinations of the selected local feature vectors, $\sum_{i=1}^F \alpha_i \phi(x_i)$. For large $F$, as typically required for good accuracy, this approach is too expensive. Although the first difficulty can be palliated by iterative KPCA [11], the second computational challenge remains. Another option would be to approximate each eigenvector with a single feature vector $\phi(z)$ by solving the pre-image problem $(z, \beta) = \arg\min_{z,\beta} \|\sum_{i=1}^F \alpha_i \phi(x_i) - \beta\phi(z)\|^2$ after KPCA. However, the two-step approach is sub-optimal. Intuitively, it should be better to find the single vector approximations within a unified objective function. This motivates our constrained singular value decomposition in kernel feature space (CKSVD):
$$\arg\min_{V,Z} R(V, Z) = \frac{1}{F}\sum_{i=1}^F \|\phi(x_i) - H v_i\|^2 \qquad (9)$$
where $F$ is the number of randomly selected local features, $Z = [z_1, \ldots, z_D]$ and $V = [v_1, \ldots, v_F]$. If the pre-image constraints $H = [\phi(z_1), \ldots, \phi(z_D)]$ are dropped, it is easy to show
that KPCA can be recovered. The partial derivatives of $R$ with respect to $v_i$ are:
$$\frac{\partial R(V, Z)}{\partial v_i} = 2H^\top H v_i - 2H^\top \phi(x_i) \qquad (10)$$
Expanding equalities like $\frac{\partial R(V,Z)}{\partial v_i} = 0$ produces a linear system with respect to $v_i$ for a fixed $Z$. In this case, we can obtain the optimal, analytical solution: $v_i = (H^\top H)^{-1}(H^\top \phi(x_i))$. Substituting the solution in eq. (9), we can eliminate the variable $V$. To learn the basis vectors, instead of directly optimizing $R(V, Z)$, we can solve the equivalent optimization problem:
$$\arg\min_{Z} R^*(Z) = -\frac{1}{F}\sum_{i=1}^F k_Z(x_i)^\top K_{ZZ}^{-1} k_Z(x_i) \qquad (11)$$
Optimizing $R^*(Z)$ is tractable because its parameter space is much smaller than that of $R(V, Z)$. The problem (11) can be solved using any gradient descent algorithm. For efficiency, we use the stochastic (on-line) gradient descent (SGD) method. SGD applies to problems where the full gradient decomposes as a sum of individual gradients of the training samples. The standard (batch) gradient descent method updates the parameter vector using the full gradient, whereas SGD approximates it using the gradient at a single training sample. For large datasets, SGD is usually much faster than batch gradient descent. At the $t$-th iteration of SGD, we randomly pick a sample $x_t$ from the training set and update the parameter vector based on:
$$Z(t+1) = Z(t) - \frac{\eta}{t}\,\frac{\partial\left[-k_Z(x_t)^\top K_{ZZ}^{-1} k_Z(x_t)\right]}{\partial Z} \qquad (12)$$
where $\eta$ is the learning rate. In our implementation, we use $D$ samples (rather than just one) to compute the gradient. This produces more accurate results and matches the cost of inverting $K_{ZZ}$, which is $O(D^3)$ per iteration.
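A toy sketch of this learning loop on 1-D data, mirroring the setting of Fig. 1. To keep the sketch short, the gradient of (11) is approximated by finite differences; a practical implementation would use the analytic gradient, and the learning rate, batch size, and kernel width here are illustrative assumptions.

```python
import numpy as np

def objective(Z, Xb, gamma=1.0, jitter=1e-8):
    # Mini-batch estimate of R*(Z) from eq. (11):
    # negative mean of k_Z(x)^T K_ZZ^{-1} k_Z(x) over the batch Xb.
    K_ZZ = np.exp(-gamma * ((Z[:, None] - Z[None, :]) ** 2).sum(-1))
    K_inv = np.linalg.inv(K_ZZ + jitter * np.eye(len(Z)))
    k_ZX = np.exp(-gamma * ((Z[:, None] - Xb[None, :]) ** 2).sum(-1))
    return -np.mean(np.sum(k_ZX * (K_inv @ k_ZX), axis=0))

rng = np.random.default_rng(3)
data = rng.uniform(-10, 10, size=(200, 1))        # toy 1-D samples as in Fig. 1
Z = data[rng.choice(200, size=20, replace=False)].copy()
eta, eps = 0.5, 1e-5
for t in range(1, 201):                           # SGD loop of eq. (12)
    Xb = data[rng.choice(200, size=20)]           # D samples per step
    grad = np.zeros_like(Z)                       # finite-difference gradient,
    for idx in np.ndindex(*Z.shape):              # standing in for the analytic one
        Zp = Z.copy()
        Zp[idx] += eps
        grad[idx] = (objective(Zp, Xb) - objective(Z, Xb)) / eps
    Z -= (eta / t) * grad                         # decaying step eta / t
```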
3.2 Random Fourier Set Features
Another tractable approach to large-scale learning is to approximate the kernel using random feature maps [22, 23]. For a given function $\phi(x; \theta)$ and probability distribution $p(\theta)$, one can define the local kernel as $k_f(x, y) = \int p(\theta)\,\phi(x; \theta)\,\phi(y; \theta)\,d\theta$. We consider feature maps of the form $\phi(x; \theta) = \cos(\omega^\top x + b)$ with $\theta = (\omega, b)$, which project local features to a randomly chosen line, then pass the resulting scalar through a sinusoid. For example, to approximate the Gaussian kernel $k_f(x, y) = \exp(-\gamma\|x - y\|^2)$, the random feature maps are
$$\phi(x) = \sqrt{\tfrac{2}{D}}\,[\cos(\omega_1^\top x + b_1), \ldots, \cos(\omega_D^\top x + b_D)]^\top,$$
where the $b_i$ are drawn from the uniform distribution on $[-\pi, \pi]$ and the $\omega_i$ are drawn from a Gaussian with zero mean and covariance $2\gamma I$. Our proposed set-level feature map is (cf. Section 2): $\bar{\phi}(X) = \frac{1}{|X|}\sum_{x \in X}\phi(x)$. Although any shift-invariant kernel can be represented using random Fourier features, currently these are limited to Gaussian kernels or to kernels with analytical inverse Fourier transforms. In particular, $\omega$ needs to be sampled from the inverse Fourier transform of the corresponding shift-invariant kernel. The constraint of a shift-invariant kernel excludes a number of practically interesting similarities. For example, the $\chi^2$ kernel [8] and the histogram intersection kernel [5] are designed to compare histograms, hence they can be used as local kernels if the features are histograms. However, no random Fourier features can approximate them. Such problems do not occur for the learned low dimensional features, a methodology applicable to any Mercer kernel. Moreover, in experiments, we show that kernels based on low-dimensional approximations (Section 3.1) can produce superior results when the dimensionality of the feature maps is small. As seen in fig. 2, for applicable kernels, the random Fourier set features also produce very competitive results in the higher-dimensional regime.
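A sketch of the random Fourier set feature for the Gaussian kernel: with large D, the inner product of two (here, singleton) set features approaches the exact kernel value. The seed, sizes, and kernel width are illustrative choices.

```python
import numpy as np

def rf_set_feature(X, omegas, b):
    # Random Fourier set feature: average sqrt(2/D) cos(omega_i^T x + b_i)
    # over the local features in X.
    D = len(b)
    return (np.sqrt(2.0 / D) * np.cos(X @ omegas.T + b)).mean(axis=0)

rng = np.random.default_rng(4)
gamma, D, d = 0.05, 2000, 16
omegas = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))  # covariance 2*gamma*I
b = rng.uniform(-np.pi, np.pi, size=D)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = rf_set_feature(x[None, :], omegas, b) @ rf_set_feature(y[None, :], omegas, b)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
print(approx, exact)   # the two values agree closely for large D
```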
4 Experiments
We illustrate our methodology on three publicly available computer vision datasets: Scene-15, Caltech-101 and Caltech-256. For comparisons, we consider four algorithms: BOW-Linear, BOW-Gaussian, EMK-CKSVD and EMK-Fourier. BOW-Linear and BOW-Gaussian use a linear classifier and a Gaussian kernel classifier on BOW features, respectively. EMK-CKSVD and EMK-Fourier use linear classifiers. For the former, we learn low dimensional feature maps (Section 3.1), whereas for the latter we obtain them using random sampling (Section 3.2).
All images are transformed into grayscale form. The local features are SIFT descriptors [16] extracted from 16×16 image patches. Instead of detecting interest points, we compute SIFT descriptors over dense regular grids with a spacing of 8 pixels. For EMK, our local kernel is a Gaussian $\exp(-\gamma\|x - y\|^2)$. We use the same fixed $\gamma = 1$ for our SIFT descriptors in all datasets: Scene-15, Caltech-101 and Caltech-256, although a more careful selection is likely to further improve performance. We run k-means clustering to identify the visual words and stochastic gradient descent to learn the local feature maps, using a random set of 100,000 SIFT descriptors.
Our classifier is a support vector machine (SVM), which is extended to multi-class decisions by
combining one-versus-all votes. We work with LIBLINEAR [3] for BOW-Linear, EMK-Fourier
and EMK-CKSVD, and LIBSVM for BOW-Gaussian (the former need a linear classifier, whereas the latter uses a nonlinear one). The regularization and the kernel parameters (if available) in
SVM are tuned by ten-fold cross validation on the training set. The dimensionality of the feature
maps and the vocabulary size are both set to 1000 for fair comparisons, unless otherwise specified.
We have also experimented with larger vocabulary sizes in BOW, but no substantial improvement
was found (fig. 2). We measure performance based on classification accuracy, averaged over five
random training/testing splits. All experiments are run on a cluster built of compute nodes with 1.0
GHz processors and 8GB memory.
Scene-15: Scene-15 consists of 4485 images labeled into 15 categories. Each category contains 200
to 400 images whose average size is 300×250 pixels. In our first experiment, we train models on a
randomly selected set of 1500 images (100 images per category) and test on the remaining images.
We vary the dimensionality of the feature maps (EMK) and the vocabulary size (BOW) from 250
to 2000 with step length 250. For this dataset, we only consider the flat BOW and EMK (only
pyramid level 0) in all experiments. The classification accuracy of BOW-Linear, BOW-Gaussian,
EMK-Fourier and EMK-CKSVD is plotted in fig. 2 (left). Our second experiment is similar to
the first one, but the dimensionality of the feature maps and the vocabulary size vary from 50 to 200
with step length 25. In our third experiment, we fix the dimensionality of the feature maps to 1000,
and vary the training set size from 300 to 2400 with step length 300. We show the classification
accuracy of the four models as a function of the training set size in fig. 2 (right).
We notice that EMK is consistently 5-8% better than BOW in all cases. BOW-Gaussian is about 2% better than BOW-Linear on average, whereas EMK-CKSVD gives very similar performance to EMK-Fourier in most cases. We observe that EMK-CKSVD significantly outperforms EMK-Fourier for low-dimensional feature maps, indicating that learned features preserve the values of the Gaussian kernel better than the random Fourier maps in this regime; see also fig. 1 (center).
For comparisons, we attempted to run the sum match kernel on the full Scene-15 dataset. However, we weren't able to finish in one week. Therefore, we considered a smaller dataset, by training and testing with only 40 images from each category. The sum match kernel obtains 71.8% accuracy, slightly better than EMK-Fourier (71.0%) and EMK-CKSVD (71.4%) on the same dataset. The sum match kernel takes about 10 hours each for training and testing, whereas
EMK-Fourier and EMK-CKSVD need less than 1 hour, most spent computing SIFT descriptors.
In addition, we use 10,000 randomly selected SIFT descriptors to learn KPCA-based local feature
maps, which takes about 12 hours for the training and testing sets on the full Scene-15 dataset,
respectively. We obtain slightly lower accuracy than EMK-Fourier and EMK-CKSVD. One reason
can be the small sample size, but it is currently prohibitive, computationally, to use larger ones.
Caltech-101: Caltech-101 [15] contains 9144 images from 101 object categories and a background
category. Each category has 31 to 800 images with significant color, pose and lighting variations.
Caltech-101 is one of the most frequently used benchmarks for image classification, and results obtained by different algorithms are available from the published papers, allowing direct comparisons.
Following the common experimental setting, we train models on 15/30 image per category and test
on the remaining images. We consider three pyramid levels: L = 0, L = 1, and L = 2 (for the
latter two, spatial information is used). We have also tried increasing the number of levels in the
pyramid, but did not obtain a significant improvement.
We report the accuracy of BOW-Linear, BOW-Gaussian, EMK-Fourier and EMK-CKSVD in Table 2. EMK-Fourier and EMK-CKSVD perform substantially better than BOW-Linear and BOW-Gaussian for all pyramid levels. The performance gap increases as more pyramid levels are added. EMK-CKSVD is very close to EMK-Fourier, and BOW-Gaussian does not improve much over BOW-Linear. In Table 3, we compare EMK to other algorithms. As we have seen, EMK is comparable to the best-scoring classifiers to date. The best result on Caltech-101 was obtained by combining multiple descriptor types [1]. Our main goal in this paper is to analyze the strengths of EMK relative
[Figure 2 plots omitted]
Figure 2: Classification accuracy on Scene-15. Left: Accuracy in the high-dimensional regime, and
(center) in the low-dimensional regime. Right: Accuracy as a function of the training set size. The
training set size is 1500 in the left plot; the dimensionality of feature maps and the vocabulary size
are both set to 1000 in the right plot (for fair comparisons).
             | 15 training                      | 30 training
Algorithms   | L=0      | L=1      | L=2      | L=0      | L=1      | L=2
BOW-Linear   | 37.3±0.9 | 41.6±0.7 | 45.0±0.5 | 46.2±0.8 | 53.0±0.9 | 56.2±0.7
BOW-Gaussian | 38.7±0.8 | 43.7±0.7 | 46.5±0.6 | 47.5±0.7 | 54.7±0.8 | 58.1±0.6
EMK-Fourier  | 46.3±0.7 | 53.0±0.6 | 60.2±0.8 | 54.0±0.7 | 64.1±0.8 | 70.1±0.8
EMK-CKSVD    | 46.6±0.9 | 53.4±0.8 | 60.5±0.9 | 54.5±0.8 | 63.7±0.9 | 70.3±0.8

Table 2: Classification accuracy comparisons for three pyramid levels. The results are averaged over five random training/testing splits. The dimensionality of the feature maps and the vocabulary size are both set to 1000. We have also experimented with large vocabularies, but did not observe noticeable improvement; the performance tends to saturate beyond 1000 dimensions.
to BOW. Only SIFT descriptors are used in BOW and EMK for all compared algorithms listed in Table 3. To improve performance, EMK can be conveniently extended to multiple feature types.
Caltech-256: Caltech-256 consists of 30,607 images from 256 object categories and background, where each category contains at least 80 images. Caltech-256 is challenging due to the large number of classes and the diverse lighting conditions, poses, backgrounds, image sizes, etc. We follow the standard setup and increase the training set from 15 to 60 images per category with step length 15. In Table 4, we show the classification accuracy obtained with BOW-Linear, BOW-Gaussian, EMK-Fourier and EMK-CKSVD. As in the other datasets, we notice that EMK-Fourier and EMK-CKSVD consistently outperform BOW-Linear and BOW-Gaussian.
To compare the four algorithms computationally, we select images from each category, proportionally to the total number of images in that category, as the training set. We consider six different training set sizes: $\lfloor 0.3 \times 30607 \rfloor, \ldots, \lfloor 0.8 \times 30607 \rfloor$. The results are shown in fig. 3. To accelerate BOW-Gaussian, we precompute the entire kernel matrix. As expected, BOW-Gaussian is much slower than the other three algorithms as the training set size increases, for both training and testing.
Algorithms   | 15 training | 30 training
PMK [5, 6]   | 50.0±0.9    | 58.2
HMAX [19]    | 51.0        | 56.0
ML+PMK [9]   | 52.2        | 62.1
KC [28]      | N/A         | 64.0
SPM [14]     | 56.4        | 64.4±0.5
SVM-KNN [31] | 59.1±0.5    | 66.2±0.8
kCNN [30]    | 59.2        | 67.4
LDF [4]      | 60.3        | N/A
ML+CORR [9]  | 61.0        | 69.6
NBNN [1]     | 65.0±1.1    | 73.0
EMK-Fourier  | 60.2±0.8    | 70.1±0.8
EMK-CKSVD    | 60.5±0.9    | 70.3±0.8

Table 3: Accuracy comparisons on Caltech-101. EMK is compared with ten recently published methods. N/A indicates that results are not available. Notice that EMK is used in conjunction with a linear classifier (linear SVM here) whereas all other methods (except HMAX [19]) require nonlinear classifiers.
Algorithms   | 15 training | 30 training | 45 training | 60 training
BOW-Linear   | 17.4±0.7    | 22.7±0.4    | 26.9±0.3    | 29.3±0.6
BOW-Gaussian | 19.1±0.8    | 24.4±0.6    | 28.3±0.5    | 30.9±0.4
EMK-Fourier  | 22.6±0.7    | 30.1±0.5    | 34.1±0.5    | 37.4±0.6
EMK-CKSVD    | 23.2±0.6    | 30.5±0.4    | 34.4±0.4    | 37.6±0.5

Table 4: Accuracy on Caltech-256. The results are averaged over five random training/testing splits. The dimensionality of the feature maps and the vocabulary size are both set to 1000 (for fair comparisons). We use 2 pyramid levels.
[Figure 3 plots omitted]
Figure 3: Computational costs on Caltech-256. Left: training time as a function of the training set
size. Right: testing time as a function of the training set size. Testing time is in seconds per 100
samples. Flat BOW and EMK are used (no pyramid, L = 0). Notice that PMK has a similar training
and testing cost with BOW-Gaussian.
Training nonlinear SVMs takes $O(n^2)$ to $O(n^3)$ even when a highly optimized software package like LIBSVM is used. For large $n$, the SVM training dominates the training cost. The testing time of BOW-Gaussian is linear in the training set size, but constant for the other three algorithms. Although we
only experiment with a Gaussian kernel, a similar complexity would be typical for other nonlinear
kernels, as used in [6, 9, 14, 31, 4].
5 Conclusion
We have presented efficient match kernels for visual recognition, based on a novel insight that
popular bag-of-words representations used in conjunction with linear models can be viewed as a
special type of match kernel which counts 1 if two local features fall into the same regions partitioned by visual words and 0 otherwise. We illustrate the quantization limitations of such models
and propose more sophisticated kernel approximations that preserve the computational efficiency of
bag-of-words while being just as (or more) accurate than the existing, computationally demanding,
non-linear kernels. The models we propose are built around Efficient Match Kernels (EMK), which
map local features to a low dimensional feature space, average the resulting feature vectors to form
a set-level feature, then apply a linear classifier. In experiments, we show that EMK are efficient and
achieve state of the art classification results in three difficult computer vision datasets: Scene-15,
Caltech-101 and Caltech-256.
Acknowledgements: This research was supported, in part, by awards from NSF (IIS-0535140) and
the European Commission (MCEXT-025481). Liefeng Bo thanks Jian Peng for helpful discussions.
References
[1] O. Boiman, E. Shechtman, and M. Irani. In defense of nearest-neighbor based image classification. In CVPR, 2008.
[2] M. Cuturi and J. Vert. Semigroup kernels on finite sets. In NIPS, 2004.
[3] R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.
[4] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance
functions. In NIPS, 2006.
[5] K. Grauman and T. Darrell. The pyramid match kernel: discriminative classification with sets
of image features. In ICCV, 2005.
[6] K. Grauman and T. Darrell. The pyramid match kernel: Efficient learning with sets of features.
JMLR, 8:725–760, 2007.
[7] D. Haussler. Convolution kernels on discrete structures. Technical report, 1999.
[8] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 73(2):213–238, 2007.
[9] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. In CVPR, 2008.
[10] F. Jurie and B. Triggs. Creating efficient codebooks for visual recognition. In ICCV, 2005.
[11] K. Kim, M. Franz, and B. Schölkopf. Iterative kernel principal component analysis for image modeling. PAMI, 27(9):1351–1366, 2005.
[12] R. Kondor and T. Jebara. A kernel between sets of vectors. In ICML, 2003.
[13] A. Kumar and C. Sminchisescu. Support kernel machines for object recognition. In ICCV,
2007.
[14] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for
recognizing natural scene categories. In CVPR, 2006.
[15] F. Li, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 28(4):594–611,
2006.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
[17] S. Lyu. Mercer kernels for object recognition with local features. In CVPR, 2005.
[18] P. Moreno, P. Ho, and N. Vasconcelos. A Kullback-Leibler divergence based kernel for SVM
classification in multimedia applications. In NIPS, 2003.
[19] J. Mutch and D. Lowe. Multiclass object recognition with sparse, localized features. In CVPR,
2006.
[20] M. Parsana, S. Bhattacharya, C. Bhattacharyya, and K. Ramakrishnan. Kernels on attributed
pointsets with applications. In NIPS, 2007.
[21] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Lost in quantization: Improving
particular object retrieval in large scale image databases. In CVPR, 2008.
[22] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[23] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization
with randomization in learning. In NIPS, 2008.
[24] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[25] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver
for SVM. In ICML, pages 807–814. ACM, 2007.
[26] A. Shashua and T. Hazan. Algebraic set kernels with application to inference over local image
representations. In NIPS, 2004.
[27] J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in
videos. In ICCV, 2003.
[28] J. van Gemert, J. Geusebroek, C. Veenman, and A. Smeulders. Kernel codebooks for scene
categorization. In ECCV, 2008.
[29] L. Wolf and A. Shashua. Learning over sets using kernel principal angles. JMLR, 4:913–931,
2003.
[30] K. Yu, W. Xu, and Y. Gong. Deep learning with kernel regularization for visual recognition.
In NIPS, 2008.
[31] H. Zhang, A. Berg, M. Maire, and J. Malik. Svm-knn: Discriminative nearest neighbor classification for visual category recognition. In CVPR, 2006.
3,173 | 3,875 | Nonlinear Learning using Local Coordinate Coding
Kai Yu
NEC Laboratories America
[email protected]
Tong Zhang
Rutgers University
[email protected]
Yihong Gong
NEC Laboratories America
[email protected]
Abstract
This paper introduces a new method for semi-supervised learning on high dimensional nonlinear manifolds, which includes a phase of unsupervised basis learning
and a phase of supervised function learning. The learned bases provide a set of
anchor points to form a local coordinate system, such that each data point x on
the manifold can be locally approximated by a linear combination of its nearby
anchor points, and the linear weights become its local coordinate coding. We
show that a high dimensional nonlinear function can be approximated by a global
linear function with respect to this coding scheme, and the approximation quality
is ensured by the locality of such coding. The method turns a difficult nonlinear
learning problem into a simple global linear learning problem, which overcomes
some drawbacks of traditional local learning methods.
1 Introduction
Consider the problem of learning a nonlinear function $f(x)$ on a high dimensional space $x \in \mathbb{R}^d$. We are given a set of labeled data $(x_1, y_1), \ldots, (x_n, y_n)$ drawn from an unknown underlying distribution. Moreover, assume that we observe a set of unlabeled data $x \in \mathbb{R}^d$ from the same distribution. If the dimensionality $d$ is large compared to $n$, then traditional statistical theory predicts overfitting due to the so-called "curse of dimensionality". One intuitive argument for this effect is that
when the dimensionality becomes larger, pairwise distances between two similar data points become
larger as well. Therefore one needs more data points to adequately fill in the empty space. However,
for many real problems with high dimensional data, we do not observe this so-called curse of dimensionality. This is because although data are physically represented in a high-dimensional space,
they often lie on a manifold which has a much smaller intrinsic dimensionality.
This paper proposes a new method that can take advantage of the manifold geometric structure
to learn a nonlinear function in high dimension. The main idea is to locally embed points on the
manifold into a lower dimensional space, expressed as coordinates with respect to a set of anchor
points. Our main observation is simple but very important: we show that a nonlinear function on the
manifold can be effectively approximated by a linear function with such a coding under appropriate
localization conditions. Therefore using Local Coordinate Coding, we turn a very difficult high
dimensional nonlinear learning problem into a much simpler linear learning problem, which has
been extensively studied in the literature. This idea may also be considered as a high dimensional
generalization of low dimensional local smoothing methods in the traditional statistical literature.
2 Local Coordinate Coding
We are interested in learning a smooth function $f(x)$ defined on a high dimensional space $\mathbb{R}^d$. Let $\|\cdot\|$ be a norm on $\mathbb{R}^d$. Although we do not restrict to any specific norm, in practice one often employs the Euclidean norm (2-norm): $\|x\| = \|x\|_2 = \sqrt{x_1^2 + \cdots + x_d^2}$.
Definition 2.1 (Lipschitz Smoothness) A function $f(x)$ on $\mathbb{R}^d$ is $(\alpha, \beta, p)$-Lipschitz smooth with respect to a norm $\|\cdot\|$ if $|f(x') - f(x)| \le \alpha\|x - x'\|$ and $|f(x') - f(x) - \nabla f(x)^\top(x' - x)| \le \beta\|x - x'\|^{1+p}$, where we assume $\alpha, \beta > 0$ and $p \in (0, 1]$.
Note that if the Hessian of $f(x)$ exists, then we may take $p = 1$. Learning an arbitrary Lipschitz smooth function on $\mathbb{R}^d$ can be difficult due to the curse of dimensionality. That is, the number of samples required to characterize such a function $f(x)$ can be exponential in $d$. However, in many practical applications, one often observes that the data of interest approximately lie on a manifold $\mathcal{M}$ which is embedded into $\mathbb{R}^d$. Although $d$ is large, the intrinsic dimensionality of $\mathcal{M}$ can be much smaller. Therefore if we are only interested in learning $f(x)$ on $\mathcal{M}$, then the complexity should depend on the intrinsic dimensionality of $\mathcal{M}$ instead of $d$. In this paper, we approach this problem by introducing the idea of localized coordinate coding. The formal definition of (non-localized) coordinate coding is given below, where we represent a point in $\mathbb{R}^d$ by a linear combination of a set of "anchor points". Later we show it is sufficient to choose a set of anchor points with cardinality depending on the intrinsic dimensionality of the manifold rather than $d$.
Definition 2.2 (Coordinate Coding) A coordinate coding is a pair $(\gamma, C)$, where $C \subset \mathbb{R}^d$ is a set of anchor points, and $\gamma$ is a map of $x \in \mathbb{R}^d$ to $[\gamma_v(x)]_{v \in C} \in \mathbb{R}^{|C|}$ such that $\sum_v \gamma_v(x) = 1$. It induces the following physical approximation of $x$ in $\mathbb{R}^d$: $h(x) = \sum_{v \in C} \gamma_v(x)\,v$. Moreover, for all $x \in \mathbb{R}^d$, we define the corresponding coding norm as $\|x\|_\gamma = \left(\sum_{v \in C} \gamma_v(x)^2\right)^{1/2}$.
The quantity $\|x\|_\gamma$ will become useful in our learning theory analysis. The condition $\sum_v \gamma_v(x) = 1$ follows from the shift-invariance requirement, which means that the coding should remain the same if we use a different origin of the $\mathbb{R}^d$ coordinate system for representing data points. It can be shown (see the appendix file accompanying the submission) that the map $x \to \sum_{v \in C} \gamma_v(x)\,v$ is invariant under any shift of the origin for representing data points in $\mathbb{R}^d$ if and only if $\sum_v \gamma_v(x) = 1$. The importance of the coordinate coding concept is that if a coordinate coding is sufficiently localized, then a nonlinear function can be approximated by a linear function with respect to the coding. This critical observation, illustrated in the following linearization lemma, is the foundation of our approach. Due to the space limitation, all proofs are left to the appendix that accompanies the submission.
Lemma 2.1 (Linearization) Let $(\gamma, C)$ be an arbitrary coordinate coding on $\mathbb{R}^d$. Let $f$ be an $(\alpha, \beta, p)$-Lipschitz smooth function. We have for all $x \in \mathbb{R}^d$:
$$\left| f(x) - \sum_{v \in C} \gamma_v(x) f(v) \right| \le \alpha\,\|x - h(x)\| + \beta \sum_{v \in C} |\gamma_v(x)|\,\|v - h(x)\|^{1+p}.$$
To understand this result, we note that on the left hand side, a nonlinear function $f(x)$ in $\mathbb{R}^d$ is approximated by a linear function $\sum_{v \in C} \gamma_v(x) f(v)$ with respect to the coding $\gamma(x)$, where $[f(v)]_{v \in C}$ is the set of coefficients to be estimated from data. The quality of this approximation is bounded by the right hand side, which has two terms: the first term $\|x - h(x)\|$ means $x$ should be close to its physical approximation $h(x)$, and the second term means that the coding should be localized. The quality of a coding $\gamma$ with respect to $C$ can be measured by the right hand side. For convenience, we introduce the following definition, which measures the locality of a coding.
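A quick numerical illustration of the lemma in one dimension, where the coding interpolates between the two nearest anchors (so the weights sum to one, h(x) = x exactly, and only the locality term contributes). All names and constants here are illustrative:

```python
import numpy as np

anchors = np.linspace(-3.0, 3.0, 13)          # C: a grid of anchor points
f = lambda t: np.sin(t) + 0.1 * t ** 2        # a smooth nonlinear target

def coding(x):
    # gamma(x): interpolate between the two nearest anchors; weights sum to 1.
    j = np.clip(np.searchsorted(anchors, x) - 1, 0, len(anchors) - 2)
    w = (x - anchors[j]) / (anchors[j + 1] - anchors[j])
    g = np.zeros_like(anchors)
    g[j], g[j + 1] = 1.0 - w, w
    return g

xs = np.random.default_rng(5).uniform(-3, 3, 1000)
errs = [abs(f(x) - coding(x) @ f(anchors)) for x in xs]
print(max(errs))   # small: the linear function of the coding tracks f
```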
Definition 2.3 (Localization Measure) Given $\alpha$, $\beta$, $p$, and coding $(\gamma, C)$, we define
$$Q_{\alpha,\beta,p}(\gamma, C) = \mathbb{E}_x\left[\alpha\,\|x - h(x)\| + \beta \sum_{v \in C} |\gamma_v(x)|\,\|v - h(x)\|^{1+p}\right].$$
Observe that in $Q_{\alpha,\beta,p}$, the quantities $\alpha$, $\beta$, $p$ may be regarded as tuning parameters; we may also simply pick $\alpha = \beta = p = 1$. Since the quality function $Q_{\alpha,\beta,p}(\gamma, C)$ only depends on unlabeled data, in principle, we can find $[\gamma, C]$ by optimizing this quality using unlabeled data. Later, we will consider simplifications of this objective function that are easier to compute.
Next we show that if the data lie on a manifold, then the complexity of local coordinate coding depends on the intrinsic manifold dimensionality instead of $d$. We first define the manifold and its intrinsic dimensionality.
Definition 2.4 (Manifold) A subset $\mathcal{M} \subset \mathbb{R}^d$ is called a $p$-smooth ($p > 0$) manifold with intrinsic dimensionality $m = m(\mathcal{M})$ if there exists a constant $c_p(\mathcal{M})$ such that given any $x \in \mathcal{M}$, there exist $m$ vectors $v_1(x), \ldots, v_m(x) \in \mathbb{R}^d$ so that $\forall x' \in \mathcal{M}$:
$$\inf_{\gamma \in \mathbb{R}^m} \left\| x' - x - \sum_{j=1}^m \gamma_j v_j(x) \right\| \le c_p(\mathcal{M})\,\|x' - x\|^{1+p}.$$
This definition is quite intuitive. The smooth manifold structure implies that one can approximate a point in $\mathcal{M}$ effectively using local coordinate coding. Note that for a typical manifold with well-defined curvature, we can take $p = 1$.
Definition 2.5 (Covering Number) Given any subset $\mathcal{M} \subset \mathbb{R}^d$ and $\epsilon > 0$, the covering number, denoted $\mathcal{N}(\epsilon, \mathcal{M})$, is the smallest cardinality of an $\epsilon$-cover $C \subset \mathcal{M}$. That is, $\sup_{x \in \mathcal{M}} \inf_{v \in C} \|x - v\| \le \epsilon$.
For a compact manifold with intrinsic dimensionality $m$, there exists a constant $c(\mathcal{M})$ such that its covering number is bounded by $\mathcal{N}(\epsilon, \mathcal{M}) \le c(\mathcal{M})\,\epsilon^{-m}$. The following result shows that there exists a local coordinate coding to a set of anchor points $C$ of cardinality $O(m(\mathcal{M})\,\mathcal{N}(\epsilon, \mathcal{M}))$ such that any $(\alpha, \beta, p)$-Lipschitz smooth function can be linearly approximated using local coordinate coding up to accuracy $O(\sqrt{m(\mathcal{M})}\,\epsilon^{1+p})$.
Theorem 2.1 (Manifold Coding) If the data points $x$ lie on a compact $p$-smooth manifold $\mathcal{M}$, and the norm is defined as $\|x\| = (x^\top A x)^{1/2}$ for some positive definite matrix $A$, then given any $\epsilon > 0$, there exist anchor points $C \subset \mathcal{M}$ and coding $\gamma$ such that
$$|C| \le (1 + m(\mathcal{M}))\,\mathcal{N}(\epsilon, \mathcal{M}), \quad Q_{\alpha,\beta,p}(\gamma, C) \le \left[\alpha\,c_p(\mathcal{M}) + \left(1 + \sqrt{m(\mathcal{M})} + 2^{1+p}\sqrt{m(\mathcal{M})}\right)\beta\right]\epsilon^{1+p}.$$
Moreover, for all $x \in \mathcal{M}$, we have $\|x\|_\gamma^2 \le 1 + (1 + \sqrt{m(\mathcal{M})})^2$.
The approximation result in Theorem 2.1 means that the complexity of linearization in Lemma 2.1
depends only on the intrinsic dimension m(M) of M instead of d. Although this result is proved
for manifolds, it is important to observe that the coordinate coding method proposed in this paper
does not require the data to lie precisely on a manifold, and it does not require knowing m(M). In
fact, similar results hold even when the data only approximately lie on a manifold.
In the next section, we characterize the learning complexity of the local coordinate coding method.
It implies that linear prediction methods can be used to effectively learn nonlinear functions on
a manifold. The nonlinearity is fully captured by the coordinate coding map $\gamma$ (which can be a
nonlinear function). This approach has some great advantages because the problem of finding local
coordinate coding is much simpler than direct nonlinear learning:
• Learning $(\gamma, C)$ only requires unlabeled data, and the number of unlabeled data can be significantly more than the number of labeled data. This step also prevents overfitting with respect to labeled data.
• In practice, we do not have to find the optimal coding because the coordinates are merely features for linear supervised learning. This significantly simplifies the optimization problem. Consequently, it is more robust than some standard approaches to nonlinear learning that directly optimize nonlinear functions on labeled data (e.g., neural networks).
3 Learning Theory
In machine learning, we minimize the expected loss $\mathbb{E}_{x,y}\,\phi(f(x), y)$ with respect to the underlying distribution within a function class $f(x) \in \mathcal{F}$. In this paper, we are interested in the function class
$$\mathcal{F}_{\alpha,\beta,p} = \{f(x) : f \text{ is an } (\alpha, \beta, p)\text{-Lipschitz smooth function in } \mathbb{R}^d\}.$$
The local coordinate coding method considers a linear approximation of functions in $\mathcal{F}_{\alpha,\beta,p}$ on the data manifold. Given a local coordinate coding scheme $(\gamma, C)$, we approximate each $f(x) \in \mathcal{F}_{\alpha,\beta,p}$ by $f(x) \approx \hat{f}_{\gamma,C}(\hat{w}, x) = \sum_{v \in C} \hat{w}_v\,\gamma_v(x)$, where we estimate the coefficients using ridge regression as:
$$[\hat{w}_v] = \arg\min_{[w_v]} \left[ \sum_{i=1}^n \phi\big(\hat{f}_{\gamma,C}(w, x_i), y_i\big) + \lambda \sum_{v \in C} (w_v - g(v))^2 \right], \qquad (1)$$
where $g(v)$ is an arbitrary function assumed to be pre-fixed. In the Bayesian interpretation, this can be regarded as the prior mean for the weights $[w_v]_{v \in C}$. The default values of $g(v)$ are simply $g(v) \equiv 0$. Given a loss function $\phi(p, y)$, let $\phi_1'(p, y) = \partial\phi(p, y)/\partial p$. For simplicity, in this paper we only consider convex Lipschitz loss functions, where $|\phi_1'(p, y)| \le B$. This includes the standard classification loss functions such as logistic regression and SVM (hinge loss), both with $B = 1$.
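With the coding fixed, (1) is an ordinary regularized linear problem in w. A minimal sketch with squared loss, so the solution is available in closed form, and with g(v) = 0; the classification losses above would instead be solved by any convex optimizer. The data here are synthetic placeholders:

```python
import numpy as np

def fit_ridge(Gamma, y, lam):
    # Solve eq. (1) with squared loss phi(p, y) = (p - y)^2 and g(v) = 0:
    # w_hat = argmin_w sum_i (Gamma_i . w - y_i)^2 + lam * ||w||^2.
    D = Gamma.shape[1]
    return np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(D), Gamma.T @ y)

# Gamma: n x |C| matrix whose i-th row is the coding [gamma_v(x_i)]_{v in C}.
rng = np.random.default_rng(6)
Gamma = rng.random((200, 50))
Gamma /= Gamma.sum(1, keepdims=True)      # enforce sum_v gamma_v(x) = 1
y = rng.normal(size=200)
w = fit_ridge(Gamma, y, lam=0.1)
y_hat = Gamma @ w                         # f_hat(x) = sum_v w_v gamma_v(x)
```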
Theorem 3.1 (Generalization Bound) Suppose $\phi(p, y)$ is Lipschitz: $|\phi_1'(p, y)| \le B$. Consider coordinate coding $(\gamma, C)$, and the estimation method (1) with random training examples $S_n = \{(x_1, y_1), \ldots, (x_n, y_n)\}$. Then the expected generalization error satisfies the inequality:
$$\mathbb{E}_{S_n}\mathbb{E}_{x,y}\,\phi\big(\hat{f}_{\gamma,C}(\hat{w}, x), y\big) \le \inf_{f \in \mathcal{F}_{\alpha,\beta,p}} \left[\mathbb{E}_{x,y}\,\phi(f(x), y) + \lambda \sum_{v \in C} (f(v) - g(v))^2\right] + \frac{B^2}{2\lambda n}\,\mathbb{E}_x\|x\|_\gamma^2 + B\,Q_{\alpha,\beta,p}(\gamma, C).$$
We may choose the regularization parameter $\lambda$ that optimizes the bound in Theorem 3.1. Moreover, if we pick $g(v) \equiv 0$ and find $(\gamma, C)$ at some $\epsilon > 0$, then Theorem 2.1 implies the following simplified generalization bound for any $f \in \mathcal{F}_{\alpha,\beta,p}$ such that $|f(x)| = O(1)$:
$$\mathbb{E}_{x,y}\,\phi(f(x), y) + O\left(\sqrt{\epsilon^{-m(\mathcal{M})}/n} + \epsilon^{1+p}\right).$$
By optimizing over $\epsilon$, we obtain a bound: $\mathbb{E}_{x,y}\,\phi(f(x), y) + O\big(n^{-(1+p)/(2+2p+m(\mathcal{M}))}\big)$.
By combining Theorem 2.1 and Theorem 3.1, we can immediately obtain the following simple consistency result. It shows that the algorithm can learn an arbitrary nonlinear function on the manifold as $n \to \infty$. Note that Theorem 2.1 implies that the convergence only depends on the intrinsic dimensionality of the manifold $\mathcal{M}$, not $d$.
Theorem 3.2 (Consistency) Suppose the data lie on a compact manifold $\mathcal{M} \subset \mathbb{R}^d$, the norm $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^d$, and the loss function $\phi(p, y)$ is Lipschitz. As $n \to \infty$, we choose $\alpha, \beta \to \infty$, $\alpha/n, \beta/n \to 0$ ($\alpha$, $\beta$ depend on $n$), and $p = 0$. Then it is possible to find coding $(\gamma, C)$ using unlabeled data such that $|C|/n \to 0$ and $Q_{\alpha,\beta,p}(\gamma, C) \to 0$. If we pick $\lambda n \to \infty$ and $\lambda|C| \to 0$, then the local coordinate coding method (1) with $g(v) \equiv 0$ is consistent as $n \to \infty$:
$$\lim_{n\to\infty} \mathbb{E}_{S_n}\mathbb{E}_{x,y}\,\phi(\hat{f}(\hat{w}, x), y) = \inf_{f:\mathcal{M}\to\mathbb{R}} \mathbb{E}_{x,y}\,\phi(f(x), y).$$
4 Practical Learning of Coding
Given a coordinate coding $(\gamma, C)$, we can use (1) to learn a nonlinear function in $\mathbb{R}^d$. We showed that $(\gamma, C)$ can be obtained by optimizing $Q_{\alpha,\beta,p}(\gamma, C)$. In practice, we may also consider the following simplification of the localization term:
$$\sum_{v \in C} |\gamma_v(x)|\,\|v - h(x)\|^{1+p} \approx \sum_{v \in C} |\gamma_v(x)|\,\|v - x\|^{1+p}.$$
Note that we may simply choose $p = 0$ or $p = 1$. The formulation is related to sparse coding [6], which has no locality constraints, with $p = -1$. In this representation, we may either enforce the constraint $\sum_v \gamma_v(x) = 1$ or, for simplicity, remove it because the formulation is already shift-invariant. Putting the above together, we try to optimize the following objective function in practice:
$$Q(\gamma, C) = \mathbb{E}_x \inf_{[\gamma_v]}\left[\left\|x - \sum_{v \in C}\gamma_v v\right\|^2 + \mu\sum_{v \in C}|\gamma_v|\,\|v - x\|^{1+p}\right].$$
We update $C$ and $\gamma$ via alternating optimization. The step of updating $\gamma$ can be transformed into a canonical LASSO problem, where efficient algorithms exist. The step of updating $C$ is a least-squares problem in case $p = 1$.
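A sketch of the coding step, treating the weighted l1 problem as a LASSO solved by proximal gradient (ISTA). The step size, iteration count, and mu are illustrative assumptions, and the anchor update alternates with this step as described above:

```python
import numpy as np

def soft(u, thresh):
    # Element-wise soft thresholding, the prox operator of a weighted l1 norm.
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

def code_one_point(x, B, mu, p=1, iters=200):
    # gamma-step for one x: min_gamma ||x - B gamma||^2
    #                        + mu * sum_j |gamma_j| * ||v_j - x||^(1+p),
    # a weighted LASSO, solved here by ISTA (proximal gradient).
    w = np.linalg.norm(B - x[:, None], axis=0) ** (1 + p)   # locality weights
    step = 1.0 / (2 * np.linalg.norm(B, 2) ** 2)            # safe step size
    gamma = np.zeros(B.shape[1])
    for _ in range(iters):
        grad = 2 * B.T @ (B @ gamma - x)
        gamma = soft(gamma - step * grad, step * mu * w)
    return gamma

rng = np.random.default_rng(7)
B = rng.normal(size=(3, 32))      # columns are the current anchor points v
x = rng.normal(size=3)
gamma = code_one_point(x, B, mu=0.1)
# C-step: with gamma fixed the objective is quadratic in the anchors, so C is
# updated by (regularized) least squares, and the two steps then alternate.
```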
5 Relationship to Other Methods
Our work is related to several existing approaches in the literature of machine learning and statistics.
The first class of them is nonlinear manifold learning, such as LLE [8], Isomap [9], and Laplacian Eigenmaps [1]. These methods find global coordinates of the data manifold based on a pre-computed
affinity graph of data points. The use of affinity graphs requires expensive computation and lacks a
coherent way of generalization to new data. Our method learns a compact set of bases to form local
coordinates, which has a linear complexity with respect to data size and can naturally handle unseen
data. More importantly, local coordinate coding has a direct connection to nonlinear function approximation on manifold, and thus provides a theoretically sound unsupervised pre-training method
to facilitate further supervised learning tasks.
Another set of related models are local models in statistics, such as local kernel smoothing and local
regression, e.g., [4, 2], both traditionally using fixed-bandwidth kernels. Local kernel smoothing can
be regarded as a zero-order method; while local regression is higher-order, including local linear
regression as the 1st-order case. Traditional local methods are not widely used in machine learning practice, because data with a non-uniform distribution on the manifold require adaptive-bandwidth kernels. The problem can be somewhat alleviated by using K-nearest neighbors. However, adaptive kernel smoothing still suffers from the high dimensionality and noise of the data. On
the other hand, higher-order methods are computationally expensive and prone to overfitting, because they are highly flexible in locally fitting many segments of data in high-dimension space. Our
method can be seen as a generalized 1st-order local method with basis learning and adaptive locality. Compared to local linear regression, the learning is achieved by fitting a single globally linear
function with respect to a set of learned local coordinates, which is much less prone to overfitting
and computationally much cheaper. This means that our method achieves better balance between
local and global aspects of learning. The importance of such balance has been recently discussed in
[10].
Finally, local coordinate coding draws connections to vector quantization (VQ) coding, e.g., [3],
and sparse coding, which have been widely applied in processing of sensory data, such as acoustic
and image signals. Learning linear functions of VQ codes can be regarded as a generalized zero-order local method with basis learning. Our method has an intimate relationship with sparse coding.
In fact, we can regard local coordinate coding as locally constrained sparse coding. Inspired by
biological visual systems, people have been arguing that sparse features of signals are useful for learning
[7]. However, to the best of our knowledge, there is no analysis in the literature that directly answers
the question why sparse codes can help learning nonlinear functions in high dimensional space. Our
work reveals an important finding: a good first-order approximation to a nonlinear function requires
the codes to be local, which consequently requires the codes to be sparse. However, sparsity does not
always guarantee locality conditions. Our experiments demonstrate that sparse coding is helpful for
learning only when the codes are local. Therefore locality is more essential for coding, and sparsity
is a consequence of such a condition.
6 Experiments
Due to the space limitation, we only include two examples, one synthetic and one real, to illustrate various aspects of our theoretical results. We note that image classification based on LCC recently achieved state-of-the-art performance in the PASCAL Visual Object Classes Challenge 2009 (see http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2009/workshop/index.html).
6.1 Synthetic Data
Our first example is based on a synthetic data set, where a nonlinear function is defined on a Swiss-roll manifold, as shown in Figure 1-(1). The primary goal is to demonstrate the performance of nonlinear function learning using simple linear ridge regression based on representations obtained from traditional sparse coding and the newly suggested local coordinate coding, which are, respectively, formulated as the following:
$$\min_{\gamma, C}\; \sum_x \frac{1}{2}\,\|x - \gamma(x)\|^2 + \mu \sum_{v \in C} |\gamma_v(x)|\,\|v - x\|^2 + \lambda \sum_{v \in C} \|v\|^2 \tag{2}$$
where $\gamma(x) = \sum_{v \in C} \gamma_v(x)\,v$. We note that (2) is an approximation to the original formulation, mainly for the simplicity of computation.
[Figure 1: Experiments of nonlinear regression on Swiss-roll: (1) a nonlinear function on the Swiss-roll manifold, where the color indicates function values; (2) result of sparse coding with fixed random anchor points, RMSE=4.394; (3) result of local coordinate coding with fixed random anchor points, RMSE=0.499; (4) result of sparse coding, RMSE=4.661; (5) result of local coordinate coding, RMSE=0.201; (6) result of local kernel smoothing, RMSE=0.109; (7) result of local coordinate coding on noisy data, RMSE=0.669; (8) result of local kernel smoothing on noisy data, RMSE=1.170.]
We randomly sample 50,000 data points on the manifold for unsupervised basis learning, and 500 labeled points for supervised regression. The number of bases is fixed to be 128. The learned nonlinear functions are tested on another set of 10,000 data points, with their performances evaluated by root mean square error (RMSE).
In the first setting, we let both coding methods use the same set of fixed bases, which are 128
points randomly sampled from the manifold. The regression results are shown in Figure 1-(2) and
(3), respectively. Sparse coding based approach fails to capture the nonlinear function, while local
coordinate coding behaves much better. We take a closer look at the data representations obtained
from the two different encoding methods, by visualizing the distributions of distances from encoded
data to bases that have positive, negative, or zero coefficients in Figure 2. It shows that sparse
coding lets bases far away from the encoded data have nonzero coefficients, while local coordinate
coding allows only nearby bases to get nonzero coefficients. In other words, sparse coding on
this data does not ensure a good locality and thus fails to facilitate the nonlinear function learning.
As another interesting phenomenon, local coordinate coding seems to encourage coefficients to be
nonnegative, which is intuitively understandable: if we use several bases close to a data point to
linearly approximate the point, each basis should have a positive contribution. However, whether
there is any merit by explicitly enforcing nonnegativity will remain an interesting future work.
In the next two experiments, given the random bases as a common initialization, we let the two
algorithms learn bases from the 50,000 unlabeled data points. The regression results based on the
learned bases are depicted in Figure 1-(4) and (5), which indicate that regression error is further
reduced for local coordinate coding, but remains to be high for sparse coding. We also make a
comparison with local kernel smoothing, which takes a weighted average of function values of
K-nearest training points to make a prediction. As shown in Figure 1-(6), the method works very
well on this simple low-dimensional data, even outperforming the local coordinate coding approach.
However, if we increase the data dimensionality to 256 by adding 253 independent Gaussian noise dimensions with zero mean and unit variance, local coordinate coding becomes superior to
local kernel smoothing, as shown in Figure 1-(7) and (8). This is consistent with our theory, which
suggests that local coordinate coding can work well in high dimension; on the other hand, local
kernel smoothing is known to suffer from high dimensionality and noise.
6.2 Handwritten Digit Recognition
Our second example is based on the MNIST handwritten digit recognition benchmark, where each
data point is a 28 × 28 gray image, and pre-normalized into a unitary 784-dimensional vector. In
our setting, the set C of anchor points is obtained from sparse coding, with the regularization on
[Figure 2: Coding locality on Swiss roll: (a) sparse coding vs. (b) local coordinate coding.]
$v$ replaced by the inequality constraints $\|v\| \le 1$. Our focus here is not on anchor point learning, but
rather on checking whether a good nonlinear classifier can be obtained if we enforce sparsity and
locality in data representation, and then apply simple one-against-all linear SVMs.
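A minimal sketch of this classification pipeline is shown below, assuming a precomputed dictionary $C$ and an `encode` routine that returns the codes $[\gamma_v(x)]$ for each data point; the scikit-learn usage and the regularization setting are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_lcc_classifier(X_train, y_train, C, encode):
    """One-against-all linear SVMs on local coordinate codes.

    encode(X, C) is assumed to return the n x |C| code matrix [gamma_v(x)].
    """
    G_train = encode(X_train, C)        # codes replace raw pixels as features
    clf = LinearSVC(C=1.0)              # one-vs-rest linear SVMs (default)
    clf.fit(G_train, y_train)
    return clf
```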
Since the optimization cost of sparse coding is invariant under flipping the sign of v, we take a
postprocessing step to change the sign of $v$ if we find the corresponding $\gamma_v(x)$ for most of $x$ is
negative. This rectification will ensure the anchor points to be on the data manifold. With the
obtained $C$, for each data point $x$ we solve the local coordinate coding problem (2), by optimizing $\gamma$
only, to obtain the representation [?v (x)]v?C . In the experiments we try different sizes of bases. The
classification error rates are provided in Table 1. In addition, we also compare with a linear classifier on raw images, local kernel smoothing based on K-nearest neighbors, and linear classifiers using
representations obtained from various unsupervised learning methods, including autoencoder based
on deep belief networks [5], Laplacian eigenmaps [1], locally linear embedding (LLE) [8], and VQ
coding based on K-means. We note that, like most other manifold learning approaches, Laplacian eigenmaps and LLE are transductive methods which have to incorporate both training and testing data in
training. The comparison results are summarized in Table 2. Both sparse coding and local coordinate
coding perform quite well for this nonlinear classification task, significantly outperforming linear
classifiers on raw images. In addition, local coordinate coding is consistently better than sparse
coding across various basis sizes. We further check the locality of both representations by plotting
Figure 3, where the basis number is 512, and find that sparse coding on this data set happens to be quite local (unlike the case of the Swiss-roll data): here only a small portion of nonzero coefficients (again mostly negative) are assigned to bases whose distances to the encoded data exceed
the average of basis-to-datum distances. This locality explains why sparse coding works well on
MNIST data. On the other hand, local coordinate coding is able to remove the unusual coefficients
and further improve the locality. Among those compared methods in Table 2, we note that the
error rate 1.2% of the deep belief network reported in [5] was obtained via unsupervised pre-training followed by supervised backpropagation. The error rate based on unsupervised training of deep belief networks is about 1.90% (obtained via a personal communication with Ruslan Salakhutdinov at the University of Toronto). Therefore our result is competitive with the state-of-the-art results that are based on unsupervised feature learning plus linear classification without using additional image geometric information.
[Figure 3: Coding locality on MNIST: (a) sparse coding vs. (b) local coordinate coding.]
Table 1: Error rates (%) of MNIST classification with different |C|.
|C|                                         512    1024   2048   4096
Linear SVM with sparse coding               2.96   2.64   2.16   2.02
Linear SVM with local coordinate coding     2.64   2.44   2.08   1.90
Table 2: Error rates (%) of MNIST classification with different methods.
Methods                                      Error Rate
Linear SVM with raw images                   12.0
Linear SVM with VQ coding                    3.98
Local kernel smoothing                       3.48
Linear SVM with Laplacian eigenmap           2.73
Linear SVM with LLE                          2.38
Linear classifier with deep belief network   1.90
Linear SVM with sparse coding                2.02
Linear SVM with local coordinate coding      1.90

7 Conclusion
This paper introduces a new method for high dimensional nonlinear learning with data distributed
on manifolds. The method can be seen as generalized local linear function approximation, but can
be achieved by learning a global linear function with respect to coordinates from unsupervised local
coordinate coding. Compared to popular manifold learning methods, our approach can naturally
handle unseen data and has a linear complexity with respect to data size. The work also generalizes
popular VQ coding and sparse coding schemes, and reveals that locality of coding is essential for
supervised function learning. The generalization performance depends on the intrinsic dimensionality of the data manifold. The experiments on synthetic and handwritten digit data further confirm the
findings of our analysis.
References
[1] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
[2] Leon Bottou and Vladimir Vapnik. Local learning algorithms. Neural Computation, 4:888–900, 1992.
[3] Robert M. Gray and David L. Neuhoff. Quantization. IEEE Transactions on Information Theory, pages 2325–2383, 1998.
[4] Trevor Hastie and Clive Loader. Local regression: Automatic kernel carpentry. Statistical Science, 8:139–143, 1993.
[5] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504–507, 2006.
[6] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. Neural Information Processing Systems (NIPS) 19, 2007.
[7] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning: Transfer learning from unlabeled data. International Conference on Machine Learning, 2007.
[8] Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[9] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[10] Alon Zakai and Ya'acov Ritov. Consistency and localizability. Journal of Machine Learning Research, 10:827–856, 2009.
3,174 | 3,876 | Inter-domain Gaussian Processes for Sparse Inference using Inducing Features
Miguel Lázaro-Gredilla and Aníbal R. Figueiras-Vidal
Dep. Signal Processing & Communications
Universidad Carlos III de Madrid, SPAIN
{miguel,arfv}@tsc.uc3m.es
Abstract
We present a general inference framework for inter-domain Gaussian Processes
(GPs) and focus on its usefulness to build sparse GP models. The state-of-the-art
sparse GP model introduced by Snelson and Ghahramani in [1] relies on finding
a small, representative pseudo data set of m elements (from the same domain as
the n available data elements) which is able to explain existing data well, and
then uses it to perform inference. This reduces inference and model selection
computation time from $O(n^3)$ to $O(m^2 n)$, where $m \ll n$. Inter-domain GPs can
be used to find a (possibly more compact) representative set of features lying in a
different domain, at the same computational cost. Being able to specify a different
domain for the representative features allows to incorporate prior knowledge about
relevant characteristics of data and detaches the functional form of the covariance
and basis functions. We will show how previously existing models fit into this
framework and will use it to develop two new sparse GP models. Tests on large,
representative regression data sets suggest that significant improvement can be
achieved, while retaining computational efficiency.
1 Introduction and previous work
Along the past decade there has been a growing interest in the application of Gaussian Processes
(GPs) to machine learning tasks. GPs are probabilistic non-parametric Bayesian models that combine a number of attractive characteristics: They achieve state-of-the-art performance on supervised
learning tasks, provide probabilistic predictions, have a simple and well-founded model selection
scheme, present no overfitting (since parameters are integrated out), etc.
Unfortunately, the direct application of GPs to regression problems (with which we will be concerned here) is limited due to their training time being $O(n^3)$. To overcome this limitation, several
sparse approximations have been proposed [2, 3, 4, 5, 6]. In most of them, sparsity is achieved by
projecting all available data onto a smaller subset of size $m \ll n$ (the active set), which is selected according to some specific criterion. This reduces computation time to $O(m^2 n)$. However, active
set selection interferes with hyperparameter learning, due to its non-smooth nature (see [1, 3]).
These proposals have been superseded by the Sparse Pseudo-inputs GP (SPGP) model, introduced
in [1]. In this model, the constraint that the samples of the active set (which are called pseudoinputs) must be selected among training data is relaxed, allowing them to lie anywhere in the input
space. This allows both pseudo-inputs and hyperparameters to be selected in a joint continuous
optimisation and increases flexibility, resulting in much superior performance.
In this work we introduce Inter-Domain GPs (IDGPs) as a general tool to perform inference across
domains. This allows to remove the constraint that the pseudo-inputs must remain within the same
domain as input data. This added flexibility results in an increased performance and allows to encode
prior knowledge about other domains where data can be represented more compactly.
1
2 Review of GPs for regression
We will briefly state here the main definitions and results for regression with GPs. See [7] for a
comprehensive review.
Assume we are given a training set with $n$ samples $\mathcal{D} \equiv \{\mathbf{x}_j, y_j\}_{j=1}^n$, where each $D$-dimensional input $\mathbf{x}_j$ is associated to a scalar output $y_j$. The regression task goal is, given a new input $\mathbf{x}_*$, to predict the corresponding output $y_*$ based on $\mathcal{D}$.
The GP regression model assumes that the outputs can be expressed as some noiseless latent function plus independent noise, $y = f(\mathbf{x}) + \varepsilon$, and then sets a zero-mean GP prior on $f(\mathbf{x})$, with covariance $k(\mathbf{x}, \mathbf{x}')$, and a zero-mean Gaussian prior on $\varepsilon$, with variance $\sigma^2$ (the noise power hyperparameter). (We follow the common approach of subtracting the sample mean from the outputs and then assuming a zero-mean model.) The covariance function encodes prior knowledge about the smoothness of $f(\mathbf{x})$. The most common choice for it is the Automatic Relevance Determination Squared Exponential (ARD SE):
$$k(\mathbf{x}, \mathbf{x}') = \sigma_0^2 \exp\left[-\frac{1}{2}\sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{\ell_d^2}\right], \tag{1}$$
with hyperparameters $\sigma_0^2$ (the latent function power) and $\{\ell_d\}_{d=1}^{D}$ (the length-scales, defining how rapidly the covariance decays along each dimension). It is referred to as ARD SE because, when coupled with a model selection method, non-informative input dimensions can be removed automatically by growing the corresponding length-scale. The set of hyperparameters that define the GP is $\theta = \{\sigma^2, \sigma_0^2, \{\ell_d\}_{d=1}^{D}\}$. We will omit the dependence on $\theta$ for the sake of clarity.
If we evaluate the latent function at $X = \{\mathbf{x}_j\}_{j=1}^n$, we obtain a set of latent variables following a joint Gaussian distribution $p(\mathbf{f}|X) = \mathcal{N}(\mathbf{f}|\mathbf{0}, \mathbf{K}_{ff})$, where $[\mathbf{K}_{ff}]_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$. Using this model it is possible to express the joint distribution of training and test cases and then condition on the observed outputs to obtain the predictive distribution for any test case
$$p_{\mathrm{GP}}(y_*|\mathbf{x}_*, \mathcal{D}) = \mathcal{N}\!\left(y_*\,\middle|\,\mathbf{k}_{f*}^\top(\mathbf{K}_{ff} + \sigma^2\mathbf{I}_n)^{-1}\mathbf{y},\ \sigma^2 + k_{**} - \mathbf{k}_{f*}^\top(\mathbf{K}_{ff} + \sigma^2\mathbf{I}_n)^{-1}\mathbf{k}_{f*}\right), \tag{2}$$
where $\mathbf{y} = [y_1, \ldots, y_n]^\top$, $\mathbf{k}_{f*} = [k(\mathbf{x}_1, \mathbf{x}_*), \ldots, k(\mathbf{x}_n, \mathbf{x}_*)]^\top$, and $k_{**} = k(\mathbf{x}_*, \mathbf{x}_*)$. $\mathbf{I}_n$ is used to denote the identity matrix of size $n$. The $O(n^3)$ cost of these equations arises from the inversion of the $n \times n$ covariance matrix. Predictive distributions for additional test cases take $O(n^2)$ time each.
These costs make standard GPs impractical for large data sets.
To select hyperparameters $\theta$, Type-II Maximum Likelihood (ML-II) is commonly used. This
amounts to selecting the hyperparameters that correspond to a (possibly local) maximum of the
log-marginal likelihood, also called log-evidence.
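For concreteness, a minimal numpy sketch of (1) and (2) is given below; the Cholesky factorization is the $O(n^3)$ step. The hyperparameter values used as defaults are placeholders, not recommendations.

```python
import numpy as np

def ard_se(X1, X2, s0=1.0, ell=None):
    """ARD SE covariance (1); X1, X2 are (n1, D) and (n2, D) arrays."""
    ell = np.ones(X1.shape[1]) if ell is None else ell
    d = (X1[:, None, :] - X2[None, :, :]) / ell      # scaled pairwise diffs
    return s0**2 * np.exp(-0.5 * np.sum(d**2, axis=2))

def gp_predict(X, y, Xs, noise=0.1, s0=1.0, ell=None):
    """Full GP predictive mean/variance, eq. (2)."""
    Kff = ard_se(X, X, s0, ell) + noise**2 * np.eye(len(X))
    Kfs = ard_se(X, Xs, s0, ell)
    L = np.linalg.cholesky(Kff)                      # the O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Kfs.T @ alpha
    v = np.linalg.solve(L, Kfs)
    var = noise**2 + s0**2 - np.sum(v**2, axis=0)    # k** = s0^2 for ARD SE
    return mean, var
```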
3 Inter-domain GPs
In this section we will introduce Inter-Domain GPs (IDGPs) and show how they can be used as a
framework for computationally efficient inference. Then we will use this framework to express two
previous relevant models and develop two new ones.
3.1 Definition
Consider a real-valued GP $f(\mathbf{x})$ with $\mathbf{x} \in \mathbb{R}^D$ and some deterministic real function $g(\mathbf{x}, \mathbf{z})$, with $\mathbf{z} \in \mathbb{R}^H$. We define the following transformation:
$$u(\mathbf{z}) = \int_{\mathbb{R}^D} f(\mathbf{x})\,g(\mathbf{x}, \mathbf{z})\,d\mathbf{x}. \tag{3}$$
There are many examples of transformations that take on this form, the Fourier transform being
one of the best known. We will discuss possible choices for g(x, z) in Section 3.3; for the moment
we will deal with the general form. Since u(z) is obtained by a linear transformation of GP f (x),
it is also a GP. This new GP may lie in a different domain of possibly different dimension. This
transformation is not invertible in general, its properties being defined by g(x, z).
IDGPs arise when we jointly consider $f(\mathbf{x})$ and $u(\mathbf{z})$ as a single, "extended" GP. The mean and
covariance function of this extended GP are overloaded to accept arguments from both the input and
transformed domains and treat them accordingly. We refer to each version of an overloaded function
as an instance, which will accept a different type of arguments. If the distribution of the original GP
is $f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))$, then it is possible to compute the remaining instances that define the distribution of the extended GP over both domains. The transformed-domain instance of the mean is
$$m(\mathbf{z}) = \mathbb{E}[u(\mathbf{z})] = \int_{\mathbb{R}^D} \mathbb{E}[f(\mathbf{x})]\,g(\mathbf{x}, \mathbf{z})\,d\mathbf{x} = \int_{\mathbb{R}^D} m(\mathbf{x})\,g(\mathbf{x}, \mathbf{z})\,d\mathbf{x}.$$
The inter-domain and transformed-domain instances of the covariance function are:
$$k(\mathbf{x}, \mathbf{z}') = \mathbb{E}[f(\mathbf{x})u(\mathbf{z}')] = \mathbb{E}\left[f(\mathbf{x})\int_{\mathbb{R}^D} f(\mathbf{x}')\,g(\mathbf{x}', \mathbf{z}')\,d\mathbf{x}'\right] = \int_{\mathbb{R}^D} k(\mathbf{x}, \mathbf{x}')\,g(\mathbf{x}', \mathbf{z}')\,d\mathbf{x}' \tag{4}$$
$$k(\mathbf{z}, \mathbf{z}') = \mathbb{E}[u(\mathbf{z})u(\mathbf{z}')] = \mathbb{E}\left[\int_{\mathbb{R}^D} f(\mathbf{x})\,g(\mathbf{x}, \mathbf{z})\,d\mathbf{x} \int_{\mathbb{R}^D} f(\mathbf{x}')\,g(\mathbf{x}', \mathbf{z}')\,d\mathbf{x}'\right] = \int_{\mathbb{R}^D}\!\int_{\mathbb{R}^D} k(\mathbf{x}, \mathbf{x}')\,g(\mathbf{x}, \mathbf{z})\,g(\mathbf{x}', \mathbf{z}')\,d\mathbf{x}\,d\mathbf{x}'. \tag{5}$$
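As a quick numerical illustration of (4), the inter-domain covariance can be approximated by quadrature in one dimension. The SE covariance and the Gaussian feature extraction function used below are example choices made here for illustration (they anticipate the Gaussian-window constructions of Section 3.3), not definitions from the text.

```python
import numpy as np

def k_se(x, xp, s0=1.0, ell=0.5):
    """Example 1-D squared exponential input-domain covariance."""
    return s0**2 * np.exp(-0.5 * (x - xp)**2 / ell**2)

def g_feat(xp, z, c=0.7):
    """Example feature extraction function: a Gaussian bump centred at z."""
    return np.exp(-0.5 * (xp - z)**2 / c**2) / np.sqrt(2 * np.pi * c**2)

def k_cross(x, z, lo=-10.0, hi=10.0, num=4001):
    """Riemann-sum approximation of k(x, z) = int k(x, x') g(x', z) dx'."""
    grid = np.linspace(lo, hi, num)
    dx = grid[1] - grid[0]
    return np.sum(k_se(x, grid) * g_feat(grid, z)) * dx

print(k_cross(0.3, -0.2))   # approximate inter-domain covariance value
```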
Mean $m(\cdot)$ and covariance function $k(\cdot, \cdot)$ are therefore defined both by the values and domains of their arguments. This can be seen as if each argument had an additional domain indicator used to select the instance. Apart from that, they define a regular GP, and all standard properties hold. In particular $k(\mathbf{a}, \mathbf{b}) = k(\mathbf{b}, \mathbf{a})$. This approach is related to [8], but here the latent space is defined as a transformation of the input space, and not the other way around. This allows one to pre-specify the desired input-domain covariance. The transformation is also more general: any $g(\mathbf{x}, \mathbf{z})$ can be used.
We can sample an IDGP at $n$ input-domain points $\mathbf{f} = [f_1, f_2, \ldots, f_n]^\top$ (with $f_j = f(\mathbf{x}_j)$) and $m$ transformed-domain points $\mathbf{u} = [u_1, u_2, \ldots, u_m]^\top$ (with $u_i = u(\mathbf{z}_i)$). With the usual assumption of $f(\mathbf{x})$ being a zero-mean GP and defining $Z = \{\mathbf{z}_i\}_{i=1}^m$, the joint distribution of these samples is:
$$p\!\left(\begin{bmatrix}\mathbf{f}\\ \mathbf{u}\end{bmatrix}\,\middle|\,X, Z\right) = \mathcal{N}\!\left(\begin{bmatrix}\mathbf{f}\\ \mathbf{u}\end{bmatrix}\,\middle|\,\mathbf{0}, \begin{bmatrix}\mathbf{K}_{ff} & \mathbf{K}_{fu}\\ \mathbf{K}_{fu}^\top & \mathbf{K}_{uu}\end{bmatrix}\right) \tag{6}$$
with $[\mathbf{K}_{ff}]_{pq} = k(\mathbf{x}_p, \mathbf{x}_q)$, $[\mathbf{K}_{fu}]_{pq} = k(\mathbf{x}_p, \mathbf{z}_q)$, $[\mathbf{K}_{uu}]_{pq} = k(\mathbf{z}_p, \mathbf{z}_q)$, which allows us to perform inference across domains. We will only be concerned with one input domain and one transformed domain, but IDGPs can be defined for any number of domains.
3.2 Sparse regression using inducing features
In the standard regression setting, we are asked to perform inference about the latent function f (x)
from a data set D lying in the input domain. Using IDGPs, we can use data from any domain to
perform inference in the input domain. Some latent functions might be better defined by a set of
data lying in some transformed space rather than in the input space. This idea is used for sparse
inference.
Following [1] we introduce a pseudo data set, but here we place it in the transformed domain: $\mathcal{D} = \{Z, \mathbf{u}\}$. The following derivation is analogous to that of SPGP. We will refer to $Z$ as the inducing features and $\mathbf{u}$ as the inducing variables. The key approximation leading to sparsity is to set $m \ll n$ and assume that $f(\mathbf{x})$ is well-described by the pseudo data set $\mathcal{D}$, so that any two samples (either from the training or test set) $f_p$ and $f_q$ with $p \neq q$ will be independent given $\mathbf{x}_p$, $\mathbf{x}_q$ and $\mathcal{D}$. With this simplifying assumption², the prior over $\mathbf{f}$ can be factorised as a product of marginals:
$$p(\mathbf{f}|X, Z, \mathbf{u}) \approx \prod_{j=1}^{n} p(f_j|\mathbf{x}_j, Z, \mathbf{u}). \tag{7}$$
² Alternatively, (7) can be obtained by proposing a generic factorised form for the approximate conditional $p(\mathbf{f}|X, Z, \mathbf{u}) \approx q(\mathbf{f}|X, Z, \mathbf{u}) = \prod_{j=1}^{n} q_j(f_j|\mathbf{x}_j, Z, \mathbf{u})$ and then choosing the set of functions $\{q_j(\cdot)\}_{j=1}^{n}$ so as to minimise the Kullback-Leibler (KL) divergence from the exact joint prior $\mathrm{KL}(p(\mathbf{f}|X, Z, \mathbf{u})\,p(\mathbf{u}|Z)\,\|\,q(\mathbf{f}|X, Z, \mathbf{u})\,p(\mathbf{u}|Z))$, as noted in [9], Section 2.3.6.
Marginals are in turn obtained from (6): $p(f_j|\mathbf{x}_j, Z, \mathbf{u}) = \mathcal{N}(f_j|\mathbf{k}_j\mathbf{K}_{uu}^{-1}\mathbf{u}, \lambda_j)$, where $\mathbf{k}_j$ is the $j$-th row of $\mathbf{K}_{fu}$ and $\lambda_j$ is the $j$-th element of the diagonal of matrix $\boldsymbol{\Lambda}_f = \mathrm{diag}(\mathbf{K}_{ff} - \mathbf{K}_{fu}\mathbf{K}_{uu}^{-1}\mathbf{K}_{uf})$. Operator $\mathrm{diag}(\cdot)$ sets all off-diagonal elements to zero, so that $\boldsymbol{\Lambda}_f$ is a diagonal matrix.
Since $p(\mathbf{u}|Z)$ is readily available and also Gaussian, the inducing variables can be integrated out from (7), yielding a new, approximate prior over $f(\mathbf{x})$:
$$p(\mathbf{f}|X, Z) = \int p(\mathbf{f}, \mathbf{u}|X, Z)\,d\mathbf{u} \approx \int \prod_{j=1}^{n} p(f_j|\mathbf{x}_j, Z, \mathbf{u})\,p(\mathbf{u}|Z)\,d\mathbf{u} = \mathcal{N}(\mathbf{f}|\mathbf{0}, \mathbf{K}_{fu}\mathbf{K}_{uu}^{-1}\mathbf{K}_{uf} + \boldsymbol{\Lambda}_f)$$
Using this approximate prior, the posterior distribution for a test case is:
$$p_{\mathrm{IDGP}}(y_*|\mathbf{x}_*, \mathcal{D}, Z) = \mathcal{N}\!\left(y_*\,\middle|\,\mathbf{k}_{u*}^\top\mathbf{Q}^{-1}\mathbf{K}_{fu}^\top\boldsymbol{\Lambda}_y^{-1}\mathbf{y},\ \sigma^2 + k_{**} + \mathbf{k}_{u*}^\top(\mathbf{Q}^{-1} - \mathbf{K}_{uu}^{-1})\mathbf{k}_{u*}\right), \tag{8}$$
where we have defined $\mathbf{Q} = \mathbf{K}_{uu} + \mathbf{K}_{fu}^\top\boldsymbol{\Lambda}_y^{-1}\mathbf{K}_{fu}$ and $\boldsymbol{\Lambda}_y = \boldsymbol{\Lambda}_f + \sigma^2\mathbf{I}_n$. The distribution (2) is approximated by (8) with the information available in the pseudo data set. After $O(m^2 n)$ time precomputations, predictive means and variances can be computed in $O(m)$ and $O(m^2)$ time per test case, respectively. This model is, in general, non-stationary, even when it is approximating a stationary input-domain covariance and can be interpreted as a degenerate GP plus heteroscedastic white noise.
The log-marginal likelihood (or log-evidence) of the model, explicitly including the conditioning on kernel hyperparameters $\theta$, can be expressed as
$$\log p(\mathbf{y}|X, Z, \theta) = -\frac{1}{2}\left[\mathbf{y}^\top\boldsymbol{\Lambda}_y^{-1}\mathbf{y} - \mathbf{y}^\top\boldsymbol{\Lambda}_y^{-1}\mathbf{K}_{fu}\mathbf{Q}^{-1}\mathbf{K}_{fu}^\top\boldsymbol{\Lambda}_y^{-1}\mathbf{y} + \log\frac{|\mathbf{Q}||\boldsymbol{\Lambda}_y|}{|\mathbf{K}_{uu}|} + n\log(2\pi)\right]$$
which is also computable in $O(m^2 n)$ time.
Model selection will be performed by jointly optimising the evidence with respect to the hyperparameters and the inducing features. If analytical derivatives of the covariance function are available,
conjugate gradient optimisation can be used with $O(m^2 n)$ cost per step.
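The predictive equations (8) admit a direct, if unoptimized, implementation. The following sketch assumes the covariance blocks have already been evaluated and only illustrates the algebra; a practical implementation would use Cholesky factorizations throughout.

```python
import numpy as np

def idgp_predict(Kuu, Kfu, diag_Kff, kus, kss, y, noise):
    """Sparse IDGP predictive moments, eq. (8).

    Kuu: (m, m); Kfu: (n, m); diag_Kff: (n,) diagonal of Kff;
    kus: (m, t) cross-covariances to t test points; kss: (t,) k** values.
    """
    A = np.linalg.solve(Kuu, Kfu.T)                    # Kuu^{-1} Kuf
    lam_y = diag_Kff - np.sum(Kfu.T * A, axis=0) + noise**2   # Lambda_y diag
    KfuL = Kfu / lam_y[:, None]                        # Lambda_y^{-1} Kfu
    Q = Kuu + Kfu.T @ KfuL                             # Q = Kuu + Kuf Ly^-1 Kfu
    b = Kfu.T @ (y / lam_y)                            # Kuf Lambda_y^{-1} y
    mean = kus.T @ np.linalg.solve(Q, b)
    var = (noise**2 + kss
           + np.sum(kus * np.linalg.solve(Q, kus), axis=0)
           - np.sum(kus * np.linalg.solve(Kuu, kus), axis=0))
    return mean, var
```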
3.3 On the choice of g(x, z)
The feature extraction function g(x, z) defines the transformed domain in which the pseudo data set
lies. According to (3), the inducing variables can be seen as projections of the target function f (x)
on the feature extraction function over the whole input space. Therefore, each of them summarises
information about the behaviour of f (x) everywhere. The inducing features Z define the concrete set
of functions over which the target function will be projected. It is desirable that this set captures the
most significant characteristics of the function. This can be achieved either using prior knowledge
about data to select $\{g(\mathbf{x}, \mathbf{z}_i)\}_{i=1}^{m}$ or using a very general family of functions and letting model selection automatically choose the appropriate set.
Another way to choose g(x, z) relies on the form of the posterior. The posterior mean of a GP is
often thought of as a linear combination of "basis functions". For full GPs and other approximations
such as [1, 2, 3, 4, 5, 6], basis functions must have the form of the input-domain covariance function.
When using IDGPs, basis functions have the form of the inter-domain instance of the covariance
function, and can therefore be adjusted by choosing g(x, z), independently of the input-domain
covariance function.
If two feature extraction functions $g(\cdot, \cdot)$ and $h(\cdot, \cdot)$ can be related by $g(\mathbf{x}, \mathbf{z}) = h(\mathbf{x}, \mathbf{z})\,r(\mathbf{z})$ for any function $r(\cdot)$, then both yield the same sparse GP model. This property can be used to simplify the
expressions of the instances of the covariance function.
In this work we use the same functional form for every feature, i.e. our function set is $\{g(\mathbf{x}, \mathbf{z}_i)\}_{i=1}^{m}$, but it is also possible to use sets with different functional forms for each inducing feature, i.e. $\{g_i(\mathbf{x}, \mathbf{z}_i)\}_{i=1}^{m}$ where each $\mathbf{z}_i$ may even have a different size (dimension). In the sections below we will discuss different possible choices for $g(\mathbf{x}, \mathbf{z})$.
3.3.1 Relation with Sparse GPs using pseudo-inputs
The sparse GP using pseudo-inputs (SPGP) was introduced in [1] and was later renamed to Fully
Independent Training Conditional (FITC) model to fit in the systematic framework of [10]. Since
4
the sparse model introduced in Section 3.2 also uses a fully independent training conditional, we
will stick to the first name to avoid possible confusion.
IDGP innovation with respect to SPGP consists in letting the pseudo data set lie in a different domain. If we set $g_{\mathrm{SPGP}}(\mathbf{x}, \mathbf{z}) \propto \delta(\mathbf{x} - \mathbf{z})$, where $\delta(\cdot)$ is a Dirac delta, we force the pseudo data set to
lie in the input domain. Thus there is no longer a transformed space and the original SPGP model is
retrieved. In this setting, the inducing features of IDGP play the role of SPGP?s pseudo-inputs.
3.3.2 Relation with Sparse Multiscale GPs
Sparse Multiscale GPs (SMGPs) are presented in [11]. Seeking to generalise the SPGP model with
ARD SE covariance function, they propose to use a different set of length-scales for each basis
function. The resulting model presents a defective variance that is healed by adding heteroscedastic
white noise. SMGPs, including the variance improvement, can be derived in a principled way as
IDGPs:
$$g_{\mathrm{SMGP}}(\mathbf{x}, \mathbf{z}) \propto \prod_{d=1}^{D}\frac{1}{\sqrt{2\pi(c_d^2 - \ell_d^2)}}\,\exp\left[-\sum_{d=1}^{D}\frac{(x_d - \mu_d)^2}{2(c_d^2 - \ell_d^2)}\right] \quad\text{with}\quad \mathbf{z} = \begin{bmatrix}\boldsymbol{\mu}\\ \mathbf{c}\end{bmatrix} \tag{9}$$
$$k_{\mathrm{SMGP}}(\mathbf{x}, \mathbf{z}') = \exp\left[-\sum_{d=1}^{D}\frac{(x_d - \mu_d')^2}{2c_d'^2}\right]\prod_{d=1}^{D}\sqrt{\frac{\ell_d^2}{c_d'^2}} \tag{10}$$
$$k_{\mathrm{SMGP}}(\mathbf{z}, \mathbf{z}') = \exp\left[-\sum_{d=1}^{D}\frac{(\mu_d - \mu_d')^2}{2(c_d^2 + c_d'^2 - \ell_d^2)}\right]\prod_{d=1}^{D}\sqrt{\frac{\ell_d^2}{c_d^2 + c_d'^2 - \ell_d^2}} \tag{11}$$
With this approximation, each basis function has its own centre $\boldsymbol{\mu} = [\mu_1, \mu_2, \ldots, \mu_D]^\top$ and its own length-scales $\mathbf{c} = [c_1, c_2, \ldots, c_D]^\top$, whereas global length-scales $\{\ell_d\}_{d=1}^{D}$ are shared by all inducing features. Equations (10) and (11) are derived from (4) and (5) using (1) and (9). The integrals defining $k_{\mathrm{SMGP}}(\cdot, \cdot)$ converge if and only if $c_d^2 \geq \ell_d^2\ \forall d$, which suggests that other values, even if permitted in [11], should be avoided for the model to remain well defined.
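Instances (10) and (11) are straightforward to evaluate; the following is a direct transcription, with per-feature parameters $\boldsymbol{\mu}, \mathbf{c}$ and global $\boldsymbol{\ell}$ passed as 1-D arrays (the validity condition $c_d^2 \geq \ell_d^2$ is assumed to hold).

```python
import numpy as np

def k_smgp_cross(x, mu, c, ell):
    """Inter-domain instance (10): k(x, z') for feature z' = (mu, c)."""
    return (np.exp(-np.sum((x - mu)**2 / (2 * c**2)))
            * np.prod(np.sqrt(ell**2 / c**2)))

def k_smgp_uu(mu1, c1, mu2, c2, ell):
    """Transformed-domain instance (11) between two inducing features."""
    s = c1**2 + c2**2 - ell**2
    return (np.exp(-np.sum((mu1 - mu2)**2 / (2 * s)))
            * np.prod(np.sqrt(ell**2 / s)))
```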
3.3.3 Frequency Inducing Features GP
If the target function can be described more compactly in the frequency domain than in the input
domain, it can be advantageous to let the pseudo data set lie in the former domain. We will pursue
that possibility for the case where the input domain covariance is the ARD SE. We will call the
resulting sparse model Frequency Inducing Features GP (FIFGP).
Directly applying the Fourier transform is not possible because the target function is not square
integrable (it has constant power $\sigma_0^2$ everywhere, so (5) does not converge). We will work around this by windowing the target function in the region of interest. It is possible to use a square window, but this results in the covariance being defined in terms of the complex error function, which is very slow to evaluate. Instead, we will use a Gaussian window (a mixture of $m$ Gaussians could also be used as the window without increasing the complexity order). Since multiplying by a Gaussian in
the input domain is equivalent to convolving with a Gaussian in the frequency domain, we will be
working with a blurred version of the frequency space. This model is defined by:
" D
#
D
X x2
X
1
d
exp ?
cos
?
+
x
?
with z = ?
(12)
gFIF (x, z) ? QD p
0
d
d
2c2d
2?c2d
d=1
d=1
d=1
" D
#
! D s
D
X x2 + c2 ? 02
X
`2d
c2d ?d0 xd Y
0
0
d
d d
kFIF (x, z ) = exp ?
cos
?
+
(13)
0
2(c2d + `2d )
c2 + `2d
c2d + `2d
d=1
d=1 d
d=1
" D
#
" D
#
X c4 (?d ? ? 0 )2
X c2 (? 2 + ? 02 )
0
d
d
d
d
d
kFIF (z, z ) = exp ?
exp ?
cos(?0 ? ?00 )
2(2c2d + `2d )
2(2c2d + `2d )
d=1
d=1
" D
#
! D s
X c4 (?d + ? 0 )2
Y
`2d
0
d
d
+ exp ?
cos(?
+
?
.
(14)
)
0
0
2(2c2d + `2d )
2c2d + `2d
d=1
3
d=1
A mixture of m Gaussians could also be used as window without increasing the complexity order.
5
The inducing features are $\boldsymbol{\omega} = [\omega_0, \omega_1, \ldots, \omega_D]^\top$, where $\omega_0$ is the phase and the remaining components are frequencies along each dimension. In this model, both global length-scales $\{\ell_d\}_{d=1}^{D}$ and window length-scales $\{c_d\}_{d=1}^{D}$ are shared, thus $c_d' = c_d$. Instances (13) and (14) are induced by (12) using (4) and (5).
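Under the reconstruction of (13) above, the inter-domain instance can be evaluated as follows; packing the phase and frequencies into a single vector is an implementation convenience, not prescribed by the text.

```python
import numpy as np

def k_fif_cross(x, z, c, ell):
    """FIFGP inter-domain covariance, instance (13).

    x, c, ell: 1-D arrays of length D; z = [w0, w_1, ..., w_D] packs the
    phase w0 followed by the D frequencies.
    """
    w0, w = z[0], z[1:]
    s = c**2 + ell**2
    amp = np.exp(-np.sum((x**2 + c**2 * w**2) / (2 * s)))
    phase = w0 + np.sum(c**2 * w * x / s)
    return amp * np.cos(phase) * np.prod(np.sqrt(ell**2 / s))
```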
3.3.4 Time-Frequency Inducing Features GP
Instead of using a single window to select the region of interest, it is possible to use a different window for each feature. We will use windows of the same size but different centres. The resulting model combines SPGP and FIFGP, so we will call it Time-Frequency Inducing Features GP (TFIFGP). It is defined by $g_{\mathrm{TFIF}}(\mathbf{x}, \mathbf{z}) \propto g_{\mathrm{FIF}}(\mathbf{x} - \boldsymbol{\mu}, \boldsymbol{\omega})$, with $\mathbf{z} = [\boldsymbol{\mu}^\top\ \boldsymbol{\omega}^\top]^\top$. The implied inter-domain and transformed-domain instances of the covariance function are:
$$k_{\mathrm{TFIF}}(\mathbf{x}, \mathbf{z}') = k_{\mathrm{FIF}}(\mathbf{x} - \boldsymbol{\mu}', \boldsymbol{\omega}')\,, \qquad k_{\mathrm{TFIF}}(\mathbf{z}, \mathbf{z}') = k_{\mathrm{FIF}}(\mathbf{z}, \mathbf{z}')\,\exp\left[-\sum_{d=1}^{D}\frac{(\mu_d - \mu_d')^2}{2(2c_d^2 + \ell_d^2)}\right]$$
FIFGP is trivially obtained by setting every centre to zero $\{\boldsymbol{\mu}_i = \mathbf{0}\}_{i=1}^{m}$, whereas SPGP is obtained by setting window length-scales $\mathbf{c}$, frequencies and phases $\{\boldsymbol{\omega}_i\}_{i=1}^{m}$ to zero. If the window length-scales were individually adjusted, SMGP would be obtained.
While TFIFGP has the modelling power of both FIFGP and SPGP, it might perform worse in practice due to it having roughly twice as many hyperparameters, thus making the optimisation problem harder. The same problem also exists in SMGP. A possible workaround is to initialise the hyperparameters using a simpler model, as done in [11] for SMGP, though we will not do this here.
4 Experiments
In this section we will compare the proposed approximations FIFGP and TFIFGP with the current
state of the art, SPGP, on some large data sets, for the same number of inducing features/inputs
and therefore, roughly equal computational cost. Additionally, we provide results using a full GP,
which is expected to provide top performance (though requiring an impractically big amount of
computation). In all cases, the (input-domain) covariance function is the ARD SE (1).
We use four large data sets: Kin-40k, Pumadyn-32nm⁴ (describing the dynamics of a robot arm, used with SPGP in [1]), Elevators and Pole Telecomm⁵ (related to the control of the elevators of an
F16 aircraft and a telecommunications problem, and used in [12, 13, 14]). Input dimensions that
remained constant throughout the training set were removed. Input data was additionally centred for
use with FIFGP (the remaining methods are translation invariant). Pole Telecomm outputs actually
take discrete values in the 0-100 range, in multiples of 10. This was taken into account by using the
corresponding quantization noise variance ($10^2/12$) as a lower bound for the noise hyperparameter⁶.
Hyperparameters are initialised as follows: $\sigma_0^2 = \frac{1}{n}\sum_{j=1}^{n} y_j^2$, $\sigma^2 = \sigma_0^2/4$, and $\{\ell_d\}_{d=1}^{D}$ to one half of the range spanned by training data along each dimension. For SPGP, pseudo-inputs are initialised to a random subset of the training data; for FIFGP, window size $\mathbf{c}$ is initialised to the standard deviation of the input data, frequencies are randomly chosen from a zero-mean $\ell_d^{-2}$-variance Gaussian distribution, and phases are obtained from a uniform distribution in $[0, 2\pi)$. TFIFGP uses the same initialisation as FIFGP, with window centres set to zero. Final values are selected by evidence maximisation.
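The initialisation scheme just described is simple to reproduce. The sketch below follows it for the FIFGP case, with the random seed and array layout as illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_fifgp(X, y, m):
    """Initialise FIFGP hyperparameters and m inducing features.

    X: n x D training inputs (assumed centred); y: n training outputs.
    """
    n, D = X.shape
    sigma0_sq = np.mean(y**2)                        # latent function power
    sigma_sq = sigma0_sq / 4.0                       # noise power
    ell = 0.5 * (X.max(axis=0) - X.min(axis=0))      # half the spanned range
    c = X.std(axis=0)                                # window length-scales
    omega = rng.normal(0.0, 1.0 / ell, size=(m, D))  # freqs ~ N(0, ell^-2)
    phi = rng.uniform(0.0, 2 * np.pi, size=m)        # phases in [0, 2*pi)
    return sigma0_sq, sigma_sq, ell, c, np.column_stack([phi, omega])
```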
Denoting the output average over the training set as $\bar{y}$ and the predictive mean and variance for test sample $y_{*l}$ as $\mu_{*l}$ and $\sigma_{*l}^2$ respectively, we define the following quality measures: Normalized Mean Square Error (NMSE) $\langle(y_{*l} - \mu_{*l})^2\rangle / \langle(y_{*l} - \bar{y})^2\rangle$ and Mean Negative Log-Probability (MNLP) $\frac{1}{2}\langle(y_{*l} - \mu_{*l})^2/\sigma_{*l}^2 + \log \sigma_{*l}^2 + \log 2\pi\rangle$, where $\langle\cdot\rangle$ averages over the test set.
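In code, the two measures amount to the following (a direct transcription of the definitions above):

```python
import numpy as np

def nmse(y, mu, ybar):
    """Normalized Mean Square Error; ybar is the training-output mean."""
    return np.mean((y - mu)**2) / np.mean((y - ybar)**2)

def mnlp(y, mu, var):
    """Mean Negative Log-Probability for Gaussian predictive distributions."""
    return 0.5 * np.mean((y - mu)**2 / var + np.log(var) + np.log(2 * np.pi))
```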
⁴ Kin-40k: 8 input dimensions, 10000/30000 samples for train/test; Pumadyn-32nm: 32 input dimensions, 7168/1024 samples for train/test, using exactly the same preprocessing and train/test splits as [1, 3]. Note that their error measure is actually one half of the Normalized Mean Square Error defined here.
⁵ Pole Telecomm: 26 non-constant input dimensions, 10000/5000 samples for train/test. Elevators: 17 non-constant input dimensions, 8752/7847 samples for train/test. Both have been downloaded from http://www.liaad.up.pt/~ltorgo/Regression/datasets.html
⁶ If unconstrained, similar plots are obtained; in particular, no overfitting is observed.
For Kin-40k (Fig. 1, top), all three sparse methods perform similarly, though for high sparseness
(the most useful case) FIFGP and TFIFGP are slightly superior. In Pumadyn-32nm (Fig. 1, bottom),
only 4 out of the 32 input dimensions are relevant to the regression task, so it can be used as an ARD
capabilities test. We follow [1] and use a full GP on a small subset of the training data (1024 data
points) to obtain the initial length-scales. This allows better minima to be found during optimisation.
Though all methods are able to properly find a good solution, FIFGP and especially TFIFGP are
better in the sparser regime. Roughly the same considerations can be made about Pole Telecomm
and Elevators (Fig. 2), but in these data sets the superiority of FIFGP and TFIFGP is more dramatic.
Though not shown here, we have additionally tested these models on smaller, overfitting-prone data
sets, and have found no noticeable overfitting even using m > n, despite the relatively high number
of parameters being adjusted. This is in line with the results and discussion of [1].
[Figure 1: Performance of the compared methods on Kin-40k and Pumadyn-32nm. Panels: (a) Kin-40k NMSE (log-log plot); (b) Kin-40k MNLP (semilog plot); (c) Pumadyn-32nm NMSE (log-log plot); (d) Pumadyn-32nm MNLP (semilog plot). Each panel plots error versus the number of inducing features / pseudo-inputs for SPGP, FIFGP and TFIFGP, with a full GP (trained on 10000 and 7168 data points, respectively) as reference.]
5 Conclusions and extensions
In this work we have introduced IDGPs, which are able to combine representations of a GP in different domains, and have used them to extend SPGP to handle inducing features lying in a different
domain. This provides a general framework for sparse models, which are defined by a feature extraction function. Using this framework, SMGPs can be reinterpreted as fully principled models using a
transformed space of local features, without any need for post-hoc variance improvements. Furthermore, it is possible to develop new sparse models of practical use, such as the proposed FIFGP and
TFIFGP, which are able to outperform the state-of-the-art SPGP on some large data sets, especially
for high sparsity regimes.
[Figure 2: Performance of the compared methods on Elevators and Pole Telecomm. Panels: (a) Elevators NMSE (log-log plot); (b) Elevators MNLP (semilog plot); (c) Pole Telecomm NMSE (log-log plot); (d) Pole Telecomm MNLP (semilog plot). Each panel plots error versus the number of inducing features / pseudo-inputs for SPGP, FIFGP and TFIFGP, with a full GP (trained on 8752 and 10000 data points, respectively) as reference.]
Choosing a transformed space for the inducing features enables the use of domains where the target
function can be expressed more compactly, or where the evidence (which is a function of the features) is easier to optimise. This added flexibility translates as a detaching of the functional form of
the input-domain covariance and the set of basis functions used to express the posterior mean.
IDGPs approximate full GPs optimally in the KL sense noted in Section 3.2, for a given set of
inducing features. Using ML-II to select the inducing features means that models providing a good
fit to data are given preference over models that might approximate the full GP more closely. This,
though rarely, might lead to harmful overfitting. To more faithfully approximate the full GP and
avoid overfitting altogether, our proposal can be combined with the variational approach from [15],
in which the inducing features would be regarded as variational parameters. This would result in
more constrained models, which would be closer to the full GP but might show reduced performance.
We have explored the case of regression with Gaussian noise, which is analytically tractable, but it
is straightforward to apply the same model to other tasks such as robust regression or classification,
using approximate inference (see [16]). Also, IDGPs as a general tool can be used for other purposes,
such as modelling noise in the frequency domain, aggregating data from different domains or even
imposing constraints on the target function.
Acknowledgments
We would like to thank the anonymous referees for helpful comments and suggestions. This work
has been partly supported by the Spanish government under grant TEC2008-02473/TEC, and by
the Madrid Community under grant S-505/TIC/0223.
References
[1] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18, pages 1259–1266. MIT Press, 2006.
[2] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13, pages 619–625. MIT Press, 2001.
[3] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Proceedings of the 9th International Workshop on AI Stats, 2003.
[4] V. Tresp. A Bayesian committee machine. Neural Computation, 12:2719–2741, 2000.
[5] L. Csató and M. Opper. Sparse online Gaussian processes. Neural Computation, 14(3):641–669, 2002.
[6] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13, pages 682–688. MIT Press, 2001.
[7] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, 2006.
[8] M. Alvarez and N. D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In Advances in Neural Information Processing Systems 21, pages 57–64, 2009.
[9] E. Snelson. Flexible and efficient Gaussian process models for machine learning. PhD thesis, University of Cambridge, 2007.
[10] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, 2005.
[11] C. Walder, K. I. Kim, and B. Schölkopf. Sparse multiscale Gaussian process regression. In 25th International Conference on Machine Learning. ACM Press, New York, 2008.
[12] G. Potgietera and A. P. Engelbrecht. Evolving model trees for mining data sets with continuous-valued classes. Expert Systems with Applications, 35:1513–1532, 2007.
[13] L. Torgo and J. Pinto da Costa. Clustered partial linear regression. In Proceedings of the 11th European Conference on Machine Learning, pages 426–436. Springer, 2000.
[14] G. Potgietera and A. P. Engelbrecht. Pairwise classification as an ensemble technique. In Proceedings of the 13th European Conference on Machine Learning, pages 97–110. Springer-Verlag, 2002.
[15] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Workshop on AI Stats, 2009.
[16] A. Naish-Guzman and S. Holden. The generalized FITC approximation. In Advances in Neural Information Processing Systems 20, pages 1057–1064. MIT Press, 2008.
3,175 | 3,877 | Improving Existing Fault Recovery Policies
Guy Shani
Department of Information Systems Engineering
Ben Gurion University, Beer-Sheva, Israel
[email protected]
Christopher Meek
Microsoft Research
One Microsoft Way, Redmond, WA
[email protected]
Abstract
An automated recovery system is a key component in a large data center. Such
a system typically employs a hand-made controller created by an expert. While
such controllers capture many important aspects of the recovery process, they are
often not systematically optimized to reduce costs such as server downtime. In
this paper we describe a passive policy learning approach for improving existing
recovery policies without exploration. We explain how to use data gathered from
the interactions of the hand-made controller with the system, to create an improved
controller. We suggest learning an indefinite horizon Partially Observable Markov
Decision Process, a model for decision making under uncertainty, and solving it using a point-based algorithm. We describe the complete process, starting with data
gathering, model learning, model checking procedures, and computing a policy.
1 Introduction
Many companies that provide large scale online services, such as banking services, e-mail services,
or search engines, use large server farms, often containing tens of thousands of computers in order to
support fast computation with low latency. Occasionally, these computers may experience failures,
due to software, or hardware problems. Often, these errors can be fixed automatically through
actions such as rebooting or re-imaging of the computer [6]. In such large systems it is prohibitively
costly to have a technician decide on a repair action for each observed problem. Therefore, these
systems often use some automatic repair policy or controller to choose appropriate repair actions.
These repair policies typically receive failure messages from the system. For example, Isard [6]
suggests using a set of watchdogs: computers that probe other computers to test some attribute.
Messages from the watchdogs are then typically aggregated into a small set of notifications, such as
"Software Error" or "Hardware Error". The repair policy receives notifications and decides which actions can fix the observed problems. In many cases such policies are created by human experts based on their experience and knowledge of the process. While human-made controllers often exhibit reasonable performance, they are not automatically optimized to reduce costs. Thus, in many cases, it is possible to create a better controller that would improve the performance of the system.
A natural choice for modeling such systems is to model each machine as a Partially Observable
Markov Decision Process (POMDP) [8], a well-known model for decision making under uncertainty [12]. Given the POMDP parameters, we can compute a policy that optimizes repair costs,
but learning the POMDP parameters may be difficult. Most researchers that use POMDPs therefore
assume that the parameters are known. Alternatively, Reinforcement Learning (RL) [14] offers a
wide range of techniques for learning optimized controllers through interactions with the environment, often avoiding the need for an explicit model. These techniques are typically used in an online
learning setting, and require that the agent explore all possible state-action pairs.
In the case of the management of large data centers, where inappropriate actions may result in
considerable increased costs, it is unlikely that the learning process would be allowed to try every
combination of state and action. It is therefore unclear how standard RL techniques can be used
in this setting. On the other hand, many systems log the interactions of the existing hand-made
controller with the environment, accumulating significant data. Typically, the controller will not be
designed to perform exploration, and we cannot expect such logs to contain sufficient data to train
standard RL techniques.
In this paper we introduce a passive policy learning approach that uses only available information, without exploration, to improve an existing repair policy. We adopt the indefinite-horizon POMDP formalization [4], and use the existing controller's logs to learn the unknown model parameters, using
an EM algorithm (an adapted Baum-Welch [1, 2, 15] algorithm). We suggest a model-checking
phase, providing supporting evidence for the quality of the learned model, which may be crucial
to help the system administrators decide whether the learned model is appropriate. We proceed
to compute a policy for our learned model, that can then be used in the data center instead of the
original hand-made controller.
We experiment with a synthetic, yet realistic, simulation of machine failures, showing how the
policy of the learned POMDP performs close to optimal, and outperforms a set of simpler techniques
that learn a policy directly in history space. We discuss the limitations of our method, mainly the
dependency on a reasonable hand-made controller in order to learn good models.
Many other real world applications, such as assembly lines, medical diagnosis systems, and failure
detection and recovery systems, are also controlled by hand-made controllers. While in this paper
we focus on recovery from failures, our approach may be applicable to other similar domains.
2 Properties of the Error Recovery Problem
In this section we describe aspects of the error recovery problem and a POMDP model for the
problem. Key aspects of the problem include the nature of repair actions and costs, machine failure,
failure detection, and control policies.
Key aspects of repair actions include: (1) actions may succeed or fail stochastically; (2) actions often exhibit an escalating behavior. We label actions with increasing levels, where problems fixed by an action at level i are also fixed by any action of level j > i. Probabilistically, this means that if j > i then pr(healthy | a_j, e) ≥ pr(healthy | a_i, e) for any error e. (3) Action costs are
higher level actions are more expensive. In many real world systems this escalation is exponential. For example, restarting a service takes 5 seconds, rebooting a machine takes approximately 10
minutes, while re-imaging the machine takes about 2 hours.
Another stochastic feature of this problem is the inexact failure detection. It is not uncommon for a
watchdog to report an error for a machine that is fully operational, or to report a "healthy" status
for a machine that experiences a failure.
In this domain, machines are identical and independent. Typically computers in service farms share
the same configuration and execute independent programs, attempting, for example, to answer independent queries to a search engine. It is therefore unlikely, if not impossible, for errors to propagate
from one machine to another.
In view of the escalating nature of actions and costs, a natural choice for a policy is an escalation
policy. Such policies choose a starting level based on the first observation, and execute an action at
that level. In many cases, due to the non-deterministic success of repair actions, each action is tried
several times. After the controller decides that the action at the current level cannot fix the problem,
the controller escalates to the next action level. Such policies have several hand tuned decisions. For
example, the number of retries of an action before an escalation occurs, and the entry level given an
observation. We can hope that these features, at least, could be optimized by a learning algorithm.
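As a concrete illustration, the action sequence such a controller commits to can be written down directly; the following is a minimal sketch (the function and parameter names are ours, not taken from any real repair system):

```python
def escalation_action_sequence(entry_level, n_levels, max_retries=3):
    """Worst-case action sequence of a hand-made escalation controller:
    start at the entry level suggested by the first observation, retry
    each repair action up to `max_retries` times, then escalate to the
    next (more expensive) level. The controller executes this sequence
    until the machine is observed to be healthy."""
    return [level
            for level in range(entry_level, n_levels)
            for _ in range(max_retries)]
```

For example, escalation_action_sequence(1, 3) yields [1, 1, 1, 2, 2, 2]: three attempts at the level-1 repair followed by three attempts at the level-2 repair.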
System administrators typically collect logs of the hand-made controller execution, for maintenance
purposes. These logs represent a valuable source of data about the system behavior that can be
used to learn a policy. We would like to use this knowledge to construct an improved policy that
will perform better than the original policy. Formally, we assume that we receive as input a log L
of repair sessions. Each repair session is a sequence l = o0 , a1 , o1 , ..., onl , starting with an error
notification, followed by a set of repair actions and observations until the problem is fixed. In
some cases, sessions end with the machine declared "dead", but in practice a technician is called
for these machines, repairing or replacing them. Therefore, we can assume that all sessions end
successfully in the healthy state.
2.1 A POMDP for Error Recovery
Given the problem features above, a natural choice is to model each machine independently as a
partially observable Markov decision process (POMDP) with common parameters. We define a
cost-based POMDP through a tuple <S, A, tr, C, Ω, O>, where S is a set of states. In our case, we adopt a factored representation, where s = <e_0, ..., e_n> and e_i ∈ {0, 1} indicates whether error i exists. That is, states are sets of failures, or errors of a machine, such as a software error or a hardware failure. We also add a special state s_H = <0, ..., 0>, the healthy state.
A is a set of actions, such as rebooting a machine or re-imaging it. tr(s, a, s′) is a state transition function, specifying the probabilities of moving between states. We restrict our transition function such that tr(s, a, s′) > 0 only if, for all i, s_i = 0 implies s′_i = 0. That is, an action may only fix an error, not generate new errors. C(s, a) is a cost function, assigning a cost to each state-action pair. Often, costs can be measured as the time (in minutes) for executing the action. For example, a reboot may take 15 minutes, while re-imaging takes 2 hours.
Ω is a set of possible observations. For us, observations are messages from the watchdogs, such as a notification of a hard disk failure or a service reporting an error, and notifications about the success or failure of an action. O(a, s′, o) is an observation function, assigning a probability pr(o | a, s′) to each observation.
In a POMDP the true state is not directly observable and we thus maintain a belief state b ∈ B, a probability distribution over states, where b(s) is the probability that the system is in state s. We assume that every repair session starts with an error observation, typically provided by one of the watchdogs. We therefore define b_{o_0}, the prior distribution over states given an initial observation o. We will also maintain a probability distribution pr_0(o) over initial observations. While this
through trials.
It is convenient to define a policy for a POMDP as a mapping from belief states to actions, π : B → A. Our goal is to find an optimal policy that brings the machine to the healthy state with the minimal cost. One method for computing a policy is through a value function, V, assigning a value to each belief state b. Such a value function can be expressed as a set of |S|-dimensional vectors known as α-vectors, i.e., V = {α_1, ..., α_n}. Then, α_b = arg min_{α ∈ V} α · b is the optimal α-vector for belief state b, and V(b) = b · α_b is the value that the value function V assigns to b, where α · b = Σ_i α_i b_i is the standard vector inner product. By associating an action a(α) with each vector, a policy π : B → A can be defined through π(b) = a(α_b).
While exact value iteration, through complete updates of the belief space, does not scale beyond
small toy examples, Pineau et al. [10] suggest updating the value function by creating a single α-vector that is optimized for a specific belief state. Such methods, known as point-based value
iteration, compute a value function over a finite set of belief states, resulting in a finite size value
function. Perseus [13] is an especially fast point-based solver that incrementally updates a value
function over a randomly chosen set of belief points, ensuring that at each iteration, the value for
each belief state is improved, while maintaining a compact value function representation.
We adopt here the indefinite-horizon POMDP framework [4], which we consider to be most appropriate for failure recovery. In this framework the POMDP has a single special action a_T, available in any state, that terminates the repair session. In our case, the action is to call a technician, deterministically repairing the machine, but at a huge cost. For example, Isard [6] estimates that a technician will fix a computer within 2 weeks. Executing a_T in s_H incurs no cost. With an indefinite horizon it is easy to define a lower bound on the value function using a_T, and execute any point-based algorithm, such as the Perseus algorithm that we use.
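To make these objects concrete, here is a small NumPy sketch (our own construction, with assumed array shapes, not code from the paper) of the belief update induced by tr and O, and of acting greedily with respect to a set of α-vectors:

```python
import numpy as np

def update_belief(b, a, o, tr, O):
    """Bayesian belief update: b'(s') ∝ O[a, s', o] * Σ_s tr[s, a, s'] b(s).
    tr has shape (|S|, |A|, |S|); O has shape (|A|, |S|, |Ω|)."""
    b_next = O[a, :, o] * (b @ tr[:, a, :])
    return b_next / b_next.sum()

def greedy_action(b, alphas, actions):
    """π(b) = a(α_b), where α_b is the cost-minimizing α-vector at b."""
    values = [alpha @ b for alpha in alphas]
    return actions[int(np.argmin(values))]
```

Since the α-vectors here represent costs rather than rewards, the policy minimizes rather than maximizes the inner product.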
3 Learning Policies from System Logs
In this section we propose two alternatives for computing a recovery policy given the logs. We begin
with a simple, model-free, history-based policy computation. Then, we suggest a more sophisticated
method that learns the POMDP model parameters, and then uses the POMDP to compute a policy.
3.1 Model-Free Learning of Q-values
The optimal policy for a POMDP can be expressed as a mapping from action-observation histories
to actions. Histories are directly observable, allowing us to use the standard Q function terminology, where Q(h, a) is the expected cost of executing action a with history h and continuing the
session until it terminates. This approach is known as model-free, because (e.g.) the parameters
of a POMDP are never learned, and has some attractive properties, because histories are directly
observable, and do not require any assumption about the unobserved state space.
As opposed to standard Q-learning, where the Q function is learned while interacting with the
environment, we use the system log L to compute Q:
Cost(l_i) = Σ_{j=i+1}^{|l|} C(a_j)    (1)

Q(h, a) = ( Σ_{l ∈ L} δ(h + a, l) · Cost(l_{|h|}) ) / ( Σ_{l ∈ L} δ(h + a, l) )    (2)

where l_i is the suffix of l starting at action a_i, C(a) is the cost of action a, h + a is the history h with the action a appended at its end, and δ(h, l) = 1 if h is a prefix of l and 0 otherwise. The Q function is hence the average cost until repair of executing the action a in history h, under the policy that generated L. Learning a Q function is much faster than learning the POMDP parameters, requiring only a single pass over the training sequences in the system log.
Given the learned Q function, we can define the following policy:
π_Q(h) = arg min_a Q(h, a)    (3)
One obvious problem of learning a direct mapping from histories to actions is that such policies do not generalize: if a history sequence was not observed in the logs, then we cannot evaluate the expected cost until the error is repaired. An approach that generalizes better is to use a finite history window of size k, discarding all the observations and actions occurring more than k steps ago. For
example, when k = 1 the result is a completely reactive Q function, computing Q(o, a) using the
last observation only.
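A single-pass implementation of Eqs. (1)–(2), including the finite-window variant, might look as follows; this is a sketch under our own assumptions about the session encoding (alternating observations and actions) and naming:

```python
from collections import defaultdict

def learn_q_from_logs(logs, action_costs, k=None):
    """Estimate Q(h, a) from repair sessions l = [o0, a1, o1, ..., an, on]:
    the average cost-to-repair over all sessions of which h + a is a prefix.
    `k` truncates histories to a finite window (None keeps full histories)."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for session in logs:
        actions = session[1::2]
        # suffix costs, Eq. (1): cost_to_go[i] = C(a_{i+1}) + ... + C(a_n)
        cost_to_go = [0.0] * (len(actions) + 1)
        for i in range(len(actions) - 1, -1, -1):
            cost_to_go[i] = action_costs[actions[i]] + cost_to_go[i + 1]
        for i in range(len(actions)):
            h = tuple(session[: 2 * i + 1])   # history before the (i+1)th action
            if k is not None:
                h = h[-(2 * k - 1):]          # keep the last k observations
            key = (h, actions[i])
            totals[key] += cost_to_go[i]      # Cost(l_{|h|}) of Eq. (2)
            counts[key] += 1
    return {key: totals[key] / counts[key] for key in totals}
```

The policy of Eq. (3) is then the arg-min over the actions that were observed with a given (possibly truncated) history.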
3.2 Model-Based Policy Learning
While we assume that the behavior of a machine can be captured perfectly using a POMDP as described above, in practice we cannot expect the parameters of the POMDP to be known a priori. Typically, the only parameters that are known are the set of possible repair actions and the set of possible observations; even the number of possible errors is not initially known, let alone the probabilities of repair or observation.
Given the log of repair sessions, we can use a learning algorithm to learn the parameters of the
POMDP. In this paper we choose to use an adapted Baum-Welch algorithm [1, 2, 15], an EM algorithm originally developed for computing the parameters of Hidden Markov Models (HMMs). The
Baum-Welch algorithm takes as input the number of states (the number of possible errors) and a set
of training sequences. Then, using the forward-backward procedure, the parameters of the POMDP
are computed, attempting to maximize the likelihood of the data (the observation sequences). After
the POMDP parameters have been learned, we execute Perseus [13] to compute a policy.
While training the model parameters, it is important to test the likelihood on a held-out set of sequences
that are not used in training, in order to ensure that the resulting model does not over-fit the data. We
hence split the input sequences into a train set (80%) and test set (20%). We check the likelihood of
the test set after each forward-backward iteration, and stop the training when the likelihood of the
test set does not improve.
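The training loop just described can be expressed generically; `em_step` and `log_likelihood` below are placeholders for whichever HMM/POMDP library is used, so this is a sketch of the control flow rather than of Baum-Welch itself:

```python
def train_with_early_stopping(sequences, model, em_step, log_likelihood,
                              max_iters=100, holdout=0.2):
    """Run EM on 80% of the sessions; stop as soon as the likelihood of the
    held-out 20% no longer improves, to avoid over-fitting."""
    split = int((1.0 - holdout) * len(sequences))
    train, test = sequences[:split], sequences[split:]
    best = log_likelihood(model, test)
    for _ in range(max_iters):
        candidate = em_step(model, train)   # one forward-backward EM update
        score = log_likelihood(candidate, test)
        if score <= best:
            return model                    # held-out likelihood stopped improving
        model, best = candidate, score
    return model
```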
3.2.1 Model Checking
When employing automatic learning methods to create an improved policy, it is important to provide evidence for the quality of the learned models. Such evidence can be helpful for the system
administrators in order to make a decision whether to replace the existing policy with a new policy.
Using an imperfect learner such as Baum-Welch does not guarantee that the resulting model indeed
maximizes the likelihood of the observations given the policy, even for the same policy that was used
to generate the training data. Also, the loss function used for learning the model ignores action costs,
thus ignoring an important aspect of the problem. For these reasons, it is possible that the resulting
model will describe the domain poorly. After the model has been learned, however, we can use the
average cost to provide evidence for the validity of the model. Such a process can help us determine
whether these shortcomings of the learning process have indeed resulted in an inappropriate model.
This phase is usually known as model checking (see, e.g. [3]).
As opposed to the Q-learning approach, learning a generative model (the POMDP) allows us to check how similar the learned model is to the original model. We say that two POMDPs M_1 = <S_1, A, tr_1, C_1, Ω, O_1> and M_2 = <S_2, A, tr_2, C_2, Ω, O_2> are indistinguishable if for each policy π : H → A, E[Σ_t C_t | M_1, π] = E[Σ_t C_t | M_2, π]. That is, the models are indistinguishable if any policy has the same expected accumulated cost when executed in both models.
Many policies cannot be evaluated on the real system because we cannot tolerate damaging policies.
We can, however, compare the performance of the original hand-made policy on the system and on
the learned POMDP model. We hence focus the model checking phase on comparing the expected
cost of the hand-made policy predicted be the learned model to the true expected cost on the real
system. To estimate the expected cost in the real system, we use the average cost of the sessions
in the logs. To estimate the expected cost of the policy on the learned POMDP we execute a set of
trials, each simulating a repair session, using the learned parameters of the POMDP to govern the
trial advancement (observation emissions given the history and action).
We can then use the two expected cost estimates as a measure of closeness. For example, if the
predicted cost of the policy over the learned POMDP is more than 20% away from the true expected
cost, we may deduce that the learned model does not properly capture the system dynamics. While
checking the models under a single policy cannot ensure that the models are identical, it can detect
whether the model is defective. If the learned model produces a substantially different expectation
over the cost of a policy than the real system, we know that the model is corrupted prior to executing
its optimal policy on the real system.
After ensuring that the original policy performs similarity on the real system and on the learned
model, we can also evaluate the performance of the computed policy on the learned model. Thus,
we can compare the quality of the new policy to the existing one, helping us to understand the
potential cost reduction of the new policy.
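The model-checking comparison reduces to two cost estimates; the sketch below is our own, and assumes a `simulate_session` rollout function on the learned POMDP is supplied:

```python
import numpy as np

def average_session_cost(logs, action_costs):
    """Empirical expected repair cost of the logged (hand-made) policy."""
    return float(np.mean([sum(action_costs[a] for a in session[1::2])
                          for session in logs]))

def model_checking_gap(logs, action_costs, simulate_session, n_trials=10000):
    """Relative gap between the policy's cost on the real logs and its
    predicted cost on the learned model; a large gap flags a defective model."""
    real = average_session_cost(logs, action_costs)
    predicted = float(np.mean([simulate_session() for _ in range(n_trials)]))
    return real, predicted, abs(predicted - real) / real
```

A relative gap above some threshold (say, 20%, as in the example above) would suggest rejecting the learned model before deploying its policy.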
4 Empirical Evaluation
In this section we provide an empirical evaluation to demonstrate that our methods can improve an
existing policy. We created a simulator of recovery sessions. In the simulator we assume that a
machine can be in one of n error states or in the healthy state; we also assume n possible repair actions,
and m possible observations. We assume that each action was designed to fix a single error state,
and set the number of errors to be the number of repair actions.
We set pr(s_H | e_i, a_j) = 0.7 + 0.3 · (j − i)/j if j ≥ i, and 0 otherwise, simulating the escalation power of repair actions. We set C(s, a_i) = 4^i and C(s_H, a_T) = 0, simulating the exponential growth of costs in the real AutoPilot system [6], and the zero downtime caused by terminating the session in the healthy state. For observations, we compute the relative severity of an error e_i in the observation space, ω_i = i · m / n, and then set pr(o_j | e_i, a) = κ e^{−(j − ω_i)² / 2} / √(2π), where κ is a normalizing factor and j ∈ [ω_i − 1, ω_i + 1].
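Under this parameterization, the repair-success probabilities and costs are one-liners; the following sketch uses 1-based indices as in the text, and the names are ours:

```python
def repair_success_prob(i, j):
    """pr(s_H | e_i, a_j): a level-j action fixes error i with probability
    0.7 + 0.3*(j - i)/j when j >= i, and never when j < i (escalation)."""
    return 0.7 + 0.3 * (j - i) / j if j >= i else 0.0

def action_cost(i):
    """C(s, a_i) = 4**i: exponentially escalating repair costs."""
    return 4 ** i
```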
We execute a hand-made escalation policy with 3 retries (see Section 2) over the simulator and
gather a log of repair sequences. Each repair sequence begins with selecting an error uniformly, and
executing the policy until the error is fixed. Then, we use the logs in order to learn a Q function
Table 1: Average cost of recovery policies in simulation, with increasing model size. Results are averaged over 10 executions, and the worst standard error across all recovery policies is reported in the last column. π_E is the original escalation policy, π_M* the policy computed on the optimally initialized model, and π_M, Q, Q1, Q3, Q5 are learned from the logs; all costs are measured on the simulator S.

|E|  |O|  |L|      π_E,S    π_M*,S   π_M,S    Q,S      Q1,S     Q3,S     Q5,S     SE
 2    2   10000    21.6     17.3     17.3     17.3     18.0     17.4     17.3     < 0.2
 4    2   10000    220.3    167.7    172.3    193.6    174.6    179.6    190.8    < 3
 4    4   10000    221.6    136.8    141.5    197.8    239.5    163.6    178.5    < 2.5
 8    4   50000    29070    15047    20592    52636    29611    24611    27951    < 250
 8    8   50000    28978    15693    18303    54585    61071    26808    27038    < 275
over the complete history, finite history window Q functions with k = 1, 3, and a POMDP model.
For the POMDP model, we initialize the number of states to the number of repair actions, initialize the transition function uniformly and the observation function randomly, and execute the Baum-Welch algorithm. We also constructed a maximum-likelihood POMDP model, by initializing the state space, transition, and observation functions using the true state labels (the simulated errors), and executing Baum-Welch afterwards. This initialization simulates the result of a "perfect learner" that does not suffer from the local maximum problems of Baum-Welch.
In the tables below we use S for the simulator, M for the learned model, and M* for the model initialized by the true error labels. For policies, we use π_E for the escalation policy, π_M for the policy computed by Perseus on the learned model, and π_M* for the Perseus policy over the "perfect learner" model. For the history-based Q functions, we use Q to denote the function computed for the complete history, and Q_i denotes a policy over history suffixes of length i. A column header π, S denotes the estimated cost of executing π on the simulator, and π, M denotes the estimated cost of executing π on the model M. We also report the standard error of the estimates.
4.1 Results
We begin by showing the improvement of the policies of the learned models over the original escalation policies. As Table 1 demonstrates, learning a POMDP model and computing its policy always results in a substantial reduction in costs. The M* model, initialized using the true error labels, provides an upper bound on the best performance gain that can be achieved using our approach. We can see that in many cases, the result of the learned model is very close to this upper bound.
The Q functions over histories did well on the smallest domains, but not on larger ones. The worst performance is that of the reactive Q function (Q1) over the latest observation. In the smaller domains Q-learning, especially with a history window of 3 (Q3), does fairly well, but in the larger domains none of the history-based policies perform well.
We now take a look at the results of the model checking technique. As we explained above, a model
checking phase, comparing the expected cost of a policy on both the real system and the learned
model, can provide evidence as to the validity of the learned model. Indeed, as we see in Table 2,
the learned models predict an expected cost that is within 3% of the real expected cost.
To further validate our learned models, we also compare the expected cost of the policies computed
from the models (M and M*) over the model and the simulator. Again, we can see that the predicted costs are very close to the real costs of these policies. As expected, the M* predicted costs are within
measurement error of the true costs of the policy on the real system.
4.2 Discussion
The experimental results provide a few interesting insights. First, when the observation space is rich
enough to capture all the errors, the learned model is very close to the optimal one. When we use
fewer observations, the quality of the learned model is reduced (further from M*), but the policy
that is learned still significantly outperforms the original escalation policy.
It is important to note that the hand-made escalation policy that we use is very natural for domains
with actions that have escalating costs and effects. Also, the number of actions and errors that we
use is similar to those used by current controllers of repair services [6]. As such, the improvements
Table 2: Comparing expected cost of policies on the learned model and the simulator for model checking. Results are averaged over 10 executions, and the worst standard error across all recovery policies is reported in the last column. Columns group the escalation policy π_E, the optimal-model policy π_M*, and the learned-model policy π_M, each evaluated on the models M*, M and/or the simulator S.

|E|  |O|  |L|      π_E,M*   π_E,M    π_E,S    π_M*,M*  π_M*,S   π_M,M    π_M,S    SE
 2    2   10000    21.6     22.3     21.6     17.3     17.3     17.6     17.3     < 0.2
 4    2   10000    219.5    227.2    220.4    165.5    167.7    173.5    172.4    < 3
 4    4   10000    221.1    225.4    221.6    137.4    136.8    138.7    141.5    < 2.5
 8    4   50000    28985    29152    29070    16461    15047    21104    20592    < 250
 8    8   50000    28870    29104    28978    15630    15693    17052    18303    < 275
that we achieve over this hand-made policy hint that similar gains can be made over the real system.
Our results show an improvement of 20%–40%, increasing with the size of the domain. As driving
down the costs of data centers is crucial for the success of such systems, increasing performance by
20% is a substantial contribution.
The history-based approaches did well only on the smallest domains. This is because for a history-based value function, we have to evaluate every action at every history. As the input policy does not
explore, the set of resulting histories does not provide enough coverage of the history space. For
example, if the current repair policy only escalates, the history-based approach will never observe a
higher level action followed by a low level action, and cannot evaluate its expected cost.
Finite history windows increase coverage, by reducing the number of possible histories to a finite
scale. Thus, finite history windows provide some generalization power. Indeed, the finite history
window methods (except for the reactive policy) improve upon the original escalation policy in
some cases. We note, though, that we used a very simple finite history model, and more complicated
approaches, such as variable length history windows [9, 11] may provide better results.
Our model checking technique indicates that none of the models that were learned, even when the
number of observations was smaller than the number of errors, was defective. This is not particularly
surprising, as the input data originated from a simulator that operated under the assumptions of the
model. This is unlikely to happen in the real system, where any modeling assumptions compromise
some aspect of the real world. Still, in the real world this technique will allow us to test, before
changing the existing policy, whether the assumptions are close to the truth, and whether the model
is reasonable. This is a crucial step in making the decision to replace an existing, imperfect yet
operative policy, with a new one. It is unclear how to run a similar model checking phase over the
history-based approach.
In improving non-exploring policies, we must assume that many possible histories will not be observed. However, a complete model definition must set a value for every possibility. In such cases, it is important to set default values for these unknown model parameters. In our case,
it is best to be pessimistic about these parameters, that is, to overestimate the cost of repair. It is
therefore safe to assume that action a will not fix error e if we never observed a to fix e in the logs,
except for the terminating action aT .
5 Related Work
Using decision-theoretic techniques for troubleshooting and recovery dates back to Heckerman et
al. [5], who employed Bayesian networks for troubleshooting, and a myopic approximation for
recovery. Heckerman et al. assume that the parameters of the Bayesian network are given as input,
and training it using the unlabeled data that the logs contain is difficult. This Bayesian network
approach is also not designed for sequential data.
Partially Observable Markov Decision Processes were previously suggested for modeling automated
recovery from errors. Most notably, Littman et al. [8] suggests the CSFR model which is similar
to our POMDP formalization, except for a deterministic observation function, the escalation of
actions, and the terminating action. They then proceed to define a belief state in this model, which
is a set of possible error states, and a Q-function Q(b, a) over beliefs. The Q-function is computed
using standard value iteration. As these assumptions reduce the partial observability, the resulting
Q function can produce good policies. Littman et al. assume that the model is either given, or that
Q-learning can be executed online, using an exploration strategy, both of which are not applicable in
our case. Also, as we argue above, in our case a Q function produces substantially inferior policies
because of its lack of generalization power in partially observable domains.
Another, more recent, example of a recovery approach based on POMDPs was suggested by Joshi
et al. [7]. Similar to Littman et al., Joshi et al. focus on the problem of fault recovery in networks,
which adds a layer of difficulty because we can no longer assume that machines are independent,
as often faults cascade through the network. Joshi et al. also assume that the parameters of the
model, such as the probability that a watchdog will detect each failure, and the effects of actions on
failures, are known a-priori. They then suggest a one step lookahead repair strategy, and a multi-step
lookahead, that uses a value function over a belief space similar to the Littman et al. belief space.
Bayer-Zubek and Dietterich [16] use a set of examples, similar to our logs, to learn a policy for
disease diagnosis. They formalize the problem as an MDP, assuming that test results are discrete
and exact, and use AO? search, while computing the needed probabilities using the example set.
They did not address the problem of missing data in the example set that arises from a non-exploring
policy. Indeed, in the medical diagnosis case, one may argue that trying an action sequence that was
never tried by a human doctor may result in an unreasonable risk of harm to the patient, and that
therefore the system should not consider such policies.
6 Conclusion
We have presented an approach to improving imperfect repair policies through learning a POMDP
model of the problem. Our method takes as input a log of interaction of the existing controller with
the system, learns a POMDP model, and computes a policy for the POMDP that can be used in the
the real system. The advantage of our method is that it does not require the existing controller to
actively explore the effects of actions in all conditions, which may result in unacceptable costs in the
real system. On the other hand, our approach may not converge to an optimal policy. We experiment
with a synthetic, yet realistic, example of a hand-made escalation policy, where actions are ordered
by increasing cost, and any action is repeated a number of times. We show how the policy of the
learned model significantly improves the original escalation policy.
In the future we intend to use the improved policies to manage repairs in a real data center within the
AutoPilot system [6]. The first step would be to "flight" candidate policies to evaluate their performance in the real system. Our current method is a single-shot improvement, and an interesting next
step is to create an incremental improvement process, where new policies constantly improve the
existing one. In this setting, it would be interesting to explore bounded exploration, an exploration
technique that puts a bound on the risk of the strategy.
There are a number of interesting theoretical questions about our passive policy learning method
and about passive policy learning in general. First, for what families of initial policies and system
dynamics would a passive policy learning method be expected to yields an improvement in expected
costs. Second, what families of initial policies and systems dynamics would a passive policy learning
method be expected to yield the optimal policy. Third, how would one characterize when iteratively
applying a passive policy learning method would yield expected improvements in expected costs.
Finally, while this paper focuses on the important failure recovery problem, our methods may be
applicable to a wide range of similar systems, such as assembly line management, and medical
diagnosis systems, that currently employ hand-made imperfect controllers.
References
[1] Leonard E. Baum, Ted Petrie, George Soules, and Norman Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164–171, 1970.
[2] Lonnie Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 183–188. AAAI Press, 1992.
[3] Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis. Chapman and Hall, 1996.
[4] Eric A. Hansen. Indefinite-horizon POMDPs with action-based termination. In AAAI, pages 1237–1242, 2007.
[5] David Heckerman, John S. Breese, and Koos Rommelse. Decision-theoretic troubleshooting. Communications of the ACM, 38(3):49–57, 1995.
[6] Michael Isard. Autopilot: automatic data center management. Operating Systems Review, 41(2):60–67, 2007.
[7] Kaustubh R. Joshi, William H. Sanders, Matti A. Hiltunen, and Richard D. Schlichting. Automatic model-driven recovery in distributed systems. In SRDS, pages 25–38, 2005.
[8] Michael L. Littman and Nishkam Ravi. An instance-based state representation for network repair. In Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI), pages 287–292, 2004.
[9] Andrew Kachites McCallum. Reinforcement learning with selective perception and hidden state. PhD thesis, 1996. Supervisor: Dana Ballard.
[10] Joelle Pineau, Geoffrey Gordon, and Sebastian Thrun. Point-based value iteration: An anytime algorithm for POMDPs. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1025–1032, August 2003.
[11] Guy Shani and Ronen I. Brafman. Resolving perceptual aliasing in the presence of noisy sensors. In NIPS, 2004.
[12] R. D. Smallwood and E. J. Sondik. The optimal control of partially observable Markov decision processes over a finite horizon. Operations Research, 21:1071–1098, 1973.
[13] Matthijs T. J. Spaan and Nikos Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195–220, 2005.
[14] Richard S. Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[15] Daan Wierstra and Marco Wiering. Utile distinction hidden Markov models. In ICML '04: Proceedings of the Twenty-First International Conference on Machine Learning, page 108, New York, NY, USA, 2004. ACM.
[16] Valentina Bayer Zubek and Thomas G. Dietterich. Integrating learning from examples into the search for diagnostic policies. Journal of Artificial Intelligence Research (JAIR), 24:263–303, 2005.
Group Orthogonal Matching Pursuit for
Variable Selection and Prediction
Aurélie C. Lozano, Grzegorz Swirszcz, Naoki Abe
IBM Watson Research Center,
1101 Kitchawan Road,
Yorktown Heights NY 10598,USA
{aclozano,swirszcz,nabe}@us.ibm.com
Abstract
We consider the problem of variable group selection for least squares regression,
namely, that of selecting groups of variables for best regression performance,
leveraging and adhering to a natural grouping structure within the explanatory
variables. We show that this problem can be efficiently addressed by using a certain greedy style algorithm. More precisely, we propose the Group Orthogonal
Matching Pursuit algorithm (Group-OMP), which extends the standard OMP procedure (also referred to as the "forward greedy feature selection algorithm" for least
squares regression) to perform stage-wise group variable selection. We prove that
under certain conditions Group-OMP can identify the correct (groups of) variables. We also provide an upper bound on the l∞ norm of the difference between
the estimated regression coefficients and the true coefficients. Experimental results on simulated and real world datasets indicate that Group-OMP compares
favorably to Group Lasso, OMP and Lasso, both in terms of variable selection
and prediction accuracy.
1 Introduction
We address the problem of variable selection for regression, where a natural grouping structure
exists within the explanatory variables, and the goal is to select the correct group of variables, rather
than the individual variables. This problem arises in many situations (e.g. in multifactor ANOVA,
generalized additive models, time series data analysis, where lagged variables belonging to the same
time series may form a natural group, and gene expression analysis from microarray data, where genes
belonging to the same functional cluster may be considered as a group). In these settings, selecting
the right groups of variables is often more relevant to the subsequent use of estimated models, which
may involve interpreting the models and making decisions based on them.
Recently, several methods have been proposed to address this variable group selection problem, in
the context of linear regression [12, 15]. These methods are based on extending the Lasso formulation [8] by modifying the l1 penalty to account for the group structure. Specifically, Yuan & Lin [12] proposed the Group Lasso, which solves

arg min_β (1/2) ||y − Σ_{j=1}^J X_{G_j} β_{G_j}||² + λ Σ_{j=1}^J ||β_{G_j}||₂,

where X_{G_1}, ..., X_{G_J} are the natural groupings within the variables of X and β_{G_j} are the coefficient vectors for variables in groups G_j. Zhao et al. [15] considered a more general penalty class, the Composite Absolute Penalties family T(β) = Σ_{j=1}^J ||β_{G_j}||_{l_j}^{l_0}, of which the Group Lasso penalty is a
special instance. This development opens up a new direction of research, namely that of extending
the existing regression methods with variable selection to the variable group selection problem and
investigating to what extent they carry over to the new scenario.
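For reference, the Group Lasso objective above is straightforward to evaluate for a given coefficient vector; the following is a minimal sketch (names ours):

```python
import numpy as np

def group_lasso_objective(X, y, beta, groups, lam):
    """(1/2)||y - X beta||_2^2 + lam * Σ_j ||beta_{G_j}||_2,
    with `groups` a list of index arrays G_1, ..., G_J."""
    residual = y - X @ beta
    penalty = sum(np.linalg.norm(beta[g]) for g in groups)
    return 0.5 * float(residual @ residual) + lam * penalty
```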
The present paper establishes that indeed one recent advance in variable selection methods for regression, the "forward greedy feature selection algorithm", also known as the Orthogonal Matching
Pursuit (OMP) algorithm in the signal processing community [5], can be generalized to the current
setting of group variable selection. Specifically we propose the "Group Orthogonal Matching Pursuit" algorithm (Group-OMP), which extends the OMP algorithm to leverage variable groupings,
and prove that, under certain conditions, Group-OMP can identify the correct (groups of) variables
when the sample size tends to infinity. We also provide an upper bound on the l∞ norm of the difference between the estimated regression coefficients and the true coefficients. Hence our results
generalize those of Zhang [13], which established consistency of the standard OMP algorithm. A
key technical contribution of this paper is to provide a condition for Group-OMP to be consistent,
which generalizes the "Exact Recovery Condition" of [9] (Theorem 3.1) stated for OMP under the
noiseless case. This result should also be of interest to the signal processing community in the
context of block-sparse approximation of signals. We also conduct empirical evaluation to compare the performance of Group-OMP with existing methods, on simulated and real world datasets.
Our results indicate that Group-OMP favorably compares to the Group Lasso, OMP and Lasso algorithms, both in terms of the accuracy of prediction and that of variable selection. Related work
includes [10, 3], which use OMP for simultaneous sparse approximation, [11], which shows that standard MP selects features from correct groups, and [4], which considers a more general setting than ours.
The rest of the paper is organized as follows. Section 2 describes the proposed Group-OMP procedure. The consistency results are then stated in Section 3. The empirical evaluation results are
presented in Section 4. We conclude the paper with some discussions in Section 5.
2 Group Orthogonal Matching Pursuit
Consider the general regression problem y = Xβ̄ + ε, where y ∈ R^n is the response vector, X = [f_1, ..., f_d] ∈ R^{n×d} is the matrix of feature (or variable) vectors f_j ∈ R^n, β̄ ∈ R^d is the coefficient vector and ε ∈ R^n is the noise vector. We assume that the noise components ε_i, i = 1, ..., n, are independent Gaussian variables with mean 0 and variance σ². For any G ⊆ {1, ..., d} let X_G denote the restriction of X to the set of variables {f_j, j ∈ G}, where the columns f_j are arranged in ascending order. Similarly, for any vector β ∈ R^d of regression coefficients, denote by β_G its restriction to G, with reordering in ascending order. Suppose that a natural grouping structure exists within the variables of X, consisting of J groups X_{G_1}, ..., X_{G_J}, where G_i ⊆ {1, ..., d}, G_i ∩ G_j = ∅ for i ≠ j, and X_{G_i} ∈ R^{n×d_i}. Then, the above regression problem can be decomposed with respect to the groups, i.e. y = Σ_{j=1}^J X_{G_j} β̄_{G_j} + ε, where β̄_{G_j} ∈ R^{d_j}. Furthermore, to simplify the exposition, assume that each X_{G_j} is orthonormalized, i.e. X_{G_j}^T X_{G_j} = I_{d_j}.
Given β ∈ R^d let supp(β) = {j : β_j ≠ 0}. For any index set G and v ∈ R^n, denote by β̂_X(G, v) the coefficients resulting from applying ordinary least squares (OLS) with non-zero coefficients restricted to G, i.e., β̂_X(G, v) = arg min_{β ∈ R^d} ||Xβ − v||₂² subject to supp(β) ⊆ G. Given the above setup, the Group-OMP procedure we propose is described in Figure 1, which extends the OMP procedure to deal with group selection. Note that this procedure picks the best group in each iteration, with respect to reduction of the residual error, and then re-estimates the coefficients β^{(k)}, as in OMP. We recall that this re-estimation step is what distinguishes OMP, and our group version, from standard boosting-like procedures.
• Input: The data matrix X = [f_1, ..., f_d] ∈ R^{n×d}, with group structure G_1, ..., G_J, such that X_{G_j}^T X_{G_j} = I_{d_j}. The response y ∈ R^n. Precision ε > 0 for the stopping criterion.
• Output: The selected groups G^{(k)}, the regression coefficients β^{(k)}.
• Initialization: G^{(0)} = ∅, β^{(0)} = 0.
For k = 1, 2, ...
    Let j^{(k)} = arg max_j ||X_{G_j}^T (Xβ^{(k−1)} − y)||₂.    (*)
    If ||X_{G_{j^{(k)}}}^T (Xβ^{(k−1)} − y)||₂ ≤ ε, break.
    Set G^{(k)} = G^{(k−1)} ∪ G_{j^{(k)}}. Let β^{(k)} = β̂_X(G^{(k)}, y).
End

Figure 1: Method Group-OMP
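For concreteness, the following is a minimal NumPy transcription of Figure 1; the function and variable names are ours, and each X[:, G_j] is assumed to have orthonormal columns as the algorithm requires:

```python
import numpy as np

def group_omp(X, y, groups, eps):
    """Group-OMP as in Figure 1. `groups` is a list of integer index
    arrays G_1, ..., G_J."""
    d = X.shape[1]
    selected, beta = [], np.zeros(d)
    while True:
        residual = X @ beta - y
        scores = [np.linalg.norm(X[:, g].T @ residual) for g in groups]
        j = int(np.argmax(scores))
        # after OLS re-estimation, selected groups have (numerically) zero
        # correlation with the residual, so re-picking one signals convergence
        if scores[j] <= eps or j in selected:
            break
        selected.append(j)
        idx = np.concatenate([groups[k] for k in selected])
        beta = np.zeros(d)
        sol, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        beta[idx] = sol   # OLS restricted to the selected groups
    return selected, beta
```

For instance, with groups = [np.arange(0, 3), np.arange(3, 6)], the call group_omp(X, y, groups, eps=1e-6) returns the indices of the selected groups together with the re-estimated coefficient vector.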
3 Consistency Results
3.1 Notation
Let G_good denote the set of all the groups included in the true model. We refer to the groups in G_good as good groups. Similarly, we call G_bad the set of all the groups which are not included. We let g_good and g_bad denote the sets of "good indices" and "bad indices", i.e. g_good = ∪_{G_i ∈ G_good} G_i and g_bad = ∪_{G_i ∈ G_bad} G_i. When they are used to restrict index sets for matrix columns or vectors, they are assumed to be in canonical (ascending) order, as we did for G. Furthermore, the elements of G_good are groups of indices, and |G_good| is the number of groups in G_good, while g_good is defined in terms of individual indices, i.e. g_good is the set of indices corresponding to the groups in G_good. The same holds for G_bad and g_bad. In this notation, supp(β̄) ⊆ g_good.
We denote by ρ_X(G_good) the smallest eigenvalue of X_{g_good}^T X_{g_good}, i.e.

ρ_X(G_good) = inf_β { ||Xβ||₂² / ||β||₂² : supp(β) ⊆ g_good }.

Here and throughout the paper we let A* denote the conjugate of the matrix A (which, for a real matrix A, coincides with its transpose) and A^+ denote the Moore–Penrose pseudoinverse of the matrix A (cf. [6, 7]). If the rows of A are linearly independent, A^+ = A*(AA*)^{−1}, and when the columns of A are linearly independent, A^+ = (A*A)^{−1}A*. Generally, for u = (u_1, ..., u_{|g_good|}) and v = (v_1, ..., v_{|g_bad|}) we define
||u||^{good}_{(2,1)} = Σ_{G_i ∈ G_good} √( Σ_{j ∈ G_i} u_j² ),   ||v||^{bad}_{(2,1)} = Σ_{G_i ∈ G_bad} √( Σ_{j ∈ G_i} v_j² ),

and then for any matrix A ∈ R^{|g_good| × |g_bad|}, let

||A||^{good/bad}_{(2,1)} = sup_{||v||^{bad}_{(2,1)} = 1} ||Av||^{good}_{(2,1)}.

Then we define μ_X(G_good) = ||X_{g_good}^+ X_{g_bad}||^{good/bad}_{(2,1)}.

3.2 The Noiseless Case
We first focus on the noiseless case (i.e. ε ≡ 0). For all k, let r_k = Xβ^{(k)} − y. In the noiseless case, we have r_0 = −y ∈ Span(X_{g_good}). So if Group-OMP has not made a mistake up to round k, we also have r_k ∈ Span(X_{g_good}). The following theorem and its corollary provide a condition which guarantees that Group-OMP does not make a mistake at the next iteration, given that it has not made any mistakes up to that point. By induction on k, it implies that Group-OMP never makes a mistake.
Theorem 1. Reorder the groups in such a way that G_good = {G_1, ..., G_m} and G_bad = {G_{m+1}, ..., G_J}. Let r ∈ Span(X_{g_good}). Then the following holds:

||( ||X_{G_{m+1}}^T r||₂, ||X_{G_{m+2}}^T r||₂, ..., ||X_{G_J}^T r||₂ )||_∞ / ||( ||X_{G_1}^T r||₂, ||X_{G_2}^T r||₂, ..., ||X_{G_m}^T r||₂ )||_∞ ≤ μ_X(G_good).    (1)
Proof of Theorem 1. Reorder the groups in such a way that G_good = {G_1, ..., G_m} and G_bad = {G_{m+1}, ..., G_J}. Let Φ_g : R^n → R^{d_1} × R^{d_2} × ... × R^{d_m} be defined as

Φ_g(x) = ( (X_{G_1}^T x)^T, (X_{G_2}^T x)^T, ..., (X_{G_m}^T x)^T )^T,

and analogously let Φ_b : R^n → R^{d_{m+1}} × R^{d_{m+2}} × ... × R^{d_J} be defined as

Φ_b(x) = ( (X_{G_{m+1}}^T x)^T, (X_{G_{m+2}}^T x)^T, ..., (X_{G_J}^T x)^T )^T.

We shall denote by V_g the space R^{d_1} × ... × R^{d_m} with the norm ||·||^g_{(2,∞)} defined as

||(v_1, v_2, ..., v_m)||_{(2,∞)} = ||( ||v_1||₂, ||v_2||₂, ..., ||v_m||₂ )||_∞ for v_i ∈ R^{d_i}, i = 1, ..., m,

and analogously V_b = R^{d_{m+1}} × ... × R^{d_J} with the norm ||·||^b_{(2,∞)} defined as

||(v_1, v_2, ..., v_{J−m})||_{(2,∞)} = ||( ||v_1||₂, ..., ||v_{J−m}||₂ )||_∞ for v_j ∈ R^{d_{m+j}}, j = 1, ..., J − m.

It is easy to verify that ||·||^g_{(2,∞)} and ||·||^b_{(2,∞)} are indeed norms. Now the condition expressed by Eq. (1) can be rephrased as

||Φ_b(r)||^b_{(2,∞)} / ||Φ_g(r)||^g_{(2,∞)} ≤ μ_X(G_good).    (2)
Lemma 1. The map Φ_g restricted to Span(∪_{i=1}^m X_{G_i}) is a linear isomorphism onto its image.

Proof of Lemma 1. By definition, if Φ_g(x) = 0 then x must be orthogonal to each of the subspaces spanned by X_{G_i}, i = 1, ..., m. Thus ker Φ_g ∩ Span(∪_{i=1}^m X_{G_i}) = {0}.

Let (Φ_g)^+ denote the inverse mapping whose existence was proved in Lemma 1. The choice of symbol is not coincidental: the matrix of this mapping is indeed a pseudoinverse of the matrix (X_{G_1} | X_{G_2} | ... | X_{G_m})^T. We have

||Φ_b(r)||^b_{(2,∞)} / ||Φ_g(r)||^g_{(2,∞)} = ||Φ_b((Φ_g)^+ Φ_g(r))||^b_{(2,∞)} / ||Φ_g(r)||^g_{(2,∞)} ≤ ||Φ_b ∘ (Φ_g)^+||_{(2,∞)},

where the last term is the norm of the operator Φ_b ∘ (Φ_g)^+ : V_g → V_b. We are going to need the following
Lemma 2. The dual space of V_g is (V_g)* = R^{d_1} × R^{d_2} × ... × R^{d_m} with the norm ||·||^g_{(2,1)} defined as ||(v_1, ..., v_m)||_{(2,1)} = ||( ||v_1||₂, ..., ||v_m||₂ )||₁. The dual space of V_b is (V_b)* = R^{d_{m+1}} × R^{d_{m+2}} × ... × R^{d_J} with the norm ||·||^b_{(2,1)} defined as ||(v_1, ..., v_{J−m})||_{(2,1)} = ||( ||v_1||₂, ..., ||v_{J−m}||₂ )||₁.
Proof of Lemma 2. We prove the claim for V_b; the proof for V_g is identical. Let v* = (v*_{m+1}, ..., v*_J) ∈ R^{d_{m+1}} × R^{d_{m+2}} × ... × R^{d_J}. We have

||v*|| = sup_{v ∈ V_b, ||v||_{(2,∞)} = 1} |v*(v)| = sup_{||v||_{(2,∞)} = 1} | Σ_{i=m+1}^J ⟨v*_i, v_i⟩ | = Σ_{i=m+1}^J sup_{v_i ∈ R^{d_i}, ||v_i||₂ = 1} |⟨v*_i, v_i⟩| = Σ_{i=m+1}^J ||v*_i||₂.

The last equality follows from sup_{||v_i||₂ = 1} |⟨v*_i, v_i⟩| = ||v*_i||₂ (as ℓ₂* = ℓ₂) and the Schwarz inequality.
A fundamental fact from functional analysis states that (Hermitian) conjugation is an isometric isomorphism. Thus

||Φ_b ∘ (Φ_g)^+||_{(2,∞)} = ||((Φ_g)^+)* ∘ (Φ_b)*||_{(2,1)}.    (3)

We used here (A*)* = A and (A*)^+ = (A^+)*. The right hand side of (3) is equal to ||X_{g_good}^+ X_{g_bad}||^{good/bad}_{(2,1)} in matrix notation. Thus the inequality (1) holds. This concludes the proof of Theorem 1.
Corollary 1. Under the conditions of Theorem 1, if μ_X(G_good) < 1 then the following holds:

||( ||X_{G_{m+1}}^T r||₂, ||X_{G_{m+2}}^T r||₂, ..., ||X_{G_J}^T r||₂ )||_∞ / ||( ||X_{G_1}^T r||₂, ||X_{G_2}^T r||₂, ..., ||X_{G_m}^T r||₂ )||_∞ < 1.    (4)
Intuitively, the condition μ_X(G_good) < 1 guarantees that no bad group "mimics" any good group too well. Note that Theorem 1 and Corollary 1 are the counterpart of Theorem 3.3 in [9], which states the Exact Recovery Condition for the standard OMP algorithm, namely that ||X_{g_good}^+ X_{g_bad}||_{(1,1)} < 1, where g_good is not defined in terms of groups, but rather in terms of the variables present in the true model (since the notion of groups does not pertain to OMP in its original form).
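As a practical sanity check of this condition, one can compute ρ_X(G_good) exactly and lower-bound μ_X(G_good) by Monte Carlo over its operator-norm definition; the following sketch is our own construction, not part of the paper's method:

```python
import numpy as np

def rho_x(X, good_groups):
    """ρ_X(G_good): smallest eigenvalue of X_good^T X_good."""
    Xg = X[:, np.concatenate(good_groups)]
    return float(np.linalg.eigvalsh(Xg.T @ Xg).min())

def mu_x_lower_bound(X, good_groups, bad_groups, n_samples=10000, seed=0):
    """Monte Carlo lower bound on μ_X(G_good) = ||X_good^+ X_bad||_(2,1).
    Random unit vectors (in the bad (2,1)-norm) only lower-bound the sup,
    so this checks the condition μ_X(G_good) < 1 heuristically rather
    than certifying it."""
    M = (np.linalg.pinv(X[:, np.concatenate(good_groups)])
         @ X[:, np.concatenate(bad_groups)])
    g_cuts = np.cumsum([len(g) for g in good_groups])[:-1]
    b_cuts = np.cumsum([len(g) for g in bad_groups])[:-1]
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_samples):
        v = rng.standard_normal(M.shape[1])
        v /= sum(np.linalg.norm(p) for p in np.split(v, b_cuts))  # ||v||_bad = 1
        best = max(best, sum(np.linalg.norm(p) for p in np.split(M @ v, g_cuts)))
    return best
```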
3.3 The Noisy Case
The following theorem extends the results of Theorem 1 to deal with non-zero Gaussian noise ε. It shows that under certain conditions the Group-OMP algorithm does not select bad groups. A sketch of the proof is provided at the end of this section.

Theorem 2. Assume that μ_X(G_good) < 1 and ρ_X(G_good) > 0. For any η ∈ (0, 1/2), with probability at least 1 − 2η, if the stopping criterion of the Group-OMP algorithm is such that

ε > (1 / (1 − μ_X(G_good))) σ √(2d ln(2d/η)),

then when the algorithm stops, all of the following hold:

(C1) G^{(k−1)} ⊆ G_good;
(C2) ||β^{(k−1)} − β̂_X(G_good, y)||₂ ≤ ε √(|G_good \ G^{(k−1)}|) / ρ_X(G_good);
(C3) ||β̂_X(G_good, y) − β̄||_∞ ≤ σ √(2 ln(2|g_good|/η) / ρ_X(G_good));
(C4) |G_good \ G^{(k−1)}| ≤ 2 |{ G_j ∈ G_good : ||β̄_{G_j}||₂ < 8ε ρ_X(G_good)^{−1} }|.
We thus obtain the following theorem, which states the main consistency result for Group-OMP.

Theorem 3. Assume that μ_X(G_good) < 1 and ρ_X(G_good) > 0. For any η ∈ (0, 1/2), with probability at least 1 − 2η, if the stopping criterion of the Group-OMP algorithm is such that ε > (1 / (1 − μ_X(G_good))) σ √(2d ln(2d/η)) and min_{G_j ∈ G_good} ||β̄_{G_j}||₂ ≥ 8ε ρ_X(G_good)^{−1}, then when the algorithm stops, G^{(k−1)} = G_good and ||β^{(k−1)} − β̄||_∞ ≤ σ √(2 ln(2|g_good|/η) / ρ_X(G_good)).
Except for the condition on $\rho_X(G_{\mathrm{good}})$ (and the definition of $\rho_X(G_{\mathrm{good}})$ itself), the conditions in Theorem 2 and Theorem 3 are similar to those required for the standard OMP algorithm [13], the main advantage being that for Group-OMP it is the $\ell_2$ norm of the coefficient groups of the true model that needs to be lower-bounded, rather than the amplitude of the individual coefficients.¹
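For reference, here is a minimal sketch of the procedure these conditions refer to (not the authors' exact implementation; it assumes the columns within each group have been orthonormalized, as in the setup of the paper, and `groups` is a list of column-index arrays). Both selection and stopping use the group correlation $\max_j \|X^{T}_{G_j} r\|_2$ with the threshold $\epsilon$ from Theorems 2 and 3.

```python
import numpy as np

def group_omp(X, y, groups, eps):
    # Greedy Group-OMP sketch: pick the group most correlated with the
    # residual, refit OLS on all selected groups, and stop when the largest
    # group correlation drops to eps or below.
    n, p = X.shape
    selected = []
    beta = np.zeros(p)
    while True:
        r = y - X @ beta
        scores = [np.linalg.norm(X[:, g].T @ r) for g in groups]
        j = int(np.argmax(scores))
        if scores[j] <= eps or j in selected:   # second test: numerical safeguard
            break
        selected.append(j)
        cols = np.concatenate([groups[i] for i in selected])
        beta = np.zeros(p)
        beta[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
    return beta, selected
```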
Proof Sketch of Theorem 2. To prove the theorem, a series of lemmas is needed; their proofs are omitted due to space constraints, as they can be derived using arguments similar to those of Zhang [13] for the standard OMP case. The following lemma gives a lower bound on the correlation between the good groups and the residuals from the OLS prediction when the coefficients have been restricted to a set of good groups.

Lemma 3. Let $G \subseteq G_{\mathrm{good}}$, i.e., $G$ is a set of good groups. Let $\hat\beta = \bar\beta_X(G, y)$, $\beta^0 = \bar\beta_X(G_{\mathrm{good}}, y)$, $f = X\hat\beta$ and $f^0 = X\beta^0$. Then

$\max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(y - f)\|_2 \;\ge\; \sqrt{\frac{\mu_X(G_{\mathrm{good}})}{|G_{\mathrm{good}}\setminus G|}}\;\|f - f^0\|_2.$
The following lemma relates the parameter $\bar\beta_X(G_{\mathrm{good}}, y)$, which is estimated by OLS given that the set of good groups has been correctly identified, to the true parameter $\bar\beta$.

Lemma 4. For all $\eta \in (0, 1)$, with probability at least $1 - \eta$, we have

$\|\bar\beta_X(G_{\mathrm{good}}, y) - \bar\beta_X(G_{\mathrm{good}}, \mathbb{E}y)\|_\infty \;\le\; \sigma\sqrt{\frac{2\ln(2|g_{\mathrm{good}}|/\eta)}{\mu_X(G_{\mathrm{good}})}}.$
The following lemma provides an upper bound on the correlation of the bad features with the residuals from the OLS prediction given that the set of good groups has been correctly identified.

Lemma 5. Let $\beta^0 = \bar\beta_X(G_{\mathrm{good}}, y)$ and $f^0 = X\beta^0$. We have

$P\Big(\max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(f^0 - y)\|_2 \le \sigma\sqrt{2d\ln(2d/\eta)}\Big) \;\ge\; 1 - \eta.$
We are now ready to prove Theorem 2. We first prove, by induction on k, that for each iteration k before the Group-OMP algorithm stops, $G^{(k-1)} \subseteq G_{\mathrm{good}}$. Suppose that the claim holds after $k - 1$ iterations, where $k \ge 1$. So at the beginning of the kth iteration, we have $G^{(k-1)} \subseteq G_{\mathrm{good}}$. We have

$\max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^{(k-1)} - y)\|_2$
$\;\le\; \max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}X(\beta^{(k-1)} - \beta^0)\|_2 + \max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^0 - y)\|_2$
$\;\le\; \rho_X(G_{\mathrm{good}})\max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}X(\beta^{(k-1)} - \beta^0)\|_2 + \max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^0 - y)\|_2$   (5)
$\;=\; \rho_X(G_{\mathrm{good}})\max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^{(k-1)} - y)\|_2 + \max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^0 - y)\|_2.$   (6)
¹ The sample size n is explicitly part of the conditions in [13], while it is implicit here due to the different ways of normalizing the matrix X. One recovers the same dependency on n by considering $X' = \sqrt{n}X$, $\beta'^{(k)} = \beta^{(k)}/\sqrt{n}$, $\bar\beta' = \bar\beta/\sqrt{n}$, defining (as in [13]) $\mu'_{X'}(G_{\mathrm{good}}) = \inf\{n^{-1}\|X'\beta\|_2^2/\|\beta\|_2^2 : \mathrm{supp}(\beta) \subseteq g_{\mathrm{good}}\}$, and noting that $\mu'_{X'}(G_{\mathrm{good}}) = \mu_X(G_{\mathrm{good}})$ and $\bar\beta_{X'}(G_{\mathrm{good}}, y) = \bar\beta_X(G_{\mathrm{good}}, y)/\sqrt{n}$. If X had i.i.d. entries with mean 0, variance $1/n$ and finite 4th moment, $\mu_X(G_{\mathrm{good}})$ converges a.s. to $(1 - \sqrt{g})^2$ as $n \to \infty$ and $|g_{\mathrm{good}}|/n \to g \le 1$ [2]. Hence the rates in C2-C4 are unaffected by $\mu_X(G_{\mathrm{good}})$.
Here Eq. (5) follows by applying Theorem 1, and Eq. (6) is due to the fact that $X^{T}_{G_j}(X\beta^0 - y) = 0$ holds for all $G_j \subseteq G_{\mathrm{good}}$.

Lemma 5, together with the condition on $\epsilon$ of Theorem 2, implies that with probability at least $1 - \eta$,

$\max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^0 - y)\|_2 \;\le\; \sigma\sqrt{2d\ln(2d/\eta)} \;<\; (1 - \rho_X(G_{\mathrm{good}}))\,\epsilon.$   (7)

Lemma 3, together with the definition of $\mu_X(G_{\mathrm{good}})$, implies

$\max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(y - X\beta^{(k-1)})\|_2 \;\ge\; \frac{\mu_X(G_{\mathrm{good}})}{\sqrt{|G_{\mathrm{good}}\setminus G^{(k-1)}|}}\;\|\beta^{(k-1)} - \beta^0\|_2.$   (8)
We then have to deal with the following cases.

Case 1: $\|\beta^{(k-1)} - \beta^0\|_2 > \epsilon\sqrt{|G_{\mathrm{good}}\setminus G^{(k-1)}|}/\mu_X(G_{\mathrm{good}})$. It follows that

$\max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(y - X\beta^{(k-1)})\|_2 \;>\; \epsilon \;>\; \max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^0 - y)\|_2\,/\,(1 - \rho_X(G_{\mathrm{good}})),$   (9)

where the last inequality follows from Eq. (7). Then Eq. (6) implies that $\max_{G_j\not\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^{(k-1)} - y)\|_2 < \max_{G_j\subseteq G_{\mathrm{good}}} \|X^{T}_{G_j}(X\beta^{(k-1)} - y)\|_2$. So a good group is selected, i.e., $G_{i^{(k)}} \subseteq G_{\mathrm{good}}$, and Eq. (9) implies that the algorithm does not stop.
?
|G
\G (k?1) |
Case 2: k? (k?1) ? ? 0 k2 ? ? ?good
. We then have three possibilities.
X (Ggood )
Case 2.1: Gi(k) ? Ggood and the procedure does not stop.
Case 2.2: Gi(k) ? Ggood and the procedure stops.
?
Case 2.3: Gi(k) 6? Ggood in which case we have maxGj ?Ggood kXG
(X? (k?1) ? y)k2 ?
j
?
(k?1)
?
maxGj 6?Ggood kXGj (X?
? y)k2 ? ?X (Ggood ) maxGj ?Ggood kXGj (X? (k?1) ? y)k2 +
?
?
maxGj 6?Ggood kXG
(X? 0 ? y)k2 ? ?X (Ggood ) maxGj 6?Ggood kXG
(X? (k?1) ? y)k2 +
j
j
?
0
maxGj 6?Ggood kXGj (X? ? y)k2 , where the second inequality follows from Eq. 6 and
the last follows from applying the first inequality once again.
We thus obtain that
1
?
(k?1)
?
0
maxGj 6?Ggood kXG
(X?
?
y)k
?
max
kX
2
Gj 6?Ggood
Gj (X? ? y)k2 < ?,
1??X (Ggood )
j
where the last inequality follows by Eq. 7. Hence the algorithm stops.
The above cases imply that if the algorithm does not stop we have $G_{i^{(k)}} \subseteq G_{\mathrm{good}}$, and hence $G^{(k)} \subseteq G_{\mathrm{good}}$; and if the algorithm stops we have $\|\beta^{(k-1)} - \beta^0\|_2 \le \epsilon\sqrt{|G_{\mathrm{good}}\setminus G^{(k-1)}|}/\mu_X(G_{\mathrm{good}})$. Thus, by induction, if the Group-OMP algorithm stops at iteration k, we have that $G^{(k-1)} \subseteq G_{\mathrm{good}}$ and $\|\beta^{(k-1)} - \beta^0\|_2 \le \epsilon\sqrt{|G_{\mathrm{good}}\setminus G^{(k-1)}|}/\mu_X(G_{\mathrm{good}})$. So (C1) and (C2) are satisfied. Lemma 4 implies that (C3) holds and, together with the theorem's condition on $\epsilon$, also implies that with probability at least $1 - \eta$ we have $\|\bar\beta_X(G_{\mathrm{good}}, y) - \bar\beta_X(G_{\mathrm{good}}, \mathbb{E}y)\|_\infty \le \sigma\sqrt{(2\ln(2|G_{\mathrm{good}}|/\eta))/\mu_X(G_{\mathrm{good}})} < \epsilon/\sqrt{\mu_X(G_{\mathrm{good}})}$. This allows us to show that (C4) holds, using similar arguments as in [13], which we omit due to space constraints. This leads to Theorem 2.
4 Experiments

4.1 Simulation Results
We empirically evaluate the performance of the proposed Group-OMP method against the comparison methods OMP, Group Lasso, Lasso and OLS (Ordinary Least Squares). Comparison with OMP tests the effect of "grouping" OMP, while Group Lasso is included as a representative existing method of group variable selection. We compare the performance of these methods in terms of the accuracy of variable selection, variable group selection and prediction. As measure of variable (group) selection accuracy we use the F1 measure, defined as $F_1 = \frac{2PR}{P+R}$, where P denotes the precision and R denotes the recall. For computing variable group F1 for a variable selection method, we consider a group to be selected if any of the variables in the group is selected.² As measure of prediction accuracy, we use the model error, defined as $\text{Model error} = (\hat\beta - \bar\beta)^{T}\,\mathbb{E}(X^{T}X)\,(\hat\beta - \bar\beta)$, where $\bar\beta$ are the true model coefficients and $\hat\beta$ the estimated coefficients. Recall that Lasso solves $\arg\min_\beta \|Y - X\beta\|_2^2 + \lambda\|\beta\|_1$, so the tuning parameter for Lasso and Group Lasso is the penalty parameter $\lambda$. For Group-OMP and OMP, rather than parameterizing the models according to the precision $\epsilon$, we do so using the iteration number (i.e., a stopping point). We consider two estimates: the "oracle estimate" and the "holdout validated estimate". For the oracle estimate, the tuning parameter is chosen so as to minimize the model error. Note that such an estimate can only be computed in simulations and not in practical situations, but it is useful for evaluating the relative performance of the comparison methods, independently of the appropriateness of the complexity parameter. The holdout-validated estimate is a practical version of the oracle estimate, obtained by selecting the tuning parameter by minimizing the average squared error on a validation set. We now describe the experimental setup.
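As a concrete reading of these two measures, here is a small sketch (hypothetical helper names) computing the F1 score over selected variable or group indices, and the model error with respect to the population second-moment matrix $\mathbb{E}(X^{T}X)$:

```python
import numpy as np

def f1_score(selected, true_set):
    # Variable (or group) selection F1 = 2PR/(P+R), with `selected` and
    # `true_set` given as collections of variable (or group) indices.
    selected, true_set = set(selected), set(true_set)
    if not selected or not true_set:
        return 0.0
    p = len(selected & true_set) / len(selected)
    r = len(selected & true_set) / len(true_set)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def model_error(beta_hat, beta_true, cov):
    # (beta_hat - beta)^T E(X^T X) (beta_hat - beta); `cov` is the
    # population second-moment matrix E(X^T X).
    d = np.asarray(beta_hat) - np.asarray(beta_true)
    return float(d @ cov @ d)
```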
Experiment 1: We use an additive model with categorical variables taken from [12] (model I). Consider variables $Z_1, \ldots, Z_{15}$, where $Z_i \sim N(0, 1)$ $(i = 1, \ldots, 15)$ and $\mathrm{cov}(Z_i, Z_j) = 0.5^{|i-j|}$. Let $W_1, \ldots, W_{15}$ be such that $W_i = 0$ if $Z_i < \Phi^{-1}(1/3)$, $W_i = 1$ if $Z_i > \Phi^{-1}(2/3)$, and $W_i = 2$ if $\Phi^{-1}(1/3) \le Z_i \le \Phi^{-1}(2/3)$, where $\Phi^{-1}$ is the quantile function for the normal distribution. The responses in the data are generated using the true model

$Y = 1.8\,I(W_1 = 1) - 1.2\,I(W_1 = 0) + I(W_3 = 1) + 0.5\,I(W_3 = 0) + I(W_5 = 1) + I(W_5 = 0) + \epsilon,$

where I denotes the indicator function and $\epsilon \sim N(0, \sigma = 1.476)$. Then let $(X_{2(i-1)+1}, X_{2i}) = (I(W_i = 1), I(W_i = 0))$, which are the variables that the estimation methods use as the explanatory variables, with the following variable groups: $G_i = \{2i - 1, 2i\}$ $(i = 1, \ldots, 15)$. We ran 100 runs, each with 50 observations for training and 25 for validation.
Experiment 2: We use an additive model with continuous variables taken from [12] (model III), where the groups correspond to the expansion of each variable into a third-order polynomial. Consider variables $Z_1, \ldots, Z_{17}$, with $Z_i$ i.i.d. $\sim N(0, 1)$ $(i = 1, \ldots, 17)$. Let $W_1, \ldots, W_{16}$ be defined as $W_i = (Z_i + Z_{17})/\sqrt{2}$. The true model is

$Y = W_3^3 + W_3^2 + W_3 + \tfrac{1}{3}W_6^3 - W_6^2 + \tfrac{2}{3}W_6 + \epsilon,$

where $\epsilon \sim N(0, \sigma = 2)$. Then let the explanatory variables be $(X_{3(i-1)+1}, X_{3(i-1)+2}, X_{3i}) = (W_i^3, W_i^2, W_i)$ with the variable groups $G_i = \{3(i-1)+1, 3(i-1)+2, 3i\}$ $(i = 1, \ldots, 16)$. We ran 100 runs, each with 100 observations for training and 50 for validation.
Experiment 3: We use an additive model with continuous variables similar to that of [16]. Consider three independent hidden variables $Z_1, \ldots, Z_3$ such that $Z_i \sim N(0, \sigma = 1)$. Consider 40 predictors defined as $X_i = Z_{\lfloor(i-1)/5\rfloor+1} + \epsilon_i$ for $i = 1, \ldots, 15$ and $X_i \sim N(0, 1)$ for $i = 16, \ldots, 40$, where the $\epsilon_i$ are i.i.d. $\sim N(0, \sigma = 0.1^{1/2})$. The true model is

$Y = 3\sum_{i=1}^{5} X_i + 4\sum_{i=6}^{10} X_i + 2\sum_{i=11}^{15} X_i + \epsilon, \quad \text{where } \epsilon \sim N(0, \sigma = 15),$

and the groups are $G_k = \{5(k-1)+1, \ldots, 5k\}$ for $k = 1, \ldots, 3$, and $G_k = \{k + 12\}$ for $k > 3$. We ran 100 runs, each with 500 observations for training and 50 for validation.
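For illustration, a generator for Experiment 3 might look as follows. This is a sketch under the reading given above (three blocks of five noisy copies of the hidden variables with block coefficients 3, 4 and 2, then 25 pure-noise predictors); the function name is hypothetical.

```python
import numpy as np

def make_experiment3(n=500, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n, 3))
    X = np.empty((n, 40))
    for i in range(15):                       # X_i = Z_{floor(i/5)+1} + eps_i
        X[:, i] = Z[:, i // 5] + 0.1 ** 0.5 * rng.standard_normal(n)
    X[:, 15:] = rng.standard_normal((n, 25))  # X_16, ..., X_40 ~ N(0, 1)
    Y = (3 * X[:, 0:5].sum(1) + 4 * X[:, 5:10].sum(1)
         + 2 * X[:, 10:15].sum(1) + 15 * rng.standard_normal(n))
    groups = [np.arange(5 * k, 5 * k + 5) for k in range(3)] \
             + [np.array([j]) for j in range(15, 40)]
    return X, Y, groups
```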
Experiment 4: We use an additive model with continuous variables taken from [15]. Consider five hidden variables $Z_1, \ldots, Z_5$ such that $Z_i$ i.i.d. $\sim N(0, \sigma = 1)$. Consider 10 measurements of each of these hidden variables such that $X_i = 0.05\,Z_{\lfloor(i-1)/10\rfloor+1} + (1 - 0.05^2)^{1/2}\epsilon_i$, $i = 1, \ldots, 50$, where $\epsilon_i \sim N(0, 1)$ and $\mathrm{cov}(\epsilon_i, \epsilon_j) = 0.5^{|i-j|}$. The true model is $Y = X\bar\beta + \epsilon$, where $\epsilon \sim N(0, \sigma = 19.22)$ and

$\bar\beta_i = \begin{cases} 7 & \text{for } i = 1, \ldots, 10 \\ 2 & \text{for } i = 11, \ldots, 20 \\ 1 & \text{for } i = 21, \ldots, 30 \\ 0 & \text{for } i = 31, \ldots, 50. \end{cases}$

The groups are $G_k = \{10(k-1)+1, \ldots, 10k\}$ for $k = 1, \ldots, 5$. We ran 100 runs, each with 300 observations for training and 50 for validation.
The results of the four experiments are presented in Table 1. We note that F1(Var) and F1(Group) are identical for the grouped methods in Experiments 1, 2 and 4, since in these the groups have equal size. Overall, Group-OMP performs consistently better than all the comparison methods, with respect to all measures considered. In particular, Group-OMP does better than OMP not only for variable group selection, but also for variable selection and predictive accuracy. Against Group Lasso, Group-OMP does better in all four experiments with respect to variable (group) selection when using the oracle estimate, while it does worse in one case when using holdout validation. Group-OMP also does better than Group Lasso with respect to the model error in three out of the four experiments.

² Other ways of translating variable selection to variable group selection are possible, but the F1 measure is relatively robust with respect to this choice.
F1 (Var)               | Exp 1          | Exp 2          | Exp 3           | Exp 4
OLS                    | 0.333 ± 0      | 0.222 ± 0      | 0.545 ± 0       | 0.750 ± 0
Lasso (Oracle)         | 0.483 ± 0.010  | 0.541 ± 0.010  | 0.771 ± 0.007   | 0.817 ± 0.004
Lasso (Holdout)        | 0.389 ± 0.012  | 0.528 ± 0.015  | 0.758 ± 0.015   | 0.810 ± 0.005
OMP (Oracle)           | 0.531 ± 0.019  | 0.787 ± 0.009  | 0.532 ± 0.004   | 0.781 ± 0.005
OMP (Holdout)          | 0.422 ± 0.014  | 0.728 ± 0.013  | 0.477 ± 0.006   | 0.741 ± 0.006
Group Lasso (Oracle)   | 0.545 ± 0.010  | 0.449 ± 0.011  | 0.693 ± 0.005   | 0.755 ± 0.002
Group Lasso (Holdout)  | 0.624 ± 0.017  | 0.459 ± 0.016  | 0.706 ± 0.013   | 0.794 ± 0.008
Group-OMP (Oracle)     | 0.730 ± 0.017  | 0.998 ± 0.002  | 0.999 ± 0.001   | 0.998 ± 0.002
Group-OMP (Holdout)    | 0.615 ± 0.020  | 0.921 ± 0.012  | 0.918 ± 0.011   | 0.890 ± 0.011

F1 (Group)             | Exp 1          | Exp 2          | Exp 3           | Exp 4
OLS                    | 0.333 ± 0      | 0.222 ± 0      | 0.194 ± 0       | 0.750 ± 0
Lasso (Oracle)         | 0.458 ± 0.012  | 0.346 ± 0.008  | 0.494 ± 0.011   | 0.751 ± 0.001
Lasso (Holdout)        | 0.511 ± 0.010  | 0.340 ± 0.014  | 0.547 ± 0.029   | 0.776 ± 0.006
OMP (Oracle)           | 0.687 ± 0.018  | 0.808 ± 0.020  | 0.224 ± 0.004   | 0.842 ± 0.010
OMP (Holdout)          | 0.621 ± 0.020  | 0.721 ± 0.025  | 0.421 ± 0.026   | 0.827 ± 0.010
Group Lasso (Oracle)   | 0.545 ± 0.010  | 0.449 ± 0.011  | 0.317 ± 0.006   | 0.755 ± 0.002
Group Lasso (Holdout)  | 0.624 ± 0.017  | 0.459 ± 0.016  | 0.364 ± 0.018   | 0.794 ± 0.008
Group-OMP (Oracle)     | 0.730 ± 0.017  | 0.998 ± 0.002  | 0.998 ± 0.001   | 0.998 ± 0.002
Group-OMP (Holdout)    | 0.615 ± 0.020  | 0.921 ± 0.012  | 0.782 ± 0.025   | 0.890 ± 0.011

ME                     | Exp 1          | Exp 2          | Exp 3           | Exp 4
OLS                    | 3.184 ± 0.129  | 7.063 ± 0.251  | 19.592 ± 0.451  | 46.845 ± 0.985
Lasso (Oracle)         | 1.203 ± 0.078  | 1.099 ± 0.067  | 9.228 ± 0.285   | 30.343 ± 0.796
Lasso (Holdout)        | 2.536 ± 0.097  | 1.309 ± 0.080  | 12.987 ± 0.670  | 38.089 ± 1.353
OMP (Oracle)           | 0.711 ± 0.020  | 1.052 ± 0.061  | 19.006 ± 0.443  | 38.497 ± 0.926
OMP (Holdout)          | 0.945 ± 0.031  | 1.394 ± 0.102  | 28.246 ± 1.942  | 48.564 ± 1.957
Group Lasso (Oracle)   | 0.457 ± 0.021  | 0.867 ± 0.052  | 11.538 ± 0.370  | 31.053 ± 0.831
Group Lasso (Holdout)  | 1.279 ± 0.017  | 1.047 ± 0.075  | 14.979 ± 0.538  | 37.359 ± 1.260
Group-OMP (Oracle)     | 0.601 ± 0.0273 | 0.379 ± 0.035  | 6.727 ± 0.252   | 27.765 ± 0.703
Group-OMP (Holdout)    | 0.965 ± 0.050  | 0.605 ± 0.089  | 12.553 ± 1.469  | 35.989 ± 1.127

Table 1: Average F1 score at the variable level and group level, and model error for the models output by Ordinary Least Squares, Lasso, OMP, Group Lasso, and Group-OMP.
Boston Housing               | OLS           | Lasso         | OMP           | Group Lasso   | Group-OMP
Prediction Error             | 29.30 ± 3.25  | 17.82 ± 0.48  | 19.10 ± 0.78  | 18.45 ± 0.59  | 17.60 ± 0.51
Number of Original Variables | 13 ± 0        | 12.82 ± 0.05  | 11.51 ± 0.20  | 12.50 ± 0.13  | 9.09 ± 0.31

Table 2: Average test set prediction error and average number of original variables for the models output by OLS, Lasso, OMP, Group Lasso, and Group-OMP on the "Boston Housing" dataset.
4.2 Experiment on a real dataset

We use the "Boston Housing" dataset (UCI Machine Learning Repository). The continuous variables appear to have non-linear effects on the target value, so for each such variable, say $X_i$, we consider its third-order polynomial expansion, i.e., $X_i$, $X_i^2$ and $X_i^3$, and treat them as a variable group. We ran 100 runs, where for each run we select at random half of the instances as training examples, one quarter as a validation set, and the remaining quarter as test examples. The penalty parameter was chosen with holdout validation for all methods. The average test set prediction error and the average number of selected original variables (i.e., groups) are reported in Table 2. These results confirm that Group-OMP has the highest prediction accuracy among the comparison methods, and also leads to the sparsest model.
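The group construction used here is just a per-variable cubic expansion; a minimal sketch (hypothetical function name) is:

```python
import numpy as np

def cubic_expansion(X):
    # Expand each original column x into the group (x, x^2, x^3), as done
    # for the continuous Boston Housing variables; returns the expanded
    # matrix and the corresponding column-index groups.
    n, p = X.shape
    Xe = np.hstack([np.column_stack([X[:, j], X[:, j] ** 2, X[:, j] ** 3])
                    for j in range(p)])
    groups = [np.arange(3 * j, 3 * j + 3) for j in range(p)]
    return Xe, groups
```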
5 Concluding Remarks

In addition to its merits in terms of consistency and accuracy, Group-OMP is particularly attractive due to its computational efficiency (the entire path is computed in J rounds, where J is the number of groups). Interesting directions for future research include comparing the conditions for the consistency of Group-OMP to those for Group Lasso, together with the bounds on their respective accuracy in estimating the regression coefficients; evaluating modified versions of Group-OMP where the group selection step in Figure 1 includes a penalty to account for the group size; and considering a forward/backward extension that allows correcting for mistakes (similarly to [14]).
References

[1] BACH, F. R., Consistency of the group Lasso and multiple kernel learning, J. Mach. Learn. Res., 9, 1179-1225, 2008.
[2] BAI, Z. D., YIN, Y. Q., Limit of the smallest eigenvalue of a large dimensional sample covariance matrix, Ann. Probab., 21, 1275-1294, 1993.
[3] CHEN, J., HUO, X., Sparse representations for multiple measurement vectors (MMV) in an overcomplete dictionary, in Proc. of the 2005 IEEE Int. Conf. on Acoustics, Speech, and Signal Proc., 2005.
[4] HUANG, J., ZHANG, T., METAXAS, D., Learning with structured sparsity, in ICML'09, 2009.
[5] MALLAT, S., ZHANG, Z., Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, 41, 3397-3415, 1993.
[6] MOORE, E. H., On the reciprocal of the general algebraic matrix, Bulletin of the American Mathematical Society, 26, 394-395, 1920.
[7] PENROSE, R., A generalized inverse for matrices, Proceedings of the Cambridge Philosophical Society, 51, 406-413, 1955.
[8] TIBSHIRANI, R., Regression shrinkage and selection via the lasso, J. Royal Statist. Soc. B, 58(1), 267-288, 1996.
[9] TROPP, J. A., Greed is good: Algorithmic results for sparse approximation, IEEE Trans. Info. Theory, 50(10), 2231-2242, 2004.
[10] TROPP, J. A., GILBERT, A. C., STRAUSS, M. J., Algorithms for simultaneous sparse approximation, Part I: greedy pursuit, Signal Proc., 86(3), 572-588, 2006.
[11] PEOTTA, L., VANDERGHEYNST, P., Matching pursuit with block incoherent dictionaries, Signal Proc., 55(9), 2007.
[12] YUAN, M., LIN, Y., Model selection and estimation in regression with grouped variables, J. R. Statist. Soc. B, 68, 49-67, 2006.
[13] ZHANG, T., On the consistency of feature selection using greedy least squares regression, J. Machine Learning Research, 2008.
[14] ZHANG, T., Adaptive forward-backward greedy algorithm for sparse learning with linear models, in NIPS'08, 2008.
[15] ZHAO, P., ROCHA, G., AND YU, B., Grouped and hierarchical model selection through composite absolute penalties, Manuscript, 2006.
[16] ZOU, H., HASTIE, T., Regularization and variable selection via the elastic net, J. R. Statist. Soc. B, 67(2), 301-320, 2005.
Fen Xia
Institute of Automation
Chinese Academy of Sciences
[email protected]
Tie-Yan Liu
Microsoft Research Asia
[email protected]
Hang Li
Microsoft Research Asia
[email protected]
Abstract
This paper is concerned with the consistency analysis on listwise ranking methods. Among various ranking methods, the listwise methods have competitive performances on benchmark datasets and are regarded as one of the state-of-the-art
approaches. Most listwise ranking methods manage to optimize ranking on the
whole list (permutation) of objects, however, in practical applications such as information retrieval, correct ranking at the top k positions is much more important.
This paper aims to analyze whether existing listwise ranking methods are statistically consistent in the top-k setting. For this purpose, we define a top-k ranking
framework, where the true loss (and thus the risks) are defined on the basis of
top-k subgroup of permutations. This framework can include the permutationlevel ranking framework proposed in previous work as a special case. Based on
the new framework, we derive sufficient conditions for a listwise ranking method
to be consistent with the top-k true loss, and show an effective way of modifying the surrogate loss functions in existing methods to satisfy these conditions.
Experimental results show that after the modifications, the methods can work significantly better than their original versions.
1
Introduction
Ranking is the central problem in many applications including information retrieval (IR). In recent
years, machine learning technologies have been successfully applied to ranking, and many learning
to rank methods have been proposed, including the pointwise [12] [9] [6], pairwise [8] [7] [2], and
listwise methods [13] [3] [16]. Empirical results on benchmark datasets have demonstrated that the
listwise ranking methods have very competitive ranking performances [10].
To explain the high ranking performances of the listwise ranking methods, a theoretical framework
was proposed in [16]. In the framework, existing listwise ranking methods are interpreted as making
use of different surrogate loss functions of the permutation-level 0-1 loss. Theoretical analysis shows
that these surrogate loss functions are all statistically consistent in the sense that minimization of the
conditional expectation of them will lead to obtaining the Bayes ranker, i.e., the optimal ranked list
of the objects.
Here we point out that there is a gap between the analysis in [16] and many real ranking problems,
where the correct ranking of the entire permutation is not needed. For example, in IR, users usually
care much more about the top ranking results and thus only correct ranking at the top positions is
important. In this new situation, it is no longer clear whether existing listwise ranking methods are
still statistically consistent. The motivation of this work is to perform formal study on the issue.
For this purpose, we propose a new ranking framework, in which the ?true loss? is defined on the
top-k subgroup of permutations instead of on the entire permutation. The new true loss only measures errors occurring at the top k positions of a ranked list, therefore we refer to it as the top-k true
loss (Note that when k equals the length of the ranked list, the top-k true loss will become exactly
the permutation-level 0-1 loss). We prove a new theorem which gives sufficient conditions for a
surrogate loss function to be consistent with the top-k true loss. We also investigate the change of
the conditions with respect to different k's. Our analysis shows that, as k decreases, to guarantee the
consistency of a surrogate loss function, the requirement on the probability space becomes weaker
while the requirement on the surrogate loss function itself becomes stronger. As a result, a surrogate loss function that is consistent with the permutation-level 0-1 loss might not be consistent with
the top-k true loss any more. Therefore, the surrogate loss functions in existing listwise ranking
methods, which have been proved to be consistent with the permutation-level 0-1 loss, are not theoretically guaranteed to have good performances in the top-k setting. Modifications to these surrogate
loss functions are needed to further make them consistent with the top-k true loss. We show how
to make such modifications, and empirically verify that such modifications can lead to significant
performance improvement. This validates the correctness of our theoretical analysis.
2 Permutation-level ranking framework

We review the permutation-level ranking framework proposed in [16].

Let X be the input space whose elements are groups of objects to be ranked, Y be the output space whose elements are permutations of objects, and $P_{XY}$ be an unknown but fixed joint probability distribution of X and Y. Let $h \in H : X \to Y$ be a ranking function. Let $x \in X$ and $y \in Y$, and let $y(i)$ be the index of the object that is ranked at position i in y. The task of learning to rank is to learn a function that can minimize the expected risk R(h), defined as

$R(h) = \int_{X\times Y} l(h(x), y)\, dP(x, y),$   (1)

where $l(h(x), y)$ is the true loss such that

$l(h(x), y) = \begin{cases} 1, & \text{if } h(x) \neq y, \\ 0, & \text{if } h(x) = y. \end{cases}$   (2)

The above true loss indicates that if the permutation of the predicted result is exactly the same as the permutation in the ground truth, then the loss is zero; otherwise the loss is one. For ease of reference, we call it the permutation-level 0-1 loss. The optimal ranking function, which minimizes the expected true risk $R(h^*) = \inf R(h)$, is referred to as the permutation-level Bayes ranker:

$h^*(x) = \arg\max_{y\in Y} P(y|x).$   (3)
(3)
In practice, for efficiency consideration, the ranking function is usually defined as h(x) =
sort(g(x1 ), . . . , g(xn )), where g(?) denotes the scoring function, and sort(?) denotes the sorting
function. Since the risk is non-continuous and non-differentiable with respect to the scoring function
g, a continuous and differentiable surrogate loss function ?(g(x), y) is usually used as an approximation of the true loss. In this way, the expected risk becomes
Z
R? (g) =
?(g(x), y)dP (x, y),
(4)
X?Y
where g(x) = (g(x1 ), . . . , g(xn )) is a vector-valued function induced by g.
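In code, the sort-based ranking function and the permutation-level 0-1 loss amount to the following minimal sketch (function names are illustrative only):

```python
import numpy as np

def rank_by_scores(scores):
    # h(x) = sort(g(x_1), ..., g(x_n)): object indices in decreasing
    # score order (ties broken arbitrarily).
    return np.argsort(-np.asarray(scores))

def permutation_01_loss(pred, truth):
    # Permutation-level 0-1 loss of Eq. (2): zero only on an exact match.
    return 0 if np.array_equal(pred, truth) else 1
```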
It has been shown in [16] that many existing listwise ranking methods fall into the above framework,
with different surrogate loss functions used. Furthermore, their surrogate loss functions are statistically consistent under certain conditions with respect to the permutation-level 0-1 loss. However,
as shown in the next section, the permutation-level 0-1 loss is not suitable to describe the ranking
problem in many real applications.
3 Top-k ranking framework

We next describe the real ranking problem, and then propose the top-k ranking framework.
3.1 Top-k ranking problem
In real ranking applications like IR, people pay more attention to the top-ranked objects. Therefore
the correct ranking on the top positions is critically important. For example, modern web search
engines only return the top 1,000 results and 10 results per page. According to a user study¹, 62%
of search engine users only click on the results within the first page, and 90% of users click on the
results within the first three pages. It means that two ranked lists of documents will likely provide
the same experience to the users (and thus suffer the same loss), if they have the same ranking results
for the top positions. This, however, cannot be reflected in the permutation-level 0-1 loss in Eq.(2).
This characteristic of ranking problems has also been explored in earlier studies in different settings
[4, 5, 14]. We refer to it as the top-k ranking problem.
3.2 Top-k true loss

To better describe the top-k ranking problem, we propose defining the true loss based on the top k positions in a ranked list, referred to as the top-k true loss:

$l_k(h(x), y) = \begin{cases} 0, & \text{if } \hat{y}(i) = y(i)\ \forall i\in\{1,\ldots,k\},\ \text{where } \hat{y} = h(x), \\ 1, & \text{otherwise}. \end{cases}$   (5)
The actual value of k is determined by application. When k equals the length of the entire ranked
list, the top-k true loss will become exactly the permutation-level 0-1 loss. In this regard, the top-k
true loss is more general than the permutation-level 0-1 loss.
With Eq. (5), the expected risk becomes

$R_k(h) = \int_{X\times Y} l_k(h(x), y)\, dP(x, y).$   (6)

It can be proved that the optimal ranking function with respect to the top-k true loss (i.e., the top-k Bayes ranker) is any permutation in the top-k subgroup having the highest probability²:

$h_k^*(x) \in \arg\max_{G_k(j_1, j_2, \ldots, j_k)\in\mathcal{G}_k} P\big(G_k(j_1, j_2, \ldots, j_k)\,\big|\,x\big),$   (7)

where $G_k(j_1, j_2, \ldots, j_k) = \{y \in Y \mid y(t) = j_t,\ \forall t = 1, 2, \ldots, k\}$ denotes a top-k subgroup in which all the permutations have the same top-k true loss, and $\mathcal{G}_k$ denotes the collection of all top-k subgroups.

With the above setting, we will analyze the consistency of the surrogate loss functions in existing ranking methods with the top-k true loss in the next section.
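For a small object set, the top-k true loss and the top-k Bayes ranker of Eq. (7) can be computed by brute force; the following sketch (hypothetical names) sums permutation probabilities within each top-k subgroup and returns the most probable top-k prefix:

```python
def topk_loss(pred, truth, k):
    # Top-k true loss of Eq. (5): zero iff the top-k prefixes agree exactly.
    return 0 if tuple(pred[:k]) == tuple(truth[:k]) else 1

def topk_bayes_ranker(perm_probs, k):
    # Brute-force top-k Bayes ranker of Eq. (7): sum the permutation
    # probabilities within each top-k subgroup and return the prefix
    # (j_1, ..., j_k) with the largest total mass.
    mass = {}
    for y, p in perm_probs.items():
        mass[y[:k]] = mass.get(y[:k], 0.0) + p
    return max(mass, key=mass.get)
```

For instance, `topk_bayes_ranker({(1, 2, 3): 0.5, (2, 1, 3): 0.25, (3, 2, 1): 0.25}, 1)` returns `(1,)`.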
4 Theoretical analysis

In this section, we first give the sufficient conditions of consistency for the top-k ranking problem. Next, we show how these conditions change with respect to k. Last, we discuss whether the surrogate loss functions in existing methods are consistent, and how to make them consistent if not.

4.1 Statistical consistency

We investigate what kinds of surrogate loss functions $\phi(\mathbf{g}(x), y)$ are statistically consistent with the top-k true loss. For this purpose, we study whether the ranking function that minimizes the conditional expectation of the surrogate loss function, defined as follows, coincides with the top-k Bayes ranker as defined in Eq. (7):

$Q(P(y|x), \mathbf{g}(x)) = \sum_{y\in Y} P(y|x)\,\phi(\mathbf{g}(x), y).$   (8)

¹ iProspect Search Engine User Behavior Study, April 2006, http://www.iprospect.com/
² Note that the probability of a top-k subgroup is defined as the sum of the probabilities of the permutations in the subgroup (cf. Definitions 6 and 7 in [3]).
According to [1], the above condition is the weakest condition to guarantee that optimizing a surrogate loss function will lead to obtaining a model achieving the Bayes risk (in our case, the top-k
Bayes ranker), when the training sample size approaches infinity.
We denote $Q(P(y|x), \mathbf{g}(x))$ as $Q(p, g)$, $\mathbf{g}(x)$ as $g$, and $P(y|x)$ as $p_y$. Hence, $Q(p, g)$ is the loss of g at x with respect to the conditional probability distribution $p_y$. The key idea is to decompose the sorting of g into pairwise relationships between scores of objects. To this end, we denote $Y_{i,j}$ as the permutation set in which each permutation ranks object i before object j, i.e., $Y_{i,j} \triangleq \{y \in Y : y^{-1}(i) < y^{-1}(j)\}$ (here $y^{-1}(j)$ denotes the position of object j in permutation y), and introduce the following definitions.

Definition 1. $\Omega_{\mathcal{G}_k}$ is the top-k subgroup probability space, such that $\Omega_{\mathcal{G}_k} \triangleq \{p \in \mathbb{R}^{|\mathcal{G}_k|} : \sum_{G_k(j_1,\ldots,j_k)\in\mathcal{G}_k} p_{G_k(j_1,\ldots,j_k)} = 1,\ p_{G_k(j_1,\ldots,j_k)} \ge 0\}$.

Definition 2. A top-k subgroup probability space $\Omega_{\mathcal{G}_k}$ is order preserving with respect to objects i and j if, $\forall y \in Y_{i,j}$ with $G_k(y(1), y(2), \ldots, y(k)) \neq G_k(\sigma_{i,j}^{-1}y(1), \sigma_{i,j}^{-1}y(2), \ldots, \sigma_{i,j}^{-1}y(k))$, we have $p_{G_k(y(1),y(2),\ldots,y(k))} > p_{G_k(\sigma_{i,j}^{-1}y(1),\sigma_{i,j}^{-1}y(2),\ldots,\sigma_{i,j}^{-1}y(k))}$. Here $\sigma_{i,j}^{-1}y$ denotes the permutation in which the positions of objects i and j are exchanged while those of the other objects remain the same as in y.
Definition 3. A surrogate loss function $\phi$ is top-k subgroup order sensitive on a set $\Omega \subseteq \mathbb{R}^n$ if $\phi$ is a non-negative differentiable function and the following three conditions hold for any objects i and j: (1) $\phi(g, y) = \phi(\sigma_{i,j}^{-1}g, \sigma_{i,j}^{-1}y)$. (2) Assume $g_i < g_j$. $\forall y \in Y_{i,j}$: if $G_k(y(1), y(2), \ldots, y(k)) \neq G_k(\sigma_{i,j}^{-1}y(1), \sigma_{i,j}^{-1}y(2), \ldots, \sigma_{i,j}^{-1}y(k))$, then $\phi(g, y) \le \phi(g, \sigma_{i,j}^{-1}y)$, and for at least one y the strict inequality holds; otherwise, $\phi(g, y) = \phi(g, \sigma_{i,j}^{-1}y)$. (3) Assume $g_i = g_j$. $\exists y \in Y_{i,j}$ with $G_k(y(1), y(2), \ldots, y(k)) \neq G_k(\sigma_{i,j}^{-1}y(1), \sigma_{i,j}^{-1}y(2), \ldots, \sigma_{i,j}^{-1}y(k))$ satisfying $\frac{\partial\phi(g, \sigma_{i,j}^{-1}y)}{\partial g_i} > \frac{\partial\phi(g, y)}{\partial g_i}$.
The order preserving property of a top-k subgroup probability space (see Definition 2) indicates that if the top-k subgroup probability of a permutation $y \in Y_{i,j}$ is larger than that of the permutation $\sigma_{i,j}^{-1}y$, then the relation holds for any other permutation $y'$ in $Y_{i,j}$ and the corresponding $\sigma_{i,j}^{-1}y'$, provided that the top-k subgroup of the former is different from that of the latter. The order sensitive property of a surrogate loss function (see Definition 3) indicates that (i) $\phi(g, y)$ exhibits a symmetry, in the sense that simultaneously exchanging the positions of objects i and j in the ground truth and their scores in the predicted score list does not change the surrogate loss; (ii) when a permutation is transformed into another permutation by exchanging the positions of two of its objects, if the two permutations do not belong to the same top-k subgroup, the loss on the permutation that ranks the two objects in decreasing order of their scores is not greater than the loss on its counterpart; and (iii) there exists a permutation for which the speed of change in loss with respect to the score of an object becomes faster if its position is exchanged with that of another object with the same score but ranked lower. A top-k subgroup order sensitive surrogate loss function has several nice properties, as shown below.
Proposition 4. Let $\phi(g, y)$ be a top-k subgroup order sensitive loss function. $\forall y$, $\forall \sigma \in G_k(y(1), y(2), \ldots, y(k))$, we have $\phi(g, \sigma) = \phi(g, y)$.

Proposition 5. Let $\phi(g, y)$ be a top-k subgroup order sensitive surrogate loss function. For any objects i and j with $g_i = g_j$, $\forall y \in Y_{i,j}$: if $G_k(y(1), y(2), \ldots, y(k)) \neq G_k(\sigma_{i,j}^{-1}y(1), \sigma_{i,j}^{-1}y(2), \ldots, \sigma_{i,j}^{-1}y(k))$, then $\frac{\partial\phi(g, \sigma_{i,j}^{-1}y)}{\partial g_i} \ge \frac{\partial\phi(g, y)}{\partial g_i}$; otherwise, $\frac{\partial\phi(g, \sigma_{i,j}^{-1}y)}{\partial g_i} = \frac{\partial\phi(g, y)}{\partial g_i}$.
Proposition 4 shows that all permutations in the same top-k subgroup share the same loss $\phi(g, y)$, and thus share the same partial derivative with respect to the score of a given object. Proposition 5 indicates that the partial derivative of $\phi(g, y)$ has a property similar to that of $\phi(g, y)$ itself (see the second condition in Definition 3). Due to space restrictions, we omit the proofs (see [15] for more details).
Based on the above definitions and propositions, we give the main theorem (Theorem 6), which states the sufficient conditions for a surrogate loss function to be consistent with the top-k true loss.

Theorem 6. Let $\phi$ be a top-k subgroup order sensitive loss function on $\Omega \subseteq \mathbb{R}^n$. For any n objects, if the top-k subgroup probability space is order preserving with respect to the $n-1$ object pairs $\{(j_i, j_{i+1})\}_{i=1}^{k}$ and $\{(j_{k+s_i}, j_{k+i}) : 0 \le s_i < i\}_{i=2}^{n-k}$, then the loss $\phi(g, y)$ is consistent with the top-k true loss as defined in Eq. (5).
The proof of the main theorem is mostly based on Theorem 7, which specifies the score relation between two objects for the minimizer of $Q(p, g)$. Due to space restrictions, we only give Theorem 7 and its detailed proof. For the detailed proof of Theorem 6, please refer to [15].

Theorem 7. Let $\phi(g, y)$ be a top-k subgroup order sensitive loss function. For any i and j, if the top-k subgroup probability space is order preserving with respect to them, and g is a vector which minimizes $Q(p, g)$ in Eq. (8), then $g_i > g_j$.
Proof. Without loss of generality, we assume $i = 1$, $j = 2$, $g'_1 = g_2$, $g'_2 = g_1$, and $g'_k = g_k$ $(k > 2)$. First, we prove $g_1 \ge g_2$ by contradiction. Assume $g_1 < g_2$. We have

$Q(p, g') - Q(p, g) = \sum_{y\in Y} (p_{\sigma_{1,2}^{-1}y} - p_y)\,\phi(g, y) = \sum_{y\in Y_{1,2}} (p_{\sigma_{1,2}^{-1}y} - p_y)\big(\phi(g, y) - \phi(g, \sigma_{1,2}^{-1}y)\big).$

The first equality is based on the fact $g' = \sigma_{1,2}^{-1}g$, and the second equality is based on the fact $\sigma_{1,2}^{-1}\sigma_{1,2}^{-1}y = y$. After some algebra, by using Proposition 4, we have

$Q(p, g') - Q(p, g) = \sum_{G_k(y)\in\{\mathcal{G}_k:\,G_k(y)\neq G_k(\sigma_{1,2}^{-1}y)\},\; y\in Y_{1,2}} \big(p_{G_k(\sigma_{1,2}^{-1}y)} - p_{G_k(y)}\big)\big(\phi(g, y) - \phi(g, \sigma_{1,2}^{-1}y)\big),$

where $G_k(y)$ denotes the subgroup that y belongs to.

Since $g_1 < g_2$, we have $\phi(g, y) \le \phi(g, \sigma_{1,2}^{-1}y)$. Meanwhile, $p_{G_k(\sigma_{1,2}^{-1}y)} < p_{G_k(y)}$ due to the order preserving property of the top-k subgroup probability space. Thus each component in the sum is non-positive and at least one of them is negative, which means $Q(p, g') < Q(p, g)$. This is a contradiction to the optimality of g. Therefore, we must have $g_1 \ge g_2$.
Second, we prove $g_1 \neq g_2$, again by contradiction. Assume $g_1 = g_2$. By setting the derivatives of $Q(p, g)$ with respect to $g_1$ and $g_2$ to zero and comparing them³, we have

$\sum_{y\in Y_{1,2}} (p_y - p_{\sigma_{1,2}^{-1}y})\Big(\frac{\partial\phi(g, y)}{\partial g_1} - \frac{\partial\phi(g, \sigma_{1,2}^{-1}y)}{\partial g_1}\Big) = 0.$

After some algebra, we obtain

$\sum_{G_k(y)\in\{\mathcal{G}_k:\,G_k(y)\neq G_k(\sigma_{1,2}^{-1}y)\},\; y\in Y_{1,2}} \big(p_{G_k(y)} - p_{G_k(\sigma_{1,2}^{-1}y)}\big)\Big(\frac{\partial\phi(g, y)}{\partial g_1} - \frac{\partial\phi(g, \sigma_{1,2}^{-1}y)}{\partial g_1}\Big) = 0.$

According to Proposition 5, we have $\frac{\partial\phi(g, y)}{\partial g_1} \le \frac{\partial\phi(g, \sigma_{1,2}^{-1}y)}{\partial g_1}$. Meanwhile, $p_{G_k(\sigma_{1,2}^{-1}y)} < p_{G_k(y)}$ due to the order preserving property of the top-k subgroup probability space. Thus the above equation cannot hold, since at least one of the components in the sum is negative according to Definition 3.
4.2 Consistency with respect to k

We discuss the change of the consistency conditions with respect to various values of k.
First, we have the following proposition for the top-k subgroup probability space.

Proposition 8. If the top-k subgroup probability space is order preserving with respect to objects i and j, the top-(k-1) subgroup probability space is also order preserving with respect to i and j.

The proposition can be proved by decomposing a top-(k-1) subgroup into the sum of top-k subgroups. One can find the detailed proof in [15]. Here we give an example to illustrate the basic idea. Suppose there are three objects {1, 2, 3} to be ranked. If the top-2 subgroup probability space is order preserving with respect to objects 1 and 2, then we have $p_{G_2(1,2)} > p_{G_2(2,1)}$, $p_{G_2(1,3)} > p_{G_2(2,3)}$ and $p_{G_2(3,1)} > p_{G_2(3,2)}$. On the other hand, for top-1 we have $p_{G_1(1)} > p_{G_1(2)}$. Note that $p_{G_1(1)} = p_{G_2(1,2)} + p_{G_2(1,3)}$ and $p_{G_1(2)} = p_{G_2(2,1)} + p_{G_2(2,3)}$. Thus it is easy to verify that Proposition 8 holds for this case, while the opposite does not.
Second, we obtain the following proposition for the surrogate loss function $\phi$.

Proposition 9. If the surrogate loss function $\phi$ is top-k subgroup order sensitive on a set $\Omega \subseteq \mathbb{R}^n$, then it is also top-(k+1) subgroup order sensitive on the same set.

Again, one can refer to [15] for the detailed proof of the proposition; here we only provide an example. Let us consider the same setting as in the previous example. Assume that $g_1 < g_2$. If $\phi$ is top-1 subgroup order sensitive, then we have $\phi(g, (1,2,3)) \le \phi(g, (2,1,3))$, $\phi(g, (1,3,2)) \le \phi(g, (2,3,1))$, and $\phi(g, (3,1,2)) = \phi(g, (3,2,1))$. From Proposition 4, we know that the two inequalities are strict. On the other hand, if $\phi$ is top-2 subgroup order sensitive, the following inequalities hold with at least one of them being strict: $\phi(g, (1,2,3)) \le \phi(g, (2,1,3))$, $\phi(g, (1,3,2)) \le \phi(g, (2,3,1))$, and $\phi(g, (3,1,2)) \le \phi(g, (3,2,1))$. Therefore top-1 subgroup order sensitivity is a special case of top-2 subgroup order sensitivity.

³ By trivial modifications, one can handle the case that $g_1$ or $g_2$ is infinite (cf. [17]).
According to the above propositions, we can come to the following conclusions.

- For consistency with the top-k true loss, when k becomes smaller, the requirement on the probability space becomes weaker but the requirement on the surrogate loss function becomes stronger. Since we never know the real property of the (unknown) probability space, it is more likely that the requirement on the probability space for consistency with the top-k true loss can be satisfied than that for the top-l (l > k) true loss. Specifically, it is risky to assume the requirement for the permutation-level 0-1 loss to hold.

- If we fix the true loss to be top-k and the probability space to be top-k subgroup order preserving, the surrogate loss function should be at most top-l (l ≤ k) subgroup order sensitive in order to meet the consistency conditions. It is not guaranteed that a top-l (l > k) subgroup order sensitive surrogate loss function can be consistent with the top-k true loss. For example, a top-1 subgroup order sensitive surrogate loss function may be consistent with any top-k true loss, but a permutation-level order sensitive surrogate loss function may not be consistent with any top-k true loss if k is smaller than the length of the list.

For ease of understanding the above discussion, let us see an example, stated in the following proposition (the proof of this proposition can be found in [15]). It basically says that, given a probability space that is top-1 subgroup order preserving, a top-3 subgroup order sensitive surrogate loss function may not be consistent with the top-1 true loss.

Proposition 10. Suppose there are three objects to be ranked. $\phi$ is a top-3 subgroup order sensitive loss function and the strict inequality $\phi(g, (3,1,2)) < \phi(g, (3,2,1))$ holds when $g_1 > g_2$. The probabilities of the permutations are $p_{123} = p_1$, $p_{132} = 0$, $p_{213} = p_2$, $p_{231} = 0$, $p_{312} = 0$, $p_{321} = p_2$, respectively, where $p_1 > p_2$. Then $\phi$ is not consistent with the top-1 true loss.

The above discussions imply that although the surrogate loss functions in existing listwise ranking methods are consistent with the permutation-level 0-1 loss (under a rigid condition), they may not be consistent with the top-k true loss (under a mild condition). Therefore, it is necessary to modify these surrogate loss functions. We will make discussions on this in the next subsection.
4.3 Consistent surrogate loss functions

In [16], the surrogate loss functions in ListNet, RankCosine, and ListMLE have been proved to be permutation-level order sensitive. According to the discussion in the previous subsection, however, they may not be top-k subgroup order sensitive, and therefore not consistent with the top-k true loss. Even for consistency with the permutation-level 0-1 loss, in order to guarantee these surrogate loss functions to be consistent, the requirement on the probability space may be too strong in some real scenarios. To tackle this challenge, it is desirable to modify these surrogate loss functions to make them top-k subgroup order sensitive. This is in fact doable, and the modifications to the aforementioned surrogate loss functions are given as follows.

4.3.1 Likelihood loss

The likelihood loss is the loss function used in ListMLE [16], which is defined as below:

$\phi(\mathbf{g}(x), y) = -\log P(y|x; g), \quad \text{where } P(y|x; g) = \prod_{i=1}^{n} \frac{\exp(g(x_{y(i)}))}{\sum_{t=i}^{n} \exp(g(x_{y(t)}))}.$   (9)
We propose replacing the permutation probability with the top-k subgroup probability (which is also defined with the Luce model [11]) in the above definition:

$P(y|x; g) = \prod_{i=1}^{k} \frac{\exp(g(x_{y(i)}))}{\sum_{t=i}^{n} \exp(g(x_{y(t)}))}.$   (10)

It can be proved that the modified loss is top-k subgroup order sensitive (see [15]).
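A direct implementation of the resulting loss, $-\log P(y|x; g)$ with the top-k Luce probability of Eq. (10), can be sketched as follows (illustrative names; the log-sum-exp is computed with the usual max-shift for numerical stability):

```python
import numpy as np

def topk_listmle_loss(scores, truth, k):
    # -log P(y | x; g) under Eq. (10); with k = len(truth) this recovers
    # the original ListMLE likelihood loss of Eq. (9). `scores` holds
    # g(x_i) per object, `truth` is the ground-truth ranked index list.
    g = np.asarray(scores, dtype=float)[np.asarray(truth)]
    loss = 0.0
    for i in range(k):
        tail = g[i:]
        m = tail.max()                      # max-shift for stability
        loss += m + np.log(np.exp(tail - m).sum()) - g[i]
    return loss
```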
4.3.2 Cosine loss

The cosine loss is the loss function used in RankCosine [13], which is defined as follows:

$\phi(\mathbf{g}(x), y) = \frac{1}{2}\Big(1 - \frac{\psi_y(x)^{T}\mathbf{g}(x)}{\|\psi_y(x)\|\,\|\mathbf{g}(x)\|}\Big),$   (11)

where the score vector of the ground truth is produced by a mapping function $\psi_y(\cdot): \mathbb{R}^d \to \mathbb{R}$, which retains the order in a permutation, i.e., $\psi_y(x_{y(1)}) > \cdots > \psi_y(x_{y(n)})$.

We propose changing the mapping function as follows. Let the mapping function retain the order for the top k positions of the ground truth permutation and assign to all the remaining positions a small value $\varepsilon$ (smaller than the score of any object ranked at the top k positions), i.e., $\psi_y(x_{y(1)}) > \cdots > \psi_y(x_{y(k)}) > \psi_y(x_{y(k+1)}) = \cdots = \psi_y(x_{y(n)}) = \varepsilon$. It can be proved that after the modification, the cosine loss becomes top-k subgroup order sensitive (see [15]).
4.3.3 Cross entropy loss

The cross entropy loss is the loss function used in ListNet [3], defined as follows:

$\phi(\mathbf{g}(x), y) = D\big(P(\pi|x; \psi_y)\,\|\,P(\pi|x; g)\big),$   (12)

where $\psi$ is a mapping function whose definition is similar to that in RankCosine, and $P(\pi|x; \psi_y)$ and $P(\pi|x; g)$ are the permutation probabilities in the Luce model.

We propose using a mapping function to modify the cross entropy loss in a similar way as in the case of the cosine loss.⁴ It can be proved that such a modification can make the surrogate loss function top-k subgroup order sensitive (see [15]).
5 Experimental results

In order to validate the theoretical analysis in this work, we conducted an empirical study. Specifically, we used OHSUMED, TD2003, and TD2004 in the LETOR benchmark dataset [10] to perform some experiments. As evaluation measures, we adopted Normalized Discounted Cumulative Gain (N) at positions 1, 3, and 10, and Precision (P) at positions 1, 3, and 10.⁵ It is obvious that these measures are top-k related and are suitable to evaluate the ranking performance in top-k ranking problems.
obtained by applying the modifications mentioned in Section 4.3 as top-k ListMLE. We tried different values of k (i.e., k=1, 3, 10, and the exact length of the ranked list). Obviously the last case
corresponds to the original likelihood loss in ListMLE.
Since the training data in LETOR is given in the form of multi-level ratings, we adopted the methods
proposed in [16] to produce the ground truth ranked list. We then used stochastic gradient descent
as the algorithm for optimization of the likelihood loss. As for the ranking model, we chose linear
Neural Network, since the model has been widely used [3, 13, 16].
4
Note that in [3], a top-k cross entropy loss was also proposed, by using the top-k Luce model. However,
it can be verified that the so-defined top-k cross entropy loss is still permutation-level order sensitive, but not
top-k subgroup order sensitive. In other words, the proposed modification here is still needed.
5
On datasets with only two ratings such as TD2003 and TD2004, N@1 equals P@1.
7
The experimental results are summarized in Tables 1-3.
Methods          | N@1   | N@3   | N@10  | P@1   | P@3   | P@10
ListMLE          | 0.548 | 0.473 | 0.446 | 0.642 | 0.582 | 0.495
Top-1 ListMLE    | 0.529 | 0.482 | 0.447 | 0.652 | 0.595 | 0.499
Top-3 ListMLE    | 0.535 | 0.484 | 0.445 | 0.671 | 0.608 | 0.504
Top-10 ListMLE   | 0.558 | 0.473 | 0.444 | 0.672 | 0.601 | 0.509

Table 1: Ranking accuracies on OHSUMED.

Methods          | N/P@1 | N@3   | N@10  | P@3   | P@10
ListMLE          | 0.24  | 0.253 | 0.261 | 0.22  | 0.146
Top-1 ListMLE    | 0.4   | 0.329 | 0.314 | 0.3   | 0.176
Top-3 ListMLE    | 0.44  | 0.382 | 0.343 | 0.34  | 0.204
Top-10 ListMLE   | 0.5   | 0.410 | 0.378 | 0.38  | 0.22

Table 2: Ranking accuracies on TD2003.

Methods          | N/P@1 | N@3   | N@10  | P@3   | P@10
ListMLE          | 0.4   | 0.351 | 0.356 | 0.284 | 0.188
Top-1 ListMLE    | 0.52  | 0.469 | 0.451 | 0.413 | 0.248
Top-3 ListMLE    | 0.506 | 0.456 | 0.458 | 0.417 | 0.261
Top-10 ListMLE   | 0.52  | 0.469 | 0.472 | 0.413 | 0.269

Table 3: Ranking accuracies on TD2004.

From the tables, we can see that with the modifications the ranking accuracies of ListMLE can be significantly boosted, in terms of all measures, on both TD2003 and TD2004. This clearly validates our theoretical analysis. On OHSUMED, all the loss functions achieve comparable performances. The possible explanation is that the probability space in OHSUMED is well formed, such that it is order preserving for many different k values.

Next, we take Top-10 ListMLE as an example to make comparisons with some other baseline methods such as Ranking SVM [8], RankBoost [7], ListNet [3], and RankCosine [13]. The results are listed in Tables 4-6. We can see from the tables that Top-10 ListMLE achieves the best performance among all the methods on the TD2003 and TD2004 datasets in terms of almost all measures. On the OHSUMED dataset, it also performs fairly well as compared to the other methods. Especially for N@1 and P@1, it significantly outperforms all the other methods on all the datasets.

Methods          | N@1   | N@3   | N@10  | P@1   | P@3   | P@10
RankBoost        | 0.497 | 0.472 | 0.435 | 0.604 | 0.586 | 0.495
Ranking SVM      | 0.495 | 0.464 | 0.441 | 0.633 | 0.592 | 0.507
ListNet          | 0.523 | 0.477 | 0.448 | 0.642 | 0.602 | 0.509
RankCosine       | 0.523 | 0.475 | 0.437 | 0.642 | 0.589 | 0.493
Top-10 ListMLE   | 0.558 | 0.473 | 0.444 | 0.672 | 0.601 | 0.509

Table 4: Ranking accuracies on OHSUMED.

Methods          | N/P@1 | N@3   | N@10  | P@3   | P@10
RankBoost        | 0.26  | 0.270 | 0.285 | 0.24  | 0.178
Ranking SVM      | 0.42  | 0.378 | 0.341 | 0.34  | 0.206
ListNet          | 0.46  | 0.408 | 0.374 | 0.36  | 0.222
RankCosine       | 0.36  | 0.346 | 0.322 | 0.3   | 0.182
Top-10 ListMLE   | 0.5   | 0.410 | 0.378 | 0.38  | 0.22

Table 5: Ranking accuracies on TD2003.

Methods          | N/P@1 | N@3   | N@10  | P@3   | P@10
RankBoost        | 0.48  | 0.463 | 0.471 | 0.404 | 0.253
Ranking SVM      | 0.44  | 0.409 | 0.420 | 0.351 | 0.225
ListNet          | 0.439 | 0.437 | 0.457 | 0.399 | 0.257
RankCosine       | 0.439 | 0.397 | 0.405 | 0.328 | 0.209
Top-10 ListMLE   | 0.52  | 0.469 | 0.472 | 0.413 | 0.269

Table 6: Ranking accuracies on TD2004.
From the above experimental results, we can come to the conclusion that for real ranking applications like IR (where top-k evaluation measures are widely used), it is better to use the top-k true loss than the permutation-level 0-1 loss, and better to use the modified surrogate loss functions than the original surrogate loss functions.

6 Conclusion

In this paper we have proposed a top-k ranking framework, which can better describe real ranking applications like information retrieval. In this framework, the true loss is defined on the top-k subgroups of permutations. We have derived sufficient conditions for a surrogate loss function to be statistically consistent with the top-k true loss. We have also discussed how to modify the loss functions in existing listwise ranking methods to make them consistent with the top-k true loss. Our experiments have shown that with the proposed modifications, algorithms like ListMLE can significantly outperform their original versions, as well as many other ranking methods.

As future work, we plan to investigate the following issues. (1) We will empirically study the modified ListNet and RankCosine, to see whether their performances can also be significantly boosted in the top-k setting. (2) We will also study the consistency of the pointwise and pairwise loss functions with the top-k true loss.
References

[1] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, 2006.
[2] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proc. of ICML'05, pages 89-96, 2005.
[3] Z. Cao, T. Qin, T. Y. Liu, M. F. Tsai, and H. Li. Learning to rank: From pairwise approach to listwise approach. In Proc. of ICML'07, pages 129-136, 2007.
[4] S. Clemencon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 8:2671-2699, 2007.
[5] D. Cossock and T. Zhang. Subset ranking using regression. In Proc. of COLT, pages 605-619, 2006.
[6] D. Cossock and T. Zhang. Statistical analysis of Bayes optimal subset ranking. Information Theory, 54:5140-5154, 2008.
[7] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. In Proc. of ICML'98, pages 170-178, 1998.
[8] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Proc. of ICANN'99, pages 97-102, 1999.
[9] P. Li, C. Burges, and Q. Wu. McRank: Learning to rank using multiple classification and gradient boosting. In Advances in Neural Information Processing Systems 20 (NIPS 07), pages 897-904, Cambridge, MA, 2008. MIT Press.
[10] T. Y. Liu, T. Qin, J. Xu, W. Y. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In LR4IR 2007, in conjunction with SIGIR 2007, 2007.
[11] J. I. Marden, editor. Analyzing and Modeling Rank Data. Chapman and Hall, London, 1995.
[12] R. Nallapati. Discriminative models for information retrieval. In Proc. of SIGIR'04, pages 64-71, 2004.
[13] T. Qin, X.-D. Zhang, M.-F. Tsai, D.-S. Wang, T.-Y. Liu, and H. Li. Query-level loss functions for information retrieval. Information Processing and Management, 44:838-855, 2008.
[14] C. Rudin. Ranking with a p-norm push. In Proc. of COLT, pages 589-604, 2006.
[15] F. Xia, T. Y. Liu, and H. Li. Top-k consistency of learning to rank methods. Technical report, Microsoft Research, MSR-TR-2009-139, 2009.
[16] F. Xia, T. Y. Liu, J. Wang, W. S. Zhang, and H. Li. Listwise approach to learning to rank: theory and algorithm. In Proc. of ICML'08, pages 1192-1199, 2008.
[17] T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225-1251, 2004.
Integrated Modeling and Control
Based on Reinforcement Learning
and Dynamic Programming
Richard S. Sutton
GTE Laboratories Incorporated
Waltham, MA 02254
Abstract
This is a summary of results with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods.
Dyna architectures integrate trial-and-error (reinforcement) learning and
execution-time planning into a single process operating alternately on the
world and on a learned forward model of the world. We describe and
show results for two Dyna architectures, Dyna-AHC and Dyna-Q. Using a
navigation task, results are shown for a simple Dyna-AHC system which
simultaneously learns by trial and error, learns a world model, and plans
optimal routes using the evolving world model. We show that Dyna-Q
architectures (based on Watkins's Q-Iearning) are easy to adapt for use in
changing environments.
1
Introduction to Dyna
Dyna architectures (Sutton, 1990) use learning algorithms to approximate the conventional optimal control technique known as dynamic programming (DP) (Bellman, 1957; Bertsekas, 1987). DP itself is not a learning method, but rather a
computational method for determining optimal behavior given a complete model of
the task to be solved. It is very similar to state-space search, but differs in that
it is more incremental and never considers actual action sequences explicitly, only
single actions at a time. This makes DP more amenable to incremental planning
at execution time, and also makes it more suitable for stochastic or incompletely
modeled environments, as it need not consider the extremely large number of sequences possible in an uncertain environment. Learned world models are likely
to be stochastic and uncertain, making DP approaches particularly promising for
learning systems. Dyna architectures are those that learn a world model online
while using approximations to DP to learn and plan optimal behavior.
The theory of Dyna is based on the theory of DP and on DP's relationship to
reinforcement learning (Watkins, 1989; Barto, Sutton & Watkins, 1989, 1990), to
temporal-difference learning (Sutton, 1988), and to AI methods for planning and
search (Korf, 1990). Werbos (1987) has previously argued for the general idea of
building AI systems that approximate dynamic programming, and Whitehead &
Ballard (1989) and others (Sutton & Barto, 1981; Sutton & Pinette, 1985; Rumelhart et al., 1986; Lin, 1991; Riolo, 1991) have presented results for the specific
idea of augmenting a reinforcement learning system with a world model used for
planning.
2
Dyna-AHC: Dyna by Approximating Policy Iteration
The Dyna-AHC architecture is based on approximating a DP method known as
policy iteration (see Bertsekas, 1987). It consists of four components interacting as
shown in Figure 1. The policy is simply the function formed by the current set of
reactions; it receives as input a description of the current state of the world and
produces as output an action to be sent to the world. The world represents the
task to be solved; prototypically it is the robot's external environment. The world
receives actions from the policy and produces a next state output and a reward
output. The overall task is defined as maximizing the long-term average reward
per time step. The architecture also includes an explicit world model. The world
model is intended to mimic the one-step input-output behavior of the real world.
Finally, the Dyna-AHC architecture includes an evaluation function that rapidly
maps states to values, much as the policy rapidly maps states to actions. The
evaluation function, the policy, and the world model are each updated by separate
learning processes.
The policy is continually modified by an integrated planning/learning process. The
policy is, in a sense, a plan, but one that is completely conditioned by current input.
The planning process is incremental and can be interrupted and resumed at any
time. It consists of a series of shallow searches, each typically of one ply, and yet
ultimately produces the same result as an arbitrarily deep conventional search. I
call this relaxation planning.
Relaxation planning is based on continually adjusting the evaluation function in
such a way that credit is propagated to the appropriate steps within action sequences. Generally speaking, the evaluation e(x) of a state x should be equal to
the best of the states y that can be reached from it in one action, taking into
consideration the reward (or cost) r for that one transition:
e(x) "=" max_{a ∈ Actions} E{ r + e(y) | x, a },        (1)
where E{· | ·} denotes a conditional expected value and the equal sign is quoted to
indicate that this is a condition that we would like to hold, not one that necessarily
does hold. If we have a complete model of the world, then the right-hand side can
be computed by looking ahead one action. Thus we can generate any number of
training examples for the process that learns the evaluation function: for any x,
the right-hand side of (1) is the desired output.

[Figure 1. Overview of Dyna-AHC: the policy, the evaluation function, and a switch between the world and a learned world model exchange state, action and (scalar) reward signals; the evaluation function provides heuristic reward to the policy.]

Figure 2. Inner loop of Dyna-AHC. These steps are repeated continually, sometimes with real experiences, sometimes with hypothetical ones:
1. Decide if this will be a real experience or a hypothetical one.
2. Pick a state x. If this is a real experience, use the current state.
3. Choose an action: a ← Policy(x).
4. Do action a; obtain next state y and reward r from world or world model.
5. If this is a real experience, update world model from x, a, y and r.
6. Update evaluation function so that e(x) is more like r + γe(y); this is temporal-difference learning.
7. Update policy: strengthen or weaken the tendency to perform action a in state x according to the error in the evaluation function: r + γe(y) − e(x).
8. Go to Step 1.

If the learning process converges
such that (1) holds in all states, then the optimal policy is given by choosing the
action in each state z that achieves the maximum on the right-hand side. There is an
extensive theoretical basis from dynamic programming for algorithms of this type for
the special case in which the evaluation function is tabular, with enumerable states
and actions. For example, this theory guarantees convergence to a unique evaluation
function satisfying (1) and that the corresponding policy is optimal (Bertsekas,
1987).
The evaluation function and policy need not be tables, but can be more compact
function approximators such as connectionist networks, decision trees, k-d trees,
or symbolic rules. Although the existing theory does not apply to these machine
learning algorithms directly, it does provide a theoretical foundation for exploring
their use in this way.
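For concreteness, the backup on the right-hand side of (1) amounts to a few lines of code. The following is a minimal illustrative sketch in Python (not from the original system), assuming a tabular evaluation function and, for brevity, a deterministic learned model stored as model[(x, a)] = (reward, next_state):

def backup(e, model, x, actions):
    # Right-hand side of (1): the best one-step lookahead value from state x,
    # using the learned model in place of the true expectation.
    return max(model[(x, a)][0] + e[model[(x, a)][1]] for a in actions)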
The above discussion gives the general idea of relaxation planning, but not the exact form used in policy iteration and Dyna-AHC, in which the policy is adapted
simultaneously with the evaluation function. The evaluations in this case are not
supposed to reflect the value of states given optimal behavior, but rather their
value given current behavior (the current policy). As the current policy gradually
approaches optimality, the evaluation function also approaches the optimal evaluation function. In addition, Dyna-AHC is a Monte Carlo or stochastic approximation
variant of policy iteration, in which the world model is only sampled, not examined
directly. Since the real world can also be sampled, by actually taking actions and
observing the result, the world can be used in place of the world model in these
methods. In this case, the result is not relaxation planning, but a trial-and-error
learning process much like reinforcement learning (see Barto, Sutton & Watkins,
1989, 1990).

[Figure 3. Learning curves of Dyna-AHC systems on a navigation task: steps per trial versus trials, for 0, 10 and 100 planning steps; the maze, with start state S and goal state G, is shown inset.]

[Figure 4. Policies found by planning and non-planning Dyna-AHC systems by the middle of the second trial. The black square is the current location of the system. The arrows indicate action probabilities (excess over smallest) for each direction of movement.]

In Dyna-AHC, both of these are done at once. The same algorithm is
applied both to real experience (resulting in learning) and to hypothetical experience generated by the world model (resulting in relaxation planning). The results
in both cases are accumulated in the policy and the evaluation function.
There is insufficient room here to fully justify the algorithm used in Dyna-AHC,
but it is quite simple and is given in outline form in Figure 2.
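The following Python sketch illustrates one pass through the loop of Figure 2 under simplifying assumptions: tabular data structures, a deterministic learned model (model[(x, a)] = (r, y)), a greedy rather than stochastic policy, and the model-learning step (step 5) omitted. The names and parameters are illustrative, not those of the original implementation.

import random

def dyna_ahc_step(e, prefs, model, current_state, actions, real, alpha=0.1, gamma=0.95):
    # Steps 1-2: a real experience uses the current state; a hypothetical one
    # picks any previously modeled state.
    x = current_state if real else random.choice([s for (s, _) in model])
    # Step 3: choose the action with the highest preference (greedy for brevity).
    a = max(actions, key=lambda act: prefs.get((x, act), 0.0))
    # Step 4: obtain next state and reward from the world model.
    r, y = model[(x, a)]
    # Steps 6-7: temporal-difference update of the evaluation function and policy.
    delta = r + gamma * e.get(y, 0.0) - e.get(x, 0.0)
    e[x] = e.get(x, 0.0) + alpha * delta
    prefs[(x, a)] = prefs.get((x, a), 0.0) + alpha * delta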
3
A Navigation Task
As an illustration of the Dyna-AHC architecture, consider the task of navigating
the maze shown in the upper right of Figure 3. The maze is a 6 by 9 grid of
possible locations or states, one of which is marked as the starting state, "S", and
one of which is marked as the goal state, "G". The shaded states act as barriers and
cannot be entered. All the other states are distinct and completely distinguishable.
From each there are four possible actions: UP, DOWN, RIGHT, and LEFT, which
change the state accordingly, except where such a movement would take
the system into a barrier or outside the maze, in which case the location is not
changed. Reward is zero for all transitions except for those into the goal state, for
which it is +1. Upon entering the goal state, the system is instantly transported
back to the start state to begin the next trial. None of this structure and dynamics
is known to the Dyna-AHC system a priori.
In this instance of the Dyna-AHC architecture, real and hypothetical experiences
were used alternately (Step 1). For each single experience with the real world, k
hypothetical experiences were generated with the model. Figure 3 shows learning curves for k = 0, k = 10, and k = 100, each an average over 100 runs. The k = 0 case involves no planning; this is a pure trial-and-error learning system entirely analogous to those used in reinforcement learning systems based on the adaptive heuristic critic (AHC) (Sutton, 1984; Barto, Sutton & Anderson, 1983). Although the length of path taken from start to goal falls dramatically for this case, it falls much more rapidly for the cases including hypothetical experiences, showing the benefit of relaxation planning using the learned world model. For k = 100, the optimal path was generally found and followed by the fourth trip from start to goal; this is very rapid learning.
Figure 4 shows why a Dyna-AHC system that includes planning solves this problem
so much faster than one that does not. Shown are the policies found by the k = 0 and k = 100 Dyna-AHC systems half-way through the second trial. Without planning (k = 0), each trial adds only one additional step to the policy, and so only one step (the last) has been learned so far. With planning, the first trial also learned only one step, but here during the second trial an extensive policy has been developed that by the trial's end will reach almost back to the start state.
4
Dyna-Q: Dyna by Q-learning
The Dyna-AHC architecture is in essence the reinforcement learning architecture
based on the adaptive heuristic critic (AHC) that my colleagues and I developed
(Sutton, 1984; Barto, Sutton & Anderson, 1983) plus the idea of using a learned
world model to generate hypothetical experience and to plan. Watkins (1989) subsequently developed the relationships between the reinforcement-learning architecture and dynamic programming (see also Barto, Sutton & Watkins, 1989, 1990)
and, moreover, proposed a slightly different kind of reinforcement learning called
Q-learning. The Dyna- Q architecture is the combination of this new kind of learning with the Dyna idea of using a learned world model to generate hypothetical
experience and achieve planning.
Whereas the AHC reinforcement learning architecture maintains two fundamental memory structures, the evaluation function and the policy, Q-learning maintains only one. That one is a cross between an evaluation function and a policy. For each pair of state x and action a, Q-learning maintains an estimate Q_xa of the value of taking a in x. The value of a state can then be defined as the value of the state's best state-action pair: e(x) =def max_a Q_xa. In general, the Q-value for a state x and an action a should equal the expected value of the immediate reward r plus the discounted value of the next state y:
Q_xa "=" E{ r + γe(y) | x, a }.        (3)
To achieve this goal, the updating steps (Steps 6 and 7 of Figure 2) are implemented by
Q_xa ← Q_xa + β( r + γe(y) − Q_xa ).        (4)
This is the only update rule in Q-learning. We note that it is very similar, though not identical, to Holland's bucket brigade and to Sutton's (1988) temporal-difference learning.
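As a minimal illustration, update (4) for a tabular Q function is a one-line assignment. The dictionary-based representation below is a sketch, not the original code:

def q_update(Q, x, a, r, y, actions, beta=0.1, gamma=0.95):
    # e(y) = max_b Q_yb; move Q_xa toward r + gamma * e(y), as in (4).
    e_y = max(Q.get((y, b), 0.0) for b in actions)
    Q[(x, a)] = Q.get((x, a), 0.0) + beta * (r + gamma * e_y - Q.get((x, a), 0.0))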
The simplest way of determining the policy on real experiences is to deterministically
select the action that currently looks best-the action with the maximal Q-value.
However, as we show below, this approach alone suffers from inadequate exploration.
To deal with this problem, a new memory structure was added that keeps track of
the degree of uncertainty about each component of the model. For each state x and
action a, a record is kept of the number of time steps n_xa that have elapsed since a was tried in x in a real experience. An exploration bonus of ε√n_xa is used to make actions that have not been tried in a long time (and that therefore have uncertain consequences) appear more attractive by replacing (4) with:
Q_xa ← Q_xa + β( r + ε√n_xa + γe(y) − Q_xa ).        (5)
In addition, the system is permitted to hypothetically experience actions it has
never before tried, so that the exploration bonus for trying them can be propagated
back by relaxation planning. This was done by starting the system with a nonempty initial model and by selecting actions randomly on hypothetical experiences.
In the experiments with Dyna-Q systems reported below, actions that had never
been tried were assumed to produce zero reward and leave the state unchanged.
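A sketch of the Dyna-Q+ variant, combining update (5) with the bookkeeping of the n_xa counts (again illustrative; the value of ε and the count handling are assumptions of this sketch):

import math

def dyna_q_plus_update(Q, n, x, a, r, y, actions, beta=0.1, gamma=0.95, eps=0.001):
    # Update (5): an exploration bonus eps * sqrt(n_xa) makes long-untried
    # actions look more attractive.
    e_y = max(Q.get((y, b), 0.0) for b in actions)
    bonus = eps * math.sqrt(n.get((x, a), 0))
    Q[(x, a)] = Q.get((x, a), 0.0) + beta * (r + bonus + gamma * e_y - Q.get((x, a), 0.0))
    n[(x, a)] = 0  # a was just tried in x; all other counts are incremented per time step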
5
Changing-World Experiments
Two experiments were performed to test the ability of Dyna systems to adapt to
changes in their environments. Three Dyna systems were used: the Dyna-AHC
system presented earlier in the paper, a Dyna-Q system including the exploration
bonus (5), called the Dyna-Q+ system, and a Dyna-Q system without the exploration bonus (4), called the Dyna-Q- system. All systems used k = 10.
The blocking experiment used the two mazes shown in the upper portion of Figure
5. Initially a short path from start to goal was available (first maze). After 1000
time steps, by which time the short path was usually well learned, that path was
blocked and a longer path was opened (second maze). Performance under the new
condition was measured for 2000 time steps. Average results over 50 runs are shown
in Figure 5 for the three Dyna systems. The graph shows a cumulative record of
the number of rewards received by the system up to each moment in time. In the
first 1000 trials, all three Dyna systems found a short route to the goal, though the
Dyna-Q+ system did so significantly faster than the other two. After the short path
was blocked at 1000 steps, the graph for the Dyna-AHC system remains almost flat,
indicating that it was unable to obtain further rewards. The Dyna-Q systems, on
the other hand, clearly solved the blocking problem, reliably finding the alternate
path after about 800 time steps.
The shortcut experiment began with only a long path available (first maze of Figure
6). After 3000 times steps all three Dyna systems had learned the long path, and
then a shortcut was opened without interferring with the long path (second maze of
Figure 6). The lower part of Figure 6 shows the results. The increase in the slope
of the curve for the Dyna-Q+ system, while the others remain constant, indicates
that it alone was able to find the shortcut. The Dyna-Q+ system also learned
the original long route faster than the Dyna-Q- system, which in turn learned it
faster than the Dyna-AHC system. However, the ability of the Dyna-Q+ system
to find shortcuts does not come totally for free . Continually re-exploring the world
means occasionally making suboptimal actions.

[Figure 5. Performance on the blocking task (slope is the rate of reward): cumulative reward versus time steps for the Dyna-Q+, Dyna-Q− and Dyna-AHC (labeled Dyna-PI) systems.]

[Figure 6. Performance on the shortcut task (slope is the rate of reward).]

If one looks closely at Figure 6, one can see that the Dyna-Q+ system actually achieves a slightly lower rate of
reinforcement during the first 3000 steps. In a static environment, Dyna-Q+ will
eventually perform worse than Dyna-Q-, whereas, in a changing environment, it
will be far superior, as here. One possibility is to use a meta-level learning process
to adjust the exploration parameter f to match the degree of variability of the
environment.
6
Limitations and Conclusions
The results presented here are clearly limited in many ways. The state and action
spaces are small and denumerable, permitting tables to be used for all learning processes and making it feasible for the entire state space to be explicitly explored. In
addition, these results have assumed knowledge of the world state, have used a trivial form of search control (random exploration), and have used terminal goal states.
These are significant limitations of the results, but not of the Dyna architecture.
There is nothing about the Dyna architecture which prevents it from being applied
more generally in each of these ways (e.g., see Lin, 1991; Riolo, 1991; Whitehead &
Ballard, in press).
Despite limitations, these results are significant. They show that the use of a forward model can dramatically speed trial-and-error (reinforcement) learning processes even on simple problems. Moreover, they show how planning can be done
with the incomplete, changing, and oftimes incorrect world models that are constructed through learning. Finally, they show how the functionality of planning can
be obtained in a completely incremental manner, and how a planning process can be
freely intermixed with reaction and learning processes. Further results are needed
for a thorough comparison of Dyna-AHC and Dyna-Q architectures, but the results
presented here suggest that it is easier to adapt Dyna-Q architectures to changing
environments.
Acknowledgements
The author gratefully acknowledges the contributions by Andrew Barto, Chris
Watkins, Steve Whitehead, Paul Werbos, Luis Almeida, and Leslie Kaelbling.
References
Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983) IEEE Trans. SMC-13, 834-846.
Barto, A. G., Sutton, R. S., & Watkins, C. J. C. H. (1989) In: Learning and
Computational Neuroscience, M. Gabriel and J.W. Moore (Eds.), MIT Press, 1991.
Barto, A. G., Sutton, R. S., & Watkins, C. J. C. H. (1990) NIPS 2, 686-693.
Bellman, R. E. (1957) Dynamic Programming, Princeton University Press.
Bertsekas, D. P. (1987) Dynamic Programming: Deterministic and Stochastic Models, Prentice-Hall.
Korf, R. E. (1990) Artificial Intelligence 42, 189-211.
Lin, Long-Ji. (1991) In: Proceedings of the International Conference on the Simulation of Adaptive Behavior, MIT Press.
Riolo, R. (1991) In: Proceedings of the International Conference on the Simulation
of Adaptive Behavior, MIT Press.
Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986) In:
Parallel Distributed Processing: Explorations in the Microstructure of Cognition,
Volume II, by J. L. McClelland, D. E. Rumelhart, and the PDP research group,
7-57. MIT Press.
Sutton, R. S. (1984) Temporal Credit Assignment in Reinforcement Learning. PhD
thesis, COINS Dept., Univ, of Mass.
Sutton, R.S. (1988) Machine Learning 3, 9-44.
Sutton, R.S. (1990) In: Proceedings of the Seventh International Conference on
Machine Learning, 216-224, Morgan Kaufmann.
Sutton, R.S., Barto, A.G. (1981) Cognition and Brain Theory 4, 217-246.
Sutton, R.S., Pinette, B. (1985) In: Proceedings of the Seventh Annual Conf. of the
Cognitive Science Society, 54-64, Lawrence Erlbaum.
Watkins, C. J. C. H. (1989) Learning from Delayed Rewards. PhD thesis, Cambridge
University Psychology Department.
Werbos, P. J. (1987) IEEE Trans. SMC-17, 7-20.
Whitehead, S. D., Ballard, D. H. (1989) In: Proceedings of the Sixth International
Workshop on Machine Learning, 354-357, Morgan Kaufmann.
Whitehead, S. D., Ballard, D.H. (in press) Machine Learning.
On the Algorithmics and Applications of a
Mixed-norm based Kernel Learning Formulation
G. Dinesh
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
J. Saketha Nath
Dept. of Computer Science & Engg.,
Indian Institute of Technology, Bombay.
[email protected]
S. Raman
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
Chiranjib Bhattacharyya
Dept. of Computer Science & Automation,
Indian Institute of Science, Bangalore.
[email protected]
Aharon Ben-Tal
Faculty of Industrial Engg. & Management,
Technion, Haifa.
[email protected]
K. R. Ramakrishnan
Dept. of Electrical Engg.,
Indian Institute of Science, Bangalore.
[email protected]
Abstract
Motivated from real world problems, like object categorization, we study a particular mixed-norm regularization for Multiple Kernel Learning (MKL). It is assumed that the given set of kernels are grouped into distinct components where
each component is crucial for the learning task at hand. The formulation hence
employs l∞ regularization for promoting combinations at the component level and
l1 regularization for promoting sparsity among kernels in each component. While
previous attempts have formulated this as a non-convex problem, the formulation given here is an instance of non-smooth convex optimization problem which
admits an efficient Mirror-Descent (MD) based procedure. The MD procedure
optimizes over product of simplexes, which is not a well-studied case in literature.
Results on real-world datasets show that the new MKL formulation is well-suited
for object categorization tasks and that the MD based algorithm outperforms stateof-the-art MKL solvers like simpleMKL in terms of computational effort.
1
Introduction
In this paper the problem of Multiple Kernel Learning (MKL) is studied where the given kernels are
assumed to be grouped into distinct components and each component is crucial for the learning task
in hand. The focus of this paper is to study the formalism, algorithmics of a specific mixed-norm
regularization based MKL formulation suited for such tasks.
Majority of the existing MKL literature has considered employing a block l1 norm regularization leading to the selection of a few of the given kernels [8, 1, 16, 14, 20]. Such formulations tend to select
the 'best' among the given kernels and consequently the decision functions tend to depend only on
the selected kernel. Recently [17] extended the framework of MKL to the case where kernels are
partitioned into groups and introduces a generic mixed-norm regularization based MKL formulation
in order to handle groups of kernels. Again the idea is to promote sparsity leading to low number of
kernels. This paper differs from [17] by assuming that every component (group of kernels) is highly
crucial for success of the learning task. It is well known in optimization literature that l∞ regularizations often promote combinations with equal preferences and l1 regularizations lead to selections.
The proposed MKL formulation hence employs l∞ regularization and promotes combinations of
kernels at the component level. Moreover it employs l1 regularization for promoting sparsity among
kernels in each component.
The formulation studied here is motivated by real-world learning applications like object categorization where multiple feature representations need to be employed simultaneously for achieving good
generalization. Combining feature descriptors using the framework of Multiple Kernel Learning
(MKL) [8] for object categorization has been a topic of interest for many recent studies [19, 13].
For e.g., in the case of flower classification feature descriptors for shape, color and texture need
to be employed in order to achieve good visual discrimination as well as significant within-class
variation [12]. A key finding of [12] is the following: in object categorization tasks, employing few
of the feature descriptors or employing a canonical combination of them often leads to sub-optimal
solutions. Hence, in the framework of MKL, employing a l1 regularization, which is equivalent to
selecting one of the given kernels, as well as employing a l2 regularization, which is equivalent to
working with a canonical combination of the given kernels, may lead to sub-optimality. This important finding clearly motivates the use of l∞ norm regularization for combining kernels generated
from different feature descriptors and l1 norm regularization for selecting kernels generated from
the same feature descriptor. Hence, by grouping kernels generated from the same feature descriptor
together and employing the new MKL formulation, classifiers which are potentially well-suited for
object categorization tasks can be built.
Apart from the novel MKL formulation the main contribution of the paper is a highly efficient
algorithm for solving it. Since the formulation is an instance of a Second Order Cone Program
(SOCP), it can be solved using generic interior point algorithms. However it is impractical to work
with such solvers even for moderately large number of data points and kernels. Also the generic
wrapper approach proposed in [17] cannot be employed as it solves a non-convex variant of the
proposed (convex) formulation. The proposed algorithm employs mirror-descent [3, 2, 9] leading to
extremely scalable solutions.
The feasibility set for the minimization problem tackled by Mirror-Descent (MD) turns out to be
direct product of simplexes, which is not a standard set-up discussed in optimization literature. We
employ a weighted version of the entropy function as the prox-function in the auxiliary problem
solved by MD at each iteration and justify its suitability for the case of direct product of simplexes.
The mirror-descent based algorithm presented here is also of independent interest to the MKL community as it can solve the traditional MKL problem; namely the case when the number of groups is
unity. Empirically we show that the mirror-descent based algorithm proposed here scales better than
the state-of-the-art steepest descent based algorithms [14].
The remainder of this paper is organized as follows: in section 2, details of the new MKL formulation
and its dual are presented. The mirror-descent based algorithm which efficiently solves the dual is
presented in section 3. This is followed by a summary of the numerical experiments carried for
verifying the major claims of the paper. In particular, the empirical findings are a) the new MKL
formulation is well-suited for object categorization tasks b) the MD based algorithm scales better
than state-of-the-art gradient descent methods (e.g. simpleMKL) in solving the special case where
number of components (groups) of kernels is unity.
2
Mixed-norm based MKL Formulation
This section presents the novel mixed-norm regularization based MKL formulation and its dual.
In the following text we concentrate on the case of binary classification. However many of the
ideas presented here apply to other learning problems too. Let the training dataset be denoted by
D = {(x_i, y_i), i = 1, \ldots, m | x_i \in \mathcal{X}, y_i \in \{-1, 1\}\}. Here, x_i represents the i-th training data point with label y_i. Let Y denote the diagonal matrix with entries y_i. Suppose the given kernels are divided into n groups (components) and the j-th component has n_j kernels. Let the feature-space mapping generated from the k-th kernel of the j-th component be \phi_{jk}(\cdot) and the corresponding gram-matrix of the training data points be K_{jk} (the gram-matrices are unit-trace normalized). We are in search of a hyperplane classifier of the form

\sum_{j=1}^{n}\sum_{k=1}^{n_j} w_{jk}^\top \phi_{jk}(x) - b = 0.

As discussed above, we wish to perform a block l∞ regularization over the model parameters w_{jk} associated with distinct components and l1 regularization for those associated with the same component. Intuitively, such a regularization promotes combinations of kernels belonging to different components and selections among kernels of the same component. Following the framework of MKL and the mixed norm regularization detailed
here, the following formulation is immediate:

\min_{w_{jk},\, b,\, \xi_i} \; \tfrac{1}{2}\max_j \Big(\sum_{k=1}^{n_j} \|w_{jk}\|_2\Big)^2 + C \sum_i \xi_i
\text{s.t. } y_i\Big(\sum_{j=1}^{n}\sum_{k=1}^{n_j} w_{jk}^\top \phi_{jk}(x_i) - b\Big) \ge 1 - \xi_i,\quad \xi_i \ge 0 \;\; \forall i        (1)
Here, the \xi_i variables measure the slack in correctly classifying the i-th training data point and C is the regularization parameter controlling the weightage given to the mixed-norm regularization term and the total slack. The MKL formulation in (1) is convex and moreover an instance of an SOCP. This formulation
can also be realized as a limiting case of the generic CAP formulation presented in [17] (with \gamma = 1, \gamma_0 \to \infty). However, since the motivation of that work was to perform feature selection, this
limiting case was neither theoretically studied nor empirically evaluated. Moreover, the generic
wrapper approach of [17] is inappropriate for solving this limiting case as that approach would solve
a non-convex variant of this (convex) formulation. In the following text, a dual of (1) is derived.
Let a simplex of dimensionality d be represented by \Delta_d. Following the strategy of [14], one can introduce variables \gamma_j \equiv [\gamma_{j1} \ldots \gamma_{jn_j}]^\top \in \Delta_{n_j} and re-write (1) as follows:
\min_{w_{jk},\, b,\, \xi_i} \; \tfrac{1}{2}\max_j \Big[\min_{\gamma_j \in \Delta_{n_j}} \sum_{k=1}^{n_j} \frac{\|w_{jk}\|_2^2}{\gamma_{jk}}\Big] + C \sum_i \xi_i
\text{s.t. } y_i\Big(\sum_{j=1}^{n}\sum_{k=1}^{n_j} w_{jk}^\top \phi_{jk}(x_i) - b\Big) \ge 1 - \xi_i,\quad \xi_i \ge 0 \;\; \forall i        (2)
This is because for any vector [a_1 \ldots a_n] \ge 0, the following holds: \min_{x_i \ge 0,\, \sum_i x_i = 1} \sum_i \frac{a_i^2}{x_i} = (\sum_i a_i)^2. Notice that the max over j and the min over \gamma_j can be interchanged. To see that, rewrite \max_j as \min_t t with constraints \min_{\gamma_j \in \Delta_{n_j}} \sum_{k=1}^{n_j} \frac{\|w_{jk}\|_2^2}{\gamma_{jk}} \le t, where t is a new decision variable. This problem is feasible in both the \gamma_j s and t, and hence we can drop the minimization over the individual constraints to obtain an equivalent problem: \min_{\gamma_j \in \Delta_{n_j} \forall j,\, t} t subject to \sum_{k=1}^{n_j} \frac{\|w_{jk}\|_2^2}{\gamma_{jk}} \le t. One can now eliminate t by reintroducing the \max_j and interchange the \min_{\gamma_j \in \Delta_{n_j} \forall j} with the other variables to obtain:

\min_{\gamma_j \in \Delta_{n_j} \forall j} \; \min_{w_{jk},\, b,\, \xi_i} \; \tfrac{1}{2}\max_j \sum_{k=1}^{n_j} \frac{\|w_{jk}\|_2^2}{\gamma_{jk}} + C \sum_i \xi_i
\text{s.t. } y_i\Big(\sum_{j=1}^{n}\sum_{k=1}^{n_j} w_{jk}^\top \phi_{jk}(x_i) - b\Big) \ge 1 - \xi_i,\quad \xi_i \ge 0 \;\; \forall i        (3)

Now one can derive the standard dual of (3) wrt. the variables w_{jk}, b, \xi_i alone, leading to:

\min_{\gamma_j \in \Delta_{n_j} \forall j} \; \max_{\alpha \in S_m(C),\, \lambda \in \Delta_n} \; \mathbf{1}^\top \alpha - \tfrac{1}{2}\, \alpha^\top \Big(\sum_{j=1}^{n} \frac{\sum_{k=1}^{n_j} \gamma_{jk} Q_{jk}}{\lambda_j}\Big) \alpha        (4)

where \alpha, \lambda are Lagrange multipliers, S_m(C) \equiv \{x \in \mathbb{R}^m \mid 0 \le x \le C\mathbf{1},\, \sum_{i=1}^{m} x_i y_i = 0\} and Q_{jk} \equiv Y K_{jk} Y. The following points regarding (4) must be noted:

- (4) is equivalent to the well-known SVM [18] formulation with kernel K_{eff} \equiv \sum_{j=1}^{n} \frac{\sum_{k=1}^{n_j} \gamma^*_{jk} K_{jk}}{\lambda^*_j} (the superscript '*' represents the optimal value as per (4)). In other words, \frac{1}{\lambda^*_j} is the weight given to the j-th component and \gamma^*_{jk} is the weight given to the k-th kernel of the j-th component.
- It can be shown that none of the \lambda_j, j = 1, \ldots, n, can be zero provided the given gram-matrices K_{jk} are positive definite (add a small ridge if positive semi-definite).
- By construction, most of the weights \gamma_{jk} are zero and at least for one kernel in every component the weight is non-zero (see also [14]).

These facts readily justify the suitability of the particular mixed norm regularization for object categorization. Indeed, in sync with the findings of [12], kernels from different feature descriptors (components) are combined using non-trivial weights (i.e. \frac{1}{\lambda^*_j}). Moreover, only the 'best' kernels from each feature descriptor (component) are utilized by the model. This sparsity feature leads to better interpretability as well as computational benefits during the prediction stage. In the following section an efficient iterative algorithm for solving the dual (4) is presented.
3
Efficient Algorithm for Solving the Dual
This section presents an efficient algorithm for solving the dual (4). Note that typically in object categorization or other such multi-modal learning tasks, the number of feature descriptors (i.e. the number of groups of kernels, n) is low (< 10). However the kernels constructed from each feature descriptor can be very high in number, i.e., n_j \forall j can be quite high. Also, it is frequent to encounter datasets with a huge number of training data points, m. Hence it is desirable to derive algorithms which scale well wrt. m and n_j. We assume n is small and almost O(1). Consider the dual formulation (4).
Using the minimax theorem [15], one can interchange the min over the \gamma_j s and the max over \alpha to obtain:

\min_{\lambda \in \Delta_n} \underbrace{\Big\{ \min_{\gamma_j \in \Delta_{n_j} \forall j} \underbrace{\Big[ -\max_{\alpha \in S_m(C)} \Big( \mathbf{1}^\top \alpha - \tfrac{1}{2}\, \alpha^\top \Big(\sum_{j=1}^{n} \frac{\sum_{k=1}^{n_j} \gamma_{jk} Q_{jk}}{\lambda_j}\Big) \alpha \Big) \Big]}_{g_\lambda(\gamma_1, \ldots, \gamma_n)} \Big\}}_{f(\lambda)}        (5)
We have restated the maximum over \alpha as a minimization problem by introducing a minus sign. The proposed algorithm performs alternate minimization over the variables \lambda and (\gamma_1, \ldots, \gamma_n, \alpha). In other words, in one step the variables (\gamma_1, \ldots, \gamma_n, \alpha) are assumed to be constant and (5) is optimized wrt. \lambda. This leads to the following optimization problem:
\min_{\lambda \in \Delta_n} \sum_{j=1}^{n} \frac{W_j}{\lambda_j}, \quad \text{where } W_j = \alpha^\top \Big(\sum_{k=1}^{n_j} \gamma_{jk} Q_{jk}\Big) \alpha.

This problem has an analytical solution given by:

\lambda_j = \frac{\sqrt{W_j}}{\sum_j \sqrt{W_j}}        (6)
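The closed form (6) is easy to verify numerically. The following small Python check (illustrative, not from the paper) confirms that \lambda_j \propto \sqrt{W_j} attains the minimum value (\sum_j \sqrt{W_j})^2:

import numpy as np

W = np.array([1.0, 4.0, 9.0])                      # some fixed W_j > 0
lam_star = np.sqrt(W) / np.sqrt(W).sum()           # closed form (6)
obj = lambda lam: np.sum(W / lam)                  # objective sum_j W_j / lambda_j
print(obj(lam_star))                               # (1 + 2 + 3)^2 = 36.0
samples = (np.random.dirichlet(np.ones(3)) for _ in range(10000))
print(all(obj(lam) >= obj(lam_star) - 1e-9 for lam in samples))   # True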
In the subsequent step \lambda is assumed to be fixed and (5) is optimized wrt. (\gamma_1, \ldots, \gamma_n, \alpha). For this, f(\lambda) needs to be evaluated by solving the corresponding optimization problem (refer to (5) for the definition of f). Now, the per-step computational complexity of the iterative algorithm will depend on how efficiently one evaluates f for a given \lambda. In the following we present a mirror-descent (MD) based algorithm which evaluates f to sufficient accuracy in O(log[max_j n_j]) O(SVM_m). Here O(SVM_m) represents the computational complexity of solving an SVM with m training data points. Neglecting the log term, the overall per-step computational effort for the alternate minimization can be assumed to be O(SVM_m) and hence nearly independent of the number of kernels. Alternatively, one can employ the strategy of [14] and compute f using projected steepest-descent (SD) methods.
The following points highlight the merits and de-merits of these two methods:
- In case of SD, the per-step auxiliary problem has no closed form solution and projections onto the feasibility set need to be done, which are computationally intensive especially for problems with high dimensions. In case of MD, the auxiliary problem has an analytical solution (refer to (8)); a small sketch of such a projection step follows this list.
- The step size needs to be computed using a 1-d line search in case of SD, whereas the step-sizes for MD can be easily computed using analytical expressions (refer to (9)).
- The computational complexity of evaluating f using MD is nearly independent of the number of kernels. However, no such statement can be made for SD (unless the feasibility set is of Euclidean geometry, which is not so in our case).
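To make the first point concrete, the following sketch (an illustration under simplified assumptions, not code from the paper) contrasts the two steps on a single simplex: the entropic MD step is closed form, while the Euclidean SD step needs an explicit sorting-based projection routine:

import numpy as np

def md_step(gamma, grad, s):
    # Entropic (mirror-descent) step: multiplicative update plus renormalization;
    # the result is already on the simplex.
    z = gamma * np.exp(-s * grad)
    return z / z.sum()

def sd_step(gamma, grad, s):
    # Euclidean (steepest-descent) step: the additive update leaves the simplex,
    # so an O(d log d) Euclidean projection back onto it is required.
    y = gamma - s * grad
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, y.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(y - theta, 0.0)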
The MD based algorithm for evaluating f(\lambda), i.e. solving \min_{\gamma_j \in \Delta_{n_j} \forall j} g_\lambda(\gamma_1, \ldots, \gamma_n), is detailed below. Let \gamma represent the vector [\gamma_1 \ldots \gamma_n]^\top. Also let values at iteration 't' be indicated using the superscript '(t)'. Similar to any gradient-based method, at each step 't' MD works with a linear approximation of g_\lambda: \hat{g}^{(t)}_\lambda(\gamma) = g_\lambda(\gamma^{(t)}) + (\gamma - \gamma^{(t)})^\top \nabla g_\lambda(\gamma^{(t)}) and follows the update rule below:

\gamma^{(t+1)} = \arg\min_{\gamma \in \Delta_{n_1} \times \ldots \times \Delta_{n_n}} \hat{g}^{(t)}_\lambda(\gamma) + \frac{1}{s_t}\, \omega(\gamma^{(t)}, \gamma)        (7)

where \omega(x, y) \equiv \psi(y) - \psi(x) - (y - x)^\top \nabla\psi(x) is the Bregman divergence (prox-function) associated with \psi(x), a continuously differentiable strongly convex distance-generating function. s_t is a regularization parameter and also determines the step-size. (7) is usually known as the auxiliary problem and needs to be solved at each step. Intuitively, (7) minimizes a weighted sum of the local linear approximation of the original objective and a regularization term that penalizes solutions far from the current iterate. It is easy to show that the update rule in (7) leads to the SD technique if \psi(x) = \frac{1}{2}\|x\|_2^2 and the step-size is chosen using a 1-d line search. The key idea in MD is to choose the distance-generating function based on the feasibility set, which in our case is a direct product of simplexes, such that (7) is very easy to solve. Note that for SD, with the feasibility set as a direct product of simplexes, (7) is not easy to solve, especially in higher dimensions.
We choose the distance-generating function as the following modified entropy function: \psi(x) \equiv \sum_{j=1}^{n}\sum_{k=1}^{n_j} (x_{jk} n^{-1} + \delta n^{-1} n_j^{-1}) \log(x_{jk} n^{-1} + \delta n^{-1} n_j^{-1}), where \delta is a small positive number (say, 10e−16). Now, let \hat{g}^{(t)} \equiv s_t \nabla g_\lambda(\gamma^{(t)}) - \nabla\psi(\gamma^{(t)}). Note that g_\lambda is nothing but the optimal objective of an SVM with kernel K_{eff}. Since it is assumed that each given kernel is positive definite, the optimal of the SVM is unique and hence the gradient of g_\lambda wrt. \gamma exists [5]. The gradient of g_\lambda can be computed using

\frac{\partial g_\lambda}{\partial \gamma_{jk}}\Big|_{\gamma^{(t)}} = -\frac{1}{2}\, \frac{(\alpha^{(t)})^\top Q_{jk}\, \alpha^{(t)}}{\lambda_j}

where \alpha^{(t)} is the optimal \alpha obtained by solving an SVM with kernel \sum_{j=1}^{n} \frac{\sum_{k=1}^{n_j} \gamma^{(t)}_{jk} K_{jk}}{\lambda_j}. With this notation, it is easy to show that the optimal update (7) has the following analytical form (the term involving \delta is small and is neglected in this computation):

\gamma^{(t+1)}_{jk} = \frac{\exp\{-\hat{g}^{(t)}_{jk}\, n\}}{\sum_{k=1}^{n_j} \exp\{-\hat{g}^{(t)}_{jk}\, n\}}        (8)
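Update (8) itself is a handful of vector operations per component. A simplified Python sketch (plain entropy prox, the \delta term dropped, names illustrative):

import numpy as np

def md_update(gammas, grad_blocks, s_t, n):
    # gammas[j] and grad_blocks[j] hold the gamma_j and d g_lambda / d gamma_j
    # vectors; with a plain entropy prox, the exp{-ghat n} form of (8) reduces
    # to a multiplicative update followed by renormalization within each block.
    new_gammas = []
    for g, d in zip(gammas, grad_blocks):
        z = g * np.exp(-s_t * np.asarray(d) * n)
        new_gammas.append(z / z.sum())
    return new_gammas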
The following text discusses the convergence issues with MD. Let the modulus of strong convexity of \psi wrt. \|\cdot\| \equiv \|\cdot\|_1 be \kappa. Also, let the \omega-size of the feasibility set be defined as follows: \Omega \equiv \max_{u,v \in \Delta_{n_1} \times \ldots \times \Delta_{n_n}} \omega(u, v). It is easy to verify that \kappa = O(1)\, n^{-2} and \Omega = O(\log[\max_j n_j]) in our case. The convergence and its efficiency follow from this result [3, 2, 9]:

Result 1. With step-sizes s_t = \frac{\sqrt{\kappa\Omega}}{\|\nabla g_\lambda\|_\infty \sqrt{t}}, one has the following bound on the error after iteration T: \epsilon_T \equiv \min_{t \le T} g_\lambda(\gamma^{(t)}) - g_\lambda(\gamma^*) \le O(1)\, \frac{\sqrt{\Omega}\, L_{\|\cdot\|}(g_\lambda)}{\sqrt{\kappa}\sqrt{T}}
where \|\cdot\|_* is the dual norm of the norm wrt. which the modulus of strong convexity was computed (in our case \|\cdot\|_* = \|\cdot\|_\infty) and L_{\|\cdot\|}(h) is the Lipschitz constant of the function h wrt. the norm \|\cdot\| (in our case \|\cdot\| = \|\cdot\|_1, and it can be shown that the Lipschitz constant exists for g_\lambda). Substituting the particular values for our case, we obtain

s_t = \frac{\sqrt{\log[\max_j n_j]}}{n\, \|\nabla g_\lambda\|_\infty \sqrt{t}}        (9)

and \epsilon_T \le O(1)\sqrt{\frac{\log[\max_j n_j]}{T}}. In other words, for reaching a reasonable approximation of the optimal, the number of iterations required is O(\log[\max_j n_j]), which is nearly independent of the number
of kernels. Since the computations in each iteration are dominated by the SVM optimization, the
overall complexity of MD is (nearly) O(SVM_m). Note that the iterative algorithm can be improved by improving the algorithm for solving the SVM problem. The overall algorithm is summarized in Algorithm 1.

Algorithm 1: Mirror-descent based alternate minimization algorithm
Data: Labels and gram-matrices of training examples, component-id of each kernel, regularization parameter (C)
Result: Optimal values of \alpha, \gamma, \lambda in (4)
begin
    Set \gamma, \lambda to some initial feasible values.
    while stopping criteria for \lambda is not met do        /* Alternate minimization loop */
        while stopping criteria for \gamma is not met do     /* Mirror-descent loop */
            Solve SVM with current kernel weights and update \alpha
            Compute \hat{g}^{(t)} and update \gamma using (8)
        Compute W_j and update \lambda using (6)
    Return values of \alpha, \gamma, \lambda
end

The MKL formulation presented here exploits the special structure in the kernels and
leads to non-trivial combinations of the kernels belonging to different components and selections
among the kernels of the same component. Moreover the proposed iterative algorithm solves the
formulation with a per-step complexity of (almost) O(SVM_m), which is the same as that with traditional MKL formulations (which do not exploit this structure). As discussed earlier, this efficiency
is an outcome of employing state-of-the-art mirror-descent techniques. The MD based algorithm
presented here is of independent interest to the MKL community. This is because, in the special
case where number of components is unity (i.e. n = 1), the proposed algorithm solves the traditional MKL formulation. And clearly, owing to the merits of MD over SD discussed earlier, the new
algorithm can potentially be employed to boost the performance of state-of-the-art MKL algorithms.
Our empirical results confirm that the proposed algorithm (with n = 1) outperforms simpleMKL
in terms of computational efficiency.
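Putting the pieces together, a compact end-to-end sketch of Algorithm 1 is given below. It is illustrative only: solve_svm stands for any SVM dual solver (a hypothetical helper, not a library call), the step-size follows the flavour of (9), and the stopping tests are replaced by fixed iteration counts.

import numpy as np

def mixed_norm_mkl(Ks, y, C, solve_svm, n_outer=20, n_inner=50):
    # Ks[j][k]: k-th gram matrix of component j; Q_jk = Y K_jk Y as in section 2.
    n = len(Ks)
    Y = np.diag(y)
    Qs = [[Y @ K @ Y for K in comp] for comp in Ks]
    gammas = [np.ones(len(comp)) / len(comp) for comp in Ks]
    lam = np.ones(n) / n
    alpha = None
    for _ in range(n_outer):
        for t in range(1, n_inner + 1):
            # Effective kernel for the current (gamma, lambda).
            K_eff = sum(sum(g * K for g, K in zip(gammas[j], Ks[j])) / lam[j]
                        for j in range(n))
            alpha = solve_svm(K_eff, y, C)
            # Gradient of g_lambda wrt gamma_jk, as derived above.
            grads = [np.array([-0.5 * (alpha @ Q @ alpha) / lam[j] for Q in Qs[j]])
                     for j in range(n)]
            s_t = 1.0 / (n * max(np.abs(np.concatenate(grads)).max(), 1e-12) * np.sqrt(t))
            for j in range(n):
                # Entropic mirror-descent step, cf. (8).
                z = gammas[j] * np.exp(-s_t * grads[j] * n)
                gammas[j] = z / z.sum()
        # Closed-form lambda update (6).
        W = np.array([alpha @ sum(g * Q for g, Q in zip(gammas[j], Qs[j])) @ alpha
                      for j in range(n)])
        lam = np.sqrt(W) / np.sqrt(W).sum()
    return alpha, gammas, lam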
4
Numerical Experiments
This section presents results of experiments which empirically verify the major claims of the paper: a) The proposed formulation is well-suited for object categorization b) In the case n = 1, the
proposed algorithm outperforms simpleMKL wrt. computational effort. In the following, the experiments done on real-world object categorization datasets are summarized. The proposed MKL
formulation is compared with state-of-the-art methodology for object categorization [19, 13] that
employs a block l1 regularization based MKL formulation with additional constraints for including
prior information regarding weights of kernels. Since such constraints lead to independent improvements with all formulations, the experiments here compare the following three MKL formulations
without the additional constraints: MixNorm-MKL, the (l∞, l1) mixed-norm based MKL formulation studied in this paper; L1-MKL, the block l1 regularization based MKL formulation [14]; and L2-MKL, which is nothing but an SVM built using the canonical combination of all kernels, i.e. K_{eff} \equiv \sum_{j=1}^{n}\sum_{k=1}^{n_j} K_{jk}. In case of MixNorm-MKL, the MD based algorithm (section 3) was
used to solve the formulation. The SVM problem arising at each step of mirror-descent is solved
using the libsvm software6 . L1-MKL is solved using simpleMKL7 . L2-MKL is solved using
libsvm and serves as a baseline for comparison. In all cases, the hyper-parameters of the various
formulations were tuned using suitable cross-validation procedures and the accuracies reported denote testset accuracies achieved by the respective classifiers using the tuned set of hyper-parameters.
5 Asymptotic convergence can be proved for the algorithm; details omitted due to lack of space.
6 Available at www.csie.ntu.edu.tw/~cjlin/libsvm
7 Available at http://asi.insa-rouen.fr/enseignants/~arakotom/code/mklindex.html
[Figure 1: Plot of the average gain (%) in accuracy with MixNorm-MKL on the various real-world datasets, per object category. Panels (a), (b): Caltech-5; (c), (d): Oxford Flowers; (e), (f): Caltech-101; in each pair, the first panel shows the average gain wrt. L1-MKL and the second the average gain wrt. L2-MKL.]
The following real-world datasets were used in the experiments: Caltech-5 [6], Caltech-101 [7]
and Oxford Flowers [10]. The Caltech datasets contain digital images of various objects like faces,
watches, ants etc.; whereas the Oxford dataset contains images of 17 varieties of flowers. The
Caltech-101 dataset has 101 categories of objects whereas Caltech-5 dataset is a subset of the
Caltech-101 dataset including images of Airplanes, Car sides, Faces, Leopards and Motorbikes
alone. Most categories of objects in the Caltech dataset have 50 images. The number of images
per category varies from 40 to 800. In the Oxford flowers dataset there are 80 images in each flower
category. In order to make the results presented here comparable to others in literature we have
followed the usual practice of generating training and test sets using a fixed number of pictures from
each object category and repeating the experiments with different random selections of pictures. For
the Caltech-5, Caltech-101 and Oxford flowers datasets we have used 50, 15, 60 images per object
category as training images and 50, 15, 20 images per object category as testing images respectively.
Also, in case of Caltech-5 and Oxford flowers datasets, the accuracies reported are the testset accuracies averaged over 10 such randomly sampled training and test datasets. Since the Caltech-101
dataset has large number of classes and the experiments are computationally intensive (100 choose
2 classifiers need to be built in each case), the results are averaged over 3 sets of training and test
datasets only. In case of the Caltech datasets, five feature descriptors8 were employed: SIFT, OpponentSIFT, rgSIFT, C-SIFT, Transformed Color SIFT. Whereas in case of Oxford flowers dataset,
following strategy of [11, 10], seven feature descriptors9 were employed. Using each feature descriptor, nine kernels were generated by varying the width-parameter of the Gaussian kernel. The
kernels can be grouped based on the feature descriptor they were generated from and the proposed
formulation can be employed to construct classifiers well-suited for object categorization. For eg. in
case of the Caltech datasets, n = 5 and nj = 9 ? j and in case of Oxford flowers dataset, n = 7 and
nj = 9 ? j. In all cases, the 1-vs-1 methodology was employed to handle the multi-class problems.
The results of the experiments are summarized in figure 1. Each plot shows the % gain in accuracy
achieved by MixNorm-MKL over L1-MKL and L2-MKL for each object category. Note that for
8 Code at http://staff.science.uva.nl/~ksande/research/colordescriptors/
9 Distance matrices available at http://www.robots.ox.ac.uk/~vgg/data/flowers/17/index.html
[Plots: training time (seconds) versus log10(number of kernels) on two binary problems; legend: MixNorm-MKL, L1-MKL.]
Figure 2: Scaling plots comparing scalability of mirror-descent based algorithm and simpleMKL.
most object categories, the gains are positive and moreover quite high. The best results are seen
in case of the Caltech-101 dataset: the peak and avg. gains over L1-MKL are 800%, 37.57% respectively and over L2-MKL are 600%, 21.75% respectively. The gain in terms of numbers for the
other two datasets are not as high merely because the baseline accuracies were themselves high.
The baseline accuracies i.e., the average accuracy achieved by L2-MKL (over all categories) were
93.84%, 34.81% and 85.97% for the Caltech-5, Caltech-101 and Oxford flowers datasets respectively. The figures clearly show that the proposed formulation outperforms state-of-the-art object
categorization techniques and is hence highly-suited for such tasks. Another observation was that the
average sparsity (% of kernels with zero weightages) with the methods MixNorm-MKL, L1-MKL
and L2-MKL is 57%, 96% and 0% respectively. Also, it was observed that L1-MKL almost always
selected kernels from one or two components (feature descriptors) only whereas MixNorm-MKL
(and of course L2-MKL) selected kernels from all the components. These observations clearly show
that the proposed formulation combines important kernels while eliminating redundant and noisy
kernels using the information embedded in the group structure of the kernels.
In the following, the results of experiments which compare the scalability of simpleMKL and
the proposed mirror-descent based algorithm wrt. the number of kernels are presented. Note that
in the special case, n = 1, the proposed formulation is exactly same as the l1 regularization based
formulation. Hence the mirror-descent based iterative algorithm proposed here can also be employed
for solving l1 regularization based MKL. Figure 2 shows plots of the training times as a function
of number of kernels with the algorithms on two binary classification problems encountered in the
object categorization experiments. The plots clearly show that the proposed algorithm outperforms
simpleMKL in terms of computational effort. Interestingly, it was found in our experiments that,
in most cases, the major computational effort at every iteration of SimpleMKL was in computing
the projection onto the feasible set! On the contrary Mirror descent allows an easily computable
closed form solution for the per-step auxiliary problem. We think this is the crucial advantage of
the proposed iterative algorithm over the gradient-descent based algorithms which were traditionally
employed for solving the MKL formulations.
5
Conclusions
This paper makes two important contributions: (a) a specific mixed-norm regularization based MKL formulation which is well-suited for object categorization and multi-modal tasks, and (b) an efficient mirror-descent based algorithm for solving the new formulation. Empirical results on real-world datasets show that the new formulation achieves far better generalization than state-of-the-art object categorization techniques. In some cases, the average gain in test-set accuracy compared to the state-of-the-art was as high as 37%. The mirror-descent based algorithm presented in the paper not only solves the proposed formulation efficiently but also outperforms SimpleMKL in solving the traditional l1 regularization based MKL. The speed-up was as high as 12 times in some cases. We are currently exploring applications of the proposed methodology to various other multi-modal tasks and improved variants of the mirror-descent algorithm [4] for faster convergence.
Acknowledgements CB was supported by grants from Yahoo! and IBM.
References
[1] F. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple Kernel Learning, Conic Duality, and the SMO Algorithm. In International Conference on Machine Learning, 2004.
[2] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
[3] Aharon Ben-Tal, Tamar Margalit, and Arkadi Nemirovski. The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography. SIAM Journal on Optimization, 12(1):79-108, 2001.
[4] Aharon Ben-Tal and Arkadi Nemirovski. Non-Euclidean Restricted Memory Level Method for Large-Scale Convex Optimization. Mathematical Programming, 102(3):407-456, 2005.
[5] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for SVM. Machine Learning, 46:131-159, 2002.
[6] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, 2003.
[7] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In IEEE CVPR 2004, Workshop on Generative-Model Based Vision, 2004.
[8] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the Kernel Matrix with Semidefinite Programming. Journal of Machine Learning Research, 5:27-72, 2004.
[9] Arkadi Nemirovski. Lectures on modern convex optimization (ch. 5.4). Available at www2.isye.gatech.edu/~nemirovs/Lect_ModConvOpt.pdf.
[10] M-E. Nilsback and A. Zisserman. A visual vocabulary for flower classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[11] M-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2008.
[12] Maria-Elena Nilsback and Andrew Zisserman. A Visual Vocabulary for Flower Classification. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 1447-1454, 2006.
[13] Maria-Elena Nilsback and Andrew Zisserman. Automated Flower Classification over a Large Number of Classes. In Proceedings of the Sixth Indian Conference on Computer Vision, Graphics & Image Processing, 2008.
[14] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491-2521, 2008.
[15] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[16] Soren Sonnenburg, Gunnar Ratsch, Christin Schafer, and Bernhard Scholkopf. Large Scale Multiple Kernel Learning. Journal of Machine Learning Research, 7:1531-1565, 2006.
[17] M. Szafranski, Y. Grandvalet, and A. Rakotomamonjy. Composite Kernel Learning. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML), 2008.
[18] Vladimir Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[19] M. Varma and D. Ray. Learning the Discriminative Power Invariance Trade-off. In Proceedings of the International Conference on Computer Vision, 2007.
[20] Zenglin Xu, Rong Jin, Irwin King, and Michael R. Lyu. An Extended Level Method for Multiple Kernel Learning. In Advances in Neural Information Processing Systems, 2008.
Efficient Large-Scale Distributed Training of
Conditional Maximum Entropy Models
Gideon Mann
Google
[email protected]
Ryan McDonald
Google
[email protected]
Mehryar Mohri
Courant Institute and Google
[email protected]
Daniel D. Walker*
NLP Lab, Brigham Young University
[email protected]
Nathan Silberman
Google
[email protected]
Abstract
Training conditional maximum entropy models on massive data sets requires significant computational resources. We examine three common distributed training
methods for conditional maxent: a distributed gradient computation method, a
majority vote method, and a mixture weight method. We analyze and compare the
CPU and network time complexity of each of these methods and present a theoretical analysis of conditional maxent models, including a study of the convergence
of the mixture weight method, the most resource-efficient technique. We also report the results of large-scale experiments comparing these three methods which
demonstrate the benefits of the mixture weight method: this method consumes
less resources, while achieving a performance comparable to that of standard approaches.
1 Introduction
Conditional maximum entropy models [1, 3], conditional maxent models for short, also known as
multinomial logistic regression models, are widely used in applications, most prominently for multiclass classification problems with a large number of classes in natural language processing [1, 3] and
computer vision [12] over the last decade or more.
These models are based on the maximum entropy principle of Jaynes [11], which consists of selecting among the models approximately consistent with the constraints, the one with the greatest
entropy. They benefit from a theoretical foundation similar to that of standard maxent probabilistic
models used for density estimation [8]. In particular, a duality theorem for conditional maxent model
shows that these models belong to the exponential family. As shown by Lebanon and Lafferty [13],
in the case of two classes, these models are also closely related to AdaBoost, which can be viewed as
solving precisely the same optimization problem with the same constraints, modulo a normalization
constraint needed in the conditional maxent case to derive probability distributions.
While the theoretical foundation of conditional maxent models makes them attractive, the computational cost of their optimization problem is often prohibitive for data sets of several million points.
A number of algorithms have been described for batch training of conditional maxent models using
a single processor. These include generalized iterative scaling [7], improved iterative scaling [8],
gradient descent, conjugate gradient methods, and second-order methods [15, 18].
This paper examines distributed methods for training conditional maxent models that can scale to
very large samples of up to 1B instances. Both batch algorithms and on-line training algorithms such
* This work was conducted while at Google Research, New York.
as that of [5] or stochastic gradient descent [21] can benefit from parallelization, but we concentrate
here on batch distributed methods.
We examine three common distributed training methods: a distributed gradient computation method
[4], a majority vote method, and a mixture weight method. We analyze and compare the CPU and
network time complexity of each of these methods (Section 2) and present a theoretical analysis of
conditional maxent models (Section 3), including a study of the convergence of the mixture weight
method, the most resource-efficient technique. We also report the results of large-scale experiments
comparing these three methods which demonstrate the benefits of the mixture weight method (Section 4): this method consumes less resources, while achieving a performance comparable to that of
standard approaches such as the distributed gradient computation method.1
2 Distributed Training of Conditional Maxent Models
In this section, we first briefly describe the optimization problem for conditional maximum entropy
models, then discuss three common methods for distributed training of these models and compare
their CPU and network time complexity.
2.1 Conditional Maxent Optimization problem
Let X be the input space, Y the output space, and Φ : X × Y → H a (feature) mapping to a Hilbert space H, which in many practical settings coincides with R^N, N = dim(H) < ∞. We denote by ‖·‖ the norm induced by the inner product associated to H.
Let S = ((x_1, y_1), ..., (x_m, y_m)) be a training sample of m pairs in X × Y. A conditional maximum entropy model is a conditional probability of the form p_w[y|x] = (1/Z(x)) exp(w · Φ(x, y)) with Z(x) = Σ_{y∈Y} exp(w · Φ(x, y)), where the weight or parameter vector w ∈ H is the solution of the following optimization problem:

    w = argmin_{w∈H} F_S(w) = argmin_{w∈H} λ‖w‖² − (1/m) Σ_{i=1}^m log p_w[y_i|x_i].    (1)

Here, λ ≥ 0 is a regularization parameter typically selected via cross-validation. The optimization
problem just described corresponds to an L2 regularization. Many other types of regularization have
been considered for the same problem in the literature, in particular L1 regularization or regularizations based on other norms. This paper will focus on conditional maximum entropy models with L2
regularization.
These models have been extensively used and studied in natural language processing [1, 3] and
other areas where they are typically used for classification. Given the weight vector w, the output y
predicted by the model for an input x is:
    y = argmax_{y∈Y} p_w[y|x] = argmax_{y∈Y} w · Φ(x, y).    (2)
Since the function F_S is convex and differentiable, gradient-based methods can be used to find a global minimizer w of F_S. Standard training methods such as iterative scaling, gradient descent, conjugate gradient, and limited-memory quasi-Newton all have the general form of Figure 1, where the update function Δ : H → H applied to the gradient ∇F_S(w) depends on the optimization method selected. T is the number of iterations needed for the algorithm to converge to a global minimum. In practice, convergence occurs when F_S(w) differs by less than a constant ε in successive iterations of the loop.
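For concreteness, here is a small NumPy sketch of the objective F_S and its gradient, assuming the common decomposition Φ(x, y) = e_y ⊗ x so that the model keeps one weight vector per class; the function and variable names are ours, not from the paper.

    import numpy as np

    def maxent_loss_grad(W, X, y, lam):
        # F_S(W) of Eq. (1) and its gradient, for W of shape (K, d) holding one
        # weight vector per class, inputs X of shape (m, d), integer labels y.
        m = X.shape[0]
        scores = X @ W.T                             # w . Phi(x, y) for every class
        scores -= scores.max(axis=1, keepdims=True)  # stabilize the log-sum-exp
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True)            # p_w[y | x]
        loss = lam * (W ** 2).sum() - np.log(P[np.arange(m), y]).mean()
        E = P.copy()
        E[np.arange(m), y] -= 1.0                    # p_w[. | x] minus one-hot(y)
        grad = 2 * lam * W + (E.T @ X) / m
        return loss, grad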
2.2 Distributed Gradient Computation Method
Since the points are sampled i.i.d., the gradient computation in step 3 of Figure 1 can be distributed
across p machines. Consider a sample S = (S1 , . . . , Sp ) of pm points formed by p subsamples of
1 A batch parallel estimation technique for maxent models based on their connection with AdaBoost is also
described by [5]. This algorithm is quite different from the distributed gradient computation method, but, as for
that method, it requires a substantial amount of network resources, since updates need to be transferred to the
master at every iteration.
    1  w ← 0
    2  for t ← 1 to T do
    3      ∇F_S(w) ← GRADIENT(F_S(w))
    4      w ← w + Δ(∇F_S(w))
    5  return w

Figure 1: Standard Training

    1  w ← 0
    2  for t ← 1 to T do
    3      ∇F_S(w) ← DISTGRADIENT(F_{S_k}(w), on p machines)
    4      w ← w + Δ(∇F_S(w))
    5      UPDATE(w, on p machines)
    6  return w

Figure 2: Distributed Gradient Training
m points drawn i.i.d., S_1, ..., S_p. At each iteration, the gradients ∇F_{S_k}(w) are computed by these p machines in parallel. These separate gradients are then summed up to compute the exact global gradient on a single machine, which also performs the optimization step and updates the weight vector received by all other machines (Figure 2). Chu et al. [4] describe a map-reduce formulation for this computation, where each training epoch consists of one map (compute each ∇F_{S_k}(w)) and one reduce (update w). However, the update method they present is that of Newton-Raphson, which requires the computation of the Hessian. We do not consider such strategies, since Hessian computations are often infeasible for large data sets.
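Below is a minimal single-process simulation of this scheme, assuming a binary logistic loss as a stand-in for the maxent loss and plain gradient descent as the update Δ; the shard loop plays the role of the p machines, and all names and constants are ours.

    import numpy as np

    rng = np.random.default_rng(0)

    def data_grad(w, X, y):
        # Gradient of the average logistic loss on one shard (the per-machine
        # piece of the global gradient; labels y are in {-1, +1}).
        prob = 1.0 / (1.0 + np.exp(-(X @ w) * y))
        return -(((1.0 - prob) * y) @ X) / len(y)

    p, m, d, lam, step = 4, 250, 10, 1e-3, 0.5
    w_true = rng.normal(size=d)
    shards = []
    for _ in range(p):
        X = rng.normal(size=(m, d))
        y = np.sign(X @ w_true + 0.1 * rng.normal(size=m))
        shards.append((X, y))

    w = np.zeros(d)
    for t in range(200):
        grads = [data_grad(w, X, y) for X, y in shards]  # "map": one per machine
        g = sum(grads) / p + 2 * lam * w                 # "reduce" + regularizer
        w -= step * g                                    # Delta: a plain gradient step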
2.3 Majority Vote Method
The ensemble methods described in the next two paragraphs are based on mixture weights μ ∈ R^p. Let Δ_p = {μ ∈ R^p : μ ≥ 0 ∧ Σ_{k=1}^p μ_k = 1} denote the simplex of R^p and let μ ∈ Δ_p. In the absence of any prior knowledge, μ is chosen to be the uniform mixture μ_0 = (1/p, ..., 1/p), as in all of our experiments.
Instead of computing the gradient of the global function in parallel, a (weighted) majority vote method can be used. Each machine receives one subsample S_k, k ∈ [1, p], and computes w_k = argmin_{w∈H} F_{S_k}(w) by applying the standard training of Figure 1 to S_k. The output y predicted by the majority vote method for an input x is

    y = argmax_{y∈Y} Σ_{k=1}^p μ_k I(argmax_{y′∈Y} p_{w_k}[y′|x] = y),    (3)

where I is an indicator function of the predicate it takes as argument. Alternatively, the conditional class probabilities could be used to take into account the uncertainty of each classifier: y = argmax_{y∈Y} Σ_{k=1}^p μ_k p_{w_k}[y|x].
2.4 Mixture Weight Method
The cost of storing p weight vectors can make the majority vote method unappealing. Instead, a single mixture weight w_μ can be defined from the weight vectors w_k, k ∈ [1, p]:

    w_μ = Σ_{k=1}^p μ_k w_k.    (4)

The mixture weight w_μ can be used directly for classification.
2.5 Comparison of CPU and Network Times
This section compares the CPU and network time complexity of the three training methods just
described. Table 1 summarizes these results. Here, we denote by N the dimension of H. User CPU
represents the CPU time experienced by the user, cumulative CPU the total amount of CPU time for
the machines participating in the computation, and latency the experienced runtime effects due to
network activity. The cumulative network usage is the amount of data transferred across the network
during a distributed computation.
For a training sample of pm points, both the user and cumulative CPU times are in Ocpu (T pmN )
when training on a single machine (Figure 1) since at each of the T iterations, the gradient computation must iterate over all pm training points and update all the components of w.
    Method                 Training: User CPU + Latency    Training: Cum. CPU        Training: Cum. Network   Prediction: User CPU
    Single Machine         O_cpu(pmNT)                     O_cpu(pmNT)               N/A                      O_cpu(N)
    Distributed Gradient   O_cpu(mNT) + O_lat(NT)          O_cpu(pmNT)               O_net(pNT)               O_cpu(N)
    Majority Vote          O_cpu(mNT_max) + O_lat(N)       Σ_{k=1}^p O_cpu(mNT_k)    O_net(pN)                O_cpu(pN)
    Mixture Weight         O_cpu(mNT_max) + O_lat(N)       Σ_{k=1}^p O_cpu(mNT_k)    O_net(pN)                O_cpu(N)

Table 1: Comparison of CPU and network times.
For the distributed gradient method (Section 2.2), the worst-case user CPU of the gradient and
parameter update computations (lines 3-4 of Figure 2) is Ocpu (mN + pN + N ) since each parallel
gradient calculation takes mN to compute the gradient for m instances, p gradients of size N need
to be summed, and the parameters updated. We assume here that the time to compute Δ is negligible. If we assume that p ≪ m, then the user CPU is in O_cpu(mNT). Note that the number of iterations it takes to converge, T, is the same as when training on a single machine since the computations are identical.
In terms of network usage, a distributed gradient strategy will incur a cost of O_net(pNT) and a latency proportional to O_lat(NT), since at each iteration w must be transmitted to each of the p machines (in parallel) and each ∇F_{S_k}(w) returned back to the master. Network time can be improved through better data partitioning of S when Φ(x, y) is sparse. The exact runtime cost of
latency is complicated as it depends on factors such as the physical distance between the master and
each machine, connectivity, the switch fabric in the network, and CPU costs required to manage
messages. For parallelization on massively multi-core machines [4], communication latency might
be negligible. However, in large data centers running commodity machines, a more common case,
network latency cost can be significant.
The training times are identical for the majority vote and mixture weight techniques. Let Tk be the
number of iterations for training the kth mixture component wk and let Tmax = max{T1 , . . . , Tp }.
Then, the user CPU usage of training is in Ocpu (mN Tmax ), similar to that of the distributed gradient
method. However, in practice, Tmax is typically less than T since convergence is often faster with
smaller data sets. A crucial advantage of these methods over the distributed gradient method is that
their network usage is significantly less than that of the distributed gradient computation. While
parameters and gradients are exchanged at each iteration for this method, majority vote and mixture
weight techniques only require the final weight vectors to be transferred at the conclusion of training.
Thus, the overall network usage is Onet (pN ) with a latency in Olat (N T ). The main difference
between the majority vote and mixture weight methods is the user CPU (and memory usage) for
prediction which is in Ocpu (pN ) versus Ocpu (N ) for the mixture weight method. Prediction could
be distributed over p machines for the majority vote method, but that would incur additional machine
and network bandwidth costs.
3 Theoretical Analysis
This section presents a theoretical analysis of conditional maxent models, including a study of the
convergence of the mixture weight method, the most resource-efficient technique, as suggested in
the previous section.
The results we obtain are quite general and include the proof of several fundamental properties of
the weight vector w obtained when training a conditional maxent model. We first prove the stability
of w in response to a change in one of the training points. We then give a convergence bound for
w as a function of the sample size in terms of the norm of the feature space and also show a similar
result for the mixture weight w_μ. These results are used to compare the weight vector w_{pm} obtained by training on a sample of size pm with the mixture weight vector w_μ.
Consider two training samples of size m, S = (z_1, ..., z_{m−1}, z_m) and S′ = (z_1, ..., z_{m−1}, z′_m), with elements in X × Y, that differ by a single training point, which we arbitrarily set as the last one of each sample: z_m = (x_m, y_m) and z′_m = (x′_m, y′_m). Let w denote the parameter vector returned by conditional maximum entropy when trained on sample S, w′ the vector returned when trained on S′, and let Δw denote w′ − w. We shall assume that the feature vectors are bounded, that is, there exists R > 0 such that for all (x, y) in X × Y, ‖Φ(x, y)‖ ≤ R. Our bounds are derived using techniques similar to those used by Bousquet and Elisseeff [2], or other authors, e.g., [6], in the analysis of stability. In what follows, for any w ∈ H and z = (x, y) ∈ X × Y, we denote by L_z(w) the negative log-likelihood −log p_w[y|x].
Theorem 1. Let S′ and S be two arbitrary samples of size m differing only by one point. Then, the following stability bound holds for the weight vector returned by a conditional maxent model:

    ‖Δw‖ ≤ 2R/(λm).    (5)
Proof. We denote by B_F the Bregman divergence associated to a convex and differentiable function F, defined for all u, u′ by B_F(u′‖u) = F(u′) − F(u) − ∇F(u) · (u′ − u). Let G_S denote the function u ↦ (1/m) Σ_{i=1}^m L_{z_i}(u) and W the function u ↦ λ‖u‖². G_S and W are convex and differentiable functions. Since the Bregman divergence is non-negative, B_{G_S} ≥ 0 and B_{F_S} = B_W + B_{G_S} ≥ B_W. Similarly, B_{F_{S′}} ≥ B_W. Thus, the following inequality holds:

    B_W(w′‖w) + B_W(w‖w′) ≤ B_{F_S}(w′‖w) + B_{F_{S′}}(w‖w′).    (6)

By the definition of w and w′ as the minimizers of F_S and F_{S′}, ∇F_S(w) = ∇F_{S′}(w′) = 0 and

    B_{F_S}(w′‖w) + B_{F_{S′}}(w‖w′) = F_S(w′) − F_S(w) + F_{S′}(w) − F_{S′}(w′)
        = (1/m)[(L_{z_m}(w′) − L_{z_m}(w)) + (L_{z′_m}(w) − L_{z′_m}(w′))]
        ≤ −(1/m)[∇L_{z_m}(w′) · (w − w′) + ∇L_{z′_m}(w) · (w′ − w)]
        = −(1/m)[∇L_{z′_m}(w) − ∇L_{z_m}(w′)] · (w′ − w),

where we used the convexity of L_{z′_m} and L_{z_m}. It is not hard to see that B_W(w′‖w) + B_W(w‖w′) = 2λ‖Δw‖². Thus, the application of the Cauchy-Schwarz inequality to the inequality just established yields

    2λ‖Δw‖ ≤ (1/m)‖∇L_{z_m}(w′) − ∇L_{z′_m}(w)‖ ≤ (1/m)(‖∇L_{z_m}(w′)‖ + ‖∇L_{z′_m}(w)‖).    (7)

The gradient of w ↦ L_{z_m}(w) = log Σ_{y∈Y} e^{w·Φ(x_m,y)} − w · Φ(x_m, y_m) is given by

    ∇L_{z_m}(w) = (Σ_{y∈Y} e^{w·Φ(x_m,y)} Φ(x_m, y)) / (Σ_{y′∈Y} e^{w·Φ(x_m,y′)}) − Φ(x_m, y_m)
                = E_{y∼p_w[·|x_m]}[Φ(x_m, y) − Φ(x_m, y_m)].

Thus, we obtain ‖∇L_{z_m}(w′)‖ ≤ E_{y∼p_{w′}[·|x_m]}[‖Φ(x_m, y) − Φ(x_m, y_m)‖] ≤ 2R and similarly ‖∇L_{z′_m}(w)‖ ≤ 2R, which leads to the statement of the theorem.
Let D denote the distribution according to which training and test points are drawn and let F⋆ be the objective function associated to the optimization defined with respect to the true log loss:

    F⋆(w) = λ‖w‖² + E_{z∼D}[L_z(w)].    (8)

F⋆ is a convex function since E_D[L_z] is convex. Let the solution of this optimization be denoted by w⋆ = argmin_{w∈H} F⋆(w).
Theorem 2. Let w ∈ H be the weight vector returned by conditional maximum entropy when trained on a sample S of size m. Then, for any δ > 0, with probability at least 1 − δ, the following inequality holds:

    ‖w − w⋆‖ ≤ (R / (λ√(m/2))) (1 + √(log(1/δ))).    (9)
Proof. Let S and S′ be as before samples of size m differing by a single point. To derive this bound, we apply McDiarmid's inequality [17] to Ψ(S) = ‖w − w⋆‖. By the triangle inequality and Theorem 1, the following Lipschitz property holds:

    |Ψ(S′) − Ψ(S)| = |‖w′ − w⋆‖ − ‖w − w⋆‖| ≤ ‖w′ − w‖ ≤ 2R/(λm).    (10)

Thus, by McDiarmid's inequality, Pr[Ψ − E[Ψ] ≥ ε] ≤ exp(−λ²ε²m / (2R²)). The following bound can be shown for the expectation of Ψ (see the longer version of this paper): E[Ψ] ≤ 2R/(λ√(2m)). Using this bound and setting the right-hand side of McDiarmid's inequality to δ shows that the following holds

    Ψ ≤ E[Ψ] + (2R/(λ√(2m))) √(log(1/δ)) ≤ (2R/(λ√(2m))) (1 + √(log(1/δ))),    (11)

with probability at least 1 − δ.
Note that, remarkably, the bound of Theorem 2 does not depend on the dimension of the feature
space but only on the radius R of the sphere containing the feature vectors.
Consider now a sample S = (S_1, ..., S_p) of pm points formed by p subsamples of m points drawn i.i.d. and let w_μ denote the μ-mixture weight as defined in Section 2.4. The following theorem gives a learning bound for w_μ.

Theorem 3. For any μ ∈ Δ_p, let w_μ ∈ H denote the mixture weight vector obtained from a sample of size pm by combining the p weight vectors w_k, k ∈ [1, p], each returned by conditional maximum entropy when trained on the sample S_k of size m. Then, for any δ > 0, with probability at least 1 − δ, the following inequality holds:

    ‖w_μ − w⋆‖ ≤ E[‖w_μ − w⋆‖] + (R‖μ‖ / (λ√(m/2))) √(log(1/δ)).    (12)

For the uniform mixture μ_0 = (1/p, ..., 1/p), the bound becomes

    ‖w_μ − w⋆‖ ≤ E[‖w_μ − w⋆‖] + (R / (λ√(pm/2))) √(log(1/δ)).    (13)
Proof. The result follows by application of McDiarmid's inequality to Ψ(S) = ‖w_μ − w⋆‖. Let S′ = (S′_1, ..., S′_p) denote a sample differing from S by one point, say in subsample S_k. Let w′_k denote the weight vector obtained by training on subsample S′_k and w′_μ the mixture weight vector associated to S′. Then, by the triangle inequality and the stability bound of Theorem 1, the following holds:

    |Ψ(S′) − Ψ(S)| = |‖w′_μ − w⋆‖ − ‖w_μ − w⋆‖| ≤ ‖w′_μ − w_μ‖ = μ_k‖w′_k − w_k‖ ≤ 2μ_k R/(λm).

Thus, by McDiarmid's inequality,

    Pr[Ψ(S) − E[Ψ(S)] ≥ ε] ≤ exp(−2ε² / (Σ_{k=1}^p m (2μ_k R/(λm))²)) = exp(−λ²mε² / (2R²‖μ‖²)),    (14)

which proves the first statement and the uniform mixture case since ‖μ_0‖ = 1/√p.
Theorems 2 and 3 help us compare the weight vector w_{pm} obtained by training on a sample of size pm versus the mixture weight vector w_{μ_0}. The regularization parameter λ is a function of the sample size. To simplify the analysis, we shall assume that λ = O(1/m^{1/4}) for a sample of size m. A similar discussion holds for other comparable asymptotic behaviors. By Theorem 2, ‖w_{pm} − w⋆‖ converges to zero in O(1/(λ√(pm))) = O(1/(pm)^{1/4}), since λ = O(1/(pm)^{1/4}) in that case. But, by Theorem 3, the slack term bounding ‖w_{μ_0} − w⋆‖ converges to zero at the faster rate O(1/(λ√(pm))) = O(1/(p^{1/2} m^{1/4})), since here λ = O(1/m^{1/4}). The expectation term appearing in the bound on ‖w_{μ_0} − w⋆‖, E[‖w_{μ_0} − w⋆‖], does not benefit from the same convergence rate however. E[‖w_{μ_0} − w⋆‖] always converges as fast as the expectation E[‖w_m − w⋆‖] for a weight vector w_m obtained by training on a sample of size m since, by the triangle inequality, the following holds:

    E[‖w_μ − w⋆‖] = E[‖(1/p) Σ_{k=1}^p (w_k − w⋆)‖] ≤ (1/p) Σ_{k=1}^p E[‖w_k − w⋆‖] = E[‖w_1 − w⋆‖].    (15)

By the proof of Theorem 2, E[‖w_1 − w⋆‖] ≤ R/(λ√(m/2)) = O(1/(λ√m)), thus E[‖w_μ − w⋆‖] ≤ O(1/m^{1/4}). In summary, w_{μ_0} always converges significantly faster than w_m. The convergence bound for w_{μ_0} contains two terms, one somewhat more favorable, one somewhat less than its counterpart term in the bound for w_{pm}.
    Data set                 pm        |Y|   |X|     sparsity   p
    English POS [16]         1 M       24    500 K   0.001      10
    Sentiment                9 M       3     500 K   0.001      10
    RCV1-v2 [14]             26 M      103   10 K    0.08       10
    Speech                   50 M      129   39      1.0        499
    Deja News Archive        306 M     8     50 K    0.002      200
    Deja News Archive 250K   306 M     8     250 K   0.0004     200
    Gigaword [10]            1,000 M   96    10 K    0.001      1000

Table 2: Description of data sets. The column named sparsity reports the frequency of non-zero feature values for each data set.
4 Experiments
We ran a number of experiments on data sets ranging in size from 1M to 1B labeled instances (see
Table 2) to compare the three distributed training methods described in Section 2. Our experiments
were carried out using a large cluster of commodity machines with a local shared disk space and a
high rate of connectivity between each machine and between machines and disk. Thus, while the
processes did not run on one multi-core supercomputer, the network latency between machines was
minimized.
We report accuracy, wall clock, cumulative CPU usage, and cumulative network usage for all of our
experiments. Wall clock measures the combined effects of the user CPU and latency costs (column
1 of Table 1), and includes the total time for training, including all summations. Network usage
measures the amount of data transferred across the network. Due to the set-up of our cluster, this
includes both machine-to-machine traffic and machine-to-disk traffic. The resource estimates were
calculated by point-sampling and integrating over the sampling time. For all three methods, we used
the same base implementation of conditional maximum entropy, modified only in whether or not the
gradient was computed in a distributed fashion.
Our first set of experiments were carried out with ?medium? scale data sets containing 1M-300M instances. These included: English part-of-speech tagging, generated from the Penn Treebank
[16] using the first character of each part-of-speech tag as output, sections 2-21 for training, section
23 for testing and a feature representation based on the identity, affixes, and orthography of the input word and the words in a window of size two; Sentiment analysis, generated from a set of
online product, service, and merchant reviews with a three-label output (positive, negative, neutral),
with a bag of words feature representation; RCV1-v2 as described by [14], where documents having
multiple labels were included multiple times, once for each label; Acoustic Speech Data, a 39dimensional input consisting of 13 PLP coefficients, plus their first and second derivatives, and 129
outputs (43 phones ? 3 acoustic states); and the Deja News Archive, a text topic classification
problem generated from a collection of Usenet discussion forums from the years 1995-2000. For all
text experiments, we used random feature mixing [9, 20] to control the size of the feature space.
The results reported in Table 3 show that the accuracy of the mixture weight method consistently
matches or exceeds that of the majority vote method. As expected, the resource costs here are
similar, with slight differences due to the point-sampling methods and the overhead associated with
storing p models in memory and writing them to disk. For some data sets, we could not report
majority vote results as all models could not fit into memory on a single machine.
The comparison shows that in some cases the mixture weight method takes longer and achieves
somewhat better performance than the distributed gradient method while for other data sets it terminates faster, at a slight loss in accuracy. These differences may be due to the performance of the
optimization with respect to the regularization parameter ?. However, the results clearly demonstrate that the mixture weight method achieves comparable accuracies at a much decreased cost in
network bandwidth, upwards of 1000x. Depending on the cost model assessed for the underlying
network and CPU resources, this may make mixture weight a significantly more appealing strategy.
In particular, if network usage leads to significant increases in latency, unlike our current experimental set-up of high rates of connectivity, then the mixture weight method could be substantially
faster to train. The outlier appears to be the acoustic speech data, where both mixture weight and
distributed gradient have comparable network usage, 158GB and 200GB, respectively. However, the
bulk of this comes from the fact that the data set itself is 157GB in size, which makes the network
    Data set (m, p)      Training Method        Accuracy   Wall Clock   Cumulative CPU   Network Usage
    English POS          Distributed Gradient   97.60%     17.5 m       11.0 h           652 GB
    (m=100k, p=10)       Majority Vote          96.80%     12.5 m       18.5 h           0.686 GB
                         Mixture Weight         96.80%     5 m          11.5 h           0.015 GB
    Sentiment            Distributed Gradient   81.18%     104 m        123 h            367 GB
    (m=900k, p=10)       Majority Vote          81.25%     131 m        168 h            3 GB
                         Mixture Weight         81.30%     110 m        163 h            9 GB
    RCV1-v2              Distributed Gradient   27.03%     48 m         407 h            479 GB
    (m=2.6M, p=10)       Majority Vote          26.89%     54 m         474 h            3 GB
                         Mixture Weight         27.15%     56 m         473 h            0.108 GB
    Speech               Distributed Gradient   34.95%     160 m        511 h            200 GB
    (m=100k, p=499)      Mixture Weight         34.99%     130 m        534 h            158 GB
    Deja                 Distributed Gradient   64.74%     327 m        733 h            5,283 GB
    (m=1.5M, p=200)      Mixture Weight         65.46%     316 m        707 h            48 GB
    Deja 250K            Distributed Gradient   67.03%     340 m        698 h            17,428 GB
    (m=1.5M, p=200)      Mixture Weight         66.86%     300 m        710 h            65 GB
    Gigaword             Distributed Gradient   51.16%     240 m        18,598 h         13,000 GB
    (m=1M, p=1k)         Mixture Weight         50.12%     215 m        17,998 h         21 GB

Table 3: Accuracy and resource costs for distributed training strategies.
usage closer to 1GB for the mixture weight and 40GB for distributed gradient method when we
discard machine-to-disk traffic.
For the largest experiment, we examined the task of predicting the next character in a sequence
of text [19], which has implications for many natural language processing tasks. As a training
and evaluation corpus we used the English Gigaword corpus [10] and used its full ASCII output space of around 100 output classes (uppercase and lowercase alphabetic characters, digits, punctuation, and whitespace). For each character s, we designed a set of observed features based on substrings from s_{-1}, the previous character, to s_{-10}, 9 previous characters, and hashed each into a 10k-dimensional space in an effort to improve speed. Since there were around
100 output classes, this led to roughly 1M parameters. We then sub-sampled 1B characters from
the corpus as well as 10k testing characters and established a training set of 1000 subsets, of 1M
instances each. For the experiments described above, the regularization parameter λ was kept fixed across the different methods. Here, we decreased the parameter λ for the distributed gradient method
since less regularization was needed when more data was available, and since there were three orders
of magnitude difference between the training size for each independent model and the distributed
gradient. We compared only the distributed gradient and mixture weight methods since the majority
vote method exceeded memory capacity. On this data set, the network usage is on a different scale
than most of the previous experiments, though comparable to Deja 250, with the distributed gradient
method transferring 13TB across the network. Overall, the mixture weight method consumes less
resources: less bandwidth and less time (both wall clock and CPU). With respect to accuracy, the
mixture weight method does only slightly worse than the distributed gradient method. The individual
models in the mixture weight method ranged between 49.73% to 50.26%, with a mean accuracy
of 50.07%, so a mixture weight model improves slightly over a random subsample models and
decreases the overall variance.
5 Conclusion
Our analysis and experiments give significant support for the mixture weight method for training
very large-scale conditional maximum entropy models with L2 regularization. Empirical results
suggest that this method achieves similar or better accuracies while reducing network usage by
about three orders of magnitude and modestly reducing the wall clock time, typically by about 15%
or more. In distributed environments without a high rate of connectivity, the decreased network
usage of the mixture weight method should lead to substantial gains in wall clock as well.
Acknowledgments
We thank Yishay Mansour for his comments on an earlier version of this paper.
References
[1] A. Berger, V. Della Pietra, and S. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71, 1996.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499-526, 2002.
[3] S. F. Chen and R. Rosenfeld. A survey of smoothing techniques for ME models. IEEE Transactions on Speech and Audio Processing, 8(1):37-50, 2000.
[4] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-Reduce for machine learning on multicore. In Advances in Neural Information Processing Systems, 2007.
[5] M. Collins, R. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48, 2002.
[6] C. Cortes, M. Mohri, M. Riley, and A. Rostamizadeh. Sample selection bias correction theory. In Proceedings of ALT 2008, volume 5254 of LNCS, pages 38-53. Springer, 2008.
[7] J. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, pages 1470-1480, 1972.
[8] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393, 1997.
[9] K. Ganchev and M. Dredze. Small statistical models by random feature mixing. In Workshop on Mobile Language Processing, ACL, 2008.
[10] D. Graff, J. Kong, K. Chen, and K. Maeda. English Gigaword third edition, Linguistic Data Consortium, Philadelphia, 2007.
[11] E. T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620-630, 1957.
[12] J. Jeon and R. Manmatha. Using maximum entropy for automatic image annotation. In International Conference on Image and Video Retrieval, 2004.
[13] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In Advances in Neural Information Processing Systems, pages 447-454, 2001.
[14] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[15] R. Malouf. A comparison of algorithms for maximum entropy parameter estimation. In International Conference on Computational Linguistics (COLING), 2002.
[16] M. Marcus, M. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
[17] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, pages 148-188. Cambridge University Press, Cambridge, 1989.
[18] J. Nocedal and S. Wright. Numerical Optimization. Springer, 1999.
[19] C. E. Shannon. Prediction and entropy of printed English. Bell Systems Technical Journal, 30:50-64, 1951.
[20] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In International Conference on Machine Learning, 2009.
[21] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In International Conference on Machine Learning, 2004.
Dual Averaging Method for Regularized Stochastic
Learning and Online Optimization
Lin Xiao
Microsoft Research, Redmond, WA 98052
[email protected]
Abstract
We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the l1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularization structure in an online setting. In particular, at each iteration, the learning variables are adjusted by solving a simple optimization problem that involves the running average of all past subgradients of the loss functions and the whole regularization term, not just its subgradient. Computational experiments show that the RDA method can be very effective for sparse online learning with l1-regularization.
1 Introduction
In machine learning, online algorithms operate by repetitively drawing random examples, one at a
time, and adjusting the learning variables using simple calculations that are usually based on the
single example only. The low computational complexity (per iteration) of online algorithms is often
associated with their slow convergence and low accuracy in solving the underlying optimization
problems. As argued in [1, 2], the combined low complexity and low accuracy, together with other
tradeoffs in statistical learning theory, still make online algorithms a favorite choice for solving large-scale learning problems. Nevertheless, traditional online algorithms, such as stochastic gradient descent (SGD), have limited capability of exploiting problem structure in solving regularized learning problems. As a result, their low accuracy often makes it hard to obtain the desired regularization effects, e.g., sparsity under l1-regularization. In this paper, we develop a new online algorithm, the
regularized dual averaging (RDA) method, that can explicitly exploit the regularization structure in
an online setting. We first describe the two types of problems addressed by the RDA method.
1.1 Regularized stochastic learning
The regularized stochastic learning problems we consider are of the following form:

    minimize_w  φ(w) ≡ E_z f(w, z) + Ψ(w)    (1)

where w ∈ R^n is the optimization variable (called weights in many learning problems), z = (x, y) is an input-output pair drawn from an (unknown) underlying distribution, f(w, z) is the loss function of using w and x to predict y, and Ψ(w) is a regularization term. We assume f(w, z) is convex in w for each z, and Ψ(w) is a closed convex function. Examples of the loss function f(w, z) include:
• Least-squares: x ∈ R^n, y ∈ R, and f(w, (x, y)) = (y − wᵀx)².
• Hinge loss: x ∈ R^n, y ∈ {+1, −1}, and f(w, (x, y)) = max{0, 1 − y(wᵀx)}.
• Logistic regression: x ∈ R^n, y ∈ {+1, −1}, and f(w, (x, y)) = log(1 + exp(−y(wᵀx))).
Examples of the regularization term Ψ(w) include:
• l1-regularization: Ψ(w) = λ‖w‖₁ with λ > 0. With l1-regularization, we hope to get a relatively sparse solution, i.e., with many entries of w being zeroes.
• l2-regularization: Ψ(w) = (σ/2)‖w‖₂², for some σ > 0.
• Convex constraints: Ψ(w) is the indicator function of a closed convex set C, i.e., Ψ(w) = 0 if w ∈ C and +∞ otherwise.
In this paper, we focus on online algorithms that process samples sequentially as they become available. Suppose at time t, we have the most up-to-date weight w_t. Whenever z_t is available, we can evaluate the loss f(w_t, z_t) and a subgradient g_t ∈ ∂f(w_t, z_t) (here ∂f(w, z) denotes the subdifferential of f with respect to w). Then we compute the new weight w_{t+1} based on this information. For solving the problem (1), the standard stochastic gradient descent (SGD) method takes the form

    w_{t+1} = w_t − α_t (g_t + ξ_t),    (2)

where α_t is an appropriate stepsize, and ξ_t is a subgradient of Ψ at w_t. The SGD method has been
very popular in the machine learning community due to its capability of scaling with large data sets
and good generalization performance observed in practice (e.g., [3, 4]).
Nevertheless, a main drawback of the SGD method is its lack of capability in exploiting problem structure, especially for regularized learning problems. As a result, its low accuracy (compared with interior-point methods for batch optimization) often makes it hard to obtain the desired regularization effect. An important example and motivation for this paper is l1-regularized stochastic learning, where Ψ(w) = λ‖w‖₁. Even with a relatively big λ, the SGD method (2) usually does not generate sparse solutions, because only in very rare cases do two float numbers add up to zero. Various methods for rounding or truncating the solutions have been proposed to generate sparse solutions (e.g., [5]).
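The lack of sparsity is easy to see in simulation. Here is a minimal sketch of the plain SGD update (2) with an l1 subgradient on a synthetic sparse regression problem; the data, stepsizes, and names are ours.

    import numpy as np

    rng = np.random.default_rng(0)
    n, lam, T = 20, 0.1, 1000
    w_star = np.zeros(n); w_star[:3] = 1.0       # sparse ground truth
    w = np.zeros(n)

    for t in range(1, T + 1):
        x = rng.normal(size=n)
        y = x @ w_star + 0.1 * rng.normal()
        g = 2 * (w @ x - y) * x                  # subgradient of the squared loss
        xi = lam * np.sign(w)                    # subgradient of lam * ||w||_1
        w -= (0.1 / np.sqrt(t)) * (g + xi)       # the SGD update (2)

    # typically every coordinate stays (slightly) nonzero
    print("nonzeros:", np.count_nonzero(w), "of", n)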
Inspired by recently developed first-order methods for optimizing composite functions [6, 7, 8], the regularized dual averaging (RDA) method we develop exploits the full regularization structure at each online iteration. In other words, at each iteration, the learning variables are adjusted by solving a simple optimization problem that involves the whole regularization term, not just its subgradients. For many practical learning problems, we actually are able to find a closed-form solution for the auxiliary optimization problem at each iteration. This means that the computational complexity per iteration is O(n), the same as the SGD method. Moreover, the RDA method converges to the optimal solution of (1) with the optimal rate O(1/√t). If the regularization function Ψ(w) is strongly convex, we have the better rate O(ln t / t) by setting appropriate parameters in the algorithm.
1.2 Regularized online optimization
In online optimization (e.g., [9]), we make a sequence of decisions w_t, for t = 1, 2, 3, .... At each time t, a previously unknown cost function f_t is revealed, and we encounter a loss f_t(w_t). We assume that the functions f_t are convex for all t ≥ 1. The goal of an online algorithm is to ensure that the total cost up to each time t, Σ_{τ=1}^t f_τ(w_τ), is not much larger than min_w Σ_{τ=1}^t f_τ(w), the smallest total cost of any fixed decision w from hindsight. The difference between these two costs is called the regret of the online algorithm. Applications of online optimization include online prediction of time series and sequential investment (e.g., [10]).

In regularized online optimization, we add to each cost function a convex regularization function Ψ(w). For any fixed decision variable w, consider the regret

    R_t(w) ≡ Σ_{τ=1}^t (f_τ(w_τ) + Ψ(w_τ)) − Σ_{τ=1}^t (f_τ(w) + Ψ(w)).    (3)

The RDA method we develop can also be used to solve the above regularized online optimization problem, and it has an O(√t) regret bound. Again, if the regularization term Ψ(w) is strongly convex, the regret bound is O(ln t). However, the main advantage of the RDA method, compared with other online algorithms, is its explicit regularization effect at each iteration.
Algorithm 1 Regularized dual averaging (RDA) method
input:
• a strongly convex function h(w) with modulus 1 on dom Ψ, and w_0 ∈ R^n, such that

    w_0 = argmin_w h(w) ∈ Argmin_w Ψ(w).    (4)

• a pre-determined nonnegative and nondecreasing sequence β_t for t ≥ 1.
initialize: w_1 = w_0, ḡ_0 = 0.
for t = 1, 2, 3, ... do
  1. Given the function f_t, compute a subgradient g_t ∈ ∂f_t(w_t).
  2. Update the average subgradient ḡ_t:

    ḡ_t = ((t − 1)/t) ḡ_{t−1} + (1/t) g_t.    (5)

  3. Compute the next iterate w_{t+1}:

    w_{t+1} = argmin_w { ⟨ḡ_t, w⟩ + Ψ(w) + (β_t/t) h(w) }.    (6)

end for

2 Regularized dual averaging method
In this section, we present the generic RDA method (Algorithm 1) for solving regularized stochastic
learning and online optimization problems, and give some concrete examples. To unify notation,
we write f(w, z_t) as f_t(w) for stochastic learning problems. The RDA method uses an auxiliary
strongly convex function h(w). A function h is called strongly convex with respect to a norm ‖·‖ if
there exists a constant σ > 0 such that

    h(αw + (1−α)u) ≤ αh(w) + (1−α)h(u) − (σ/2) α(1−α) ‖w − u‖²,    (7)

for all w, u ∈ dom h. The constant σ is called the convexity parameter, or the modulus of strong
convexity. In equation (4), Argmin_w Ψ(w) denotes the convex set of minimizers of Ψ.
In Algorithm 1, step 1 computes a subgradient of f_t at w_t, which is standard for all (sub)gradient-based methods. Step 2 is the online version of computing the average gradient ḡ_t (dual average). In
step 3, we assume that the functions Ψ and h are simple, meaning that the minimization problem
in (6) can be solved with little effort, especially if we are able to find a closed-form solution for
w_{t+1}. This assumption may seem restrictive, but the following examples show that this is indeed
the case for many important learning problems in practice.
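To make the three steps concrete, the following is a minimal sketch of one RDA iteration for the special case h(w) = (1/2)‖w‖₂², Ψ ≡ 0, and β_t = γ√t, in which (6) has a closed form; the function and variable names are ours, not from the paper.

```python
import numpy as np

def rda_step(g_bar, g_t, t, gamma):
    """One RDA iteration with h(w) = 0.5*||w||_2^2, Psi = 0, beta_t = gamma*sqrt(t).

    In this special case the minimization (6) reduces to
    w_{t+1} = argmin_w { <g_bar, w> + (gamma/sqrt(t)) * 0.5*||w||^2 }
            = -(sqrt(t)/gamma) * g_bar.
    """
    g_bar = ((t - 1.0) / t) * g_bar + (1.0 / t) * g_t   # step 2: dual average, eq. (5)
    w_next = -(np.sqrt(t) / gamma) * g_bar              # step 3: closed form of eq. (6)
    return g_bar, w_next
```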
If the regularization function Ψ(w) has convexity parameter σ = 0 (i.e., it is not strongly convex),
we can choose a parameter γ > 0 and use the sequence

    β_t = γ √t,   t = 1, 2, 3, . . .    (8)

to obtain an O(1/√t) convergence rate for stochastic learning, or an O(√t) regret bound for online
optimization. The formal convergence theorems are given in Section 3. Here are some examples:
• Nesterov's dual averaging method. Let Ψ(w) be the indicator function of a closed convex
set C. This recovers the method of [11]: w_{t+1} = argmin_{w∈C} { ⟨ḡ_t, w⟩ + (γ/√t) h(w) }.
• ℓ1-regularization: Ψ(w) = λ‖w‖₁ for some λ > 0. In this case, let w_0 = 0 and

    h(w) = (1/2)‖w‖₂² + ρ‖w‖₁,

where ρ ≥ 0 is a sparsity-enhancing parameter. The solution to (6) can be found, for
i = 1, . . . , n, as

    w_{t+1}^{(i)} = 0                                                 if |ḡ_t^{(i)}| ≤ λ_t^RDA,
    w_{t+1}^{(i)} = −(√t/γ) ( ḡ_t^{(i)} − λ_t^RDA sign(ḡ_t^{(i)}) )    otherwise,        (9)

where λ_t^RDA = λ + γρ/√t. Notice that the truncating threshold λ_t^RDA is at least as large as λ.
This is the main difference of our method from related work; see Section 4.
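Here is a minimal sketch of the entrywise update (9); the function name is ours, and setting ρ = 0 gives the basic ℓ1-RDA update.

```python
import numpy as np

def l1_rda_update(g_bar, t, lam, gamma, rho=0.0):
    """Closed-form solution (9) of step (6) for Psi(w) = lam*||w||_1
    and h(w) = 0.5*||w||_2^2 + rho*||w||_1."""
    lam_rda = lam + gamma * rho / np.sqrt(t)       # truncating threshold, >= lam
    w = np.zeros_like(g_bar)
    big = np.abs(g_bar) > lam_rda                  # entries that escape truncation
    w[big] = -(np.sqrt(t) / gamma) * (g_bar[big] - lam_rda * np.sign(g_bar[big]))
    return w
```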
If the regularization function Ψ(w) has convexity parameter σ > 0, we can use any nonnegative,
nondecreasing sequence {β_t}_{t≥1} that is dominated by ln t, to obtain an O(ln t / t) convergence
rate for stochastic learning, or an O(ln t) regret bound for online optimization (see Section 3). For
simplicity, in the following examples, we use β_t = 0 for all t ≥ 1, and we do not need h(w).
• Mixed ℓ1/ℓ2²-regularization. Let Ψ(w) = λ‖w‖₁ + (σ/2)‖w‖₂² with λ, σ > 0. Then, for
i = 1, . . . , n,

    w_{t+1}^{(i)} = 0                                          if |ḡ_t^{(i)}| ≤ λ,
    w_{t+1}^{(i)} = −(1/σ) ( ḡ_t^{(i)} − λ sign(ḡ_t^{(i)}) )    otherwise.

Of course, setting λ = 0 gives the algorithm for pure ℓ2²-regularization. (A code sketch of
this update and the next appears after this list.)
• Kullback-Leibler (KL) divergence regularization: Ψ(w) = σ D_KL(w‖p), where w lies in
the standard simplex, p is a given probability distribution, and

    D_KL(w‖p) ≜ Σ_{i=1}^n w^{(i)} ln ( w^{(i)} / p^{(i)} ).

Note that D_KL(w‖p) is strongly convex with respect to ‖w‖₁ with modulus 1 (e.g., [12]).
In this case,

    w_{t+1}^{(i)} = (1/Z_{t+1}) p^{(i)} exp ( −(1/σ) ḡ_t^{(i)} ),

where Z_{t+1} is a normalization parameter such that Σ_{i=1}^n w_{t+1}^{(i)} = 1.
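As referenced in the two bullets above, here is a minimal sketch of both strongly convex updates; the function names are ours.

```python
import numpy as np

def mixed_l1_l2_update(g_bar, lam, sigma):
    """Entrywise minimizer of <g_bar, w> + lam*||w||_1 + (sigma/2)*||w||_2^2."""
    w = np.zeros_like(g_bar)
    big = np.abs(g_bar) > lam
    w[big] = -(1.0 / sigma) * (g_bar[big] - lam * np.sign(g_bar[big]))
    return w

def kl_rda_update(g_bar, p, sigma):
    """Minimizer of <g_bar, w> + sigma*D_KL(w||p) over the probability simplex."""
    w = p * np.exp(-g_bar / sigma)   # w^(i) proportional to p^(i) * exp(-g_bar^(i)/sigma)
    return w / w.sum()               # Z_{t+1} normalizes w to sum to 1
```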
3 Regret bounds and convergence rates
We first give bounds on the regret R_t(w) defined in (3), when the RDA method is used for solving
the regularized online optimization problem. To simplify notation, we define the following sequence:

    Δ_t ≜ (β_0 − β_1) Ψ(w_2) + β_t D² + (L²/2) Σ_{τ=0}^{t−1} 1/(στ + β_τ),   t = 1, 2, 3, . . . ,    (10)

where D and L are some given constants, σ is the convexity parameter of the regularization function
Ψ(w), and {β_τ}_{τ=1}^∞ is the input sequence to the RDA method, which is nonnegative and nondecreasing. Notice that we just introduced an extra parameter β_0. We require β_0 > 0 to avoid blowup
of the first term (when τ = 0) in the summation in (10). This parameter does not appear in Algorithm 1; instead, it is solely for the convenience of convergence analysis. In fact, whenever β_1 > 0,
we can set β_0 = β_1, so that the term (β_0 − β_1)Ψ(w_2) vanishes. We also note that w_2 is determined
at the end of the step t = 1, so Δ_1 is well defined. Finally, for any given constant D > 0, we define

    F_D ≜ { w ∈ dom Ψ | h(w) ≤ D² }.
Theorem 1 Let the sequences {w_τ}_{τ=1}^∞ and {g_τ}_{τ=1}^∞ be generated by Algorithm 1. Assume there
is a constant L such that ‖g_t‖_* ≤ L for all t ≥ 1, where ‖·‖_* is the dual norm of ‖·‖. Then for
any t ≥ 1 and any w ∈ F_D, we have

    R_t(w) ≤ Δ_t.    (11)
The proof of this theorem is given in the longer version of this paper [13]. Here we give some direct
consequences based on concrete choices of algorithmic parameters.

If the regularization function Ψ(w) has convexity parameter σ = 0, then the sequence {β_t}_{t≥1}
defined in (8) together with β_0 = β_1 lead to

    Δ_t = γ√t D² + (L²/2γ) ( 1 + Σ_{τ=1}^{t−1} 1/√τ ) ≤ γ√t D² + (L²/2γ) ( 1 + 2(√t − 1) ) ≤ ( γD² + L²/γ ) √t.

The best γ that minimizes the above bound is γ* = L/D, which leads to

    R_t(w) ≤ 2LD√t.    (12)
If the regularization function Ψ(w) is strongly convex, i.e., with a convexity parameter σ > 0, then
any nonnegative, nondecreasing sequence that is dominated by ln t will give an O(ln t) regret bound.
We can simply choose h(w) = (1/σ)Ψ(w) whenever needed. Here are several possibilities:

• Positive constant sequences. For simplicity, let β_t = σ for t ≥ 1 and β_0 = β_1. In this case,

    Δ_t = σD² + (L²/2σ) Σ_{τ=0}^{t−1} 1/(τ + 1) ≤ σD² + (L²/2σ)(1 + ln t).

• The logarithmic sequence. Let β_t = σ(1 + ln t) for t ≥ 1, and β_0 = σ. In this case,

    Δ_t = σ(1 + ln t)D² + (L²/2σ) ( 1 + Σ_{τ=1}^{t−1} 1/(τ + 1 + ln τ) ) ≤ ( σD² + L²/2σ ) (1 + ln t).

• The zero sequence β_t = 0 for t ≥ 1, with β_0 = σ. Using h(w) = (1/σ)Ψ(w), we have

    Δ_t ≤ Ψ(w_2) + (L²/2σ) ( 1 + Σ_{τ=1}^{t−1} 1/τ ) ≤ (L²/2σ)(6 + ln t),

where we used Ψ(w_2) ≤ 2L²/σ, as proved in [13]. This bound does not depend on D.
When Algorithm 1 is used to solve regularized stochastic learning problems, we have the following:

Theorem 2 Assume there exists an optimal solution w* to the problem (1) that satisfies h(w*) ≤ D²
for some D > 0, and that there is an L > 0 such that E‖g‖_*² ≤ L² for all g ∈ ∂f(w, z) and w ∈ dom Ψ.
Then for any t ≥ 1, we have

    E φ(w̄_t) − φ(w*) ≤ Δ_t / t,   where   w̄_t = (1/t) Σ_{τ=1}^t w_τ.

The proof of Theorem 2 is given in [13]. Further analysis for the cases σ = 0 and σ > 0 is the
same as before. We only need to divide every regret bound by t to obtain the convergence rate.
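Theorem 2 bounds the averaged iterate w̄_t, which can be maintained in O(n) per step; a minimal sketch (our naming):

```python
def update_average(w_bar, w_t, t):
    """Incrementally maintain the averaged iterate
    w_bar_t = (1/t) * sum_{tau=1}^{t} w_tau."""
    return w_bar + (w_t - w_bar) / t
```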
4 Related work
There have been several recent works that address online algorithms for regularized learning problems, especially with ℓ1-regularization; see, e.g., [14, 15, 16, 5, 17]. In particular, a forward-backward
splitting method (FOBOS) is studied in [17] for solving the same problems we consider.
In an online setting, each iteration of the FOBOS method can be written as

    w_{t+1} = argmin_w { (1/2) ‖w − (w_t − α_t g_t)‖² + α_t Ψ(w) },    (13)

where α_t is set to be O(1/√t) if Ψ(w) has convexity parameter σ = 0, and O(1/t) if σ > 0. The
RDA method and FOBOS use very different weights on the regularization term Ψ(w): RDA in (6)
uses the original Ψ(w) without any scaling, while FOBOS scales Ψ(w) by a diminishing stepsize α_t.
The difference is more clear in the special case of ℓ1-regularization, i.e., when Ψ(w) = λ‖w‖₁. For
this purpose, we consider the Truncated Gradient (TG) method proposed in [5]. The TG method
truncates the solutions obtained by the standard SGD method with an integer period K ≥ 1. More
specifically, each component of w_t is updated as

    w_{t+1}^{(i)} = trnc( w_t^{(i)} − α_t g_t^{(i)}, λ_t^TG, θ )   if mod(t, K) = 0,
    w_{t+1}^{(i)} = w_t^{(i)} − α_t g_t^{(i)}                      otherwise,        (14)

where λ_t^TG = α_t λ K, the function mod(t, K) means the remainder on division of t by K, and

    trnc(ω, λ_t^TG, θ) = 0                      if |ω| ≤ λ_t^TG,
    trnc(ω, λ_t^TG, θ) = ω − λ_t^TG sign(ω)     if λ_t^TG < |ω| ≤ θ,
    trnc(ω, λ_t^TG, θ) = ω                      if |ω| > θ.
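A minimal sketch of the TG update (14) (our function names; θ defaults to ∞, matching the TG/FOBOS variant used in Section 5):

```python
import numpy as np

def trnc(w, lam_tg, theta):
    """The truncation operator in (14): zero small entries, shrink medium
    ones by lam_tg, and leave entries with |w| > theta unchanged."""
    out = w.copy()
    small = np.abs(w) <= lam_tg
    mid = (~small) & (np.abs(w) <= theta)
    out[small] = 0.0
    out[mid] = w[mid] - lam_tg * np.sign(w[mid])
    return out

def tg_step(w, g, t, alpha, lam, K, theta=np.inf):
    """One TG step: an SGD step, truncated once every K iterations."""
    w = w - alpha * g
    if t % K == 0:
        w = trnc(w, alpha * lam * K, theta)
    return w
```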
When K = 1 and θ = +∞, the TG method is the same as the FOBOS method (13). Now
comparing the truncation thresholds λ_t^TG and λ_t^RDA used in (9): with α_t = O(1/√t), we have
λ_t^TG = O(1/√t) λ ≪ λ ≤ λ_t^RDA. Therefore, the RDA method can generate much more sparse solutions.
This is confirmed by our computational experiments in Section 5.
Figure 1: Sparsity patterns of the weight w_t and the average weight w̄_t for classifying the digits 6
and 7 when varying the regularization parameter λ from 0.01 to 10. The background gray represents
the value zero, bright spots represent positive values and dark spots represent negative values. (The
figure shows rows of pixel-maps for w_t and w̄_t under SGD, TG, RDA, and IPM, one column per
λ ∈ {0.01, 0.03, 0.1, 0.3, 1, 3, 10}.)
5 Computational experiments
We provide computational experiments for the ℓ1-RDA method using the MNIST dataset of handwritten digits [18]. Each image from the dataset is represented by a 28 × 28 gray-scale pixel-map,
for a total of 784 features. Each of the 10 digits has roughly 6,000 training examples and 1,000
testing examples. No preprocessing of the data is employed.
We use ℓ1-regularized logistic regression to do binary classification on each of the 45 pairs of digits. In the experiments, we compare the ℓ1-RDA method (9) with the SGD method (2) and the
TG/FOBOS method (14) with θ = ∞. These three online algorithms have similar convergence rates
and the same order of computational complexity per iteration. We also compare them with the batch
optimization approach, using an efficient interior-point method (IPM) developed by [19].
Each pair of digits has about 12,000 training examples and 2,000 testing examples. We use the online
algorithms to go through the (randomly permuted) data only once, so the algorithms stop
at t = 12,000. We vary the regularization parameter λ from 0.01 to 10. As a reference, the
maximum λ for the batch optimization case [19] is mostly in the range of 30-50 (beyond which the
optimal weights are all zeros). In the ℓ1-RDA method (9), we use γ = 5,000, and set ρ = 0 for basic
regularization, or ρ = 0.005 (effectively γρ = 25) for enhanced regularization effect. The tradeoffs
in choosing these parameters are further investigated in [13]. For the SGD and TG methods, we use a
constant stepsize α = (1/γ)√(2/T). When γ = L/D, which gives the best convergence bound (12)
Figure 2: Number of non-zeros (NNZs) in w(t) for the three online algorithms (classifying 6 and 7).
(Four panels plot NNZs against the number of samples t, from 0 to 12,000, for SGD, TG, and RDA;
left column: K = 1 for TG and ρ = 0 for RDA; right column: K = 10 for TG and γρ = 25 for RDA;
top row: λ = 0.1; bottom row: λ = 10.)
for the RDA method, the corresponding α = (D/L)√(2/T) also gives the best convergence rate for
the SGD method (e.g., [20]). In the TG method, the truncation period is set to K = 1 for basic
regularization, or K = 10 for enhanced regularization effect, as suggested in [5].
Figure 1 shows the sparsity patterns of the solutions w_t and w̄_t for classifying the digits 6 and 7.
Both the TG and RDA methods were run with parameters for enhanced ℓ1-regularization: K = 10
for TG and γρ = 25 for RDA. The sparsity patterns obtained by the RDA method are closest to
the batch optimization results solved by IPM, especially for larger λ.
Figure 2 plots the number of non-zeros (NNZs) in w(t) for the different online algorithms. Only the
RDA method and TG with K = 1 give explicit zero weights at every step. In order to count the
NNZs in all other cases, we set a small threshold for rounding the weights to zero. Considering that
the magnitudes of the largest weights in Figure 1 are mostly on the order of 10^{-3}, we set 10^{-5} as
the threshold and verified that rounding elements less than 10^{-5} to zero does not affect the testing
errors. Note that we do not truncate the weights for RDA and TG with K = 1 further, even if
some of their components are below 10^{-5}. It can be seen that the RDA method maintains a much
more sparse w(t) than the other two online algorithms. While the TG method generates more sparse
solutions than the SGD method when λ is large, its NNZs in w(t) oscillate over a very big range.
In contrast, the RDA method demonstrates a much smoother variation in the NNZs.
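The NNZ counts in Figure 2 can be reproduced with a rule like the following (our naming; the 10^{-5} threshold is the one verified above):

```python
import numpy as np

def count_nnz(w, exact_zeros=False, tol=1e-5):
    """Count non-zeros; for methods that never produce exact zeros
    (SGD, and TG with K > 1), round entries below tol to zero first."""
    return int(np.count_nonzero(w)) if exact_zeros else int(np.sum(np.abs(w) > tol))
```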
Figure 3 illustrates the tradeoffs between sparsity and testing error rates for classifying 6 and 7.
Since the performance of the online algorithms varies when the training data are given in different
permutations, we run them on 100 randomly permuted sequences of the same training set, and
plot the means and standard deviations shown as error bars. For the SGD and TG methods, the
testing error rates of w_T vary a lot for different random sequences. In contrast, the RDA method
demonstrates very robust performance (small standard deviations) for w_T, even though the theorems
only give a performance bound for the averaged weight w̄_T. Note that the w̄_T obtained by SGD and TG
have much smaller error rates than those of RDA and batch optimization, especially for larger λ.
The explanation is that these lower error rates are obtained with many more nonzero features.
Figure 4 shows a summary of classification results for all the 45 pairs of digits. For clarity of presentation, here we only plot results of the ℓ1-RDA method and batch optimization using IPM. (The NNZs
obtained by SGD and TG are mostly above the limit of the vertical axes, which is set at 200.) We
see that, overall, the solutions obtained by the ℓ1-RDA method demonstrate very similar tradeoffs
between sparsity and testing error rates as rendered by the batch optimization solutions.
Figure 3: Tradeoffs between testing error rates and NNZs in solutions (for classifying 6 and 7). (Top
row: error rates (%); bottom row: NNZs; each plotted against λ from 0.01 to 10, for both the last
weight w_T and the average weight w̄_T, with basic parameters (TG K = 1, RDA ρ = 0), enhanced
parameters (TG K = 10, RDA γρ = 25), and IPM for reference.)
Figure 4: Binary classification for all 45 pairs of digits. The images in the lower-left triangular area
show sparsity patterns of w_T with λ = 1, obtained by the ℓ1-RDA method with γρ = 25. The plots in
the upper-right triangular area show tradeoffs between sparsity and testing error rates, by varying λ
from 0.1 to 10. The solid circles and solid squares show error rates and NNZs in w_T, respectively,
using IPM for batch optimization. The hollow circles and hollow squares show error rates and
NNZs of w̄_T, respectively, using the ℓ1-RDA method. The vertical bars centered at hollow circles
and squares show standard deviations by running on 100 random permutations of the training data.
References
[1] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In J.C. Platt, D. Koller,
Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20,
pages 161?168. MIT Press, Cambridge, MA, 2008.
[2] S. Shalev-Shwartz and N. Srebro. SVM optimization: Inverse dependence on training set size.
In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
[3] L. Bottou and Y. LeCun. Large scale online learning. In S. Thrun, L. Saul, and B. Sch?olkopf,
editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA,
2004.
[4] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent
algorithms. In Proceedings of the 21st International Conference on Machine Learning (ICML),
Banff, Alberta, Canada, 2004.
[5] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of
Machine Learning Research, 10:777?801, 2009.
[6] Yu. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion Paper 2007/76, Catholic University of Louvain, Center for Operations Research and
Econometrics, 2007.
[7] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. Submitted to SIAM Journal on Optimization, 2008.
[8] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse
problems. Technical report, Technion, 2008. To appear in SIAM Journal on Imaging Sciences.
[9] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In
Proceedings of the 20th International Conference on Machine Learning (ICML), pages 928?
936, Washington DC, 2003.
[10] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University
Press, 2006.
[11] Yu. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221-259, 2009. Appeared earlier as CORE Discussion Paper 2005/67, Catholic
University of Louvain, Center for Operations Research and Econometrics.
[12] Yu. Nesterov. Smooth minimization of nonsmooth functions. Mathematical Programming,
103:127?152, 2005.
[13] L. Xiao. Dual averaging method for regularized stochastic learning and online optimization.
Technical Report MSR-TR-2009-100, Microsoft Research, 2009.
[14] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball
for learning in high dimensions. In Proceedings of the 25th International Conference on
Machine Learning (ICML), pages 272-279, 2008.
[15] P. Carbonetto, M. Schmidt, and N. De Freitas. An interior-point stochastic approximation
method and an ℓ1-regularized delta rule. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 233-240. MIT
Press, 2009.
[16] S. Balakrishnan and D. Madigan. Algorithms for sparse linear classifiers in the massive data
setting. Journal of Machine Learning Research, 9:313?337, 2008.
[17] J. Duchi and Y. Singer. Efficient learning using forward-backward splitting. In Proceedings of
Neural Information Processing Systems, December 2009.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278?2324, 1998. Dataset available at
http://yann.lecun.com/exdb/mnist.
[19] K. Koh, S.-J. Kim, and S. Boyd. An interior-point method for large-scale ℓ1-regularized logistic
regression. Journal of Machine Learning Research, 8:1519-1555, 2007.
[20] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach
to stochastic programming. SIAM Journal on Optimization, 19(4):1574?1609, 2009.
Kamalika Chaudhuri
ITA, UC San Diego
[email protected]
Yoav Freund
CSE, UC San Diego
[email protected]
Daniel Hsu
CSE, UC San Diego
[email protected]
Abstract
We study the problem of decision-theoretic online learning (DTOL). Motivated
by practical applications, we focus on DTOL when the number of actions is very
large. Previous algorithms for learning in this framework have a tunable learning
rate parameter, and a barrier to using online-learning in practical applications is
that it is not understood how to set this parameter optimally, particularly when the
number of actions is large.
In this paper, we offer a clean solution by proposing a novel and completely
parameter-free algorithm for DTOL. We introduce a new notion of regret, which
is more natural for applications with a large number of actions. We show that our
algorithm achieves good performance with respect to this new notion of regret; in
addition, it also achieves performance close to that of the best bounds achieved
by previous algorithms with optimally-tuned parameters, according to previous
notions of regret.
1 Introduction
In this paper, we consider the problem of decision-theoretic online learning (DTOL), proposed by
Freund and Schapire [1]. DTOL is a variant of the problem of prediction with expert advice [2, 3].
In this problem, a learner must assign probabilities to a fixed set of actions in a sequence of rounds.
After each assignment, each action incurs a loss (a value in [0, 1]); the learner incurs a loss equal
to the expected loss of actions for that round, where the expectation is computed according to the
learner?s current probability assignment. The regret (of the learner) to an action is the difference
between the learner?s cumulative loss and the cumulative loss of that action. The goal of the learner
is to achieve, on any sequence of losses, low regret to the action with the lowest cumulative loss (the
best action).
DTOL is a general framework that captures many learning problems of interest. For example, consider tracking the hidden state of an object in a continuous state space from noisy observations [4].
To look at tracking in a DTOL framework, we set each action to be a path (sequence of states) over
the state space. The loss of an action at time t is the distance between the observation at time t and
the state of the action at time t, and the goal of the learner is to predict a path which has loss close
to that of the action with the lowest cumulative loss.
The most popular solution to the DTOL problem is the Hedge algorithm [1, 5]. In Hedge, each action
is assigned a probability, which depends on the cumulative loss of this action and a parameter η, also
called the learning rate. By appropriately setting the learning rate as a function of the iteration [6, 7]
and the number of actions, Hedge can achieve a regret upper-bounded by O(√(T ln N)), for each
iteration T, where N is the number of actions. This bound on the regret is optimal, as there is a
Ω(√(T ln N)) lower bound [5].
In this paper, motivated by practical applications such as tracking, we consider DTOL in the regime
where the number of actions N is very large. A major barrier to using online learning for practical
problems is that when N is large, it is not understood how to set the learning rate η. [7, 6] suggest
Figure 1: A new notion of regret. Suppose each action is a point on a line, and the total losses are
as given in the plot. The regret to the top ε-quantile is the difference between the learner's total loss
and the total loss of the worst action in the indicated interval of measure ε.
setting η as a fixed function of the number of actions N. However, this can lead to poor performance,
as we illustrate by an example in Section 3, and the degradation in performance is particularly
exacerbated as N grows larger. One way to address this is by simultaneously running multiple
copies of Hedge with multiple values of the learning rate, and choosing the output of the copy
that performs the best in an online way. However, this solution is impractical for real applications,
particularly as N is already very large. (For more details about these solutions, please see Section 4.)
In this paper, we take a step towards making online learning more practical by proposing a novel,
completely adaptive algorithm for DTOL. Our algorithm is called NormalHedge. NormalHedge
is very simple and easy to implement, and in each round, it simply involves a single line search,
followed by an updating of weights for all actions.
A second issue with using online-learning in problems such as tracking, where N is very large, is
that the regret to the best action is not an effective measure of performance. For problems such as
tracking, one expects to have a lot of actions that are close to the action with the lowest loss. As
these actions also have low loss, measuring performance with respect to a small group of actions
that perform well is extremely reasonable; see, for example, Figure 1.
In this paper, we address this issue by introducing a new notion of regret, which is more natural
for practical applications. We order the cumulative losses of all actions from lowest to highest and
define the regret of the learner to the top ε-quantile to be the difference between the cumulative loss
of the learner and the ⌈εN⌉-th element in the sorted list.
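In code, this new notion of regret is straightforward to compute; a minimal sketch (our naming):

```python
import numpy as np

def regret_to_quantile(learner_loss, action_losses, eps):
    """Regret to the top eps-quantile (assuming 0 < eps <= 1): the learner's
    cumulative loss minus the ceil(eps*N)-th smallest cumulative action loss."""
    k = int(np.ceil(eps * len(action_losses)))   # rank, 1-indexed
    return learner_loss - np.sort(action_losses)[k - 1]
```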
We prove that for NormalHedge, the regret to the top ε-quantile of actions is at most

    O( √( T ln(1/ε) + ln² N ) ),

which holds simultaneously for all T and ε. If we set ε = 1/N, we get that the regret to the best
action is upper-bounded by O( √( T ln N + ln² N ) ), which is only slightly worse than the bound
achieved by Hedge with optimally-tuned parameters. Notice that in our regret bound, the term
involving T has no dependence on N. In contrast, Hedge cannot achieve a regret bound of this
nature uniformly for all ε. (For details on how Hedge can be modified to perform with our new
notion of regret, see Section 4.)
NormalHedge works by assigning each action i a potential; actions which have lower cumulative
loss than the algorithm are assigned a potential exp(R²_{i,t}/2c_t), where R_{i,t} is the regret to action
i and c_t is an adaptive scale parameter, which is adjusted from one round to the next, depending
on the loss sequences. Actions which have higher cumulative loss than the algorithm are assigned
potential 1. The weight assigned to an action in each round is then proportional to the derivative of its
potential. One can also interpret Hedge as a potential-based algorithm, and under this interpretation,
the potential assigned by Hedge to action i is proportional to exp(ηR_{i,t}). This potential used by
Hedge differs significantly from the one we use. Although other potential-based methods have been
considered in the context of online learning [8], our potential function is very novel, and to the best
Initially: Set R_{i,0} = 0, p_{i,1} = 1/N for each i.
For t = 1, 2, . . .
  1. Each action i incurs loss ℓ_{i,t}.
  2. Learner incurs loss ℓ_{A,t} = Σ_{i=1}^N p_{i,t} ℓ_{i,t}.
  3. Update cumulative regrets: R_{i,t} = R_{i,t−1} + (ℓ_{A,t} − ℓ_{i,t}) for each i.
  4. Find c_t > 0 satisfying (1/N) Σ_{i=1}^N exp( ([R_{i,t}]₊)² / 2c_t ) = e.
  5. Update distribution for round t + 1: p_{i,t+1} ∝ ( [R_{i,t}]₊ / c_t ) exp( ([R_{i,t}]₊)² / 2c_t ) for each i.

Figure 2: The Normal-Hedge algorithm.
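A minimal runnable sketch of one round of the algorithm in Figure 2 (our function names); the line search in step 4 uses bisection, which applies because the left-hand side of (2) is decreasing in c_t:

```python
import numpy as np

def solve_scale(R_plus, iters=200):
    """Step 4: find c > 0 with (1/N) * sum_i exp(R_plus_i^2 / (2c)) = e.
    The left-hand side decreases in c, so a bracketing bisection works."""
    f = lambda c: np.mean(np.exp(R_plus ** 2 / (2.0 * c))) - np.e
    lo = hi = 0.5 * R_plus.max() ** 2 + 1e-12
    while f(hi) > 0.0:       # grow hi until f(hi) <= 0
        hi *= 2.0
    while f(lo) < 0.0:       # shrink lo until f(lo) >= 0
        lo *= 0.5
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if f(c) > 0.0 else (lo, c)
    return 0.5 * (lo + hi)

def normal_hedge_round(R, losses):
    """One round: weights from R_{i,t-1} (step 5 of the previous round),
    then losses arrive and cumulative regrets are updated (steps 1-3)."""
    R_plus = np.maximum(R, 0.0)
    if R_plus.max() == 0.0:                 # first round, or no positive regrets
        p = np.full(len(R), 1.0 / len(R))
    else:
        c = solve_scale(R_plus)
        p = (R_plus / c) * np.exp(R_plus ** 2 / (2.0 * c))
        p /= p.sum()                        # actions with R <= 0 get zero weight
    loss_A = p @ losses                     # learner's expected loss
    return R + (loss_A - losses), p
```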
of our knowledge, has not been studied in prior work. Our proof techniques are also different from
previous potential-based methods.
Another useful property of NormalHedge, which Hedge does not possess, is that it assigns zero
weight to any action whose cumulative loss is larger than the cumulative loss of the algorithm itself. In other words, non-zero weights are assigned only to actions which perform better than the
algorithm. In most applications, we expect a small set of the actions to perform significantly better
than most of the actions. The regret of the algorithm is guaranteed to be small, which means that the
algorithm will perform better than most of the actions and thus assign them zero probability.
[9, 10] have proposed more recent solutions to DTOL in which the regret of Hedge to the best action
is upper bounded by a function of L, the loss of the best action, or by a function of the variations in
the losses. These bounds can be sharper than the bounds with respect to T . Our analysis (and in fact,
to our knowledge, any analysis based on potential functions in the style of [11, 8]) do not directly
yield these kinds of bounds. We therefore leave open the question of finding an adaptive algorithm
for DTOL which has regret upper-bounded by a function that depends on the loss of the best action.
The rest of the paper is organized as follows. In Section 2, we provide NormalHedge. In Section
3, we provide an example that illustrates the suboptimality of standard online learning algorithms,
when the parameter is not set properly. In Section 4, we discuss Related Work. In Section 5, we
present some outlines of the proof. The proof details are in the Supplementary Materials.
2 Algorithm
2.1 Setting
We consider the decision-theoretic framework for online learning. In this setting, the learner is given
access to a set of N actions, where N ≥ 2. In round t, the learner chooses a weight distribution
p_t = (p_{1,t}, . . . , p_{N,t}) over the actions 1, 2, . . . , N. Each action i incurs a loss ℓ_{i,t}, and the learner
incurs the expected loss under this distribution:

    ℓ_{A,t} = Σ_{i=1}^N p_{i,t} ℓ_{i,t}.

The learner's instantaneous regret to an action i in round t is r_{i,t} = ℓ_{A,t} − ℓ_{i,t}, and its (cumulative)
regret to an action i in the first t rounds is

    R_{i,t} = Σ_{τ=1}^t r_{i,τ}.

We assume that the losses ℓ_{i,t} lie in an interval of length 1 (e.g., [0, 1] or [−1/2, 1/2]; the sign of the
loss does not matter). The goal of the learner is to minimize this cumulative regret R_{i,t} to any action
i (in particular, the best action), for any value of t.
2.2 Normal-Hedge
Our algorithm, Normal-Hedge, is based on a potential function reminiscent of the half-normal distribution, specifically

    φ(x, c) = exp( ([x]₊)² / 2c )   for x ∈ R, c > 0,    (1)

where [x]₊ denotes max{0, x}. It is easy to check that this function is separately convex in x and c,
differentiable, and twice-differentiable except at x = 0.
In addition to tracking the cumulative regrets R_{i,t} to each action i after each round t, the algorithm
also maintains a scale parameter c_t. This is chosen so that the average of the potential, over all
actions i, evaluated at R_{i,t} and c_t, remains constant at e:

    (1/N) Σ_{i=1}^N exp( ([R_{i,t}]₊)² / 2c_t ) = e.    (2)
We observe that since ?(x, c) is convex in c > 0, we can determine ct with a line search.
The weight assigned to i in round t is set proportional to the first derivative of the potential, evaluated
at R_{i,t−1} and c_{t−1}:

    p_{i,t} ∝ (∂/∂x) φ(x, c) |_{x=R_{i,t−1}, c=c_{t−1}} = ( [R_{i,t−1}]₊ / c_{t−1} ) exp( ([R_{i,t−1}]₊)² / 2c_{t−1} ).

Notice that the actions for which R_{i,t−1} ≤ 0 receive zero weight in round t.
We summarize the learning algorithm in Figure 2.
3 An Illustrative Example
In this section, we present an example to illustrate that setting the parameters of DTOL algorithms
as a function of N, the total number of actions, is suboptimal. To do this, we compare the performance of NormalHedge with two representative algorithms: a version of Hedge due to [7], and the
Polynomial Weights algorithm, due to [12, 11]. Our experiments with this example indicate that the
performance of both these algorithms suffers because of the suboptimal setting of the parameters; on
the other hand, NormalHedge automatically adapts to the loss sequences of the actions.
The main feature of our example is that the effective number of actions n (i.e., the number of distinct
actions) is smaller than the total number of actions N. Notice that without prior knowledge of the
actions and their loss sequences, one cannot determine the effective number of actions in advance; as a
result, there is no direct method by which Hedge and Polynomial Weights could set their parameters
as a function of n.
Our example attempts to model a practical scenario where one often finds multiple actions with
loss sequences which are almost identical. For example, in the tracking problem, groups of paths
which are very close together in the state space will have very close loss sequences. Our example
indicates that in this case, the performance of Hedge and Polynomial Weights will depend on
the discretization of the state space; NormalHedge, however, will be comparatively unaffected by such
discretization.
Our example has four parameters: N, the total number of actions; n, the effective number of actions
(the number of distinct actions); k, the (effective) number of good actions; and ε, which indicates
how much better the good actions are compared to the rest. Finally, T is the number of rounds.
The instantaneous losses of the N actions are represented by an N × T matrix B_N^{ε,k}; the loss of
action i in round t is the (i, t)-th entry in the matrix. The construction of the matrix is as follows.
First, we construct a (preliminary) n × T matrix A_n based on the 2^d × 2^d Hadamard matrix, where
n = 2^{d+1} − 2. This matrix A_n is obtained from the 2^d × 2^d Hadamard matrix by (1) deleting
the constant row, (2) stacking the remaining rows on top of their negations, (3) repeating each row
horizontally T/2^d times, and finally, (4) halving the first column. We show A_6 for concreteness:

    A_6 = [ −1/2  +1  −1  +1  −1  +1  −1  +1  −1  +1  −1  +1  . . .
            −1/2  −1  +1  +1  −1  −1  +1  +1  −1  −1  +1  +1  . . .
            −1/2  +1  +1  −1  −1  +1  +1  −1  −1  +1  +1  −1  . . .
            +1/2  −1  +1  −1  +1  −1  +1  −1  +1  −1  +1  −1  . . .
            +1/2  +1  −1  −1  +1  +1  −1  −1  +1  +1  −1  −1  . . .
            +1/2  −1  −1  +1  +1  −1  −1  +1  +1  −1  −1  +1  . . . ]
If the rows of A_n give the losses for n actions over time, then it is clear that on average, no action
is better than any other. Therefore for large enough T, for these losses, a typical algorithm will
eventually assign all actions the same weight. Now, let A_n^{ε,k} be the same as A_n except that ε is
subtracted from each entry of the first k rows, e.g.:
    A_6^{ε,2} = [ −1/2−ε  +1−ε  −1−ε  +1−ε  −1−ε  +1−ε  −1−ε  +1−ε  . . .
                  −1/2−ε  −1−ε  +1−ε  +1−ε  −1−ε  −1−ε  +1−ε  +1−ε  . . .
                  −1/2    +1    +1    −1    −1    +1    +1    −1    . . .
                  +1/2    −1    +1    −1    +1    −1    +1    −1    . . .
                  +1/2    +1    −1    −1    +1    +1    −1    −1    . . .
                  +1/2    −1    −1    +1    +1    −1    −1    +1    . . . ]
Now, when losses are given by A_n^{ε,k}, the first k actions (the good actions) perform better than the
remaining n − k; so, for large enough T, a typical algorithm will eventually recognize this and
assign the first k actions equal weights (giving little or no weight to the remaining n − k). Finally,
we artificially replicate each action (each row) N/n times to yield the final loss matrix B_N^{ε,k} for N
actions:

    B_N^{ε,k} = [ A_n^{ε,k} ; A_n^{ε,k} ; . . . ; A_n^{ε,k} ]   (N/n replicates of A_n^{ε,k}).
The replication of actions significantly affects the behavior of algorithms that set parameters with
respect to the number of actions N , which is inflated compared to the effective number of actions n.
NormalHedge, having no such parameters, is completely unaffected by the replication of actions.
We compare the performance of NormalHedge to two other representative algorithms, which we
call "Exp" and "Poly". Exp is a time/variation-adaptive version of Hedge (exponential weights)
due to [7] (roughly, η_t = O(√((log N)/Var_t)), where Var_t is the cumulative loss variance). Poly
is polynomial weights [12, 11], which has a parameter p that is typically set as a function of the
number of actions; we set p = 2 ln N, as is recommended to guarantee a regret bound comparable to
that of Hedge.
Figure 3 shows the regrets to the best action versus the replication factor N/n, where the effective
number of actions n is held fixed. Recall that Exp and Poly have parameters set with respect to the
number of actions N .
We see from the figures that NormalHedge is completely unaffected by the replication of actions;
no matter how many times the actions may be replicated, the performance of NormalHedge stays
exactly the same. In contrast, increasing the replication factor affects the performance of Exp and
Poly: Exp and Poly become more sensitive to the changes in the total losses of the actions (e.g. the
base of the exponent in the weights assigned by Exp increases with N ); so when there are multiple
good actions (i.e. k > 1), Exp and Poly are slower to stabilize their weights over these good actions.
When k = 1, Exp and Poly actually perform better using the inflated value N (as opposed to n), as
this causes the slight advantage of the single best action to be magnified. However, this particular
case is an anomaly; this does not happen even for k = 2. We note that if the parameters of Exp
and Poly were set as a function of n, instead of N, then their performance would also
not depend on the replication factor (the performance would be the same as in the N/n = 1 case).
Therefore, the degradation in performance of Exp and Poly is solely due to the suboptimal
setting of their parameters.
Figure 3: Regrets to the best action after T = 32768 rounds, versus replication factor N/n. Recall,
k is the (effective) number of good actions. Here, we fix n = 126 and ε = 0.025. (Four panels, one
per k ∈ {1, 2, 8, 32}, plot the regret of Exp, Poly, and Normal against the replication factor on a
logarithmic axis from 1 to 1000.)

4 Related work
There has been a large amount of literature on various aspects of DTOL. The Hedge algorithm of
[1] belongs to a more general family of algorithms, called the exponential weights algorithms; these
are originally based on Littlestone and Warmuth's Weighted Majority algorithm [2], and they have
been well-studied.

The standard measure of regret in most of these works is the regret to the best action. The original
Hedge algorithm has a regret bound of O(√(T log N)). Hedge uses a fixed learning rate η for all
iterations, and requires one to set η as a function of the total number of iterations T. As a result,
its regret bound also holds only for a fixed T. The algorithm of [13] guarantees a regret bound
of O(√(T log N)) to the best action uniformly for all T by using a doubling trick. Time-varying
learning rates for exponential weights algorithms were considered in [6]; there, they show that if
η_t = √(8 ln(N)/t), then using exponential weights with η = η_t in round t guarantees regret bounds
of √(2T ln N) + O(ln N) for any T. This bound provides a better regret to the best action than we
do. However, this method is still susceptible to poor performance, as illustrated in the example in
Section 3. Moreover, they do not consider our notion of regret.

Though not explicitly considered in previous works, the exponential weights algorithms can be
partly analyzed with respect to the regret to the top ε-quantile. For any fixed ε, Hedge can be
modified by setting η as a function of this ε such that the regret to the top ε-quantile is at most
O(√(T log(1/ε))). The problem with this solution is that it requires the learning rate to be
set as a function of that particular ε (roughly η = √((log 1/ε)/T)). Therefore, unlike our bound,
this bound does not hold uniformly for all ε. One way to ensure a bound for all ε uniformly is to
run log N copies of Hedge, each with a learning rate set as a function of a different value of ε. A
final master copy of the Hedge algorithm then looks at the probabilities given by these subordinate
copies to give the final probabilities. However, this procedure adds an additive O(√(T log log N))
factor to the regret to the ε-quantile of actions, for any ε. More importantly, this procedure is also
impractical for real applications, where one might be already working with a large set of actions.
In contrast, our solution NormalHedge is clean and simple, and we guarantee a regret bound for all
values of ε uniformly, without any extra overhead.
More recent work in [14, 7, 10] provides algorithms with significantly improved bounds when the
total loss of the best action is small, or when the total variation in the losses is small. These bounds
do not explicitly depend on T, and thus can often be sharper than ones that do (including ours). We
stress, however, that these methods use a different notion of regret, and their learning rates depend
explicitly on N.
Besides exponential weights, another important class of online learning algorithms are the polynomial weights algorithms studied in [12, 11, 8]. These algorithms too require a parameter; this
parameter does not depend on the number of rounds T, but depends crucially on the number of actions N. The weight assigned to action i in round t is proportional to ([R_{i,t−1}]₊)^{p−1} for some p > 1;
setting p = 2 ln N yields regret bounds of the form √(2eT(ln N − 0.5)) for any T. Our algorithm
and polynomial weights share the feature that zero weight is given to actions that are performing
worse than the algorithm, although the degree of this weight sparsity is tied to the performance of
the algorithm. Finally, [15] derive a time-adaptive variation of the follow-the-(perturbed-)leader
algorithm [16, 17] by scaling the perturbations by a parameter that depends on both t and N.
5 Analysis
5.1 Main results
Our main result is the following theorem.
Theorem 1. If Normal-Hedge has access to N actions, then for all loss sequences, for all t, for all
0 < ε ≤ 1 and for all 0 < δ ≤ 1/2, the regret of the algorithm to the top ε-quantile of the actions is
at most

    √( (1 + ln(1/ε)) ( 3(1 + 50δ)t + (16 ln²N / δ³)(10.2δ² + ln N) ) ).

In particular, with ε = 1/N, the regret to the best action is at most

    √( (1 + ln N) ( 3(1 + 50δ)t + (16 ln²N / δ³)(10.2δ² + ln N) ) ).
The value δ in Theorem 1 appears to be an artifact of our analysis; we divide the sequence of rounds
into two phases (the length of the first is controlled by the value of δ) and bound the behavior of
the algorithm in each phase separately. The following corollary illustrates the performance of our
algorithm for large values of t, in which case the effect of this first phase (and the δ in the bound)
essentially goes away.

Corollary 2. If Normal-Hedge has access to N actions, then, as t → ∞, the regret of Normal-Hedge to the top ε-quantile of actions approaches an upper bound of

    √( 3t(1 + ln(1/ε)) ) + o(t).

In particular, the regret of Normal-Hedge to the best action approaches an upper bound of

    √( 3t(1 + ln N) ) + o(t).
The proof of Theorem 1 follows from a combination of Lemmas 3, 4, and 5, and is presented in
detail at the end of the current section.
5.2 Regret bounds from the potential equation
The following lemma relates the performance of the algorithm at time t to the scale c_t.

Lemma 3. At any time t, the regret to the best action can be bounded as

    max_i R_{i,t} ≤ √( 2c_t (ln N + 1) ).

Moreover, for any 0 ≤ ε ≤ 1 and any t, the regret to the top ε-quantile of actions is at most

    √( 2c_t (ln(1/ε) + 1) ).
Proof. We use E_t to denote the actions that have non-zero weight on iteration t. The first part of the
lemma follows from the fact that, for any action i ∈ E_t,

    exp( (R_{i,t})² / 2c_t ) = exp( ([R_{i,t}]₊)² / 2c_t ) ≤ Σ_{i'=1}^N exp( ([R_{i',t}]₊)² / 2c_t ) = Ne,

which implies R_{i,t} ≤ √( 2c_t (ln N + 1) ).

For the second part of the lemma, let R_{i,t} denote the regret of our algorithm to the action with the
εN-th highest regret. Then, the total potential of the actions with regrets greater than or equal to
R_{i,t} is at least

    εN exp( ([R_{i,t}]₊)² / 2c_t ) ≤ Ne,

from which the second part of the lemma follows.
5.3 Bounds on the scale c_t and the proof of Theorem 1
In Lemmas 4 and 5, we bound the growth of the scale c_t as a function of the time t.

The main outline of the proof of Theorem 1 is as follows. As c_t increases monotonically with t, we
can divide the rounds t into two phases, t < t_0 and t ≥ t_0, where t_0 is the first time such that

    c_{t_0} ≥ (4 ln² N)/δ + (16 ln N)/δ³,

for some fixed δ ∈ (0, 1/2). We then show bounds on the growth of c_t for each phase separately.
Lemma 4 shows that c_t is not too large at the end of the first phase, while Lemma 5 bounds the
per-round growth of c_t in the second phase. The proofs of these two lemmas are quite involved, so
we defer them to the supplementary appendix.

Lemma 4. For any time t,

    c_{t+1} ≤ 2c_t (1 + ln N) + 3.

Lemma 5. Suppose that at some time t_0, c_{t_0} ≥ (4 ln² N)/δ + (16 ln N)/δ³, where 0 ≤ δ ≤ 1/2
is a constant. Then, for any time t ≥ t_0,

    c_{t+1} − c_t ≤ (3/2)(1 + 49.19δ).
We now combine Lemmas 4 and 5 together with Lemma 3 to prove the main theorem.
Proof of Theorem 1. Let t_0 be the first time at which c_{t_0} ≥ (4 ln² N)/δ + (16 ln N)/δ³. Then,
from Lemma 4,

    c_{t_0} ≤ 2c_{t_0−1}(1 + ln N) + 3,

which is at most

    (8 ln³ N)/δ + (34 ln² N)/δ + (32 ln N)/δ³ + 3 ≤ (81 ln² N)/δ + (8 ln³ N)/δ³.

The last inequality follows because N ≥ 2 and δ ≤ 1/2. By Lemma 5, we have that for any t ≥ t_0,

    c_t ≤ (3/2)(1 + 49.19δ)(t − t_0) + c_{t_0}.

Combining these last two inequalities yields

    c_t ≤ (3/2)(1 + 49.19δ)t + (81 ln² N)/δ + (8 ln³ N)/δ³.

Now the theorem follows by applying Lemma 3.
References
[1] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. Journal of Computer and System Sciences, 55:119?139, 1997.
[2] N. Littlestone and M. Warmuth. The weighted majority algorithm. Information and Computation,
108:212?261, 1994.
[3] V. Vovk. A game of prediction witih expert advice. Journal of Computer and System Sciences, 56(2):153?
173, 1998.
[4] K. Chaudhuri, Y. Freund, and D. Hsu.
arXiv:0903.2862.
Tracking using explanation-based modeling, 2009.
[5] Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic
Behavior, 29:79?103, 1999.
[6] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal
of Computer and System Sciences, 64(1), 2002.
[7] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert
advice. Machine Learning, 66(2?3):321?352, 2007.
[8] N. Cesa-Bianchi and G. Lugosi. Potential-based algorithms in on-line prediction and game theory. Machine Learning, 51:239?261, 2003.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning and Games. Cambridge University Press, 2006.
[10] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In
COLT, 2008.
[11] C. Gentile. The robustness of p-norm algorithms. Machine Learning, 53(3):265?299, 2003.
[12] A. J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant
updates. Machine Learning, 43(3):173?210, 2001.
[13] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. Warmuth. How to use
expert advice. Journal of the ACM, 44(3):427-485, 1997.
[14] R. Yaroshinsky, R. El-Yaniv, and S. Seiden. How to better use expert advice. Machine Learning,
55(3):271-309, 2004.
[15] M. Hutter and J. Poland. Adaptive online prediction by following the perturbed leader. Journal of Machine
Learning Research, 6:639?660, 2005.
[16] J. Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97-139, 1957.
[17] A. Kalai and S. Vempala. Efficient algorithms for the online optimization. Journal of Computer and
System Sciences, 71(3):291?307, 2005.
Lawrence Cayton
Max Planck Institute for Biological Cybernetics
[email protected]
Abstract
We develop an algorithm for efficient range search when the notion of dissimilarity is given by a Bregman divergence. The range search task is to return
all points in a potentially large database that are within some specified distance
of a query. It arises in many learning algorithms such as locally-weighted regression, kernel density estimation, neighborhood graph-based algorithms, and in
tasks like outlier detection and information retrieval. In metric spaces, efficient
range search-like algorithms based on spatial data structures have been deployed
on a variety of statistical tasks. Here we describe an algorithm for range search
for an arbitrary Bregman divergence. This broad class of dissimilarity measures
includes the relative entropy, Mahalanobis distance, Itakura-Saito divergence, and
a variety of matrix divergences. Metric methods cannot be directly applied since
Bregman divergences do not in general satisfy the triangle inequality. We derive
geometric properties of Bregman divergences that yield an efficient algorithm for
range search based on a recently proposed space decomposition for Bregman divergences.
1 Introduction
Range search is a fundamental proximity task at the core of many learning problems. The task of
range search is to return all points in a database within a specified distance of a given query. The
problem is to do so efficiently, without examining the entire database. Many machine learning algorithms require range search. Locally weighted regression and kernel density estimation/regression
both require retrieving points in a region around a test point. Neighborhood graphs?used in manifold learning, spectral algorithms, semisupervised algorithms, and elsewhere?can be built by connecting each point to all other points within a certain radius; doing so requires range search at
each point. Computing point-correlation statistics, distance-based outliers/anomalies, and intrinsic
dimensionality estimates also requires range search.
A growing body of work uses spatial data structures to accelerate the computation of these and other
proximity problems for statistical tasks. This line of techniques, coined "n-body methods" in [11],
has showed impressive speedups on a variety of tasks including density estimation [12], gaussian
process regression [25], non-parametric classification [17], matrix approximation [14], and kernel
summation [15]. These methods achieve speedups by pruning out large portions of the search space
with bounds derived from KD or metric trees that are augmented with statistics of the database.
Some of these algorithms are direct applications of range search; others rely on very similar pruning
techniques. One fairly substantial limitation of these methods is that they all derive bounds from the
triangle inequality and thus only work for notions of distance that are metrics.
The present work is on performing range search efficiently when the notion of dissimilarity is not
a metric, but a Bregman divergence. The family of Bregman divergences includes the standard ℓ₂²
distance, Mahalanobis distance, KL-divergence, Itakura-Saito divergence, and a variety of matrix
dissimilarity measures. We are particularly interested in the KL-divergence, as it is not a metric and
is used extensively in machine learning. It appears naturally in document analysis, since documents
are often modeled using histograms [22, 5]. It also is used in many vision applications [23], such as
content-based image retrieval [24]. Because Bregman divergences can be asymmetric and need not
satisfy the triangle inequality, the traditional metric methods cannot be applied.
In this work we present an algorithm for efficient range search when the notion of dissimilarity
is an arbitrary Bregman divergence. These results demonstrate that the basic techniques behind
the previously described efficient statistical algorithms can be applied to non-metric dissimilarities
including, notably, the KL-divergence. Because of the widespread use of histogram representations,
this generalization is important.
The task of efficient Bregman range search presents a technical challenge. Our algorithm cannot
rely on the triangle inequality, so bounds must be derived from geometric properties of Bregman
divergences. The algorithm makes use of a simple space decomposition scheme based on Bregman
balls [8], but deploying this decomposition for the range search problem is not straightforward. In
particular, one of the bounds required results in a non-convex program to be solved, and the other
requires comparing two convex bodies. We derive properties of Bregman divergences that imply
efficient algorithms for these problems.
2 Background
In this section, we briefly review prior work on Bregman divergences and proximity search. Bregman divergences originate in [7] and have become common in the machine learning literature, e.g.
[3, 4].
Definition 1. Let f : R^D → R be strictly convex and differentiable. The Bregman divergence based on f is

    df(x, y) ≡ f(x) − f(y) − ⟨∇f(y), x − y⟩.

As can be seen from the definition, a Bregman divergence measures the distance between a function and its first-order Taylor series approximation. Standard examples include f(x) = ½‖x‖₂², yielding the ℓ₂² distance df(x, y) = ½‖x − y‖₂², and f(x) = Σi xi log xi, giving the KL-divergence df(x, y) = Σi xi log(xi/yi). The Itakura-Saito divergence and Mahalanobis distance are other examples of Bregman divergences.

Strict convexity of f implies that df(x, y) ≥ 0, with equality if, and only if, x = y. Though Bregman divergences satisfy this non-negativity property, like metrics, the similarities to metrics end there. In particular, a Bregman divergence need not satisfy the triangle inequality or be symmetric.

Bregman divergences do possess several geometric properties related to the convexity of the base function. Most notably, df(x, y) is always convex in x (though not necessarily in y), implying that the Bregman ball

    Bf(μ, R) ≡ {x | df(x, μ) ≤ R}

is a convex body.
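To make the definition concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates df for the two base functions above; the helper names are our own.

    import numpy as np

    def bregman_divergence(f, grad_f, x, y):
        # d_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
        return f(x) - f(y) - np.dot(grad_f(y), x - y)

    # f(x) = 0.5 * ||x||^2 yields the squared Euclidean distance.
    sq_norm = lambda x: 0.5 * np.dot(x, x)
    sq_norm_grad = lambda x: x

    # f(x) = sum_i x_i log x_i yields the KL-divergence
    # (for x, y on the probability simplex).
    neg_entropy = lambda x: np.sum(x * np.log(x))
    neg_entropy_grad = lambda x: 1.0 + np.log(x)

    x = np.array([0.2, 0.3, 0.5])
    y = np.array([0.4, 0.4, 0.2])
    print(bregman_divergence(sq_norm, sq_norm_grad, x, y))          # 0.5 * ||x - y||^2
    print(bregman_divergence(neg_entropy, neg_entropy_grad, x, y))  # KL(x || y)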
Recently, work on a variety of geometric tasks with Bregman divergences has appeared. In [19],
geometric properties of Bregman Voronoi diagrams are derived. [1] studies core-sets under Bregman divergences and gives a provably correct approximation algorithm for k-median clustering. [13] examines sketching Bregman (and Csiszár) divergences. [8] describes the Bregman ball tree in the
context of nearest neighbor search; we will describe this work further momentarily. As these papers
demonstrate, there has been substantial recent interest in developing basic geometric algorithms for
Bregman divergences. The present paper contributes an effective algorithm for range search, one of
the core problems of computational geometry [2], to this repertoire.
The Bregman ball tree (BB-tree) was introduced in the context of nearest neighbor (NN) search [8].
Though NN search has a similar flavor to range search, the bounds that suffice for NN search are
not sufficient for range search. Thus the utility of the BB-tree for statistical tasks is at present rather
seriously limited. Moreover, though the extension of metric trees to range search (and hence to the
previously described statistical tasks) is fairly straightforward because of the triangle inequality, the
extension of BB-trees is substantially more complex.
Several other papers on Bregman proximity search have appeared very recently. Nielsen et al. study
some improvements to the BB-tree [21] and develop a related data structure which can be used with
symmetrized divergences [20]. Zhang et al. develop extensions of the VA-file and the R-tree for
Bregman divergences [26]. These data structures can be adapted to work for Bregman divergences,
as the authors of [26] demonstrate, because bounds on the divergence from a query to a rectangular cell can be computed cheaply; however this idea appears limited to decomposable Bregman
divergences, i.e., divergences that decompose into a sum over one-dimensional divergences.1 Nevertheless, these data structures seem practical and effective and it would be interesting to apply them
to statistical tasks.2 The applicability of rectangular cell bounds was independently demonstrated
in [9, Chapter 7], where it is mentioned that KD-trees (and relatives) can be used for decomposable
Bregman divergences. That chapter also contains theoretical results on the general Bregman range
search problem attained by adapting known data structures via the lifting technique (also used in
[26] and previously in [19]).
3 Range search with BB-trees
In this section, we review the Bregman ball tree data structure and outline the range search algorithm.
The search algorithm relies on geometric properties of Bregman divergences, which we derive in
section 4.
The BB-tree is a hierarchical space decomposition based on Bregman balls. It is a binary tree
defined over the database such that each level provides a partition of the database points. As the
tree is descended, the partition becomes finer and finer. Each node i in the tree owns a subset of the
points Xi and also defines a Bregman ball Bf(μ, R) such that Xi ⊂ Bf(μ, R). If i is an interior node, it has two children j and k that encapsulate database points Xj and Xk. Moreover, each point in Xi is in exactly one of Xj and Xk. Each leaf node contains some small number of points and the
root node contains the entire database.
Here we use this simple form of BB-tree, though our results apply to any hierarchical space decomposition based on Bregman balls, such as the more complex tree described in [21].
To encourage a rapid rate of radius decrease, an effective build algorithm will split a node into two
well-separated and compact children. Thus a reasonable method for building BB-trees is to perform a top-down hierarchical clustering. Since k-means has been generalized to arbitrary Bregman
divergences [4], it is a natural choice for a clustering algorithm.
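To illustrate, the following sketch (ours, not the authors' code) builds a BB-tree top-down with 2-means; bregman_kmeans is an assumed helper implementing Bregman k-means [4], and each node's radius is the largest divergence from one of its points to the cluster mean (which, for Bregman divergences, minimizes the total divergence [4]).

    import numpy as np

    class BBTreeNode:
        def __init__(self, mu, radius, points):
            self.mu = mu          # center of the node's Bregman ball
            self.radius = radius  # max d_f(x, mu) over the node's points
            self.points = points
            self.left = self.right = None

    def build_bbtree(points, df, bregman_kmeans, leaf_size=16):
        # Top-down construction: split each node with 2-means until nodes are small.
        mu = points.mean(axis=0)  # the mean minimizes total Bregman divergence [4]
        node = BBTreeNode(mu, max(df(x, mu) for x in points), points)
        if len(points) > leaf_size:
            labels = bregman_kmeans(points, k=2, df=df)   # assumed helper
            if 0 < labels.sum() < len(points):            # guard against an empty split
                node.left = build_bbtree(points[labels == 0], df, bregman_kmeans, leaf_size)
                node.right = build_bbtree(points[labels == 1], df, bregman_kmeans, leaf_size)
        return node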
3.1 Search algorithm
We now turn to the search algorithm, which uses a branch-and-bound approach. We develop the
necessary novel bounding techniques in the next section.
Suppose we are interested in returning all points within distance γ of a query q, i.e., we hope to retrieve all database points lying inside of Bq ≡ Bf(q, γ). The search algorithm starts at the root node and recursively explores the tree. At a node i, the algorithm compares the node's Bregman ball Bx to Bq. There are three possible situations. First, if Bx is contained in Bq, then all x ∈ Bx are in the range of interest. We can thus stop the recursion and return all the points associated with the node without explicitly computing the divergence to any of them. This type of pruning is called inclusion pruning. Second, if Bx ∩ Bq = ∅, the algorithm can prune out Bx and stop the recursion; none of these points are in range. This is exclusion pruning. See Figure 1. All performance gains from using the algorithm come from these two types of pruning. The third situation is Bx ∩ Bq ≠ ∅ and Bx ⊄ Bq. In this situation, the algorithm cannot perform any pruning, so it recurses on the children of node i. If i is a leaf node, then the algorithm computes the divergence to each database point associated with i and returns those elements within range.
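In code, the recursion just described looks roughly as follows (our paraphrase; contains and intersects stand in for the geometric tests developed in Section 4, and node.all_points() is an assumed accessor gathering all points under a node):

    def range_search(node, q, radius, df, contains, intersects, results):
        # Append to results every database point x with d_f(x, q) <= radius.
        Bq, Bx = (q, radius), (node.mu, node.radius)
        if contains(Bq, Bx):            # inclusion pruning: the whole ball is in range
            results.extend(node.all_points())
        elif not intersects(Bq, Bx):    # exclusion pruning: no point can be in range
            return
        elif node.left is None:         # leaf: compute the divergences directly
            results.extend(x for x in node.points if df(x, q) <= radius)
        else:
            range_search(node.left, q, radius, df, contains, intersects, results)
            range_search(node.right, q, radius, df, contains, intersects, results)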
The two types of pruning, inclusion and exclusion, have been applied to a variety of problems
with metric and KD-trees, see e.g. [11, 12, 25] and the papers cited previously. Thus though we
1. This assumption is implicit in the proof of [26, Lemma 3.1] and is used in the revised lower bound computation as well.
2. [26] had not yet been published at the time of submission of the present work and hence we have not yet done a detailed comparison.
Figure 1: The two pruning scenarios, exclusion and inclusion. The dotted, shaded object is the query range and the other is the Bregman ball associated with a node of the BB-tree.
focus on range search, these types of prunings are useful in a broad range of statistical problems. A
third type of pruning, approximation pruning, is useful in tasks like kernel density estimation [12].
This type of pruning is another form of inclusion pruning and can be accomplished with the same
technique.
It has been widely observed that the performance of spatial decomposition data structures degrades
with increasing dimensionality. In order to manage high-dimensional datasets, practitioners often
use approximate proximity search techniques [8, 10, 17]. In the experiments, we explore one way
to use the BB-tree in an approximate fashion.
Determining whether two Bregman balls intersect, or whether one Bregman ball contains another,
is non-trivial. For the range search algorithm to be effective, it must be able to determine these
relationships very quickly. In the case of metric balls, these determinations are trivially accomplished using the triangle inequality. Since we cannot rely on the triangle inequality for an arbitrary
Bregman divergence, we must develop novel techniques.
4 Computation of ball intersection
In this section we lay out the main technical contribution of the paper. We develop algorithms for
determining (1) whether one Bregman ball is contained in another and (2) whether two Bregman
balls have non-empty intersection.
4.1 Containment
Let Bq ≡ Bf(μq, Rq) and Bx ≡ Bf(μx, Rx). We wish to evaluate if Bx ⊂ Bq. This problem is equivalent to testing whether

    df(x, μq) ≤ Rq

for all x ∈ Bx. Simplifying notation, the core problem is determining

    max_x df(x, q)   subject to:  df(x, μ) ≤ R.    (maxP)

Unfortunately, this problem is not convex. As is well-known, non-convex problems are in general much more computationally difficult to solve than convex ones. This difficulty is particularly problematic in the case of range search, as the search algorithm will need to solve this problem repeatedly in the course of evaluating a single range query. Moreover, finding a sub-optimal solution (i.e. a point x ∈ Bf(μ, R) that is not the max) will render the solution to the range search incorrect.

Remarkably, beneath (maxP) lies a geometric structure that allows an efficient solution. We now show the main claim of this section, which implies a simple, efficient algorithm for solving (maxP). We denote the convex conjugate of f by

    f*(x) ≡ sup_y {⟨x, y⟩ − f(y)}

and define x′ ≡ ∇f(x), q′ ≡ ∇f(q), etc.
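For concreteness, here is the conjugate pair worked out for the negative entropy f(x) = Σi xi log xi (our own worked example, not in the paper); having ∇f and ∇f* in closed form is what makes the line searches in this section cheap for the KL-divergence:

    % Conjugate pair for f(x) = \sum_i x_i \log x_i:
    \nabla f(x)_i = 1 + \log x_i, \qquad
    f^*(y) = \sum_i e^{y_i - 1}, \qquad
    \nabla f^*(y)_i = e^{y_i - 1}
    % x' = \nabla f(x) and x = \nabla f^*(x') are mutually inverse maps,
    % so each evaluation along the dual curve costs O(D).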
Claim 1. Suppose that the domain of f is C and that Bf(μ, R) ⊂ relint(C). Furthermore, assume that ‖∇²f*(x′)‖ is lower-bounded for all x′ such that x ∈ Bf(μ, R). Let xp be the optimal solution to (maxP). Then x′p lies in the set {θμ′ + (1 − θ)q′ | θ ≥ 0}.

Proof. Though the program is not concave, the Lagrange dual still provides an upper bound on the optimal solution value (by weak duality). The Lagrangian is

    ℓ(x, λ) ≡ df(x, q) − λ(df(x, μ) − R),    (1)

where λ ≥ 0.

Differentiating (1) with respect to x and setting it equal to 0, we get

    ∇f(xp) − ∇f(q) − λ∇f(xp) + λ∇f(μ) = 0,

which implies that

    ∇f(xp) = (1/(1 − λ)) (∇f(q) − λ∇f(μ)).    (2)

We need to check what type of extremum the stationary point xp is:

    ∇²x ℓ(x, λ) = (1 − λ)∇²f(x).

Thus for λ > 1, the xp defined implicitly in (2) is a maximum. Setting θ ≡ −λ/(1 − λ) gives

    ∇f(xp) = θμ′ + (1 − θ)q′,

where θ ∈ (−∞, 0) ∪ (1, ∞); we restrict attention to θ ∈ (1, ∞) since that is where λ > 1 and hence xp is a maximum. Let x′θ ≡ θμ′ + (1 − θ)q′ and xθ ≡ ∇f*(x′θ). The Lagrange dual is

    L(θ) ≡ df(xθ, q) + (θ/(1 − θ)) (df(xθ, μ) − R).

Then for any θ ∈ (1, ∞), we have

    df(xp, q) ≤ L(θ)    (3)

by weak duality. We now show that there is a θ̄ > 1 satisfying df(xθ̄, μ) = R. One can check that the derivative of df(xθ, μ) with respect to θ is

    (θ − 1)(μ′ − q′)⊤ ∇²f*(x′θ)(μ′ − q′).    (4)

Since ‖∇²f*‖ > c for some positive c, (4) is at least (θ − 1)‖μ′ − q′‖c. We conclude that df(xθ, μ) is increasing at an increasing rate with θ. Thus there must be some θ̄ > 1 such that df(xθ̄, μ) = R. Plugging this θ̄ into the dual, we get

    L(θ̄) = df(xθ̄, q) + (θ̄/(1 − θ̄)) (df(xθ̄, μ) − R) = df(xθ̄, q).

Combining with (3), we have

    df(xp, q) ≤ df(xθ̄, q).

Finally, since (maxP) is a maximization problem and since xθ̄ is feasible, the previous inequality is actually an equality, giving the theorem.
Thus determining if Bx ⊂ Bq reduces to searching for θ̄ > 1 satisfying

    df(xθ̄, μx) = Rx

and comparing df(xθ̄, μq) to Rq. Note that there is no obvious upper bound on θ̄ in general, though one may be able to derive such a bound for a particular Bregman divergence. Without such an upper bound, one needs to use a line search method that does not require one, such as Newton's method or the secant method. Both of these line search methods will converge quickly (quadratically in the case of Newton's method, slightly slower in the case of the secant method): since df(xθ, μx) is monotonic in θ, there is a unique root.
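A minimal sketch of this line search (ours; it uses exponential bracketing followed by bisection rather than Newton's or the secant method, which the same monotonicity argument justifies). Here mu_p and q_p denote μ′ = ∇f(μ) and q′ = ∇f(q), and grad_f_conj is ∇f*:

    def theta_line_search(mu_p, q_p, grad_f_conj, df, mu, R, tol=1e-8):
        # Find theta > 1 with d_f(x_theta, mu) = R, where
        # x_theta = grad f*( theta * mu' + (1 - theta) * q' ).
        x = lambda th: grad_f_conj(th * mu_p + (1.0 - th) * q_p)
        lo, hi = 1.0, 2.0
        while df(x(hi), mu) < R:      # bracket: d_f(x_theta, mu) increases with theta
            lo, hi = hi, 2.0 * hi
        while hi - lo > tol:          # bisect: monotonicity gives a unique root
            mid = 0.5 * (lo + hi)
            if df(x(mid), mu) < R:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def ball_contained(mu_x_p, q_p, grad_f_conj, df, mu_x, R_x, q, R_q):
        # Test Bx in Bq via Claim 1: maximize d_f(x, q) over Bx, compare to Rq.
        theta = theta_line_search(mu_x_p, q_p, grad_f_conj, df, mu_x, R_x)
        x_star = grad_f_conj(theta * mu_x_p + (1.0 - theta) * q_p)
        return df(x_star, q) <= R_q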
Interestingly, the convex program evaluated in [8] has a similar solution space, which we will again
encounter in the next section.
4.2 Non-empty intersection
In this section we provide an algorithm for evaluating whether Bq ∩ Bx = ∅. We will need to make use of the Pythagorean theorem, a standard property of Bregman divergences.

Theorem 1 (Pythagorean). Let C ⊂ R^D be a convex set and let x ∈ C. Then for all z, we have

    df(x, z) ≥ df(x, y) + df(y, z),

where y ≡ argmin_{y∈C} df(y, z) is the projection of z onto C.
At first glance, the Pythagorean theorem may appear to be a triangle inequality for Bregman divergences. However, the inequality is actually the reverse of the standard triangle inequality and only
applies to the very special case when y is the projection of z onto a convex set containing x. We
now prove the main claim of this section.
Claim 2. Suppose that Bx ∩ Bq ≠ ∅. Then there exists a w in

    {∇f*(θμ′x + (1 − θ)μ′q) | θ ∈ [0, 1]}

such that w ∈ Bq ∩ Bx.
Proof. Let z ∈ Bx ∩ Bq. We will refer to the set {∇f*(θμ′x + (1 − θ)μ′q) | θ ∈ [0, 1]} as the dual curve.

Let x̂ be the projection of μq onto Bx and let q̂ be the projection of μx onto Bq. Both x̂ and q̂ are on the dual curve (this fact follows from [8, Claim 2]), so we are done if we can show that at least one of them lies in the intersection of Bx and Bq. Suppose towards contradiction that neither is in the intersection.

The projection of x̂ onto Bq lies on the dual curve between x̂ and μq; thus projecting x̂ onto Bq yields q̂ and similarly projecting q̂ onto Bx yields x̂. By the Pythagorean theorem,

    df(z, x̂) ≥ df(z, q̂) + df(q̂, x̂),    (5)

since q̂ is the projection of x̂ onto Bq and since z ∈ Bq. Similarly,

    df(z, q̂) ≥ df(z, x̂) + df(x̂, q̂).    (6)

Inserting (5) into (6), we get

    df(z, q̂) ≥ df(z, q̂) + df(q̂, x̂) + df(x̂, q̂).

Rearranging, we get that df(q̂, x̂) + df(x̂, q̂) ≤ 0. Thus both df(q̂, x̂) = 0 and df(x̂, q̂) = 0, implying that x̂ = q̂. But since x̂ ∈ Bx and q̂ ∈ Bq, we have that x̂ = q̂ ∈ Bq ∩ Bx. This is the desired contradiction.
The preceding claim yields a simple algorithm for determining whether two balls Bx and Bq are disjoint: project μx onto Bq using the line search algorithm discussed previously. The projected point will obviously be in Bq; if it is also in Bx, the two balls intersect.3 Otherwise, they are disjoint and exclusion pruning can be performed.
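A corresponding sketch of the disjointness test (ours). Projecting μx onto Bq amounts to a bisection along the dual curve for the boundary point of Bq, assuming, as the projection argument above suggests, that the divergence to μq is monotone along the curve:

    def balls_intersect(mu_x, mu_q, grad_f, grad_f_conj, df, R_x, R_q, tol=1e-8):
        # Test whether Bx and Bq intersect by projecting mu_x onto Bq (Claim 2).
        if df(mu_x, mu_q) <= R_q:        # center of Bx already lies inside Bq
            return True
        mu_x_p, mu_q_p = grad_f(mu_x), grad_f(mu_q)
        w = lambda th: grad_f_conj(th * mu_x_p + (1.0 - th) * mu_q_p)
        lo, hi = 0.0, 1.0                # w(0) = mu_q (inside Bq), w(1) = mu_x (outside)
        while hi - lo > tol:             # bisect for the boundary point of Bq
            mid = 0.5 * (lo + hi)
            if df(w(mid), mu_q) <= R_q:
                lo = mid
            else:
                hi = mid
        return df(w(lo), mu_x) <= R_x    # is the projected point also in Bx?

Exclusion pruning at a node is then performed exactly when balls_intersect returns False.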
5 Experiments
We compare the performance of the search algorithm to standard brute force search on several
datasets. We are particularly interested in text applications as histogram representations are common, datasets are often very large, and efficient search is broadly useful. We experimented with the
following datasets, many of which are fairly high-dimensional.
• pubmed-D. We used one million documents from the pubmed abstract corpus (available
from the UCI collection). We generated a correlated topic model (CTM) [5] with D =
4, 8, . . . , 256 topics. For each D, we built a CTM using a training set and then performed
inference on the 1M documents to generate the topic histograms.
3. Claim 2 actually only shows that at least one of two projections (μx onto Bq and μq onto Bx) will be in the intersection. However, one can show that both projections will be in the intersection using the monotonicity of df(xθ, μ) in θ.
Figure 2: Approximate search. The y-axis is on a logarithmic scale and is the speedup over brute force search. The x-axis is a linear scale and is the average percentage of the points in range returned (i.e. the average recall). (Panels: corel; pmed4–pmed32; pmed64–pmed256; rcv8–rcv32; rcv64–rcv256; semantic space.)
• Corel histograms. This data set consists of 60k color histograms of dimensionality 64
generated from the Corel image datasets.
• rcv-D. Latent Dirichlet allocation was applied to 500K documents from the rcv1 [16]
corpus to generate topic histograms for each [6]. D is set to 8, 16, 32, . . . 256.
• Semantic space. This dataset is a 371-dimensional representation of 5000 images from the
Corel stock photo collection. Each image is represented as a distribution over keywords
[24].
All of our experiments are for the KL-divergence. Although the KL-divergence is widely used, little
is known about efficient proximity techniques for it. In contrast, the ℓ₂² and Mahalanobis distances
can be handled by metric methods, for which there is a huge literature. Application of the range
search algorithm for the KL-divergence raises one technical point: Claim 1 requires that the KL-ball being investigated lies within the domain of the KL-divergence. It is possible that the ball will cross the domain boundary (xi = 0), though we found that this was not a significant issue. When it did occur (which can be checked by evaluating df(μ, xθ) for large θ), we simply did not perform inclusion pruning for that node.
There are two regimes where range search is particularly useful: when the radius γ is very small and when it is large. When γ is small, range search is useful in instance-based learning algorithms like locally weighted regression, which need to retrieve points close to each test point. It is also useful for generating neighborhood graphs. When γ is large enough that Bf(q, γ) will contain most of the
database, range search is potentially useful for applications like distance-based outlier detection and
anomaly detection. We provide experiments for both of these regimes.
Table 1 shows the results for exact range search. For the small radius experiments, γ was chosen so that about 20 points would be inside the query ball (on average). On the pubmed datasets, we are
getting one to two orders of magnitude speed-up across all dimensionalities. On the rcv datasets,
the BB-tree range search algorithm is an order of magnitude faster than brute search except for the two datasets of highest dimensionality. The algorithm provides a useful speedup on corel, but
no speedup on semantic space. We note that the semantic space dataset is both high-dimensional
(371 dimensions) and quite small (5k), which makes it very hard for proximity search. The algorithm reflects the widely observed phenomenon that the performance of spatial decomposition data
structures degrades with dimensionality, but still provides a useful speedup on several moderate-dimensional datasets.
Table 1: Exact range search.

dataset          dimensionality   speedup (small radius)   speedup (large radius)
corel            64               2.53                     3.4
pubmed4          4                371.6                    5.1
pubmed8          8                102.7                    9.7
pubmed16         16               37.3                     12.8
pubmed32         32               18.6                     47.1
pubmed64         64               13.26                    21.6
pubmed128        128              15.0                     120.4
pubmed256        256              18.9                     39.0
rcv8             8                48.1                     8.9
rcv16            16               23.0                     21.9
rcv32            32               16.4                     16.4
rcv64            64               11.4                     9.6
rcv128           128              6.1                      3.1
rcv256           256              1.1                      1.9
semantic space   371              .7                       1.0
For the large radius experiments, γ was chosen so that all but about 100-300 points would be in range. The results here are more varied than for small γ, but we are still getting useful speedups across most of the datasets. Interestingly, the amount of speedup seems less dependent on the dimensionality in comparison to the small γ experiments.
Finally, we investigate approximate search, which we consider the most likely use of this algorithm.
There are many ways to use the BB-tree in an approximate way. Here, we follow [18] and simply
cut off the search process early. We are thus guaranteed to get only points within the specified
range (perfect precision), but we may not get all of them (less than perfect recall). In instance-based
learning algorithms, this loss of recall is often tolerable as long as a reasonable number of points are
returned. Thus a practical way to deploy the range search algorithm is to run it until enough points
are recovered. In this experiment, γ was set so that about 50 points would be returned. Figure 2
shows the results.
These are likely the most relevant results to practical applications. They demonstrate that the proposed algorithm provides a speedup of up to four orders of magnitude with a high recall.
6 Conclusion
We presented the first algorithm for efficient ball range search when the notion of dissimilarity
is an arbitrary Bregman divergence. This is an important step towards generalizing the efficient
proximity algorithms from ℓ₂ (and metrics) to the family of Bregman divergences, but there is plenty
more to do. First, it would be interesting to see if the dual-tree approach promoted in [11, 12] and
elsewhere can be used with BB-trees. This generalization appears to require more complex bounding
techniques than those discussed here. A different research goal is to develop efficient algorithms for
proximity search that have rigorous guarantees on run-time; theoretical questions about proximity
search with Bregman divergences remain largely open. Finally, the work in this paper provides a
foundation for developing efficient statistical algorithms using Bregman divergences; fleshing out
the details for a particular application is an interesting direction for future research.
References
[1] Marcel Ackermann and Johannes Bl?omer. Coresets and approximate clustering for bregman
divergences. In Proceedings of the Symposium on Discrete Algorithms (SODA), 2009.
[2] Pankaj K. Agarwal and Jeff Erickson. Geometric range searching and its relatives. In Advances
in Discrete and Computational Geometry, pages 1–56. American Mathematical Society, 1999.
[3] Katy Azoury and Manfred Warmuth. Relative loss bounds for on-line density estimation with
the exponential family of distributions. Machine Learning, 43(3):211–246, 2001.
[4] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with
bregman divergences. Journal of Machine Learning Research, Oct 2005.
[5] David Blei and John Lafferty. A correlated topic model of Science. Annals of Applied Statistics,
1(1):17?35, 2007.
[6] David Blei, Andrew Ng, and Michael Jordan. Latent dirichlet allocation. Journal of Machine
Learning Research, 2003.
[7] L.M. Bregman. The relaxation method of finding the common point of convex sets and its
application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7(3):200–217, 1967.
[8] Lawrence Cayton. Fast nearest neighbor retrieval for bregman divergences. In Proceedings of
the International Conference on Machine Learning, 2008.
[9] Lawrence Cayton. Bregman Proximity Search. PhD thesis, University of California, San Diego,
2009.
[10] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Symposium on Computational Geometry, 2004.
[11] Alexander Gray and Andrew Moore. ?N-body? problems in statistical learning. In Advances
in Neural Information Processing Systems, 2000.
[12] Alexander Gray and Andrew Moore. Nonparametric density estimation: Toward computational tractability. In SIAM International Conference on Data Mining, 2003.
[13] Sudipto Guha, Piotr Indyk, and Andrew McGregor. Sketching information divergences. In
Conference on Learning Theory, 2007.
[14] Michael P. Holmes, Alexander Gray, and Charles Lee Isbell. QUIC-SVD: Fast SVD using
cosine trees. In Advances in Neural Information Processing Systems 21, 2008.
[15] Dongryeol Lee and Alexander Gray. Fast high-dimensional kernel summations using the monte
carlo multipole method. In Advances in Neural Information Processing Systems 21, 2008.
[16] D. D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 2004.
[17] Ting Liu, Andrew Moore, and Alexander Gray. New algorithms for efficient high-dimensional
nonparametric classification. Journal of Machine Learning Research, 2006.
[18] Ting Liu, Andrew Moore, Alexander Gray, and Ke Yang. An investigation of practical approximate neighbor algorithms. In Advances in Neural Information Processing Systems, 2004.
[19] Frank Nielsen, Jean-Daniel Boissonnat, and Richard Nock. On bregman voronoi diagrams. In
Symposium on Discrete Algorithms, pages 746–755, 2007.
[20] Frank Nielsen, Paolo Piro, and Michel Barlaud. Bregman vantage point trees for efficient
nearest neighbor queries. In IEEE International Conference on Multimedia & Expo, 2009.
[21] Frank Nielsen, Paolo Piro, and Michel Barlaud. Tailored bregman ball trees for effective
nearest neighbors. In European Workshop on Computational Geometry, 2009.
[22] Fernando Pereira, Naftali Tishby, and Lillian Lee. Distributional clustering of English words.
In 31st Annual Meeting of the ACL, pages 183–190, 1993.
[23] Jan Puzicha, Joachim Buhmann, Yossi Rubner, and Carlo Tomasi. Empirical evaluation of
dissimilarity measures for color and texture. In Proceedings of the Internation Conference on
Computer Vision (ICCV), 1999.
[24] N. Rasiwasia, P. Moreno, and N. Vasconcelos. Bridging the gap: query by semantic example.
IEEE Transactions on Multimedia, 2007.
[25] Yirong Shen, Andrew Ng, and Matthias Seeger. Fast gaussian process regression using kdtrees. In Advances in Neural Information Processing Systems, 2006.
[26] Zhenjie Zhang, Beng Chin Ooi, Srinivasan Parthasarathy, and Anthony Tung. Similarity search
on bregman divergence: towards non-metric indexing. In International Conference on Very
Large Databases (VLDB), 2009.
3,184 | 3,885 | Learning from Neighboring Strokes:
Combining Appearance and Context for
Multi-Domain Sketch Recognition
Tom Y. Ouyang Randall Davis
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139 USA
{ouyang,davis}@csail.mit.edu
Abstract
We propose a new sketch recognition framework that combines a rich representation of low level visual appearance with a graphical model for capturing high
level relationships between symbols. This joint model of appearance and context
allows our framework to be less sensitive to noise and drawing variations, improving accuracy and robustness. The result is a recognizer that is better able to handle
the wide range of drawing styles found in messy freehand sketches. We evaluate
our work on two real-world domains, molecular diagrams and electrical circuit diagrams, and show that our combined approach significantly improves recognition
performance.
1 Introduction
Sketches are everywhere. From flow charts to chemical structures to electrical circuits, people use
them every day to communicate information across many different domains. They are also be an
important part of the early design process, helping us explore rough ideas and solutions in an informal environment. However, despite their ubiquity, there is still a large gap between how people
naturally interact with sketches and how computers can interpret them today. Current authoring
programs like ChemDraw (for chemical structures) and Visio (for general diagrams) still rely on the
traditional point-click-drag style of interaction. While popular, they simply do not provide the ease
of use, naturalness, or speed of drawing on paper.
We propose a new framework for sketch recognition that combines a rich representation of low level
visual appearance with a probabilistic model for capturing higher level relationships. By "visual appearance" we mean an image-based representation that preserves the pictorial nature of the ink. By "higher level relationships" we mean the spatial relationships between different symbols. Our combined approach uses a graphical model that classifies each symbol jointly with its context, allowing
neighboring interpretations to influence each other. This makes our method less sensitive to noise
and drawing variations, significantly improving robustness and accuracy. The result is a recognizer
that is better able to handle the range of drawing styles found in messy freehand sketches.
Current work in sketch recognition can, very broadly speaking, be separated into two groups. The
first group focuses on the relationships between geometric primitives like lines, arcs, and curves,
specifying them either manually [1, 4, 5] or learning them from labeled data [16, 20]. Recognition
is then posed as a constraint satisfaction problem, as in [4, 5], or as an inference problem on a
graphical model, as in [1, 16, 17, 20]. However, in many real-world sketches, it is difficult to extract
these primitives reliably. Circles may not always be round, line segments may not be straight, and
stroke artifacts like pen-drag (not lifting the pen between strokes), over-tracing (drawing over a
previously drawn stroke), and stray ink may introduce false primitives that lead to poor recognition.
In addition, recognizers that rely on extracted primitives often discard potentially useful information
contained in the appearance of the original strokes.
The second group of related work focuses on the visual appearance of shapes and symbols. These
include parts-based methods [9, 18], which learn a set of discriminative parts or patches for each
symbol class, and template-based methods [7, 11], which compare the input symbol to a library of
learned prototypes. The main advantage of vision-based approaches is their robustness to many of
the drawing variations commonly found in real-world sketches, including artifacts like over-tracing
and pen drag. However, these methods do not model the spatial relationships between neighboring
shapes, relying solely on local appearance to classify a symbol.
In the following sections we describe our approach, which combines both appearance and context.
It is divided into three main stages: (1) stroke preprocessing: we decompose strokes (each stroke is
defined as the set of points collected from pen-down to pen-up) into smaller segments, (2) symbol
detection: we search for potential symbols (candidates) among groups of segments, and (3) candidate selection: we select a final set of detections from these candidates, taking into account their
spatial relationships.
2 Preprocessing
The first step in our recognition framework is to preprocess the sketch into a set of simple segments,
as shown in Figure 1(b). The purpose for this step is twofold. First, like superpixels in computer
vision [14], segments are much easier to work with than individual points or pixels; the number
of points can be large even in moderate-sized sketches, making optimization intractable. Second,
in the domains we evaluated, the boundaries between segments effectively preserve the boundaries
between symbols. This is not the case when working with the strokes directly, so preprocessing
allows us to handle strokes that contain more than one symbol (e.g., when a wire and resistor are
drawn together without lifting the pen).
Our preprocessing algorithm divides strokes into segments by splitting them at their corner points.
Previous approaches to corner detection focused primarily on local pen speed and curvature [15], but
these measures are not always reliable in messy real-world sketches. Our corner detection algorithm,
on the other hand, tries to find the set of vertices that best approximates the original stroke as a whole.
It repeatedly discards the vertex vi that contributes the least to the quality of fit measure q, which we
define as:
q(vi ) = (MSE(v \ vi , s) ? MSE(v, s)) ? curvature(vi )
(1)
where s is the set of points in the original stroke, v is the current set of vertices remaining in the line
segment approximation, curvature(vi ) is a measure of the local stroke curvature1 , and (MSE(v \
vi , s) ? MSE(v, s)) is the increase in mean squared error caused by removing vertex vi from the
approximation.
Thus, instead of immediately trying to decide which point is a corner, our detector starts by making
the simpler decision about which point is not a corner. The process ends when q(vi ) is greater than
a predefined threshold2 . At the end of the preprocessing stage, the system records the length of the
longest segment L (after excluding the top 5% as outliers). This value is used in subsequent stages
as a rough estimate for the overall scale of the sketch.
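A compact Python rendering of this corner detector (our reconstruction, not the authors' code; the helpers implement the polyline approximation error and the point-to-chord curvature from footnote 1, and threshold follows footnote 2, i.e. 0.01 times the stroke's bounding-box diagonal):

    import numpy as np

    def point_to_segment(p, a, b):
        # distance from point p to the segment ab
        d = b - a
        t = np.clip(np.dot(p - a, d) / (np.dot(d, d) + 1e-12), 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * d)))

    def mse(v, stroke):
        # mean squared error of approximating the stroke by the polyline through v
        err = []
        for i in range(len(v) - 1):
            for k in range(v[i], v[i + 1] + 1):
                err.append(point_to_segment(stroke[k], stroke[v[i]], stroke[v[i + 1]]) ** 2)
        return sum(err) / len(err)

    def curvature(stroke, v, j):
        # footnote 1: distance from v_j to the segment formed by v_{j-1} and v_{j+1}
        return point_to_segment(stroke[v[j]], stroke[v[j - 1]], stroke[v[j + 1]])

    def detect_corners(stroke, threshold):
        # Greedily discard the vertex whose removal hurts the fit least (Eq. 1).
        # Quadratic reference implementation; a practical detector would be incremental.
        v = list(range(len(stroke)))
        while len(v) > 2:
            best_j, best_q = None, float("inf")
            for j in range(1, len(v) - 1):      # stroke endpoints are always kept
                q = (mse(v[:j] + v[j + 1:], stroke) - mse(v, stroke)) * curvature(stroke, v, j)
                if q < best_q:
                    best_j, best_q = j, q
            if best_q > threshold:              # every remaining vertex is a corner
                break
            del v[best_j]
        return v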
3 Symbol Detection
Our algorithm searches for symbols among groups of segments. Starting with each segment in
isolation, we generate successively larger groups by expanding the group to include the next closest
segment3 . This process ends when either the size of the group exceeds 2L (a spatial constraint) or
1. Defined as the distance between vi and the line segment formed by vi−1 and vi+1.
2. In our experiments, we set the threshold to 0.01 times the diagonal length of the stroke's bounding box.
3. Distance defined as mindist(s, g) + bbdist(s, g), where mindist(s, g) is the distance at the nearest point between segment s and group g and bbdist(s, g) is the diagonal length of the bounding box containing s and g.
Figure 1: Our recognition framework. (a) An example sketch of a circuit diagram and (b) the segments after preprocessing. (c) A subset of the candidate groups extracted from the sketch (only those
with an appearance potential > 0.25 are shown). (d) The resulting graphical model: nodes represent
segment labels, dark blue edges represent group overlap potentials, and light blue edges represent
context potentials. (e) The final set of symbol detections after running loopy belief propagation.
when the group spans more strokes than the temporal window specified for the domain4 . Note that
we allow temporal gaps in the detection region, so symbols do not need to be drawn with consecutive
strokes. An illustration of this process is shown in Figure 1(c).
We classify each candidate group using the symbol recognizer we described in [11], which converts the on-line stroke sequences into a set of low resolution feature images (see Figure 2(a)). This
emphasis on visual appearance makes our method less sensitive to stroke level differences like overtracing and pen drag, improving accuracy and robustness. Since [11] was designed for classifying
isolated shapes and not for detecting symbols in messy sketches, we augment its output with five
geometric features and a set of local context features:
stroke count: The number of strokes in the group.
segment count: The number of segments in the group.
diagonal length: The diagonal length of the group's bounding box, normalized by L.
group ink density: The total length of the strokes in the group divided by the diagonal length.
This feature is a measure of the group's ink density.
stroke separation: Maximum distance between any stroke and its nearest neighbor in the group.
local context: A set of four feature images that captures the local context around the group. Each
image filters the local appearance at a specific orientation: 0, 45, 90, and 135 degrees. The
images are centered at the middle of the group's bounding box and scaled so that each
dimension is equal to the group's diagonal length, as shown in Figure 2(b). The initial
12x12 images are smoothed using a Gaussian filter, down-sampled by a factor of 4.
The symbol detector uses a linear SVM [13] to classify each candidate group, labeling it as one of the
symbols in the domain or as mis-grouped "clutter". The training data includes both valid symbols
and clutter regions. Because the classifier needs to distinguish between more than two classes, we
4. The temporal window is 8 strokes for chemistry diagrams and 20 strokes for the circuit diagrams. These parameters were selected empirically, and can be customized by the system designer for each new domain.
Figure 2: Symbol Detection Features. (a) The set of five 12x12 feature images used by the isolated
appearance-based classifier. The first four images encode stroke orientation at 0, 45, 90, and 135
degrees; the fifth captures the locations of stroke endpoints. (b) The set of four local context images
for multi-segment symbols. (c) The set of four local context images for single-segment symbols.
use the one-vs-one strategy for combining binary classifiers. Also, to generate probability estimates,
we fit a logistic regression model to the outputs of the SVM [12].
Many of the features above are not very useful for groups that contain only one segment. For
example, an isolated segment always looks like a straight line, so its visual appearance is not very
informative. Thus, we use a different set of features to classify candidates that contain only a single
segment: (e.g., wires in circuits and straight bonds in chemistry):
orientation: The orientation of the segment, discretized into evenly spaced bins of size π/4.
segment length: The length of the segment, normalized by L.
segment count: The total number of segments extracted from the parent stroke.
segment ink density: The length of the substroke matching the start and end points of the segment
divided by the length of the segment. This is a measure of the segment's curvature and is
higher for more curved segments.
stroke ink density: The length of the parent stroke divided by the diagonal length of the parent
stroke's bounding box.
local context: Same as the local context for multi-segment symbols, except these images are centered at the midpoint of the segment, oriented in the same direction as the segment, and
scaled so that each dimension is equal to two times the length of the segment. An example
is shown in 2(c).
4 Improving Recognition using Context
The final task is to select a set of symbol detections from the competing candidate groups. Our
candidate selection algorithm has two main objectives. First, it must avoid selecting candidates
that conflict with each other because they share one or more segments. Second, it should select
candidates that are consistent with each other based on what the system knows about the likely
spatial relationships between symbols.
We use an undirected graphical model to encode the relationships between competing candidates.
Under our formulation, each segment (node) in the sketch needs to be assigned to one of the candidate groups (labels). Thus, our candidate selection problem becomes a segment labeling problem,
where the set of possible labels for a given segment is the set of candidate groups that contain that
segment. This allows us to incorporate local appearance, group overlap consistency, and spatial
context into a single unified model.
Figure 3: Spatial relationships. θ1 = angle(vi, vj), θ2 = angle(vi, vij), θ3 = abs(|vi| − |vj|): the three measurements used to calculate the context potential ψc(ci, cj, xi, xj), where vi and vj are vectors representing segments xi and xj and vij is a vector from the center of vi to the center of vj.
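The three measurements are straightforward to compute from segment endpoints; a small sketch (ours):

    import numpy as np

    def angle_between(a, b):
        # unsigned angle between vectors a and b, in [0, pi]
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def spatial_relationship(seg_i, seg_j):
        # theta1, theta2, theta3 from Figure 3; each segment is an endpoint pair
        v_i = seg_i[1] - seg_i[0]
        v_j = seg_j[1] - seg_j[0]
        c_i = 0.5 * (seg_i[0] + seg_i[1])   # segment centers
        c_j = 0.5 * (seg_j[0] + seg_j[1])
        v_ij = c_j - c_i                    # vector between the centers
        theta1 = angle_between(v_i, v_j)
        theta2 = angle_between(v_i, v_ij)
        theta3 = abs(np.linalg.norm(v_i) - np.linalg.norm(v_j))
        return theta1, theta2, theta3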
The joint probability function over the entire graph is given by:

    log P(c|x) = Σi ψa(ci, x) + Σij [ ψo(ci, cj) + ψc(ci, cj, xi, xj) ] − log(Z)    (2)

where the first sum contains the appearance potentials, the second sum contains the overlap and context potentials, x is the set of segments in the sketch, c is the set of segment labels, and Z is a normalizing constant.
Appearance potential. The appearance potential ψa measures how well the candidate group's appearance matches that of its predicted class. It uses the output of the isolated symbol classifier in Section 3 and is defined as:

    ψa(ci, x) = log Pa(ci|x)    (3)

where Pa(ci|x) is the likelihood score for candidate ci returned by the isolated symbol classifier.
Group overlap potential. The overlap potential ψo(ci, cj) is a pairwise compatibility that ensures the segment assignments do not conflict with each other. For example, if segments xi and xj are both members of candidate c and xi is assigned to c, then xj must also be assigned to c.

    ψo(ci, cj) = −100 if ((xi ∈ cj) or (xj ∈ ci)) and (ci ≠ cj), and 0 otherwise.    (4)
To improve efficiency, instead of connecting every pair of segments that are jointly considered in
c, we connect the segments into a loop based on temporal ordering. This accomplishes the same
constraint with fewer edges. An example is shown in Figure 1(d).
Joint context potential. The context potential ψc(ci, cj, xi, xj) represents the spatial compatibility between segments xi and xj, conditioned on their predicted class labels (e.g., resistor-resistor, resistor-wire, etc.). It is encoded as a conditional probability table that counts the number of times each spatial relationship (θ1, θ2, θ3) occurred for a given class pair (see Figure 3).

    ψc(ci, cj, xi, xj) = log Pc(θ(xi, xj) | class(ci), class(cj))    (5)

where class(ci) is the predicted class for candidate ci and θ(xi, xj) is the set of three spatial relationships (θ1, θ2, θ3) between segments xi and xj. This potential is active only for pairs of segments whose distance at the closest point is less than L/2. To build the probability table we discretize θ1 and θ2 into bins of size π/8 and θ3 into bins of size L/4.
The entries in the conditional probability table are defined as:

    Pc(θ | li, lj) = (N{θ, classi, classj} + α) / Σ{θ′} (N{θ′, classi, classj} + α)    (6)

where N{θ, classi, classj} is the number of times we observed a pair of segments with spatial relationship θ and class labels (classi, classj) and α is a weak prior (α = 10 in our experiments).
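Building the table is a counting exercise with the weak prior of Equation 6; a sketch (ours), where discretize bins (θ1, θ2, θ3) as described above and the number of θ bins is approximated by the bins observed in training:

    from collections import defaultdict

    def build_cpt(training_pairs, discretize, alpha=10.0):
        # training_pairs: iterable of (theta_tuple, class_i, class_j)
        counts = defaultdict(float)   # (theta_bin, class_i, class_j) -> count
        totals = defaultdict(float)   # (class_i, class_j) -> total over theta bins
        bins = set()
        for theta, ci, cj in training_pairs:
            b = discretize(theta)
            counts[(b, ci, cj)] += 1.0
            totals[(ci, cj)] += 1.0
            bins.add(b)

        def prob(theta, ci, cj):      # Equation 6 with weak prior alpha
            b = discretize(theta)
            num = counts[(b, ci, cj)] + alpha
            den = totals[(ci, cj)] + alpha * len(bins)   # observed bins only
            return num / den

        return prob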
Inference. We apply the max-product belief propagation algorithm [22] to find the configuration
that maximizes Equation 2. Belief propagation works by iteratively passing messages around the
connected nodes in the graph; each message from node i to node j contains i's belief for each possible state of j. In our implementation we use an "accelerated" message passing schedule [21]
that propagates messages immediately without waiting for other nodes to finish. The procedure
alternates between forward and backward passes through the nodes based on the temporal ordering
of the segments, running for a total of 100 iterations.
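For reference, a bare-bones loopy max-product update in log space (our sketch; it uses a simple synchronous sweep rather than the authors' accelerated schedule):

    import numpy as np

    def max_product(unary, pairwise, edges, n_iters=100):
        # unary[i]: log-potential vector over the labels of node i
        # pairwise[(i, j)]: log-potential matrix (rows: labels of i, cols: labels of j)
        # edges: list of directed edges (i, j); include both directions
        msgs = {(i, j): np.zeros(len(unary[j])) for (i, j) in edges}
        for _ in range(n_iters):
            for (i, j) in edges:
                # belief at i, excluding the message that j previously sent to i
                incoming = sum(msgs[(k, t)] for (k, t) in edges if t == i and k != j)
                b = unary[i] + incoming
                # new message: for each label of j, the best-scoring label of i
                msgs[(i, j)] = np.max(pairwise[(i, j)] + b[:, None], axis=0)
        labels = {}
        for i in unary:
            belief = unary[i] + sum(msgs[(k, t)] for (k, t) in edges if t == i)
            labels[i] = int(np.argmax(belief))
        return labels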
5 Evaluation
One goal of our research is to build a system that can handle the range of drawings styles found in
natural, real world diagrams. As a result, our data collection program was designed to behave like
a piece of paper, i.e., capturing the sketch but providing no recognition or feedback. Using the data
we collected, we evaluated five versions of our system:
Appearance uses only the isolated appearance-based recognizer from [11].
Appearance+Geometry uses isolated appearance and geometric features.
Appearance+Geometry+Local uses isolated appearance, geometric features, and local context.
Complete is the complete framework described in this paper, using our corner detector.
Complete (corner detector from [15]) is the complete framework, using the corner detector in [15].
(We include this comparison to evaluate the effectiveness of our corner detection algorithm.)
Note that the first three versions still use the group overlap potential to select the best set of consistent
candidates.
Chemistry
For this evaluation we recruited 10 participants who were familiar with organic chemistry and asked
each of them to draw 12 real world organic compounds (e.g., Aspirin, Penicillin, Sildenafil, etc) on a
Tablet PC. We performed a set of user-independent performance evaluations, testing our system on
one user while using the examples from the other 9 users as training data. By leaving out sketches
from the same participant, this evaluation demonstrates how well our system would perform on a
new user.
For this domain we noticed that users almost never drew multiple symbols using a single stroke,
with the exception of multiple connected straight bonds (e.g., rings). Following this observation, we
optimized our candidate extractor to filter out multi-segment candidates that break stroke boundaries.
Method                                 Accuracy
Complete (corner detector from [15])   0.806
Appearance                             0.889
Appearance+Geometry                    0.947
Appearance+Geometry+Local              0.958
Complete                               0.971

Table 1: Overall recognition accuracy for the chemistry dataset.
Note that for this dataset we report only accuracy (recall), because, unlike traditional object detection, there are no overlapping detections and every stroke is assigned to a symbol. Thus, a false
positive always causes a false negative, so recall and precision are redundant: e.g., misclassifying
one segment in a three-segment "H" makes it impossible to recognize the original "H" correctly.
The results in Table 1 show that our method was able to recognize 97% of the symbols correctly.
To be considered a correct recognition, a predicted symbol needs to match both the segmentation
and class of the ground truth label. By modeling joint context, the complete framework was able
to reduce the error rate by 31% compared to the next best method. Figure 4 (top) shows several
sketches interpreted by our system. We can see that the diagrams in this dataset can be very messy,
and exhibit a wide range of drawing styles. Notice that in the center diagram, the system made two
errors because the author drew hash bonds differently from all the other users, enclosing them inside
a triangle.
Circuits
The second dataset is a collection of circuit diagrams collected by Oltmans and Davis [9]. The
examples were from 10 users who were experienced in basic circuit design. Each user drew ten or
eleven different circuits, and every circuit was required to include a pre-specified set of components.
We again performed a set of user-independent performance evaluations. Because the exact locations
of the ground truth labels are somewhat subjective (i.e., it is not obvious whether the resistor label
should include the short wire segments on either end), we adopt the same evaluation metric used in
the Pascal Challenge [2] and in [9]: a prediction is considered correct if the area of overlap between
its bounding box and the ground truth label?s bounding box is greater than 50% of the area of their
union. Also, since we do not count wire detections for this dataset (as in [9]), we report precision as
well as recall.
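For reference, the overlap test itself is only a few lines; a sketch of our own, assuming boxes are given as (x1, y1, x2, y2):

```python
def is_correct_detection(pred, truth, threshold=0.5):
    """PASCAL-style match: area of intersection over area of union > threshold.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2 (our convention).
    """
    ix = max(0.0, min(pred[2], truth[2]) - max(pred[0], truth[0]))
    iy = max(0.0, min(pred[3], truth[3]) - max(pred[1], truth[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred) + area(truth) - inter
    return inter / union > threshold

print(is_correct_detection((0, 0, 10, 10), (2, 0, 12, 10)))  # True: IoU = 80/120
```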
Method                                  Precision    Recall
Oltmans 2007 [9]                        0.257        0.739
Complete (corner detector from [15])    0.831        0.802
Appearance                              0.710        0.824
Appearance+Geometry                     0.774        0.832
Appearance+Geometry+Local               0.879        0.874
Complete                                0.908        0.912

Table 2: Overall recognition accuracy for the circuit diagram dataset.
Table 2 shows that our method was able to recognize over 91% of the circuit symbols correctly.
Compared to the next best method, the complete framework was able to reduce the error rate by 30%.
On this dataset Oltmans and Davis [9] were able to achieve a best recall of 73.9% at a precision of
25.7%. Compared to their reported results, we reduced the error rate by 66% and more than tripled the
precision. As Figure 4 (bottom) shows, this is a very complicated and messy corpus with significant
drawing variations like overtracing and pen drag.
Runtime
In the evaluations above, it took on average 0.1 seconds to process a new stroke in the circuits
dataset and 0.02 seconds for the chemistry dataset (running on a 3.6 GHz machine, single-thread).
With incremental interpretation, the system should be able to easily keep up in real time.
Related Work
Sketch recognition is a relatively new field, and we did not find any publicly available benchmarks
for the domains we evaluated. In this section, we summarize the performance of existing systems
that are similar to ours. Alvarado and Davis [1] proposed using dynamically constructed Bayesian
networks to represent the contextual relationships between geometric primitives. They achieved an
accuracy of 62% on a circuits dataset similar to ours, but needed to manually segment any strokes
that contained more than one symbol. Gennari et al. [3] developed a system that searches for symbols
in high density regions of the sketch and uses domain knowledge to correct low level recognition
errors. They reported an accuracy of 77% on a dataset with 6 types of circuit components. Sezgin
and Davis [16] proposed using an HMM to model the temporal patterns of geometric primitives, and
reported an accuracy of 87% on a dataset containing 4 types of circuit components.
Shilman et al. [17] proposed an approach that treats sketch recognition as a visual parsing problem.
Our work differs from theirs in that we use a rich model of low-level visual appearance and do
not require a pre-defined spatial grammar. Ouyang and Davis [10] developed a sketch recognition
system that uses domain knowledge to refine its interpretation. Their work focused on chemical
diagrams, and detection was limited to symbols drawn using consecutive strokes. Outside of the
sketch recognition community, there is also a great deal of interest in combining appearance and
context for problems in computer vision [6, 8, 19].
Figure 4: Examples of chemical diagrams (top) and circuit diagrams (bottom) recognized by our
system (complete framework). Correct detections are highlighted in green (teal for hash and wedge
bonds), false detections in red, and missed symbols in orange.
6 Discussion
We have proposed a new framework that combines a rich representation of low level visual appearance with a probabilistic model for capturing higher level relationships. To our knowledge this is
the first paper to combine these two approaches, and the result is a recognizer that is better able
to handle the range of drawing styles found in messy freehand sketches. To preserve the familiar
experience of using pen and paper, our system supports the same symbols, notations, and drawing
styles that people are already accustomed to.
In our initial evaluation we apply our method on two real-world domains, chemical diagrams and
electrical circuits (with 10 types of components), and achieve accuracy rates of 97% and 91% respectively. Compared to existing benchmarks in literature, our method achieved higher accuracy
even though the other systems supported fewer symbols [3, 16], trained on data from the same user
[3, 16], or required manual pre-segmentation [1].
Acknowledgements
This research was supported in part by a DHS Graduate Research Fellowship and a grant from Pfizer,
Inc. We thank Michael Oltmans for kindly making his dataset available to us.
References
[1] C. Alvarado and R. Davis. Sketchread: A multi-domain sketch recognition engine. In Proc. ACM Symposium on User Interface Software and Technology, 2004.
[2] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge 2008 results, 2008.
[3] L. Gennari, L. Kara, T. Stahovich, and K. Shimada. Combining geometry and domain knowledge to interpret hand-drawn diagrams. Computers & Graphics, 29(4):547-562, 2005.
[4] M. Gross. The electronic cocktail napkin - a computational environment for working with design diagrams. Design Studies, 17(1):53-69, 1996.
[5] T. Hammond and R. Davis. LADDER: a language to describe drawing, display, and editing in sketch recognition. In Proc. International Conference on Computer Graphics and Interactive Techniques, 2006.
[6] X. He, R. Zemel, and M. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2004.
[7] L. Kara and T. Stahovich. An image-based, trainable symbol recognizer for hand-drawn sketches. Computers & Graphics, 29(4):501-517, 2005.
[8] K. Murphy, A. Torralba, and W. Freeman. Using the forest to see the trees: a graphical model relating features, objects and scenes. Advances in Neural Information Processing Systems, 2003.
[9] M. Oltmans. Envisioning Sketch Recognition: A Local Feature Based Approach to Recognizing Informal Sketches. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, May 2007.
[10] T. Y. Ouyang and R. Davis. Recognition of hand drawn chemical diagrams. In Proc. AAAI Conference on Artificial Intelligence, 2007.
[11] T. Y. Ouyang and R. Davis. A visual approach to sketched symbol recognition. In Proc. International Joint Conferences on Artificial Intelligence, 2009.
[12] J. Platt. Probabilities for SV machines. Advances in Neural Information Processing Systems, 1999.
[13] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Advances in Kernel Methods - Support Vector Learning, 1999.
[14] X. Ren and J. Malik. Learning a classification model for segmentation. In Proc. IEEE International Conference on Computer Vision, pages 10-17, 2003.
[15] T. Sezgin and R. Davis. Sketch based interfaces: Early processing for sketch understanding. In Proc. International Conference on Computer Graphics and Interactive Techniques. ACM New York, NY, USA, 2006.
[16] T. Sezgin and R. Davis. Sketch recognition in interspersed drawings using time-based graphical models. Computers & Graphics, 32(5):500-510, 2008.
[17] M. Shilman, H. Pasula, S. Russell, and R. Newton. Statistical visual language models for ink parsing. Proc. AAAI Spring Symposium on Sketch Understanding, 2002.
[18] M. Shilman, P. Viola, and K. Chellapilla. Recognition and grouping of handwritten text in diagrams and equations. In Proc. International Workshop on Frontiers in Handwriting Recognition, 2004.
[19] J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. Lecture Notes in Computer Science, 3951:1, 2006.
[20] M. Szummer. Learning diagram parts with hidden random fields. In Proc. International Conference on Document Analysis and Recognition, 2005.
[21] M. Tappen and W. Freeman. Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters. In Proc. IEEE International Conference on Computer Vision, 2003.
[22] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, pages 239-269, 2003.
Measuring model complexity with the prior predictive
Wolf Vanpaemel*
Department of Psychology
University of Leuven
Belgium.
[email protected]
Abstract
In the last few decades, model complexity has received a lot of press. While many
methods have been proposed that jointly measure a model's descriptive adequacy
and its complexity, few measures exist that measure complexity in itself. Moreover, existing measures ignore the parameter prior, which is an inherent part of the
model and affects the complexity. This paper presents a stand alone measure for
model complexity, that takes the number of parameters, the functional form, the
range of the parameters and the parameter prior into account. This Prior Predictive
Complexity (PPC) is an intuitive and easy to compute measure. It starts from the
observation that model complexity is the property of the model that enables it to
fit a wide range of outcomes. The PPC then measures how wide this range exactly
is.
keywords: Model Selection & Structure Learning; Model Comparison Methods;
Perception
The recent revolution in model selection methods in the cognitive sciences was driven to a large
extent by the observation that computational models can differ in their complexity. Differences
in complexity put models on unequal footing when their ability to approximate empirical data is
assessed. Therefore, models should be penalized for their complexity when their adequacy is measured. The balance between descriptive adequacy and complexity has been termed generalizability
[1, 2].
Much attention has been devoted to developing, advocating, and comparing different measures of
generalizability (for a recent overview, see [3]). In contrast, measures of complexity have received
relatively little attention. The aim of the current paper is to propose and illustrate a stand alone
measure of model complexity, called the Prior Predictive Complexity (PPC). The PPC is based on
the intuitive idea that a complex model can predict many outcomes and a simple model can predict
a few outcomes only.
First, I discuss existing approaches to measuring model complexity and note some of their limitations. In particular, I argue that currently existing measures ignore one important aspect of a model:
the prior distribution it assumes over the parameters. I then introduce the PPC, which, unlike the
existing measures, is sensitive to the parameter prior. Next, the PPC is illustrated by calculating the
complexities of two popular models of information integration.
1 Previous approaches to measuring model complexity
A first approach to assess the (relative) complexity of models relies on simulated data. Simulationbased methods differ in how these artificial data are generated. A first, atheoretical approach uses
random data [4, 5]. In the semi-theoretical approach, the data are generated from some theoretically
interesting functions, such as the exponential or the logistic function [4]. Using these approaches,
the models under consideration are equally complex if each model provides the best optimal fit to
roughly the same number of data sets. A final approach to generating artificial data is a theoretical
one, in which the data are generated from the models of interest themselves [6, 7]. The parameter
sets used in the generation can either be hand-picked by the researcher, estimated from empirical
data or drawn from a previously specified distribution. If the models under consideration are equally
complex, each model should provide the best optimal fit to self-generated data more often than the
other models under consideration do.
One problem with this simulation-based approach is that it is very labor intensive. It requires generating a large amount of artificial data sets, and fitting the models to all these data sets. Further,
it relies on choices that are often made in an arbitrary fashion that nonetheless bias the results. For
example, in the semi-theoretical approach, a crucial choice is which functions to use. Similarly, in
the theoretical approach, results are heavily influenced by the parameter values used in generating
the data. If they are fixed, on what basis? If they are estimated from empirical data, from which
data? If they are drawn randomly, from which distribution? Further, a simulation study only gives a
rough idea of complexity differences but provides no direct measure reflecting the complexity.
A number of proposals have been made to measure model complexity more directly. Consider a model M with k parameters, summarized in the parameter vector θ = (θ_1, θ_2, ..., θ_k), which has a range indicated by Θ. Let d denote the data and p(d|θ, M) the likelihood. The most straightforward measure of model complexity is the parametric complexity (PC), which simply counts the number of parameters:
PC = k.    (1)
PC is attractive as a measure of model complexity since it is very easy to calculate. Further, it has a
direct and well understood relation toward complexity: the more parameters, the more complex the
model. It is included as the complexity term of several generalizability measures such as AIC [8]
and BIC [9], and it is at the heart of the Likelihood Ratio Test.
Despite this intuitive appeal, PC is not free from problems. One problem with PC is that it reflects only a single aspect of complexity. Also the parameter range and the functional form (the
way the parameters are combined in the model equation) influence a model's complexity, but these
dimensions of complexity are ignored in PC [2, 6].
A complexity measure that takes these three dimensions into account is provided by the geometric
complexity (GC) measure, which is inspired by differential geometry [10]. In GC, complexity is
conceptualized as the number of distinguishable probability distributions a model can generate. It is
defined by
\mathrm{GC} = \frac{k}{2} \ln \frac{n}{2\pi} + \ln \int_{\Theta} \sqrt{\det I(\theta \mid M)} \, d\theta,    (2)
where n indicates the size of the data sample and I(θ) is the Fisher Information Matrix:
I_{ij}(\theta \mid M) = -E_{\theta}\left[ \frac{\partial^2 \ln p(d \mid \theta, M)}{\partial \theta_i \, \partial \theta_j} \right].    (3)
Note that I(θ|M) is determined by the likelihood function p(d|θ, M), which is in turn determined by the model equation. Hence GC is sensitive to the number of parameters (through k), the functional form (through I), and the range (through Θ). Quite surprisingly, GC turns out to be equal
to the complexity term used in one version of Minimum Description Length (MDL), a measure of
generalizability developed within the domain of information theory [2, 11, 12, 13].
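As a worked example of our own (not from the paper), GC can be evaluated numerically. For a single-parameter Bernoulli model with p(heads) = θ, the Fisher information per trial is I(θ) = 1/(θ(1 − θ)), and the integral of its square root over (0, 1) has the closed form π:

```python
import numpy as np

def geometric_complexity(n, k, sqrt_det_fisher, theta_grid):
    """GC = (k/2) ln(n / (2 pi)) + ln of the integral of sqrt(det I(theta))."""
    f = sqrt_det_fisher(theta_grid)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta_grid))  # trapezoid rule
    return 0.5 * k * np.log(n / (2 * np.pi)) + np.log(integral)

# One-parameter Bernoulli model: det I(theta) = 1 / (theta * (1 - theta)) per trial.
grid = np.linspace(1e-6, 1 - 1e-6, 200001)
gc = geometric_complexity(n=100, k=1,
                          sqrt_det_fisher=lambda t: 1.0 / np.sqrt(t * (1 - t)),
                          theta_grid=grid)
print(gc)  # ~ 0.5 * ln(100 / (2 pi)) + ln(pi) ~ 2.53
```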
GC contrasts favorably with PC, in the sense that it takes three dimensions of complexity into account rather than a single one. A major drawback of GC is that, unlike PC, it requires considerable
technical sophistication to be computed, as it relies on the second derivative of the likelihood. A
more important limitation of both PC and GC is that these measures are insensitive to yet another
important dimension contributing to model complexity: the prior distribution over the model parameters. The relation between the parameter prior distribution and model complexity is discussed
next.
2 Model complexity and the parameter prior
The growing popularity of Bayesian methods in psychology has not only raised awareness that
model complexity should be taken into account when testing models [6], it has also drawn attention
to the fact that in many occasions, relevant prior information is available [14]. In Bayesian methods,
there is room to incorporate this information in two different flavors: as a prior distribution over the
models, or as a prior distribution over the parameters. Specifying a model prior is a daunting task, so
almost invariably, the model prior is taken to be uniform (but see [15] for an exception). In contrast,
information regarding the parameter is much easier to include, although still challenging (e.g., [16]).
There are two ways to formalize prior information about a model?s parameters: using the parameter
prior range (often referred to as simply the range) and using the parameter prior distribution (often
referred to as simply the prior). The prior range indicates which parameter values are allowed
and which are forbidden. The prior distribution indicates which parameter values are likely and
which are unlikely. Models that share the same equation and the same range but differ in the prior
distribution can be considered different models (or at least different model versions), just like models
that share the same equation but differ in range are different model versions. Like the parameter prior
range, the parameter prior distribution influences the model complexity. In general, a model with a
vague parameter prior distribution is more complex than a model with a sharply peaked parameter
prior distribution, much as a model with a broad-ranged parameter is more complex than the same
model where the parameter is heavily restricted.
To drive home the point that the parameter prior should be considered when model complexity is
assessed, consider the following 'fair coin' model M_f and a 'biased coin' model M_b. There is a clear intuitive complexity difference between these models: M_b is more complex than M_f. The most straightforward way to formalize these models is as follows, where p_h denotes the probability of observing heads:
p_h = 1/2    (4)
for model M_f, and the triplet of equations
p_h = \theta, \quad 0 \le \theta \le 1, \quad p(\theta) = 1    (5)
jointly define model M_b. The range forbids values smaller than 0 or greater than 1 because p_h is a proportion. As M_f and M_b have a different number of parameters, both PC and GC, being sensitive to the number of parameters, pick up the difference in model complexity between the models.
Alternatively, model M_f could be defined as follows:
p_h = \theta, \quad 0 \le \theta \le 1, \quad p(\theta) = \delta(\theta - \tfrac{1}{2}),    (6)
where δ(x) is the Dirac delta. Note that the model formalized in Equation 6 is exactly identical to the
model formalized in Equation 4. However, relying on the formulation of model M_f in Equation 6, PC and GC now judge M_f and M_b to be equally complex: both models share the same model equation (which implies they have the same number of parameters and the same functional form) and the same range for the parameter. Hence, PC and GC make an incorrect judgement of the complexity difference between both models. This misjudgement is a direct result of the insensitivity of these measures to the parameter prior. As models M_f and M_b have different prior distributions over their parameter, a measure sensitive to the prior would pick up the complexity difference between these models. Such a measure is introduced next.
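The difference can be previewed numerically (a sketch of our own, anticipating the prior predictive idea developed next): under M_b the distribution over the number of heads in n flips is flat, whereas M_f concentrates its predictions near n/2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, draws = 10, 200000

# M_b: draw theta ~ Uniform(0, 1), then heads ~ Binomial(n, theta).
heads_b = rng.binomial(n, rng.uniform(0, 1, size=draws))
# M_f: theta is fixed at 1/2.
heads_f = rng.binomial(n, 0.5, size=draws)

ppd_b = np.bincount(heads_b, minlength=n + 1) / draws
ppd_f = np.bincount(heads_f, minlength=n + 1) / draws

print(np.round(ppd_b, 2))  # flat: every count of heads has mass ~ 1/(n+1) = 0.09
print(np.round(ppd_f, 2))  # peaked: mass ~ 0.25 at 5 heads, ~ 0 at 0 or 10 heads
```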
3 The Prior Predictive Complexity
Model complexity refers to the property of the model that enables it to predict a wide range of data
patterns [2]. The idea of the PPC is to measure how wide this range exactly is. A complex model
can predict many outcomes, and a simple model can predict a few outcomes only. Model simplicity,
then, refers to the property of placing restrictions on the possible outcomes: the greater restrictions,
the greater the simplicity.
To understand how model complexity is measured in the PPC, it is useful to think about the universal
interval (UI) and the predicted interval (PI). The universal interval is the range of outcomes that could
potentially be observed, irrespective of any model. For example, in an experiment with n binomial
trials, it is impossible to observe less than zero successes, or more than n successes, so the range of possible outcomes is [0, n]. Similarly, the universal interval for a proportion is [0, 1]. The predicted
interval is the interval containing all outcomes the model predicts.
An intuitive way to gauge model complexity is then the cardinality of the predicted interval, relative
to the cardinality of the universal interval, averaged over all m conditions or stimuli:
\mathrm{PPC} = \frac{1}{m} \sum_{i=1}^{m} \frac{|\mathrm{PI}_i|}{|\mathrm{UI}_i|}.    (7)
A key aspect of the PPC is deriving the predicted interval. For a parameterized likelihood-based
model, prediction takes the form of a distribution over all possible outcomes for some future, yet-to-be-observed data d under some model M. This distribution is called the prior predictive distribution
(ppd) and can be calculated using the law of total probability:
p(d \mid M) = \int_{\Theta} p(d \mid \theta, M) \, p(\theta \mid M) \, d\theta.    (8)
Predicting the probability of unseen future data d arising under the assumption that model M is true
involves integrating the probability of the data for each of the possible parameter values, p(d|θ, M), as weighted by the prior probability of each of these values, p(θ|M).
Note that the ppd relies on the number of parameters (through the number of integrals and the likelihood), the model equation (through the likelihood), and the parameter range (through Θ). Therefore, like GC, the PPC is sensitive to all these aspects. In contrast to GC, however, the ppd, and hence the
PPC, also relies on the parameter prior.
Since predictions are made probabilistically, virtually all outcomes will be assigned some prior
weight. This implies that, in principle, the predicted interval equals the universal interval. However,
for some outcomes the assigned weight will be extremely small. Therefore, it seems reasonable to
restrict the predicted interval to the smallest interval that includes some predetermined amount of the
prior mass. For example, the 95% predictive interval is defined by those outcomes with the highest
prior mass that together make up 95% of the prior mass.
Analytical solutions to the integral defining the ppd are rarely available. Instead, one should rely on
approximations to the ppd by drawing samples from it. In the current study, sampling was performed
using WinBUGS [17, 18], a highly versatile, user friendly, and freely available software package.
It contains sophisticated and relatively general-purpose Markov Chain Monte Carlo (MCMC) algorithms to sample from any distribution of interest.
4 An application example
The PPC is illustrated by comparing the complexity of two popular models of information integration, which attempt to account for how people merge potentially ambiguous or conflicting information from various sensorial sources to create subjective experience. These models either assume that
the sources of information are combined additively (the Linear Integration Model; LIM; [19]) or
multiplicatively (the Fuzzy Logical Model of Perception; FLMP; [20, 21]).
4.1 Information integration tasks
A typical information integration task exposes participants simultaneously to different sources of
information and requires this combined experience to be identified in a forced-choice identification
task. The presented stimuli are generated from a factorial manipulation of the sources of information
by systematically varying the ambiguity of each of the sources. The relevant empirical data consist
of, for each of the presented stimuli, the counts k_m of the number of times the m-th stimulus was identified as one of the response alternatives, out of the t_m trials on which it was presented.
For example, an experiment in phonemic identification could involve two phonemes to be identified,
/ba/ and /da/ and two sources of information, auditory and visual. Stimuli are created by crossing
different levels of audible speech, varying between /ba/ and /da/, with different levels of visible
speech, also varying between these alternatives. The resulting set of stimuli spans a continuum
between the two syllables. The participant is then asked to listen and to watch the speaker, and
based on this combined audiovisual experience, to identify the syllable as being either /ba/ or /da/.
In the so-called expanded factorial design, not only bimodal stimuli (containing both auditory and
visual information) but also unimodal stimuli (providing only a single source of information) are
presented.
4.2 Information integration models
In what follows, the formal description of the LIM and the FLMP is outlined for a design with two
response alternatives (/da/ or /ba/) and two sources (auditory and visual), with I and J levels,
respectively. In such a two-choice identification task, the counts k_m follow a Binomial distribution:
k_m \sim \mathrm{Binomial}(p_m, t_m),    (9)
where p_m indicates the probability that the m-th stimulus is identified as /da/.
4.2.1 Model equation
The probability for the stimulus constructed with the i-th level of the first source and the j-th level of the second source being identified as /da/ is computed according to the choice rule:
p_{ij} = \frac{s(ij, /da/)}{s(ij, /da/) + s(ij, /ba/)},    (10)
where s(ij, /da/) represents the overall degree of support for the stimulus to be /da/.
The sources of information are assumed to be evaluated independently, implying that different parameters are used for the different modalities. In the present example, the degree of auditory support for /da/ is denoted by a_i (i = 1, ..., I) and the degree of visual support for /da/ by b_j (j = 1, ..., J).
When a unimodal stimulus is presented, the overall degree of support for each alternative is given by s(i*, /da/) = a_i and s(*j, /da/) = b_j, where the asterisk (*) indicates the absence of information, implying that Equation 10 reduces to
p_{i*} = a_i \quad \text{and} \quad p_{*j} = b_j.    (11)
When a bimodal stimulus is presented, the overall degree of support for each alternative is based on the integration or blending of both these sources. Hence, for bimodal stimuli, s(ij, /da/) = a_i ⊗ b_j, where the operator ⊗ denotes the combination of both sources. Hence, Equation 10 reduces to
p_{ij} = \frac{a_i \otimes b_j}{a_i \otimes b_j + (1 - a_i) \otimes (1 - b_j)}.    (12)
The LIM assumes an additive combination, i.e., ⊗ = +, so Equation 12 becomes
p_{ij} = \frac{a_i + b_j}{2}.    (13)
The FLMP, in contrast, assumes a multiplicative combination, i.e., ⊗ = ×, so Equation 12 becomes
p_{ij} = \frac{a_i b_j}{a_i b_j + (1 - a_i)(1 - b_j)}.    (14)
4.2.2 Parameter prior range and distribution
Each level of auditory and visual support for /da/ (i.e., a_i and b_j, respectively) is associated with a free parameter, which implies that the FLMP and the LIM have an equal number of free parameters, I + J. Each of these parameters is constrained to satisfy 0 ≤ a_i, b_j ≤ 1.
The original formulations of the LIM and FLMP unfortunately left the parameter priors unspecified.
However, an implicit assumption that has been commonly used is a uniform prior for each of the
parameters. This assumption implicitly underlies classical and widely adopted methods for model evaluation using the percentage of variance accounted for or maximum likelihood.
a_i \sim \mathrm{Uniform}(0, 1) \quad \text{and} \quad b_j \sim \mathrm{Uniform}(0, 1), \quad \text{for } i = 1, \ldots, I; \; j = 1, \ldots, J.    (15)
The models relying on this set of uniform priors will be referred to as LIMu and FLMPu.
Note that LIMu and FLMPu treat the different parameters as independent. This approach misses
important information. In particular, the experimental design is such that the amount of support for
each level i + 1 is always higher than for level i. Because parameter a_i (or b_i) corresponds to the degree of auditory (or visual) support for a unimodal stimulus at the i-th level, it seems reasonable to
expect the following orderings among the parameters to hold (see also [6]):
a_j > a_i \quad \text{and} \quad b_j > b_i \quad \text{for } j > i.    (16)
The models relying on this set of ordered priors will be referred to as LIMo and FLMPo.
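One convenient way to draw from such an ordered prior is to sort independent uniform draws, which yields a sample that is uniform over the admissible ordered region; a sketch (ours):

```python
import numpy as np

def sample_ordered_uniform(rng, levels):
    """One draw of a_1 < a_2 < ... < a_levels, uniform over the ordered region."""
    return np.sort(rng.uniform(0, 1, size=levels))

rng = np.random.default_rng(1)
a = sample_ordered_uniform(rng, levels=8)  # auditory support, 8 levels
b = sample_ordered_uniform(rng, levels=2)  # visual support, 2 levels
print(a)  # increasing with probability one, as required by Equation 16
```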
4.3 Complexity and experimental design
It is tempting to consider model complexity as an inherent characteristic of a model. For some models and for some measures of complexity this is clearly the case. Consider, for example, model M_b. In any experimental design (i.e., a number of coin tosses), PC_{M_b} = 1. However, more generally,
this is not the case. Focusing on the FLMP and the LIM, it is clear that even a simple measure such as PC depends crucially on (some aspects of) the experimental design. In particular, every level corresponds to a new parameter, so PC = I + J. Similarly, GC is dependent on design choices. The PPC
is not different in this respect.
The design sensitivity implies that one can only make sensible conclusions about differences in
model complexity by using different designs. In an information integration task, the design decisions include the type of design (expanded or not), the number of sources, the number of response
alternatives, the number of levels for each source, and the number of observations for each stimulus
(sample size). The present study focuses on the expanded factorial designs with two sources and two
response alternatives. The additional design features were varied: both a 5 × 5 and an 8 × 2 design
were considered, using three different sample sizes (20, 60 and 150, following [2]).
4.4 Results
Figure 1 shows the 99% predicted interval in the 8 × 2 design with n = 150. Each panel corresponds
to a different model. In each panel, each of the 26 stimuli is displayed on the x-axis. The first eight
stimuli correspond to the stimuli with the lowest level of visual support, and are ordered in increasing
order of auditory support. The next eight stimuli correspond to the stimuli with the highest level of
visual support. The next eight stimuli correspond to the unimodal stimuli where only auditory
information is provided (again ranked in increasing order). The final two stimuli are the unimodal
visual stimuli.
Panel A shows that the predicted interval of LIMu nearly equals the universal interval, ranging
between 0 and 1. This indicates that almost all outcomes are given a non-negligible prior mass
by LIMu , making it almost maximally complex. FLMPu is even more complex. The predicted
interval, shown in Panel B, virtually equals the universal interval, indicating that the model predicts
virtually every possible outcome. Panels C and D show the dramatic effect of incorporating relevant
prior information in the models. The predicted intervals of both LIMo and FLMPo are much smaller
than their counterparts using the uniform priors.
Focusing on the comparison between LIM and FLMP, the PPC indicates that the latter is more complex than the former. This observation holds irrespective of the model version (assuming uniform vs. ordered priors).
Figure 1: The 99% predicted interval for each of the 26 stimuli (x-axis) according to LIMu (Panel
A), FLMPu (Panel B), LIMo (Panel C), and FLMPo (Panel D).
Table 1: PPC, based on the 99% predicted interval, for four models across six different designs.

                   5 × 5                       8 × 2
         n = 20   n = 60   n = 150   n = 20   n = 60   n = 150
LIMu      0.97     0.94     0.97      0.95     0.93     0.94
FLMPu     1        1        1         1        0.99     0.99
LIMo      0.75     0.67     0.77      0.69     0.64     0.66
FLMPo     0.83     0.80     0.86      0.82     0.78     0.81
The smaller complexity of LIM is in line with previous attempts to measure
the relative complexities of LIM and FLMP, such as the atheoretical simulation-based approach ([4]
but see [5]), the semi-theoretical simulation-based approach [4], the theoretical simulation-based
approach [2, 6, 22], and a direct computation of the GC [2].
The PPCs for all six designs considered are displayed in Table 1. It shows that the observations made for the 8 × 2, n = 150 design hold across the five remaining designs as well: LIM is simpler
than FLMP; and models assuming ordered priors are simpler than models assuming uniform priors.
Note that these conclusions would not have been possible based on PC or GC. For PC, all four
models have the same complexity. GC, in contrast, would detect complexity differences between
LIM and FLMP (i.e., the first conclusion), but due to its insensitivity to the parameter prior, the
complexity differences between LIMu and LIMo on the one hand, and FLMPu and FLMPo on the
other hand (i.e., the second conclusion) would have gone unnoticed.
5 Discussion
A theorist defining a model should clearly and explicitly specify at least the three following pieces of
information: the model equation, the parameter prior range, and the parameter prior distribution. If
any of these pieces is missing, the model should be regarded as incomplete, and therefore untestable.
Consequently, any measure of generalizability should be sensitive to all three aspects of the model
definition. Many currently popular generalizability measures do not satisfy this criterion, including
AIC, BIC and MDL. A measure of generalizability that does take these three aspects of a model into
account is the marginal likelihood [6, 7, 14, 23]. Often, the marginal likelihood is criticized exactly
for its sensitivity to the prior range and distribution (e.g., [24]). However, in the light of the fact that
the prior is a part of the model definition, I see the sensitivity of the marginal likelihood to the prior
as an asset rather than a nuisance. It is precisely the measures of generalizability that are insensitive
to the prior that miss an important aspect of the model.
Similarly, any stand alone measure of model complexity should be sensitive to all three aspects of the
model definition, as all three aspects contribute to the model?s complexity (with the model equation
contributing two factors: the number of parameters and the functional form). Existing measures of
complexity do not satisfy this requirement and are therefore incomplete. PC takes only part of the
model equation into account, whereas GC takes only the model equation and the range into account.
In contrast, the PPC currently proposed is sensitive to all these three aspects. It assesses model
complexity using the predicted interval which contains all possible outcomes a model can generate.
A narrow predicted interval (relative to the universal interval) indicates a simple model; a complex
model is characterized by a wide predicted interval.
There is a tight coupling between the notions of information, knowledge and uncertainty, and the
notion of model complexity. As parameters correspond to unknown variables, having more information available leads to fewer parameters and hence to a simpler model. Similarly, the more
information there is available, the sharper the parameter prior, implying a simpler model. To put
it differently, the less uncertainty present in a model, the narrower its predicted interval, and the
simpler the model. For example, in model M_b, there is maximal uncertainty. Nothing but the range is known about θ, so all values of θ are equally likely. In contrast, in model M_f, there is minimal uncertainty. In fact, p_h is known for sure, so only a single value of θ is possible. This difference in
uncertainty is translated in a difference in complexity. The same is true for the information integration models. Incorporating the order constraints in the priors reduces the uncertainty compared to
the models without these constraints (it tells you, for example, that parameter a1 is smaller than a2 ).
This reduction in uncertainty is reflected by a smaller complexity.
There are many different sources of prior information that can be translated in a range or distribution. The illustration using the information integration models highlighted that prior information
can reflect meaningful information in the design. Alternatively, priors can be informed by previous
applications of similar models in similar settings. Probably the purest form of priors are those that
translate theoretical assumptions made by a model (see [16]). The fact that it is often difficult to formalize this prior information may not be used as an excuse to leave the prior unspecified. Sure it is a
challenging task, but so is translating theoretical assumptions into the model equation. Formalizing
theory, intuitions, and information is what model building is all about.
References
[1] Myung, I. J. (2000) The importance of complexity in model selection. Journal of Mathematical Psychology, 44, 190-204.
[2] Pitt, M. A., Myung, I. J., and Zhang, S. (2002) Toward a method of selecting among computational models of cognition. Psychological Review, 109, 472-491.
[3] Shiffrin, R. M., Lee, M. D., Kim, W., and Wagenmakers, E. J. (2008) A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science, 32, 1248-1284.
[4] Cutting, J. E., Bruno, N., Brady, N. P., and Moore, C. (1992) Selectivity, scope, and simplicity of models: A lesson from fitting judgments of perceived depth. Journal of Experimental Psychology: General, 121, 364-381.
[5] Dunn, J. (2000) Model complexity: The fit to random data reconsidered. Psychological Research, 63, 174-182.
[6] Myung, I. J. and Pitt, M. A. (1997) Applying Occam's razor in modeling cognition: A Bayesian approach. Psychonomic Bulletin & Review, 4, 79-95.
[7] Vanpaemel, W. and Storms, G. (in press) Abstraction and model evaluation in category learning. Behavior Research Methods.
[8] Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. Petrov, B. and Csaki, B. (eds.), Second International Symposium on Information Theory, pp. 267-281, Akademiai Kiado.
[9] Schwarz, G. (1978) Estimating the dimension of a model. Annals of Statistics, 6, 461-464.
[10] Myung, I. J., Balasubramanian, V., and Pitt, M. A. (2000) Counting probability distributions: Differential geometry and model selection. Proceedings of the National Academy of Sciences, 97, 11170-11175.
[11] Lee, M. D. (2002) Generating additive clustering models with minimal stochastic complexity. Journal of Classification, 19, 69-85.
[12] Rissanen, J. (1996) Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42, 40-47.
[13] Grünwald, P. (2000) Model selection based on minimum description length. Journal of Mathematical Psychology, 44, 133-152.
[14] Lee, M. D. and Wagenmakers, E. J. (2005) Bayesian statistical inference in psychology: Comment on Trafimow (2003). Psychological Review, 112, 662-668.
[15] Lee, M. D. and Vanpaemel, W. (2008) Exemplars, prototypes, similarities and rules in category representation: An example of hierarchical Bayesian analysis. Cognitive Science, 32, 1403-1424.
[16] Vanpaemel, W. and Lee, M. D. (submitted) Using priors to formalize theory: Optimal attention and the generalized context model.
[17] Lee, M. D. (2008) Three case studies in the Bayesian analysis of cognitive models. Psychonomic Bulletin & Review, 15, 1-15.
[18] Spiegelhalter, D., Thomas, A., Best, N., and Lunn, D. (2004) WinBUGS User Manual Version 2.0. Medical Research Council Biostatistics Unit, Institute of Public Health, Cambridge.
[19] Anderson, N. H. (1981) Foundations of information integration theory. Academic Press.
[20] Oden, G. C. and Massaro, D. W. (1978) Integration of featural information in speech perception. Psychological Review, 85, 172-191.
[21] Massaro, D. W. (1998) Perceiving Talking Faces: From Speech Perception to a Behavioral Principle. MIT Press.
[22] Massaro, D. W., Cohen, M. M., Campbell, C. S., and Rodriguez, T. (2001) Bayes factor of model selection validates FLMP. Psychonomic Bulletin and Review, 8, 1-17.
[23] Kass, R. E. and Raftery, A. E. (1995) Bayes factors. Journal of the American Statistical Association, 90, 773-795.
[24] Liu, C. C. and Aitkin, M. (2008) Bayes factors: Prior sensitivity and model generalizability. Journal of Mathematical Psychology, 53, 362-375.
Maximin affinity learning of image segmentation
Srinivas C. Turaga*
MIT
Kevin L. Briggman
Max-Planck Institute for Medical Research
Moritz Helmstaedter
Max-Planck Institute for Medical Research
Winfried Denk
Max-Planck Institute for Medical Research
H. Sebastian Seung
MIT, HHMI
Abstract
Images can be segmented by first using a classifier to predict an affinity graph
that reflects the degree to which image pixels must be grouped together and then
partitioning the graph to yield a segmentation. Machine learning has been applied
to the affinity classifier to produce affinity graphs that are good in the sense of
minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning
the affinity graph. We present the first machine learning algorithm for training a
classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation
performance measure.
The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the
simple graph partitioning algorithm of finding the connected components of the
thresholded affinity graph, we are able to train an affinity classifier to directly
minimize the Rand index of segmentations resulting from the graph partitioning.
Our learning algorithm corresponds to the learning of maximin affinities between
image pixel pairs, which are predictive of the pixel-pair connectivity.
1 Introduction
Supervised learning has emerged as a serious contender in the field of image segmentation, ever
since the creation of training sets of images with "ground truth" segmentations provided by humans,
such as the Berkeley Segmentation Dataset [15]. Supervised learning requires 1) a parametrized
algorithm that map images to segmentations, 2) an objective function that quantifies the performance
of a segmentation algorithm relative to ground truth, and 3) a means of searching the parameter space
of the segmentation algorithm for an optimum of the objective function.
In the supervised learning method presented here, the segmentation algorithm consists of a
parametrized classifier that predicts the weights of a nearest neighbor affinity graph over image
pixels, followed by a graph partitioner that thresholds the affinity graph and finds its connected
components. Our objective function is the Rand index [18], which has recently been proposed as a
quantitative measure of segmentation performance [23]. We "soften" the thresholding of the classifier output and adjust the parameters of the classifier by gradient learning based on the Rand index.
* [email protected]
Figure 1: (left) Our segmentation algorithm. We first generate a nearest neighbor weighted affinity graph representing the degree to which nearest neighbor pixels should be grouped together. The
segmentation is generated by finding the connected components of the thresholded affinity graph.
(right) Affinity misclassification rates are a poor measure of segmentation performance. Affinity graph #1 makes only 1 error (dashed edge) but results in poor segmentations, while graph #2
generates a perfect segmentation despite making many affinity misclassifications (dashed edges).
Because maximin edges of the affinity graph play a key role in our learning method, we call it maximin affinity learning of image segmentation, or MALIS. The minimax path and edge are standard
concepts in graph theory, and maximin is the opposite-sign sibling of minimax. Hence our work can
be viewed as a machine learning application of these graph theoretic concepts. MALIS focuses on
improving classifier output at maximin edges, because classifying these edges incorrectly leads to
genuine segmentation errors, the splitting or merging of segments.
To the best of our knowledge, MALIS is the first supervised learning method that is based on optimizing a genuine measure of segmentation performance. The idea of training a classifier to predict
the weights of an affinity graph is not novel. Affinity classifiers were previously trained to minimize
the number of misclassified affinity edges [9, 16]. This is not the same as optimizing segmentations
produced by partitioning the affinity graph. There have been attempts to train affinity classifiers to
produce good segmentations when partitioned by normalized cuts [17, 2]. But these approaches do
not optimize a genuine measure of segmentation performance such as the Rand index. The work of
Bach and Jordan [2] is the closest to our work. However, they only minimize an upper bound to a
renormalized version of the Rand index. Both approaches require many approximations to make the
learning tractable.
In other related work, classifiers have been trained to optimize performance at detecting image pixels
that belong to object boundaries [16, 6, 14]. Our classifier can also be viewed as a boundary detector,
since a nearest neighbor affinity graph is essentially the same as a boundary map, up to a sign
inversion. However, we combine our classifier with a graph partitioner to produce segmentations.
The classifier parameters are not trained to optimize performance at boundary detection, but to
optimize performance at segmentation as measured by the Rand index.
There are also methods for supervised learning of image labeling using Markov or conditional random fields [10]. But image labeling is more similar to multi-class pixel classification rather than
image segmentation, as the latter task may require distinguishing between multiple objects in a
single image that all have the same label.
In the cases where probabilistic random field models have been used for image parsing and segmentation, the models have either been simplistic for tractability reasons [12] or have been trained
piecemeal. For instance, Tu et al. [22] separately train low-level discriminative modules based on a
boosting classifier, and train high-level modules of their algorithm to model the joint distribution of
the image and the labeling. These models have never been trained to minimize the Rand index.
2 Partitioning a thresholded affinity graph by connected components
Our class of segmentation algorithms is constructed by combining a classifier and a graph partitioner
(see Figure 1). The classifier is used to generate the weights of an affinity graph. The nodes of the
graph are image pixels, and the edges are between nearest neighbor pairs of pixels. The weights of
the edges are called affinities. A high affinity means that the two pixels tend to belong to the same
segment. The classifier computes the affinity of each edge based on an image patch surrounding the
edge.
The graph partitioner first thresholds the affinity graph by removing all edges with weights less
than some threshold value $\theta$. The connected components of this thresholded affinity graph are the
segments of the image.
For this class of segmentation algorithms, it's obvious that a single misclassified edge of the affinity
graph can dramatically alter the resulting segmentation by splitting or merging two segments (see
Fig. 1). This is why it is important to learn by optimizing a measure of segmentation performance
rather than affinity prediction.
We are well aware that connected components is an exceedingly simple method of graph partitioning. More sophisticated algorithms, such as spectral clustering [20] or graph cuts [3], might be more
robust to misclassifications of one or a few edges of the affinity graph. Why not use them instead?
We have two replies to this question.
First, because of the simplicity of our graph partitioning, we can derive a simple and direct method
of supervised learning that optimizes a true measure of image segmentation performance. So far
learning based on more sophisticated graph partitioning methods has fallen short of this goal [17, 2].
Second, even if it were possible to properly learn the affinities used by more sophisticated graph
partitioning methods, we would still prefer our simple connected components. The classifier in
our segmentation algorithm can also carry out sophisticated computations, if its representational
power is sufficiently great. Putting the sophistication in the classifier has the advantage of making it
learnable, rather than hand-designed.
The sophisticated partitioning methods clean up the affinity graph by using prior assumptions about
the properties of image segmentations. But these prior assumptions could be incorrect. The spirit of
the machine learning approach is to use a large amount of training data and minimize the use of prior
assumptions. If the sophisticated partitioning methods are indeed the best way of achieving good
segmentation performance, we suspect that our classifier will learn them from the training data. If
they are not the best way, we hope that our classifier will do even better.
3 The Rand index quantifies segmentation performance
Image segmentation can be viewed as a special case of the general problem of clustering, as image
segments are clusters of image pixels. Long ago, Rand proposed an index of similarity between two
clusterings [18]. Recently it has been proposed that the Rand index be applied to image segmentations [23]. Define a segmentation $S$ as an assignment of a segment label $s_i$ to each pixel $i$. The
indicator function $\delta(s_i, s_j)$ is 1 if pixels $i$ and $j$ belong to the same segment ($s_i = s_j$) and 0 otherwise.
Given two segmentations $S$ and $\hat{S}$ of an image with $N$ pixels, define the function

$$1 - RI(\hat{S}, S) = \binom{N}{2}^{-1} \sum_{i<j} \left| \delta(s_i, s_j) - \delta(\hat{s}_i, \hat{s}_j) \right| \qquad (1)$$

which is the fraction of image pixel pairs on which the two segmentations disagree. We will refer to
the function $1 - RI(\hat{S}, S)$ as the Rand index, although strictly speaking the Rand index is $RI(\hat{S}, S)$,
the fraction of image pixel pairs on which the two segmentations agree. In other words, the Rand
index is a measure of similarity, but we will often apply that term to a measure of dissimilarity.
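To make the definition concrete, the following is a minimal NumPy sketch of the disagreement form of Eq. (1). The function and argument names are our own, and the brute-force loop over pixel pairs is for illustration only; it is not the authors' implementation.

```python
import numpy as np

def rand_index_disagreement(seg_a, seg_b):
    """Fraction of pixel pairs on which two segmentations disagree, 1 - RI of Eq. (1).
    seg_a, seg_b: integer segment labels, one per pixel, identical shapes.
    Brute force over all pairs, O(N^2); for illustration only."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    disagreements = 0
    for i in range(n - 1):
        same_a = a[i + 1:] == a[i]   # is pixel i grouped with each j > i in A?
        same_b = b[i + 1:] == b[i]   # and in B?
        disagreements += int(np.count_nonzero(same_a != same_b))
    return disagreements / (n * (n - 1) / 2.0)
```

Note that a merger of two groundtruth segments of sizes p and q flips p * q of these pairwise decisions, which is exactly the penalty structure described in the caption of Figure 2 below.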
In this paper, the Rand index is applied to compare the output S? of a segmentation algorithm with a
ground truth segmentation S, and will serve as an objective function for learning. Figure 1 illustrates
why the Rand index is a sensible measure of segmentation performance. The segmentation of affinity
graph #1 incurs a huge Rand index penalty relative to the ground truth. A single wrongly classified
edge of the affinity graph leads to an incorrect merger of two segments, causing many pairs of
image pixels to be wrongly assigned to the same segment. On the other hand, the segmentation
corresponding to affinity graph #2 has a perfect Rand index, even though there are misclassifications
in the affinity graph. In short, the Rand index makes sense because it strongly penalizes errors in the
affinity graph that lead to split and merger errors.
rand index
Figure 2: The Rand index quantifies segmentation performance by comparing the difference in
pixel pair connectivity between the groundtruth and test segmentations. Pixel pair connectivities can be visualized as symmetric binary block-diagonal matrices (si , s j ). Each diagonal block
corresponds to connected pixel pairs belonging to one of the image segments. The Rand index incurs
penalties when pixels pairs that must not be connected are connected or vice versa. This corresponds
to locations where the two matrices disagree. An erroneous merger of two groundtruth segments incurs a penalty proportional to the product of the sizes of the two segments. Split errors are similarly
penalized.
4 Connectivity and maximin affinity
Recall that our segmentation algorithm works by finding connected components of the thresholded
affinity graph. Let $\hat{S}$ be the segmentation produced in this way. To apply the Rand index to train
our classifier, we need a simple way of relating the indicator function $\delta(\hat{s}_i, \hat{s}_j)$ in the Rand index
to classifier output. In other words, we would like a way of characterizing whether two pixels are
connected in the thresholded affinity graph.
To do this, we introduce the concept of maximin affinity, which is defined for any pair of pixels in an
affinity graph (the definition is generally applicable to any weighted graph). Let $A_{kl}$ be the affinity
of pixels $k$ and $l$. Let $\mathcal{P}_{ij}$ be the set of all paths in the graph that connect pixels $i$ and $j$. For every
path $P$ in $\mathcal{P}_{ij}$, there is an edge (or edges) with minimal affinity. This is written as $\min_{\langle k,l \rangle \in P} A_{kl}$,
where $\langle k,l \rangle \in P$ means that the edge between pixels $k$ and $l$ is in the path $P$.
A maximin path $P^*_{ij}$ is a path between pixels $i$ and $j$ that maximizes the minimal affinity,

$$P^*_{ij} = \arg\max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl} \qquad (2)$$

The maximin affinity $A^*_{ij}$ of pixels $i$ and $j$ is the affinity of the maximin edge, or the minimal affinity of
the maximin path,

$$A^*_{ij} = \max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl} \qquad (3)$$
We are now ready for a trivial but important theorem.

Theorem 1. A pair of pixels is connected in the thresholded affinity graph if and only if their
maximin affinity exceeds the threshold value.

Proof. By definition, a pixel pair is connected in the thresholded affinity graph if and only if there
exists a path between them. Such a path is equivalent to a path in the unthresholded affinity graph
for which the minimal affinity is above the threshold value. This path in turn exists if and only if the
maximin affinity is above the threshold value.

As a consequence of this theorem, pixel pairs can be classified as connected or disconnected by
thresholding maximin affinities. Let $\hat{S}$ be the segmentation produced by thresholding the affinity
graph $A_{ij}$ and then finding connected components. Then the connectivity indicator function is

$$\delta(\hat{s}_i, \hat{s}_j) = H(A^*_{ij} - \theta) \qquad (4)$$

where $H$ is the Heaviside step function.
Maximin affinities can be computed efficiently using minimum spanning tree algorithms [8]. A maximum spanning tree is equivalent to a minimum spanning tree, up to a sign change of the weights.
Any path in a maximum spanning tree is a maximin path. For our nearest neighbor affinity graphs,
the maximin affinity of a pixel pair can be computed in $O(|E| \cdot \alpha(|V|))$, where $|E|$ is the number of
graph edges, $|V|$ is the number of pixels, and $\alpha(\cdot)$ is the inverse Ackermann function, which grows
sub-logarithmically. The full matrix $A^*_{ij}$ can be computed in time $O(|V|^2)$ since the computation
can be shared. Note that maximin affinities are required for training, but not testing. For segmenting
the image at test time, only a connected components computation need be performed, which takes
time linear in the number of edges $|E|$.
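As an illustration of both graph operations, here is a self-contained Python sketch using a union-find structure: thresholded connected components for segmentation, and a Kruskal-style descending sweep for maximin affinities. All names are ours and the code is a sketch under the assumptions stated in the comments, not the authors' implementation.

```python
class UnionFind:
    """Disjoint sets with path halving; sufficient for this illustration."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, u, v):
        self.parent[self.find(u)] = self.find(v)


def segment_by_threshold(n_pixels, edges, theta):
    """Segmentation = connected components of the thresholded affinity graph.
    edges: iterable of (affinity, u, v); edges with affinity below theta are removed.
    Returns one segment label per pixel."""
    uf = UnionFind(n_pixels)
    for a, u, v in edges:
        if a >= theta:
            uf.union(u, v)
    labels = {}
    return [labels.setdefault(uf.find(i), len(labels)) for i in range(n_pixels)]


def maximin_affinities(n_pixels, edges, pairs):
    """Maximin affinity of each query pair via a Kruskal-style sweep: adding
    edges in decreasing affinity order, a pair's maximin affinity is the
    affinity of the edge that first joins its two pixels (the maximin edge)."""
    uf = UnionFind(n_pixels)
    pending, result = set(pairs), {}
    for a, u, v in sorted(edges, reverse=True):
        uf.union(u, v)
        for (i, j) in list(pending):
            if uf.find(i) == uf.find(j):
                result[(i, j)] = a
                pending.discard((i, j))
        if not pending:
            break
    return result
```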
5 Optimizing the Rand index by learning maximin affinities
Since the affinities and maximin affinities are both functions of the image $I$ and the classifier parameters $W$, we will write them as $A_{ij}(I; W)$ and $A^*_{ij}(I; W)$, respectively. By Eq. (4) of the previous
section, the Rand index of Eq. (1) takes the form

$$1 - RI(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} \left| \delta(s_i, s_j) - H(A^*_{ij}(I; W) - \theta) \right|$$

Since this is a discontinuous function of the maximin affinities, we make the usual relaxation by
replacing $|\delta(s_i, s_j) - H(A^*_{ij}(I; W) - \theta)|$ with a continuous loss function $l(\delta(s_i, s_j), A^*_{ij}(I; W))$.
Any standard loss, such as the square loss $\frac{1}{2}(x - \hat{x})^2$ or the hinge loss, can be used for
$l(x, \hat{x})$. Thus we obtain a cost function suitable for gradient learning,

$$E(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} l\big(\delta(s_i, s_j), A^*_{ij}(I; W)\big) = \binom{N}{2}^{-1} \sum_{i<j} l\Big(\delta(s_i, s_j), \max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl}(I; W)\Big) \qquad (5)$$

The max and min operations are continuous and differentiable (though not continuously differentiable). If the loss function $l$ is smooth, and the affinity $A_{kl}(I; W)$ is a smooth function, then the
gradient of the cost function is well-defined, and gradient descent can be used as an optimization
method.
Define $(k, l) = mm(i, j)$ to be the maximin edge for the pixel pair $(i, j)$. If there is a tie, choose
between the maximin edges at random. Then the cost function takes the form

$$E(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} l\big(\delta(s_i, s_j), A_{mm(i,j)}(I; W)\big)$$
It is instructive to compare this with the cost function for standard affinity learning,

$$E_{standard}(S, I; W) = \frac{2}{cN} \sum_{\langle i,j \rangle} l\big(\delta(s_i, s_j), A_{ij}(I; W)\big)$$

where the sum is over all nearest neighbor pixel pairs $\langle i, j \rangle$ and $c$ is the number of nearest neighbors
[9]. In contrast, the sum in the MALIS cost function is over all pairs of pixels, whether or not they
are adjacent in the affinity graph. Note that a single edge can be the maximin edge for multiple pairs
of pixels, so its affinity can appear multiple times in the MALIS cost function. Roughly speaking,
the MALIS cost function is similar to the standard cost function, except that each edge in the affinity
graph is weighted by the number of pixel pairs that it causes to be incorrectly classified.
6 Online stochastic gradient descent
Computing the cost function or its gradient requires finding the maximin edges for all pixel pairs.
Such a batch computation could be used for gradient learning. However, online stochastic gradient
learning is often more efficient than batch learning [13]. Online learning makes a gradient update of
the parameters after each pair of pixels, and is implemented as described in the box.
Maximin affinity learning
1. Pick a random pair of (not necessarily nearest neighbor) pixels $i$ and $j$ from a randomly drawn training image $I$.
2. Find a maximin edge $mm(i, j)$.
3. Make the gradient update: $W \leftarrow W - \eta \frac{d}{dW} l(\delta(s_i, s_j), A_{mm(i,j)}(I; W))$

Standard affinity learning
1. Pick a random pair of nearest neighbor pixels $i$ and $j$ from a randomly drawn training image $I$.
2. Make the gradient update: $W \leftarrow W - \eta \frac{d}{dW} l(\delta(s_i, s_j), A_{ij}(I; W))$
For comparison, we also show the standard affinity learning [9]. For each iteration, both learning
methods pick a random pair of pixels from a random image. Both compute the gradient of the weight
of a single edge in the affinity graph. However, the standard method picks a nearest neighbor pixel
pair and trains the affinity of the edge between them. The maximin method picks a pixel pair of
arbitrary separation and trains the minimal affinity on a maximin path between them.
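To make the online procedure concrete, the sketch below performs one maximin update for a toy affinity model in which each edge e carries a feature vector and has affinity sigmoid(w . feat[e]). This linear-logistic parameterization, the helper maximin_edge (which could be the Kruskal-style sweep above, modified to return the merging edge), and all other names are our own illustrative assumptions; they stand in for the convolutional network trained in Section 7.

```python
import random
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def malis_step(w, feat, labels, maximin_edge, eta=0.1, m=0.3):
    """One online MALIS update (a sketch under assumed names).
    feat: dict mapping each edge to its feature vector, so A_e = sigmoid(w . feat[e]);
    labels: groundtruth segment label per pixel;
    maximin_edge(w, feat, i, j): assumed helper returning the maximin edge of (i, j)."""
    i, j = random.sample(range(len(labels)), 2)      # arbitrary pixel pair
    target = 1.0 if labels[i] == labels[j] else 0.0  # delta(s_i, s_j)
    e = maximin_edge(w, feat, i, j)                  # affinity of e equals A*_ij
    a = sigmoid(np.dot(w, feat[e]))
    # derivative of the square-square loss with respect to the affinity a
    if target == 1.0:
        dl_da = -2.0 * max(0.0, 1.0 - a - m)
    else:
        dl_da = 2.0 * max(0.0, a - m)
    grad_w = dl_da * a * (1.0 - a) * feat[e]         # chain rule through the sigmoid
    return w - eta * grad_w                          # descend on the loss
```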
Effectively, our connected components procedure performs spatial integration over the nearest neighbor affinity
graph to make connectivity decisions about pixel pairs at large distances. MALIS trains these global
decisions, while standard affinity learning trains only local decisions. MALIS is superior because it
truly learns segmentation, but this superiority comes at a price. The maximin computation requires
that on each iteration the affinity graph be computed for the whole image. Therefore it is slower
than the standard learning method, which requires only a local affinity prediction for the edge being
trained. Thus there is a computational price to be paid for the optimization of a true segmentation
error.
7 Application to electron microscopic images of neurons

7.1 Electron microscopic images of neural tissue
By 3d imaging of brain tissue at sufficiently high resolution, as well as identifying synapses and tracing all axons and dendrites in these images, it is possible in principle to reconstruct connectomes,
complete "wiring diagrams" for a brain or piece of brain [19, 4, 21]. Axons can be narrower than
100 nm in diameter, necessitating the use of electron microscopy (EM) [19]. At such high spatial
resolution, just one cubic millimeter of brain tissue yields teravoxel scale image sizes. Recent advances in automation are making it possible to collect such images [19, 4, 21], but image analysis
remains a challenge. Tracing axons and dendrites is a very large-scale image segmentation problem
requiring high accuracy. The images used for this study were from the inner plexiform layer of the
rabbit retina, and were taken using Serial Block-Face Scanning Electron Microscopy [5]. Two large
image volumes of $100^3$ voxels were hand segmented and reserved for training and testing purposes.
7.2 Training convolutional networks for affinity classification
Any classifier that is a smooth function of its parameters can be used for maximin affinity learning.
We have used convolutional networks (CN), but our method is not restricted to this choice. Convolutional networks have previously been shown to be effective for similar EM images of brain tissue
[11].
We trained two identical four-layer CNs, one with standard affinity learning and the second with
MALIS. The CNs contained 5 feature maps in each layer with sigmoid nonlinearities. All filters in
the CN were $5 \times 5 \times 5$ in size. This led to an affinity classifier that uses a $17 \times 17 \times 17$ cubic image
patch to classify an affinity edge. We used the square-square loss function $l(x, \hat{x}) = x \cdot \max(0, 1 - \hat{x} - m)^2 + (1 - x) \cdot \max(0, \hat{x} - m)^2$, with a margin $m = 0.3$.
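In code, the loss is a one-liner; a direct transcription of the formula above, as a minimal sketch:

```python
def square_square_loss(x, a, m=0.3):
    """Square-square margin loss l(x, a): zero once a correct affinity clears
    the margin m. x is the binary target, a the predicted affinity in [0, 1]."""
    return x * max(0.0, 1.0 - a - m) ** 2 + (1.0 - x) * max(0.0, a - m) ** 2
```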
As noted earlier, maximin affinity learning can be significantly slower than standard affinity learning,
due to the need for computing the entire affinity graph on each iteration, while standard affinity
training need only predict the weight of a single edge in the graph. For this reason, we constructed
a proxy training image dataset by picking all possible $21 \times 21 \times 21$ sized overlapping sub-images
from the original training set. Since each $21 \times 21 \times 21$ sub-image is smaller than the original image,
the size of the affinity graph needed to be predicted for the sub-image is significantly smaller, leading
to faster training. A consequence of this approximation is that the maximum separation between
image pixel pairs chosen for training is less than about 20 pixels. A second means of speeding up the
maximin procedure is by pretraining the maximin CN for 500,000 iterations using the fast standard
affinity classification cost function. At the end, both CNs were trained for a total of 1,000,000
iterations by which point the training error plateaued.
7.3 Maximin learning leads to dramatic improvement in segmentation performance

[Figure 3 graphic: A. clustering accuracy (fraction of correctly classified pixel pairs vs. threshold); B. ROC curve (true vs. false positive rate); C. precision-recall curve; D. splits vs. mergers per object; each panel compares the standard and maximin classifiers on training and test data.]
Figure 3: Quantification of segmentation performance on 3d electron microscopic images of
neural tissue. A) Clustering accuracy measuring the number of correctly classified pixel pairs. B)
and C) ROC curve and precision-recall quantification of pixel-pair connectivity classification shows
near perfect performance. D) Segmentation error as measured by the number of splits and mergers.
We benchmarked the performance of the standard and maximin affinity classifiers by measuring
the pixel-pair connectivity classification performance using the Rand index. After training the
standard and MALIS affinity classifiers, we generated affinity graphs for the training and test images. In principle, the training algorithm suggests a single threshold for the graph partitioning.
In practice, one can generate a full spectrum of segmentations leading from over-segmentations to
under-segmentations by varying the threshold parameter. In Fig. 3, we plot the Rand index for
segmentations resulting from a range of threshold values.
In images with large numbers of segments, most pixel pairs will be disconnected from one another,
leading to a large imbalance in the number of connected and disconnected pixel pairs. This is reflected in the fact that the Rand index is over 95% for both segmentation algorithms. While this
imbalance between positive and negative examples is not a significant problem for training the affinity classifier, it can make comparisons between classifiers difficult to interpret. Instead, we can use
the ROC and precision-recall methodologies, which provide for accurate quantification of the accuracy of classifiers even in the presence of large class imbalance. From these curves, we observe that
our maximin affinity classifier dramatically outperforms the standard affinity classifier.
Our positive results have an intriguing interpretation. The poor performance of the connected components when applied to a standard learned affinity classifier could be interpreted to imply that 1) a
local classifier lacks the context important for good affinity prediction; 2) connected components is
a poor strategy for image segmentation since mistakes in the affinity prediction of just a few edges
can merge or split segments. On the contrary, our experiments suggest that when trained properly,
thresholded affinity classification followed by connected components can be an extremely competitive method of image segmentations.
8 Discussion
In this paper, we have trained an affinity classifier to produce affinity graphs that result in excellent
segmentations when partitioned by the simple graph partitioning algorithm of thresholding followed
by connected components. The key to good performance is the training of a segmentation-based cost
function, and the use of a powerful trainable classifier to predict affinity graphs. Once trained, our
segmentation algorithm is fast. In contrast to classic graph-based segmentation algorithms where
Figure 4: A 2d cross-section through a 3d segmentation of the test image. The maximin segmentation correctly segments several objects which are merged in the standard segmentation, and even
correctly segments objects which are missing in the groundtruth segmentation. Not all segments
merged in the standard segmentation are merged at locations visible in this cross section. Pixels colored black in the machine segmentations correspond to pixels completely disconnected from their
neighbors and represent boundary regions.
the partitioning phase dominates, our partitioning algorithm is simple and can partition graphs in
time linearly proportional to the number of edges in the graph. We also do not require any prior
knowledge of the number of image segments or image segment sizes at test time, in contrast to other
graph partitioning algorithms [7, 20].
The formalism of maximin affinities used to derive our learning algorithm has connections to single-linkage hierarchical clustering, minimum spanning trees and ultrametric distances. Felzenszwalb
and Huttenlocher [7] describe a graph partitioning algorithm based on a minimum spanning tree
computation which resembles our segmentation algorithm, in part. The Ultrametric Contour Map
algorithm [1] generates hierarchical segmentations nearly identical those generated by varying the
threshold of our graph partitioning algorithm. Neither of these methods incorporates a means for
learning from labeled data, but our work shows how the performance of these algorithms can be
improved by use of our maximin affinity learning.
Acknowledgements
SCT and HSS were supported in part by the Howard Hughes Medical Institute and the Gatsby
Charitable Foundation.
References
[1] P. Arbelaez. Boundary extraction in natural images using ultrametric contour maps. In Proc. POCV, 2006.
[2] F. Bach and M. Jordan. Learning spectral clustering, with application to speech separation. Journal of Machine Learning Research, 7:1963-2001, 2006.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222-1239, 2001.
[4] K. L. Briggman and W. Denk. Towards neural circuit reconstruction with volume electron microscopy techniques. Curr Opin Neurobiol, 16(5):562-570, 2006.
[5] W. Denk and H. Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol, 2(11):e329, 2004.
[6] P. Dollár, Z. Tu, and S. Belongie. Supervised learning of edges and object boundaries. In CVPR, June 2006.
[7] P. Felzenszwalb and D. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2):167-181, 2004.
[8] B. Fischer, V. Roth, and J. Buhmann. Clustering with the connectivity kernel. In Advances in Neural Information Processing Systems 16, 2004.
[9] C. Fowlkes, D. Martin, and J. Malik. Learning affinity functions for image segmentation: combining patch-based and gradient-based approaches. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, 2003.
[10] X. He, R. Zemel, and M. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, 2004.
[11] V. Jain, J. Murray, F. Roth, S. Turaga, V. Zhigulin, K. Briggman, M. Helmstaedter, W. Denk, and H. Seung. Supervised learning of image restoration with convolutional networks. In ICCV, 2007.
[12] S. Kumar and M. Hebert. Discriminative random fields: a discriminative framework for contextual interaction in classification. In Proc. Ninth IEEE International Conference on Computer Vision, pages 1150-1157, 2003.
[13] Y. LeCun, L. Bottou, G. Orr, and K. Müller. Efficient backprop. Lecture Notes in Computer Science, pages 9-50, 1998.
[14] M. Maire, P. Arbelaez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8, 2008.
[15] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Eighth International Conference on Computer Vision, volume 2, pages 416-423, 2001.
[16] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5):530-549, May 2004.
[17] M. Meila and J. Shi. Learning segmentation by random walks. In Advances in Neural Information Processing Systems, pages 873-879, 2001.
[18] W. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, pages 846-850, 1971.
[19] H. S. Seung. Reading the book of memory: sparse sampling versus dense mapping of connectomes. Neuron, 62(1):17-29, 2009.
[20] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[21] S. J. Smith. Circuit reconstruction tools today. Curr Opin Neurobiol, 17(5):601-608, October 2007.
[22] Z. Tu, X. Chen, A. Yuille, and S. Zhu. Image parsing: unifying segmentation, detection, and recognition. International Journal of Computer Vision, 63(2):113-140, 2005.
[23] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 929-944, 2007.
Slow Learners are Fast
John Langford, Alexander J. Smola, Martin Zinkevich
Machine Learning, Yahoo! Labs and Australian National University
4401 Great America Pky, Santa Clara, 95051 CA
{jl, maz, smola}@yahoo-inc.com
Abstract
Online learning algorithms have impressive convergence properties when it comes
to risk minimization and convex games on very large problems. However, they are
inherently sequential in their design which prevents them from taking advantage
of modern multi-core architectures. In this paper we prove that online learning
with delayed updates converges well, thereby facilitating parallel online learning.
1 Introduction
Online learning has become the paradigm of choice for tackling very large scale estimation problems. The convergence properties are well understood and have been analyzed in a number of different frameworks such as by means of asymptotics [12], game theory [8], or stochastic programming
[13]. Moreover, learning-theory guarantees show that O(1) passes over a dataset suffice to obtain
optimal estimates [3, 2]. This suggests that online algorithms are an excellent tool for learning.
This view, however, is slightly deceptive for several reasons: current online algorithms process
one instance at a time. That is, they receive the instance, make some prediction, incur a loss, and
update an associated parameter. In other words, the algorithms are entirely sequential in their nature.
While this is acceptable in single-core processors, it is highly undesirable given that the number
of processing elements available to an algorithm is growing exponentially (e.g. modern desktop
machines have up to 8 cores, graphics cards up to 1024 cores). It is therefore very wasteful if only
one of these cores is actually used for estimation.
A second problem arises from the fact that network and disk I/O have not been able to keep up with
the increase in processor speed. A typical network interface has a throughput of 100MB/s and disk
arrays have comparable parameters. This means that current algorithms reach their limit at problems
of size 1TB whenever the algorithm is I/O bound (this amounts to a training time of 3 hours), or even
smaller problems whenever the model parametrization makes the algorithm CPU bound.
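For orientation, the arithmetic behind the 3-hour figure: at a throughput of 100 MB/s, a single pass over 1 TB takes $10^{12} / 10^{8} = 10^{4}$ seconds, that is, just under three hours.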
Finally, distributed and cloud computing are unsuitable for today?s online learning algorithms. This
creates a pressing need to design algorithms which break the sequential bottleneck. We propose two
variants. To our knowledge, this is the first paper which provides theoretical guarantees combined
with empirical evidence for such an algorithm. Previous work, e.g. by [6] proved rather inconclusive
in terms of theoretical and empirical guarantees.
In a nutshell, we propose the following two variants: several processing cores perform stochastic
gradient descent independently of each other while sharing a common parameter vector which is
updated asynchronously. This allows us to accelerate computationally intensive problems whenever gradient computations are relatively expensive. A second variant assumes that we have linear
function classes where parts of the function can be computed independently on several cores. Subsequently the results are combined and the combination is then used for a descent step.
A common feature of both algorithms is that the update occurs with some delay: in the first case
other cores may have updated the parameter vector in the meantime, in the second case, other cores
may have already computed parts of the function for the subsequent examples before an update.
2 Algorithm

2.1 Platforms
We begin with an overview of three platforms which are available for parallelization of algorithms.
They differ in their structural parameters, such as synchronization ability, latency, and bandwidth
and consequently they are better suited to different styles of algorithms. The description is not comprehensive by any means. For instance, there exist numerous variants of communication paradigms
for distributed and cloud computing.
Shared Memory Architectures: The commercially available 4-16 core CPUs on servers and desktop computers fall into this category. They are general purpose processors which operate on a joint
memory space where each of the processors can execute arbitrary pieces of code independently of
other processors. Synchronization is easy via shared memory/interrupts/locks. A second example
are graphics cards. There the number of processing elements is vastly higher (1024 on high-end
consumer graphics cards), although they tend to be bundled into groups of 8 cores (also referred
to as multiprocessing elements), each of which can execute a given piece of code in a data-parallel
fashion. An issue is that explicit synchronization between multiprocessing elements is difficult; it
requires computing kernels on the processing elements to complete.
Clusters: To increase I/O bandwidth one can combine several computers in a cluster using MPI or
PVM as the underlying communications mechanism. A clear limit here is bandwidth constraints
and latency for inter-computer communication. On Gigabit Ethernet the TCP/IP latency can be in
the order of 100 microseconds, the equivalent of $10^5$ clock cycles on a processor, and network bandwidth tends
to be a factor of 100 slower than memory bandwidth.
Grid Computing: Computational paradigms such as MapReduce [4] and Hadoop are well suited
for the parallelization of batch-style algorithms [17]. In comparison to cluster configurations, communication and latency are further constrained. For instance, often individual processing elements
are unable to communicate directly with other elements, with disk or network storage being the only
mechanism of inter-process data transfer. Moreover, the latency is significantly increased.
We consider only the first two platforms since latency plays a critical role in the analysis of the
class of algorithms we propose. While we do not exclude the possibility of devising parallel online
algorithms suited to grid computing, we believe that the family of algorithm proposed in this paper
is unsuitable and a significantly different synchronization paradigm would be needed.
2.2 Delayed Stochastic Gradient Descent
Many learning problems can be written as convex minimization problems. It is our goal to find some
parameter vector $x$ (which is drawn from some Banach space $X$ with associated norm $\|\cdot\|$) such that
the sum over convex functions $f_i : X \to \mathbb{R}$ takes on the smallest value possible. For instance,
(penalized) maximum likelihood estimation in exponential families with fully observed data falls
into this category, so do Support Vector Machines and their structured variants. This also applies to
distributed games with a communications constraint within a team.
At the outset we make no special assumptions on the order or form of the functions $f_i$. In particular,
an adversary may choose to order or generate them in response to our previous choices of $x$. In other
cases, the functions $f_i$ may be drawn from some distribution (e.g. whenever we deal with induced
losses). It is our goal to find a sequence of $x_i$ such that the cumulative loss $\sum_i f_i(x_i)$ is minimized.
With some abuse of notation we identify the average empirical and expected loss both by $f^*$. This
is possible, simply by redefining $p(f)$ to be the uniform distribution over $F$. Denote by

$$f^*(x) := \frac{1}{|F|} \sum_i f_i(x) \quad \text{or} \quad f^*(x) := \mathbf{E}_{f \sim p(f)}[f(x)] \qquad (1)$$

and correspondingly

$$x^* := \arg\min_{x \in X} f^*(x) \qquad (2)$$

the average risk. We assume that $x^*$ exists (convexity does not guarantee a bounded minimizer) and
that it satisfies $\|x^*\| \le R$ (this is always achievable, simply by intersecting $X$ with the unit ball of
radius $R$). We propose the following algorithm:
Algorithm 1 Delayed Stochastic Gradient Descent
Input: feasible space $X \subseteq \mathbb{R}^n$, annealing schedule $\eta_t$, and delay $\tau \in \mathbb{N}$
Initialization: set $x_1, \ldots, x_\tau = 0$ and compute the corresponding $g_t = \nabla f_t(x_t)$.
for $t = \tau + 1$ to $T + \tau$ do
  Obtain $f_t$ and incur loss $f_t(x_t)$
  Compute $g_t := \nabla f_t(x_t)$
  Update $x_{t+1} = \arg\min_{x \in X} \| x - (x_t - \eta_t g_{t-\tau}) \|$ (gradient step and projection)
end for
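A minimal Python sketch of this loop follows. The callables grad_f (returning the subgradient of f_t at x), the schedule eta, and the projection project are our own names; project is the identity when X = R^n.

```python
import numpy as np

def delayed_sgd(grad_f, x0, T, tau, eta, project=lambda x: x):
    """Algorithm 1 as a minimal sketch. grad_f(t, x) returns the (sub)gradient
    of f_t at x; eta(t) is the annealing schedule; gradients are applied with a
    delay of tau steps."""
    x = np.array(x0, dtype=float)
    buf = []                        # queue holding g_{t-tau}, ..., g_{t-1}
    for t in range(1, T + tau + 1):
        buf.append(grad_f(t, x))    # g_t, computed at the current (stale) iterate
        if len(buf) > tau:
            g = buf.pop(0)          # apply the delayed gradient g_{t-tau}
            x = project(x - eta(t) * g)
    return x
```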
Figure 1: Data parallel stochastic gradient descent with shared parameter vector. Observations
are partitioned on a per-instance basis among $n$ processing units. Each of them computes its own
gradient $g_t = \nabla_x f_t(x_t)$. Since each computer is updating $x$ in a round-robin fashion, it takes a
delay of $\tau = n - 1$ between gradient computation and when the gradients are applied to $x$.
In this paper the annealing schedule will be either $\eta_t = \sigma/\sqrt{t - \tau}$ or $\eta_t = 1/(\lambda(t - \tau))$. Often, $X = \mathbb{R}^n$.
If we set $\tau = 0$, Algorithm 1 becomes an entirely standard stochastic gradient descent algorithm.
If we set ? = 0, algorithm 1 becomes an entirely standard stochastic gradient descent algorithm.
The only difference with delayed stochastic gradient descent is that we do not update the parameter
vector xt with the current gradient gt but rather with a delayed gradient gt?? that we computed ?
steps previously. We extend this to bounds which are dependent on strong convexity [1, 7] to obtain
adaptive algorithms which can take advantage of well-behaved optimization problems in practice.
An extension to Bregman divergences is possible. See [11] for details.
2.3 Templates
Asynchronous Optimization: Assume that we have n processors which can process data independently of each other, e.g. in a multicore platform, a graphics card, or a cluster of workstations.
Moreover, assume that computing the gradient of $f_t(x)$ is at least $n$ times as expensive as it is to update $x$ (read, add, write). This occurs, for instance, in the case of conditional random fields [15, 18],
in planning [14], and in ranking [19].
The rationale for delayed updates can be seen in the following setting: assume that we have $n$
cores performing stochastic gradient descent on different instances $f_t$ while sharing one common
parameter vector $x$. If we allow each core in a round-robin fashion to update $x$ one at a time then
there will be a delay of $\tau = n - 1$ between when we see $f_t$ and when we get to update $x_{t+\tau}$. The
delay arises since updates by different cores cannot happen simultaneously. This setting is preferable
whenever computation of $f_t$ itself is time consuming.
Note that there is no need for explicit thread-level synchronization between individual cores. All
we need is a read / write-locking mechanism for x or alternatively, atomic updates on the parameter
vector. On a multi-computer cluster we can use a similar mechanism simply by having one server act
as a state-keeper which retains an up-to-date copy of x while the loss-gradient computation clients
can retrieve at any time a copy of x and send gradient update messages to the state keeper.
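The following sketch illustrates this template with threads sharing a parameter vector. All names are ours; and in CPython the global interpreter lock limits true parallelism for pure-Python gradients, so this is a scheduling illustration rather than a performance recipe.

```python
import threading
import numpy as np

def async_sgd(grad_f, x, examples, eta, n_workers=4):
    """Shared-parameter asynchronous SGD (a sketch of the first template).
    Each worker reads the shared x, computes an expensive gradient without
    holding the lock, then applies it atomically. By the time a gradient is
    applied, other workers may already have updated x; this staleness is
    exactly the delay tau ~ n_workers - 1 analyzed in the paper."""
    lock = threading.Lock()
    step = [0]

    def worker(chunk):
        for f in chunk:
            with lock:
                x_stale = x.copy()          # snapshot of the shared parameters
            g = grad_f(f, x_stale)          # expensive part, done in parallel
            with lock:
                step[0] += 1
                x[:] -= eta(step[0]) * g    # in-place update of the shared x

    chunks = [examples[k::n_workers] for k in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```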
Pipelined Optimization: The key impediment in the previous template is that it required significant
amounts of bandwidth solely for the purpose of synchronizing the state vector. This can be addressed
by parallelizing the computation of a single function value $f_i(x)$ explicitly, rather than attempting to compute
several instances of $f_i(x)$ simultaneously. Such situations occur, e.g. when $f_i(x) = g(\langle \phi(z_i), x \rangle)$ for
high-dimensional $\phi(z_i)$. If we decompose the data $z_i$ (or its features) over $n$ nodes we can compute
partial function values and also all partial updates locally. The only communication required is to
combine partial values and to compute gradients with respect to $\langle \phi(z_i), x \rangle$.
This causes delay since the second stage is processing results of the first stage while the latter has
already moved on to processing $f_{t+1}$ or further. While the architecture is quite different, the effects
are identical: the parameter vector $x$ is updated with some delay $\tau$. Note that here $\tau$ can be much
smaller than the number of processors and mainly depends on the latency of the communication
channel. Also note that in this configuration the memory access for x is entirely local.
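A minimal sketch of the combine step for this template, with our own names: each node holds one slice of x and of the feature vector, and only n partial scalars cross the network.

```python
import numpy as np

def pipelined_value(x_slices, phi_slices):
    """Second template, for f(x) = g(<phi(z), x>): node k holds the parameter
    slice x_slices[k] and the matching feature slice phi_slices[k], computes
    its partial inner product locally, and only n scalars are communicated."""
    partials = [float(np.dot(xk, pk)) for xk, pk in zip(x_slices, phi_slices)]
    return sum(partials)   # the combine stage; slice updates remain local
```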
Randomization: Order of observations matters for delayed updates: imagine that an adversary,
aware of the delay $\tau$, bundles each of the $\tau$ most similar instances $f_t$ together. In this case we will
incur a loss that can be $\tau$ times as large as in the non-delayed case and require a learning rate which is
$\tau$ times smaller, the reason being that only after seeing $\tau$ instances of $f_t$ will we be able to respond
to the data. Such highly correlated settings do occur in practice: for instance, e-mails or search
keywords have significant temporal correlation (holidays, political events, time of day) and cannot
be treated as iid data. Randomization of the order of data can be used to alleviate the problem.
3 Lipschitz Continuous Losses
Due to space constraints we only state the results and omit the proofs. A more detailed analysis
can be found in [11]. We begin with a simple game theoretic analysis that only requires $f_t$ to
be convex and where the subdifferentials are bounded, $\|\partial f_t(x)\| \le L$, for some $L > 0$. Denote
by $x^*$ the minimizer of $f^*(x)$. It is our goal to bound the regret $R$ associated with a sequence
$X = \{x_1, \ldots, x_T\}$ of parameters. If all terms are convex we obtain

$$R[X] := \sum_{t=1}^{T} f_t(x_t) - f_t(x^*) \le \sum_{t=1}^{T} \langle \nabla f_t(x_t), x_t - x^* \rangle = \sum_{t=1}^{T} \langle g_t, x_t - x^* \rangle . \qquad (3)$$

Next define a potential function measuring the distance between $x_t$ and $x^*$. In the more general
analysis this will become a Bregman divergence. We define $D(x \| x') := \frac{1}{2} \|x - x'\|^2$. At the heart
of our regret bounds is the following lemma, which bounds the instantaneous risk at a given time [16]:
Lemma 1 For all $x^*$ and for all $t > \tau$, if $X = \mathbb{R}^n$, the following expansion holds:

$$\langle x_{t-\tau} - x^*, g_{t-\tau} \rangle = \frac{\eta_t}{2} \|g_{t-\tau}\|^2 + \frac{D(x^* \| x_t) - D(x^* \| x_{t+1})}{\eta_t} + \sum_{j=1}^{\min(\tau,\, t-(\tau+1))} \eta_{t-j} \langle g_{t-\tau-j}, g_{t-\tau} \rangle$$
Note that the decomposition of Lemma 1 is very similar to standard regret decomposition bounds,
such as [21]. The key difference is that we now have an additional term characterizing the correlation
between successive gradients which needs to be bounded. In the worst case all we can do is bound
$\langle g_{t-\tau-j}, g_{t-\tau} \rangle \le L^2$, whenever the gradients are highly correlated, which yields the following:
Theorem 2 Suppose all the cost functions are Lipschitz continuous with a constant $L$ and
$\max_{x,x' \in X} D(x \| x') \le F^2$. Given $\eta_t = \sigma/\sqrt{t - \tau}$ for some constant $\sigma > 0$, the regret of the delayed
update algorithm is bounded by

$$R[X] \le \sigma L^2 \sqrt{T} + F^2 \frac{\sqrt{T}}{\sigma} + L^2 \frac{\sigma \tau^2}{2} + 2 L^2 \sigma \tau \sqrt{T} \qquad (4)$$

and consequently for $\sigma^2 = \frac{F^2}{2 \tau L^2}$ and $T \ge \tau^2$ we obtain the bound

$$R[X] \le 4 F L \sqrt{\tau T} \qquad (5)$$

In other words the algorithm converges at rate $O(\sqrt{\tau T})$. This is similar to what we would expect
in the worst case: an adversary may reorder instances such as to maximally slow down progress.
In this case a parallel algorithm is no faster than a sequential code. This result may appear overly
pessimistic but the following example shows that such worst-case scaling behavior is to be expected:
Lemma 3 Assume that an optimal online algorithm with regard to a convex game achieves regret
R[m] after seeing m instances. Then any algorithm which may only use information that is at least
$\tau$ instances old has a worst case regret bound of $\tau R[m/\tau]$.
Our construction works by designing a sequence of functions $f_i$ where for a fixed $n \in \mathbb{N}$ all $f_{n\tau+j}$
are identical (for $j \in \{1, \ldots, \tau\}$). That is, we send identical functions to the algorithm while it has
no chance of responding to them. Hence, even an algorithm knowing that we will see $\tau$ identical
instances in a row but being disallowed to respond to them for $\tau$ instances will do no better than one
which sees every instance once but is allowed to respond instantly.
The useful consequence of Theorem 2 is that we are guaranteed to converge at all even if we encounter delay (the latter is not trivial: after all, we could end up with an oscillating parameter
vector for overly aggressive learning rates). While such extreme cases hardly occur in practice, we
need to make stronger assumptions in terms of correlation of $f_t$ and the degree of smoothness of $f_t$
to obtain tighter bounds. We conclude this section by studying a particularly convenient case: the
setting when the functions $f_i$ are strongly convex, satisfying

$$f_i(x^*) \ge f_i(x) + \langle x^* - x, \nabla_x f_i(x) \rangle + \frac{\lambda}{2} \|x - x^*\|^2 \qquad (6)$$

Here we can get rid of the $D(x^* \| x_1)$ dependency in the loss bound.
Theorem 4 Suppose that the functions $f_i$ are strongly convex with parameter $\lambda > 0$. Moreover,
choose the learning rate $\eta_t = \frac{1}{\lambda(t - \tau)}$ for $t > \tau$ and $\eta_t = 0$ for $t \le \tau$. Then under the assumptions
of Theorem 2 we have the following bound:

$$R[X] \le \lambda \tau F^2 + \Big(\frac{1}{2} + \tau\Big) \frac{L^2}{\lambda} (1 + \tau + \log T) \qquad (7)$$

The key difference is that now we need to take the additional contribution of the gradient correlations
into account. As before, we pay a linear price in the delay $\tau$.
4 Decorrelating Gradients
To improve our bounds beyond the most pessimistic case we need to assume that the adversary is not
acting in the most hostile fashion possible. In the following we study the opposite case, namely
that the adversary is drawing the functions $f_i$ iid from an arbitrary (but fixed) distribution. The key
reason for this requirement is that we need to control the value of $\langle g_t, g_{t'} \rangle$ for adjacent gradients.
The flavor of the bounds we use will be in terms of the expected regret rather than an actual regret.
Conversions from expected to realized regret are standard. See e.g. [13, Lemma 2] for an example
of this technique. For this purpose we need to take expectations of sums of copies of the bound of
Lemma 1. Note that this is feasible since expectations are linear and whenever products between
more than one term occur, they can be seen as products which are conditionally independent given
past parameters, such as $\langle g_t, g_{t'} \rangle$ for $|t - t'| \ge \tau$ (in this case no information about $g_t$ can be used
to infer $g_{t'}$ or vice versa, given that we already know all the history up to time $\min(t, t') - 1$).
Key quantities in our analysis are bounds on the correlation between subsequent instances. In some
cases we will only be able to obtain bounds on the expected regret rather than the actual regret. For
the reasons pointed out in Lemma 3 this is an in-principle limitation of the setting.
Our first strategy is to assume that $f_t$ arises from a scalar function of a linear function class. This
leads to bounds which, while still bearing a linear penalty in $\tau$, make do with considerably improved
constants. The second strategy makes stringent smoothness assumptions on $f_t$, namely it assumes
that the gradients themselves are Lipschitz continuous. This will lead to guarantees for which the
delay becomes increasingly irrelevant as the algorithm progresses.
4.1 Covariance bounds for linear function classes
Many functions $f_t(x)$ depend on $x$ only via an inner product. They can be expressed as

$$f_t(x) = l(y_t, \langle z_t, x \rangle) \quad \text{and hence} \quad g_t(x) = \nabla f_t(x) = z_t \, \partial_{\langle z_t, x \rangle} l(y_t, \langle z_t, x \rangle) \qquad (8)$$

Now assume that $\partial_{\langle z_t, x \rangle} l(y_t, \langle z_t, x \rangle) \le \Lambda$ for all $x$ and all $t$. This holds, e.g. in the case of
logistic regression, the soft-margin hinge loss, and novelty detection. In all three cases we have $\Lambda = 1$.
Robust loss functions such as Huber's regression score [9] also satisfy (8), although with a different
constant (the latter depends on the level of robustness). For such problems it is possible to bound
the correlation between subsequent gradients via the following lemma:

Lemma 5 Denote by $(y, z), (y', z') \sim \Pr(y, z)$ random variables which are drawn independently
of $x, x' \in X$. In this case

$$\mathbf{E}_{y,z,y',z'} \big[ \langle \nabla_x l(y, \langle z, x \rangle), \nabla_x l(y', \langle z', x' \rangle) \rangle \big] \le \Lambda^2 \big\| \mathbf{E}_{z,z'} [z' z^\top] \big\|_{\mathrm{Frob}} =: L^2 \rho \qquad (9)$$
Here we defined $\rho$ to be the scaling factor which quantifies by how much gradients are correlated.
This yields a tighter version of Theorem 2.
Corollary 6 Given $\eta_t = \sigma/\sqrt{t - \tau}$ and the conditions of Lemma 5, the regret of the delayed update
algorithm is bounded by

$$R[X] \le \sigma L^2 \sqrt{T} + F^2 \frac{\sqrt{T}}{\sigma} + L^2 \rho \frac{\sigma \tau^2}{2} + 2 L^2 \rho \sigma \tau \sqrt{T} \qquad (10)$$

Hence for $\sigma^2 = \frac{F^2}{2 \tau \rho L^2}$ (assuming that $\tau \rho \ge 1$) and $T \ge \tau^2$ we obtain $R[X] \le 4 F L \sqrt{\rho \tau T}$.

4.2 Bounds for smooth gradients
The key to improving the rate, rather than merely the constant, with which the bounds depend on $\tau$
is to impose further smoothness constraints on $f_t$. The rationale is quite simple: we want to ensure
that small changes in $x$ do not lead to large changes in the gradient. This is precisely what we need
in order to show that a small delay (which amounts to small changes in $x$) will not affect the update
that is carried out to a significant degree. More specifically, we assume that the gradient of $f$ is a
Lipschitz-continuous function. That is,

$$\|\nabla f_t(x) - \nabla f_t(x')\| \le H \|x - x'\| . \qquad (11)$$

Such a constraint effectively rules out piecewise linear loss functions, such as the hinge loss, structured estimation, or the novelty detection loss. Nonetheless, since this discontinuity only occurs on
a set of measure 0, delayed stochastic gradient descent still works very well on them in practice.
Theorem 7 In addition to the conditions of Theorem 2, assume that the functions $f_i$ are i.i.d., that
$H \geq \frac{L \sqrt{\tau}}{4F}$, and that $H$ also upper-bounds the change in the gradients as in Equation 11. Moreover,
assume that we choose a learning rate $\eta_t = \frac{\sigma}{\sqrt{t - \tau}}$ with $\sigma = \frac{F}{L}$. In this case the risk is bounded by
$$\mathbf{E}[R[X]] \;\leq\; 28.3\, F^2 H + \tfrac{2}{3} F L + \tfrac{4}{3} F^2 H \tau^2 \log T + \tfrac{8}{3} F L \sqrt{T}. \qquad (12)$$
Note that the convergence bound, which is $O(\tau^2 \log T + \sqrt{T})$, is governed by two different regimes.
Initially, a delay of $\tau$ can be quite harmful since subsequent gradients are highly correlated. At a
later stage, when optimization increasingly becomes an averaging process, a delay of $\tau$ in the updates
proves to be essentially harmless. The key difference to the bounds of Theorem 2 is that now the rate of
convergence has improved dramatically and is essentially as good as in sequential online learning.
Note that $H$ does not influence the asymptotic convergence properties, but it significantly affects the
initial convergence properties.
This is exactly what one would expect: initially, while we are far away from the solution $x^*$, parallelism does not help much in providing us with guidance to move towards $x^*$. However, after
a number of steps, online learning effectively becomes an averaging process for variance reduction
around $x^*$, since the stepsize is sufficiently small. In this case averaging becomes the dominant force,
hence parallelization does not degrade convergence further. Such a setting is desirable; after all,
we want to have good convergence for extremely large amounts of data.
4.3 Bounds for smooth gradients with strong convexity
We conclude this section with the tightest of all bounds: the setting where the losses are all
strongly convex and smooth. This occurs, for instance, for logistic regression with $\ell_2$ regularization.
Such a requirement implies that the objective function $f^*(x)$ is sandwiched between two quadratic
functions, hence it is not too surprising that we should be able to obtain rates comparable with what
is possible in the minimization of quadratic functions. Also note that the ratio between the upper and
lower quadratic bounds corresponds loosely to the condition number of a quadratic function, i.e. the
ratio between the largest and smallest eigenvalues of the matrix involved in the optimization problem.
[Figure 2: Experiments with simulated delay on the TREC dataset (left) and on a proprietary dataset (right); panels plot log_2 error against thousands of iterations for delays of 0, 10, 100, and 1000. In both cases a delay of 10 has no effect on the convergence whatsoever and even a delay of 100 is still quite acceptable.]

[Figure 3: Time performance (percent speedup versus number of threads, 1 through 7) on a subset of the TREC dataset which fits into memory, using the quadratic representation. There was either one thread (a serial implementation) or 3 or more threads (master and 2 or more slaves).]
Theorem 8 Under the assumptions of Theorem 4, in particular, assuming that all functions $f_i$ are
i.i.d. and strongly convex with constant $\lambda$, with corresponding learning rate $\eta_t = \frac{1}{\lambda (t - \tau)}$, and provided
that Equation 11 holds, we have the following bound on the expected regret:
$$\mathbf{E}[R[X]] \;\leq\; \frac{10}{9}\left[ \lambda \tau F^2 + \frac{L^2}{2\lambda}\big(1 + \tau + \log(3\tau + H\tau^2/\lambda)\big) + \frac{L^2}{2\lambda}\big(1 + \log T\big) + \frac{\pi^2 \tau^2 H L^2}{6 \lambda^2} \right]. \qquad (13)$$
As before, this improves the rate of the bound. Instead of a dependency of the form $O(\tau \log T)$ we
now have the dependency $O(\tau \log \tau + \log T)$. This is particularly desirable for large $T$. We are now
within a small factor of what a fully sequential algorithm can achieve. In fact, we could make the
constant arbitrarily small for large enough $T$.
5 Experiments
In our experiments we focused on pipelined optimization. In particular, we used two different training sets that were based on e-mails: the TREC dataset [5], consisting of 75,419 e-mail messages, and
a proprietary (significantly harder) dataset of which we took 100,000 e-mails. These e-mails were
tokenized by whitespace. The problem is one of binary classification, where we minimized a
"Huberized" soft-margin loss function
$$f_t(x) = l(y_t \langle z_t, x \rangle) \quad \text{where} \quad l(\xi) = \begin{cases} \frac{1}{2} - \xi & \text{if } \xi \leq 0 \\ \frac{1}{2}(\xi - 1)^2 & \text{if } \xi \in [0, 1] \\ 0 & \text{otherwise.} \end{cases} \qquad (14)$$
Here $y_t \in \{\pm 1\}$ denotes the label of the binary classification problem, and $l$ is the smoothed
quadratic soft-margin loss of [10]. We used two feature representations: a linear one, which
amounted to a simple bag-of-words representation, and a quadratic one, which amounted to generating a bag of word pairs (consecutive or not).
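For concreteness, a direct transcription of (14) and its derivative (our sketch, not the authors' Java code) might look as follows; the continuity of the derivative at the knots is what keeps the loss compatible with the Lipschitz-gradient condition (11), with $H \leq \|z\|^2$.

import numpy as np

def huberized_loss(xi):
    # Smoothed soft-margin loss l(xi) from (14), applied elementwise.
    return np.where(xi <= 0, 0.5 - xi,
           np.where(xi <= 1, 0.5 * (xi - 1.0) ** 2, 0.0))

def huberized_dloss(xi):
    # l'(xi): -1 for xi <= 0, (xi - 1) on [0, 1], 0 beyond 1 (continuous).
    return np.where(xi <= 0, -1.0,
           np.where(xi <= 1, xi - 1.0, 0.0))

def gradient(x, z, y):
    # Gradient of f_t(x) = l(y <z, x>) with respect to x.
    return y * huberized_dloss(y * (z @ x)) * z

print(huberized_dloss(0.0), huberized_dloss(1.0))   # -1.0 0.0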
To deal with high-dimensional feature spaces we used hashing [20]. In particular, for the TREC
dataset we used $2^{18}$ feature bins and for the proprietary dataset we used $2^{24}$ bins. Note that hashing comes with performance guarantees which state that the canonical distortion due to hashing is
sufficiently small for the dimensionality we picked. We tried to address the following issues:
1. The obvious question is a systematic one: how much of a convergence penalty do we incur
in practice due to delay? This experiment checks the goodness of our bounds. We checked
convergence for a system where the delay is given by $\tau \in \{0, 10, 100, 1000\}$.
2. Secondly, we checked on an actual parallel implementation whether the algorithm scales
well. Unlike the previous check, this involves issues such as memory contention, thread synchronization, and the general feasibility of a delayed updating architecture.
Implementation The code was written in Java, although several of the fundamentals were based
upon VW [10], that is, hashing and the choice of loss function. We added regularization using lazy
updates of the parameter vector (i.e. we rescale the updates and occasionally rescale the parameter).
This is akin to Leon Bottou's SGD code. For robustness, we used $\eta_t = \frac{1}{\sqrt{t}}$.
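The hashing step referred to above maps token strings into a fixed number of bins. A minimal illustration (ours; Python's built-in hash stands in for whatever hash function the actual system uses) is:

def hashed_features(tokens, num_bins=2 ** 18):
    # Map whitespace tokens to a sparse {bin_index: count} representation.
    x = {}
    for tok in tokens:
        idx = hash(tok) % num_bins
        x[idx] = x.get(idx, 0) + 1
    return x

def quadratic_features(tokens, num_bins=2 ** 18):
    # Bag of word pairs (consecutive or not), hashed into the same bins.
    x = hashed_features(tokens, num_bins)
    for i, a in enumerate(tokens):
        for b in tokens[i + 1:]:
            idx = hash((a, b)) % num_bins
            x[idx] = x.get(idx, 0) + 1
    return x

print(hashed_features("free money free".split()))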
All timed experiments were run on a single 8-core machine with 32 GB of memory. In general, at
least 6 of the cores were free at any given time. In order to achieve the advantages of parallelization,
we divide the feature space $\{1, \ldots, n\}$ into roughly equal pieces, and assign a slave thread to each
piece. Each slave is given both the weights for its pieces, as well as the corresponding pieces
of the examples. The master is given the label of each example. We compute the dot product
separately on each piece, and then send these results to the master. The master adds the pieces together,
calculates the update, and then sends that back to the slaves. Then the slaves update their weight
vectors in proportion to the magnitude of the central classifier. What makes this work quickly is that
there are multiple examples in flight through this dataflow simultaneously. Note that between the
time when a dot product is calculated for an example and when the results have been transcribed,
the weight vector has been updated with several earlier examples, and the dot products have
been calculated from several later examples. As a safeguard we limited the maximum delay to 100
examples; in this case the compute slave would simply wait for the pipeline to clear. A sketch of this
decomposition appears below.
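The sketch below is a serial, single-process rendition of that decomposition (ours; it omits the threads and the pipelining of multiple in-flight examples, which are exactly the parts that make the real system fast, but it shows how the dot product and the update split across feature pieces):

import numpy as np

def partition(n_features, num_slaves):
    # Split {0, ..., n_features - 1} into roughly equal contiguous pieces.
    b = np.linspace(0, n_features, num_slaves + 1).astype(int)
    return [slice(b[k], b[k + 1]) for k in range(num_slaves)]

def partitioned_sgd(Z, Y, num_slaves=4, eta=0.1):
    n, d = Z.shape
    pieces = partition(d, num_slaves)
    w = np.zeros(d)
    for i in range(n):
        partials = [Z[i, p] @ w[p] for p in pieces]  # "slaves": partial dots
        margin = Y[i] * sum(partials)                # "master": combine
        dl = -1.0 if margin <= 0 else (margin - 1.0 if margin <= 1.0 else 0.0)
        for p in pieces:                             # slaves update own piece
            w[p] -= eta * Y[i] * dl * Z[i, p]
    return w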
The first experiment that we ran was a simulation where we artificially added a delay between the
update and the product (Figure 2, left). We ran this experiment using linear features, and observed
that the performance did not noticeably degrade with a delay of 10 examples, did not significantly
degrade with a delay of 100, but with a delay of 1000 the performance became much worse.
The second experiment that we ran was with the proprietary dataset (Figure 2, right). In this case the
delays hurt less; we conjecture that this was because the information gained from each example was
smaller. In fact, even a delay of 1000 does not result in particularly bad performance.
Since even the sequential version already handled 150,000 examples per second, we tested parallelization only for quadratic features, where throughput is on the order of 1,000 examples per
second. Here parallelization dramatically improved performance; see Figure 3. To control for
disk access we loaded a subset of the data into memory and carried out the algorithm on it.
Summary and Discussion
The type of updates we presented is a rather natural one. However, intuitively, having a delay of $\tau$
is like having a learning rate that is $\tau$ times larger. In this paper, we have shown theoretically how
independence between examples can make the actual effect much smaller.
The experimental results showed three important aspects: first of all, small simulated delayed updates do not hurt much, and on harder problems they hurt less; secondly, in practice it is hard to
speed up "easy" problems with a small amount of computation, such as e-mails with linear features;
finally, when examples are larger or harder, the speedups can be quite dramatic.
References
[1] Peter L. Bartlett, Elad Hazan, and Alexander Rakhlin. Adaptive online gradient descent. In
J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, Cambridge, MA, 2008. MIT Press.
[2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In J. C. Platt, D. Koller,
Y. Singer, and S. T. Roweis, editors, NIPS. MIT Press, 2007.
[3] Léon Bottou and Yann LeCun. Large scale online learning. In S. Thrun, L. Saul, and
B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, pages 217-224, Cambridge, MA, 2004. MIT Press.
[4] C. T. Chu, S. K. Kim, Y. A. Lin, Y. Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-reduce for
machine learning on multicore. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances
in Neural Information Processing Systems 19, 2007.
[5] G. Cormack. TREC 2007 spam track overview. In The Sixteenth Text REtrieval Conference
(TREC 2007) Proceedings, 2007.
[6] O. Delalleau and Y. Bengio. Parallel stochastic gradient descent, 2007. CIAR Summer School, Toronto.
[7] C. B. Do, Q. V. Le, and C.-S. Foo. Proximal regularization for online and batch learning. In A. P.
Danyluk, L. Bottou, and M. L. Littman, editors, Proceedings of the 26th Annual International
Conference on Machine Learning, ICML 2009, Montreal, Quebec, Canada, June 14-18, 2009,
volume 382, page 33. ACM, 2009.
[8] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex
optimization. Machine Learning, 69(2-3):169-192, 2007.
[9] P. J. Huber. Robust Statistics. John Wiley and Sons, New York, 1981.
[10] J. Langford, L. Li, and A. Strehl. Vowpal Wabbit online learning project, 2007. http://hunch.net/?p=309.
[11] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. arXiv:0911.0491.
[12] N. Murata, S. Yoshizawa, and S. Amari. Network information criterion: determining the
number of hidden units for artificial neural network models. IEEE Transactions on Neural
Networks, 5:865-872, 1994.
[13] Y. Nesterov and J.-P. Vial. Confidence level solutions for stochastic programming. Technical Report 2000/13, Université catholique de Louvain, Center for Operations Research and
Economics, 2000.
[14] N. Ratliff, J. Bagnell, and M. Zinkevich. Maximum margin planning. In International Conference on Machine Learning, July 2006.
[15] N. Ratliff, J. Bagnell, and M. Zinkevich. (Online) subgradient methods for structured prediction. In Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS),
March 2007.
[16] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal estimated subgradient solver for SVM. In Proc. Intl. Conf. Machine Learning, 2007.
[17] Choon Hui Teo, S. V. N. Vishwanathan, Alex J. Smola, and Quoc V. Le. Bundle methods for
regularized risk minimization. J. Mach. Learn. Res., 2009. Submitted in February 2009.
[18] S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark Schmidt, and Kevin Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In Proc. Intl. Conf.
Machine Learning, pages 969-976, New York, NY, USA, 2006. ACM Press.
[19] M. Weimer, A. Karatzoglou, Q. Le, and A. Smola. CofiRank: maximum margin matrix
factorization for collaborative ranking. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis,
editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA,
2008.
[20] K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. J. Smola. Feature hashing for
large scale multitask learning. In L. Bottou and M. Littman, editors, International Conference
on Machine Learning, 2009.
[21] M. Zinkevich. Online convex programming and generalised infinitesimal gradient ascent. In
Proc. Intl. Conf. Machine Learning, pages 928-936, 2003.
Nonparametric Greedy Algorithms for the Sparse
Learning Problem
Han Liu and Xi Chen
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
This paper studies the forward greedy strategy in sparse nonparametric regression. For additive models, we propose an algorithm called additive forward regression; for general multivariate models, we propose an algorithm called generalized forward regression. Both algorithms simultaneously conduct estimation
and variable selection in nonparametric settings for the high dimensional sparse
learning problem. Our main emphasis is empirical: on both simulated and real
data, these two simple greedy methods can clearly outperform several state-of-the-art competitors, including LASSO, a nonparametric version of LASSO called
the sparse additive model (SpAM) and a recently proposed adaptive parametric
forward-backward algorithm called Foba. We also provide some theoretical justifications of specific versions of the additive forward regression.
1 Introduction
The linear model is a mainstay of statistical inference. At present, there are two major approaches
to fit sparse linear models: convex regularization and greedy pursuit. The convex regularization
approach regularizes the model by adding a sparsity constraint, leading to methods like LASSO
[19, 7] or the Dantzig selector [6]. The greedy pursuit approach regularizes the model by iteratively
selecting the current optimal approximation according to some criteria, leading to methods like the
matching pursuit [14] or orthogonal matching pursuit (OMP) [20].
Substantial progress has been made recently on applying the convex regularization idea to fit sparse
additive models. For splines, Lin and Zhang [12] propose a method called COSSO, which uses
the sum of reproducing kernel Hilbert space norms as a sparsity inducing penalty, and can simultaneously conduct estimation and variable selection; Ravikumar et al. [17, 16] develop a method
called SpAM. The population version of SpAM can be viewed as a least squares problem penalized
by the sum of $L_2(P)$-norms; Meier et al. [15] develop a similar method using a different sparsity-smoothness penalty, which guarantees the solution to be a spline. All these methods can be viewed
as different nonparametric variants of LASSO. They have similar drawbacks: (i) it is hard to extend
them to handle general multivariate regression where the mean functions are no longer additive; (ii)
due to the large bias induced by the regularization penalty, the model estimation is suboptimal. One
way to avoid this is to resort to two-stage procedures as in [13], but the method becomes less robust
due to the inclusion of an extra tuning parameter in the first stage.
In contrast to the convex regularization methods, the greedy pursuit approaches do not suffer from
such problems. Instead of trying to formulate the whole learning task as a global convex optimization, the greedy pursuit approaches adopt iterative algorithms with a local view. During each
iteration, only a small number of variables are actually involved in the model fitting so that the whole
inference only involves low dimensional models. Thus they naturally extend to the general multivariate regression and do not induce large estimation bias, which makes them especially suitable for
high dimensional nonparametric inference. However, the greedy pursuit approaches do not attract as
much attention as the convex regularization approaches in the nonparametric literature. For additive
models, the only works we know of are sparse boosting [4] and multivariate adaptive regression
splines (MARS) [9]. These methods mainly target additive models or lower-order functional
ANOVA models, but without much theoretical analysis. For general multivariate regression, the
only available method we are aware of is rodeo [11]. However, rodeo requires the total number of
variables to be no larger than double-logarithmic in the sample size, and it does not explicitly
identify relevant variables.
In this paper, we propose two new greedy algorithms for sparse nonparametric learning in high dimensions. By extending the idea of the orthogonal matching pursuit to nonparametric settings, the
main contributions of our work include: (i) we formulate two greedy nonparametric algorithms:
additive forward regression (AFR) for sparse additive models and generalized forward regression
(GFR) for general multivariate regression models. Both of them can simultaneously conduct estimation and variable selection in high dimensions. (ii) We present theoretical results for AFR using
specific smoothers. (iii) We report thorough numerical results on both simulated and real-world
datasets to demonstrate the superior performance of these two methods over the state-of-the-art
competitors, including LASSO, SpAM, and an adaptive parametric forward-backward algorithm
called Foba [22].
The rest of this paper is organized as follows: in the next section we review the basic problem
formulation and notation. In Section 3 we present the AFR algorithm; in Section 4, the
GFR algorithm. Some theoretical results are given in Section 5. In Section 6 we present numerical
results on both simulated and real datasets, followed by a concluding section at the end.
2 Sparse Nonparametric Learning in High Dimensions
We begin by introducing some notation. Assuming $n$ data points $\{(X^i, Y^i)\}_{i=1}^n$ are observed from
a high dimensional regression model
$$Y^i = m(X^i) + \epsilon^i, \quad \epsilon^i \sim N(0, \sigma^2), \quad i = 1, \ldots, n, \qquad (1)$$
where $X^i = (X_1^i, \ldots, X_p^i)^\top \in \mathbb{R}^p$ is a $p$-dimensional design point and $m : \mathbb{R}^p \to \mathbb{R}$ is an unknown
smooth mean function. Here we assume $m$ lies in a $p$-dimensional second order Sobolev ball with
finite radius. In the sequel, we denote the response vector $(Y^1, \ldots, Y^n)^\top$ by $Y$ and the vector
$(X_j^1, \ldots, X_j^n)^\top$ by $X_j$ for $1 \leq j \leq p$.
We assume $m$ is functionally sparse, i.e. there exists an index set $S \subset \{1, \ldots, p\}$ such that
$$\text{(General)} \quad m(x) = m(x_S), \qquad (2)$$
where $|S| = r \ll p$ and $x_S$ denotes the sub-vector of $x$ with elements indexed by $S$.
Sometimes, the function $m$ can be assumed to have more structure to obtain a better estimation
result. The most popular such assumption is additivity [10]. In this case, $m$ decomposes into the sum
of $r$ univariate functions $\{m_j\}_{j \in S}$:
$$\text{(Additive)} \quad m(x) = \alpha + \sum_{j \in S} m_j(x_j), \qquad (3)$$
where each component function $m_j$ is assumed to lie in a second order Sobolev ball with finite
radius, so that each element in the space is smooth enough. For the sake of identifiability, we also
assume $\mathbf{E}\, m_j(X_j) = 0$ for $j = 1, \ldots, p$, where the expectation is taken with respect to the marginal
distribution of $X_j$.
Given the models in (2) or (3), we have two tasks: function estimation and variable selection. For
the first task, we try to find an estimate $\widehat{m}$ such that $\|\widehat{m} - m\| \to 0$ as $n$ goes to infinity, where $\|\cdot\|$
is some function norm. For the second task, we try to find an estimate $\widehat{S}$, which is an index set of
variables, such that $P(\widehat{S} = S) \to 1$ as $n$ goes to infinity.
3 Additive Forward Regression
In this section, we assume the true model is additive, i.e. $m(x) = \alpha + \sum_{j \in S} m_j(x_j)$. In general,
if the true index set of relevant variables is known, the backfitting algorithm can be directly
applied to estimate $\widehat{m}$ [10]. It is essentially a Gauss-Seidel iteration for solving a set of normal
equations in a function space. In particular, we denote the estimate on the $j$th variable $X_j$ by
$\widehat{m}_j \equiv (\widehat{m}_j(X_j^1), \ldots, \widehat{m}_j(X_j^n))^\top \in \mathbb{R}^n$. Then $\widehat{m}_j$ can be estimated by regressing the partial residual
vector $R_j = Y - \alpha - \sum_{k \neq j} \widehat{m}_k$ on the variable $X_j$. This can be calculated by $\widehat{m}_j = S_j R_j$, where
$S_j : \mathbb{R}^n \to \mathbb{R}^n$ is a smoothing matrix, which only depends on $X^1, \ldots, X^n$ but not on $Y$. Once
$\widehat{m}_j$ is updated, the algorithm holds it fixed and repeats this process by cycling through each variable
until convergence. Under mild conditions on the smoothing matrices $S_1, \ldots, S_p$, the backfitting
algorithm is a first order algorithm that is guaranteed to converge [5] and achieves the minimax rate
of convergence, as if only estimating a univariate function. However, for sparse learning problems,
since the true index set is unknown, the backfitting algorithm no longer works, due to the uncontrolled
estimation variance. A sketch of one backfitting cycle is given below.
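The sketch below (ours, with a Nadaraya-Watson kernel smoother standing in for the generic smoothing matrix $S_j$; the bandwidth and sweep count are placeholders) shows the backfitting cycle just described:

import numpy as np

def kernel_smoother_matrix(x, h):
    # Nadaraya-Watson smoother for one covariate: normalized Gaussian kernel
    # weights, so that the fitted component is m_hat = S @ r.
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def backfit(Y, active, S_list, n_sweeps=20):
    # Cycle over active coordinates, refitting each component on the partial
    # residuals; centering each component enforces E m_j = 0.
    n = len(Y)
    alpha = Y.mean()
    M = {j: np.zeros(n) for j in active}
    for _ in range(n_sweeps):
        for j in active:
            R_j = Y - alpha - sum(M[k] for k in active if k != j)
            M[j] = S_list[j] @ R_j
            M[j] -= M[j].mean()
    return alpha, M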
By extending the idea of orthogonal matching pursuit to sparse additive models, we design a
forward greedy algorithm called additive forward regression (AFR), which only involves a few
variables in each iteration. Under this framework, we only need to run the backfitting algorithm
on a small set of variables; thus the variance can be well controlled. The algorithm is described in
Figure 1, where we use $\langle \cdot, \cdot \rangle_n$ to denote the empirical inner product of two vectors.
Input: $\{(X^i, Y^i)\}_{i=1}^n$ and $\epsilon > 0$
let $A^{(0)} = \emptyset$, $\alpha = \sum_{i=1}^n Y^i / n$, and the residual $R^{(0)} = Y - \alpha$
for $k = 1, 2, 3, \ldots$
    for each $j \notin A^{(k-1)}$, estimate $\widehat{m}_j$ by smoothing: $\widehat{m}_j = S_j R^{(k-1)}$
    let $j^{(k)} = \arg\max_{j \notin A^{(k-1)}} |\langle \widehat{m}_j, R^{(k-1)} \rangle_n|$
    let $A^{(k)} = A^{(k-1)} \cup \{j^{(k)}\}$
    estimate $M^{(k)} = \{m_j : j \in A^{(k)}\}$ by the backfitting algorithm
    compute the residual $R^{(k)} = Y - \alpha - \sum_{m_j \in M^{(k)}} m_j(X_j)$
    if $(\|R^{(k-1)}\|_2^2 - \|R^{(k)}\|_2^2)/n \leq \epsilon$
        $k = k - 1$
        break
    end if
end for
Output: selected variables $A^{(k)}$ and estimated component functions $M^{(k)} = \{m_j : j \in A^{(k)}\}$

Figure 1: The Additive Forward Regression Algorithm
The algorithm uses an active set $A$ to index the variables included in the model during each iteration
and then performs a full optimization over all "active" variables via the backfitting algorithm. The
main advantage of this algorithm is that during each iteration the model inference is conducted in
low dimensions, thus avoiding the curse of dimensionality. The stopping criterion is controlled
by a predefined parameter $\epsilon$, which plays the role of the regularization tuning parameter in convex
regularization methods. Other stopping criteria, such as a maximum number of steps, can also
be adopted. In practice, we always recommend using a data-dependent technique, such as cross-validation, to automatically tune this parameter.
Moreover, the smoothing matrix $S_j$ can be fairly general, e.g. the univariate local linear smoothers
described below, kernel smoothers, or spline smoothers [21]. A compact sketch combining these pieces follows.
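The rendition below is again illustrative rather than the authors' implementation; it reuses kernel_smoother_matrix and backfit from the earlier sketch, and the fixed bandwidth and tolerance are placeholders.

def additive_forward_regression(X, Y, eps=1e-3, h=0.1, max_steps=None):
    # Greedily add the coordinate whose smoothed fit to the current residual
    # has the largest empirical inner product, then refit by backfitting.
    n, p = X.shape
    S_list = [kernel_smoother_matrix(X[:, j], h) for j in range(p)]
    active, alpha, M = [], Y.mean(), {}
    R = Y - alpha
    prev_rss = (R ** 2).mean()
    for _ in range(max_steps or p):
        rest = [j for j in range(p) if j not in active]
        if not rest:
            break
        scores = {j: abs((S_list[j] @ R) @ R) / n for j in rest}
        j_new = max(scores, key=scores.get)
        alpha_new, M_new = backfit(Y, active + [j_new], S_list)
        R_new = Y - alpha_new - sum(M_new.values())
        if prev_rss - (R_new ** 2).mean() <= eps:
            break                      # gain too small: keep the old model
        active.append(j_new)
        alpha, M, R = alpha_new, M_new, R_new
        prev_rss = (R ** 2).mean()
    return active, alpha, M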
4 Generalized Forward Regression
This section only assumes $m(x)$ to be functionally sparse, i.e. $m(x) = m(x_S)$, without restricting the
model to be additive. In this case, finding a good estimate $\widehat{m}$ becomes more challenging.
To estimate the general multivariate mean function $m(x)$, one of the most popular methods is
local linear regression: given an evaluation point $x = (x_1, \ldots, x_p)^\top$, the estimate $\widehat{m}(x)$ is the
solution $\widehat{\alpha}_x$ to the following locally kernel weighted least squares problem:
$$\min_{\alpha_x, \beta_x} \sum_{i=1}^n \big( Y^i - \alpha_x - \beta_x^\top (X^i - x) \big)^2 \prod_{j=1}^p K_{h_j}(X_j^i - x_j), \qquad (4)$$
where $K(\cdot)$ is a one dimensional kernel function and the kernel weight function in (4) is taken to be a
product kernel with diagonal bandwidth matrix $H^{1/2} = \mathrm{diag}\{h_1, \ldots, h_p\}$. Such a problem can
be re-cast as a standard weighted least squares regression. Therefore a closed-form solution for
the local linear estimate can be given explicitly:
$$\widehat{\alpha}_x = e_1^\top (X_x^\top W_x X_x)^{-1} X_x^\top W_x Y = S_x Y,$$
where $e_1 = (1, 0, \ldots, 0)^\top$ is the first canonical vector in $\mathbb{R}^{p+1}$ and
$$W_x = \mathrm{diag}\Big\{ \prod_{j=1}^p K_{h_j}(X_j^1 - x_j), \ \ldots, \ \prod_{j=1}^p K_{h_j}(X_j^n - x_j) \Big\}, \qquad X_x = \begin{pmatrix} 1 & (X^1 - x)^\top \\ \vdots & \vdots \\ 1 & (X^n - x)^\top \end{pmatrix}.$$
Here, $S_x$ is the local linear smoothing matrix. Note that if we constrain $\beta_x = 0$, then the local linear
estimate reduces to the kernel estimate. The pointwise rate of convergence of such an estimate
has been characterized in [8]: $|\widehat{m}(x) - m(x)|^2 = O_P(n^{-4/(4+p)})$, which is extremely slow when
$p > 10$.
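A direct numpy transcription of this estimator (our sketch; a Gaussian product kernel and a common bandwidth $h$ are assumed):

import numpy as np

def local_linear_estimate(X, Y, x, h):
    # alpha_hat = e_1^T (Xx^T Wx Xx)^{-1} Xx^T Wx Y at evaluation point x.
    n, p = X.shape
    D = X - x                                      # rows (X^i - x)
    w = np.exp(-0.5 * (D / h) ** 2).prod(axis=1)   # product kernel weights
    Xx = np.hstack([np.ones((n, 1)), D])           # local design with intercept
    A = Xx.T @ (w[:, None] * Xx)
    b = Xx.T @ (w * Y)
    return np.linalg.solve(A, b)[0]                # first coefficient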
To handle the large $p$ case, we again extend the idea of orthogonal matching pursuit to this
setting. For an index subset $A \subset \{1, \ldots, p\}$ and evaluation point $x$, the local linear smoother
restricted to $A$ is denoted by $S(A)$, with
$$S(A)_x = e_1^\top \big( X(A)_x^\top W(A)_x X(A)_x \big)^{-1} X(A)_x^\top W(A)_x,$$
where $W(A)_x$ is a diagonal matrix whose diagonal entries are the products of univariate kernels over
the set $A$, and $X(A)_x$ is the submatrix of $X_x$ that only contains the columns indexed by $A$.
Given these definitions, the generalized forward regression (GFR) algorithm is described in Figure
2. Similar to AFR, GFR also uses an active set $A$ to index the variables included in the model.
This mechanism allows all the statistical inference to be conducted in low-dimensional spaces.

Input: $\{(X^i, Y^i)\}_{i=1}^n$ and $\epsilon > 0$
let $A^{(0)} = \emptyset$, $\alpha = \sum_{i=1}^n Y^i / n$, and $\phi^{(0)} = \sum_{i=1}^n (Y^i - \alpha)^2 / n$
for $k = 1, 2, 3, \ldots$
    let $j^{(k)} = \arg\min_{j \notin A^{(k-1)}} \sum_{i=1}^n \big( Y^i - S(A^{(k-1)} \cup \{j\})_{X^i} Y \big)^2 / n$
    let $A^{(k)} = A^{(k-1)} \cup \{j^{(k)}\}$
    let $\phi^{(k)} = \sum_{i=1}^n \big( Y^i - S(A^{(k)})_{X^i} Y \big)^2 / n$
    if $(\phi^{(k-1)} - \phi^{(k)}) \leq \epsilon$
        $k = k - 1$
        break
    end if
end for
Output: selected variables $A^{(k)}$ and local linear estimates $(S(A^{(k)})_{X^1} Y, \ldots, S(A^{(k)})_{X^n} Y)$

Figure 2: The Generalized Forward Regression Algorithm

The GFR algorithm using the multivariate local linear smoother can be computationally heavy for
very high dimensional problems. However, GFR is a generic framework and can be equipped with
arbitrary multivariate smoothers, e.g. kernel, nearest neighbor, or spline smoothers. These smoothers
lead to much better scalability. The only reason we use the local linear smoother as the illustrative
example in this paper is its popularity and its potential advantage in correcting boundary
bias.
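A correspondingly naive rendition of GFR (ours, reusing local_linear_estimate above restricted to the active coordinates; the smoother evaluations at all $n$ points per candidate per step make this suitable only for small problems):

def gfr(X, Y, eps=1e-3, h=0.2, max_steps=None):
    # Greedily add the coordinate that most reduces the residual sum of
    # squares of the restricted local linear smoother.
    n, p = X.shape

    def rss(active):
        cols = list(active)
        fits = np.array([local_linear_estimate(X[:, cols], Y, X[i, cols], h)
                         for i in range(n)])
        return ((Y - fits) ** 2).mean()

    active, phi = [], ((Y - Y.mean()) ** 2).mean()
    for _ in range(max_steps or p):
        rest = [j for j in range(p) if j not in active]
        if not rest:
            break
        scores = {j: rss(active + [j]) for j in rest}
        j_best = min(scores, key=scores.get)
        if phi - scores[j_best] <= eps:
            break
        active.append(j_best)
        phi = scores[j_best]
    return active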
5 Theoretical Properties
In this section, we provide theoretical properties of the additive forward regression estimate
using the spline smoother. Due to the asymptotic equivalence of the spline smoother and the local
linear smoother [18], we deduce that these results should also hold for the local linear smoother.
Our main result, Theorem 1, says that when using the spline smoother with a certain truncation rate
to implement the AFR algorithm, the resulting estimator is consistent at a certain rate. When the
underlying true component functions do not go to zero too fast, we also achieve variable selection
consistency. Our analysis relies heavily on [3]. A similar analysis has also been reported in the
technical report version of [16].
Theorem 1. Assume there exists some $\xi > 0$, which can be arbitrarily large, such that $p = O(n^\xi)$.
For all $j \in \{1, \ldots, p\}$, we assume $m_j$ lies in a second-order Sobolev ball with finite radius, and
$m = \alpha + \sum_{j=1}^p m_j$. For the additive forward regression algorithm using the spline smoother with
a truncation rate of $n^{1/4}$, after $(n/\log n)^{1/2}$ steps, we obtain
$$\|m - \widehat{m}\|^2 = O_P\left( \sqrt{\frac{\log n}{n}} \right). \qquad (5)$$
Furthermore, if we also assume $\min_{j \in S} \|m_j\| = \Omega\left( \left( \frac{\log n}{n} \right)^{1/4} \right)$, then $P(\widehat{S} = S) \to 1$ as $n$ goes
to infinity. Here, $\widehat{S}$ is the index set of nonzero component functions in $\widehat{m}$.
The rate for $\|\widehat{m} - m\|^2$ obtained from Theorem 1 is only $O(n^{-1/2})$, which is slower than the
minimax rate $O(n^{-4/5})$. This is mainly an artifact of our analysis rather than a drawback of the
additive forward regression algorithm. In fact, if we perform a basis expansion for each component
function to first cast the problem as a finite dimensional linear model with group structure, then under
some more stringent smallest-eigenvalue conditions on the augmented design, as in [23], we can
show that AFR using spline smoothers actually achieves the minimax rate $O(n^{-4/5})$ up to a
logarithmic factor. A detailed treatment will be reported in a follow-up paper.
Sketch of Proof: We first describe an algorithm called the group orthogonal greedy algorithm (GOGA),
which solves a noiseless function approximation problem in a direct-sum Hilbert space. AFR can
then be viewed as an empirical realization of this "ideal" algorithm.
GOGA is a group extension of the orthogonal greedy algorithm (OGA) in [3]. For $j = 1, \ldots, p$, let
$\mathcal{H}_j$ be a Hilbert space of continuous functions with a Hamel basis $\mathcal{D}_j$. Then for a function $m$ in the
direct-sum Hilbert space $\mathcal{H} = \mathcal{H}_1 + \mathcal{H}_2 + \ldots + \mathcal{H}_p$, we want to approximate $m$ using the union of
many truncated bases $\mathcal{D} = \mathcal{D}_1' \cup \ldots \cup \mathcal{D}_p'$, where $\mathcal{D}_j' \subset \mathcal{D}_j$ for all $j$.
We equip $\mathcal{H}$ with an inner product $\langle \cdot, \cdot \rangle$: for all $f, g \in \mathcal{H}$, $\langle f, g \rangle = \int f(X) g(X)\, dP_X$, where $P_X$ is the
marginal distribution of $X$. Let $\|\cdot\|$ be the norm induced by this inner product on $\mathcal{H}$. GOGA
begins by setting $m^{(0)} = 0$, and then recursively defines the approximant $m^{(k)}$ based on $m^{(k-1)}$
and its residual $r^{(k-1)} \equiv m - m^{(k-1)}$. More specifically, we proceed as follows: define $f_j^{(k)}$
to be the projection of $r^{(k-1)}$ onto the truncated basis $\mathcal{D}_j'$, i.e. $f_j^{(k)} = \arg\min_{g \in \mathcal{D}_j'} \|r^{(k-1)} - g\|^2$.
We calculate $j^{(k)}$ as $j^{(k)} = \arg\max_j |\langle r^{(k-1)}, f_j^{(k)} \rangle|$. Then $m^{(k)}$ is calculated by projecting $m$
onto the additive function space generated by $A^{(k)} = \mathcal{D}'_{j^{(1)}} + \cdots + \mathcal{D}'_{j^{(k)}}$:
$$\widehat{m}^{(k)} = \arg\min_{g \in \mathrm{span}(A^{(k)})} \|m - g\|^2.$$
AFR using regression splines is exactly GOGA when there is no noise. For noisy samples, we
replace the unknown function $m$ by its $n$-dimensional output vector $Y$, and replace the inner product
$\langle \cdot, \cdot \rangle$ by $\langle \cdot, \cdot \rangle_n$, defined as $\langle f, g \rangle_n = \frac{1}{n} \sum_{i=1}^n f(X^i) g(X^i)$. The projection of the current
residual vector onto each dictionary $\mathcal{D}_j'$ is replaced by the corresponding nonparametric smoother.
Considering any function $m \in \mathcal{H}$, we proceed in the same way as in [3], but replacing the OGA
arguments in their analysis with those of GOGA. The desired results of the theorem follow from a
simple argument bounding the random covering number of spline spaces.
6 Experimental Results
In this section, we present numerical results for AFR and GFR applied to both synthetic and real
data. The main conclusion is that, in many cases, their performance on both function estimation
and variable selection clearly outperforms that of LASSO, Foba, and SpAM. For all the reported
experiments, we use local linear smoothers to implement AFR and GFR. The results for
other smoothers, such as smoothing splines, are similar. Note that the bandwidth parameters
have a big effect on the performance of local linear smoothers. Our experiments simply use
the plug-in bandwidths according to [8] and set the bandwidth for each variable to be the same.
For AFR, the bandwidth $h$ is set to $1.06 n^{-1/5}$; for GFR, the bandwidth varies over the
iterations as $h = 1.06 n^{-1/(4+|A|)}$, where $|A|$ is the size of the current active set.
For an estimate $\widehat{m}$, the estimation performance on the synthetic data is measured by the mean squared
error (MSE), defined as $\mathrm{MSE}(\widehat{m}) = \frac{1}{n} \sum_{i=1}^n \big( m(X^i) - \widehat{m}(X^i) \big)^2$. For the real data, since
we do not know the true function $m(x)$, we approximate the mean squared error using 5-fold cross-validation scores.
6.1 The Synthetic Data
For the synthetic data experiments, we consider a compound symmetry covariance structure for the
design matrix $X \in \mathbb{R}^{n \times p}$ with $n = 400$ and $p = 20$. Each dimension $X_j$ is generated according to
$$X_j = \frac{W_j + tU}{1 + t}, \quad j = 1, \ldots, p,$$
where $W_1, \ldots, W_p$ and $U$ are i.i.d. sampled from Uniform(0,1). Therefore the correlation between
$X_j$ and $X_k$ is $t^2/(1 + t^2)$ for $j \neq k$. We assume the true regression functions have $r = 4$ relevant
variables:
$$Y = m(X) + \epsilon = m(X_1, \ldots, X_4) + \epsilon. \qquad (6)$$
To evaluate the variable selection performance of the different methods, we generate 50 designs and 50
trials for each design. For each trial, we run the greedy forward algorithm for $r$ steps. If all the relevant
variables are included, the variable selection task for this trial is said to be successful. We report
the mean and standard deviation of the success rate in variable selection for various correlations
between covariates, obtained by varying the value of $t$.
We adopt some synthetic examples from [12] and define the following four functions: $g_1(x) = x$,
$g_2(x) = (2x - 1)^2$, $g_3(x) = \sin(2\pi x)/(2 - \sin(2\pi x))$, and $g_4(x) = 0.1 \sin(2\pi x) + 0.2 \cos(2\pi x) + 0.3 \sin^2(2\pi x) + 0.4 \cos^3(2\pi x) + 0.5 \sin^3(2\pi x)$.
The following four regression models are studied. The first model is linear; the second is additive;
the third and fourth are more complicated nonlinear models with at least two-way interactions:
$$\text{(Model 1)}: \quad Y^i = 2X_1^i + 3X_2^i + 4X_3^i + 5X_4^i + 2 N(0, 1), \ \text{with } t = 1;$$
$$\text{(Model 2)}: \quad Y^i = 5 g_1(X_1^i) + 3 g_2(X_2^i) + 4 g_3(X_3^i) + 6 g_4(X_4^i) + 4 N(0, 1), \ \text{with } t = 1;$$
$$\text{(Model 3)}: \quad Y^i = \exp(2 X_1^i X_2^i + X_3^i) + 2 X_4^i + N(0, 1), \ \text{with } t = 0.5;$$
$$\text{(Model 4)}: \quad Y^i = \sum_{j=1}^4 g_j(X_j^i) + g_1(X_3^i X_4^i) + g_2\big( (X_1^i + X_3^i)/2 \big) + g_3(X_1^i X_2^i) + N(0, 1), \ \text{with } t = 0.5.$$
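To convey the flavor of these simulations, here is a sketch of the data generator (ours, shown for Model 2):

import numpy as np

def make_design(n=400, p=20, t=1.0, rng=None):
    # X_j = (W_j + t U) / (1 + t)  =>  corr(X_j, X_k) = t^2 / (1 + t^2).
    rng = rng or np.random.default_rng(0)
    W = rng.uniform(size=(n, p))
    U = rng.uniform(size=(n, 1))
    return (W + t * U) / (1.0 + t)

g1 = lambda x: x
g2 = lambda x: (2 * x - 1) ** 2
g3 = lambda x: np.sin(2 * np.pi * x) / (2 - np.sin(2 * np.pi * x))
g4 = lambda x: (0.1 * np.sin(2 * np.pi * x) + 0.2 * np.cos(2 * np.pi * x)
                + 0.3 * np.sin(2 * np.pi * x) ** 2
                + 0.4 * np.cos(2 * np.pi * x) ** 3
                + 0.5 * np.sin(2 * np.pi * x) ** 3)

def model2(n=400, p=20, t=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    X = make_design(n, p, t, rng)
    Y = (5 * g1(X[:, 0]) + 3 * g2(X[:, 1]) + 4 * g3(X[:, 2])
         + 6 * g4(X[:, 3]) + 4 * rng.normal(size=n))
    return X, Y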
Compared with LASSO, Foba, and SpAM, the estimation performance, using MSE as the evaluation
criterion, is presented in Figure 3, and Table 1 shows the variable selection success rates for
these models under different correlations controlled by $t$.
From Figure 3, we see that the AFR and GFR methods provide very good estimates of the underlying
true regression functions compared to the others. Firstly, LASSO and SpAM perform very poorly
when the selected model is very sparse. This is because they are convex regularization based approaches: to obtain a very sparse model, they induce a very large estimation bias. On the other hand,
the greedy pursuit based methods like Foba, AFR and GFR do not suffer from such a problem. Secondly, when the true model is linear, all methods perform similarly. For the nonlinear true regression
functions, AFR, GFR and SpAM outperform LASSO and Foba. This is to be expected, since LASSO and
Foba are based on linear assumptions. Furthermore, we notice that when the true model is additive
(Model 2) or nearly additive (Model 4), AFR performs best. However, for the non-additive
general multivariate regression function (Model 3), GFR performs best. For all examples, when
more and more irrelevant variables are included in the model, SpAM has better generalization
performance, due to the regularization effect.

[Figure 3: Performance of the different algorithms on synthetic data: MSE versus sparsity level, one panel per model (Models 1-4), comparing GFR, AFR, SpAM, Foba, and LASSO.]
Table 1: Comparison of variable selection success rates, mean (sd)

Model 1     t = 0            t = 1            t = 2
LASSO       1.000 (0.0000)   0.879 (0.0667)   0.559 (0.0913)
Foba        1.000 (0.0000)   0.882 (0.0557)   0.553 (0.0777)
SpAM        0.999 (0.0028)   0.683 (0.1805)   0.190 (0.1815)
AFR         0.999 (0.0039)   0.879 (0.0525)   0.564 (0.0739)
GFR         0.990 (0.0229)   0.839 (0.0707)   0.515 (0.0869)

Model 2     t = 0            t = 1            t = 2
LASSO       0.062 (0.0711)   0.056 (0.0551)   0.004 (0.0106)
Foba        0.069 (0.0774)   0.060 (0.0550)   0.029 (0.0548)
SpAM        0.842 (0.1128)   0.118 (0.0872)   0.008 (0.0056)
AFR         0.998 (0.0055)   0.819 (0.1293)   0.260 (0.1439)
GFR         0.769 (0.1751)   0.199 (0.2102)   0.021 (0.0364)

Model 3     t = 0            t = 1            t = 2
LASSO       0.997 (0.0080)   0.818 (0.1137)   0.522 (0.1520)
Foba        0.999 (0.0039)   0.802 (0.1006)   0.391 (0.1577)
SpAM        0.980 (0.1400)   0.934 (0.1799)   0.395 (0.3107)
AFR         1.000 (0.0000)   1.000 (0.0000)   0.902 (0.1009)
GFR         1.000 (0.0000)   0.995 (0.0103)   0.845 (0.1623)

Model 4     t = 0            t = 0.5          t = 1
LASSO       0.043 (0.0482)   0.083 (0.0823)   0.048 (0.0456)
Foba        0.043 (0.0437)   0.049 (0.0511)   0.085 (0.0690)
SpAM        0.553 (0.1864)   0.157 (0.1232)   0.095 (0.0754)
AFR         0.732 (0.1234)   0.126 (0.0688)   0.192 (0.0679)
GFR         0.967 (0.0365)   0.708 (0.1453)   0.171 (0.1067)
The variable selection performance of the different methods in Table 1 closely mirrors their estimation performance. We observe that, as the correlation parameter $t$ becomes larger, the performance
of all methods decreases, but SpAM is the most sensitive to increased correlation. In all models, the
performance of SpAM can decrease by more than 70% for larger $t$; in contrast, AFR and GFR are
more robust to increased correlation between covariates. Another interesting observation concerns
Model 4. From the previous discussion, AFR achieves better estimation performance on this model;
however, when comparing variable selection performance, GFR is the best. This
suggests that for nonparametric inference, the goals of estimation consistency and variable selection
consistency may not always be coherent; some tradeoffs might be needed to balance them.
6.2 The Real Data
In this subsection, we compare the five methods on three real datasets: Boston Housing, AutoMPG, and
Ionosphere (all available from the UCI Machine Learning Repository: http://archive.ics.uci.edu/ml).
Boston Housing contains 556 data points with 13 features; AutoMPG has 392 data points (we delete
those with missing values) with 7 features; and Ionosphere has 351 data points with 34 features and a
binary output. We treat Ionosphere as a regression problem although the response is binary. We run
10 rounds of 5-fold cross validation on each dataset and plot the mean and standard deviation of MSE
versus different sparsity levels in Figure 4.
[Figure 4: Performance of the different algorithms on real datasets: CV error versus sparsity level, one panel each for Boston Housing, AutoMPG, and Ionosphere, comparing GFR, AFR, SpAM, Foba, and LASSO.]
From Figure 4, since all the error bars are tiny, we deem all the results significant. On the Boston
Housing and AutoMPG datasets, the generalization performance of AFR and GFR is clearly better than that of LASSO, Foba, and SpAM. For all these datasets, if we prefer very sparse models, the
performance of the greedy methods is much better than that of the convex regularization methods, due
to the much smaller induced bias. On the Ionosphere data, we only need to run GFR up to 15
selected variables, since the generalization performance with 15 variables is already worse than the
null model, due to the curse of dimensionality. Both AFR and GFR achieve their best performance on
this dataset with no more than 10 variables included, while SpAM achieves its best
CV score with 25 variables. However, this is not to say that the true model is not sparse. The main
reason that SpAM can achieve good generalization performance with many variables included is
its regularization effect. We think the true model is sparse but not additive. A similar
trend among the different methods also appeared in Model 4 of the synthetic datasets.
7 Conclusions and Discussions
We presented two new greedy algorithms for nonparametric regression with either additive mean
functions or general multivariate regression functions. Both methods utilize the iterative forward
stepwise strategy, which guarantees the model inference is always conducted in low dimensions in
each iteration. These algorithms are very easy to implement and have good empirical performance
on both simulated and real datasets.
One thing worth noting: people sometimes criticize forward greedy algorithms because they
never have the chance to correct errors made in the early steps. This is especially true for
high dimensional linear models, which motivated adaptive forward-backward
procedures such as Foba [22]. We addressed a similar question: does a forward-backward procedure also help in the nonparametric setting? AFR and GFR can be trivially extended to
forward-backward procedures in the same way as in [22]. We conducted a comparative study to
see whether the backward steps help or not. However, the backward step happens very rarely, and
the empirical performance is almost the same as that of the purely forward algorithm. This is very different
from the linear model case, where the backward step can be crucial. In summary, in the nonparametric setting, the backward ingredient costs much more computation for very tiny
performance improvement. We will investigate this phenomenon more in the near future.
A very recent research strand is to learn nonlinear models with the multiple kernel learning machinery
[1, 2]; another direction for future work is to compare our methods with the multiple kernel learning approach
from both theoretical and computational perspectives.
Acknowledgements
We thank John Lafferty, Larry Wasserman, Pradeep Ravikumar, and Jamie Carbonell for very helpful discussions on this work. This research was supported in part by NSF grant CCF-0625879 and a
Google Fellowship to Han Liu.
References
[1] Francis Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine
Learning Research, 8:1179-1225, 2008.
[2] Francis Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In
Advances in Neural Information Processing Systems 21. MIT Press, 2008.
[3] Andrew R. Barron, Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore. Approximation
and learning by greedy algorithms. The Annals of Statistics, 36:64-94, 2008.
[4] Peter Bühlmann and Bin Yu. Sparse boosting. Journal of Machine Learning Research, 7:1001-1024, 2006.
[5] Andreas Buja, Trevor Hastie, and Robert Tibshirani. Linear smoothers and additive models.
The Annals of Statistics, 17:453-510, 1989.
[6] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is
much larger than n. The Annals of Statistics, 35:2313-2351, 2007.
[7] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by
basis pursuit. SIAM Journal on Scientific and Statistical Computing, 20:33-61, 1998.
[8] Jianqing Fan and Irène Gijbels. Local Polynomial Modelling and Its Applications. Chapman
and Hall, 1996.
[9] Jerome H. Friedman. Multivariate adaptive regression splines. The Annals of Statistics, 19:1-67, 1991.
[10] Trevor Hastie and Robert Tibshirani. Generalized Additive Models. Chapman & Hall Ltd.,
1999.
[11] John Lafferty and Larry Wasserman. Rodeo: Sparse, greedy nonparametric regression. The
Annals of Statistics, 36(1):28-63, 2008.
[12] Yi Lin and Hao Helen Zhang. Component selection and smoothing in multivariate nonparametric regression. The Annals of Statistics, 34(5):2272-2297, 2006.
[13] Han Liu and Jian Zhang. On the estimation consistency of the group lasso and its applications.
Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics,
2009.
[14] S. Mallat and Z. Zhang. Matching pursuit with time-frequency dictionaries. IEEE Transactions
on Signal Processing, 41:3397-3415, 1993.
[15] Lukas Meier, Sara van de Geer, and Peter Bühlmann. High-dimensional additive modelling.
The Annals of Statistics (to appear), 2009.
[16] Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models.
Journal of the Royal Statistical Society, Series B, Methodological, 2009. To appear.
[17] Pradeep Ravikumar, Han Liu, John Lafferty, and Larry Wasserman. SpAM: Sparse additive
models. In Advances in Neural Information Processing Systems 20, 2007.
[18] B. W. Silverman. Spline smoothing: The equivalent variable kernel method. The Annals of
Statistics, 12:898-916, 1984.
[19] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society, Series B, Methodological, 58:267-288, 1996.
[20] Joel A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans.
Inform. Theory, 50(10):2231-2241, October 2004.
[21] Grace Wahba. Spline Models for Observational Data. SIAM [Society for Industrial and Applied
Mathematics], 1990.
[22] Tong Zhang. Adaptive forward-backward greedy algorithm for learning sparse representations.
Technical report, Rutgers University, 2008.
[23] Tong Zhang. On the consistency of feature selection using greedy least squares regression.
Journal of Machine Learning Research, 10:555-568, 2009.
Lg DEPTH ESTIMATION AND RIPPLE FIRE
CHARACTERIZATION USING
ARTIFICIAL NEURAL NETWORKS
John L. Perry and Douglas R. Baumgardt
ENSCO, Inc.
Signal Analysis and Systems Division
5400 Port Royal Road
Springfield, Virginia 22151
(703) 321-9000, [email protected]
Abstract
This study has demonstrated how artificial neural networks (ANNs) can
be used to characterize seismic sources using high-frequency regional
seismic data. We have taken the novel approach of using ANNs as a
research tool for obtaining seismic source information, specifically
depth of focus for earthquakes and ripple-fire characteristics for
economic blasts, rather than as just a feature classifier between
earthquake and explosion populations. Overall, we have found that
ANNs have potential applications to seismic event characterization and
identification, beyond just as a feature classifier. In future studies, these
techniques should be applied to actual data of regional seismic events
recorded at the new regional seismic arrays. The results of this study
indicate that an ANN should be evaluated as part of an operational
seismic event identification system.
1 INTRODUCTION
1.1 NEURAL NETWORKS FOR SEISMIC SOURCE ANALYSIS
In this study, we have explored the application of artificial neural networks (ANNs) for
the characterization of seismic sources for the purpose of distinguishing between
explosions and earthquakes. ANNs have usually been used as pattern matching
algorithms, and recent studies have applied ANNs to standard classification between
classes of earthquakes and explosions using waveform features (Dowla, et al., 1989),
(Dysart and Pulli, 1990). However, in considering the current state-of-the-art in seismic
event identification, we believe the most challenging problem is not to develop a superior
classification method, but rather, to have a better understanding of the physics of seismic
source and regional signal propagation.
Our approach to the problem has been to use ANN technology as a research tool for
obtaining a better understanding of the phenomenology behind regional discrimination,
with emphasis on high-frequency regional array data, as well as using ANNs as a pattern
classifier. We have explored two applications of ANNs to seismic source
characterization: (1) the use of ANNs for depth characterization and (2) the recognition of
ripple-firing effects in economic explosions.
In the first study, we explored the possible use of the Lg cross-coherence matrix,
measured at a regional array, as a "hidden discriminant" for event depth of focus. In the
second study, we experimented with applying ANNs to the recognition of ripple-fire
effects in the spectra of regional phases. Moreover, we also investigated how a small
(around 5 kt yield) possibly decoupled nuclear explosion, detonated as part of a ripple-fire
sequence, would affect the spectral modulations observed at regional distances and how
these effects could be identified by the ANN.
1.2 ANN DESCRIPTION
MLP Architecture:
The ANN that we used was a multilayer perceptron (MLP)
architecture with a backpropagation training algorithm (Rumelhart, et al, 1986). The
input layer is fully connected to the hidden layer, which is fully connected to the output
layer. There are no connections within an individual layer. Each node communicates
with another node through a weighted connection. Associated with each connection is a
weight connecting input node to hidden node, and a weight connecting hidden node to
output node. The output, or "activation level," of a particular node is defined as the linear
weighted sum of all its inputs. For an MLP, a sigmoidal transformation is applied to
this weighted sum. Two layers of our network have activation levels.
MLP Training: The MLP uses a backpropagation training algorithm which employs
an iterating process where an output error signal is propagated back through the network
and used to modify weight values. Training involves presenting sweeps of input patterns
to the network and backpropagating the error until it is minimized. It is the weight
values that represent a trained network and which can be used in the
recognition/classification phase.
MLP Recognition: Recognition, on the other hand, involves presenting a pattern to
a trained network and propagating node activation levels uni-directionally from the input
layer, through the hidden layer(s), to the output layer, and then selecting the class
corresponding to the highest output (activation) signal.
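As a concrete companion to the description above, the following is a minimal NumPy sketch of a one-hidden-layer perceptron with sigmoidal activations, a single backpropagation update, and recognition by highest output activation. The layer sizes, weight initialization, and learning rate are illustrative assumptions, not values taken from this study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 1620, 24, 2                 # illustrative layer sizes
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden-to-output weights
lr = 0.2                                         # illustrative learning rate

def forward(x):
    h = sigmoid(x @ W1)                          # hidden activation levels
    y = sigmoid(h @ W2)                          # output activation levels
    return h, y

def backprop_step(x, target):
    """One sweep: propagate the output error back and modify the weights."""
    global W1, W2
    h, y = forward(x)
    delta_out = (y - target) * y * (1.0 - y)     # output error signal
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return 0.5 * np.sum((y - target) ** 2)       # squared error for this pattern

# recognition: propagate forward and select the class with the highest output
x = rng.normal(size=n_in)
_, y = forward(x)
predicted_class = int(np.argmax(y))
```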
2 Lg DEPTH ESTIMATION
In theory, the Lg phase, which is often the largest regional phase on the seismogram,
should provide depth information because Lg results from the superposition of numerous
normal modes in the crust, whose excitation is highly depth dependent. Some studies
have shown that Lg amplitudes do depend on depth (Der and Baumgardt, 1989). However,
the precise dependency of Lg amplitude on depth has been hard to establish because other
effects in the crustal model, such as anelastic attenuation, can also affect the Lg wave
amplitude.
In this study, we have considered if the Lg coherence, measured across a regional array,
might show depth dependency. This idea is based on the fact that all the normal modes
which comprise Lg propagate at different phase velocities. For multilayered media, the
normal modes will have frequency-dependent phase velocities because of dispersion. Our
method for studying this dependency is a neural network implementation of a technique,
called matched field processing, which has been used in underwater acoustics for source
water-depth estimation (Bucker, 1976), (Baggeroer, et al., 1988). This method consists of
computing the spectral matrix of an emitted signal, in our case, Lg, and comparing it
against the same spectral matrix for master events at different depths. In the past, various
optimal methods have been developed for the matching process. In our study, we have
investigated using a neural network to accomplish the matching.
2.1 SPECTRAL MATRIX CALCULATION AND MATCHED FIELD
PROCESSING
The following is a description of how the spectral matrix is computed. First, the
synthetic seismograms for each of the nine elements of the hypothetical array are Fourier
transformed in some time window. If S_i(ω) is the Fourier transform of a time window
for the i-th channel, then the spectral matrix is written as H_ij(ω) = S_i(ω) S_j*(ω), where
S_i(ω) = A_i e^(jφ_i(ω)), the index j is the complex unit, φ_i is the phase angle, and
the * represents the complex conjugate. The elements a_ik of the spectral matrix can be
written as

a_ik(ω) = A_i A_k e^(j[φ_i(ω) - φ_k(ω)]),

where the exponential phase shift term is

φ_i(ω) - φ_k(ω) = -(ω / c_n(ω)) (x_i - x_k) = -ω τ_ik(ω).

c_n(ω) represents the phase velocity for mode n, which is a function of frequency because
of dispersion, x_i - x_k is the spatial separation of the i-th and k-th channels of the
array, and τ_ik(ω) is the time shift of mode n at frequency ω across the two channels.
The products of the synthetic eigenfunctions, A_i A_k, and thus the spectral matrix terms,
are functions of source depth and model parameters.
The spectral matrix H_ij(ω) can be computed for an entire synthetic waveform or for a
window on a part of the waveform. The elements of the spectral matrix can be
normalized by inter- or intra-window normalization so that its values range from 0 to 1.
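A minimal sketch of this computation is given below; the channel count, window length, and sampling rate are illustrative assumptions, and the normalization shown is one simple reading of the intra-window scaling described above.

```python
import numpy as np

def spectral_matrix(windows, fs):
    """Compute H_ij(w) = S_i(w) * conj(S_j(w)) for a windowed array recording.

    windows: (n_channels, n_samples) array, one time window per channel.
    fs: sampling frequency in Hz.
    Returns H of shape (n_freqs, n_channels, n_channels) and the frequencies.
    """
    S = np.fft.rfft(windows, axis=1)                    # S_i(w) per channel
    freqs = np.fft.rfftfreq(windows.shape[1], d=1.0 / fs)
    H = np.einsum('if,jf->fij', S, np.conj(S))          # outer product per frequency
    return H, freqs

def normalize_intrawindow(H):
    """Scale the magnitudes of one window's spectral matrix into [0, 1]."""
    mag = np.abs(H)
    return mag / mag.max()

# illustrative usage on synthetic nine-element array data
rng = np.random.default_rng(0)
windows = rng.normal(size=(9, 256))                     # nine channels, 256 samples
H, freqs = spectral_matrix(windows, fs=40.0)
H_unit = normalize_intrawindow(H)
```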
2.2 ANN - MATCHED FIELD DEPTH ESTIMATION
Two different depth studies were performed during this effort. The first study evaluated
using the ANN to classify deep (greater than 4 kilometers) vs. shallow (less than 4
kilometers) seismic events. The number of input nodes equaled the number of points in
the spectral matrix which was 1620 (36 data points x 45 spectral elements after
smoothing). The number of output nodes was dependent on the type of classification we
wanted to perform. For the shallow-deep discrimination, we only required two output
nodes, one for each class. Training the ANN involved compiling a training (exemplar)
set of spectral matrices for various shallow and deep events and then presenting the
training set to the ANN.
In the second study, we investigated if the ANN could be used to classify seismic events
at different depths. Again, we used five windows on the Lg phase and implemented the
interwindow and intrawindow normalization procedure. The second network was trained
with a seven-element depth vector, whose elements represent the depths of 1, 3, 6, 9, 12,
16, and 20 kilometers.
3 RIPPLE-FIRE CHARACTERIZATION
In this study, we wanted to determine if spectral modulations could be recognized by the
neural network and if they could be attached to concepts relating to the source parameters
of ripple-fired events. Previous studies have demonstrated how such patterns could be
found by looking for time-independent spectral modulations (Baumgardt and Ziegler,
1989), (Hedlin, et al., 1990). In this study, we assumed such a pattern has been found and
that the input to the ANN is one of the time-independent spectra. An additional issue we
considered was whether it would be possible to hide a nuclear explosion in the ripple-fire
pattern, and whether or not such a pattern might be recognizable by an ANN. In this
study, as in the previous depth characterization study, we relied entirely on simulated data
for training and test.
3.1 ANN - RIPPLE-FIRE CHARACTERIZATION
We performed two ripple-fired studies which were designed to extract different parameters
from the ripple-fired events. The two studies characterized the following parameters: 1)
time delay (Experiment A), and 2) normal vs. anomalous (Experiment B). The purpose
of the time delay study was to estimate the time delay between explosions irrespective of
the number of explosions. The goal of the second study was to determine if the ANN
could extract a "normal" ripple-fired explosion from a simulated nuclear explosion buried
in a ripple-fired event.
The input nodes to all three networks consisted of the elements of the seismic spectra
which had 256 data points, covering the frequency range of 0 to 20 Hz. All weights were
initialized to random values in the range of [-0.5, 0.5] and all networks had their
momentum term set to 0.9. The number of hidden units, learning rate, and number of
output nodes varied between each experiment.
In Experiment A, the number of hidden units was 24, and we used a learning rate of 0.2.
We used seven output nodes to represent seven different classes with time delays of 37.5,
62.5, 87.5, 112.5, 137.5, 162.5, and 187.5 ms. These delay times are the centers of the
following delay time bins: 25-50 ms, 50-75 ms, 75-100 ms, 100-125 ms, 125-150 ms,
150-175 ms, and 175-200 ms. We used five examples of each class for training derived
by varying the time delay by ±5 msec. Training involved presenting the five exemplars
for each class to the ANN until the squared error approached zero, as we did in the depth
discrimination study. The ANN was trained to return a high activation level in the bin
closest to the delay time of the ripple-fire.
In Experiment B, we wanted to determine if the ANN could discern between normal and
anomalous ripple-fire patterns. The ANN was trained with 50 hidden units and a learning
rate of 0.1. There were 36 exemplars of each class, which resulted from all combinations
of six time delays of 5, 6, 7, 8, 9, and 10 ms between individual shots and six time
delays of 5, 6, 7, 8, 9, and 10 ms between rows of shots. Each time delay was also
varied ±1.0 ms to simulate the effect of errors in the blasting delays.
Two output nodes were defined which represent anomalous or normal ripple-fire. The
normal ripple-fire class represented all the simulations done for the triangular pattern. We
assumed each shot had a yield of 1000 kg. The anomalous class were all the simulations
for when the last row of 10 shots was replaced with a single large explosion of 10,000
kg. The ANN was then trained to produce a high activation level for either of these
classes depending on which kind of event it was presented. The effect of the single large
explosion signal was to wash out the scalloping pattern produced by the ripple-fired
explosions. We trained the ANN with normal ripple-fired patterns, with no embedded
nuclear explosions, and anomalous patterns, with an embedded nuclear explosion.
4 RESULTS
4.1 RESULTS OF DEPTH STUDY
In our first study, we wanted to determine if the network could learn the simple concepts
of shallow and deep from Lg synthetics when presented with only a small number of
exemplar patterns. We presented the ANN with four depths, 1, 2, 12, and 20 km, and
trained the network to recognize the first two as shallow and the second two as deep. We
then presented the network with the rest of the synthetics, including synthetics for depths
the ANN had not seen in training.
The results of the shallow-deep discrimination study are shown in Table 1. The table
shows the results for both the interwindow and intrawindow normalization procedures.
The test set used to generate these results were also synthetically generated events that
were either less than 4 km (shallow) or greater than 4 km (deep). Our criteria for a correct
match was if the correct output node had an activation level that was 0.4 or more above
the other output node's activation. This is a very conservative threshold criteria, which is
evident from the number of undecided values. However, the results do indicate that the
percent of incorrect classifications was only 5.0% for the intrawindow case and 8.3% for
the interwindow case. The percent of correct classification (PCC) for the intrawindow
case was 50% and the PCC for the interwindow case was 58.3%. The network appeared
to be well trained, relative to their squared error values for this study. Using a less
conservative correct match criteria, where the correct output node only had to be larger
than the other output node's activation, the PCC was 88.3% for the intrawindow case and
93.3% for the interwindow case.
[Table 1: Results of ANN for Shallow-Deep Discrimination — correct, incorrect, and undecided classification counts by event depth, under the intra-window and inter-window normalization procedures.]
4.2 RESULTS OF THE RIPPLE-FIRED STUDY
Linear Shot Patterns (Experiment A)
Table 2 summarizes all the results for the time-delay ripple-fired classification study
performed during Experiment A. The table shows both two-shot training and two- and
three-shot training cases. The test set for both cases were spectra that had time delays that
were in a ±5 ms range of the target time delay pattern. We set two criteria for PCC for
the two-shot case. The first was that the activation level for the correct output node be
larger than the activation levels of the other output nodes. This produced a PCC of
77.7%, with a 22.2% error rate and no undecided responses. All of the errors resulted
from attempting to use the ANN to learn time delays from a three-shot pattern where the
network was only trained on two-shot events. The second criterion was more
conservative and required that the activation level of the correct output node be at least
0.5 above those of the other output nodes. This gave a PCC of 68.2% and an error
percentage of 4.5%, although the number of undecided responses increased to 27.2%.
Again, all the errors resulted from expecting the ANN to generalize to three-shot events
from only being trained with two-shot patterns. Finally, the results for the two- and
three-shot training case were much more impressive. Using both threshold criteria, the
ANN achieved a PCC of 100%.
[Table 2: Results of ANN for Time-Delay Ripple-Fired Discrimination — correct, incorrect, and undecided counts for the Case A, Case B, and three-shot test sets under threshold criteria of 0.0 and 0.5, shown for a network trained with a two-shot pattern and for a network trained with two- and three-shot patterns.]
Triangular Shot Patterns: Normal Versus Anomalous (Experiment B)
Table 3 depicts the results of Experiment B for the normal vs. anomalous study. The
threshold criterion for the target output node compared to the other output nodes was 0.4.
Again, the test set consisted of time delays that were within ±5 ms of the target time
delay pattern. The PCC was 69.4%, the error percentage was 2.7%, and the percentage of
undecided responses was 27.7%. As evident from the table, the majority of undecided
responses were generated from attempting to classify the anomalous event.
Threshold Criterion: 0.4

Test Set      Correct    Incorrect    Undecided
Normal           31           1            4
Anomalous        19           1           16
Total            50           2           20

Table 3: Results of ANN for Normal vs. Anomalous Ripple-Fired Discrimination.
5 CONCLUSIONS
This study has shown that ANNs can be used to characterize seismic waveform patterns
for the purpose of characterizing depth of focus, from Lg spectral matrices, and for
recognizing ripple-fire patterns from spectral modulations. However, we were only able
to analyze the results for simulated input data. In future studies, we intend to use real data
as input.
We have demonstrated that events can be classed as shallow or deep on the basis of the Lg
spectral matrix and that the ANN provided a convenient and robust methodology for
matching spectral matrices. The fact that we obtained nearly the same recognition
performance for interwindow and intrawindow normalizations shows that the Lg spectral
matrix does in fact contain significant information about the depth of focus of a seismic
event, at least for theoretically derived synthetic cases.
The results for the ripple-fire recognition study were very encouraging. We found that
neural networks could easily be trained to recognize many different ripple-fire patterns.
For a given blasting region, a neural network could be trained to recognize the usual,
routine ripple-fire patterns generally used in the region. We have shown that it should be
possible to identify unusual or anomalous ripple-fire patterns due to attempts to include a
large decoupled nuclear explosion in with an ordinary ripple-fire sequence.
References
Baggeroer, A.M., W.A. Kuperman, and H. Schmidt (1988). Matched field processing:
source localization in correlated noise as optimum parameter estimation, J. Acoust. Soc.
Am., 83, 571-587.
Baumgardt, D.R. and K.A. Ziegler (1989). Automatic recognition of economic and
underwater blasts using regional array data. Unpublished report to Science Applications
Incorporated, 11-880085-51.
Bucker, H.P. (1976). Use of calculated sound fields and matched-field detection to locate
sound sources in shallow water. J. Acoust. Soc. Am., 59, 368-373.
Der, Z.A. and D.R. Baumgardt (1989). Effect of source depth on the Lg phase,
DARPA/AFTAC Research Review, November 1989.
Dowla, F.U., S.R. Taylor, and R.W. Anderson (1989). Seismic discrimination with
artificial neural networks: preliminary results with regional spectral data, UCRL-102310,
Lawrence Livermore National Laboratory, Livermore, CA.
Dysart, P.S. and J.J. Pulli (1990). Regional seismic event classification at the NORESS
array: seismological measurements and the use of trained neural networks, abstract in
Program, Symposium on Regional Seismic Arrays and Nuclear Test Ban Verification,
Oslo, Norway, 14-17 February 1990.
Hedlin, M.A.H., J.B. Minster, J.A. Orcutt (1990). An automatic means to discriminate
between earthquakes and quarry blasts, submitted to Bull. Seism. Soc. Am.
Rumelhart, D.E., Hinton, G.E., Williams, R.J. (1986). "Learning internal representations
by error propagation," in Parallel Distributed Processing, 1, MIT Press, Cambridge, MA.
CUR from a Sparse Optimization Viewpoint
Jacob Bien*
Department of Statistics
Stanford University
Stanford, CA 94305
Ya Xu*
Department of Statistics
Stanford University
Stanford, CA 94305
Michael W. Mahoney
Department of Mathematics
Stanford University
Stanford, CA 94305
[email protected]
[email protected]
[email protected]
Abstract
The CUR decomposition provides an approximation of a matrix X that has low
reconstruction error and that is sparse in the sense that the resulting approximation
lies in the span of only a few columns of X. In this regard, it appears to be similar to many sparse PCA methods. However, CUR takes a randomized algorithmic
approach, whereas most sparse PCA methods are framed as convex optimization
problems. In this paper, we try to understand CUR from a sparse optimization
viewpoint. We show that CUR is implicitly optimizing a sparse regression objective and, furthermore, cannot be directly cast as a sparse PCA method. We also
observe that the sparsity attained by CUR possesses an interesting structure, which
leads us to formulate a sparse PCA method that achieves a CUR-like sparsity.
1 Introduction
CUR decompositions are a recently-popular class of randomized algorithms that approximate a data
matrix X ∈ R^{n×p} by using only a small number of actual columns of X [12, 4]. CUR decompositions are often described as SVD-like low-rank decompositions that have the additional advantage of
being easily interpretable to domain scientists. The motivation to produce a more interpretable low-rank decomposition is also shared by sparse PCA (SPCA) methods, which are optimization-based
procedures that have been of interest recently in statistics and machine learning.
Although CUR and SPCA methods start with similar motivations, they proceed very differently. For
example, most CUR methods have been randomized, and they take a purely algorithmic approach.
By contrast, most SPCA methods start with a combinatorial optimization problem, and they then
solve a relaxation of this problem. Thus far, it has not been clear to researchers how the CUR and
SPCA approaches are related. It is the purpose of this paper to understand CUR decompositions
from a sparse optimization viewpoint, thereby elucidating the connection between CUR decompositions and the SPCA class of sparse optimization methods.
To do so, we begin by putting forth a combinatorial optimization problem (see (6) below) which
CUR is implicitly approximately optimizing. This formulation will highlight two interesting features
of CUR: first, CUR attains a distinctive pattern of sparsity, which has practical implications from
the SPCA viewpoint; and second, CUR is implicitly optimizing a regression-type objective. These
two observations then lead to the three main contributions of this paper: (a) first, we formulate a
non-randomized optimization-based version of CUR (see Problem 1: GL-REG in Section 3) that is
based on a convex relaxation of the CUR combinatorial optimization problem; (b) second, we show
that, in contrast to the original PCA-based motivation for CUR, CUR's implicit objective cannot
be directly expressed in terms of a PCA-type objective (see Theorem 3 in Section 4); and (c) third,
we propose an SPCA approach (see Problem 2: GL-SPCA in Section 5) that achieves the sparsity
structure of CUR within the PCA framework. We also provide a brief empirical evaluation of our
two proposed objectives. While our proposed GL-REG and GL-SPCA methods are promising in
and of themselves, our purpose in this paper is not to explore them as alternatives to CUR; instead,
our goal is to use them to help clarify the connection between CUR and SPCA methods.
*Jacob Bien and Ya Xu contributed equally.
We conclude this introduction with some remarks on notation. Given a matrix A, we use A_{(i)} to
denote its ith row (as a row-vector) and A^{(i)} its ith column. Similarly, given a set of indices I,
A_I and A^I denote the submatrices of A containing only these I rows and columns, respectively.
Finally, we let L_col(A) denote the column space of A.
2 Background
In this section, we provide a brief background on CUR and SPCA methods, with a particular emphasis on topics to which we will return in subsequent sections. Before doing so, recall that, given
an input matrix X, Principal Component Analysis (PCA) seeks the k-dimensional hyperplane with
the lowest reconstruction error. That is, it computes a p × k orthogonal matrix W that minimizes

ERR(W) = ||X - XWW^T||_F.    (1)

Writing the SVD of X as UΣV^T, the minimizer of (1) is given by V_k, the first k columns of V. In
the data analysis setting, each column of V provides a particular linear combination of the columns
of X. These linear combinations are often thought of as latent factors. In many applications, interpreting such factors is made much easier if they are comprised of only a small number of actual
columns of X, which is equivalent to Vk only having a small number of nonzero elements.
2.1 CUR matrix decompositions
CUR decompositions were proposed by Drineas and Mahoney [12, 4] to provide a low-rank approximation to a data matrix X by using only a small number of actual columns and/or rows of X. Fast
randomized variants [3], deterministic variants [5], Nyström-based variants [1, 11], and heuristic
variants [17] have also been considered. Observing that the best rank-k approximation to the SVD
provides the best set of k linear combinations of all the columns, one can ask for the best set of k
actual columns. Most formalizations of "best" lead to intractable combinatorial optimization problems [12], but one can take advantage of oversampling (choosing slightly more than k columns) and
randomness as computational resources to obtain strong quality-of-approximation guarantees.
Theorem 1 (Relative-error CUR [12]). Given an arbitrary matrix X ∈ R^{n×p} and an integer k,
there exists a randomized algorithm that chooses a random subset I ⊆ {1, . . . , p} of size c =
O(k log k log(1/δ)/ε^2) such that X^I, the n × c submatrix containing those c columns of X, satisfies

||X - X^I X^{I+} X||_F = min_{B ∈ R^{c×p}} ||X - X^I B||_F ≤ (1 + ε) ||X - X_k||_F,    (2)

with probability at least 1 - δ, where X_k is the best rank-k approximation to X.
The algorithm referred to by Theorem 1 is very simple:
1) Compute the normalized statistical leverage scores, defined below in (3).
2) Form I by randomly sampling c columns of X, using these normalized statistical leverage scores
as an importance sampling distribution.
3) Return the n × c matrix X^I consisting of these selected columns.
The key issue here is the choice of the importance sampling distribution. Let the p × k matrix V_k
be the top-k right singular vectors of X. Then the normalized statistical leverage scores are

π_i = (1/k) ||V_{k(i)}||_2^2,    (3)

for all i = 1, . . . , p, where V_{k(i)} denotes the i-th row of V_k. These scores, proportional to the
Euclidean norms of the rows of the top-k right singular vectors, define the relevant nonuniformity
structure to be used to identify good (in the sense of Theorem 1) columns. In addition, these scores
are proportional to the diagonal elements of the projection matrix onto the top-k right singular
subspace. Thus, they generalize the so-called hat matrix [8], and they have a natural interpretation
as capturing the "statistical leverage" or "influence" of a given column on the best low-rank fit of
the data matrix [8, 12].
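The three-step procedure above translates almost directly into code. The following is a minimal NumPy sketch of leverage-score column sampling; drawing the c columns with replacement is an assumption of this illustration, and the final line evaluates the reconstruction error appearing in Theorem 1.

```python
import numpy as np

def cur_columns(X, k, c, seed=0):
    """Select c columns of X by sampling from the leverage scores in (3)."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vk = Vt[:k].T                          # p x k, top-k right singular vectors
    pi = np.sum(Vk ** 2, axis=1) / k       # normalized statistical leverage scores
    I = rng.choice(X.shape[1], size=c, replace=True, p=pi / pi.sum())
    return I

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))
I = cur_columns(X, k=5, c=20)
XI = X[:, I]
err = np.linalg.norm(X - XI @ np.linalg.pinv(XI) @ X, 'fro')   # ||X - X^I X^{I+} X||_F
```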
2.2 Regularized sparse PCA methods
SPCA methods attempt to make PCA easier to interpret for domain experts by finding sparse approximations to the columns of V. (Footnote 1: For SPCA, we only consider sparsity in the right singular vectors V and not in the left singular vectors U. This is similar to considering only the choice of columns and not of both columns and rows in CUR.) There are several variants of SPCA. For example, Jolliffe et al. [10] and Witten et al. [19] use the maximum variance interpretation of PCA and provide an optimization problem which explicitly encourages sparsity in V based on a Lasso constraint [18]. d'Aspremont et al. [2] take a similar approach, but instead formulate the problem as an SDP.
Zou et al. [21] use the minimum reconstruction error interpretation of PCA to suggest a different
approach to the SPCA problem; this formulation will be most relevant to our present purpose. They
begin by formulating PCA as the solution to a regression-type problem.
Theorem 2 (Zou et al. [21]). Given an arbitrary matrix X ∈ R^{n×p} and an integer k, let A and W
be p × k matrices. Then, for any λ > 0, let

(A*, V_k*) = argmin_{A,W ∈ R^{p×k}} ||X - XWA^T||_F^2 + λ||W||_F^2   s.t. A^T A = I_k.    (4)

Then, the minimizing matrices A* and V_k* satisfy A*^{(i)} = s_i V^{(i)} and
V_k*^{(i)} = s_i (σ_{ii}^2 / (σ_{ii}^2 + λ)) V^{(i)}, where s_i = 1 or -1.
That is, up to signs, A* consists of the top-k right singular vectors of X, and V_k* consists of
those same vectors "shrunk" by a factor depending on the corresponding singular value. Given this
regression-type characterization of PCA, Zou et al. [21] then "sparsify" the formulation by adding
an L1 penalty on W:

(A*, V_k*) = argmin_{A,W ∈ R^{p×k}} ||X - XWA^T||_F^2 + λ||W||_F^2 + λ_1 ||W||_1   s.t. A^T A = I_k,    (5)

where ||W||_1 = Σ_{ij} |W_{ij}|. This regularization tends to sparsify W element-wise, so that the
solution V_k* gives a sparse approximation of V_k.
3 Expressing CUR as an optimization problem
In this section, we present an optimization formulation of CUR. Recall, from Section 2.1, that CUR
takes a purely algorithmic approach to the problem of approximating a matrix in terms of a small
number of its columns. That is, it achieves sparsity indirectly by randomly selecting c columns, and
it does so in such a way that the reconstruction error is small with high probability (Theorem 1). By
contrast, SPCA methods are generally formulated as the exact solution to an optimization problem.
From Theorem 1, it is clear that CUR seeks a subset I of size c for which min_{B ∈ R^{c×p}} ||X - X^I B||_F
is small. In this sense, CUR can be viewed as a randomized algorithm for approximately solving the
following combinatorial optimization problem:

min_{I ⊆ {1,...,p}}  min_{B ∈ R^{c×p}} ||X - X^I B||_F   s.t. |I| ≤ c.    (6)

In words, this objective asks for the subset of c columns of X which best describes the entire matrix
X. Notice that relaxing |I| = c to |I| ≤ c does not affect the optimum. This optimization problem
is analogous to all-subsets multivariate regression [7], which is known to be NP-hard.
However, by using ideas from the optimization literature we can approximate this combinatorial
problem as a regularized regression problem that is convex. First, notice that (6) is equivalent to

min_{B ∈ R^{p×p}} ||X - XB||_F   s.t.   Σ_{i=1}^p 1{||B_{(i)}||_2 ≠ 0} ≤ c,    (7)

where we now optimize over a p × p matrix B. To see the equivalence between (6) and (7), note that
the constraint in (7) is the same as finding some subset I with |I| ≤ c such that B_{I^c} = 0.
The formulation in (7) provides a natural entry point to proposing a convex optimization approach
corresponding to CUR. First notice that (7) uses an L0 norm on the rows of B, which is not convex.
However, we can approximate the L0 constraint by a group lasso penalty, which uses a well-known
convex heuristic proposed by Yuan et al. [20] that encourages prespecified groups of parameters
to be simultaneously sparse. Thus, the combinatorial problem in (6) can be approximated by the
following convex (and thus tractable) problem:
Problem 1 (Group lasso regression: GL-REG). Given an arbitrary matrix X ∈ R^{n×p}, let B ∈ R^{p×p}
and t > 0. The GL-REG problem is to solve

B* = argmin_B ||X - XB||_F   s.t.   Σ_{i=1}^p ||B_{(i)}||_2 ≤ t,    (8)

where t is chosen to get c nonzero rows in B*.
Since the rows of B are grouped together in the penalty Σ_{i=1}^p ||B_{(i)}||_2, the row vector B_{(i)} will tend
to be either dense or entirely zero. Note also that the algorithm to solve Problem 1 is a special case
of Algorithm 1 (see below), which solves the GL-SPCA problem, to be introduced later. (Finally,
as a side remark, note that our proposed GL-REG is strikingly similar to a recently proposed method
for sparse inverse covariance estimation [6, 15].)
4 Distinguishing CUR from SPCA
Our original intention in casting CUR in the optimization framework was to understand better
whether CUR could be seen as an SPCA-type method. So far, we have established CUR's connection to regression by showing that CUR can be thought of as an approximation algorithm for the
sparse regression problem (7). In this section, we discuss the relationship between regression and
PCA, and we show that CUR cannot be directly cast as an SPCA method.
To do this, recall that regression, in particular "self" regression, finds a B ∈ R^{p×p} that minimizes

||X - XB||_F.    (9)

On the other hand, PCA-type methods find a set of directions W that minimize

ERR(W) := ||X - XWW^+||_F.    (10)
Here, unlike in (1), we do not assume that W is orthogonal, since the minimizer produced from
SPCA methods is often not required to be orthogonal (recall Section 2.2).
Clearly, with no constraints on B or W, we can trivially achieve zero reconstruction error in both
cases by taking B = I_p and W any p × p full-rank matrix. However, with additional constraints,
these two problems can be very different. It is common to consider sparsity and/or rank constraints.
We have seen in Section 3 that CUR effectively requires B to be row-sparse; in the standard PCA
setting, W is taken to be rank k (with k < p), in which case (10) is minimized by Vk and obtains
the optimal value ERR(V_k) = ||X - X_k||_F; finally, for SPCA, W is further required to be sparse.
To illustrate the difference between the reconstruction errors (9) and (10) when extra constraints
are imposed, consider the 2-dimensional toy example in Figure 1. In this example, we compare
regression with a row-sparsity constraint to PCA with both rank and sparsity constraints. With
X ∈ R^{n×2}, we plot X^{(2)} against X^{(1)} as the solid points in both plots of Figure 1. Constraining
B_{(2)} = 0 (giving row-sparsity, as with CUR methods), (9) becomes min_{B_{12}} ||X^{(2)} - X^{(1)} B_{12}||_2,
which is a simple linear regression, represented by the black thick line and minimizing the sum
of squared vertical errors as shown. The red line (left plot) shows the first principal component
direction, which minimizes ERR(W) among all rank-one matrices W. Here, ERR(W) is the sum
of squared projection distances (red dotted lines). Finally, if W is further required to be sparse in
the X(2) direction (as with SPCA methods), we get the rank-one, sparse projection represented by
the green line in Figure 1 (right). The two sets of dotted lines in each plot clearly differ, indicating
that their corresponding reconstruction errors are different as well. Since we have shown that CUR
is minimizing a regression-based objective, this toy example suggests that CUR may not in fact be
optimizing a PCA-type objective such as (10). Next, we will make this intuition more precise.
The first step to showing that CUR is an SPCA method would be to produce a matrix V_CUR for
which X^I X^{I+} X = X V_CUR V_CUR^+, i.e. to express CUR's approximation in the form of an SPCA
approximation. However, this equality implies L_col(X V_CUR V_CUR^+) ⊆ L_col(X^I), meaning that
(V_CUR)_{I^c} = 0. If such a V_CUR existed, then clearly ERR(V_CUR) = ||X - X^I X^{I+} X||_F, and so
CUR could be regarded as implicitly performing sparse PCA in the sense that (a) V_CUR is sparse;
and (b) by Theorem 1 (with high probability), ERR(V_CUR) ≤ (1 + ε) ERR(V_k). Thus, the existence
of such a V_CUR would cast CUR directly as a randomized approximation algorithm for SPCA. However, the following theorem states that unless an unrealistic constraint on X holds, there does not
exist a matrix V_CUR for which ERR(V_CUR) = ||X - X^I X^{I+} X||_F. The larger implication of this
theorem is that CUR cannot be directly viewed as an SPCA-type method.

Theorem 3. Let I ⊆ {1, . . . , p} be an index set and suppose W ∈ R^{p×p} satisfies W_{I^c} = 0. Then,

||X - XWW^+||_F > ||X - X^I X^{I+} X||_F,

unless L_col(X^{I^c}) ⊥ L_col(X^I), in which case "≥" holds.
[Figure 1: two scatter plots of X^{(2)} versus X^{(1)}; the left panel overlays the regression fit (error (9)) with the PCA direction (error (10)), and the right panel overlays the regression fit (error (9)) with the SPCA direction (error (10)).]
Figure 1: Example of the difference in reconstruction errors (9) and (10) when additional constraints
are imposed. Left: regression with row-sparsity constraint (black) compared with PCA with low-rank
constraint (red). Right: regression with row-sparsity constraint (black) compared with PCA with
low-rank and sparsity constraint (green). In both plots, the corresponding errors are represented by
the dotted lines.
Proof.

||X - XWW^+||_F^2 = ||X - X^I W_I W^+||_F^2 = ||X - X^I W_I (W_I^T W_I)^{-1} W^T||_F^2
= ||X^I - X^I W_I W_I^+||_F^2 + ||X^{I^c}||_F^2 ≥ ||X^{I^c}||_F^2
= ||X^{I^c} - X^I X^{I+} X^{I^c}||_F^2 + ||X^I X^{I+} X^{I^c}||_F^2
= ||X - X^I X^{I+} X||_F^2 + ||X^I X^{I+} X^{I^c}||_F^2 ≥ ||X - X^I X^{I+} X||_F^2.

The last inequality is strict unless X^I X^{I+} X^{I^c} = 0.
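Theorem 3 is also easy to check numerically. The sketch below draws a random X and a random W supported only on the rows indexed by I and compares the two reconstruction errors; the dimensions and index set are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 10
X = rng.normal(size=(n, p))

I = np.arange(4)                          # index set I = {0, 1, 2, 3}
W = np.zeros((p, p))
W[I] = rng.normal(size=(len(I), p))       # rows outside I are zero: W_{I^c} = 0

err_W = np.linalg.norm(X - X @ W @ np.linalg.pinv(W), 'fro')
XI = X[:, I]
err_cur = np.linalg.norm(X - XI @ np.linalg.pinv(XI) @ X, 'fro')
assert err_W > err_cur                    # strict inequality for generic X
```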
5 CUR-type sparsity and the group lasso SPCA
Although CUR cannot be directly cast as an SPCA-type method, in this section we propose a sparse
PCA approach (which we call the group lasso SPCA or GL-SPCA) that accomplishes something
very close to CUR. Our proposal produces a V* that has rows that are entirely zero, and it is motivated by the following two observations about CUR. First, following from the definition of the
leverage scores (3), CUR chooses columns of X based on the norm of their corresponding rows of
V_k. Thus, it essentially "zeros-out" the rows of V_k with small norms (in a probabilistic sense).
Second, as we have noted in Section 4, if CUR could be expressed as a PCA method, its principal
directions matrix "V_CUR" would have p - c rows that are entirely zero, corresponding to removing
those columns of X.
Recall that Zou et al. [21] obtain a sparse V* by including in (5) an additional L1 penalty from
the optimization problem (4). Since the L1 penalty is on the entire matrix viewed as a vector,
it encourages only unstructured sparsity. To achieve the CUR-type row sparsity, we propose the
following modification of (4):
Problem 2 (Group lasso SPCA: GL-SPCA). Given an arbitrary matrix X ∈ R^{n×p} and an integer
k, let A and W be p × k matrices, and let λ, λ_1 > 0. The GL-SPCA problem is to solve

(A*, V*) = argmin_{A,W} ||X - XWA^T||_F^2 + λ||W||_F^2 + λ_1 Σ_{i=1}^p ||W_{(i)}||_2   s.t. A^T A = I_k.    (11)

Thus, the lasso penalty λ_1 ||W||_1 in (5) is replaced in (11) by a group lasso penalty
λ_1 Σ_{i=1}^p ||W_{(i)}||_2, where rows of W are grouped together so that each row of V* will tend to
be either dense or entirely zero.
Importantly, the GL-SPCA problem is not convex in W and A together; it is, however, convex in
W, and it is easy to solve in A. Thus, analogous to the treatment in Zou et al. [21], we propose
an iterative alternate-minimization algorithm to solve GL-SPCA. This is described in Algorithm 1;
and the justification of this algorithm is given in Section 7. Note that if we fix A to be I throughout,
then Algorithm 1 can be used to solve the GL-REG problem discussed in Section 3.
Algorithm 1: Iterative algorithm for solving the GL-SPCA (and GL-REG) problems.
(For the GL-REG problem, fix A = I throughout this algorithm.)
Input: Data matrix X and initial estimates for A and W
Output: Final estimates for A and W
repeat
  1. Compute the SVD of X^T XW as UDV^T and then A <- UV^T;
  2. S <- {i : ||W_{(i)}||_2 ≠ 0};
  for i ∈ S do
    3. Compute b_i = Σ_{j≠i} (X^{(j)T} X^{(i)}) W_{(j)}^T;
    if ||A^T X^T X^{(i)} - b_i||_2 ≤ λ_1/2 then
      W_{(i)}^T <- 0;
    else
      4. W_{(i)}^T <- [2 / (2||X^{(i)}||_2^2 + 2λ + λ_1/||W_{(i)}||_2)] (A^T X^T X^{(i)} - b_i);
until convergence;
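Under the reconstruction of Algorithm 1 above, a direct NumPy transcription might read as follows. The random initialization, iteration cap, and convergence tolerance are assumptions of this sketch, and passing fix_A=True keeps A fixed at the identity as described in the caption.

```python
import numpy as np

def gl_spca(X, k, lam, lam1, n_iter=200, tol=1e-6, fix_A=False, seed=0):
    """Alternating minimization for GL-SPCA (11); fix_A=True yields a GL-REG solver."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    G = X.T @ X                                  # Gram matrix X^T X
    W = rng.normal(scale=0.01, size=(p, k))      # initial estimate of W
    A = np.eye(p)[:, :k]                         # initial estimate of A
    for _ in range(n_iter):
        W_old = W.copy()
        if not fix_A:
            # Step 1: SVD of X^T X W = U D V^T, then A <- U V^T
            U, _, Vt = np.linalg.svd(G @ W, full_matrices=False)
            A = U @ Vt
        # Step 2: the active set of nonzero rows of W
        S = [i for i in range(p) if np.linalg.norm(W[i]) > 0]
        for i in S:
            # Step 3: b_i = sum over j != i of (X^(j)T X^(i)) W_(j)^T
            b = W.T @ G[:, i] - G[i, i] * W[i]
            r = A.T @ G[:, i] - b                # A^T X^T X^(i) - b_i
            if np.linalg.norm(r) <= lam1 / 2:
                W[i] = 0.0                       # zero out row i
            else:
                # Step 4: fixed-point update for a nonzero row
                W[i] = 2 * r / (2 * G[i, i] + 2 * lam
                                + lam1 / np.linalg.norm(W[i]))
        if np.linalg.norm(W - W_old) < tol:
            break
    return A, W
```

Calling gl_spca(X, k=X.shape[1], lam=0.0, lam1=2.0, fix_A=True), for example, returns a row-sparse coefficient matrix in the spirit of Problem 1, with lam1 playing the role of the penalty level corresponding to the constraint t in (8); the value 2.0 is an arbitrary illustration.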
We remark that such row-sparsity in V* can have either advantages or disadvantages. Consider, for
example, when there are a small number of informative columns in X and the rest are not important
for the task at hand [12, 14]. In such a case, we would expect that enforcing entire rows to be zero
would lead to better identification of the signal columns; and this has been empirically observed in
the application of CUR to DNA SNP analysis [14]. The unstructured V*, by contrast, would not
be able to "borrow strength" across all columns of V* to differentiate the signal columns from the
noise columns. On the other hand, requiring such structured sparsity is more restrictive and may
not be desirable. For example, in microarray analysis in which we have measured p genes on n
patients, our goal may be to find several underlying factors. Biologists have identified "pathways"
of interconnected genes [16], and it would be desirable if each sparse factor could be identified with
a different pathway (that is, a different set of genes). Requiring all factors of V* to exclude the same
p - c genes does not allow a different sparse subset of genes to be active in each factor.
We finish this section by pointing out that while most SPCA methods only enforce unstructured
zeros in V*, the idea of having a structured sparsity in the PCA context has very recently been
explored [9]. Our GL-SPCA problem falls within the broad framework of this idea.
6 Empirical Comparisons
In this section, we evaluate the performance of the four methods discussed above on both synthetic and real data. In particular, we compare the randomized CUR algorithm of Mahoney and
Drineas [12, 4] to our GL-REG (of Problem 1), and we compare the SPCA algorithm proposed
by Zou et al. [21] to our GL-SPCA (of Problem 2). We have also compared against the SPCA
algorithm of Witten et al. [19], and we found the results to be very similar to those of Zou et al.
6.1 Simulations
We first consider synthetic examples of the form X = X̂ + E, where X̂ is the underlying signal
matrix and E is a matrix of noise. In all our simulations, E has i.i.d. N(0, 1) entries, while the
signal X̂ has one of the following forms (a small generating sketch follows the list):

Case I) X̂ = [0_{n×(p-c)}; X̂*] where the n × c matrix X̂* is the nonzero part of X̂. In other words,
X̂ has c nonzero columns and does not necessarily have a low-rank structure.

Case II) X̂ = UV^T where U and V each consist of k < p orthogonal columns. In addition to
being low-rank, V has entire rows equal to zero (i.e. it is row-sparse).

Case III) X̂ = UV^T where U and V each consist of k < p orthogonal columns. Here V is
low-rank and sparse, but the sparsity is not structured (i.e. it is scattered-sparse).
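A small generating sketch for these three cases, assuming the n = 100, p = 1000, k = 10, and 20% sparsity settings reported below, is as follows; the placement of the nonzero columns and the masked construction in Case III (whose masked V is no longer exactly orthogonal) are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, c = 100, 1000, 10, 200              # 20% of columns/rows carry signal

def orth(m, k):
    Q, _ = np.linalg.qr(rng.normal(size=(m, k)))
    return Q

def case_one():                               # c nonzero columns, not low-rank
    Xhat = np.zeros((n, p))
    Xhat[:, :c] = rng.normal(size=(n, c))
    return Xhat

def case_two():                               # low-rank with row-sparse V
    U, V = orth(n, k), np.zeros((p, k))
    V[:c] = orth(c, k)                        # p - c rows of V are exactly zero
    return U @ V.T

def case_three():                             # low-rank with scattered sparsity
    U, V = orth(n, k), rng.normal(size=(p, k))
    V[rng.random(V.shape) < 0.8] = 0.0        # 80% of entries zeroed at random
    return U @ V.T

X = case_one() + rng.normal(size=(n, p))      # X = Xhat + E with E ~ N(0, 1)
```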
A successful method attains low reconstruction error of the true signal X̂ and has high precision in
identifying correctly the zeros in the underlying model. As previously discussed, the four methods
optimize for different types of reconstruction error. Thus, in comparing CUR and GL-REG, we
use the regression-type reconstruction error ERR_reg(I) = ||X̂ - X^I X^{I+} X||_F, whereas for the
comparison of SPCA and GL-SPCA, we use the PCA-type error ERR(V) = ||X̂ - XVV^+||_F.
Table 1 presents the simulation results from the three cases. All comparisons use n = 100 and
p = 1000. In Cases II and III, the signal matrix has rank k = 10. The underlying sparsity level is
20%, i.e. 80% of the entries of X̂ (Case I) and V (Cases II & III) are zeros. Note that all methods
except for GL-REG require the rank k as an input, and we always take it to be 10 even in Case I. For
easy comparison, we have tuned each method to have the correct total number of zeros. The results
are averaged over 5 trials.
              Methods    Case I            Case II           Case III
ERR_reg(I)    CUR        316.29 (0.835)    315.28 (0.797)    315.64 (0.166)
              GL-REG     316.29 (0.989)    315.28 (0.750)    315.64 (0.107)
ERR(V)        SPCA       177.92 (0.809)    44.388 (0.799)    44.995 (0.792)
              GL-SPCA    141.85 (0.998)    37.310 (0.767)    45.500 (0.804)

Table 1: Simulation results: the reconstruction errors and the percentages of correctly identified
zeros (in parentheses).
We notice in Table 1 that the two regression-type methods CUR and GL-REG have very similar
performance. As we would expect, since CUR only uses information in the top k singular vectors, it
does slightly worse than GL-REG in terms of precision when the underlying signal is not low-rank
(Case I). In addition, both methods perform poorly if the sparsity is not structured as in Case III. The
two PCA-type methods perform similarly as well. Again, the group lasso method seems to work
better in Case I. We note that the precisions reported here are based on element-wise sparsity; if we
were measuring row-sparsity, methods like SPCA would perform poorly since they do not encourage
entire rows to be zero.
6.2 Microarray example
We next consider a microarray dataset of soft tissue tumors studied by Nielsen et al. [13]. Mahoney and Drineas [12] apply CUR to this dataset of n = 31 tissue samples and p = 5520 genes.
As with the simulation results, we use two sets of comparisons: we compare CUR with GL-REG,
and we compare SPCA with GL-SPCA. Since we do not observe the underlying truth X̂, we take
ERR_reg(I) = ||X - X^I X^{I+} X||_F and ERR(V) = ||X - XVV^+||_F. Also, since we do not observe
the true sparsity, we cannot measure the precision as we do in Table 1. The left plot in Figure 2
shows ERR_reg(I) as a function of |I|. We see that CUR and GL-REG perform similarly. (However,
since CUR is a randomized algorithm, on every run it gives a different result. From a practical
standpoint, this feature of CUR can be disconcerting to biologists wanting to report a single set of
important genes. In this light, GL-REG may be thought of as an attractive non-randomized alternative to CUR.) The right plot of Figure 2 compares GL-SPCA to SPCA (specifically, Zou et al. [21]).
Since SPCA does not explicitly enforce row-sparsity, for a gene to be unused in the model requires
all of the (k = 4) columns of V* to exclude it. This likely explains the advantage of GL-SPCA over
SPCA seen in the figure.
7 Justification of Algorithm 1
The algorithm alternates between minimizing with respect to A and B until convergence.

Solving for A given B: If B is fixed, then the regularization penalty in (11) can be ignored, in
which case the optimization problem becomes min_A ||X - XBA^T||_F^2 subject to A^T A = I. This
problem was considered by Zou et al. [21], who showed that the solution is obtained by computing
the SVD of (X^T X)B as (X^T X)B = UDV^T and then setting Â = UV^T. This explains Step 1 in
Algorithm 1.

Solving for B given A: If A is fixed, then (11) becomes an unconstrained convex optimization
problem in B. The subgradient equations (using that A^T A = I_k) are

2B^T X^T X^{(i)} - 2A^T X^T X^{(i)} + 2λB_{(i)}^T + λ_1 s_i = 0;   i = 1, . . . , p,    (12)
[Figure 2: two panels titled "Microarray Dataset"; the left panel plots ERR_reg(I) against the number of genes used (0 to 200) for CUR and GL-REG, and the right panel plots ERR(V) against the number of genes used (1000 to 5000) for SPCA and GL-SPCA.]
Figure 2: Left: Comparison of CUR, multiple runs, with GL-REG. Right: Comparison of GL-SPCA with SPCA (specifically, Zou et al. [21]).
where the subgradient vectors s_i = B_{(i)}^T/||B_{(i)}||_2 if B_{(i)} ≠ 0, or ||s_i||_2 ≤ 1 if B_{(i)} = 0. Let us
define b_i = Σ_{j≠i} (X^{(j)T} X^{(i)}) B_{(j)}^T = B^T X^T X^{(i)} - ||X^{(i)}||_2^2 B_{(i)}^T, so that the subgradient equations
can be written as

b_i + (||X^{(i)}||_2^2 + λ) B_{(i)}^T - A^T X^T X^{(i)} + (λ_1/2) s_i = 0.    (13)

The following claim explains Step 3 in Algorithm 1.

Claim 1. B_{(i)} = 0 if and only if ||A^T X^T X^{(i)} - b_i||_2 ≤ λ_1/2.

Proof. First, if B_{(i)} = 0, the subgradient equations (13) become b_i - A^T X^T X^{(i)} + (λ_1/2) s_i = 0.
Since ||s_i||_2 ≤ 1 if B_{(i)} = 0, we have ||A^T X^T X^{(i)} - b_i||_2 ≤ λ_1/2. To prove the other
direction, recall that B_{(i)} ≠ 0 implies s_i = B_{(i)}^T/||B_{(i)}||_2. Substituting this expression into
(13), rearranging terms, and taking the norm on both sides, we get
2||A^T X^T X^{(i)} - b_i||_2 = (2||X^{(i)}||_2^2 + 2λ + λ_1/||B_{(i)}||_2) ||B_{(i)}||_2 > λ_1.

By Claim 1, ||A^T X^T X^{(i)} - b_i||_2 > λ_1/2 implies that B_{(i)} ≠ 0, which further implies
s_i = B_{(i)}^T/||B_{(i)}||_2. Substituting into (13) gives Step 4 in Algorithm 1.
8 Conclusion
In this paper, we have elucidated several connections between two recently-popular matrix decomposition methods that adopt very different perspectives on obtaining interpretable low-rank matrix
decompositions. In doing so, we have suggested two optimization problems, GL-REG and GL-SPCA, that highlight similarities and differences between the two methods. In general, SPCA
methods obtain interpretability by modifying an existing intractable objective with a convex regularization term that encourages sparsity, and then exactly optimizing that modified objective. On
the other hand, CUR methods operate by using randomness and approximation as computational resources to optimize approximately an intractable objective, thereby implicitly incorporating a form
of regularization into the steps of the approximation algorithm. Understanding this concept of implicit regularization via approximate computation is clearly of interest more generally, in particular
for applications where the size scale of the data is expected to increase.
Acknowledgments
We would like to thank Art Owen and Robert Tibshirani for encouragement and helpful suggestions.
Jacob Bien was supported by the Urbanek Family Stanford Graduate Fellowship, and Ya Xu was
supported by the Melvin and Joan Lane Stanford Graduate Fellowship. In addition, support from
the NSF and AFOSR is gratefully acknowledged.
References
[1] M.-A. Belabbas and P. J. Wolfe. Fast low-rank approximation for covariance matrices. In Second IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, pages 293-296, 2007.
[2] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49(3):434-448, 2007.
[3] P. Drineas, R. Kannan, and M. W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 36:184-206, 2006.
[4] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30:844-881, 2008.
[5] S. A. Goreinov and E. E. Tyrtyshnikov. The maximum-volume concept in approximation by low-rank matrices. Contemporary Mathematics, 280:47-51, 2001.
[6] T. Hastie, R. Tibshirani, and J. Friedman. Applications of the lasso and grouped lasso to the estimation of sparse graphical models. Manuscript. Submitted. 2010.
[7] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, New York, 2003.
[8] D. C. Hoaglin and R. E. Welsch. The hat matrix in regression and ANOVA. The American Statistician, 32(1):17-22, 1978.
[9] R. Jenatton, G. Obozinski, and F. Bach. Structured sparse principal component analysis. Technical report. Preprint: arXiv:0909.1440 (2009).
[10] I. T. Jolliffe, N. T. Trendafilov, and M. Uddin. A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12(3):531-547, 2003.
[11] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström method. In Annual Advances in Neural Information Processing Systems 22: Proceedings of the 2009 Conference, 2009.
[12] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. Proc. Natl. Acad. Sci. USA, 106:697-702, 2009.
[13] T. Nielsen, R. B. West, S. C. Linn, O. Alter, M. A. Knowling, J. O'Connell, S. Zhu, M. Fero, G. Sherlock, J. R. Pollack, P. O. Brown, D. Botstein, and M. van de Rijn. Molecular characterisation of soft tissue tumours: a gene expression study. Lancet, 359(9314):1301-1307, 2002.
[14] P. Paschou, E. Ziv, E. G. Burchard, S. Choudhry, W. Rodriguez-Cintron, M. W. Mahoney, and P. Drineas. PCA-correlated SNPs for structure identification in worldwide human populations. PLoS Genetics, 3:1672-1686, 2007.
[15] J. Peng, P. Wang, N. Zhou, and J. Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104:735-746, 2009.
[16] A. Subramanian, P. Tamayo, V. K. Mootha, S. Mukherjee, B. L. Ebert, M. A. Gillette, A. Paulovich, S. L. Pomeroy, T. R. Golub, E. S. Lander, and J. P. Mesirov. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl. Acad. Sci. USA, 102(43):15545-15550, 2005.
[17] J. Sun, Y. Xie, H. Zhang, and C. Faloutsos. Less is more: Compact matrix decomposition for large sparse graphs. In Proceedings of the 7th SIAM International Conference on Data Mining, 2007.
[18] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1):267-288, 1996.
[19] D. M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10(3):515-534, 2009.
[20] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68(1):49-67, 2006.
[21] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):262-286, 2006.
3,191 | 3,891 | Getting lost in space: Large sample analysis of the
commute distance
Ulrike von Luxburg
Agnes Radl
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
{ulrike.luxburg,agnes.radl}@tuebingen.mpg.de
Matthias Hein
Saarland University, Saarbrücken, Germany
[email protected]
Abstract
The commute distance between two vertices in a graph is the expected time it takes
a random walk to travel from the first to the second vertex and back. We study the
behavior of the commute distance as the size of the underlying graph increases.
We prove that the commute distance converges to an expression that does not take
into account the structure of the graph at all and that is completely meaningless
as a distance function on the graph. Consequently, the use of the raw commute
distance for machine learning purposes is strongly discouraged for large graphs
and in high dimensions. As an alternative we introduce the amplified commute
distance that corrects for the undesired large sample effects.
1 Introduction
Given an undirected, weighted graph, the commute distance between two vertices u and v is defined
as the expected time it takes a random walk starting in vertex u to travel to vertex v and back to u.
As opposed to the shortest path distance, it takes into account all paths between u and v, not just the
shortest one. As a rule of thumb, the more paths connect u with v, the smaller the commute distance
becomes. As a consequence, it supposedly satisfies the following, highly desirable property:
Property (F): Vertices in the same cluster of the graph have a small commute
distance, whereas two vertices in different clusters of the graph have a ?large?
commute distance.
It is because of this property that the commute distance has become a popular choice and is widely
used, for example in clustering (Yen et al., 2005), semi-supervised learning (Zhou and Schölkopf,
2004), in social network analysis (Liben-Nowell and Kleinberg, 2003), for proximity search (Sarkar
et al., 2008), in image processing (Qiu and Hancock, 2005), for dimensionality reduction (Ham
et al., 2004), for graph embedding (Guattery, 1998, Saerens et al., 2004, Qiu and Hancock, 2006,
Wittmann et al., 2009) and even for deriving learning theoretic bounds for graph labeling (Herbster
and Pontil, 2006, Cesa-Bianchi et al., 2009). One of the main contributions of this paper is to
establish that property (F) does not hold in many relevant situations.
In this paper we study how the commute distance (up to a constant factor equivalent to the resistance
distance, see below for exact definitions) behaves when the size of the graph increases. We focus on
the case of random geometric graphs as this is most relevant to machine learning, but similar results
hold for very general classes of graphs under mild assumptions. Denoting by Hij the expected
hitting time, by Cij the commute distance between two vertices vi and vj and by di the degree of
vertex $v_i$, we prove that the hitting times and commute distances can be approximated (up to the constant $\operatorname{vol}(G)$ that denotes the volume of the graph) by
$$\frac{1}{\operatorname{vol}(G)}\,H_{ij} \approx \frac{1}{d_j} \qquad\text{and}\qquad \frac{1}{\operatorname{vol}(G)}\,C_{ij} \approx \frac{1}{d_i} + \frac{1}{d_j}.$$
The intuitive reason for this behavior is that if the graph is large, the random walk "gets lost" in the sheer size of the graph. It takes so long to travel through a substantial part of the graph that by the time the random walk comes close to its goal it has already "forgotten" where it started from.
For this reason, the hitting time Hij does not depend on the starting vertex vi any more. It only
depends on the inverse degree of the target vertex vj , which intuitively represents the likelihood that
the random walk exactly hits vj once it is in its neighborhood. In this respect it shows the same
behavior as the mean return time at j (the mean time it takes a random walk that starts at j to return to its starting point), which is well known to be $\operatorname{vol}(G)\cdot 1/d_j$ as well.
Our findings have very strong implications:
The raw commute distance is not a useful distance function on large graphs. On the negative
side, our approximation result shows that contrary to popular belief, the commute distance does not
take into account any global properties of the data, at least if the graph is "large enough". It just
considers the local density (the degree of the vertex) at the two vertices, nothing else. The resulting
large sample commute distance dist(vi , vj ) = 1/di + 1/dj is completely meaningless as a distance
on a graph. For example, all data points have the same nearest neighbor (namely, the vertex with
the largest degree), the same second-nearest neighbor (the vertex with the second-largest degree),
and so on. In particular, the main motivation to use the commute distance, Property (F), no longer
holds when the graph becomes "large enough". Even more disappointingly, computer simulations
show that n does not even need to be very large before (F) breaks down. Often, n in the order of
1000 is already enough to make the commute distance very close to its approximation expression
(see Section 5 for details). This effect is even stronger if the dimensionality of the underlying data
space is large. Consequently, even on moderate-sized graphs, the use of the raw commute distance
as a basis for machine learning algorithms should be discouraged.
Correcting the commute distance. It has been reported in the literature that hitting times and commute times can be observed to be quite small if the vertices under consideration have a high degree,
and that the spread of the commute distance values can be quite large (Liben-Nowell and Kleinberg,
2003, Brand, 2005, Yen et al., 2009). Subsequently, the authors suggested several different methods
to correct for this unpleasant behavior. In the light of our theoretical results we can see immediately
why the undesired behavior of the commute distance occurs. Moreover, we are able to analyze the
suggested corrections and prove which ones are meaningful and which ones not (see Section 4).
Based on our theory we suggest a new correction, the amplified commute distance. This is a new
distance function that is derived from the commute distance, but avoids its artifacts. This distance
function is Euclidean, making it well-suited for machine learning purposes and kernel methods.
Efficient computation of approximate commute distances. In some applications the commute
distance is not used as a distance function, but for other reasons, for example in graph sparsification
(Spielman and Srivastava, 2008) or when computing bounds on mixing or cover times (Aleliunas
et al., 1979, Chandra et al., 1989, Avin and Ercal, 2007, Cooper and Frieze, 2009) or graph labeling
(Herbster and Pontil, 2006, Cesa-Bianchi et al., 2009). To obtain the commute distance between all
points in a graph one has to compute the pseudo-inverse of the graph Laplacian matrix, an operation
of time complexity O(n3 ). This is prohibitive in large graphs. To circumvent the matrix inversion,
several approximations of the commute distance have been suggested in the literature (Spielman and
Srivastava, 2008, Sarkar and Moore, 2007, Brand, 2005). Our results lead to a much simpler and
well-justified way of approximating the commute distance on large random geometric graphs.
2 General setup, definitions and notation
We consider undirected, weighted graphs G = (V, E) with n vertices. We always assume that G is
connected and not bipartite. The non-negative weight matrix (adjacency matrix) is denoted by $W := (w_{ij})_{i,j=1,\dots,n}$. By $d_i := \sum_{j=1}^n w_{ij}$ we denote the degree of vertex $v_i$, and $\operatorname{vol}(G) := \sum_{j=1}^n d_j$ is
the volume of the graph. D denotes the diagonal matrix with diagonal entries d1 , . . . , dn and is
called the degree matrix.
Our main focus in this paper is the class of random geometric graphs as it is most relevant to machine
learning. Here we are given a sequence of points X1 , . . . , Xn that has been drawn i.i.d. from some
underlying density p on Rd . These points form the vertices v1 , . . . , vn of the graph. The edges in
the graph are defined such that "neighboring points" are connected: in the ε-graph we connect two points whenever their Euclidean distance is less than or equal to ε. In the undirected, symmetric
k-nearest neighbor graph we connect vi to vj if Xi is among the k nearest neighbors of Xj or vice
versa. In the mutual k-nearest neighbor graph we connect vi to vj if Xi is among the k nearest
neighbors of Xj and vice versa. For space constraints we only discuss the case of unweighted
graphs in this paper. Our results can be carried over to weighted graphs, in particular to weighted
kNN-graphs and Gaussian similarity graphs.
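To make the constructions concrete, here is a small NumPy/SciPy sketch that builds the three unweighted graphs from a point set; it is an illustration under our reading of the definitions above, not code from the paper:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def geometric_graphs(X, k=None, eps=None):
    """Unweighted adjacency matrices of the graphs described above (a sketch).

    X: (n, d) array of points. Returns (symmetric kNN, mutual kNN, epsilon) graphs.
    """
    D = squareform(pdist(X))                           # pairwise Euclidean distances
    n = D.shape[0]
    W_knn_sym = W_knn_mut = W_eps = None
    if k is not None:
        nn = np.argsort(D, axis=1)[:, 1:k + 1]         # k nearest neighbors, self excluded
        directed = np.zeros((n, n))
        directed[np.arange(n)[:, None], nn] = 1.0
        W_knn_sym = np.maximum(directed, directed.T)   # "or": symmetric kNN graph
        W_knn_mut = np.minimum(directed, directed.T)   # "and": mutual kNN graph
    if eps is not None:
        W_eps = (D <= eps).astype(float)
        np.fill_diagonal(W_eps, 0.0)                   # no self-loops
    return W_knn_sym, W_knn_mut, W_eps
```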
Consider the natural random walk on G, that is, the random walk with transition matrix $P = D^{-1}W$.
The hitting time Hij is defined as the expected time it takes a random walk starting in vertex vi to
travel to vertex vj (with Hii := 0 by definition). The commute distance (also called commute time)
between vi and vj is defined as Cij := Hij + Hji . Some readers might also know the commute
distance under the name resistance distance. Here one interprets the graph as an electrical network
where the edges represent resistors. The conductance of a resistor is given by the corresponding edge
weight. The resistance distance Rij between i and j is defined as the effective resistance between
the vertices i and j in the network. It is well known that the resistance distance coincides with the
commute distance up to a constant: $C_{ij} = \operatorname{vol}(G)\cdot R_{ij}$. For background reading see Doyle and Snell (1984), Klein and Randic (1993), Xiao and Gutman (2003), Fouss et al. (2006), Bollobás (1998),
Lyons and Peres (2010).
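These quantities are straightforward to compute for moderate graph sizes via the pseudo-inverse of the graph Laplacian, using the classical identity $R_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$. A minimal sketch (function name and interface are ours):

```python
import numpy as np

def resistance_and_commute(W):
    """Resistance and commute distance matrices from the pseudo-inverse Laplacian.

    O(n^3) due to the pseudo-inverse, so only feasible for moderate n.
    """
    d = W.sum(axis=1)
    L = np.diag(d) - W                           # unnormalized graph Laplacian
    Lp = np.linalg.pinv(L)                       # Moore-Penrose pseudo-inverse L+
    diag = np.diag(Lp)
    R = diag[:, None] + diag[None, :] - 2 * Lp   # R_ij = L+_ii + L+_jj - 2 L+_ij
    C = d.sum() * R                              # C_ij = vol(G) * R_ij
    return R, C
```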
For the rest of the paper we consider a probability distribution with density p on $\mathbb{R}^d$. We want to study the behavior of the commute distance between two fixed points s and t. We will see that we only need to study the density in a reasonably small region $\mathcal{X} \subseteq \mathbb{R}^d$ that contains s and t. For
convenience, let us make the following definition.
Definition 1 (Valid region) Let p be any density on $\mathbb{R}^d$, and $s, t \in \mathbb{R}^d$ be two points with $p(s), p(t) > 0$. We call a connected subset $\mathcal{X} \subseteq \mathbb{R}^d$ a valid region with respect to s, t and p if the following properties are satisfied:
1. s and t are interior points of $\mathcal{X}$.
2. The density on $\mathcal{X}$ is bounded away from 0, that is, for all $x \in \mathcal{X}$ we have that $p(x) \ge p_{\min} > 0$ for some constant $p_{\min}$. Assume that $p_{\max} := \max_{x \in \mathcal{X}} p(x) < \infty$.
3. $\mathcal{X}$ has "bottleneck" larger than some value $h > 0$: the set $\{x \in \mathcal{X} : \operatorname{dist}(x, \partial\mathcal{X}) > h/2\}$ is connected (here $\partial\mathcal{X}$ denotes the topological boundary of $\mathcal{X}$).
4. The boundary of $\mathcal{X}$ is regular in the following sense. We assume that there exist positive constants $\alpha > 0$ and $\varepsilon_0 > 0$ such that if $\varepsilon < \varepsilon_0$, then for all points $x \in \partial\mathcal{X}$ we have $\operatorname{vol}(B_\varepsilon(x) \cap \mathcal{X}) \ge \alpha \operatorname{vol}(B_\varepsilon(x))$ (where vol denotes the Lebesgue volume). Essentially this condition just excludes the situation where the boundary has arbitrarily thin spikes.
For readability reasons, we are going to state some of our main results using constants ci > 0. These
constants are independent of n and the graph connectivity parameter (ε or k, respectively) but depend
on the dimension, the geometry of X , and p. The values of all constants are determined explicitly in
the proofs. They do not coincide across different propositions. For notational convenience, we will
formulate all the following results in terms of the resistance distance. To obtain the results for the
commute distance one just has to multiply by factor vol(G).
3 Convergence of the resistance distance on random geometric graphs
In this section we present our theoretical main results for random geometric graphs. We show that
on this type of graph, the resistance distance Rij converges to the trivial limit 1/di + 1/dj . For
space constraints we only formulate these results for unweighted kNN and ε-graphs. Similar results
also hold for weighted variants of these graphs and for Gaussian similarity graphs.
Theorem 2 (Resistance distance on kNN-graphs) Fix two points $X_i$ and $X_j$. Consider a valid region $\mathcal{X}$ with respect to $X_i$ and $X_j$ with bottleneck h and density bounds $p_{\min}$ and $p_{\max}$. Assume that $X_i$ and $X_j$ have distance at least h from the boundary of $\mathcal{X}$ and that $(k/n)^{1/d}/2p_{\max} \le h$. Then there exist constants $c_1, \dots, c_5 > 0$ such that with probability at least $1 - c_1 n \exp(-c_2 k)$ the resistance distance on both the symmetric and the mutual kNN-graph satisfies
$$\left| kR_{ij} - \frac{k}{d_i} - \frac{k}{d_j} \right| \;\le\; \begin{cases} c_4\,\dfrac{\log(n/k) + (k/n)^{1/3} + 1}{k} & \text{if } d = 3,\\[6pt] c_5\,\dfrac{1}{k} & \text{if } d > 3.\end{cases}$$
The probability converges to 1 if $n \to \infty$ and $k/\log(n) \to \infty$. The rhs of the deviation bound converges to 0 as $n \to \infty$, if $k \to \infty$ and $k/\log(n/k) \to \infty$ in case d = 3, and if $k \to \infty$ in case d > 3. Under these conditions, if the density p is continuous and if additionally $k/n \to 0$, then $kR_{ij} \to 2$ in probability.
Theorem 3 (Resistance distance on ε-graphs) Fix two points $X_i$ and $X_j$. Consider a valid region $\mathcal{X}$ with respect to $X_i$ and $X_j$ with bottleneck h and density bounds $p_{\min}$ and $p_{\max}$. Assume that $X_i$ and $X_j$ have distance at least h from the boundary of $\mathcal{X}$ and that $\varepsilon \le h$. Then there exist constants $c_1, \dots, c_6 > 0$ such that with probability at least $1 - c_1 n \exp(-c_2 n\varepsilon^d) - c_3 \exp(-c_4 n\varepsilon^d)/\varepsilon^d$ the resistance distance on the ε-graph satisfies
$$\left| n\varepsilon^d R_{ij} - \frac{n\varepsilon^d}{d_i} - \frac{n\varepsilon^d}{d_j} \right| \;\le\; \begin{cases} c_5\,\dfrac{\log(1/\varepsilon) + \varepsilon + 1}{n\varepsilon^3} & \text{if } d = 3,\\[6pt] c_6\,\dfrac{1}{n\varepsilon^d} & \text{if } d > 3.\end{cases}$$
The probability converges to 1 if $n \to \infty$ and $n\varepsilon^d/\log(n) \to \infty$. The rhs of the deviation bound converges to 0 as $n \to \infty$, if $n\varepsilon^3/\log(1/\varepsilon) \to \infty$ in case d = 3, and if $n\varepsilon^d \to \infty$ in case d > 3.
Under these conditions, if the density p is continuous and if additionally $\varepsilon \to 0$, then
$$n\varepsilon^d R_{ij} \;\to\; \frac{1}{\eta_d\,p(X_i)} + \frac{1}{\eta_d\,p(X_j)} \quad \text{in probability},$$
where $\eta_d$ denotes the volume of the unit ball in $\mathbb{R}^d$.
Let us discuss the theorems en bloc. We start with a couple of technical remarks. Note that to achieve
the convergence of the resistance distance we have to rescale it appropriately (for example, in the
ε-graph we scale by a factor of $n\varepsilon^d$). Our rescaling is exactly chosen such that the limit expressions are finite, positive values. Scaling by any other factor in terms of n, ε or k either leads to divergence
or to convergence to zero.
The convergence conditions on n and ε (or k, respectively) are the ones to be expected for random
geometric graphs. They are satisfied as soon as the degrees are of the order log(n) (for smaller
degrees, the graphs are not connected anyway, see e.g. Penrose, 1999). Hence, our results hold for
sparse as well as for dense connected random geometric graphs.
The valid region X has been introduced for technical reasons. We need to operate in such a region
in order to be able to control the behavior of the graph, e.g. the average degrees. The assumptions
on X are the standard assumptions used in the random geometric graph literature. In our setting, we
have the freedom of choosing $\mathcal{X} \subseteq \mathbb{R}^d$ as we want. In order to obtain the tightest bounds one should
aim for a valid X that has a wide bottleneck and a high minimal density.
More generally, results about the convergence of the commute distance to 1/di + 1/dj can also be
proved for other kinds of graphs such as graphs with given expected degrees and even for power law
graphs, under the assumption that the minimal degree in the graph slowly increases with n. Details
are beyond the scope of this paper.
Proof outline of Theorems 2 and 3 (full proofs are presented in the supplementary material). Consider two fixed vertices s and t in a connected graph and consider the graph as an electrical network
where each edge has resistance 1. By the electrical laws, resistances in series add up, that is for two
resistances R1 and R2 in series we get the overall resistance R = R1 + R2 . Resistances in parallel
lines satisfy 1/R = 1/R1 + 1/R2 . Now consult the situation in Figure 1. Consider the vertex s and
all edges from s to its $d_s$ neighbors. The resistance "spanned" by these $d_s$ parallel edges satisfies $1/R = \sum_{i=1}^{d_s} 1$, that is $R = 1/d_s$. Similarly for t. Between the neighbors of s and the ones of t there
are very many paths. It turns out that the contribution of these paths to the resistance is negligible
(essentially, we have so many wires between the two neighborhoods that electricity can flow nearly
freely). So the overall effective resistance between s and t is dominated by the edges adjacent to s and t, with contributions $1/d_s + 1/d_t$.
Providing a clean mathematical proof for this argument is quite technical. Our proof is based on
Corollary 6 in Section IX.2 of Bollobás (1998) that states that the resistance distance between two
[Figure 1: Intuition for the proof of Theorems 2 and 3: the vertex s with its $d_s$ outgoing edges of resistance 1, the vertex t with its $d_t$ outgoing edges of resistance 1, and very many paths between the two neighborhoods. See text for details.]
fixed vertices s and t can be expressed as
$$R_{st} = \inf\Big\{\, \sum_{e \in E} u_e^2 \;:\; u = (u_e)_{e \in E} \text{ is a unit flow from } s \text{ to } t \,\Big\}.$$
To apply this theorem one has to construct a flow that spreads "as widely as possible" over the whole
graph. Counting edges and adding up resistances then leads to the desired results. Details are fiddly
and can be found in the supplementary material.
4 Correcting the resistance distance
Obviously, the large sample resistance distance Rij ? 1/di + 1/dj is completely meaningless as
a distance on a graph. The question we want to discuss in this section is whether there is a way to
correct the commute distance such that this unpleasant large sample effect does not occur. Let us
start with some references to the literature. It has been observed in several empirical studies that the
commute distances are quite small if the vertices under consideration have a high degree, and that
the spread of the commute distance values can be quite large. Our theoretical results immediately
explain this behavior: if the degrees are large, then 1/di + 1/dj is very small. And compared to the
"spread" of $d_i$, the spread of $1/d_i$ can be enormous.
Several heuristics have been suggested to solve this problem. Liben-Nowell and Kleinberg (2003)
suggest to correct the hitting times by simply multiplying by the degrees. For the commute distance,
this leads to the suggested correction $C_{LNK}(i,j) := d_j H_{ij} + d_i H_{ji}$. Even though we did
not prove it explicitly in our paper, the convergence results for the commute time also hold for
the individual hitting times. Namely, the hitting time $H_{ij}$ can be approximated by $\operatorname{vol}(G)/d_j$. These theoretical results immediately show that the correction $C_{LNK}$ is not useful, at least if we consider
the absolute values. For large graphs, it simply has the effect of normalizing all hitting times to
≈ 1, leading to $C_{LNK} \approx 2$. However, we believe that the ranking introduced by this distance
function still contains useful information about the data. The reason is that while the first order
terms dominate the absolute value and converge to two, the second order terms introduce some
"variation around two", and this variation might encode the cluster structure.
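For illustration, hitting times (and hence $C_{LNK}$) can be computed from $L^+$; the following sketch assumes the formula $H_{ij} = \sum_k d_k\,(L^+_{ik} - L^+_{ij} - L^+_{jk} + L^+_{jj})$ of Fouss et al. (2006) and is not the authors' code:

```python
import numpy as np

def hitting_times(W):
    """Expected hitting times H_ij from the pseudo-inverse Laplacian (a sketch)."""
    d = W.sum(axis=1)
    vol = d.sum()
    Lp = np.linalg.pinv(np.diag(d) - W)      # pseudo-inverse graph Laplacian L+
    Ld = Lp @ d                              # (L+ d)_i = sum_k L+_ik d_k
    # Vectorized form of H_ij = Ld_i - Ld_j - vol * L+_ij + vol * L+_jj
    H = Ld[:, None] - Ld[None, :] - vol * Lp + vol * np.diag(Lp)[None, :]
    np.fill_diagonal(H, 0.0)                 # H_ii = 0 by definition
    return H

def lnk_correction(W):
    """C_LNK(i, j) = d_j * H_ij + d_i * H_ji (Liben-Nowell and Kleinberg, 2003)."""
    d = W.sum(axis=1)
    H = hitting_times(W)
    return d[None, :] * H + d[:, None] * H.T
```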
Yen et al. (2009) exploit the well-known fact that the commute distance is Euclidean and its kernel
matrix coincides with the Moore-Penrose inverse L+ of the graph Laplacian matrix. The authors
now apply a sigmoid transformation to $L^+$ and consider $K_{Yen}(i,j) = 1/(1 + \exp(-l^+_{ij}/\sigma))$ for some constant $\sigma$. The idea is that the sigmoid transformation reduces the spread of the distance (or
similarity) values. However, this is an ad-hoc approach that has the disadvantage that the resulting
"kernel" $K_{Yen}$ is not positive definite.
A third correction has been suggested in Brand (2005). As Yen et al. (2009) he considers the kernel
matrix that corresponds to the commute distance. But instead of applying a sigmoid transformation
he centers and normalizes the kernel matrix in the feature space. This leads to the corrected kernel
$$K_{Brand}(i,j) = \frac{\tilde K_{ij}}{\sqrt{\tilde K_{ii}\tilde K_{jj}}} \quad\text{with}\quad 2\tilde K_{ij} = -R_{ij} + \frac{1}{n}\sum_{k=1}^n (R_{ik} + R_{kj}) - \frac{1}{n^2}\sum_{k,l=1}^n R_{kl}.$$
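A minimal sketch of this centering-and-normalization step, assuming the formula above (our implementation, not Brand's code):

```python
import numpy as np

def brand_kernel(R):
    """Brand's (2005) centered and normalized commute-time kernel (a sketch).

    R: symmetric resistance (or commute) distance matrix.
    """
    # Double centering: Kt = -0.5 * (R - row means - column means + grand mean)
    Kt = -0.5 * (R - R.mean(axis=1)[:, None] - R.mean(axis=0)[None, :] + R.mean())
    s = np.sqrt(np.diag(Kt))
    return Kt / np.outer(s, s)               # cosine-style normalization in feature space
```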
At first glance it is surprising that using the centered and normalized kernel instead of the commute distance should make any difference. However, whenever one takes a Euclidean distance function of the form $\operatorname{dist}(i,j) = s_{ij} + u_i + u_j - 2\delta_{ij}u_i$ and computes the corresponding centered kernel matrix, one obtains
$$K_{ij} = K^s_{ij} + 2\delta_{ij}u_i - \frac{2}{n}(u_i + u_j) + \frac{2}{n^2}\sum_{r=1}^n u_r, \qquad (1)$$
where $K^s$ is the kernel matrix induced by s. Thus the off-diagonal terms are still influenced by $u_i$, but with a factor decaying as $1/n$ compared to the diagonal. Even though this is no longer the case after normalization (because for the normalization the diagonal terms are important, and these terms still depend on the $d_i$), we believe that this is the key to why Brand's kernel is useful.
What would be a suitable correction based on our theoretical results? The proof of our main theorems shows that the edges adjacent to i and j completely dominate the behavior of the resistance
distance: they are the "bottleneck" of the flow, and their contribution $1/d_i + 1/d_j$ dominates all the other terms. The interesting information about the global topology of the graph is contained in the remainder terms $S_{ij} = R_{ij} - 1/d_i - 1/d_j$, which summarize the flow contributions of all other edges in the graph. We believe that the key to obtaining a good distance function is to remove the influence of the $1/d_i$ terms and "amplify" the influence of the general graph term $S_{ij}$. This can be achieved by either using the off-diagonal terms of the pseudo-inverse graph Laplacian $L^+$ while ignoring its diagonal, or by building a distance function based on the remainder terms $S_{ij}$ directly. We choose the second option and propose the following new distance function. We define the amplified commute distance as $C_{amp}(i,j) = S_{ij} + u_{ij}$ with $S_{ij} = R_{ij} - 1/d_i - 1/d_j$ and $u_{ij} = 2w_{ij}/(d_i d_j) - w_{ii}/d_i^2 - w_{jj}/d_j^2$. Of course we set $C_{amp}(i,i) = 0$ for all i.
Proposition 4 (Amplified commute distance is Euclidean) The matrix D with entries $d_{ij} = C_{amp}(i,j)^{1/2}$ is a Euclidean distance matrix.
Proof outline. In preliminary work we show that the remainder terms can be written as $S_{ij} = \langle e_i - e_j,\, B(e_i - e_j)\rangle - u_{ij}$, where $e_i$ denotes the i-th unit vector and B is a positive definite matrix (see the proof of Proposition 2 in von Luxburg et al., 2010). This implies the desired statement.
Additionally to being a Euclidean distance, the amplified commute distance has a nice limit behavior. When $n \to \infty$ the terms $u_{ij}$ are dominated by the terms $S_{ij}$, hence all that is left are the "interesting terms" $S_{ij}$. For all practical purposes, one should use the kernel induced by the amplified commute distance and center and normalize it. In formulas, the amplified commute kernel is
$$K_{amp}(i,j) := \frac{\tilde K_{ij}}{\sqrt{\tilde K_{ii}\tilde K_{jj}}} \quad\text{with}\quad \tilde K = -\frac{1}{2}\Big(I - \frac{1}{n}\mathbb{1}\mathbb{1}'\Big)\, C_{amp}\, \Big(I - \frac{1}{n}\mathbb{1}\mathbb{1}'\Big) \qquad (2)$$
(where I is the identity matrix, $\mathbb{1}$ the vector of all ones, and $C_{amp}$ the amplified commute distance matrix). The next section shows that the kernel $K_{amp}$ works very nicely in practice.
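The following sketch computes $K_{amp}$ under our reading of the definitions above (in particular, it assumes the usual $-\frac{1}{2}JCJ$ double-centering); names and interface are ours:

```python
import numpy as np

def amplified_commute_kernel(W, R):
    """Amplified commute kernel K_amp of equation (2) (a sketch under our reading).

    W: weighted adjacency matrix; R: resistance distance matrix.
    """
    d = W.sum(axis=1)
    S = R - 1.0 / d[:, None] - 1.0 / d[None, :]            # remainder terms S_ij
    u = (2 * W / np.outer(d, d)                            # u_ij as defined in the text
         - np.diag(W)[:, None] / d[:, None] ** 2
         - np.diag(W)[None, :] / d[None, :] ** 2)
    C_amp = S + u
    np.fill_diagonal(C_amp, 0.0)                           # C_amp(i, i) = 0 by definition
    n = W.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix I - (1/n) 1 1'
    Kt = -0.5 * J @ C_amp @ J
    s = np.sqrt(np.diag(Kt))
    return Kt / np.outer(s, s)
```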
Note that the correction by Brand and our amplified commute kernel are very similar, but not identical with each other. The off-diagonal terms of both kernels are very close to each other, see Equation
(1), that is if one is only interested in a ranking based on similarity values, both kernels behave similarly. However, an important difference is that the diagonal terms in the Brand kernel are way bigger
than the ones in the amplified kernel (using our convergence techniques one can show that the Brand
kernel converges to an identity matrix, that is the diagonal completely dominates the off-diagonal
terms). This might lead to the effect that the Brand kernel behaves worse than our kernel with
algorithms like the SVM that do not ignore the diagonal of the kernel.
5 Experiments
Our first set of experiments considers the question how fast the convergence of the commute distance
takes place in practice. We will see that already for relatively small data sets, a very good approximation takes place. This means that the problems of the raw commute distance already occur for
small sample size. Consider the plots in Figure 2. They report the maximal relative error defined as
$\max_{ij} |R_{ij} - 1/d_i - 1/d_j|/R_{ij}$ and the corresponding mean relative error on a log10-scale. We show the results for ε-graphs, unweighted kNN graphs, and Gaussian similarity graphs (fully connected weighted graphs with edge weights $\exp(-\|x_i - x_j\|^2/\sigma^2)$). In order to be able to plot all results in the same figure, we need to match the parameters of the different graphs. Given some value k for the kNN-graph, we thus set the values of ε for the ε-graph and σ for the Gaussian graph to be equal to the maximal k-nearest neighbor distance in the data set.
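A sketch of the two computations just described, assuming dense distance matrices (function names are ours):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def relative_errors(W, R):
    """Max and mean of |R_ij - 1/d_i - 1/d_j| / R_ij over all pairs i != j."""
    d = W.sum(axis=1)
    approx = 1.0 / d[:, None] + 1.0 / d[None, :]
    off = ~np.eye(len(d), dtype=bool)                  # exclude the diagonal
    rel = np.abs(R - approx)[off] / R[off]
    return rel.max(), rel.mean()

def matched_parameter(X, k):
    """Maximal k-nearest-neighbor distance in X, used as both eps and sigma."""
    D = squareform(pdist(X))
    kth = np.sort(D, axis=1)[:, k]                     # distance to each point's k-th neighbor
    return kth.max()
```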
Sample size. Consider a set of points drawn from the uniform distribution on the unit cube in $\mathbb{R}^{10}$.
As can be seen in Figure 2 (first plot), the maximal relative error decreases very fast with increasing
sample size. Note that already for small sample sizes the maximal deviations get very small.
[Figure 2 (four panels; log10 of the relative deviation, with curves for Gaussian, epsilon and knn graphs): (a) uniform data, dim = 10, k = n/10, deviation against the sample size n; (b) uniform data, n = 2000, k = 100, deviation against the dimension; (c) mixture of Gaussians, n = 2000, dim = 10, k = 100, deviation against the separation; (d) USPS data set, deviation against log10(k). Caption: Relative deviations between true and approximate commute distances. Solid lines show the maximal relative deviations, dashed lines the mean relative deviations. See text for details.]

Dimension. A result that seems surprising at first glance is that the maximal deviation decreases
as we increase the dimension, see Figure 2 (second plot). The intuitive explanation is that in higher
dimensions, geometric graphs mix faster as there exist more "shortcuts" between the two sides of the point cloud. Thus, the random walk "forgets faster" where it started from.
Clusteredness. The deviation gets worse if the data has a more pronounced cluster structure. Consider a mixture of two Gaussians in $\mathbb{R}^{10}$ with unit variances and the same weight on both components. We call the distance between the centers of the two components the separation. In Figure 2
(third plot) we show both the maximum relative errors (solid lines) and mean relative errors (dashed
lines). We can clearly see that with increasing separation, the deviation increases.
Sparsity. The last plot of Figure 2 shows the relative errors for increasingly dense graphs, namely
for increasing parameter k. Here we used the well-known USPS data set of handwritten digits (9298
points in 256 dimensions). We plot both the maximum relative errors (solid lines) and mean relative
errors (dashed lines). We can see that the errors decrease the denser the graph gets. Again this is
due to the fact that the random walk mixes faster on denser graphs. Note that the deviations are
extremely small on this real-world data set.
In a second set of experiments we compare the different corrections of the raw commute distance. To
this end, we built a kNN graph of the whole USPS data set (all 9298 points, k = 10), computed the
commute distance matrix and the various corrections. The resulting matrices are shown in Figure
3 (left part) as heat plots. In all cases, we only plot the off-diagonal terms. We can see that as
predicted by theory, the raw commute distance does not identify the cluster structure. However, the
cluster structure is still visible in the kernel corresponding to the commute distance, the pseudo-inverse graph Laplacian $L^+$. The reason is that the diagonal of this matrix can be approximated by $(1/d_1, \dots, 1/d_n)$, whereas the off-diagonal terms encode the graph structure, but on a much smaller
scale than the diagonal. In our heat plots, all four corrections of the graph Laplacian show the cluster
structure to a certain extent (the correction by LNK to a small extent, the corrections by Brand, Yen
and us to a bigger extent).
A last experiment evaluates the performance of the different distances in a semi-supervised learning
task. On the whole USPS data set, we first chose some random points to be labeled. Then
we classified the unlabeled points by the k-nearest neighbor classifier based on the distances
to the labeled data points. For each classifier, k was chosen by 10-fold cross-validation among
k ∈ {1, ..., 10}. The experiment was repeated 10 times. The mean results can be seen in Figure 3
(right figure). As baseline we also report results based on the standard Euclidean distance between
the data points. As predicted by theory, we can see that the raw commute distance performs
extremely poor. The Euclidean distance behaves reasonably, but is outperformed by all corrections
of the commute distance. This shows first of all that using the graph structure does help over the
basic Euclidean distance. While the naive correction by LNK stays close to the Euclidean distance,
the three corrections by Brand, Yen and us virtually lie on top of each other and outperform the
other methods by a large margin.

[Figure 3: Left: heat plots (off-diagonal terms only) of distances and kernels based on a kNN graph between all 9298 USPS points: the exact resistance distance, the pseudo-inverse graph Laplacian $L^+$, and the kernels corresponding to the corrections by LNK, Yen, Brand, and our amplified $K_{amp}$. Right: semi-supervised classification error against the number of labeled points (20 to 400) for the raw commute distance, the Euclidean distance, the LNK distance, and the amplified, Brand and Yen kernels; the curves for the last three lie on top of each other.]
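For concreteness, a distance-matrix-based k-nearest-neighbor classifier of the kind used in this experiment might look as follows (a sketch, not the authors' code; it assumes integer class labels):

```python
import numpy as np

def knn_predict(D_test_train, y_train, k):
    """Classify each test point by majority vote among its k nearest labeled points.

    D_test_train: (n_test, n_labeled) matrix of distances to the labeled points.
    y_train: integer class labels of the labeled points.
    """
    nn = np.argsort(D_test_train, axis=1)[:, :k]          # indices of the k nearest labeled points
    votes = y_train[nn]                                   # their labels
    return np.array([np.bincount(v).argmax() for v in votes])
```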
We conclude with the following tentative statements. We believe that the correction by LNK is "a bit too naive", whereas the corrections by Brand, Yen and us "tend to work" in a ranking-based setting. Based on our simple experiments it is impossible to judge which out of these candidates is "the best one". We are not too fond of Yen's correction because it does not lead to a proper kernel. Both
Brand's and our kernel converge to (different) limit functions. So far we do not know the theoretical
properties of these limit functions and thus cannot present any theoretical reason to prefer one over
the other. However, we think that the diagonal dominance of the Brand kernel can be problematic.
6 Discussion
In this paper we have proved that the commute distance on random geometric graphs can be
approximated by a very simple limit expression. Contrary to intuition, this limit expression no
longer takes into account the cluster structure of the graph, nor any other global property (such as
distances in the underlying Euclidean space). Both our theoretical bounds and our simulations tell
the same story: the approximation gets better if the data is high-dimensional and not extremely
clustered, both of which are standard situations in machine learning. This shows that the use of the
raw commute distance for machine learning purposes can be problematic. However, the structure
of the graph can be recovered by certain corrections of the commute distance. We suggest to use
either the correction by Brand (2005) or our own amplified commute kernel from Section 4. Both
corrections have a well-defined, non-trivial limit and perform well in experiments.
The intuitive explanation for our result is that as the sample size increases, the random walk on the
sample graph "gets lost" in the sheer size of the graph. It takes so long to travel through a substantial part of the graph that by the time the random walk comes close to its goal it has already "forgotten"
where it started from. Stated differently: the random walk on the graph has mixed before it hits the
desired target vertex. On a higher level, we expect that the problem of "getting lost" not only affects
the commute distance, but many other methods where random walks are used in a naive way to
explore global properties of a graph. For example, the results in Nadler et al. (2009), where artifacts
of semi-supervised learning in the context of many unlabeled points are studied, seem strongly
related to our results. In general, we believe that one has to be particularly careful when using
random walk based methods for extracting global properties of graphs in order to avoid getting lost
and converging to meaningless results.
References
R. Aleliunas, R. Karp, R. Lipton, L. Lovász, and C. Rackoff. Random walks, universal traversal sequences, and the complexity of maze problems. In FOCS, 1979.
C. Avin and G. Ercal. On the cover time and mixing time of random geometric graphs. Theor. Comput. Sci., 380(1-2):2-22, 2007.
B. Bollobás. Modern Graph Theory. Springer, 1998.
M. Brand. A random walks perspective on maximizing satisfaction and profit. In SDM, 2005.
N. Cesa-Bianchi, C. Gentile, and F. Vitale. Fast and optimal prediction on a labeled tree. In COLT, 2009.
A. Chandra, P. Raghavan, W. Ruzzo, R. Smolensky, and P. Tiwari. The electrical resistance of a graph captures its commute and cover times. In STOC, 1989.
C. Cooper and A. Frieze. The cover time of random geometric graphs. In SODA, 2009.
P. G. Doyle and J. L. Snell. Random walks and electric networks. Mathematical Association of America, Washington, DC, 1984.
F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. A novel way of computing dissimilarities between nodes of a graph, with application to collaborative filtering and subspace projection of the graph nodes. Technical Report IAG WP 06/08, Université catholique de Louvain, 2006.
S. Guattery. Graph embeddings, symmetric real matrices, and generalized inverses. Technical report, Institute for Computer Applications in Science and Engineering, NASA Research Center, 1998.
J. Ham, D. D. Lee, S. Mika, and B. Schölkopf. A kernel view of the dimensionality reduction of manifolds. In ICML, 2004.
M. Herbster and M. Pontil. Prediction on a graph with a perceptron. In NIPS, 2006.
D. Klein and M. Randic. Resistance distance. Journal of Mathematical Chemistry, 12:81-95, 1993.
D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In CIKM, 2003.
R. Lyons and Y. Peres. Probability on trees and networks. Book in preparation, available online on the webpage of Yuval Peres, 2010.
B. Nadler, N. Srebro, and X. Zhou. Statistical analysis of semi-supervised learning: The limit of infinite unlabelled data. In NIPS, 2009.
M. Penrose. A strong law for the longest edge of the minimal spanning tree. Ann. of Prob., 27(1):246-260, 1999.
H. Qiu and E. R. Hancock. Image segmentation using commute times. In BMVC, 2005.
H. Qiu and E. R. Hancock. Graph embedding using commute time. S+SSPR 2006, pages 441-449, 2006.
M. Saerens, F. Fouss, L. Yen, and P. Dupont. The principal components analysis of a graph, and its relationships to spectral clustering. In ECML, 2004.
P. Sarkar and A. Moore. A tractable approach to finding closest truncated-commute-time neighbors in large graphs. In UAI, 2007.
P. Sarkar, A. Moore, and A. Prakash. Fast incremental proximity search in large graphs. In ICML, 2008.
D. Spielman and N. Srivastava. Graph sparsification by effective resistances. In STOC, 2008.
U. von Luxburg, A. Radl, and M. Hein. Hitting times, commute distances and the spectral gap in large random geometric graphs. Preprint available at Arxiv, March 2010.
D. M. Wittmann, D. Schmidl, F. Blöchl, and F. J. Theis. Reconstruction of graphs based on random walks. Theoretical Computer Science, 2009.
W. Xiao and I. Gutman. Resistance distance and Laplacian spectrum. Theoretical Chemistry Accounts, 110:284-298, 2003.
L. Yen, D. Vanvyve, F. Wouters, F. Fouss, M. Verleysen, and M. Saerens. Clustering using a random walk based distance measure. In ESANN, 2005.
L. Yen, F. Fouss, C. Decaestecker, P. Francq, and M. Saerens. Graph nodes clustering based on the commute-time kernel. Advances in Knowledge Discovery and Data Mining, pages 1037-1045, 2009.
D. Zhou and B. Schölkopf. Learning from Labeled and Unlabeled Data Using Random Walks. In DAGM, 2004.
3,192 | 3,892 | Auto-Regressive HMM Inference with Incomplete
Data for Short-Horizon Wind Forecasting
Joseph Bockhorst
EE and Computer Science
University of Wisconsin-Milwaukee, USA
Chris Barber
EE and Computer Science
University of Wisconsin-Milwaukee, USA
Paul Roebber
Atmospheric Science
University of Wisconsin-Milwaukee, USA
Abstract
Accurate short-term wind forecasts (STWFs), with time horizons from 0.5 to 6
hours, are essential for efficient integration of wind power to the electrical power
grid. Physical models based on numerical weather predictions are currently not
competitive, and research on machine learning approaches is ongoing. Two major
challenges confronting these efforts are missing observations and weather-regime
induced dependency shifts among wind variables. In this paper we introduce approaches that address both of these challenges. We describe a new regime-aware
approach to STWF that use auto-regressive hidden Markov models (AR-HMM), a
subclass of conditional linear Gaussian (CLG) models. Although AR-HMMs are
a natural representation for weather regimes, as with CLG models in general, exact inference is NP-hard when observations are missing (Lerner and Parr, 2001).
We introduce a simple approximate inference method for AR-HMMs, which we
believe has applications in other problem domains. In an empirical evaluation
on publicly available wind data from two geographically distinct regions, our approach makes significantly more accurate predictions than baseline models, and
uncovers meteorologically relevant regimes.
1 Introduction
Accurate wind speed and direction forecasts are essential for efficient integration of wind energy
into electrical transmission systems. The importance of wind forecasts for the wind energy industry
stems from three facts: 1) for reliability and safety the aggregate power produced and consumed
throughout a power system must be nearly in balance at all times, 2) because it depends strongly
on wind speed and direction, the power output of a wind farm is highly variable, and 3) efficient
and cost effective energy storage mechanisms do not exist. A recent estimate placed the value of a
perfect forecast at $3Billion annually (Piwko and Jordan, 2010) for the United States power system
of 2030 envisioned by the Department of Energy (Lindenberg, 2008). Because information on the
30 minute to six-hour time horizon is actionable for many control decisions, and the current state-of-the-art is considered inadequate, there has been a recent surge of interest in improving forecasts
in this range.
The short-term wind forecasting (STWF) problem presents numerous challenges to the modeler.
Data produced by the current observation network are sparse relative to the temporal and spatial scale
of weather events that drive short-term changes in wind features; observations are frequently missing
or corrupted; quality training sets with multiple years of turbine height (?80 m) wind observations
at numerous sites are typically not available; transfer of learned models (Caruana, 1997) across
wind farms is difficult, and because of the dynamic nature of weather the spatial and temporal
dependencies of wind features within a geographical region are not fixed.
Numerical weather predictions (NWP) methods are the primary means for producing the large-scale
weather forecasts used throughout the world, but are not competitive for STWF. In fact, NWP based
wind speed predictions are less accurate than ?persistence? forecasts (Giebel, 2003), a surprisingly
robust method for time horizons less than a few hours. Approaches to STWF include ARMA models (Marti et al., 2004), support vector machines (Zheng and Kusiak, 2009), and other data mining
methods (Kusiak et al., 2009), but with the exception of two methods (Gneiting et al., 2006; Pinson and Madsen, 2008) these do not consider dependency dynamics. Gneiting et al. (2006) "hard code" their regimes based on wind direction, while Pinson and Madsen (2008) learn regimes for
a single forecasting site with complete data. At the time of writing we are unaware of any previous STWF work which simultaneously learns regimes, incorporates multiple observation sites, and
accepts missing observations.
We propose a novel approach to STWF that automatically reasons about learned weather regimes
across multiple sites while naturally handling missing observations. Our approach is based on
switching conditional linear Gaussian (CLG) models, variously known as switching vector autoregressive models or autoregressive hidden Markov models. For an overview of CLG models, see
Koller and Friedman (2009, Chap. 14). Since exact inference in CLG models with incomplete data
is NP-hard (Lerner and Parr, 2001), we pursue approximate methods. We introduce a novel and simple approximate inference approach that exploits the tendency of regimes to persist for several hours.
Predictions by our learned models are significantly better than baseline persistence predictions in experiments on national climatic data center (NCDC) data from two sites in the United States: one in
the Pacific Northwest and one in southern Wisconsin. Inspection of the learned models show that
our approach learns meteorologically interesting regimes.
Switching CLG models have been applied in other domains where missing observations are an
issue, such as meteorology (Tang, 2004; Paroli et al., 2005), epidemiology (Toscani et al., 2010) and
econometrics (Perez-Quiros et al., 2010). Some approaches are able to avoid the issue of missing
data by throwing out affected timesteps, or by imputing values through a variety of domain-specific
techniques. Alternatively, Markov Chain Monte Carlo parameter estimation techniques have been
applied. Our approach may be an attractive alternative in these domains, offering a solution that
does not require deletion or imputation.
2 Methods
We consider the setting in which wind observations from a set of M stations arrive at regular intervals (hourly in our experiments). Let $U_t$ and $V_t$ be M-by-1 vectors of random variables for the u and v components of the wind at all sites at time t. We use $W_t = [1\; U_t'\; V_t']'$ to refer to both $U_t$ and $V_t$ (the leading 1 indicates that all our models include a constant term; for notational simplicity we keep this term implicit in what follows and describe our methods as if $W_t$ comprised observations only), and we denote settings to random variables with lowercase letters, for example $w_t$.
Our approach to STWF is based on auto-regressive HMMs where at each time t we have a single discrete random variable $R_t$ that represents the active regime, and a continuous valued vector random variable $W_t$ that represents measured wind speeds. As local probabilities are linear Gaussian (LG), we denote the model in which the regime variables $R_t$ have cardinality C by AR-LG(C). Thus, AR-LG(1) is a traditional AR model. Figure 1 shows example graphical models.

The local conditional distributions, $\Pr(R_{t+1} \mid R_t)$ and $\Pr(W_t \mid W_{t-1}, R_t)$, are shared across time. We represent $\Pr(R_{t+1} \mid R_t)$ by the C-by-C transition matrix T, where $T(r, s) > 0$ is the probability of transitioning from regime r to regime s. Since weather regimes tend to persist for multiple hours, the self-transition probabilities $T(r, r)$ are typically the largest. The local distributions for the continuous variables are linear Gaussian, $\Pr(w_t \mid w_{t-1}, R_t = r) = \mathcal{N}(B(r) w_{t-1}, Q(r))$, where $B(r)$ is the 2M-by-2M regression matrix for regime r, row j of $B(r)$ is the regression vector for the j-th component of $w_t$, $Q(r)$ is the regime's covariance matrix, and $\mathcal{N}(\mu, \Sigma)$ is the multivariate Gaussian (MVG) with mean $\mu$ and covariance $\Sigma$.
Figure 1: Graphical structures of wind speed models. Darkly shaded nodes are observed, lightly shaded nodes are partially observed, and unshaded nodes are unobserved. (a) Auto-regressive linear Gaussian (AR-LG(1)) for a data set with L time steps. (b) Auto-regressive HMM (AR-LG(C), C > 1). Exact inference in (b) with missing observations is NP-hard (Lerner and Parr, 2001). (c) Truncated AR-LG(C) HOMO approximation of (b) for predictions of wind speeds at target time t+h made at t, with K = 2 and horizon h = 2. Our approximation assumes the regime does not change in the window t-K to t+h. (d) Truncated (non-conditional) AR-LG(1) analogous to (c). (e) Detailed structure of (d) for 3 sites, showing within-time-slice conditional independencies and assorted missing observations.
The joint probability of a setting to all variables for L time steps is
$$\Pr(r_1, w_1, \ldots, r_L, w_L) = \Pr(r_1)\Pr(w_1 \mid r_1) \prod_{t=2}^{L} \Pr(r_t \mid r_{t-1}) \Pr(w_t \mid r_t, w_{t-1})$$
$$= \pi(r_1)\,\mathcal{N}(w_1;\, \mu_1(r_1), Q_1(r_1)) \prod_{t=2}^{L} T(r_{t-1}, r_t)\,\mathcal{N}(w_t;\, B(r_t) w_{t-1}, Q(r_t)),$$
where $\pi$ is the initial regime distribution, the observations at t = 1 for regime r are Gaussian with mean $\mu_1(r)$ and covariance $Q_1(r)$, and $\mathcal{N}(\cdot)$ with three arguments denotes the MVG density. We set $\pi$ to the stationary state distribution, given by the eigenvector of $T'$ associated with eigenvalue 1. We train model parameters with standard EM methods for conditional linear Gaussian (CLG) models (Murphy, 1998), except that the E-step uses approximate inference.
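To make the model concrete, the following sketch evaluates the joint log-probability above for a fully observed sequence and computes the stationary $\pi$. All function and variable names here are our own illustration, not code from the paper, and the parameters ($\pi$, T, B, Q) are assumed to be given.

```python
import numpy as np
from scipy.stats import multivariate_normal

def stationary_dist(T):
    """Stationary regime distribution pi: eigenvector of T' for eigenvalue 1."""
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def joint_log_prob(r, w, pi, T, mu1, Q1, B, Q):
    """log Pr(r_1, w_1, ..., r_L, w_L) for fully observed regimes r and winds w.
    B[c], Q[c] are the regression and covariance matrices of regime c."""
    lp = np.log(pi[r[0]]) + multivariate_normal.logpdf(w[0], mu1[r[0]], Q1[r[0]])
    for t in range(1, len(r)):
        lp += np.log(T[r[t - 1], r[t]])                       # regime transition
        lp += multivariate_normal.logpdf(w[t], B[r[t]] @ w[t - 1], Q[r[t]])
    return lp
```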
2.1 Approximate Inference Methods
Consider a length-L time series and let $W = (W_1, \ldots, W_L)$ refer to the continuous variables. We denote the sequence of partial observations as $\dot{w}_{1:L} = (\dot{w}_1, \dot{w}_2, \ldots, \dot{w}_L)$, where the "dot" notation denotes a potentially incomplete vector with missing data. Our inference tasks are to calculate $\Pr(W_{t-1}, W_t, R_t \mid \dot{w}_{1:L})$ and $\Pr(R_{t-1}, R_t \mid \dot{w}_{1:L})$ while training, to compute the expected sufficient statistics needed for estimation of CLG parameters using EM (Murphy, 1998), and to compute $\Pr(W_{t+H} \mid \dot{w}_{1:t})$ for horizon-H forecasting at time t.
In AR-LG(1) models with no discrete variables (Figure 1a), the chain structure permits efficient exact inference using message passing techniques (Weiss and Freeman, 2001). For general AR-LG(C)
models, however, the posterior distributions over unobserved continuous variables are mixtures of
exponentially many Gaussians and exact inference is NP-hard (Lerner and Parr, 2001). Specifically,
$\Pr(W_{t+H} \mid \dot{w}_{1:t})$ has $C^{d+H}$ component Gaussians, one for each setting of $R_{t-d+1}, \ldots, R_{t+H}$, where d is the number of contiguous time steps in the suffix of $\dot{w}_{1:t}$ with at least one missing observation. The training posteriors $\Pr(W_{t-1}, W_t \mid \dot{w}_{1:L})$ have $C^{d_l + 2 + d_r}$ components, where $d_l$ and $d_r$ are the number of consecutive time steps to the left of t-1 and right of t with at least one missing
observation. Because of the nature of data collection, most wind data sets with multiple sites will
have a number of missing observations. Indeed, our Wisconsin (21 sites) and Pacific Northwest (24
sites) data sets have only 5.6% and 6.4% hours of complete data, respectively. Missing observations
are by no means unique to wind. Lauritzen's approach (Lauritzen and Jensen, 2001) for exact inference in conditional linear Gaussian models offers no relief, as the clique sizes in the strongly rooted
junction trees are exponentially large for AR-LG(C) models.
Since exact inference is intractable we investigate approximate methods. We first make a simplification that involves focusing only on observations temporally close to the inference variables. We ignore observations more than K time-steps from t. For example, we approximate
$\Pr(W_{t-1}, W_t \mid \dot{w}_{1:L})$ by the truncated model $\Pr(W_{t-1}, W_t \mid \dot{w}_{t-K:t+K})$. While inference in the truncated model will be less costly than in the full model, it is still $O(C^{2K+1})$ in the worst case,
which is prohibitive for moderate K on large datasets.
Our approaches are based on the general concept of pruning (Koller and Friedman, 2009, Chap. 14),
where all but n mixture components are discarded, in order to approximate a posterior distribution
with a prohibitive number of components. Let P (V ) refer to a desired posterior distribution under
the truncated model given evidence, which we assume has at least one missing observation at each
time step. P (V ) is a mixture of Gaussians with an exponential number of components N , which
PN
we write P (V ) =
j=1 ?j pj (V ). Each mixing proportion ?j is associated with a regime state
sequence and pj (V ) isPthe posterior Gaussian for that sequence. We approximate P (V ) by the
n
distribution Q(V ) =
j=1 ?j qj (V ) with a much smaller number components n in which each
component in Q is equal to one component in P . Without loss of generality we re-order components
of P so that the selected components comprise the first n, and thus qj = pj for j ? n. As pointed out
previously (Lerner and Parr, 2001), this approach is appropriate in many real world settings in which
a large fraction of the probability mass of P (V ) is contained in a small number of components. This
is the case for us as weather regimes tend to persist for a number of hours, and thus regime sequences
with frequent regime switching are highly unlikely.
2.1.1 Approach 1: PRIOR
We consider three approaches to choosing the components in Q. Our first approach is the method
of Lerner and Parr (2001) that chooses the components associated with the n a-priori most likely
state sequences, which, since our discrete variables form a simple chain, we can find efficiently using
the Best Max-Marginal First (BMMF) method (Yanover and Weiss, 2003). The mixing proportions
are set so that $\beta_i \propto \alpha_i$. Although not theoretically justified in Lerner and Parr (2001), we show here that this choice in fact minimizes an upper bound on the a-priori or evidence-free KL divergence from Q to P, $D(Q\|P)$, among all Q made from n components of P. To see this, we first extend Q to have N components where $\beta_j = 0$ for $j > n$ and apply an upper bound on the KL divergence between mixtures of Gaussians (Singer and Warmuth, 1998; Do, 2003), $D(Q\|P) \le D(\beta\|\alpha) + \sum_{j=1}^{N} \beta_j D(q_j\|p_j)$. Since we constrain Q to have components from P, the second term drops out, and
$$D(Q\|P) \le D(\beta\|\alpha) = \sum_{j=1}^{n} \frac{\alpha_j}{Z}\left(\log\frac{\alpha_j}{Z} - \log(\alpha_j)\right) = -\log(Z),$$
where we use $\beta_j \propto \alpha_j$ with proportionality constant $Z = \sum_{j=1}^{n} \alpha_j$, the sum of the chosen mixing probabilities. This leaves $D(Q\|P) \le -\log(Z)$, which is clearly minimized by choosing the n components of P with largest $\alpha_j$. We call this approach PRIOR(n).
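As a concrete (if naive) illustration of PRIOR(n), the sketch below enumerates regime sequences exhaustively instead of running BMMF, so it is only feasible for tiny C and window lengths; the $-\log Z$ value it returns is the bound on $D(Q\|P)$ derived above. The function name and interface are hypothetical.

```python
import itertools
import numpy as np

def prior_n_components(pi, T, L, n):
    """PRIOR(n) sketch: choose the n a-priori most likely regime sequences.
    Exhaustive enumeration (O(C^L)) stands in for BMMF here."""
    C = len(pi)
    seqs, logps = [], []
    for seq in itertools.product(range(C), repeat=L):
        lp = np.log(pi[seq[0]])
        lp += sum(np.log(T[seq[t - 1], seq[t]]) for t in range(1, L))
        seqs.append(seq)
        logps.append(lp)
    top = np.argsort(logps)[::-1][:n]
    alpha = np.exp(np.asarray(logps)[top])
    Z = alpha.sum()
    beta = alpha / Z          # mixing weights, beta_j proportional to alpha_j
    bound = -np.log(Z)        # the bound D(Q || P) <= -log Z derived above
    return [seqs[i] for i in top], beta, bound
```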
2.1.2 Approach 2: HOMO
Our second method for setting Q(V ) is a simple but often effective approach that assumes no regime
changes in the truncated model. This approach, which we call HOMO has n = C components, one
for each homogeneous regime sequence. If the self transition probabilities T (r, r) are largest, then
the most likely regime sequence a-priori is homogeneous, and thus is also chosen by PRIOR(n).
The other components of PRIOR(n) may be only small variations from this homogeneous regime
sequence, however, the components selected by HOMO are very different from one another. This
diversity may be advantageous for prediction.
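A minimal sketch of how HOMO could produce a point forecast: one AR rollout per regime under the constant-regime assumption, mixed by per-regime posterior weights that the caller supplies (how those weights are obtained is elided here). Names and interface are ours, not the paper's.

```python
import numpy as np

def homo_point_forecast(w_t, regime_weights, B, h):
    """Mixture-mean horizon-h forecast under the HOMO approximation:
    each regime r is assumed to persist over the whole window."""
    preds = []
    for r in range(len(regime_weights)):
        w = w_t.copy()
        for _ in range(h):            # cast the AR model out h steps
            w = B[r] @ w
        preds.append(w)
    return np.einsum('r,rd->d', np.asarray(regime_weights), np.array(preds))
```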
2.1.3 Approach 3: POST
Our final method depends on the evidence. We would like to select the components for the top
n regime sequences with maximum posterior likelihood; however, this too is NP-hard (Lerner and Parr, 2001). We instead use a fast approximation in which the posterior potential of settings to regime variables is set by local evidence. We define $\phi_t(r) = \Pr(\dot{w}_t \mid \dot{w}_{t-1}, R_t = r)$ to be the potential for $R_t = r$, and then run BMMF on the model where
$$\Pr(r_{t-K}, \ldots, r_{t+K}) \propto \phi_{t-K}(r_{t-K}) \prod_{t'=t-K+1}^{t+K} \phi_{t'}(r_{t'})\, T(r_{t'-1}, r_{t'}).$$
Note that each $\phi_t(r)$ is the density value of a single Gaussian, and can be computed quickly from model parameters. We call this approach POST(n).

Table 1: Missing data summary. The "Count" row lists the number of hours in our WI data set (21 sites total) in which the number of sites with missing values was exactly equal to the value "# Sites Missing".

# Sites Missing      0      1      2      3      4      5      6     7+
Count             1978   2583   6463   8165   8849   4769   1229   1028
Frequency         5.6%   7.4%  18.4%  23.3%  25.2%  13.6%   3.5%   2.9%
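The local potentials $\phi_t(r)$ defined above remain cheap to compute even with missing data, since the observed coordinates of $w_t$ given $w_{t-1}$ are jointly Gaussian and marginalization is just sub-selection. A sketch, assuming $w_{t-1}$ is fully observed (or already imputed); names are ours:

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_potentials(w_prev, w_t, obs_idx, B, Q):
    """phi_t(r) = Pr(observed part of w_t | w_{t-1}, R_t = r): the density of
    the observed coordinates under regime r's linear-Gaussian model.
    obs_idx: integer indices of the coordinates of w_t actually observed."""
    C = len(B)
    phi = np.empty(C)
    for r in range(C):
        mean = (B[r] @ w_prev)[obs_idx]                  # marginal mean
        cov = Q[r][np.ix_(obs_idx, obs_idx)]             # marginal covariance
        phi[r] = multivariate_normal.pdf(w_t[obs_idx], mean, cov)
    return phi
```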
3 Experimental evaluation
We compare the forecasts of our models to forecasts of persistence models, which represent the
current state-of-the-art for STWF. We design our experiments to answer the following questions.
1. Are STWFs of our single-regime models more accurate than persistence forecasts? 2. Are
STWFs of our models that consider regimes more accurate than those of the single-regime models?
3. Do differences between learned regimes make sense meteorologically? Additionally, we wish to
comparatively evaluate the effectiveness of our approximate inference algorithm.
3.1 Data set
We conduct our evaluation in two meteorologically distinct regions in the United States: Wisconsin
(WI), and the Pacific Northwest (PNW) states of Washington and Oregon. The National Climatic
Data Center (NCDC) maintains a publicly accessible database of hourly historical climatic surface
data, from which we obtained 4 years of data from a number of sites in both regions. The WI
observations span from January 1, 2006 through December 31, 2009, and the PNW observations
span from February 4, 2006 through February 3, 2010. We have data from 21 and 24 sites in WI and
PNW, respectively. This data is available at http://ganymede.cs.uwm.edu/nips2010/.
We collect wind direction and wind speed at each site, as measured at 10 meters above ground
level. Since our primary motivation is wind power forecasting, we prefer wind speed measurements
taken at turbine height (approximately 50-100 meters above ground level). Publicly available turbine
height observations, however, are scarce, so we use the 10 m. data as a compromise and for proof of
concept.
Raw data from the NCDC is approximately hourly, but readings often appear off-the-hour in an
unpredictable fashion. We use the simple rule of selecting the nearest data point within ±10
minutes of the hourly transition. We discard all readings outside this margin. Additionally, NCDC
appends various quality flags to each reading, and we discarded any data which did not pass all
quality checks. These discarded points as well as varying site-specific instrumentation practices
introduce missing observations. Table 1 shows a summary of missing data in the WI data set.
Missing data did not arise from a few misbehaving stations.
3.2 Experimental methodology
Data was assembled into four cross-validation folds, each contiguous and exactly 1 year in length.
For each fold we use the three training years to learn AR-LG(C) models with C = 1, 2, ..., 5. Note
that AR-LG(1) is the standard (non-conditional) auto-regressive model. With each learned model
we forecast wind speeds at all sites and all test-year hours at six horizons (1-6 hours). Thus, for
each geography (WI and PNW) we have 20 learned models (4 folds and C = 1, 2, ..., 5) and 120
prediction sequences (horizon 1-6 hrs for each of the 20 learned models). Note that this entails
?casting out? or unrolling a learned model to reach longer horizons, which as we see below can
impact performance. For the persistence model, we only make a horizon h forecast for target time
t + h if the time t observation at that site is available. For point-predictions we predict the expected
value of the posterior wind distribution at the prediction time.
Figure 2: Mean RMSE (m/s) over all sites and folds, as a function of forecast horizon (hours), for the single-regime model (AR-LG(1)) and persistence models in WI (left) and PNW (right). Errorbars extend one standard deviation above and below the mean.
Figure 3: Site-by-site average RMSE (m/s) for AR-LG(1) and persistence models in WI for 1, 3, and 5 hour horizons. Errorbars show standard deviations calculated across folds (years).
We chose for these experiments to use the HOMO approximation method, with a truncated model
corresponding to 3 hours (K = 1). This approximation method is simplest and suits our domain,
where we expect distinct regimes to generally persist over a period of a few hours.
We use two performance measures to evaluate prediction sequences, test-set log-likelihood (LL) and
root mean squared error (RMSE). The RMSE measure provides an evaluation for point predictions
while the LL provides an evaluation of probabilistic predictions. For a given geographical region
we denote the RMSE of the horizon h prediction sequence made by AR-LG(C) model for site s
and year y by e(h, s, y, C). In a similar way we denote RMSE of a persistence prediction sequence
by ep (h, s, y). We denote collections and aggregates with MATLAB-style notation. For example,
e(1, :, :, 2) is the collection of RMSE values for 1 hour predictions for the 2 regime model across
all sites and years, and mean [e(1, :, :, 2)] and std [e(1, :, :, 2)] are the collection?s mean and standard
deviation.
We calculate LL values of the AR-LG(C) models relative to LL values of a persistent Gaussian
model $w_{t+h} = w_t + \epsilon_h$. Here, $\epsilon_h$ is the horizon-h zero-mean Gaussian noise vector with variance
estimated from the training-set.
In order to make meaningful comparisons between AR-LG(C) and persistence models, we calculate
performance measures for all horizon h prediction sequences from only hours for which a corresponding horizon h persistence prediction is available.
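For concreteness, the masking convention used in these comparisons can be expressed as follows (a sketch; the array layout is our own):

```python
import numpy as np

def rmse(pred, actual):
    """RMSE over only the hours where both forecasts and observations exist."""
    m = ~np.isnan(pred) & ~np.isnan(actual)
    return np.sqrt(np.mean((pred[m] - actual[m]) ** 2))

# e[h-1, s, y] holds e(h, s, y); the MATLAB-style aggregates become, e.g.:
# mean_h1 = e[0].mean()    # mean[e(1, :, :)]
# std_h1  = e[0].std()     # std[e(1, :, :)]
```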
3.3 Results
To compare our approximation methods, we evaluate the three approximate inference procedures,
HOMO, PRIOR(2) and POST(2), using simulated data from a situation with ten sites arrayed linearly (e.g., east-to-west) and two regimes. The parameters of regime 1 were set for an east-to-west moving weather regime, and the parameters of regime 2 were set for a west-to-east weather regime.
Figure 4: Performance of multi- versus single-regime models at a 2 hour prediction horizon, in each of 4 folds: test-set log-likelihood relative to AR-LG(1) (in units of 10^4) across all sites for WI (left) and PNW (right), with the number of regimes C on the x-axis. At C = 1 the value is zero, since this compares AR-LG(1) against itself.
Self transition probabilities were set to 0.8. Observations were generated using these models and
20% were hidden, which is consistent with the missing rate in our data sets. Then we make 2-hr
ahead forecasts at all time points using window size K from 2 to 15 hours. The mean-absolute error
of PRIOR(2) was highest for all K (1.05), HOMO had the lowest overall error (0.95), and there
were surprisingly no obvious trends due to K. The good performance of HOMO supports our hypothesis that the performance of PRIOR(2) suffers from lack of diversity; however, we expected
POST(2) to perform better relative to HOMO, but instead it had an overall error of 0.965.
In all further experiments, we attempted to assess the effectiveness of the AR-LG(C) models, using
the real wind data described above.
To answer the first question above, we compute mean [e(h, :, :, 1)] and mean [ep (h, :, :)] for the
RMSE collections of the AR-LG(1) and persistence models for both geographical locations and all
horizons h = 1, 2, ..., 6. Figure 2 plots the mean RMSE of these collections. The errorbars extend
1 standard deviation unit above and below the mean. Not surprisingly, error increases with horizon
length. In both WI and PNW the AR-LG(1) model has significantly lower RMSE than persistence
for 1 and 2 hour time horizons. For longer horizons the results vary by geography. In PNW the gap
between AR-LG(1) and persistence grows with h, while in in WI the AR-LG(1) performance begins
to degrade relative to persistence starting with h = 3. At 3 and 4 hour horizons we see an increase
in the variance of AR-LG(1), but still a lower mean RMSE than persistence. For h = 5 and h = 6
the persistence model has lower mean RMSE than the AR-LG(1).
To gain insight into decreasing performance at longer horizons in WI, we plot in Figure 3 the mean
and standard deviation RMSE values for the site specific collections e(h, s, :, 1) and ep (h, s, :) at
all WI sites for 1,3 and 5 hour horizons. Each collection here contains four RMSE values, one per
fold. For h = 1 our AR-LG(1) model beats persistence at all sites, usually by multiple standard
deviation units. This is a significant result because persistence forecasts have been shown to be
difficult to improve upon for very short horizons. At h = 3 problems begin to appear. Although
at most sites AR-LG(1) has improved further upon persistence accuracy, two sites display high
variance and one (second from left) has high variance and very high RMSE near 3 m/s. At h = 5
high variance is widespread and the RMSE of the ill behaving sites at h = 3 have grown. This
suggests that large erroneous predictions at a small number of sites spread throughout the system as
it evolves forward in time.
Next, we consider the performance of multiple regime models. For these models we focus on the
LL measure. Figure 4 plots total LL values for AR-LG(2), AR-LG(3), AR-LG(4) and AR-LG(5)
relative to AR-LG(1) for individual years. In both WI and PNW there is a large jump from 1 to 2
regimes. While in WI there is no obvious trend from 2 to 5 regimes, in PNW there is a clear increase
in performance as the number of regimes increases.
Figure 5: Meteorological properties of learned regimes of AR-LG(5) models in WI. (a) Mean wind vectors (u, v) at each of 21 sites in WI, in each of 5 regimes (regime indicated by shape). (b) Mean regime posteriors with respect to test-set hour-of-day (CST), showing diurnal trends. (c) Mean regime posteriors with respect to test-set month.
Increases in forecast skill and test-set log-likelihood indicate that regimes in the multi-regime models
are capturing important generalizable patterns in short-term wind dynamics, whose features ought
to arise from underlying meteorology. Indeed, model parameters exhibit strong clustering patterns
which can be tied to known regional meteorological phenomena. Figure 5 shows an analysis of a
five regime model trained on the WI dataset. Figure 5 (a) plots learned wind vectors in the first
time-slice between the five regimes. Figures 5 (b) and (c) analyze posterior regime likelihoods with
respect to diurnal (daily) and seasonal time-frames. We note strong clusterings in (a) and significant
diurnal and seasonal trends.
4 Conclusion
We have described a model for short-term wind forecasting (STWF), an important task in the wind
power industry. Our model is set apart from previous STWF approaches in three important ways:
Firstly, forecasts are informed by off-site evidence through a representation of the dynamical evolution of winds in the region. Secondly, our models can learn and reason about meteorological regimes
unique to the local climate. Finally, our model is tolerant to missing data which is present in most
sources of wind data. These points are shown empirically through an improvement in forecasting
error versus state-of-the-art, and observation of meteorological properties of learned regimes.
We presented novel approximate inference procedures that enable AR-HMMs to be gracefully used
in situations with missing data. We hope these approaches can be applied to other problem domains
suited to AR-HMMs.
References

Caruana, R. (1997). Multitask learning. Machine Learning, 28:41-75.

Do, M. (2003). Fast approximation of Kullback-Leibler distance for dependence trees and hidden Markov models. IEEE Signal Processing Letters, 10(4):115-118.

Giebel, G. (2003). The state-of-the-art in short-term prediction of wind power. Deliverable Report D1.1, Project Anemos. Available online at http://anemos.cma.fr.

Gneiting, T., Larson, K., Westrick, K., Genton, M. G., and Aldrich, E. (2006). Calibrated probabilistic forecasting at the Stateline wind energy center. Journal of the American Statistical Association, 101(475):968-979.

Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques. MIT Press.

Kusiak, A., Zheng, H., and Song, Z. (2009). Short-term prediction of wind farm power: A data mining approach. IEEE Transactions on Energy Conversion, 24(1):125-136.

Lauritzen, S. L. and Jensen, F. (2001). Stable local computation with conditional Gaussian distributions. Statistics and Computing, 11:191-203.

Lerner, U. and Parr, R. (2001). Inference in hybrid networks: Theoretical limits and practical algorithms. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI-01), pages 310-318. Morgan Kaufmann.

Lindenberg, S. (2008). 20% wind energy by 2030: Increasing wind energy's contribution to U.S. electricity supply. US Department of Energy Report.

Marti, I., San Isidro, M., Cabezn, D., Loureiro, Y., Villanueva, J., Cantero, E., and Perez, I. (2004). Wind power prediction in complex terrain: from the synoptic scale to the local scale. In EAWE Conference, The Science of Making Torque from Wind, Delft, The Netherlands.

Murphy, K. (1998). Fitting a conditional linear Gaussian distribution. http://www.cs.ubc.ca/~murphyk/Papers/learncg.pdf.

Paroli, R., Pistollato, S., Rosa, M., and Spezia, L. (2005). Non-homogeneous Markov mixture of periodic autoregressions for the analysis of air pollution in the lagoon of Venice. In Applied Stochastic Models and Data Analysis (ASMDA-2005), pages 1124-1132.

Perez-Quiros, G., Camacho, M., and Poncela, P. (2010). Green shoots? Where, when and how? Working Papers 2010-04, FEDEA.

Pinson, P. and Madsen, H. (2008). Probabilistic forecasting of wind power at the minute time-scale with Markov-switching autoregressive models.

Piwko, D. and Jordan, G. (2010). The economic value of day-ahead wind forecasts for power grid operations. 2010 UWIG Workshop on Wind Forecasting.

Singer, Y. and Warmuth, M. K. (1998). Batch and on-line parameter estimation of Gaussian mixtures based on the joint entropy. In NIPS, pages 578-584. MIT Press.

Tang, X. (2004). Autoregressive hidden Markov model with application in an El Niño study. Master's thesis, University of Saskatchewan, Saskatoon, Saskatchewan, Canada.

Toscani, D., Archetti, F., Quarenghi, L., Bargna, F., and Messina, E. (2010). A DSS for assessing the impact of environmental quality on emergency hospital admissions. In 2010 IEEE Workshop on Health Care Management (WHCM), pages 1-6.

Weiss, Y. and Freeman, W. T. (2001). Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13(10):2173-2200.

Yanover, C. and Weiss, Y. (2003). Finding the M most probable configurations using loopy belief propagation. In NIPS. MIT Press.

Zheng, H. and Kusiak, A. (2009). Prediction of wind farm power ramp rates: A data-mining approach. Journal of Solar Energy Engineering.
Learning via Gaussian Herding
Koby Crammer
Department of Electrical Enginering
The Technion
Haifa, 32000 Israel
[email protected]
Daniel D. Lee
Dept. of Electrical and Systems Engineering
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
We introduce a new family of online learning algorithms based upon constraining
the velocity flow over a distribution of weight vectors. In particular, we show how
to effectively herd a Gaussian weight vector distribution by trading off velocity
constraints with a loss function. By uniformly bounding this loss function, we
demonstrate how to solve the resulting optimization analytically. We compare the
resulting algorithms on a variety of real world datasets, and demonstrate how these
algorithms achieve state-of-the-art robust performance, especially with high label
noise in the training data.
1 Introduction
Online learning algorithms are simple, fast, and require less memory compared to batch learning
algorithms. Recent work has shown that they can also perform nearly as well as batch algorithms in
many settings, making them quite attractive for a number of large scale learning problems [3]. The
success of an online learning algorithm depends critically upon a tradeoff between fitting the current
data example and regularizing the solution based upon some memory of prior hypotheses. In this
work, we show how to incorporate regularization in an online learning algorithm by constraining the
motion of weight vectors in the hypothesis space. In particular, we demonstrate how to use simple
constraints on the velocity flow field of Gaussian-distributed weight vectors to regularize online
learning algorithms. This process results in herding the motion of the Gaussian weight vectors to
yield algorithms that are particularly robust to noisy input data.
Recent work has demonstrated how parametric information about the weight vector distribution can
be used to guide online learning [1]. For example, confidence weighted (CW) learning maintains a
Gaussian distribution over linear classifier hypotheses and uses it to control the direction and scale
of parameter updates [9]. CW learning has formal guarantees in the mistake-bound model [7];
however, it can over-fit in certain situations due to its aggressive update rules based upon a separable
data assumption. A newer online algorithm, Adaptive Regularization of Weights (AROW) relaxes
this separable assumption, resulting in an adaptive regularization for each training example based
upon its current confidence [8]. This regularization comes in the form of minimizing a bound on the
Kullback-Leibler divergence between Gaussian distributed weight vectors.
Here we take a different microscopic view of the online learning process. Instead of reweighting and
diffusing the weight vectors in hypothesis space, we model them as flowing under a velocity field
given by each data observation. We show that for linear velocity fields, a Gaussian weight vector
distribution will maintain its Gaussianity, with corresponding updates for its mean and covariance.
The advantage of this approach is that we can incorporate different constraints and regularization
on the resulting velocity fields to yield more robust online learning algorithms. In the remainder
of this paper, we elucidate the details of our approach and compare its performance on a variety of
experimental data.
These algorithms maintain a Gaussian distribution over possible weight vectors in hypothesis space.
In traditional stochastic filtering, weight vectors are first reweighted according to how accurately
they describe the current data observation. The remaining distribution is then subjected to random
diffusion, resulting in a new distribution. When the reweighting factor depends linearly upon the
weight vector in combination with a Gaussian diffusion model, a weight vector distribution will
maintain its Gaussianity under such a transformation. The Kalman filter equations then yield the
resulting change in the mean and covariance of the new distribution. Our approach, on the other
hand, updates the weight vector distribution with each observation by herding the weight vectors
using a velocity field. The differences between these two processes are shown in Fig. 1.
2 Background

Consider the following online binary classification problem, which proceeds in rounds. On the i-th round the online algorithm receives an input $x_i \in \mathbb{R}^d$ and applies its current prediction rule to make a prediction $\hat{y}_i \in \mathcal{Y}$, for the binary set $\mathcal{Y} = \{-1, +1\}$. It then receives the correct label $y_i \in \mathcal{Y}$ and suffers a loss $\ell(y_i, \hat{y}_i)$. At this point, the algorithm updates its prediction rule with the pair $(x_i, y_i)$ and proceeds to the next round. A summary of online algorithms can be found in [2].

Figure 1: (a) Traditional stochastic filtering: weight vectors in the hypothesis space are reweighted according to the new observation and undergo diffusion, resulting in a new weight vector distribution. (b) Herding via a velocity field: weight vectors flow in hypothesis space according to a constrained velocity field, resulting in a new weight vector distribution.

An initial description for possible online algorithms is provided by the family of passive-aggressive (PA) algorithms for linear classifiers [5]. The weight vector $w_i$ at each round is updated with the current input $x_i$ and label $y_i$ by optimizing
$$w_{i+1} = \arg\min_w\ \frac{1}{2}\|w - w_i\|^2 + C\,\ell((x_i, y_i), w),\qquad(1)$$
where $\ell((x_i, y_i), w)$ is the squared- or hinge-loss function and C > 0 controls the tradeoff between optimizing the current loss and being close to the old weight vector. Eq. (1) can also be expressed in dual form, yielding the PA-II update equation:
$$w_{i+1} = w_i + \alpha_i y_i x_i,\qquad \alpha_i = \max\{0,\, 1 - y_i(w_i^\top x_i)\}\,/\,\left(\|x_i\|^2 + 1/C\right).\qquad(2)$$
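Eq. (2) is a one-line update; a sketch in Python (with numpy; names are ours):

```python
import numpy as np

def pa2_update(w, x, y, C):
    """PA-II closed-form update of Eq. (2) for the hinge loss."""
    alpha = max(0.0, 1.0 - y * (w @ x)) / (x @ x + 1.0 / C)
    return w + alpha * y * x
```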
The theoretical properties of this algorithm were analyzed by [5], and it was demonstrated on a variety
of tasks (e.g. [3]).
Online confidence-weighted (CW) learning [9, 7] generalized the PA update principle to multivariate Gaussian distributions over the weight vectors $\mathcal{N}(\mu, \Sigma)$ for binary classification. The mean $\mu \in \mathbb{R}^d$ contains the current estimate for the best weight vector, whereas the Gaussian covariance matrix $\Sigma \in \mathbb{R}^{d \times d}$ captures the confidence in this estimate.

CW classifiers are trained according to a PA rule that is modified to track differences in Gaussian distributions. At each round, the new mean and covariance of the weight vector distribution are chosen by optimizing $(\mu_{i+1}, \Sigma_{i+1}) = \arg\min_{\mu,\Sigma} D_{\mathrm{KL}}\left(\mathcal{N}(\mu, \Sigma)\,\|\,\mathcal{N}(\mu_i, \Sigma_i)\right)$ such that $\Pr_{w \sim \mathcal{N}(\mu,\Sigma)}\left[y_i(w \cdot x_i) \ge 0\right] \ge \eta$.

This particular CW rule may over-fit, since it guarantees a correct prediction with likelihood $\eta > 0.5$ at every round. A more recent alternative scheme called AROW (adaptive regularization of weight-vectors) [8] replaces the guaranteed prediction at each round with the following loss function: $\mu_{i+1}, \Sigma_{i+1} = \arg\min_{\mu,\Sigma} D_{\mathrm{KL}}\left(\mathcal{N}(\mu, \Sigma)\,\|\,\mathcal{N}(\mu_i, \Sigma_i)\right) + \lambda_1\,\ell_{h2}(y_i, \mu \cdot x_i) + \lambda_2\, x_i^\top \Sigma x_i$, where $\ell_{h2}(y_i, \mu \cdot x_i) = \left(\max\{0,\, 1 - y_i(\mu \cdot x_i)\}\right)^2$ is the squared-hinge loss suffered using the weight vector $\mu$ and $\lambda_1, \lambda_2 \ge 0$ are two tradeoff hyperparameters. AROW [8] has been shown to perform well in practice, especially for noisy data where CW severely overfits.
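For reference, the AROW optimization above has a closed-form solution; the sketch below uses its usual parameterization with a single regularizer r (so $\lambda_1 = \lambda_2 = 1/(2r)$), following [8]. This is our rendering, not code from either paper:

```python
import numpy as np

def arow_update(mu, Sigma, x, y, r):
    """AROW update in its usual closed form (a sketch following [8])."""
    v = Sigma @ x                              # Sigma x
    beta = 1.0 / (x @ v + r)                   # confidence-scaled step size
    alpha = max(0.0, 1.0 - y * (mu @ x)) * beta
    mu = mu + alpha * y * v                    # mean update
    Sigma = Sigma - beta * np.outer(v, v)      # covariance shrinkage
    return mu, Sigma
```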
In this work, we take the view that the Gaussian distribution over weight vectors is modified by
herding according to a velocity flow field. First we show that any change in a Gaussian distributed
random variable can be related to a linear velocity field:
Theorem 1 Assume that the random variable (r.v.) W is distributed according to a Gaussian distribution, $W \sim \mathcal{N}(\mu, \Sigma)$.
1. The r.v. $U = AW + b$ also has a Gaussian distribution, $U \sim \mathcal{N}\left(b + A\mu,\, A\Sigma A^\top\right)$.
2. Assume that a r.v. U is distributed according to a Gaussian distribution, $U \sim \mathcal{N}(\tilde{\mu}, \tilde{\Sigma})$. Then there exist A and b such that the following linear relation holds: $U = AW + b$.
3. Let $\Theta$ be any orthogonal matrix ($\Theta^\top = \Theta^{-1}$) and define $U = \Sigma^{1/2}\Theta\Sigma^{-1/2}(W - \mu) + \mu$; then both U and W have the same distribution.

Proof: The first property follows easily from linear systems theory. The second property is easily shown by taking $A = \tilde{\Sigma}^{1/2}\Sigma^{-1/2}$ and $b = \tilde{\mu} - \tilde{\Sigma}^{1/2}\Sigma^{-1/2}\mu$. Similarly, for the third property, it suffices to show that $\mathbb{E}[U] = \Sigma^{1/2}\Theta\Sigma^{-1/2}(\mathbb{E}[W] - \mu) + \mu = \mu$, and
$$\mathrm{Cov}(U) = \mathbb{E}\left[(U - \mu)(U - \mu)^\top\right] = \Sigma^{1/2}\Theta\Sigma^{-1/2}\,\mathbb{E}\left[(W - \mu)(W - \mu)^\top\right]\Sigma^{-1/2}\Theta^\top\Sigma^{1/2} = \Sigma^{1/2}\Theta\Sigma^{-1/2}\Sigma\Sigma^{-1/2}\Theta^\top\Sigma^{1/2} = \Sigma^{1/2}\Theta\Theta^\top\Sigma^{1/2} = \Sigma^{1/2}\Sigma^{1/2} = \Sigma.$$
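Property 1 is easy to confirm numerically; the snippet below draws samples of W, maps them through U = AW + b, and checks the resulting mean and covariance (all values are synthetic, and the printed deviations only vanish up to Monte Carlo noise):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)
L = rng.normal(size=(d, d))
Sigma = L @ L.T + np.eye(d)          # a PD covariance
A = rng.normal(size=(d, d))
b = rng.normal(size=d)

W = rng.multivariate_normal(mu, Sigma, size=200_000)
U = W @ A.T + b                      # U = A W + b, applied row-wise

print(np.abs(U.mean(axis=0) - (A @ mu + b)).max())   # ~0: mean is b + A mu
print(np.abs(np.cov(U.T) - A @ Sigma @ A.T).max())   # ~0: cov is A Sigma A^T
```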
Thus, the transformation U = AW + b can be viewed as a velocity flow resulting in a change of the
underlying Gaussian distribution of weight vectors. On the other hand, this microscopic view of the
underlying velocity field contains more information than merely tracking the mean and covariance
of the Gaussian. This can be seen since many different velocity fields result in the same overall mean
and covariance. In the next section, we show how we can define new online learning algorithms by
considering various constraints on the overall velocity field. These new algorithms optimize a loss
function by constraining the parameters of this velocity field.
3 Algorithms
Our algorithms maintain a distribution, or infinite collection of weight vectors {Wi } for each round
i. Given an instance xi it outputs a prediction based upon the majority of these weight vectors. Each
weight vector Wi is then individually updated to Wi+1 according to a generalized PA rule,
$$W_{i+1} = \arg\min_W C_i(W)\quad\text{where}\quad C_i(W) = \frac{1}{2}(W - W_i)^\top \Sigma_i^{-1}(W - W_i) + C\,\ell\left((x_i, y_i), W\right),\qquad(3)$$
and $\Sigma_i$ is a PSD matrix that will be defined shortly. In fact, we assume that $\Sigma_i$ is invertible and thus PD.

Clearly, it is impossible to maintain and update an infinite set of vectors, and thus we employ a parametric density $f_i(W_i; \theta_i)$ to weight each vector. In general, updating each individual weight-vector using some rule (such as the PA update) will modify the parametric family. We thus employ a Gaussian parametric density with $W \sim \mathcal{N}(\mu_i, \Sigma_i)$, and update the distribution collectively,
$$W_{i+1} = A_i W_i + b_i,$$
where $A_i \in \mathbb{R}^{d \times d}$ represents stretching and rotating the distribution, and $b_i \in \mathbb{R}^d$ is an overall translation. Incorporating this linear transformation, we minimize the average of Eq. (3) with respect to the current distribution,
$$(A_i, b_i) = \arg\min_{A,b}\ \mathbb{E}_{W_i \sim \mathcal{N}(\mu_i, \Sigma_i)}\left[C_i(AW_i + b)\right].\qquad(4)$$
We derive the algorithm by computing the expectation in Eq. (4), starting with the first regularization term of Eq. (3). After some algebraic manipulations, and using the first property of Theorem 1 to write $\mu = A\mu_i + b_i$, we get the expected value for the first term of Eq. (3) in terms of $\mu$ and A:
$$\frac{1}{2}(\mu - \mu_i)^\top \Sigma_i^{-1}(\mu - \mu_i) + \frac{1}{2}\,\mathrm{Tr}\!\left((A - I)^\top \Sigma_i^{-1}(A - I)\Sigma_i\right).\qquad(5)$$
Next, we focus on the expectation of the loss function in the second term of Eq. (3).
3.1 Expectation of the Loss Function

We consider the expectation
$$\mathbb{E}_{W_i \sim \mathcal{N}(\mu_i, \Sigma_i)}\left[\ell\left((x_i, y_i),\, AW_i + b\right)\right].\qquad(6)$$
In general, there is no closed form solution for this expectation, and instead we seek an appropriate approximation or bound. For simplicity we consider binary classification, denote the signed margin by $M = y\,(W^\top x)$, and write $\ell((x, y), W) = \ell(M)$.

If the loss is relatively concentrated about its mean, then the loss of the expected weight-vector $\mu$ is a good proxy for Eq. (6). Formally, we can define

Definition 1 Let $\mathcal{F} = \{f(M; \theta) : \theta \in \Theta\}$ be a family of density functions. A loss function is uniformly $\nu$-bounded in expectation with respect to $\mathcal{F}$ if there exists $\nu > 0$ such that for all $\theta \in \Theta$ we have $\mathbb{E}[\ell(M)] \le \ell(\mathbb{E}[M]) + \frac{\nu}{2}\,\mathbb{E}\left[(M - \mathbb{E}[M])^2\right]$, where all expectations are with respect to $M \sim f(M; \theta)$.

We note in passing that if the loss function $\ell$ is convex with respect to W, we always have $\mathbb{E}[\ell(M)] \ge \ell(\mathbb{E}[M])$. For Gaussian distributions we have $\theta = \{\mu, \Sigma\}$, and a loss function $\ell$ is uniformly $\nu$-bounded in expectation if there exists a $\nu$ such that $\mathbb{E}_{\mathcal{N}(\mu,\Sigma)}[\ell((x, y), W)] \le \ell((x, y), \mathbb{E}[W]) + \frac{\nu}{2}\,x^\top \Sigma x$. We now enumerate some particular cases where losses are uniformly $\nu$-bounded.

Proposition 2 Assume that the loss function $\ell(M)$ has a bounded second derivative, $\ell''(M) \le \nu$; then $\ell$ is uniformly $\nu$-bounded in expectation.

Proof: Applying the Taylor expansion about $M = \mathbb{E}[M]$ we get $\ell(M) = \ell(\mathbb{E}[M]) + (M - \mathbb{E}[M])\,\ell'(\mathbb{E}[M]) + \frac{1}{2}(M - \mathbb{E}[M])^2\,\ell''(\zeta)$, for some $\zeta \in [M, \mathbb{E}[M]]$. Taking the expectation of both sides and bounding $\ell''(\zeta) \le \nu$ concludes the proof.

For example, the squared loss $\frac{1}{2}(y - w^\top x)^2$ is uniformly ($\nu$ =) 1-bounded in expectation, since its second derivative is bounded by unity (1). Another example is the log-loss, $\log(1 + \exp(-M))$, which is uniformly 1/4-bounded in expectation. Note that the popular hinge and squared-hinge losses are not even differentiable at M = 1. Nevertheless, we can show explicitly that indeed both are uniformly $\nu$-bounded, though the proof is omitted here due to space considerations. To conclude, for uniformly $\nu$-bounded loss functions, we bound Eq. (6) with $\ell((x_i, y_i), \mu) + \frac{\nu}{2}\,x_i^\top A\Sigma_i A^\top x_i$.

Thus, our online algorithm minimizes the following bound on Eq. (4), with a change of variables from the pair (A, b) to the pair (A, $\mu$), where $\mu$ is the mean of the new distribution:
$$(A_i, \mu_{i+1}) = \arg\min_{A,\mu}\ \frac{1}{2}(\mu - \mu_i)^\top \Sigma_i^{-1}(\mu - \mu_i) + C\,\ell((x_i, y_i), \mu)\qquad(7)$$
$$\qquad\qquad +\ \frac{1}{2}\,\mathrm{Tr}\!\left((A - I)^\top \Sigma_i^{-1}(A - I)\Sigma_i\right) + \frac{C\nu}{2}\,x_i^\top A\Sigma_i A^\top x_i.\qquad(8)$$
In the next section we derive an analytic solution for this problem. We note that, similar to AROW, it is decomposed into two additive terms: Eq. (7), which depends only on $\mu$, and Eq. (8), which depends only on A.
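The Proposition 2 bound is easy to check by Monte Carlo for the log-loss, whose second derivative is at most 1/4 (so $\nu/2 = 1/8$). A quick synthetic check:

```python
import numpy as np

rng = np.random.default_rng(1)
logloss = lambda m: np.log1p(np.exp(-m))

for mean, std in [(0.5, 1.0), (-1.0, 2.0), (2.0, 0.5)]:
    M = rng.normal(mean, std, size=1_000_000)
    lhs = logloss(M).mean()                  # E[l(M)]
    rhs = logloss(mean) + 0.125 * std ** 2   # l(E[M]) + (nu/2) Var(M)
    print(lhs <= rhs)                        # the uniform bound holds
```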
4 Solving the Optimization Problem

We consider here the squared-hinge loss, $\ell((x, y), \mu) = \left(\max\{0,\, 1 - y(\mu^\top x)\}\right)^2$, reducing Eq. (7) to a generalization of PA-II in Mahalanobis distances (see Eq. (2)):
$$\mu_{i+1} = \mu_i + \alpha_i y_i \Sigma_i x_i,\qquad \alpha_i = \max\{0,\, 1 - y_i(\mu_i^\top x_i)\}\,/\,\left(x_i^\top \Sigma_i x_i + 1/C\right).\qquad(9)$$
We now focus on minimizing the second term (Eq. (8)), which depends solely on $A_i$. For simplicity we assume $\nu = 1$ and consider two cases.
4.1 Diagonal Covariance Matrix

We first assume that both $\Sigma_i$ and A are diagonal, and thus also $\Sigma_{i+1}$ is diagonal, so $\Sigma_i$ and A commute with each other. Eq. (8) then becomes $\frac{1}{2}\,\mathrm{Tr}\!\left((A - I)^\top(A - I)\right) + \frac{C}{2}\,x_i^\top A\Sigma_i A^\top x_i$. Denote the r-th diagonal element of $\Sigma_i$ by $(\Sigma_i)_{r,r}$ and the r-th diagonal element of A by $(A)_{r,r}$. The last expression becomes $\sum_r \frac{1}{2}\left((A)_{r,r} - 1\right)^2 + \frac{C}{2}\sum_r x_{i,r}^2\,(A)_{r,r}^2\,(\Sigma_i)_{r,r}$. Taking the derivative with respect to $(A)_{r,r}$ we get
$$(A_i)_{r,r} = 1\,/\,\left(1 + C x_{i,r}^2 (\Sigma_i)_{r,r}\right)\ \Rightarrow\ (\Sigma_{i+1})_{r,r} = (\Sigma_i)_{r,r}\,/\,\left(1 + C x_{i,r}^2 (\Sigma_i)_{r,r}\right)^2.\qquad(10)$$
The last equation is well-defined since the denominator is always greater than or equal to 1.
4.2 Full Covariance Matrix

Expanding Eq. (8) we get $\frac{1}{2}\left[\mathrm{Tr}\left(A^\top \Sigma_i^{-1} A \Sigma_i\right) - \mathrm{Tr}\left(\Sigma_i^{-1} A \Sigma_i\right) - \mathrm{Tr}\left(A^\top \Sigma_i^{-1} \Sigma_i\right) + \mathrm{Tr}\left(\Sigma_i^{-1} \Sigma_i\right)\right] + \frac{C}{2}\,x_i^\top A\Sigma_i A^\top x_i$. Setting the derivative of the last expression with respect to A to zero we get $\Sigma_i^{-1} A \Sigma_i - I + C x_i x_i^\top A \Sigma_i = 0$. We multiply both terms on the right by $\Sigma_i^{-1}$ and combine terms, $\left(\Sigma_i^{-1} + C x_i x_i^\top\right) A = \Sigma_i^{-1}$, yielding
$$A_i = \left(\Sigma_i^{-1} + C x_i x_i^\top\right)^{-1} \Sigma_i^{-1}.\qquad(11)$$
To get $\Sigma_{i+1}$ we first compute its inverse, $\Sigma_{i+1}^{-1} = \left(A \Sigma_i A^\top\right)^{-1}$. Substituting Eq. (11) in the last equation we get
$$\Sigma_{i+1}^{-1} = \left(A \Sigma_i A^\top\right)^{-1} = \Sigma_i^{-1} + \left(2C + C^2 x_i^\top \Sigma_i x_i\right) x_i x_i^\top.\qquad(12)$$
Finally, using the Woodbury identity [12] to compute the updated covariance matrix,
$$\Sigma_{i+1} = \Sigma_i - \Sigma_i x_i x_i^\top \Sigma_i\,\frac{C^2 x_i^\top \Sigma_i x_i + 2C}{\left(1 + C x_i^\top \Sigma_i x_i\right)^2}.\qquad(13)$$
We call the above algorithms NHERD, for Normal (Gaussian) Herd. Pseudocode of the algorithm appears in Fig. 3.

Figure 2: Top and center panels: an illustration of the algorithm's update (see text). Bottom panel: an illustration of a single update for the five algorithms. The cyan ellipse represents the weight vector distribution before the example is observed. The red square represents the mean of the updated distribution, and the five ellipses represent the covariance of each of the algorithms after being given the data example ((1, 2), +1). The ordering of the areas of the five ellipses correlates well with the performance of the algorithms.

4.3 Discussion
Both our update of $\Sigma_{i+1}$ in Eq. (12) and the update of AROW (see Eq. (8) of [8]) have the same structure of adding $\lambda_i x_i x_i^\top$ to $\Sigma_i^{-1}$. AROW sets $\lambda_i = C$ while our update sets $\lambda_i = 2C + C^2 x_i^\top \Sigma_i x_i$. In this aspect, the NHERD update is more aggressive, as it increases the eigenvalues of $\Sigma_i^{-1}$ at a faster rate. Furthermore, its update rate is not constant and depends linearly on the current variance of the margin $x_i^\top \Sigma_i x_i$; the higher the variance, the faster the eigenvalues of $\Sigma_i$ decrease. Lastly, we note that the update matrix $A_i$ can be written as a product of two terms, one depending on the covariance matrix before the update and the other on the covariance matrix after an AROW update. Formally, let $\tilde{\Sigma}_{i+1}$ be the covariance matrix after being updated using the AROW rule, that is, $\tilde{\Sigma}_{i+1}^{-1} = \Sigma_i^{-1} + C x_i x_i^\top$ (see Eq. (8) of [8]). From Eq. (11) we observe that $A_i = \tilde{\Sigma}_{i+1}\Sigma_i^{-1}$, which means that NHERD modifies $\Sigma_i$ if and only if AROW modifies $\Sigma_i$.

The diagonal updates of AROW and NHERD share similar properties. [8] did not specify the specific update for this case, yet using a derivation similar to that of Sec. 4.1 we get that the AROW update for diagonal matrices $\tilde{\Sigma}_{i+1}$ is $(\tilde{\Sigma}_{i+1})_{r,r} = (\Sigma_i)_{r,r}\,/\,\left(1 + C x_{i,r}^2 (\Sigma_i)_{r,r}\right)$. Taking the ratio between the r-th element of Eq. (10) and the last equation we get $(\tilde{\Sigma}_{i+1})_{r,r}\,/\,(\Sigma_{i+1})_{r,r} = 1 + C x_{i,r}^2 (\Sigma_i)_{r,r} \ge 1$.
To conclude, the update of NHERD for diagonal covariance matrices is also more aggressive than
AROW as it increases the (diagonal) elements of its inverse faster than AROW.
An illustration of the two updates appears in Fig. 2 for a problem in a planar 2-dimensional space. The Gaussian distribution before the update is isotropic with mean $\mu = (0, 0)$ and $\Sigma = I_2$. Given the input example $x = (1, 2)$, $y = 1$ we computed both $A$ and $b$ for both the full (top panel) and diagonal (center panel) update. The plot illustrates the update of the mean vector (red square), weight vectors with unit norm $\|w\| = 1$ (blue), and weight vectors with norm of 2, $\|w\| = 2$ (green). The ellipses with dashed lines illustrate the weights before the update, and the ellipses with solid lines illustrate the weight vectors after the update. All the weight vectors above the black dotted line classify the example correctly, and the ones above the dashed lines classify the example with margin of at least 1. The arrows connecting weight vectors from the dashed ellipses to the solid ellipses illustrate the update of individual weight vectors with the linear transformation $w \mapsto A_i (w - \mu_i) + \mu_{i+1}$. In both updates the current mean $\mu_i$ is mapped to the next mean $\mu_{i+1}$. The full update "shrinks" the covariance in the direction orthogonal to the example $y_i x_i$; vectors close to the margin of unit 1 are modified less than vectors far from this margin; vectors with smaller margin are updated more aggressively than vectors with higher margin; even vectors that classify the example correctly with a large margin of at least one are updated, such that their margin is shrunk. This is a consequence of the linear transformation that ties the update between all weight vectors. The diagonal update, as designed, maintains a diagonal matrix, yet shrinks the matrix more in the directions that are more "orthogonal" to the example.

Figure 3: Normal Herd (NHERD)
    Parameter: C > 0
    Initialize: mu_1 = 0, Sigma_1 = I
    for i = 1, ..., m do
        Get input example x_i in R^d
        Predict yhat_i = sign(mu_i^T x_i)
        Get true label y_i and suffer loss 1 if yhat_i != y_i
        if y_i (mu_i^T x_i) <= 1 then
            Set mu_{i+1} = mu_i + y_i * max{0, 1 - y_i (mu_i^T x_i)} / (x_i^T Sigma_i x_i + 1/C) * Sigma_i x_i      (Eq. (9))
            Full covariance: set Sigma_{i+1} = Sigma_i - Sigma_i x_i x_i^T Sigma_i (C^2 x_i^T Sigma_i x_i + 2C) / (1 + C x_i^T Sigma_i x_i)^2      (Eq. (13))
            Diagonal covariance: set (Sigma_{i+1})_{r,r} for r = 1, ..., d using Eq. (14)
        end if
    end for
    Return: mu_{m+1}, Sigma_{m+1}
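The pseudocode translates almost line-for-line into NumPy. The following is a minimal sketch of the full-covariance variant (our own illustration, not the authors' code; `C`, `X`, `y` are placeholders), assuming binary labels in {-1, +1}:

```python
import numpy as np

def nherd_full(X, y, C=1.0):
    """Minimal NHERD (full covariance) sketch: X is (m, d), y in {-1, +1}^m."""
    m, d = X.shape
    mu, Sigma = np.zeros(d), np.eye(d)
    for i in range(m):
        x, yi = X[i], y[i]
        margin = yi * mu.dot(x)
        if margin <= 1.0:                      # update only on margin violations
            v = Sigma.dot(x)                   # Sigma_i x_i
            xSx = x.dot(v)                     # x_i^T Sigma_i x_i
            # Mean update, Eq. (9)
            mu = mu + yi * max(0.0, 1.0 - margin) / (xSx + 1.0 / C) * v
            # Covariance update, Eq. (13)
            Sigma = Sigma - np.outer(v, v) * (C**2 * xSx + 2 * C) / (1 + C * xSx) ** 2
    return mu, Sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X.dot(w_true))
mu, Sigma = nherd_full(X, y, C=1.0)
print(np.mean(np.sign(X.dot(mu)) == y))  # training accuracy of the learned mean
```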
We note in passing that for all previous CW algorithms [7] and AROW [8], a closed-form solution for diagonal matrices was not provided. Instead these papers proposed to diagonalize either $\Sigma_{i+1}$ (called drop) or $\Sigma_{i+1}^{-1}$ (called project), which was then inverted. Together with the exact solution of Eq. (10) we get the following three alternative solutions for diagonal matrices,
$$(\Sigma_{i+1})_{r,r} = \begin{cases} (\Sigma_i)_{r,r} \big/ \left(1 + C x_{i,r}^2 (\Sigma_i)_{r,r}\right)^2 & \text{exact} \\[4pt] 1 \big/ \left( 1/(\Sigma_i)_{r,r} + \left(2C + C^2 x_i^\top \Sigma_i x_i\right) x_{i,r}^2 \right) & \text{project} \\[4pt] (\Sigma_i)_{r,r} - (\Sigma_i)_{r,r}^2 \, x_{i,r}^2 \, \dfrac{C^2 x_i^\top \Sigma_i x_i + 2C}{\left(1 + C x_i^\top \Sigma_i x_i\right)^2} & \text{drop} \end{cases} \qquad (14)$$
We investigate these formulations in the next section. Finally, we note that, similarly to CW and AROW, algorithms that employ full matrices can be combined with Mercer kernels [11, 14], while to the best of our knowledge the diagonal versions cannot.
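For concreteness, here is a small NumPy sketch (ours) of the three diagonal alternatives in Eq. (14); `sigma` holds the diagonal of $\Sigma_i$:

```python
import numpy as np

def diag_updates(sigma, x, C):
    """Return the 'exact', 'project', and 'drop' diagonal updates of Eq. (14).
    sigma: diagonal of Sigma_i (shape (d,)); x: example (shape (d,))."""
    xSx = np.sum(sigma * x**2)          # x^T Sigma_i x for diagonal Sigma_i
    exact = sigma / (1.0 + C * x**2 * sigma) ** 2
    project = 1.0 / (1.0 / sigma + (2 * C + C**2 * xSx) * x**2)
    drop = sigma - sigma**2 * x**2 * (C**2 * xSx + 2 * C) / (1 + C * xSx) ** 2
    return exact, project, drop

sigma = np.ones(3)
x = np.array([1.0, 2.0, 0.0])
for name, s in zip(("exact", "project", "drop"), diag_updates(sigma, x, C=0.5)):
    print(name, s)
```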
5 Empirical Evaluation

We evaluate NHERD on several popular datasets for document classification, optical character recognition (OCR), and phoneme recognition, as well as on action recognition in video. We compare our new algorithm NHERD with the AROW [8] algorithm, which was found to outperform other baselines [8]: the perceptron algorithm [13], Passive-Aggressive (PA) [5], confidence-weighted learning (CW) [9, 7] and the second-order perceptron [1] on these datasets. For both NHERD and AROW we used the three diagonalization schemes mentioned in Eq. (14) in Sec. 4.3. Since AROW Project and AROW Exact are equivalent we omit the latter, yielding a total of five algorithms: NHERD {P, D, E} for Project, Drop, Exact, and similarly AROW {P, D}.
[Figure 4: Performance comparison between algorithms. Each algorithm is represented by a vertex. The weight of an edge between two algorithms is the fraction of datasets in which the top algorithm achieves lower test error than the bottom algorithm. An edge with no head indicates a fraction lower than 60% and a bold edge indicates a fraction greater than 80%. Graphs (left to right) are for noise levels of 0%, 10%, and 30%.]
Although NHERD and AROW are designed primarily for binary classification, we can modify them for use on multi-class problems as follows. Following [4], we generalize binary classification and assume a feature function $f(x, y) \in \mathbb{R}^d$ mapping instances $x \in \mathcal{X}$ and labels $y \in \mathcal{Y}$ into a common space. Given a new example, the algorithm predicts $\hat{y} = \arg\max_z \mu^\top f(x, z)$, and suffers a loss if $y \neq \hat{y}$. It then computes the difference vector $\Delta = f(x, y) - f(x, y')$ for $y' = \arg\max_{z \neq y} \mu^\top f(x, z)$, which replaces $y x$ in NHERD (Fig. 3).
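A minimal sketch of this multi-class reduction (our illustration; `phi` is a hypothetical block one-hot feature map $f(x, y)$ that places $x$ in the block of class $y$):

```python
import numpy as np

def phi(x, y, n_classes):
    """Block one-hot feature map f(x, y): copy x into the block of class y."""
    f = np.zeros(len(x) * n_classes)
    f[y * len(x):(y + 1) * len(x)] = x
    return f

def multiclass_delta(mu, x, y, n_classes):
    """Predict, and return the update direction Delta = f(x, y) - f(x, y')."""
    scores = [mu.dot(phi(x, z, n_classes)) for z in range(n_classes)]
    y_hat = int(np.argmax(scores))
    wrong = [z for z in range(n_classes) if z != y]
    y_prime = max(wrong, key=lambda z: scores[z])   # highest-scoring wrong label
    delta = phi(x, y, n_classes) - phi(x, y_prime, n_classes)
    return y_hat, delta  # delta plays the role of y*x in binary NHERD

mu = np.zeros(3 * 4)
y_hat, delta = multiclass_delta(mu, np.array([1.0, -2.0, 0.5]), y=2, n_classes=4)
print(y_hat, delta)
```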
We conducted an empirical study using the following datasets. First are datasets from [8]: 36 binary
document classification data, and 100 binary OCR data (45 all-pairs of both USPS and MNIST and
1-vs-rest of MNIST). Secondly, we used the nine multi-category document classification datasets
used by [6]. Third, we conducted experiments on a TIMIT phoneme classification task. Here
we used an experimental setup similar to [10] and mapped the 61 phonetic labels into 48 classes.
We then picked 10 pairs of classes to construct binary classification tasks. We focused mainly on
unvoiced phonemes where there is no underlying harmonic source and whose instantiations are
noisy. The ten binary classification problems are identified by a pair of phoneme symbols (one or
two Roman letters). For each of the ten pairs we picked 1,000 random examples from both classes
for training and 4,000 random examples for a test set. These signals were then preprocessed by
computing mel-frequency cepstral coefficients (MFCCs) together with first and second derivatives
and second order interactions, yielding a feature vector of 902 dimensions. Lastly, we also evaluated
our algorithm on an action recognition problem in video under four different conditions. There are
about 100 samples for each of 6 actions. Each sample is represented using a set of 575 positive real
localized spectral content filters from the videos. This yields a total of 156 datasets.
Each result for the text datasets was averaged over 10-fold cross-validation, otherwise a fixed split
into training and test sets was used. Hyperparameters (C for NHERD and r for AROW) and the
number of online iterations (up to 20) were optimized using a single randomized run. In order to
observe each algorithm's ability to handle non-separable data, we performed each experiment using
various levels of artificial label noise, generated by independently flipping binary labels.
Results: We first summarize the results on all datasets excluding the video recognition dataset in Fig. 4, where we computed the number of datasets for which one algorithm achieved a lower test error than another algorithm. The results of this tournament between algorithms are presented as a winning percentage. An edge between two algorithms shows the fraction of the 155 datasets for which the algorithm on top had lower test error than the other algorithm. The three panels correspond to three noise levels: 0%, 10%, and 30%.

We observe from the figure that Project generally outperforms Exact, which in turn outperforms Drop. Furthermore, NHERD outperforms AROW; in particular, NHERD P outperforms AROW P and NHERD D outperforms AROW D. These relations become more prominent when labeling noise is increased in the training data. The bottom panel of Fig. 2 illustrates a single update of each of the five algorithms: AROW P, AROW D, NHERD D, NHERD E, NHERD P. Each of the five ellipses represents the Gaussian weight-vector distribution after a single update on an example
by each of the five algorithms. Interestingly, the resulting volume (area) of the different ellipses roughly corresponds to the overall performance of the algorithms. The best update, NHERD P, has the smallest (lowest-entropy) ellipse, and the update with the worst performance, AROW D, has the largest, highest-entropy ellipse.
More detailed results for NHERD P and AROW P, the overall best performing algorithms, are compared in Fig. 5. NHERD P and AROW P are comparable when there is no added noise, with NHERD P winning a majority of the time. As label noise increases (moving top to bottom in Fig. 5), NHERD P holds up remarkably well. In almost every high-noise evaluation, NHERD P improves over AROW P (as well as over all other baselines, not shown). The bottom-left panel of Fig. 5 shows the relative improvement in accuracy of NHERD P over AROW P on the ten phoneme recognition tasks with additional 30% label noise. The ten tasks are ordered according to their statistical significance according to McNemar's test. The results for the seven rightmost tasks are statistically significant with a p-value less than 0.001. NHERD P outperforms AROW P five times and underperforms twice on these seven significant tests. Finally, the bottom-right panel shows the 10-fold accuracy of the five algorithms over the video data, where NHERD P clearly outperforms all other algorithms by a wide margin.
[Figure 5: Three top rows: accuracy on OCR (left) and text and phoneme (right) classification. Plots compare performance between NHERD P and AROW P. Markers above the line indicate superior NHERD P performance and below the line superior AROW P performance. Label noise increases from top to bottom: 0%, 10% and 30%. NHERD P improves relative to AROW P as noise increases. Bottom left: relative accuracy improvement of NHERD P over AROW P on the ten phoneme classification tasks. Bottom right: accuracy of the five algorithms on the video data. In both cases NHERD P is superior.]
Conclusions: We have seen how to incorporate velocity constraints in an online learning algorithm. In addition to tracking the mean and covariance of a Gaussian weight-vector distribution, regularization of the linear velocity terms is used to herd the normal distribution in the learning process. By bounding the loss function with a quadratic term, the resulting optimization can be solved analytically, yielding the NHERD algorithm. We empirically evaluated the performance of NHERD on a variety of experimental datasets, and showed that the projected NHERD algorithm generally outperforms all other online learning algorithms on these datasets. In particular, NHERD is very robust when random labeling noise is present during training.

Acknowledgments: KC is a Horev Fellow, supported by the Taub Foundations. This work was also supported by German-Israeli Foundation grant GIF-2209-1912.
References
[1] Nicolò Cesa-Bianchi, Alex Conconi, and Claudio Gentile. A second-order perceptron algorithm. SIAM Journal on Computing, 34(3):640–668, 2005.
[2] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[3] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. An online algorithm for large scale image similarity learning. In NIPS, 2009.
[4] Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP, 2002.
[5] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. JMLR, 7:551–585, 2006.
[6] K. Crammer, M. Dredze, and A. Kulesza. Multi-class confidence weighted algorithms. In EMNLP, 2009.
[7] K. Crammer, M. Dredze, and F. Pereira. Exact confidence-weighted learning. In NIPS 22, 2008.
[8] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. In Advances in Neural Information Processing Systems 23, 2009.
[9] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In ICML, 2008.
[10] A. Gunawardana, M. Mahajan, A. Acero, and J. C. Platt. Hidden conditional random fields for phone classification. In Proceedings of Interspeech, 2005.
[11] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London A, 209:415–446, 1909.
[12] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2007.
[13] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958.
[14] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
Smoothness, Low-Noise and Fast Rates
Ambuj Tewari
[email protected]
Computer Science Dept., University of Texas at Austin

Nathan Srebro
[email protected]
Karthik Sridharan
[email protected]
Toyota Technological Institute at Chicago
Abstract
We establish an excess risk bound of $\tilde{O}\big(H R_n^2 + \sqrt{H L^*}\, R_n\big)$ for ERM with an $H$-smooth loss function and a hypothesis class with Rademacher complexity $R_n$, where $L^*$ is the best risk achievable by the hypothesis class. For typical hypothesis classes where $R_n = \sqrt{R/n}$, this translates to a learning rate of $\tilde{O}(RH/n)$ in the separable ($L^* = 0$) case and $\tilde{O}\big(RH/n + \sqrt{L^* RH/n}\big)$ more generally. We also provide similar guarantees for online and stochastic convex optimization of a smooth non-negative objective.

1 Introduction
Consider empirical risk minimization for a hypothesis class $\mathcal{H} = \{h : \mathcal{X} \to \mathbb{R}\}$ w.r.t. some non-negative loss function $\phi(t, y)$. That is, we would like to learn a predictor $h$ with small risk $L(h) = \mathbb{E}[\phi(h(X), Y)]$ by minimizing the empirical risk $\hat{L}(h) = \frac{1}{n}\sum_{i=1}^n \phi(h(x_i), y_i)$ of an i.i.d. sample $(x_1, y_1), \ldots, (x_n, y_n)$.

Statistical guarantees on the excess risk are well understood for parametric (i.e. finite dimensional) hypothesis classes. More formally, these are hypothesis classes with finite VC-subgraph dimension [23] (aka pseudo-dimension). For such classes learning guarantees can be obtained for any bounded loss function (i.e. s.t. $|\phi| \leq b < \infty$), and the relevant measure of complexity is the VC-subgraph dimension.

Alternatively, even for some non-parametric hypothesis classes (i.e. those with infinite VC-subgraph dimension), e.g. the class of low-norm linear predictors $\mathcal{H}_B = \{h_w : x \mapsto \langle w, x \rangle \mid \|w\| \leq B\}$, guarantees can be obtained in terms of scale-sensitive measures of complexity such as fat-shattering dimensions [1], covering numbers [23] or Rademacher complexity [2]. The classical statistical learning theory approach for obtaining learning guarantees for such scale-sensitive classes is to rely on the Lipschitz constant $D$ of $\phi(t, y)$ w.r.t. $t$ (i.e. a bound on its derivative w.r.t. $t$). The excess risk can then be bounded as (in expectation over the sample):

$$L(\hat{h}) \leq L^* + 2 D R_n(\mathcal{H}) = L^* + 2\sqrt{\frac{D^2 R}{n}} \qquad (1)$$

where $\hat{h} = \arg\min_h \hat{L}(h)$ is the empirical risk minimizer (ERM), $L^* = \inf_h L(h)$ is the approximation error, and $R_n(\mathcal{H})$ is the Rademacher complexity, which typically scales as $R_n(\mathcal{H}) = \sqrt{R/n}$. E.g. for $\ell_2$-bounded linear predictors, $R = B^2 \sup \|X\|_2^2$.

In this paper we address two deficiencies of the guarantee (1). First, the bound applies only to loss functions with bounded derivative, like the hinge loss and logistic loss popular for classification, or the absolute-value ($\ell_1$) loss for regression. It is not directly applicable to the squared loss $\phi(t, y) = \frac{1}{2}(t - y)^2$, for which the second derivative is bounded, but not the first. We could try to simply bound the derivative of the squared loss in terms of a bound on the magnitude of $h(x)$, but e.g. for norm-bounded linear predictors $\mathcal{H}_B$ this results in a very disappointing excess risk bound of the form $O\big(\sqrt{B^4 (\max \|X\|)^4 / n}\big)$. One aim of this paper is to provide clean bounds on the excess risk for smooth loss functions such as the squared loss, with a bounded second, rather than first, derivative.
The second deficiency of (1) is the dependence on $1/\sqrt{n}$. The dependence on $1/\sqrt{n}$ might be unavoidable in general. But at least for finite dimensional (parametric) classes, we know it can be improved to a $1/n$ rate when the distribution is separable, i.e. when there exists $h \in \mathcal{H}$ with $L(h) = 0$ and so $L^* = 0$. In particular, if $\mathcal{H}$ is a class of bounded functions with VC-subgraph-dimension $d$ (e.g. $d$-dimensional linear predictors), then in expectation over the sample [22]:

$$L(\hat{h}) \leq L^* + O\!\left( \frac{d D \log n}{n} + \sqrt{\frac{d D L^* \log n}{n}} \right) \qquad (2)$$

The $\sqrt{1/n}$ term disappears in the separable case, and we get a graceful degradation between the $1/n$ rate and the $\sqrt{1/n}$ rate for the non-separable case. Could we get a $1/n$ separable rate, and such a graceful degradation, in the non-parametric case?

As we will show, the two deficiencies are actually related. For non-parametric classes and non-smooth Lipschitz losses, such as the hinge loss, the excess risk might scale as $\sqrt{1/n}$ and not $1/n$, even in the separable case. However, for $H$-smooth non-negative loss functions, where the second derivative of $\phi(t, y)$ w.r.t. $t$ is bounded by $H$, a $1/n$ separable rate is possible. In Section 2 we obtain the following bound on the excess risk (up to logarithmic factors):
$$L(\hat{h}) \leq L^* + \tilde{O}\!\left( H R_n^2(\mathcal{H}) + \sqrt{H L^*}\, R_n(\mathcal{H}) \right) = L^* + \tilde{O}\!\left( \frac{HR}{n} + \sqrt{\frac{H R L^*}{n}} \right) \leq 2 L^* + \tilde{O}\!\left( \frac{HR}{n} \right). \qquad (3)$$
In particular, for $\ell_2$-norm-bounded linear predictors $\mathcal{H}_B$ with $\sup \|X\|_2 \leq 1$, the excess risk is bounded by $O\big(HB^2/n + \sqrt{HB^2 L^*/n}\big)$. Another interesting distinction between parametric and non-parametric classes is that even for the squared loss, the bound (3) is tight, and the non-separable rate of $1/\sqrt{n}$ is unavoidable. This is in contrast to the parametric (finite dimensional) case, where a rate of $1/n$ is always possible for the squared loss, regardless of the approximation error $L^*$ [16]. The differences between parametric and scale-sensitive classes, and between non-smooth, smooth and strongly convex loss functions, are discussed in Section 4 and summarized in Table 1.
The guarantees discussed thus far are general learning guarantees for the stochastic setting that rely only on the Rademacher complexity of the hypothesis class, and are phrased in terms of minimizing some scalar loss function. In Section 3 we consider also the online setting, in addition to the stochastic setting, and present similar guarantees for online and stochastic convex optimization [32, 24]. The guarantees of Section 3 match equation (3) for the special case of a convex loss function and norm-bounded linear predictors, but Section 3 captures a more general setting of optimizing an arbitrary non-negative convex objective, which we require to be smooth (there is no separate discussion of a "predictor" and a scalar loss function in Section 3). Results in Section 3 are expressed in terms of properties of the norm, rather than a measure of concentration like the Rademacher complexity as in (3) and Section 2. However, the online and stochastic convex optimization setting of Section 3 is also more restrictive, as we require the objective to be convex (in Section 2 we make no assumption about the convexity of the hypothesis class $\mathcal{H}$ nor the loss function $\phi$). Specifically, for a non-negative $H$-smooth convex objective over a domain bounded by $B$, we prove that the average online regret (and excess risk of stochastic optimization) is bounded by $O\big(HB^2/n + \sqrt{HB^2 L^*/n}\big)$. Comparing with the bound of $O\big(\sqrt{D^2 B^2/n}\big)$ when the loss is $D$-Lipschitz rather than $H$-smooth [32, 21], we see the same relationship discussed above for ERM. Unlike the bound (3) for the ERM, the convex optimization bound avoids polylogarithmic factors. The results in Section 3 also generalize to smoothness and boundedness with respect to non-Euclidean norms.

Studying the online and stochastic convex optimization setting (Section 3), in addition to ERM (Section 2), has several advantages. First, it allows us to obtain a learning guarantee for an efficient single-pass learning method, namely stochastic gradient descent (or mirror descent), as well as for the non-stochastic regret. Second, the bound we obtain in the convex optimization setting (Section 3) is actually better than the bound for the ERM (Section 2), as it avoids all polylogarithmic and large constant factors. Third, the bound is applicable to other non-negative online or stochastic optimization problems beyond classification, including problems for which ERM is not applicable (see, e.g., [24]).

The detailed proofs of the statements claimed in this paper can be found in the supplementary material corresponding to the paper.
2 Empirical Risk Minimization with Smooth Loss

Recall that the Rademacher complexity of $\mathcal{H}$ for any $n \in \mathbb{N}$ is given by [2]:

$$R_n(\mathcal{H}) = \sup_{x_1, \ldots, x_n \in \mathcal{X}} \; \mathbb{E}_{\sigma \sim \mathrm{Unif}(\{\pm 1\}^n)} \left[ \sup_{h \in \mathcal{H}} \left| \frac{1}{n} \sum_{i=1}^n h(x_i)\, \sigma_i \right| \right] \qquad (4)$$
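As a quick illustration (ours, not the paper's), the empirical Rademacher complexity for a fixed sample can be estimated by Monte Carlo over random signs. For $\ell_2$-bounded linear predictors the inner supremum has the closed form $\sup_{\|w\| \leq B} \big|\frac{1}{n}\sum_i \sigma_i \langle w, x_i\rangle\big| = \frac{B}{n}\big\|\sum_i \sigma_i x_i\big\|_2$:

```python
import numpy as np

def rademacher_linear(X, B=1.0, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    {x -> <w, x> : ||w||_2 <= B} on the fixed sample X (shape (n, d))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        # sup_{||w|| <= B} |(1/n) sum_i sigma_i <w, x_i>| = (B/n) ||sum_i sigma_i x_i||
        vals.append(B / n * np.linalg.norm(sigma @ X))
    return float(np.mean(vals))

X = np.random.default_rng(1).normal(size=(500, 20)) / np.sqrt(20)
print(rademacher_linear(X))  # scales roughly like sqrt(R/n)
```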
Throughout we shall consider the "worst case" Rademacher complexity.

Our starting point is the learning bound (1) that applies to $D$-Lipschitz loss functions, i.e. such that $|\phi'(t, y)| \leq D$ (we always take derivatives w.r.t. the first argument). What type of bound can we obtain if we instead bound the second derivative $\phi''(t, y)$? We will actually avoid talking about the second derivative explicitly, and instead say that a function is $H$-smooth iff its derivative is $H$-Lipschitz. For twice differentiable $\phi$, this just means that $|\phi''| \leq H$. The central observation, which allows us to obtain guarantees for smooth loss functions, is that for a smooth loss the derivative can be bounded in terms of the function value:

Lemma 2.1. For an $H$-smooth non-negative function $f : \mathbb{R} \to \mathbb{R}$, we have: $|f'(t)| \leq \sqrt{4 H f(t)}$.
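A quick numeric sanity check of Lemma 2.1 (ours) on the squared loss $f(t) = (t - y)^2$, which is $H$-smooth with $H = 2$: here $|f'(t)| = 2|t - y|$ while $\sqrt{4 H f(t)} = 2\sqrt{2}\,|t - y|$, so the bound holds with room to spare.

```python
import numpy as np

# Sanity check of Lemma 2.1 for f(t) = (t - y)^2, which is H-smooth with H = 2.
y, H = 0.7, 2.0
t = np.linspace(-5.0, 5.0, 1001)
f = (t - y) ** 2
f_prime = 2.0 * (t - y)
assert np.all(np.abs(f_prime) <= np.sqrt(4.0 * H * f) + 1e-12)
print("max |f'| / sqrt(4 H f):",
      np.max(np.abs(f_prime) / np.sqrt(4.0 * H * f + 1e-30)))  # about 0.707
```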
This lemma allows us to argue that close to the optimum, where the value of the loss is small, so is its derivative. Looking at the dependence of (1) on the derivative bound $D$, we are guided by the following heuristic intuition: since we should be concerned only with the behavior around the ERM, perhaps it is enough to bound $\phi'(\hat{w}, x)$ at the ERM $\hat{w}$. Applying Lemma 2.1 to $\hat{L}(\hat{h})$, we can bound $\big|\mathbb{E}[\phi'(\hat{w}, X)]\big| \leq \sqrt{4 H \hat{L}(\hat{h})}$. What we would actually want is to bound each $|\phi'(\hat{w}, x)|$ separately, or at least have the absolute value inside the expectation; this is where the non-negativity of the loss plays an important role. Ignoring this important issue for the moment and plugging this instead of $D$ into (1) yields $\hat{L}(\hat{h}) \leq L^* + 4\sqrt{H \hat{L}(\hat{h})}\, R_n(\mathcal{H})$. Solving for $\hat{L}(\hat{h})$ yields the desired bound (3).

This rough intuition is captured by the following theorem:
Theorem 1. For an $H$-smooth non-negative loss $\phi$ s.t. $\forall x, y, h\;\; |\phi(h(x), y)| \leq b$, for any $\delta > 0$ we have that with probability at least $1 - \delta$ over a random sample of size $n$, for any $h \in \mathcal{H}$,

$$L(h) \leq \hat{L}(h) + K\left( \sqrt{\hat{L}(h)} \left( \sqrt{H} \log^{1.5} n \; R_n(\mathcal{H}) + \sqrt{\frac{b \log(1/\delta)}{n}} \right) + H \log^3 n \; R_n^2(\mathcal{H}) + \frac{b \log(1/\delta)}{n} \right)$$

and so:

$$L(\hat{h}) \leq L^* + K\left( \sqrt{L^*} \left( \sqrt{H} \log^{1.5} n \; R_n(\mathcal{H}) + \sqrt{\frac{b \log(1/\delta)}{n}} \right) + H \log^3 n \; R_n^2(\mathcal{H}) + \frac{b \log(1/\delta)}{n} \right)$$

where $K < 10^5$ is a numeric constant derived from [20] and [6].

Note that only the "confidence" terms depend on $b = \sup |\phi|$, and this is typically not the dominant term; we believe it is possible to also obtain a bound that holds in expectation over the sample (rather than with high probability) and that avoids a direct dependence on $\sup |\phi|$.
To prove Theorem 1 we use the notion of Local Rademacher Complexity [3], which allows us to focus on the behavior close to the ERM. To this end, consider the following empirically restricted loss class

$$\hat{\mathcal{L}}(r) := \left\{ (x, y) \mapsto \phi(h(x), y) \;:\; h \in \mathcal{H},\; \hat{L}(h) \leq r \right\}$$

Lemma 2.2, presented below, solidifies the heuristic intuition discussed above by showing that the Rademacher complexity of $\hat{\mathcal{L}}(r)$ scales with $\sqrt{Hr}$. The lemma can be seen as a higher-order version of the Lipschitz composition lemma [2], which states that the Rademacher complexity of the unrestricted loss class is bounded by $D R_n(\mathcal{H})$. Here, we use the second, rather than first, derivative, and obtain a bound that depends on the empirical restriction:

Lemma 2.2. For a non-negative $H$-smooth loss $\phi$ bounded by $b$ and any function class $\mathcal{H}$ bounded by $B$:

$$R_n(\hat{\mathcal{L}}(r)) \leq \sqrt{12 H r}\; R_n(\mathcal{H}) \left( 16 \log^{3/2}\!\left( \frac{\sqrt{n}\, B}{R_n(\mathcal{H})} \right) \vee 14 \log^{3/2}\!\left( \frac{12 H B \sqrt{n}}{b} \right) \right)$$

Applying Lemma 2.2, Theorem 1 follows using a standard Local Rademacher argument [3].
2.1 Related Results

Rates faster than $1/\sqrt{n}$ have been previously explored under various conditions, including when $L^*$ is small.
The Finite Dimensional Case: Lee et al [16] showed faster rates for the squared loss, exploiting the strong convexity of this loss function, even when $L^* > 0$, but only with finite VC-subgraph-dimension. Panchenko [22] provides fast rate results for general Lipschitz bounded loss functions, still in the finite VC-subgraph-dimension case. Bousquet [6] provided similar guarantees for linear predictors in Hilbert spaces when the spectrum of the kernel matrix (covariance of $X$) is exponentially decaying, making the situation almost finite dimensional. All these methods rely on finiteness of the effective dimension to provide fast rates. In this case, smoothness is not necessary. Our method, on the other hand, establishes fast rates, when $L^* = 0$, for function classes that do not have finite VC-subgraph-dimension. We show how in this non-parametric case smoothness is necessary and plays an important role (see also Table 1).
Aggregation: Tsybakov [29] studied learning rates for aggregation, where a predictor is chosen from the convex hull of a finite set of base predictors. This is equivalent to an $\ell_1$ constraint where each base predictor is viewed as a "feature". As with $\ell_1$-based analysis, since the bounds depend only logarithmically on the number of base predictors (i.e. dimensionality), and rely on the scale of change of the loss function, they are of a "scale sensitive" nature. For such an aggregate classifier, Tsybakov obtained a rate of $1/n$ when zero (or small) risk is achieved by one of the base classifiers. Using Tsybakov's result, it is not enough for zero risk to be achieved by an aggregate (i.e. bounded $\ell_1$) classifier in order to obtain the faster rate. Tsybakov's core result is thus in a sense more similar to the finite dimensional results, since it allows for a rate of $1/n$ when zero error is achieved by a finite cardinality (and hence finite dimension) class. Tsybakov then used the approximation error of a small class of base predictors w.r.t. a large hypothesis class (i.e. a covering) to obtain learning rates for the large hypothesis class by considering aggregation within the small class. However, these results only imply fast learning rates for hypothesis classes with very low complexity. Specifically, to get learning rates better than $1/\sqrt{n}$ using these results, the covering number of the hypothesis class at scale $\epsilon$ needs to behave as $1/\epsilon^p$ for some $p < 2$. But typical classes, including the class of linear predictors with bounded norm, have covering numbers that scale as $1/\epsilon^2$, and so these methods do not imply fast rates for such function classes. In fact, to get rates of $1/n$ with these techniques, even when $L^* = 0$, requires covering numbers that do not increase with $\epsilon$ at all, and so actually finite VC-subgraph-dimension. Chesneau et al [10] extend Tsybakov's work also to general losses, deriving similar results for Lipschitz loss functions. The same caveats hold: even when $L^* = 0$, rates faster than $1/\sqrt{n}$ require covering numbers that grow slower than $1/\epsilon^2$, and rates of $1/n$ essentially require finite VC-subgraph-dimension. Our work, on the other hand, is applicable whenever the Rademacher complexity (equivalently, covering numbers) can be controlled. Although it uses some similar techniques, it is also rather different from the work of Tsybakov and Chesneau et al, in that it points out the importance of smoothness for obtaining fast rates in the non-parametric case: Chesneau et al relied only on the Lipschitz constant, which we show, in Section 4, is not enough for obtaining fast rates in the non-parametric case, even when $L^* = 0$.
Local Rademacher Complexities: Bartlett et al [3] developed general machinery for proving possible fast rates based on local Rademacher complexities. However, it is important to note that the localized complexity term typically dominates the rate and still needs to be controlled. For example, Steinwart [27] used Local Rademacher Complexity to provide fast rates on the 0/1 loss of Support Vector Machines (SVMs) ($\ell_2$-regularized hinge-loss minimization) based on the so-called "geometric margin condition" and Tsybakov's margin condition. Steinwart's analysis is specific to SVMs. We also use Local Rademacher Complexities in order to obtain fast rates, but do so for general hypothesis classes, based only on the standard Rademacher complexity $R_n(\mathcal{H})$ of the hypothesis class, as well as the smoothness of the loss function and the magnitude of $L^*$, but without any further assumptions on the hypothesis classes themselves.

Non-Lipschitz Loss: Beyond the strong connections between smoothness and fast rates which we highlight, we are also not aware of prior work providing an explicit and easy-to-use result for controlling a generic non-Lipschitz loss (such as the squared loss) solely in terms of the Rademacher complexity.
3 Online and Stochastic Optimization of Smooth Convex Objectives

We now turn to online and stochastic convex optimization. In these settings a learner chooses $w \in \mathcal{W}$, where $\mathcal{W}$ is a closed convex set in a normed vector space, attempting to minimize an objective $\ell(w, z)$ on instances $z \in \mathcal{Z}$, where $\ell : \mathcal{W} \times \mathcal{Z} \to \mathbb{R}$ is an objective function which is convex in $w$. This captures learning linear predictors w.r.t. a convex loss function $\phi(t, y)$, where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ and $\ell(w, (x, y)) = \phi(\langle w, x \rangle, y)$, and extends beyond supervised learning.

We consider the case where the objective $\ell(w, z)$ is $H$-smooth w.r.t. some norm $\|w\|$ (the reader may choose to think of $\mathcal{W}$ as a subset of a Euclidean or Hilbert space, and $\|w\|$ as the $\ell_2$-norm). By this we mean that for any $z \in \mathcal{Z}$ and all $w, w' \in \mathcal{W}$,

$$\|\nabla \ell(w, z) - \nabla \ell(w', z)\|_* \leq H \|w - w'\|$$

where $\|\cdot\|_*$ is the dual norm. The key here is to generalize Lemma 2.1 from scalar smoothness to smoothness w.r.t. a vector $w$:

Lemma 3.1. For an $H$-smooth non-negative $f : \mathcal{W} \to \mathbb{R}$, for all $w \in \mathcal{W}$: $\|\nabla f(w)\|_* \leq \sqrt{4 H f(w)}$.

In order to consider general norms, we will also rely on a non-negative regularizer $F : \mathcal{W} \to \mathbb{R}$ that is 1-strongly convex (see the definition in e.g. [31]) w.r.t. the norm $\|w\|$ for all $w \in \mathcal{W}$. For the Euclidean norm we can use the squared Euclidean norm regularizer: $F(w) = \frac{1}{2}\|w\|^2$.
3.1 Online Optimization Setting

In the online convex optimization setting we consider an $n$-round game played between a learner and an adversary (Nature) where at each round $i$, the player chooses a $w_i \in \mathcal{W}$ and then the adversary picks a $z_i \in \mathcal{Z}$. The player's choice $w_i$ may depend only on the adversary's choices in previous rounds. The goal of the player is to have low average objective value $\frac{1}{n}\sum_{i=1}^n \ell(w_i, z_i)$ compared to the best single choice in hindsight [9].

A classic algorithm for this setting is Mirror Descent [4], which starts at some arbitrary $w_1 \in \mathcal{W}$ and updates $w_{i+1}$ according to $z_i$ and a stepsize $\eta$ (to be discussed later) as follows:

$$w_{i+1} \leftarrow \arg\min_{w \in \mathcal{W}} \; \langle \eta \nabla \ell(w_i, z_i) - \nabla F(w_i),\, w \rangle + F(w) \qquad (5)$$

For the Euclidean norm with $F(w) = \frac{1}{2}\|w\|^2$, the update (5) becomes projected online gradient descent [32]: $w_{i+1} \leftarrow \Pi_{\mathcal{W}}(w_i - \eta \nabla \ell(w_i, z_i))$, where $\Pi_{\mathcal{W}}(w) = \arg\min_{w' \in \mathcal{W}} \|w - w'\|$ is the projection onto $\mathcal{W}$.
Theorem 2. For any $B \in \mathbb{R}$ and $L^*$, if we use stepsize
$$\eta = \frac{1}{H B^2 + \sqrt{H^2 B^4 + H B^2 n L^*}}$$
for the Mirror Descent algorithm, then for any instance sequence $z_1, \ldots, z_n \in \mathcal{Z}$, the average regret w.r.t. any $w^* \in \mathcal{W}$ s.t. $F(w^*) \leq B^2$ and $\frac{1}{n}\sum_{j=1}^n \ell(w^*, z_j) \leq L^*$ is bounded by:

$$\frac{1}{n}\sum_{i=1}^n \ell(w_i, z_i) - \frac{1}{n}\sum_{i=1}^n \ell(w^*, z_i) \;\leq\; \frac{4 H B^2}{n} + 2\sqrt{\frac{H B^2 L^*}{n}}$$

Note that the stepsize depends on the bound $L^*$ on the loss in hindsight. The above theorem can be proved using Lemma 3.1 and Theorem 1 of [26].
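A minimal NumPy sketch (ours) of projected online gradient descent with the stepsize of Theorem 2, for the Euclidean case $F(w) = \frac{1}{2}\|w\|^2$; the squared loss over a Euclidean ball is used as a stand-in smooth non-negative objective:

```python
import numpy as np

def project_ball(w, radius):
    """Euclidean projection onto {w : ||w|| <= radius}."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def ogd_smooth(data, B=1.0, H=2.0, L_star=0.0):
    """Projected OGD with the stepsize eta of Theorem 2.
    data: list of (x, y) with ||x|| <= 1, so the squared loss
    l(w, (x, y)) = (<w, x> - y)^2 is H-smooth with H = 2."""
    n = len(data)
    eta = 1.0 / (H * B**2 + np.sqrt(H**2 * B**4 + H * B**2 * n * L_star))
    w = np.zeros(len(data[0][0]))
    losses = []
    for x, y in data:
        pred = w.dot(x)
        losses.append((pred - y) ** 2)
        grad = 2.0 * (pred - y) * x                       # gradient of the squared loss
        w = project_ball(w - eta * grad, np.sqrt(2.0) * B)  # F(w) <= B^2 iff ||w|| <= sqrt(2)*B
    return w, float(np.mean(losses))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x|| <= 1
w_true = np.array([0.6, -0.3])
data = [(x, w_true.dot(x)) for x in X]
w, avg_loss = ogd_smooth(data, B=1.0, H=2.0, L_star=0.0)
print(avg_loss)  # separable case: average loss decays roughly like 1/n
```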
3.2 Stochastic Optimization

An online algorithm can also serve as an efficient one-pass learning algorithm in the stochastic setting. Here, we again consider an i.i.d. sample $z_1, \ldots, z_n$ from some unknown distribution (as in Section 2), and we would like to find $w$ with low risk $L(w) = \mathbb{E}[\ell(w, Z)]$. When $z = (x, y)$ and $\ell(w, z) = \phi(\langle w, x \rangle, y)$ this agrees with the supervised learning risk discussed in the Introduction and analyzed in Section 2. But instead of focusing on the ERM, we run Mirror Descent on the sample, and then take $\bar{w} = \frac{1}{n}\sum_{i=1}^n w_i$. Standard arguments [8] allow us to convert the online regret bound of Theorem 2 to a bound on the excess risk:

Corollary 3. For any $B \in \mathbb{R}$ and $L^*$, if we run Mirror Descent on the sample with
$$\eta = \frac{1}{H B^2 + \sqrt{H^2 B^4 + H B^2 n L^*}},$$
then for any $w^* \in \mathcal{W}$ with $F(w^*) \leq B^2$ and $L(w^*) \leq L^*$, in expectation over the sample:

$$L(\bar{w}_n) - L(w^*) \;\leq\; \frac{4 H B^2}{n} + 2\sqrt{\frac{H B^2 L^*}{n}}.$$
It is instructive to contrast this guarantee with similar-looking guarantees derived recently in the stochastic convex optimization literature [14]. There, the model is stochastic first-order optimization, i.e. the learner gets to see an unbiased estimate $\nabla \ell(w, z_i)$ of the gradient of $L(w)$. The variance of the estimate is assumed to be bounded by $\sigma^2$. The expected accuracy after $n$ gradient evaluations then has two terms: an "accelerated" term that is $O(H/n^2)$ and a slow $O(\sigma/\sqrt{n})$ term. While this result is applicable more generally (since it does not require non-negativity of $\ell$), it is not immediately clear if our guarantees can be derived using it. The main difficulty is that $\sigma$ depends on the norm of the gradient estimates. Thus, it cannot be bounded in advance even if we know that $L(w^*)$ is small. That said, it is intuitively clear that towards the end of the optimization process the gradient norms will typically be small if $L(w^*)$ is small, because of the self-bounding property (Lemma 3.1).
It is interesting to note that using stability arguments, a guarantee very similar to Corollary 3, avoiding the polylogarithmic factors of Theorem 1 as well as the dependence on the bound on the loss, can be obtained also for a "batch" learning rule similar to ERM, but incorporating regularization. For a given regularization parameter $\lambda > 0$, define the regularized empirical loss as $\hat{L}_\lambda(w) := \hat{L}(w) + \lambda F(w)$ and consider the Regularized Empirical Risk Minimizer

$$\hat{w}_\lambda = \arg\min_{w \in \mathcal{W}} \hat{L}_\lambda(w) \qquad (6)$$

The following theorem provides a bound on the excess risk similar to Corollary 3:

Theorem 4. For any $B \in \mathbb{R}$ and $L^*$, if we set
$$\lambda = \frac{128 H}{n} + \sqrt{\left(\frac{128 H}{n}\right)^2 + \frac{128 H L^*}{n B^2}}$$
then for all $w^* \in \mathcal{W}$ with $F(w^*) \leq B^2$ and $L(w^*) \leq L^*$, we have that in expectation over a sample of size $n$:

$$L(\hat{w}_\lambda) - L(w^*) \;\leq\; \frac{256 H B^2}{n} + \sqrt{\frac{2048 H B^2 L^*}{n}}.$$

To prove Theorem 4 we use stability arguments similar to the ones used by Shalev-Shwartz et al [24], which are in turn based on Bousquet and Elisseeff [7]. However, while Shalev-Shwartz et al [24] use the notion of uniform stability, here it is necessary to look at stability in expectation to get the faster rates.
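For the Euclidean case $F(w) = \frac{1}{2}\|w\|^2$ and the squared loss, the regularized ERM (6) with the $\lambda$ of Theorem 4 is just ridge regression at a specific regularization level. A minimal sketch (ours), solved via the normal equations:

```python
import numpy as np

def theorem4_lambda(n, H, B, L_star):
    """Regularization level from Theorem 4 (a sketch; constants as stated)."""
    a = 128.0 * H / n
    return a + np.sqrt(a**2 + 128.0 * H * L_star / (n * B**2))

def regularized_erm(X, y, lam):
    """Ridge solution of min_w (1/n)||Xw - y||^2 + (lam/2)||w||^2."""
    n, d = X.shape
    A = 2.0 / n * X.T @ X + lam * np.eye(d)
    b = 2.0 / n * X.T @ y
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n, d = 400, 10
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_star = rng.normal(size=d); w_star /= np.linalg.norm(w_star)
y = X @ w_star + 0.1 * rng.normal(size=n)
lam = theorem4_lambda(n, H=2.0, B=1.0, L_star=0.01)
w_hat = regularized_erm(X, y, lam)
print(np.linalg.norm(w_hat - w_star))
```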
4 Tightness

In this section we return to the learning rates of the ERM for parametric and for scale-sensitive hypothesis classes (i.e. in terms of the dimensionality and in terms of scale-sensitive complexity measures), discussed in the Introduction and analyzed in Section 2. We compare the guarantees on the learning rates in different situations, identify differences between the parametric and scale-sensitive cases and between the smooth and non-smooth cases, and argue that these differences are real by showing that the corresponding guarantees are tight. Although we discuss the tightness of the learning guarantees for ERM in the stochastic setting, similar arguments can also be made for online learning.

Table 1 summarizes the bounds on the excess risk of the ERM implied by Theorem 1, as well as previous bounds for Lipschitz loss on finite-dimensional [22] and scale-sensitive [2] classes, and a bound for the squared loss on finite-dimensional classes [9, Theorem 11.7] that can be generalized to any smooth strongly convex loss.
    Loss function is:                    | Parametric (dim(H) <= d, |h| <= 1)  | Scale-Sensitive (R_n(H) <= sqrt(R/n))
    -------------------------------------|-------------------------------------|--------------------------------------
    D-Lipschitz                          | dD/n + sqrt(dD L*/n)                | sqrt(D^2 R / n)
    H-smooth                             | dH/n + sqrt(dH L*/n)                | HR/n + sqrt(HR L*/n)
    H-smooth and lambda-strongly convex  | (H/lambda) * dH/n                   | HR/n + sqrt(HR L*/n)

Table 1: Bounds on the excess risk, up to polylogarithmic factors.
We shall now show that the $1/\sqrt{n}$ dependencies in Table 1 are unavoidable. To do so, we will consider the class $\mathcal{H} = \{x \mapsto \langle w, x \rangle : \|w\| \leq 1\}$ of $\ell_2$-bounded linear predictors (all norms in this section are Euclidean), with different loss functions and various specific distributions over $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\| \leq 1\}$ and $\mathcal{Y} = [0, 1]$. For the non-parametric lower bounds, we will allow the dimensionality $d$ to grow with the sample size $n$.

Infinite dimensional, Lipschitz (non-smooth), separable
Consider the absolute difference loss $\phi(h(x), y) = |h(x) - y|$, take $d = 2n$ and consider the following distribution: $X$ is uniformly distributed over the $d$ standard basis vectors $e_i$, and if $X = e_i$, then $Y = \frac{1}{\sqrt{n}} r_i$, where $r_1, \ldots, r_d \in \{\pm 1\}$ is an arbitrary sequence of signs unknown to the learner. Taking $w^* = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} r_i e_i$, we have $\|w^*\| = 1$ and $L^* = L(w^*) = 0$. However, any sample $(x_1, y_1), \ldots, (x_n, y_n)$ reveals at most $n$ of the $2n$ signs $r_i$, and no information on the remaining signs. This means that for any learning algorithm there exists a choice of the $r_i$'s such that on at least $n$ of the remaining points not seen by the learner, the learner has to suffer a loss of at least $1/\sqrt{n}$, yielding an overall risk of at least $1/\sqrt{4n}$.
Infinite dimensional, smooth, non-separable, even if strongly convex
Consider the squared loss $\phi(h(x), y) = (h(x) - y)^2$, which is 2-smooth and 2-strongly convex. For any $\sigma \geq 0$ let $d = \sqrt{n}/\sigma$ and consider the following distribution: $X$ is uniform over the $e_i$ as before, but this time $Y \mid X$ is random, with $Y \mid (X = e_i) \sim \mathcal{N}\!\big(\frac{r_i}{2\sqrt{d}}, \sigma^2\big)$, where again the $r_i$ are pre-determined random signs unknown to the learner. The minimizer of the expected risk is $w^* = \sum_{i=1}^{d} \frac{r_i}{2\sqrt{d}} e_i$, with $\|w^*\| = \frac{1}{2}$ and $L^* = L(w^*) = \sigma^2$. Furthermore, for any $w \in \mathcal{W}$,

$$L(w) - L(w^*) = \mathbb{E}\big[\langle w - w^*, x \rangle^2\big] = \frac{1}{d} \sum_{i=1}^{d} (w[i] - w^*[i])^2 = \frac{1}{d} \|w - w^*\|^2$$

If the norm constraint becomes tight, i.e. $\|\hat{w}\| = 1$, then $L(\hat{w}) - L(w^*) \geq 1/(4d) = \sigma/(4\sqrt{n}) = \sqrt{L^*}/(4\sqrt{n})$. Otherwise, each coordinate is a separate mean estimation problem with $n_i$ samples, where $n_i$ is the number of appearances of $e_i$ in the sample. We have $\mathbb{E}\big[(\hat{w}[i] - w^*[i])^2\big] = \sigma^2/n_i$, and so $L(\hat{w}) - L^* = \frac{1}{d}\|\hat{w} - w^*\|^2 = \frac{1}{d}\sum_{i=1}^{d} \sigma^2/n_i \geq \sqrt{L^*/n}$.
Finite dimensional, smooth, not strongly convex, non-separable:
Take $d = 1$, with $X = 1$ with probability $q$ and $X = 0$ with probability $1 - q$. Conditioned on $X = 0$ let $Y = 0$ deterministically, while conditioned on $X = 1$ let $Y = +1$ with probability $p = \frac{1}{2} + \frac{0.2}{\sqrt{qn}}$ and $Y = -1$ with probability $1 - p$. Consider the following 1-smooth loss:

$$\phi(h(x), y) = \begin{cases} (h(x) - y)^2 & \text{if } |h(x) - y| \leq 1/2 \\ |h(x) - y| - 1/4 & \text{if } |h(x) - y| \geq 1/2 \end{cases}$$

First, irrespective of the choice of $w$, when $x = 0$ we always have $h(x) = 0$ and so suffer no loss. This happens with probability $1 - q$. Next, observe that for $p > 0.5$ the optimal predictor is $w^* \geq 1/2$. However, for $n > 20$, with probability at least $0.25$, $\sum_{i=1}^{n} y_i < 0$, and so $\hat{w} \leq -1/2$. Hence, $L(\hat{w}) - L^* > L(-1/2) - L(1/2) = 0.16\sqrt{q/n}$. However, for $p > 0.5$ and $n > 20$, $L^* > q/2$, and so with probability $0.25$, $L(\hat{w}) - L^* > 0.32\sqrt{L^*/n}$.
5 Implications

5.1 Improved Margin Bounds
"Margin bounds" provide a bound on the expected zero-one loss of a classifier based on its margin zero-one error on the training sample. Koltchinskii and Panchenko [13] provide margin bounds for a generic class $\mathcal{H}$ based on the Rademacher complexity of the class. This is done by using a non-smooth Lipschitz "ramp" loss that upper bounds the zero-one loss and is upper-bounded by the margin zero-one loss. However, such an analysis unavoidably leads to a $1/\sqrt{n}$ rate, even in the separable case. Following the same idea, we instead use the following smooth "ramp":
$$\phi(t) = \begin{cases} 1 & t \leq 0 \\ \dfrac{1 + \cos(\pi t/\gamma)}{2} & 0 < t < \gamma \\ 0 & t \geq \gamma \end{cases}$$

This loss function is $\frac{\pi^2}{2\gamma^2}$-smooth, is lower bounded by the zero-one loss, and is upper bounded by the $\gamma$-margin loss. Using Theorem 1 we can now provide improved margin bounds for the zero-one loss of any classifier based on its empirical margin error. Denote by $\mathrm{err}(h) = \mathbb{E}\big[\mathbf{1}_{\{h(x) \neq y\}}\big]$ the zero-one risk, and for any $\gamma > 0$ and sample $(x_1, y_1), \ldots, (x_n, y_n) \in \mathcal{X} \times \{\pm 1\}$ define the $\gamma$-margin empirical zero-one loss as $\widehat{\mathrm{err}}_\gamma(h) := \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\{y_i h(x_i) < \gamma\}}$.
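A small NumPy sketch (ours) of this smooth ramp and the empirical margin error; `gamma` is the margin parameter:

```python
import numpy as np

def smooth_ramp(t, gamma):
    """Smooth ramp: 1 for t <= 0, cosine interpolation on (0, gamma), 0 for t >= gamma."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 0.0, 1.0,
           np.where(t >= gamma, 0.0, (1.0 + np.cos(np.pi * t / gamma)) / 2.0))

def margin_err(h_vals, y, gamma):
    """Empirical gamma-margin zero-one loss: fraction with y * h(x) < gamma."""
    return float(np.mean(y * h_vals < gamma))

y = np.array([1, -1, 1, 1])
h_vals = np.array([0.9, -0.05, 0.2, -0.3])
print(smooth_ramp(y * h_vals, gamma=0.5))  # sandwiched between 0/1 and margin loss
print(margin_err(h_vals, y, gamma=0.5))
```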
Theorem 5. For any hypothesis class $\mathcal{H}$ with $|h| \leq b$, and any $\delta > 0$, with probability at least $1 - \delta$, simultaneously for all margins $\gamma > 0$ and all $h \in \mathcal{H}$:

$$\mathrm{err}(h) \leq \widehat{\mathrm{err}}_\gamma(h) + K\left( \sqrt{\widehat{\mathrm{err}}_\gamma(h)} \left( \frac{\log^{1.5} n}{\gamma}\, R_n(\mathcal{H}) + \sqrt{\frac{\log(\log(\frac{4b}{\gamma})/\delta)}{n}} \right) + \frac{\log^3 n}{\gamma^2}\, R_n^2(\mathcal{H}) + \frac{\log(\log(\frac{4b}{\gamma})/\delta)}{n} \right)$$

where $K$ is the numeric constant from Theorem 1. In particular, for an appropriate numeric constant $K$:

$$\mathrm{err}(h) \leq 1.01\, \widehat{\mathrm{err}}_\gamma(h) + K\left( \frac{2 \log^3 n}{\gamma^2}\, R_n^2(\mathcal{H}) + \frac{2 \log(\log(\frac{4b}{\gamma})/\delta)}{n} \right)$$
Improved margin bounds of the above form have previously been shown specifically for linear prediction in a Hilbert space, based on the PAC-Bayes theorem [19, 15]. However, PAC-Bayes based results are specific to a certain linear function class. Theorem 5, in contrast, is a generic concentration-based result that can be applied to any function class.
5.2 Interaction of Norm and Dimension

Consider the problem of learning a low-norm linear predictor with respect to the squared loss $\phi(t, y) = (t - y)^2$, where $\mathcal{X} \subseteq \mathbb{R}^d$, for finite but very large $d$, and where the expected norm of $X$ is low. Specifically, let $X$ be Gaussian with $\mathbb{E}\|X\|^2 = B^2$, $Y = \langle w^*, X \rangle + \mathcal{N}(0, \sigma^2)$ with $\|w^*\| = 1$, and consider learning a linear predictor using $\ell_2$ regularization. What determines the sample complexity? How does the error decrease as the sample size increases? From a scale-sensitive statistical learning perspective, we expect that the sample complexity, and the decrease of the error, should depend on the norm $B$, especially if $d \gg B^2$. However, for any fixed $d$ and $B$, even if $d \gg B^2$, asymptotically as the number of samples increases, the excess risk of norm-constrained or norm-regularized regression actually behaves as $L(\hat{w}) - L^* \approx \frac{d}{n}\sigma^2$, and depends (to first order) only on the dimensionality $d$ and not on $B$ [17].

The asymptotic dependence on the dimensionality alone can be understood through Table 1. In this non-separable situation, parametric complexity controls can lead to a $1/n$ rate, ultimately dominating the $1/\sqrt{n}$ rate resulting from $L^* > 0$ when considering the scale-sensitive, non-parametric complexity control $B$. Combining Theorem 4 with the asymptotic $\frac{d}{n}\sigma^2$ behavior, and noting that in the worst case we can predict using the zero vector, yields the following overall picture of the expected excess risk of ridge regression with an optimally chosen $\lambda$:

$$L(\hat{w}_\lambda) - L^* \leq O\!\left( \min\!\left( B^2,\; \frac{B^2}{n} + \frac{B\sigma}{\sqrt{n}},\; \frac{d\sigma^2}{n} \right) \right)$$

Roughly speaking, each term above describes the behavior in a different regime of the sample size. The first regime has excess risk of order $B^2$, which holds until $n = \Theta(B^2)$. The second ("low-noise") regime is one where the excess risk is dominated by the norm and behaves as $B^2/n$, until $n = \Theta(B^2/\sigma^2)$ and $L(\hat{w}) = \Theta(L^*)$. The third ("slow") regime, where the excess risk is controlled by the norm and the approximation error, behaves as $B\sigma/\sqrt{n}$, until $n = \Theta(d^2\sigma^2/B^2)$ and $L(\hat{w}) = L^* + \Theta(B^2/d)$. The fourth ("asymptotic") regime is where the excess risk behaves as $d\sigma^2/n$. This sheds further light on recent work by Liang and Srebro [18] based on exact asymptotics.
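The regime structure is easy to inspect numerically. A small sketch (ours) evaluates the min-of-three bound above over a range of sample sizes and reports which term is active:

```python
import numpy as np

def excess_risk_bound(n, B, sigma, d):
    """Min-of-three bound (up to constants): B^2, B^2/n + B*sigma/sqrt(n), d*sigma^2/n."""
    terms = {
        "trivial": B**2,
        "norm": B**2 / n + B * sigma / np.sqrt(n),
        "dimension": d * sigma**2 / n,
    }
    name = min(terms, key=terms.get)
    return name, terms[name]

B, sigma, d = 2.0, 0.1, 10_000
for n in [1, 10, 1_000, 1_000_000, 10_000_000]:
    print(n, *excess_risk_bound(n, B, sigma, d))
```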
5.3 Sparse Prediction

The use of the $\ell_1$ norm has become popular for learning sparse predictors in high dimensions, as in the LASSO. The LASSO estimator [28] $\hat{w}$ is obtained by considering the squared loss $\phi(z, y) = (z - y)^2$ and minimizing $\hat{L}(w)$ subject to $\|w\|_1 \leq B$. Let us assume there is some (unknown) sparse reference predictor $w^0$ that has low expected loss and sparsity (number of non-zeros) $\|w^0\|_0 = k$, and that $\|x\|_\infty \leq 1$, $y \leq 1$. In order to choose $B$ and apply Theorem 1 in this setting, we need to bound $\|w^0\|_1$. This can be done by, e.g., assuming that the features $x[i]$ in the support of $w^0$ are mutually uncorrelated. Under such an assumption we have $\|w^0\|_1^2 \leq k\, \mathbb{E}\langle w^0, x \rangle^2 \leq 2k(L(w^0) + \mathbb{E} y^2) \leq 4k$. Thus, Theorem 1 along with Rademacher complexity bounds from [11] gives us

$$L(\hat{w}) \leq L(w^0) + \tilde{O}\!\left( \frac{k \log(d)}{n} + \sqrt{\frac{k\, L(w^0) \log(d)}{n}} \right) \qquad (7)$$

It is possible to relax the no-correlation assumption to a bound on the correlations, as in mutual incoherence, or to other weaker conditions [25]. But in any case, unlike typical analyses for compressed sensing, where the goal is recovering $w^0$ itself, here we are only concerned with correlations inside the support of $w^0$. Furthermore, we do not require that the optimal predictor is sparse or that the model is well specified: only that there exists a low-risk predictor using a small number of fairly uncorrelated features.

Bounds similar to (7) have been derived using specialized arguments [12, 30, 5]; here we demonstrate that bounds of these forms can be obtained under simple conditions, using the generic framework we suggest. It is also interesting to note that the methods and results of Section 3 can also be applied to this setting. We use the entropy regularizer

$$F(w) = B \sum_{i} w[i] \log\!\left( \frac{w[i]}{1/d} \right) + \frac{B^2}{e} \qquad (8)$$

which is non-negative and 1-strongly convex with respect to $\|w\|_1$ on $\mathcal{W} = \{w \in \mathbb{R}^d \mid w[i] \geq 0,\; \|w\|_1 \leq B\}$, with $F(w) \leq B^2(1 + \log d)$ (we consider here only non-negative weights; in order to allow $w[i] < 0$ we can also include each feature's negation). Recalling that $\|w^0\|_1 \leq 2\sqrt{k}$ and using $B = 2\sqrt{k}$ in the entropy regularizer (8), we have from Theorem 4 that $L(\hat{w}_\lambda) \leq L(w^0) + O\!\left( \frac{k \log(d)}{n} + \sqrt{\frac{k\, L(w^0) \log(d)}{n}} \right)$, where $\hat{w}_\lambda$ is the regularized empirical minimizer (6) using the entropy regularizer (8) with $\lambda$ as in Theorem 4. The advantage here is that using Theorem 4 instead of Theorem 1 avoids the extra logarithmic factors.
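A small sketch (ours) of evaluating the entropy regularizer (8) as transcribed above; the reference point $1/d$ and the $B^2/e$ offset follow our reading of the garbled display:

```python
import numpy as np

def entropy_regularizer(w, B):
    """F(w) = B * sum_i w[i] * log(w[i] / (1/d)) + B^2/e, for w >= 0, ||w||_1 <= B.
    (Our transcription of Eq. (8); convention: 0 * log 0 = 0.)"""
    d = len(w)
    w = np.asarray(w, dtype=float)
    terms = np.where(w > 0, w * np.log(np.maximum(w, 1e-300) * d), 0.0)
    return B * terms.sum() + B**2 / np.e

d, k = 1000, 4
B = 2 * np.sqrt(k)                      # B = 2*sqrt(k) as in the text
rng = np.random.default_rng(0)
w = rng.random(d); w = B * w / w.sum()  # a feasible point with ||w||_1 = B
print(entropy_regularizer(w, B))        # finite, grows only logarithmically with d
```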
References
[1] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence, and learnability. FOCS, 0:292–301, 1993.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, 2002.
[3] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497–1537, 2005.
[4] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167–175, 2003.
[5] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[6] O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002.
[7] Olivier Bousquet and André Elisseeff. Stability and generalization. J. Mach. Learn. Res., 2:499–526, 2002.
[8] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. In NIPS, pages 359–366, 2002.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] Christophe Chesneau and Guillaume Lecué. Adapting to unknown smoothness by aggregation of thresholded wavelet estimators, 2006.
[11] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, 2008.
[12] V. Koltchinskii. Sparsity in penalized empirical risk minimization. Ann. Inst. H. Poincaré Probab. Statist., 45(1):7–57, 2009.
[13] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. of Stats., 30(1):1–50, 2002.
[14] G. Lan. Convex Optimization Under Inexact First-order Information. PhD thesis, Georgia Institute of Technology, 2009.
[15] J. Langford and J. Shawe-Taylor. PAC-Bayes & margins. In Advances in Neural Information Processing Systems 15, pages 423–430, 2003.
[16] Wee Sun Lee, Peter L. Bartlett, and Robert C. Williamson. The importance of convexity in learning with squared loss. IEEE Trans. on Information Theory, 1998.
[17] P. Liang, F. Bach, G. Bouchard, and M. I. Jordan. Asymptotically optimal regularization in smooth parametric models. In NIPS, 2010.
[18] P. Liang and N. Srebro. On the interaction between norm and dimensionality: Multiple regimes in learning. In ICML, 2010.
[19] D. A. McAllester. Simplified PAC-Bayesian margin bounds. In COLT, pages 203–215, 2003.
[20] Shahar Mendelson. Rademacher averages and phase transitions in Glivenko-Cantelli classes. IEEE Trans. on Information Theory, 48(1):251–263, 2002.
[21] A. Nemirovski and D. Yudin. Problem complexity and method efficiency in optimization. Nauka Publishers, Moscow, 1978.
[22] D. Panchenko. Some extensions of an inequality of Vapnik and Chervonenkis. Electronic Communications in Probability, 7:55–65, 2002.
[23] David Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984.
[24] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[25] S. Shalev-Shwartz, N. Srebro, and T. Zhang. Trading accuracy for sparsity. Technical report, TTI-C, 2009. Available at ttic.uchicago.edu/~shai.
[26] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[27] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, 35:575, 2007.
[28] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 58(1):267–288, 1996.
[29] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135–166, 2004.
[30] S. A. van de Geer. High-dimensional generalized linear models and the lasso. Annals of Statistics, 36(2):614–645, 2008.
[31] C. Zalinescu. Convex analysis in general vector spaces. World Scientific Publishing Co. Inc., River Edge, NJ, 2002.
[32] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
3,195 | 3,895 | Gated Softmax Classification
Christopher Zach
Department of Computer Science
ETH Zurich
Switzerland
[email protected]
Roland Memisevic
Department of Computer Science
ETH Zurich
Switzerland
[email protected]
Marc Pollefeys
Department of Computer Science
ETH Zurich
Switzerland
[email protected]
Geoffrey Hinton
Department of Computer Science
University of Toronto
Canada
[email protected]
Abstract
We describe a "log-bilinear" model that computes class probabilities by combining an input vector multiplicatively with a vector of binary latent variables. Even
though the latent variables can take on exponentially many possible combinations of values, we can efficiently compute the exact probability of each class
by marginalizing over the latent variables. This makes it possible to get the exact gradient of the log likelihood. The bilinear score-functions are defined using
a three-dimensional weight tensor, and we show that factorizing this tensor allows the model to encode invariances inherent in a task by learning a dictionary
of invariant basis functions. Experiments on a set of benchmark problems show
that this fully probabilistic model can achieve classification performance that is
competitive with (kernel) SVMs, backpropagation, and deep belief nets.
1 Introduction
Consider the problem of recognizing an image that contains a single hand-written digit that has been
approximately normalized but may have been written in one of a number of different styles. Features
extracted from the image often provide much better evidence for a combination of a class and a style
than they do for the class alone. For example, a diagonal stroke might be highly compatible with an
italic 1 or a non-italic 7. A short piece of horizontal stroke at the top right may be compatible with a
very italic 3 or a 5 with a disconnected top. A fat piece of vertical stroke at the bottom of the image
near the center may be compatible with a 1 written with a very thick pen or a narrow 8 written with
a moderately thick pen so that the bottom loop has merged. If each training image was labeled with
both the class and the values of a set of binary style features, it would make sense to use the image
features to create a bipartite conditional random field (CRF) which gave low energy to combinations
of a class label and a style feature that were compatible with the image feature. This would force
the way in which local features were interpreted to be globally consistent about style features such
as stroke thickness or "italicness". But what if the values of the style features are missing from the training data?
We describe a way of learning a large set of binary style features from training data that are only labeled with the class. Our "gated softmax" model allows the $2^K$ possible combinations of the $K$ learned style features to be integrated out. This makes it easy to compute the posterior probability of a class label on test data and easy to get the exact gradient of the log probability of the correct label on training data.
1.1 Related work
The model is related to several models known in the literature, that we discuss in the following. [1]
describes a bilinear sparse coding model that, similar to our model, can be trained discriminatively
to predict classes. Unlike in our case, there is no interpretation as a probabilistic model, and, consequently, no simple learning rule. Furthermore, the model parameters, unlike in our case, are not
factorized, and as a result the model cannot extract features which are shared among classes. Feature
sharing, as we shall show, greatly improves classification performance as it allows for learning of
invariant representations of the input.
Our model is similar to the top layer of the deep network discussed in [2], again, without factorization and feature sharing. We also derive and utilize discriminative gradients that allow for efficient
training. Our model can be viewed also as a "degenerate" special case of the image transformation model described in [3], which replaces the output-image in that model with a "one-hot" encoded
class label. The intractable objective function of that model, as a result, collapses into a tractable
form, making it possible to perform exact inference.
We describe the basic model, how it relates to logistic regression, and how to perform learning and
inference in the following section. We show results on benchmark classification tasks in Section 3
and discuss possible extensions in Section 4.
2 The Gated Softmax Model
2.1 Log-linear models
We consider the standard classification task of mapping an input vector $x \in \mathbb{R}^n$ to a class-label $y$. One of the most common, and certainly oldest, approaches to solving this task is logistic regression, which is based on a log-linear relationship between inputs and labels (see, for example, [4]). In particular, using a set of linear, class-specific score functions
$$s_y(x) = w_y^\top x \qquad (1)$$
we can obtain probabilities over classes by exponentiating and normalizing:
$$p(y|x) = \frac{\exp(w_y^\top x)}{\sum_{y'} \exp(w_{y'}^\top x)} \qquad (2)$$
Classification decisions for test-cases $x_{\text{test}}$ are given by $\arg\max_y p(y|x_{\text{test}})$. Training amounts to adapting the vectors $w_y$ by maximizing the average conditional log-probability $\frac{1}{N}\sum_{\alpha} \log p(y^\alpha|x^\alpha)$ for a set $\{(x^\alpha, y^\alpha)\}_{\alpha=1}^{N}$ of training cases. Since there is no closed form solution, training is typically performed using some form of gradient based optimization. In the case of two or more labels, logistic regression is also referred to as the "multinomial logit model" or the "maximum entropy model" [5].
It is possible to include additive "bias" terms $b_y$ in the definition of the score function (Eq. 1) so that class-scores are affine, rather than linear, functions of the input. Alternatively, we can think of the inputs as being in a "homogeneous" representation with an extra constant 1-dimension, in which biases are implemented implicitly.
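As a concrete illustration, here is a minimal NumPy sketch of the class probabilities of Eq. 2 (our own code; the array names and shapes are assumptions, not from the original):

    import numpy as np

    def logreg_probs(W, x):
        # W: (num_classes, num_inputs) weight matrix, one row w_y per class.
        # x: (num_inputs,) input vector; append a constant 1 to implement
        #    biases via the "homogeneous" representation described above.
        scores = W @ x             # s_y(x) = w_y^T x for every class y (Eq. 1)
        scores -= scores.max()     # shift for numerical stability
        expd = np.exp(scores)
        return expd / expd.sum()   # p(y|x) of Eq. 2

Classification decisions for a test case are then simply the argmax of the returned vector.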
Important properties of logistic regression are that (a) the training objective is convex, so there are
no local optima, and (b) the model is probabilistic, hence it comes with well-calibrated estimates of
uncertainty in the classification decision (ref. Eq. 2) [4]. Property (a) is shared with, and property
(b) a possible advantage over, margin-maximizing approaches, like support vector machines [4].
2.2 A log-bilinear model
Logistic regression makes the assumption that classes can be separated in the input space with hyperplanes (up to noise). A common way to relax this assumption is to replace the linear separation
manifold, and thus, the score function (Eq. 1), with a non-linear one, such as a neural network
[4]. Here, we take an entirely different, probabilistic approach. We take the stance that we do not
know what form the separation manifold takes on, and instead introduce a set of probabilistic hidden
variables which cooperate to model the decision surface jointly. To obtain classification decisions at
test-time and for training the model, we then need to marginalize over these hidden variables.
Figure 1: (a) A log-bilinear model: Binary hidden variables $h_k$ can blend in log-linear dependencies that connect input features $x_i$ with labels $y$. (b) Factorization allows for blending in a learned feature space.
More specifically, we consider the following variation of logistic regression: We introduce a vector $h$ of binary latent variables $(h_1, \ldots, h_K)$ and replace the linear score (Eq. 1) with a bilinear score of $x$ and $h$:
$$s_y(x, h) = h^\top W_y x. \qquad (3)$$
The bilinear score combines, quadratically, all pairs of input components $x_i$ with hidden variables $h_k$. The score for each class is thus a quadratic product, parameterized by a class-specific matrix $W_y$. This is in contrast to the inner product, parameterized by class-specific vectors $w_y$, for logistic regression. To turn scores into probabilities we can again exponentiate and normalize:
$$p(y, h|x) = \frac{\exp(h^\top W_y x)}{\sum_{y'} \sum_{h'} \exp(h'^\top W_{y'} x)} \qquad (4)$$
In contrast to logistic regression, we obtain a distribution over both the hidden variables $h$ and labels $y$. We get back the (input-dependent) distributions over labels with an additional marginalization over $h$:
$$p(y|x) = \sum_{h \in \{0,1\}^K} p(y, h|x). \qquad (5)$$
As with logistic regression, we thus get a distribution over labels $y$, conditioned on inputs $x$. The parameters are the set of class-specific matrices $W_y$. As before, we can add bias terms to the score, or add a constant 1-dimension to $x$ and $h$. Note that for any single and fixed instantiation of $h$ in Eq. 3, we obtain the logistic regression score (up to normalization), since the argument in the "$\exp()$" collapses to the class-specific row-vector $h^\top W_y$. Each of the $2^K$ summands in Eq. 5 is therefore exactly one logistic classifier, showing that the model is equivalent to a mixture of $2^K$ logistic regressors with shared weights. Because of the weight-sharing the number of parameters grows linearly not exponentially in the number of hidden variables. In the following, we let $W$ denote the three-way tensor of parameters (by "stacking" the matrices $W_y$).
The sum over $2^K$ terms in Eq. 5 seems to preclude any reasonably large value for $K$. However, similar to the models in [6], [7], [2], the marginalization can be performed in closed form and can be computed tractably by a simple re-arrangement of terms:
$$p(y|x) = \sum_h p(y, h|x) \propto \sum_h \exp(h^\top W_y x) = \sum_h \exp\Big(\sum_{ik} W_{yik} x_i h_k\Big) = \prod_k \Big(1 + \exp\Big(\sum_i W_{yik} x_i\Big)\Big) \qquad (6)$$
This shows that the class probabilities decouple into a product of $K$ terms¹, each of which is a mixture of a uniform and an input-conditional "softmax". The model is thus a product of experts [8] (which is conditioned on input vectors $x$). It can be viewed also as a "strange" kind of Gated Boltzmann Machine [9] that models a single discrete output variable $y$ using $K$ binary latent variables. As we shall show, it is the conditioning on the inputs $x$ that renders this model useful.
Typically, training products of experts is performed using approximate, sampling based schemes, because of the lack of a closed form for the data probability [8]. The same is true for most conditional products of experts [9].
Note that in our case, the distribution that the model outputs is a distribution over a countable (and, in particular, fairly small²) number of possible values, so we can compute the constant $Z = \sum_{y'} \prod_k \big(1 + \exp\big(\sum_i W_{y'ik} x_i\big)\big)$, which normalizes the left-hand side in Eq. 6, efficiently. The same observation was utilized before in [6], [7], [10].
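To make the tractability of Eq. 6 concrete, the following NumPy sketch (our own; the tensor layout is an assumption) computes $\log p(y|x)$ exactly, in time linear in $K$, by working in the log-domain:

    import numpy as np

    def gsm_log_probs(W, x):
        # W: (num_classes, num_inputs, K) parameter tensor, x: (num_inputs,).
        # Marginalizing over all 2^K hidden configurations reduces to K
        # softplus terms per class (Eq. 6), so no exponential blow-up.
        acts = np.einsum('yik,i->yk', W, x)               # sum_i W_yik * x_i
        log_scores = np.logaddexp(0.0, acts).sum(axis=1)  # sum_k log(1 + exp(.))
        return log_scores - np.logaddexp.reduce(log_scores)  # log p(y|x)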
2.3 Sharing features among classes
The score (or "activation") that class label $y$ receives from each of the $2^K$ terms in Eq. 5 is a linear function of the inputs. A different class $y'$ receives activations from a different, non-overlapping set of functions. The number of parameters is thus: (number of inputs) × (number of labels) × (number of hidden variables). As we shall show in Section 3 the model can achieve fairly good classification performance.
A much more natural way to define class-dependencies in this model, however, is by allowing for some parameters to be shared between classes. In most natural problems, inputs from different classes share the same domain, and therefore show similar characteristics. Consider, for example, handwritten digits, which are composed of strokes, or human faces, which are composed of facial features. The features behave like "atoms" that, by themselves, are only weakly indicative of a class; it is the composition of these atoms that is highly class-specific³. Note that parameter sharing would not be possible in models like logistic regression or SVMs, which are based on linear score functions.
In order to obtain class-invariant features, we factorize the parameter tensor $W$ as follows:
$$W_{yik} = \sum_{f=1}^{F} W^x_{if}\, W^y_{yf}\, W^h_{kf} \qquad (7)$$
The model parameters are now given by three matrices $W^x, W^y, W^h$, and each component $W_{yik}$ of $W$ is defined as a three-way inner product of column vectors taken from these matrices. This factorization of a three-way parameter tensor was previously used by [3] to reduce the number of parameters in an unsupervised model of images. Plugging the factorized form for the weight tensor into the definition of the probability (Eq. 4) and re-arranging terms yields
$$p(y, h|x) \propto \exp\Big(\sum_f \Big(\sum_i x_i W^x_{if}\Big) \Big(\sum_k h_k W^h_{kf}\Big)\, W^y_{yf}\Big) \qquad (8)$$
This shows that, after factorizing, we obtain a classification decision by first projecting the input vector $x$ (and the vector of hidden variables $h$) onto $F$ basis functions, or filters. The resulting filter responses are multiplied and combined linearly using class-specific weights $W^y_{yf}$. An illustration of the model is shown in Figure 1 (b).
As before, we need to marginalize over $h$ to obtain class-probabilities. In analogy to Eq. 6, we obtain the final form (here written in the log-domain):
$$\log p(y|x) = a_y - \log \sum_{y'} \exp(a_{y'}) \qquad (9)$$
where
$$a_y = \sum_k \log\Big(1 + \exp\Big(\sum_f \Big(\sum_i x_i W^x_{if}\Big) W^h_{kf}\, W^y_{yf}\Big)\Big) \qquad (10)$$
¹ The log-probability thus decouples into a sum over $K$ terms and is the preferred object to compute in a numerically stable implementation.
² We are considering "usual" classification problems, so the number of classes is in the tens, hundreds or possibly even millions, but it is not exponential like in a CRF.
³ If this was not the case, then many practical classification problems would be much easier to solve.
Note that in this model, learning of features (the $F$ basis functions $W^x_{\cdot f}$) is tied in with learning of the classifier itself. In contrast to neural networks and deep learners ([11], [12]), the model does not try to learn a feature hierarchy. Instead, learned features are combined multiplicatively with hidden variables and the results added up to provide the inputs to the class-units. In terms of neural networks nomenclature, the factored model can best be thought of as a single-hidden-layer network. In general, however, the concept of "layers" is not immediately applicable in this architecture.
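A corresponding sketch of Eqs. 9 and 10 for the factored model (our own code; the factor-matrix shapes are assumptions):

    import numpy as np

    def gsm_factored_log_probs(Wx, Wy, Wh, x):
        # Wx: (num_inputs, F) input filters, Wy: (num_classes, F) class
        # weights, Wh: (K, F) hidden-unit weights, x: (num_inputs,) input.
        fx = x @ Wx                                   # filter responses (Eq. 8)
        acts = np.einsum('f,kf,yf->yk', fx, Wh, Wy)   # argument of Eq. 10
        a = np.logaddexp(0.0, acts).sum(axis=1)       # a_y of Eq. 10
        return a - np.logaddexp.reduce(a)             # log p(y|x) of Eq. 9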
2.4 Interpretation
An illustration of the graphical model is shown in Figure 1 (non-factored model on the left, factored model on the right). Each hidden variable $h_k$ that is "on" contributes a slice $W_{\cdot k \cdot}$ of the parameter tensor to a blend $\sum_k h_k W_{\cdot k \cdot}$ of at most $K$ matrices. The classification decision is the sum over all possible instantiations of $h$ and thus over all possible such blends. A single blend is simply a linear logistic classifier.
An alternative view is that each output unit $y$ accumulates evidence for or against its class by projecting the input onto $K$ basis functions (the rows of $W_y$ in Eq. 4). Each instantiation of $h$ constitutes one way of combining a subset of basis function responses that are considered to be consistent into a single piece of evidence. Marginalizing over $h$ allows us to express the fact that there can be multiple alternative sets of consistent basis function responses. This is like using an "OR" gate to combine the responses of a set of "AND" gates, or like computing a probabilistic version of a disjunctive normal form (DNF). As an example, consider the task of classifying a handwritten 0 that is roughly centered in the image but rotated by a random angle (see also Section 3): Each of the following combinations: (i) a vertical stroke on the left and a vertical stroke on the right; (ii) a horizontal stroke on the top and a horizontal stroke on the bottom; (iii) a diagonal stroke on the bottom left and a diagonal stroke on the top right, would constitute positive evidence for class 0. The model can accommodate each if necessary by making appropriate use of the hidden variables.
The factored model, where basis function responses are computed jointly for all classes and then weighted differently for each class, can be thought of as accumulating evidence accordingly in the "spatial frequency domain".
2.5 Discriminative gradients
Like the class-probabilities (Eq. 5) and thus the model's objective function, the derivative of the log-probability w.r.t. model parameters is tractable, and scales linearly not exponentially with $K$. The derivative w.r.t. a single parameter $W_{\bar{y}ik}$ of the unfactored form (Section 2.2) takes the form:
$$\frac{\partial \log p(y|x)}{\partial W_{\bar{y}ik}} = \big(\delta_{y\bar{y}} - p(\bar{y}|x)\big)\, \sigma\Big(\sum_{i'} W_{\bar{y}i'k}\, x_{i'}\Big)\, x_i \quad \text{with} \quad \sigma(a) = \big(1 + \exp(-a)\big)^{-1}. \qquad (11)$$
To compute gradients of the factored model (Section 2.3) we use Eq. 11 and the chain rule, in conjunction with Eq. 7:
$$\frac{\partial \log p(y|x)}{\partial W^x_{if}} = \sum_{\bar{y},k} \frac{\partial \log p(y|x)}{\partial W_{\bar{y}ik}}\, \frac{\partial W_{\bar{y}ik}}{\partial W^x_{if}}. \qquad (12)$$
Similarly for $W^y_{yf}$ and $W^h_{kf}$ (with the sums running over the remaining indices).
As with logistic regression, we can thus perform gradient based optimization of the model likelihood for training. Moreover, since we have closed form expressions, it is possible to use conjugate
gradients for fast training. However, in contrast to logistic regression, the model's objective function
is non-linear, so it can contain local optima. We discuss this issue in more detail in the following
section. Like logistic regression, and in contrast to SVMs, the model computes probabilities and
thus provides well-calibrated estimates of uncertainty in its decisions.
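To make Eq. 11 concrete, a sketch of the exact gradient for the unfactored model (our own code, using the same assumed tensor layout as the sketches above):

    import numpy as np

    def gsm_grad(W, x, y):
        # Exact gradient of log p(y|x) w.r.t. W (Eq. 11), unfactored model.
        # W: (num_classes, num_inputs, K), x: (num_inputs,), y: class index.
        acts = np.einsum('yik,i->yk', W, x)                 # sum_i W_yik * x_i
        log_scores = np.logaddexp(0.0, acts).sum(axis=1)
        probs = np.exp(log_scores - np.logaddexp.reduce(log_scores))  # p(ybar|x)
        sig = 1.0 / (1.0 + np.exp(-acts))                   # sigma(.) of Eq. 11
        delta = np.zeros(len(probs)); delta[y] = 1.0        # Kronecker delta
        coef = (delta - probs)[:, None] * sig               # (num_classes, K)
        return np.einsum('yk,i->yik', coef, x)              # same shape as W

The factored gradients of Eq. 12 follow from this by the chain rule, or can be obtained with automatic differentiation.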
2.6 Optimization
The log-probability is non-linear and can contain local optima w.r.t. $W$, so some care has to be taken to obtain good local optima during training. In general we found that simply deploying a general-purpose conjugate gradient solver on random parameter initializations does not reliably yield good
local optima (even though it can provide good solutions in some cases). Similar problems occur
when training neural networks.
While simple gradient descent tends to yield better results, we adopt the approach discussed in [2]
in most of our experiments, which consists in initializing with class-specific optimization: The set
of parameters in our proposed model is the same as the ones for an ensemble of class-specific distributions p(x|y) (by simply adjusting the normalization in Eq. 4). More specifically, the distribution
p(x|y) of inputs given labels is a factored Restricted Boltzmann machine, that can be optimized
using contrastive divergence [3]. We found that performing a few iterations of class-conditional
optimization as an initialization reliably yields good local optima of the model's objective function. We also experimented with alternative approaches to avoiding bad local optima, such as letting parameters grow slowly during the optimization ("annealing"), and found that class-specific pre-training yields the best results. This pre-training is reminiscent of training deep networks, which also rely on a pre-training phase. In contrast, however, here we pre-train class-conditionally, and initialize the whole model at once, rather than layer-by-layer. It is possible to perform a different kind of annealing by adding the class-specific and the model's actual objective function, and slowly reducing the class-specific influence using some weighting scheme. We used both the simple and the annealed optimization in some of our experiments, but did not find clear evidence that annealing leads to better local optima. We found that, given an initialization near a local optimum of the objective function, conjugate gradients can significantly outperform stochastic gradient descent in terms of the speed at which one can optimize both the model's own objective function and the cost on validation data.
In practice, one can add a regularization (or "weight-decay") penalty $-\lambda \|W\|^2$ to the objective function, as is common for logistic regression and other classifiers, where $\lambda$ is chosen by cross-validation.
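Putting the pieces together, a sketch of the training procedure of this section (our own code; it reuses the gsm_log_probs and gsm_grad sketches above and assumes W0 is the class-conditionally pretrained initialization):

    import numpy as np
    from scipy.optimize import minimize

    def fit_gsm(W0, X, labels, lam=0.001):
        # Maximize the regularized conditional log-likelihood with a
        # conjugate-gradient solver, starting from the pretrained W0.
        shape = W0.shape

        def neg_loglik(w_flat):
            W = w_flat.reshape(shape)
            ll = sum(gsm_log_probs(W, x)[y] for x, y in zip(X, labels))
            return -ll / len(X) + lam * np.sum(W ** 2)

        def neg_grad(w_flat):
            W = w_flat.reshape(shape)
            g = sum(gsm_grad(W, x, y) for x, y in zip(X, labels)) / len(X)
            return (-g + 2.0 * lam * W).ravel()

        res = minimize(neg_loglik, W0.ravel(), jac=neg_grad, method='CG')
        return res.x.reshape(shape)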
3 Experiments
We applied the Gated Softmax (GSM) classifier⁴ on the benchmark classification tasks described in [11]. The benchmark consists of a set of classification problems that are difficult, because they contain many subtle, and highly complicated, dependencies of classes on inputs. It was initially introduced to evaluate the performance of deep neural networks. Some example tasks are illustrated in Figure 3. The benchmark consists of 8 datasets, each of which contains several thousand gray-level images of size 28 × 28 pixels. Training set sizes vary between 1200 and 10000. The test-sets contain 50000 examples each. There are three two-class problems ("rectangles", "rectangles-images" and "convex") and five ten-class problems (which are variations of the MNIST data-set⁵).
To train the model we make use of the approach described in Section 2.6. We do not make use of any random re-starts or other additional ways to find good local optima of the objective function. For the class-specific initializations, we use a class-specific RBM with binary observables on the datasets "rectangles", "mnist-rot", "convex" and "mnist", because they contain essentially binary inputs (or a heavily-skewed histogram), and Gaussian observables on the others. For the Gaussian case, we normalize the data to mean zero and standard-deviation one (independently in each dimension). We also tried "hybrid" approaches on some data-sets where we optimize a sum of the RBM and the model objective function, and decrease the influence of the RBM as training progresses.
3.1 Learning task-dependent invariances
The "rectangles" task requires the classification of rectangle images into the classes horizontal vs. vertical (some examples are shown in Figure 3 (a)). Figure 2 (left) shows random sets of 50 rows of the matrix $W_y$ learned by the unfactored model (class horizontal on the top, class vertical on the bottom).
⁴ An implementation of the model is available at http://learning.cs.toronto.edu/~rfm/gatedsoftmax/
⁵ http://yann.lecun.com/exdb/mnist/
Figure 2: Left: Class-specific filters learned from the rectangle task; top: filters in support of the label horizontal, bottom: filters in support of the class label vertical. Right: Shared filters learned from rotation-invariant digit classification.
Each row of $W_y$ corresponds to a class-specific image filter. We display the filters using gray-levels, where brighter means larger. The plot shows that the hidden units, like "Hough-cells", make it possible to accumulate evidence for the different classes, by essentially counting horizontal and vertical strokes in the images. Interestingly, the classification error is 0.56%, which is about a quarter the number of mis-classifications of the next best performer (SVMs with 2.15% error) and significantly more accurate than all other models on this data-set.
An example of filters learned by the factored model is shown in Figure 2 (right). The task is classification of rotated digits in this example. Figure 3 (b) shows some example inputs. In this task,
learning invariances with respect to rotation is crucial for achieving good classification performance.
Interestingly, the model achieves rotation-invariance by projecting onto a set of circular or radial
Fourier-like components. It is important to note that the model infers these filters to be the optimal input representation entirely from the task at hand. The filters resemble basis functions learned
by an image transformation model trained to rotate image patches described in [3]. Classification
performance is 11.75% error, which is comparable with the best results on this dataset.
Figure 3: Example images from four of the "deep learning" benchmark tasks: (a) Rectangles (2-class): Distinguish horizontal from vertical rectangles; (b) Rotated digits (10-class): Determine the class of the digit; (c) Convex vs. non-convex (2-class): Determine if the image shows a convex or non-convex shape; (d) Rectangles with images (2-class): Like (a), but rectangles are rendered using natural images.
3.2 Performance
Classification performance on all 8 datasets is shown in Figure 4. To evaluate the model we chose the number of hidden units $K$, the number of factors $F$ and the regularizer $\lambda$ based on a validation set (typically by taking a fifth of the training set). We varied both $K$ and $F$ between 50 and 1000 on a fairly coarse grid, such as 50, 500, 1000, for most datasets, and for most cases we tried two values for the regularizer ($\lambda = 0.001$ and $\lambda = 0.0$). A finer grid may improve performance further.
Figure 4 shows that the model performs well on all data-sets (comparison numbers are from [11]).
It is among the best (within 0.01 tolerance), or the best performer, in three out of 8 cases. For
comparison, we also show the error rates achieved with the unfactored model (Section 2.2), which
also performs fairly well as compared to deep networks and SVMs, but is significantly weaker in
most cases than the factored model.
dataset/model:    SVMRBF  SVMPOL  NNet   RBM    DBN3   SAA3   GSM    GSM (unfact)
rectangles         2.15    2.15    7.16   4.71   2.60   2.41   0.83  (0.56)
rect.-images      24.04   24.05   33.20  23.69  22.50  24.05  22.51  (23.17)
mnistplain         3.03    3.69    4.69   3.94   3.11   3.46   3.70  (3.98)
convexshapes      19.13   19.82   32.25  19.92  18.63  18.41  17.08  (21.03)
mnistbackrand     14.58   16.62   20.04   9.80   6.73  11.28  10.48  (11.89)
mnistbackimg      22.61   24.01   27.41  16.15  16.31  23.00  23.65  (22.07)
mnistrotbackimg   55.18   56.41   62.16  52.21  47.39  51.93  55.82  (55.16)
mnistrot          11.11   15.42   18.11  14.69  10.30  10.30  11.75  (16.15)
Figure 4: Classification error rates on test data (error rates are in %). Models: SVMRBF: SVM with
RBF kernels. SVMPOL: SVM with polynomial kernels. NNet: (MLP) Feed-forward neural net.
RBM: Restricted Boltzmann Machine. DBN3: Three-layer Deep Belief Net. SAA3: Three-layer
stacked auto-associator. GSM: Gated softmax model (in brackets: unfactored model).
4 Discussion/Future work
Several extensions of deep learning methods, including deep kernel methods, have been suggested
recently (see, for example, [13], [14]), giving similar performance to the networks that we compare
to here. Our method differs from these approaches in that it is not a multi-layer architecture. Instead,
our model gets its power from the fact that inputs, hidden variables and labels interact in three-way
cliques. Factored three-way interactions make it possible to learn task-specific features and to learn
transformational invariances inherent in the task at hand.
It is interesting to note that the model outperforms kernel methods on many of these tasks. In contrast
to kernel methods, the GSM provides fully probabilistic outputs and can be easily trained online,
which makes it directly applicable to very large datasets.
Interestingly, the filters that the model learns (see previous Section; Figure 2) resemble those learned
by recent models of image transformations (see, for example, [3]). In fact, learning of invariances
in general is typically addressed in the context of learning transformations. Interestingly, most
transformation models themselves are also defined via three-way interactions of some kind ([15],
[16], [17], [18], [19]). In contrast to a model of transformations, it is the classification task that
defines the invariances here, and the model learns the invariant representations from that task only.
Combining the explicit examples of transformations provided by video sequences with the implicit
information about transformational invariances provided by labels is a promising future direction.
Given the probabilistic definition of the model, it would be interesting to investigate a fully Bayesian
formulation that integrates over model parameters. Note that we trained the model without sparsity
constraints and in a fully supervised way. Encouraging the hidden unit activities to be sparse (e.g.
using the approach in [20]) and/or training the model semi-supervised are directions for further research. Another direction is the extension to structured prediction problems, for example,
by deploying the model as a clique potential in a CRF.
Acknowledgments
We thank Peter Yianilos and the anonymous reviewers for valuable discussions and comments.
References
[1] Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Supervised dictionary
learning. In Advances in Neural Information Processing Systems 21. 2009.
[2] Vinod Nair and Geoffrey Hinton. 3D object recognition with deep belief nets. In Advances in Neural
Information Processing Systems 22. 2009.
[3] Roland Memisevic and Geoffrey Hinton. Learning to represent spatial transformations with factored
higher-order Boltzmann machines. Neural Computation, 22(6):1473-92, 2010.
[4] Christopher Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics).
Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[5] Adam Berger, Vincent Della Pietra, and Stephen Della Pietra. A maximum entropy approach to natural
language processing. Computational Linguistics, 22(1):39-71, 1996.
[6] Geoffrey Hinton. To recognize shapes, first learn to generate images. Technical report, Toronto, 2006.
[7] Hugo Larochelle and Yoshua Bengio. Classification using discriminative restricted Boltzmann machines.
In ICML ?08: Proceedings of the 25th international conference on Machine learning, New York, NY,
USA, 2008. ACM.
[8] Geoffrey Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[9] Roland Memisevic and Geoffrey Hinton. Unsupervised learning of image transformations. In Proceedings
of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[10] Vinod Nair and Geoffrey Hinton. Implicit mixtures of restricted Boltzmann machines. In Advances in
Neural Information Processing Systems 21. 2009.
[11] Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical
evaluation of deep architectures on problems with many factors of variation. In ICML ?07: Proceedings
of the 24th international conference on Machine learning, New York, NY, USA, 2007. ACM.
[12] Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle,
D. DeCoste, and J. Weston, editors, Large-Scale Kernel Machines. MIT Press, 2007.
[13] Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. In Advances in Neural Information
Processing Systems 22. 2009.
[14] Jason Weston, Frédéric Ratle, and Ronan Collobert. Deep learning via semi-supervised embedding. In
ICML ?08: Proceedings of the 25th international conference on Machine learning, New York, NY, USA,
2008. ACM.
[15] Bruno Olshausen, Charles Cadieu, Jack Culpepper, and David Warland. Bilinear models of natural images. In SPIE Proceedings: Human Vision Electronic Imaging XII, San Jose, 2007.
[16] Rajesh Rao and Dana Ballard. Efficient encoding of natural time varying images produces oriented spacetime receptive fields. Technical report, Rochester, NY, USA, 1997.
[17] Rajesh Rao and Daniel Ruderman. Learning lie groups for invariant visual perception. In In Advances in
Neural Information Processing Systems 11. MIT Press, 1999.
[18] David Grimes and Rajesh Rao. Bilinear sparse coding for invariant vision. Neural Computation, 17(1):47-73, 2005.
[19] Joshua Tenenbaum and William Freeman. Separating style and content with bilinear models. Neural
Computation, 12(6):1247-1283, 2000.
[20] Honglak Lee, Chaitanya Ekanadham, and Andrew Ng. Sparse deep belief net model for visual area V2.
In Advances in Neural Information Processing Systems 20. MIT Press, 2008.
3,196 | 3,896 | Online Classification with Specificity Constraints
Shie Mannor
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, 32000, Israel
[email protected]
Andrey Bernstein
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, 32000, Israel
[email protected]
Nahum Shimkin
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa, 32000, Israel
[email protected]
Abstract
We consider the online binary classification problem, where we are given m classifiers. At each stage, the classifiers map the input to the probability that the input
belongs to the positive class. An online classification meta-algorithm is an algorithm that combines the outputs of the classifiers in order to attain a certain goal,
without having prior knowledge on the form and statistics of the input, and without prior knowledge on the performance of the given classifiers. In this paper, we
use sensitivity and specificity as the performance metrics of the meta-algorithm. In
particular, our goal is to design an algorithm that satisfies the following two properties (asymptotically): (i) its average false positive rate (fp-rate) is under some
given threshold; and (ii) its average true positive rate (tp-rate) is not worse than the
tp-rate of the best convex combination of the m given classifiers that satisfies the fp-rate constraint, in hindsight. We show that this problem is in fact a special case of
the regret minimization problem with constraints, and therefore the above goal is
not attainable. Hence, we pose a relaxed goal and propose a corresponding practical online learning meta-algorithm that attains it. In the case of two classifiers, we
show that this algorithm takes a very simple form. To our best knowledge, this is
the first algorithm that addresses the problem of the average tp-rate maximization
under average fp-rate constraints in the online setting.
1 Introduction
Consider the binary classification problem, where each input is classified into +1 or ?1. A classifier
is an algorithm which, for every input, classifies that input. In general, classifiers may produce the
probability of the input to belong to class 1. There are several metrics for the performance of the
classifier in the offline setting, where a training set is given in advance. These include error (or
mistake) count, true positive rate, and false positive rate; see [6] for a discussion. In particular,
the true positive rate (tp-rate) is given by the fraction of the number of positive instances correctly
classified out of the total number of the positive instances, while false positive rate (fp-rate) is given
by the fraction of the number of negative instances incorrectly classified out of the total number
of the negative instances. A receiver operating characteristics (ROC) graph then depicts different
classifiers using their tp-rate on the Y axis, while fp-rate on the X axis (see [6]). We note that
there are alternative names for these metrics in the literature. In particular, the tp-rate is also called
sensitivity, while one minus the fp-rate is usually called specificity. In what follows, we prefer to
use the terms tp-rate and fp-rate, as we think that they are self-explaining.
In this paper we focus on the online classification problem, where no training set is given in advance. We are given m classifiers, which at each stage n = 1, 2, ... map the input instance to the
probability of the instance to belong to the positive class. An online classification meta-algorithm
(or a selection algorithm) is an algorithm that combines the outputs of the given classifiers in order
to attain a certain goal, without prior knowledge on the form and statistics of the input, and without
prior knowledge on the performance of the given classifiers. The assumption is that the observed
sequence of classification probabilities and labels comes from some unknown source and, thus, can
be arbitrary. Therefore, it is convenient to formulate the online classification problem as a repeated
game between an agent and some abstract opponent that stands for the collective behavior of the
classifiers and the realized labels. We note that, in this formulation, we can identify the agent with a
corresponding online classification meta-algorithm.
There is a rich literature that deals with the online classification problem, in the competitive ratio framework, such as [5, 1]. In these works, the performance guarantees are usually expressed
in terms of the mistake bound of the algorithm. In this paper, we take a different approach. Our
performance metrics will be the average tp-rate and fp-rate of the meta-algorithm, while the performance guarantees will be expressed in the regret minimization framework. In a seminal paper,
Hannan [8] introduced the optimal reward-in-hindsight $r_n^*$ with respect to the empirical distribution of the opponent's actions, as a performance goal of an online algorithm. In our case, $r_n^*$ is in fact the maximal tp-rate the agent could get at time $n$ by knowing the classification probabilities and actual labels beforehand, using the best convex combination of the classifiers. The regret is then defined as the difference between $r_n^*$ and the actual average tp-rate obtained by the agent. Hannan showed in [8] that there exist online algorithms whose regret converges to zero (or below) as time progresses, regardless of the opponent's actions, at a $1/\sqrt{n}$ rate. Such algorithms are often called no-regret, Hannan-consistent, or universally consistent algorithms. Additional no-regret algorithms were proposed in the literature over the years, such as Blackwell's approachability-based algorithm [2] and weighted majority schemes [10, 7] (see [4] for an overview of these and other related algorithms). These algorithms can be directly applied to the problem of online classification when the goal is only to obtain no-regret with respect to the optimal tp-rate in hindsight.
However, in addition to tp-rate maximization, some performance guarantees in terms of the fp-rate are usually required. In particular, it is reasonable to require (following the Neyman-Pearson approach) that, in the long term, the average fp-rate of the agent will be below some given threshold $0 < \gamma < 1$. In this case the tp-rate can be considered as the average reward obtained by the agent, while the fp-rate as the average cost. This is in fact a special case of the regret minimization
problem with constraints whose study was initiated by Mannor et al. in [11]. They defined the
constrained reward-in-hindsight with respect to the empirical distribution of the opponent's actions, as a performance goal of an online algorithm. This quantity is the maximal average reward the agent could get in hindsight, had he known the opponent's actions beforehand, by using any fixed
(mixed) action, while satisfying the average cost constraints. The desired online algorithm then has
to satisfy two requirements: (i) it should have a vanishing regret (with respect to the constrained
reward-in-hindsight); and (ii) it should asymptotically satisfy the average cost constraints. It is
shown in [11] that such algorithms do not exist in general. The positive result is that a relaxed
goal, which is defined in terms of the convex hull of the constrained reward-in-hindsight over an
appropriate space, is attainable. The two no-regret algorithms proposed in [11] explicitly involve
either the convex hull or a calibrated forecast of the opponent's actions. Both of these algorithms
may not be computationally feasible, since there are no efficient (polynomial time) procedures for
the computation of both the convex hull and a calibrated forecast.
In this paper, we take an alternative approach to that of [11]. Instead of examining the constrained
tp-rate in hindsight (or its convex hull), our starting point is the "standard" regret with respect to
the optimal (unconstrained) tp-rate, and we consider a certain relaxation thereof. In particular, we
define a simple relaxed form of the optimal tp-rate in-hindsight, by subtracting a positive constant
from the latter. We then find the minimal constant needed in order to have a vanishing regret (with
respect to this relaxed goal) while asymptotically satisfying the average fp-rate constraint. The motivation for this approach is as follows. We know that if the constraints are always satisfied, then the
optimal tp-rate in-hindsight is attainable (using relatively simple no-regret algorithms). On the other
hand, when the constraints need to be actively satisfied, we should "pay" some penalty in terms of
the attainability of the tp-rate in-hindsight. In our case, we express this penalty in terms of the relaxation constant mentioned above. One of the main contributions of this paper is a computationally
feasible online algorithm, the Constrained Regret Matching (CRM) algorithm, that attains the posed
performance goal. We note that although we focus in this paper on the online classification problem,
our algorithm can be easily extended to the general case of regret minimization under average cost
constraints.
The paper is structured as follows. In Section 2 we formally define the online classification problem
and the goal of the meta-algorithm. In Section 3 we present the general problem of constrained
regret minimization, and show that the online classification problem is its special case. In Section
4 we define our relaxed goal in terms of the unconstrained optimal tp-rate in-hindsight, propose the
CRM algorithm, and show that it can be implemented efficiently. Section 5 discusses the special
case of two classifiers and corresponding experimental results. We conclude in Section 6 with some
final remarks.
2 Online Classification
We consider the online binary classification problem from an abstract space to $\{1, -1\}$. We are given $m$ classifiers that map an input instance to the probability that the instance belongs to the positive class. We denote by $A = \{1, \ldots, m\}$ the set of indices of the classifiers. An online classification meta-algorithm is an algorithm that combines the outputs of the given classifiers in order to attain a certain goal, without prior knowledge on the form and statistics of the input, and without prior knowledge on the performance of the given classifiers. In what follows, we identify the meta-algorithm with an agent, and use both these notions interchangeably. The time axis is discrete, with index $n = 1, 2, \ldots$. At stage $n$, the following events occur: (i) the input instance is presented to the classifiers (but not to the agent); (ii) each classifier $a \in A$ outputs $f_n(a) \in [0, 1]$, which is the probability of the input to belong to class 1, and simultaneously the agent chooses a classifier $a_n$; and (iii) the correct label of the instance, $b_n \in \{1, -1\}$, is revealed.
There are several standard performance metrics of classifiers. These include error count, true-positive rate (which is also termed recall or sensitivity), and false-positive rate (one minus the fp-rate
is usually termed specificity). As discussed in [6], tp-rate and fp-rate metrics have some attractive
properties, such as that they are insensitive to changes in class distribution, and thus we focus on
these metrics in this paper. In the online setting, no training set is given in advance, and therefore
these rates have to be updated online, using the obtained data at each stage. Observe that this data is expressed in terms of the vector $z_n \triangleq (\{f_n(a)\}_{a \in A}, b_n) \in [0,1]^m \times \{-1, 1\}$. We let $r_n = r(a_n, z_n) \triangleq f_n(a_n)\, \mathbb{I}\{b_n = 1\}$ and $c_n = c(a_n, z_n) \triangleq f_n(a_n)\, \mathbb{I}\{b_n = -1\}$ denote the reward and the cost of the agent at time $n$. Note that $r_n$ is the probability that the instance with positive label at time $n$ will be classified correctly by the agent, while $c_n$ is the probability that the instance with negative label will be classified incorrectly. Then, $\bar\rho_{tp}(n) \triangleq \sum_{k=1}^{n} r_k / \sum_{k=1}^{n} \mathbb{I}\{b_k = 1\}$ and $\bar\rho_{fp}(n) \triangleq \sum_{k=1}^{n} c_k / \sum_{k=1}^{n} \mathbb{I}\{b_k = -1\}$ are the average tp-rate and fp-rate of the agent at time $n$, respectively.
Our aim is to design a meta-algorithm that will have $\bar\rho_{tp}(n)$ not worse than the tp-rate of the best convex combination of the $m$ given classifiers (in hindsight), while satisfying $\bar\rho_{fp}(n) \le \gamma$, for some $0 < \gamma < 1$ (asymptotically, almost surely, for any possible sequence $z_1, z_2, \ldots$). In fact, this problem is a special case of the regret minimization problem with constraints. In the next section we thus present the general constrained regret minimization framework, and discuss its applicability to the case of online classification.
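For concreteness, a minimal sketch of how these running averages can be maintained online (our own code; names are illustrative):

    class RateTracker:
        # Running average tp-rate and fp-rate of the meta-algorithm.
        # At stage n, call update(f, b) with the probability f = f_n(a_n)
        # output by the chosen classifier and the revealed label b in {1, -1}.
        def __init__(self):
            self.reward_sum = 0.0   # sum of rewards r_k
            self.cost_sum = 0.0     # sum of costs c_k
            self.n_pos = 0          # number of positive labels seen
            self.n_neg = 0          # number of negative labels seen

        def update(self, f, b):
            if b == 1:
                self.reward_sum += f
                self.n_pos += 1
            else:
                self.cost_sum += f
                self.n_neg += 1

        def tp_rate(self):
            return self.reward_sum / max(self.n_pos, 1)

        def fp_rate(self):
            return self.cost_sum / max(self.n_neg, 1)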
3 Constrained Regret Minimization
3.1 Model Definition
We consider the problem of an agent facing an arbitrarily varying environment. We identify the environment with some abstract opponent, and therefore obtain a repeated game formulation between the agent and the opponent. The constrained game is defined by a tuple $(A, Z, r, c, \Gamma)$ where $A$ denotes the finite set of possible actions of the agent; $Z$ denotes the compact set of possible outcomes (or actions) of the environment; $r : A \times Z \to \mathbb{R}$ is the reward function; $c : A \times Z \to \mathbb{R}^\ell$ is the vector-valued cost function; and $\Gamma \subseteq \mathbb{R}^\ell$ is a convex and closed set within which the average cost vector should lie in order to satisfy the constraints. An important special case is that of linear constraints, that is $\Gamma = \{c \in \mathbb{R}^\ell : c_i \le \alpha_i,\ i = 1, \ldots, \ell\}$ for some vector $\alpha \in \mathbb{R}^\ell$.
The time axis is discrete, with index $n = 1, 2, \ldots$. At time step $n$, the following events occur: (i) the agent chooses an action $a_n$, and the opponent chooses an action $z_n$, simultaneously; (ii) the agent observes $z_n$; and (iii) the agent receives a reward $r_n = r(a_n, z_n) \in \mathbb{R}$ and a cost $c_n = c(a_n, z_n) \in \mathbb{R}^\ell$. We let $\bar{r}_n \triangleq \frac{1}{n}\sum_{k=1}^{n} r_k$ and $\bar{c}_n \triangleq \frac{1}{n}\sum_{k=1}^{n} c_k$ denote the average reward and cost of the agent at time $n$, respectively. Let $H_n \triangleq Z^{n-1} \times A^{n-1}$ denote the set of all possible histories of actions till time $n$. At time $n$, the agent chooses an action $a_n$ according to the decision rule $\sigma_n : H_n \to \Delta(A)$, where $\Delta(A)$ is the set of probability distributions over the set $A$. The collection $\sigma = \{\sigma_n\}_{n=1}^{\infty}$ is the strategy of the agent. That is, at each time step, a strategy prescribes some mixed action $p \in \Delta(A)$, based on the observed history. A strategy for the opponent is defined similarly. We denote the mixed action of the opponent by $q \in \Delta(Z)$, which is a probability density over $Z$.
In what follows, we will use the shorthand notation $r(p, q) \triangleq \sum_{a \in A} p(a) \int_{z \in Z} q(z)\, r(a, z)\, dz$ for the expected reward under mixed actions $p \in \Delta(A)$ and $q \in \Delta(Z)$. The notation $r(a, q), c(p, q), c(p, z), c(a, q)$ will be interpreted similarly. We make the following assumption that
the agent can satisfy the constraints in expectation against any mixed action of the opponent.
Assumption 3.1 (Satisfiability of Constraints). For every $q \in \Delta(Z)$, there exists $p \in \Delta(A)$, such that $c(p, q) \in \Gamma$.
Assumption 3.1 is essential, since otherwise the opponent can violate the average-cost constraints
simply by playing the corresponding stationary strategy q.
Let $\hat q_n(z) \triangleq \frac{1}{n}\sum_{k=1}^{n} \delta\{z - z_k\}$ denote the empirical density of the opponent's actions at time $n$, so that $\hat q_n \in \Delta(Z)$. The optimal reward-in-hindsight is then given by
$$r_n^*(z_1, \ldots, z_n) \triangleq \max_{a \in A} \frac{1}{n}\sum_{k=1}^{n} r(a, z_k) = \max_{a \in A} \int_{z \in Z} r(a, z)\, \frac{1}{n}\sum_{k=1}^{n} \delta\{z - z_k\}\, dz = \max_{a \in A} r(a, \hat q_n),$$
implying that $r_n^* = r^*(\hat q_n)$. In what follows, we will use the term "reward envelope" in order
to refer to functions $\rho : \Delta(Z) \to \mathbb{R}$. The simplest reward envelope is the (unconstrained) best-response envelope (BE) $\rho = r^*$. The $n$-stage regret of the algorithm (with respect to the BE) is then $r^*(\hat q_n) - \bar r_n$. A no-regret algorithm must ensure that the regret vanishes as $n \to \infty$ regardless of the opponent's actions. However, in our case, in addition to vanishing regret, we need to satisfy the cost constraints. Obviously, the BE need not be attainable in the presence of constraints, and therefore other reward envelopes should be considered. Hence, we use the following definition (introduced in [11]) in order to assess the online performance of the agent.

Definition 3.1 (Attainability and No-Regret). A reward envelope $\rho : \Delta(Z) \to \mathbb{R}$ is $\Gamma$-attainable if there exists a strategy $\pi$ for the agent such that, almost surely, (i) $\limsup_{n \to \infty} (\rho(\hat q_n) - \bar r_n) \le 0$, and (ii) $\lim_{n \to \infty} d(\bar c_n, \Gamma) = 0$, for every strategy of the opponent. Here, $d(\cdot, \cdot)$ is the Euclidean set-to-point distance. Such a strategy $\pi$ is called a constrained no-regret strategy with respect to $\rho$.
A natural extension of the BE to the constrained setting was defined in [11], by noting that if the agent knew in advance that the empirical distribution of the opponent's actions is $\hat q_n = q$, he could choose the constrained best response mixed action $p$, which is a solution of the corresponding optimization problem:
$$r_\Gamma^*(q) \triangleq \max_{p \in \Delta(A)} \{r(p, q) : \text{ so that } c(p, q) \in \Gamma\}. \qquad (1)$$
We refer to $r_\Gamma^*$ as the constrained best-response envelope (CBE).
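For intuition, in the classification setting of Section 3.2 the optimization in (1) reduces to a small linear program over $p \in \Delta(A)$. The sketch below is our own (not from the paper) and assumes the per-classifier rates $\alpha_{tp}(q;a)$ and $\alpha_{fp}(q;a)$ under a fixed $q$ are given as arrays.

```python
# Illustrative sketch (ours): the constrained best response (1) for the
# classification case, solved as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

def constrained_best_response(alpha_tp, alpha_fp, gamma):
    m = len(alpha_tp)
    # linprog minimizes, so negate to maximize sum_a p(a) * alpha_tp(a).
    # (Assumes the program is feasible, i.e., some mixture meets the constraint.)
    res = linprog(
        c=-np.asarray(alpha_tp, dtype=float),
        A_ub=np.asarray(alpha_fp, dtype=float)[None, :],  # sum_a p(a) alpha_fp(a) <= gamma
        b_ub=[gamma],
        A_eq=np.ones((1, m)),                             # p is a probability vector
        b_eq=[1.0],
        bounds=[(0.0, 1.0)] * m,
    )
    return res.x, -res.fun  # optimal p and the constrained tp-rate in hindsight

# Example with two hypothetical classifiers:
p, value = constrained_best_response([0.9, 0.6], [0.4, 0.2], gamma=0.3)
```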
The first positive result that appeared in the literature was that of Shimkin [12], which showed that the value $v_\Gamma \triangleq \min_{q \in \Delta(Z)} r_\Gamma^*(q)$ of the constrained game is attainable by the agent. The algorithm which attains the value is based on Blackwell's approachability theory [3], and is computationally efficient provided that $v_\Gamma$ can be computed offline. Unfortunately, it was shown in [11] that $r_\Gamma^*(q)$ itself is not attainable in general. However, the (lower) convex hull of $r_\Gamma^*(q)$, $\mathrm{conv}(r_\Gamma^*)$, is attainable.¹ Two no-regret algorithms with respect to $\mathrm{conv}(r_\Gamma^*)$ are suggested in [11]. To our best knowledge,
¹The (lower) convex hull of a function $f : X \to \mathbb{R}$ is the largest convex function which is nowhere larger than $f$.
these algorithms are inefficient (i.e., not polynomial); these are the only existing constrained no-regret algorithms in the literature.

It should be noted that the problem considered here cannot be formulated as an instance of online convex optimization [13, 9]; see [11] for a discussion of this issue.
3.2 Application to the Online Classification Problem
For the model described in Section 2, $A = \{1, \ldots, m\}$ denotes the set of possible classifiers and $Z$ denotes the set of possible outputs of the classifiers and the true labels, that is: $z = (\{f(a)\}_{a \in A}, b) \in [0, 1]^m \times \{-1, 1\} \triangleq Z$. The reward at time $n$ is $r_n = r(a_n, z_n) = f_n(a_n)\,\mathbb{I}\{b_n = 1\}$ and the cost is $c_n = c(a_n, z_n) = f_n(a_n)\,\mathbb{I}\{b_n = -1\}$. Note that in this case, the mixed action of the opponent $q \in \Delta(Z)$ is $q(f, b) = q(f|b)q(b)$, where $q(f|b)$ is the conditional density of the predictions of the classifiers and $q(b)$ is the probability of the label $b$. It is easy to check that
$$r(p, q) = q(1) \sum_{a \in A} p(a)\, \alpha_{tp}(q; a), \qquad (2)$$
where $\alpha_{tp}(q; a) \triangleq \int_f f(a)\, q(f|1)\, df$ is the tp-rate of classifier $a$ under distribution $q$. Regarding the
cost, the goal is to keep it under a given threshold $0 < \gamma < 1$. Since the regret minimization framework requires additive rewards and costs, we define the following modified cost function: $c_\gamma(a, z) \triangleq c(a, z) - \gamma\, \mathbb{I}\{b = -1\}$, and similarly to the reward above, we have that
$$c_\gamma(p, q) = q(-1) \left( \sum_{a \in A} p(a)\, \alpha_{fp}(q; a) - \gamma \right), \qquad (3)$$
where $\alpha_{fp}(q; a) \triangleq \int_f q(f\,|\,{-1})\, f(a)\, df$ is the fp-rate of classifier $a$ under distribution $q$. We note that keeping the average fp-rate of the agent $\bar\alpha_{fp}(n) \le \gamma$ is equivalent to keeping $(1/n) \sum_{k=1}^{n} c_\gamma(a_k, z_k) \le 0$.
Since our goal is to keep the fp-rate below $\gamma$, some assumption on the classifiers should be imposed in order to satisfy Assumption 3.1. We assume here that the classifiers' single-stage false-positive probability is such that it allows satisfying the constraint. In particular, we redefine² $Z \triangleq \{z = (f, b) \in [0, 1]^m \times \{-1, 1\} : \text{if } b = -1,\ f(a) \le \eta_a\}$, where $0 \le \eta_a \le 1$, and there exists $a^*$ such that $\eta_{a^*} < \gamma$. Under this assumption, it is clear that for every $q \in \Delta(Z)$, there exists $p \in \Delta(A)$, such that $c_\gamma(p, q) \le 0$; in fact this $p$ is the probability mass concentrated on $a^*$. If additional prior information is available on the single-stage performance of the given classifiers, it may be usefully employed to further restrict the set $Z$. For example, we can also restrict $z = (f, 1)$ by $f(a) \ge \beta_a$ for some $0 < \beta_a < 1$. Such additional restrictions will generally contribute to reducing the value of the optimal relaxation parameter $\delta^*$ (see (7) below). This effect will be explicitly demonstrated in Section 5.
We proceed to compute the BE and CBE. Using (2), the BE is
$$r^*(q) \triangleq \max_{a \in A} r(a, q) = q(1) \max_{a \in \{1, \ldots, m\}} \{\alpha_{tp}(q; a)\} \triangleq q(1)\, \alpha^*(q), \qquad (4)$$
where $\alpha^*(q)$ is the optimal (unconstrained) tp-rate in hindsight under distribution $q$. Now, using (1), (2), and (3) we have that $r_\Gamma^*(q) = q(1)\, \alpha_\gamma^*(q)$, where
$$\alpha_\gamma^*(q) \triangleq \max_{p \in \Delta(A)} \left\{ \sum_{a \in A} p(a)\, \alpha_{tp}(q; a) \ : \text{ so that } \sum_{a \in A} p(a)\, \alpha_{fp}(q; a) \le \gamma \right\} \qquad (5)$$
is the optimal constrained tp-rate in hindsight under distribution $q$. Finally, note that the value of the constrained game $v_\Gamma \triangleq \min_{q \in \Delta(Z)} r_\Gamma^*(q) = 0$ in this case.
As a consequence of this formulation, the algorithms proposed in [11] can in principle be used in order to attain the convex hull of $r_\Gamma^*$. However, given the implementation difficulties associated with
these algorithms, we are motivated to examine more carefully the problem of regret minimization
with constraints and provide more practical no-regret algorithms with formal guarantees.
²This assumption can always be satisfied by adding a fictitious classifier $a_0$ that always outputs a fixed $f(a_0) < \gamma$, irrespective of the data. However, such an addition might adversely affect the value of the optimal relaxation parameter $\delta^*$ (see (7) below), and should be avoided if possible.
4 Constrained Regret Matching
We next define a relaxed reward envelope for the online classification problem. The proposed envelope is in fact applicable to the problem of constrained regret minimization in general. However, due to space limitations, we present it directly for our classification problem.

Our starting point here in defining an attainable reward envelope will be the BE $r^*(q) = q(1)\alpha^*(q)$. Clearly, $r^*$ is in general not attainable in the presence of fp-constraints, and we thus consider a relaxed version thereof. For $\delta \ge 0$, set $r_\delta^*(q) \triangleq q(1)(\alpha^*(q) - \delta)$. Obviously, $r_\delta^*$ is a convex function, and we can always pick $\delta \ge 0$ large enough such that $r_\delta^*$ is attainable. Furthermore, recall that the value $v_\Gamma$ of the constrained game is attainable by the agent. Observe that, generally, $r_\delta^*(q)$
can be smaller than $v_\Gamma = 0$. We thus introduce the following modification:
$$r_\delta^{SR}(q) \triangleq q(1) \max\{0,\ \alpha^*(q) - \delta\}. \qquad (6)$$
We refer to $r_\delta^{SR}$ as the scalar-relaxed best-response envelope (SR-BE). Now, let³
$$\delta^* \triangleq \max_{q \in \Delta(Z)} \left[ \alpha^*(q) - \alpha_\gamma^*(q) \right]. \qquad (7)$$
We note that $r_{\delta^*}^{SR}(q)$ is strictly above 0 at some point, unless the game is in some sense trivial (see the supplementary material for a proof). According to Definition 3.1, we are seeking a strategy $\pi$ that: (i) is a $\delta$-relaxed no-regret strategy for the average reward, and (ii) ensures that the cost constraints are asymptotically satisfied. Thus, at each time step, we need to balance between the need to maximize the average tp-rate and to satisfy the average fp-rate constraint. Below we propose an algorithm which solves this trade-off for $\delta \ge \delta^*$.
We introduce some further notation. Let
$$R_k^\delta(a) \triangleq [f_k(a) - f_k(a_k) - \delta]\, \mathbb{I}\{b_k = 1\},\ a \in A; \qquad L_k \triangleq c_\gamma(a_k, z_k), \qquad (8)$$
denote the instantaneous $\delta$-regret and the instantaneous constraint violation (respectively) at time $k$. We have that the average $\delta$-regret and constraint violation at time $n$ are
$$\bar R_n^\delta(a) = \hat q_n(1)\left[ \alpha_{tp}(\hat q_n; a) - \bar\alpha_{tp}(n) - \delta \right],\ a \in A; \qquad \bar L_n = \hat q_n(-1)\left[ \bar\alpha_{fp}(n) - \gamma \right]. \qquad (9)$$
Using this notation, the Constrained Regret Matching (CRM) algorithm is given in Algorithm 1. We then have the following result.

Theorem 4.1. Suppose that the CRM algorithm is applied with parameter $\delta \ge \delta^*$, where $\delta^*$ is given in (7). Then, under Assumption 3.1, it attains $r_\delta^{SR}$ (6) in the sense of Definition 3.1. That is, (i) $\liminf_{n \to \infty}\left( \bar\alpha_{tp}(n) - \max\{0,\ \max_{a \in A} \alpha_{tp}(\hat q_n; a) - \delta\} \right) \ge 0$, and (ii) $\limsup_{n \to \infty} \bar\alpha_{fp}(n) \le \gamma$, for every strategy of the opponent, almost surely.
The proof of this theorem is based on Blackwell's approachability theory [3], and is given in the supplementary material. We note that the mixed action required by the CRM algorithm always exists provided that $\delta \ge \delta^*$. It can be easily shown (see the supplementary material) that whenever $\sum_{a \in A} \left[\bar R_{n-1}^\delta(a)\right]_+ > 0$, this action can be computed by solving the following linear program:
$$\min_{p \in B_n} \sum_{a \in A :\, p_n^\delta(a) > p(a)} \left( p_n^\delta(a) - p(a) \right), \qquad (10)$$
where $B_n \triangleq \left\{ p \in \Delta(A) : \left[\bar L_{n-1}\right]_+ \left( \sum_{a' \in A} p(a') f(a') - \gamma \right) \le 0,\ \forall z = (f, -1) \in Z \right\}$ and $p_n^\delta(a) = \left[\bar R_{n-1}^\delta(a)\right]_+ / \sum_{a' \in A} \left[\bar R_{n-1}^\delta(a')\right]_+$ is the $\delta$-regret matching strategy. Note also that when the average constraint violation $\bar L_{n-1}$ is non-positive, the minimum in (10) is obtained by $p = p_n^\delta$. Finally, when $\sum_{a \in A} \left[\bar R_{n-1}^\delta(a)\right]_+ = 0$, any action $p \in B_n$ can be chosen. It is worth mentioning that our algorithm, and in particular the program (10), cannot be formulated in the Online Convex Programming (OCP) framework [13, 9], since the equivalent reward functions in our case are trajectory-dependent, while in the OCP it is assumed that these functions are arbitrary, but fixed (i.e., they should not depend on the agent's actions).
³In general, the parameter $\delta^*$ may be difficult to compute analytically. See the supplementary material for a discussion of computational aspects. Also, in the supplementary material we propose an adaptive algorithm which avoids this computation (see the remark at the end of Section 4). Finally, in Section 5 we show that in the case of two classifiers this computation is trivial.
Algorithm 1 CRM Algorithm
Parameter: $\delta \ge 0$.
Initialization: At time $n = 0$ use an arbitrary action $a_0$.
At times $n = 1, 2, \ldots$ find a mixed action $p \in \Delta(A)$ such that
$$\sum_{a \in A} \left[\bar R_{n-1}^\delta(a)\right]_+ \left( f(a) - \sum_{a' \in A} p(a') f(a') - \delta \right) \le 0, \qquad \forall z = (f, 1) \in Z,$$
$$\left[\bar L_{n-1}\right]_+ \left( \sum_{a' \in A} p(a') f(a') - \gamma \right) \le 0, \qquad \forall z = (f, -1) \in Z, \qquad (11)$$
where $\bar R_n^\delta(a)$ and $\bar L_n$ are given in (9). Draw classifier $a_n$ from $p$.
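To make the update concrete, the following is a minimal sketch (our own construction, not the authors' code) of one CRM step via the program (10). It assumes the linear-constraint setting in which negative instances satisfy $f(a) \le \eta_a$, so the constraint defining $B_n$ binds at the worst case $f(a) = \eta_a$; all names are illustrative.

```python
# Sketch (ours) of one CRM step: delta-regret matching projected into B_n.
import numpy as np
from scipy.optimize import linprog

def crm_step(R_bar, L_bar, eta, gamma):
    """R_bar: average delta-regret vector from (9); L_bar: average constraint
    violation from (9); eta: per-classifier fp bounds; gamma: fp threshold."""
    R_plus = np.maximum(R_bar, 0.0)
    m = len(R_bar)
    if R_plus.sum() == 0.0:
        p = np.zeros(m)
        p[np.argmin(eta)] = 1.0     # any p in B_n works; pick the safest classifier
        return p
    p_delta = R_plus / R_plus.sum()  # the delta-regret matching strategy
    if L_bar <= 0.0:
        return p_delta               # minimum of (10) is attained at p = p_delta
    # Otherwise solve (10): stay as close to p_delta as possible inside B_n.
    # Variables x = (p, t); minimize sum_a t_a with t_a >= p_delta(a) - p(a), t_a >= 0.
    c = np.concatenate([np.zeros(m), np.ones(m)])
    A_ub = np.zeros((m + 1, 2 * m))
    b_ub = np.zeros(m + 1)
    A_ub[:m, :m] = -np.eye(m)        # -p_a - t_a <= -p_delta(a)
    A_ub[:m, m:] = -np.eye(m)
    b_ub[:m] = -p_delta
    A_ub[m, :m] = eta                # worst-case fp constraint: p . eta <= gamma
    b_ub[m] = gamma
    A_eq = np.zeros((1, 2 * m))
    A_eq[0, :m] = 1.0                # p sums to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * m + [(0.0, None)] * m)
    return res.x[:m]
```

Feasibility is guaranteed here by the assumption that some classifier has $\eta_{a^*} < \gamma$.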
Remark. In practice, it may be possible to attain $r_\delta^{SR}$ with $\delta < \delta^*$ if the opponent is not entirely adversarial. In order to capitalize on this possibility, an adaptive algorithm can be used that adjusts the value of $\delta$ online. The idea is to start from some small initial value $\delta_0 \ge 0$ (possibly $\delta_0 = 0$). At each time step $n$, we would like to use a parameter $\delta = \delta_n$ for which inequality (11) can be satisfied. This inequality is always satisfied when $\delta \ge \delta^*$. If however $\delta < \delta^*$, the inequality may or may not be satisfied. In the latter case, $\delta$ can be increased so that the condition is satisfied. In addition, once in a while, $\delta$ can be reset to $\delta_0$, in order to obtain better results. In the supplementary material we further discuss the adaptive scheme, and prove a convergence rate for it. We note that the adaptive scheme does not require the computation of the optimal $\delta^*$, as it discovers it online.
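A toy sketch (ours) of the loop just described, assuming a hypothetical predicate `feasible(delta)` that tests whether inequality (11) admits a solution at the current step; the step size and reset policy are illustrative choices, not the paper's.

```python
# Hypothetical sketch of the adaptive scheme: grow delta until (11) is
# satisfiable, occasionally resetting to delta0.
def adapt_delta(delta, delta0, feasible, step=0.05, reset=False):
    if reset:
        delta = delta0
    while not feasible(delta):
        delta += step   # increase delta until condition (11) can be met
    return delta
```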
5 The Special Case of Two Classifiers
If $m = 2$, we can obtain explicit expressions for the reward envelopes and for the algorithm. In particular, we have two classifiers, and we assume that the outputs of these classifiers lie in the set $Z \triangleq \{z = (f, b) \in [0, 1]^2 \times \{-1, 1\} : \text{if } b = -1,\ f(1) \le \eta_1,\ f(2) \le \eta_2;\ \text{if } b = 1,\ f(2) \ge \beta\}$ such that $\eta_1 > \gamma$, $\eta_2 < \gamma$, and $\beta \ge 0$. Observe that under this assumption, classifier 2 has one-stage performance guarantees that allow us to obtain better guarantees for the meta-algorithm. By computing
the CBE explicitly, we obtain
$$r_\Gamma^*(q) = q(1) \begin{cases} \dfrac{\gamma - \alpha_{fp}(q;2)}{\alpha_{fp}(q;1) - \alpha_{fp}(q;2)}\, \alpha_{tp}(q;1) + \dfrac{\alpha_{fp}(q;1) - \gamma}{\alpha_{fp}(q;1) - \alpha_{fp}(q;2)}\, \alpha_{tp}(q;2), & \text{if } \alpha_{tp}(q;1) > \alpha_{tp}(q;2) \text{ and } \alpha_{fp}(q;1) > \gamma, \\[4pt] \alpha_{tp}(q;1), & \text{if } \alpha_{tp}(q;1) > \alpha_{tp}(q;2) \text{ and } \alpha_{fp}(q;1) \le \gamma, \\[4pt] \alpha_{tp}(q;2), & \text{otherwise.} \end{cases}$$
Therefore, the relaxation parameter is
$$\delta^* = \max_{q :\ \alpha_{tp}(q;1) > \alpha_{tp}(q;2),\ \alpha_{fp}(q;1) > \gamma} \ \frac{\alpha_{tp}(q;1) - \alpha_{tp}(q;2)}{\alpha_{fp}(q;1) - \alpha_{fp}(q;2)}\, \left( \alpha_{fp}(q;1) - \gamma \right) = \frac{(1 - \beta)(\eta_1 - \gamma)}{\eta_1 - \eta_2}.$$
Finally, it is easy to check using (10) that Algorithm 1 reduces in this case to the following simple rule: (i) if $\sum_{a \in A} \left[\bar R_{n-1}^\delta(a)\right]_+ > 0$, choose $p(1) = \min\left\{ p_n^\delta(1),\ \frac{\gamma - \eta_2}{\eta_1 - \eta_2} \right\}$, where $p_n^\delta(1) = \left[\bar R_{n-1}^\delta(1)\right]_+ / \sum_{a \in A} \left[\bar R_{n-1}^\delta(a)\right]_+$ denotes the $\delta$-regret matching strategy; (ii) otherwise, choose an arbitrary action with $p(1) \le \frac{\gamma - \eta_2}{\eta_1 - \eta_2}$.
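A minimal sketch (ours) of this closed-form rule; `R_bar` is assumed to hold the two average $\delta$-regrets from (9).

```python
# Sketch (ours) of the two-classifier CRM rule derived above.
import numpy as np

def crm_two_classifiers(R_bar, gamma, eta1, eta2):
    """R_bar: length-2 average delta-regret vector; requires eta1 > gamma > eta2."""
    cap = (gamma - eta2) / (eta1 - eta2)   # maximal safe weight on classifier 1
    R_plus = np.maximum(R_bar, 0.0)
    if R_plus.sum() > 0.0:
        p1 = min(R_plus[0] / R_plus.sum(), cap)  # capped delta-regret matching
    else:
        p1 = cap                            # any p(1) <= cap is admissible
    return np.array([p1, 1.0 - p1])
```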
We simulated the CRM algorithm with the following parameters: $\gamma = 0.3$, $\eta_1 = 0.4$, $\eta_2 = 0.2$, $\beta = 0.7$. This gives a relaxation parameter of $\delta^* = 0.15$. Half of the input instances were positive and the other half were negative (on average). The time was divided into episodes with exponentially growing lengths. In each odd episode, both classifiers had a similar tp-rate and both of them satisfied the constraints, while in each even episode, classifier 1 was perfect in classifying positives, but did not satisfy the constraints. The results are shown in Figure 1. We compared the performance of the CRM algorithm to a simple unconstrained no-regret algorithm that treats both the true-positive and false-positive probabilities similarly, but with different weights. In particular, the reward at stage $n$ of this algorithm is $g_n(w) = f_n(a_n)\,\mathbb{I}\{b_n = 1\} - w f_n(a_n)\,\mathbb{I}\{b_n = -1\}$ for some weight parameter $w \ge 0$.
Figure 1: Experimental results for the case of two classifiers. The two panels show the tp-rate and fp-rate over time $n$ for CRM and NR(w), $w \in \{1.1, 1.3, 1.33, 1.4\}$, together with the reference curves $\alpha^*(\hat q_n)$ and $\alpha_\gamma^*(\hat q_n)$ and the threshold $\gamma$.
Given $w$, this is simply a no-regret algorithm with respect to $g_n(w)$. When $w = 0$, the algorithm performs tp-rate maximization, while if $w$ is large, it performs fp-rate minimization. We call this algorithm NR(w). As can be seen from Figure 1, the CRM algorithm outperforms NR(w) for any fixed parameter $w$. For $w = 1.1$, NR(w) has a better tp-rate, but the fp-rate constraint is violated most of the time. For $w = 1.4$, the constraints are always satisfied, but the tp-rate is always dominated by that of the CRM algorithm. For $w = 1.3, 1.33$ it can be seen that the constraints are satisfied (or almost satisfied), but the tp-rate is usually dominated by that of the CRM algorithm.
6 Conclusion
We studied regret minimization with average-cost constraints, with a focus on computationally feasible algorithms for the special case of the online classification problem with specificity constraints. We defined a relaxed version of the best-response reward envelope and showed that it can be attained by the agent while satisfying the constraints, provided that the relaxation parameter is above a certain threshold. A polynomial no-regret algorithm was provided. This algorithm generally solves a linear program at each time step, while in some special cases the algorithm's mixed action reduces to the simple $\delta$-regret matching strategy. To the best of our knowledge, this is the first algorithm that addresses the problem of average tp-rate maximization under average fp-rate constraints in the online setting. In addition, an adaptive scheme that adapts the relaxation parameter online was briefly discussed. Finally, the special case of two classifiers was discussed, and the experimental results for this case show that our algorithm outperforms a simple no-regret algorithm which takes as its reward function a weighted sum of the tp-rate and fp-rate.
Some remarks about our algorithm and results follow. First, the guaranteed convergence rate of the algorithm is $O(1/\sqrt{n})$ since it is based on Blackwell's approachability theorem.⁴ Second, additional constraints can easily be incorporated in the presented framework, since the general regret minimization framework assumes a vector of constraints. Third, it seems that there is an inherent trade-off between complexity and performance in the studied problem. In particular, in the case of a single constraint, the maximal attainable relaxed goal is the convex hull of the CBE (see [11]), but no polynomial algorithms are known that attain this goal. Our results show that, by further relaxing the goal, it is possible to devise polynomial algorithms that attain it. Finally, we note that the assumption on the single-stage fp-rates of the classifiers can be weakened by assuming that, in each sufficiently large period of time, the average fp-rate of each classifier $a$ is bounded by $\eta_a$. Our approach and results can then be extended to this case, by treating each such period as a single stage.

⁴A straightforward application of this theorem also gives a $\sqrt{m}$ dependence of the rate on the number of classifiers. We note that it is possible to improve the dependence to $\log(m)$ by using a potential-based Blackwell's approachability strategy (see for example [4], Chapter 7.8).
References
[1] Y. Amit, S. Shalev-Shwartz, and Y. Singer. Online classification for complex problems using simultaneous projections. In NIPS 2006.
[2] D. Blackwell. Controlled random walks. In Proceedings of the International Congress of Mathematicians, volume III, pages 335–338, 1954.
[3] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6:1–8, 1956.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[5] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006.
[6] T. Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–874, 2006.
[7] Y. Freund and R.E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29(1-2):79–103, 1999.
[8] J. Hannan. Approximation to Bayes risk in repeated play. Contributions to the Theory of Games, 3:97–139, 1957.
[9] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[10] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[11] S. Mannor, J. N. Tsitsiklis, and J. Y. Yu. Online learning with sample path constraints. Journal of Machine Learning Research, 10:569–590, 2009.
[12] N. Shimkin. Stochastic games with average cost constraints. Annals of the International Society of Dynamic Games, Vol. 1: Advances in Dynamic Games and Applications, 1994.
[13] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML '03), pages 928–936, 2003.
Probabilistic Inference and Differential Privacy
Frank McSherry
Microsoft Research
Mountain View, CA 94043
[email protected]
Oliver Williams
Microsoft Research
Mountain View, CA 94043
[email protected]
Abstract
We identify and investigate a strong connection between probabilistic inference
and differential privacy, the latter being a recent privacy definition that permits
only indirect observation of data through noisy measurement. Previous research
on differential privacy has focused on designing measurement processes whose
output is likely to be useful on its own. We consider the potential of applying
probabilistic inference to the measurements and measurement process to derive
posterior distributions over the data sets and model parameters thereof. We find
that probabilistic inference can improve accuracy, integrate multiple observations,
measure uncertainty, and even provide posterior distributions over quantities that
were not directly measured.
1 Introduction
There has recently been significant interest in the analysis of data sets whose individual records are
too sensitive to expose directly, examples of which include medical information, financial data, and
personal data from social networking sites. Data like these are rich sources of information from
which models could be learned for a variety of important applications. Although agencies with the
resources to collate such data are unable to grant outside parties direct access to them, they may be
able to safely release aggregate statistics of the data set. Progress in this area has so far been driven
by researchers inventing sophisticated learning algorithms which are applied directly to the data and
output model parameters which can be proven to respect the privacy of the data set. Proving these
privacy properties requires an intricate analysis of each algorithm on a case-by-case basis. While this
does result in many valuable algorithms and results, it is not a scalable solution for two reasons: first,
to solve a new learning problem, one must invent and analyze a new privacy-preserving algorithm;
second, one must then convince the owner of the data to run this algorithm. Both of these steps are
challenging.
In this paper, we show a natural connection between differential privacy, one of the leading privacy
definitions, and probabilistic inference. Specifically, differential privacy exposes the conditional distribution of its observable outputs given any input data set. Combining the conditional distributions
of differentially-private observations with generative models for the data permits new inferences
about the data without the need to invent and analyze new differentially-private computations. In
some cases, one can rely on previously reported differentially private measurements. When this is
not sufficient, one can use off-the-shelf differentially-private "primitives" pre-sanctioned by owners
of the data. As well as this flexibility, probabilistic inference can improve the accuracy of existing
approaches, provide a measure of uncertainty in any predictions made, combine multiple observations in a principled way, and integrate prior knowledge about the data or parameters.
The following section briefly introduces differential privacy. In Section 3 we explore the marginal
likelihood of the differentially-private observations given generative model parameters for the data.
In general this likelihood consists of a high-dimensional integration over the space of all data sets,
however we show that for a rich subclass of differentially private computations this distribution can
1
be efficiently approximated via upper and lower bounds, derived using variational techniques. Section 4 shows several experimental results validating our hypothesis that probabilistic inference can
be fruitfully applied to differentially-private computation. In particular, we show how the application of principled, probabilistic inference to measurements made by an existing, heuristic algorithm
for logistic regression improves performance, as well as providing confidence on the predictions
made.
1.1 Related work
There is a substantial amount of research on privacy, and differential privacy in particular, connected
with machine learning and statistics. Nonetheless, we are unaware of any research that uses exact
knowledge of the conditional distribution over outputs given inputs to perform inference over model
parameters, or other features of the data. Much of the existing statistical literature is concerned with
identifying cases when the differentially-private observations are ?as good? as traditional statistical
estimators, in terms of efficiency [1], power [2], and minimax rates [3], and also robust estimators
[4]. Instead, we are concerned with the cases where it is valuable to acknowledge and manage the
uncertainty in the observations. As we demonstrate experimentally, such cases abound.
Chaudhuri and Monteleoni [5, 6] introduced the NIPS community to the problem of differentially-private logistic regression. Although we will also consider the problem of logistic regression (and
compare our findings with theirs) we should stress that the aim of the paper is not specifically to
attack the problem of logistic regression. Rather, the problem serves as a good example where prior
work on differentially-private logistic regression can be improved through probabilistic inference.
2 Differential Privacy
Differential privacy [7] applies to randomized computations executed against a dataset and returning
an aggregate result for the entire set. It prevents inference about specific records by requiring that
the result of the computation yield nearly identical distributions for similar data sets. Formally, a
randomized computation $M$ satisfies $\epsilon$-differential privacy if for any two possible input data sets $A$ and $B$, and any subset of possible outputs $S$,
$$P(M(A) \in S) \le P(M(B) \in S) \times \exp(\epsilon \cdot |A \ominus B|), \qquad (1)$$
where $A \ominus B$ is the set of records in $A$ or $B$, but not both. When $|A \ominus B|$ is small, the relative bound on probabilities limits the inference an attacker can make about whether the true underlying data were actually $A$ or $B$. Inferences about the presence, absence, or specific values of individual records are strongly constrained.
One example of a differentially private computation is the exponential mechanism [8], characterized by a function $\phi : D^n \times R \to \mathbb{R}$ scoring each pair of data set and possible result with a real value. When the $\phi$ function satisfies $|\ln \phi(z, A) - \ln \phi(z, B)| \le |A \ominus B|$ for all $z$, the following distribution satisfies 2-differential privacy:
$$P(M(X) = z) = \frac{\phi(z, X)}{\sum_{z' \in Z} \phi(z', X)}. \qquad (2)$$
The exponential mechanism is fully general for differential privacy; any differentially-private mechanism $M$ can be encoded in a $\phi$ function using the density of $M(X)$ at $z$. While any differentially-private mechanism can be expressed as a $\phi$ function, verifying that a function $\phi$ satisfies the constraint $|\ln \phi(z, A) - \ln \phi(z, B)| \le |A \ominus B|$ is generally not easy, and requires some form of proof on a case-by-case basis. One that does not require a specialized proof is when the $\phi$ function can be expressed as $\phi(z, X) = \prod_i \phi(z, x_i)$. This subclass is useful practically, as data providers can ensure differential privacy by clamping each $\phi(z, x)$ value to the range $[e^{-1}, e^{+1}]$, without having to understand the $\phi$ function. We will refer to this subclass as the factored exponential mechanism.
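As an illustration (our sketch, not code from the paper), sampling from the factored exponential mechanism amounts to summing clamped per-record log-scores over the data and normalizing; `log_phi` is an assumed user-supplied scoring function.

```python
# Illustrative sketch (ours): sampling from the factored exponential mechanism (2).
import numpy as np

def factored_exponential_mechanism(Z, X, log_phi, rng):
    """Z: list of candidate outputs; X: iterable of records; log_phi(z, x) is the
    per-record log-score, clamped to [-1, +1] so that phi lies in [e^-1, e^+1]."""
    scores = np.array([sum(np.clip(log_phi(z, x), -1.0, 1.0) for x in X)
                       for z in Z])
    probs = np.exp(scores - scores.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return Z[rng.choice(len(Z), p=probs)]

rng = np.random.default_rng(0)
```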
As we can see from the definition of the exponential mechanism, a differentially-private mechanism
draws its guarantees from its inherent randomness, rather than from secrecy about its specification.
Although differential privacy has many other redeeming features, it is this feature alone that we
Figure 1: Graphical models. (a) If the data $X = \{x_i\}$ are directly observable (shaded nodes), the canonical learning task is to infer the posterior over $\theta$ given a model relating $X$ and $\theta$. (b) In the private setting, the data are not observable; instead we observe the private measurement $z$, related to $X$ by a known measurement process.
exploit in the remainder of the work. By the same token, although there are many other privacy
definitions with varying guarantees, we can apply inference to any definition exhibiting one key
feature: an explicit probabilistic relationship between the input data sets and output observations.
3 Inference and privacy
Differential privacy limits what can be inferred about a single record in a data set, but does not
directly limit inference about larger scale, aggregate properties of data sets. For example, many
tasks in machine learning and statistics infer global parameters describing a model of the data set
without explicit dependence on any single record, and we may still expect to be see a meaningful
relationship between differentially-private measurements and model parameters.
One way to model a data set is to propose a generative probabilistic model for the data, $p(X|\theta)$.
In Figure 1(a) we show a graphical model for the common case, in which we seek to infer the
parameters $\theta$ given the observed iid data $X = \{x_i\}$. In Figure 1(b) we see a graphical model for the case considered in this paper, in which the data are not directly observed due to privacy. Instead, information about $X$ is revealed by the measurement $z$, which is generated from $X$ according to a known conditional distribution $p(z|X)$, for example as given in (2). We therefore reason about $\theta$ via the marginal likelihood
$$p(z|\theta) = \int dX\, p(X|\theta)\, p(z|X). \qquad (3)$$
Armed with the marginal likelihood, it is possible to bring all the techniques of probabilistic inference to bear. This will generally include adding a prior distribution over $\theta$, and combining multiple measurements to form a posterior
$$p(\theta \mid z_1 \ldots z_m, \psi) \propto p(\theta \mid \psi) \prod_j p(z_j \mid \theta), \qquad (4)$$
where $\psi$ stands for any non-private information about $\theta$ we may have available.
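For a one-dimensional parameter, the posterior (4) can be approximated on a grid; the sketch below (ours) assumes a `likelihood(z, theta)` function for $p(z|\theta)$, e.g. one of the bounds developed in Section 3.1, and a prior evaluated on the same grid.

```python
# Minimal sketch (ours): grid posterior over theta from private measurements (4).
import numpy as np

def grid_posterior(theta_grid, prior, measurements, likelihood):
    log_post = np.log(np.asarray(prior, dtype=float))
    for z in measurements:
        log_post += np.log([likelihood(z, th) for th in theta_grid])
    log_post -= log_post.max()              # normalize in log space for stability
    post = np.exp(log_post)
    return post / post.sum()
```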
While this is superficially clean, there is a problem: the integration in (3) is over the space of all
data sets and is therefore challenging to compute whenever it cannot be solved analytically. Section
4 will show some results in which we tackle this head-on via MCMC, however this only works for
data sets of moderate size. Therefore, the remainder of this section is devoted to the development of
several bounds on the marginal likelihood for cases in which the measurement is generated via the
factored exponential mechanism. These bounds can be computed without requiring an integration
over all X.
3.1 Factored exponential mechanism
The factored exponential mechanism of Section 2 is a special case of differentially-private mechanism that admits efficient approximation of the marginal likelihood. We will be able to use the independence in $p(X|\theta) = \prod_i p(x_i|\theta)$ and $\phi(z, X) = \prod_i \phi(z, x_i)$ to factorize lower and upper bounds on the integral (3), resulting in a small number of integrals over only the domain of records, rather than the domain of data sets. As we will see, the bounds are often quite tight.
$$p(z|\theta) \ \ge\ \left( \sum_{z' \in Z} \left[ \int dx\, p(x|\theta)\, \frac{\phi(z', x)}{\phi(z, x)} \right]^{n} \right)^{-1} \qquad (5a)$$
$$p(z|\theta) \ \le\ e^{-H[q]} \left[ \int dx\, p(x|\theta)\, \frac{\phi(z, x)}{\prod_{z' \in Z} \phi(z', x)^{q(z')}} \right]^{n} \qquad (5b)$$
where the upper bound is defined in terms of a variational distribution $q(z)$ [9] such that $\sum_{z'} q(z') = 1$, and $H[q]$ is the Shannon entropy of $q$. Notice that the integrations appearing in either bound are over the space of a single record in a data set and not over the entire dataset as they were in (3).
Proof of lower bound

To prove the lower bound, we apply Jensen's inequality with the function $f(x) = 1/x$ to the marginal likelihood of the exponential mechanism:
$$\int dX\, p(X|\theta)\, \frac{\phi(z, X)}{\sum_{z' \in Z} \phi(z', X)} \ \ge\ \left( \int dX\, p(X|\theta) \sum_{z' \in Z} \frac{\phi(z', X)}{\phi(z, X)} \right)^{-1} = \left( \sum_{z' \in Z} \int dX\, p(X|\theta)\, \frac{\phi(z', X)}{\phi(z, X)} \right)^{-1},$$
which now factorizes, as
$$\int dx_1 \int dx_2 \cdots \int dx_n\, \prod_i p(x_i|\theta)\, \frac{\phi(z', x_i)}{\phi(z, x_i)} \ =\ \prod_i \int dx_i\, p(x_i|\theta)\, \frac{\phi(z', x_i)}{\phi(z, x_i)} \ =\ \left[ \int dx\, p(x|\theta)\, \frac{\phi(z', x)}{\phi(z, x)} \right]^{n}.$$
Proof of upper bound

We can lower bound the normalizing term $\sum_{z' \in Z} \phi(z', X)$ in (2) by introducing a variational distribution $q(z)$, and applying Jensen's inequality with the function $f(x) = \log x$:
$$\sum_{z' \in Z} \phi(z', X) \ =\ \exp \log \sum_{z' \in Z} q(z')\, \frac{\phi(z', X)}{q(z')} \ \ge\ \exp(H[q]) \cdot \exp\left( \sum_{z' \in Z} q(z') \log \phi(z', X) \right).$$
Applying this bound to the marginal likelihood gives us the bound
$$\int dX\, p(X|\theta)\, \frac{\phi(z, X)}{\sum_{z' \in Z} \phi(z', X)} \ \le\ e^{-H[q]} \int dX\, p(X|\theta)\, \frac{\phi(z, X)}{\prod_{z' \in Z} \phi(z', X)^{q(z')}} = e^{-H[q]} \int dX \prod_i p(x_i|\theta)\, \frac{\phi(z, x_i)}{\prod_{z' \in Z} \phi(z', x_i)^{q(z')}} = e^{-H[q]} \left[ \int dx\, p(x|\theta)\, \frac{\phi(z, x)}{\prod_{z' \in Z} \phi(z', x)^{q(z')}} \right]^{n}.$$
While the upper bound is true for any $q$ distribution, the tightest bound is found for the $q$ which minimizes the bound.
Figure 2: Error in upper and lower bounds for the coin-flipping problem. (a) For each epsilon, we plot the maximum across all $\theta$ of the error between the true distribution and each of the upper and lower bounds. (b) For $n = 100$ and $\epsilon = 0.5$, we show the shape of the upper bound, lower bound, and true distribution when the differentially-private measurement returned was $z = 0.7$.
3.1.1 Choosing a $\phi$ function

The upper and lower bounds in (5) are true for any admissible $\phi$ function, but leave unanswered the question of what to choose in this role. In the absence of privacy we might try to find a good fit for
the parameters $\theta$ by maximum likelihood. In the private setting this is not possible because the data are not directly observable, but the output of the factored exponential mechanism has a very similar form:
$$\text{Max likelihood:}\qquad \theta^* = \arg\max_{\theta \in \Theta} \prod_i p(x_i|\theta) \qquad (6a)$$
$$\text{Exp. mechanism:}\qquad z = \operatorname*{noisy\ max}_{z \in Z} \prod_i \phi(z, x_i) \qquad (6b)$$
where $\operatorname{noisy\ max}_{z \in Z} f(z)$ samples from $\frac{f(z)}{\sum_{z' \in Z} f(z')}$. By making the analogy between (6a) and (6b), we might let $z$ range over elements of $\Theta$ (or a finite subset), and take $\phi(z, x_i)$ to be the likelihood of $x_i$ under parameters $z$. The exponential mechanism is then likely to choose parameters $z$ that fit the data well, informing us that the posterior over $\theta$ is likely in the vicinity of $z$. For $\phi$ to be admissible, we must clamp very small values of $\phi$ up to $1/e$, limiting the ability of very poorly fit records to influence our decisions strongly.
3.2 Evaluation of the bounds
To demonstrate the effectiveness of these bounds we consider a problem in which it is possible to analytically compute the marginal likelihood. This is the case in which the database $X$ contains a set of Boolean values corresponding to independent samples from a Bernoulli distribution with probability $\theta$:
$$p(x|\theta) = \theta^x (1 - \theta)^{(1 - x)}. \qquad (7)$$
For our test, we took $Z$ to be the nine multiples of 0.1 between 0.1 and 0.9, and $\log \phi(z, x_i) = [\log p(x_i|z)]_{\log 0.1}^{\log 0.9}$; that is, the log likelihood clamped such that $\phi(z, x)$ lies in the interval $[e^{-1}, e^{+1}]$, as required by privacy.
We see in figure 2a that the error in both the upper and lower bounds, across the entire density
function, is essentially zero for small epsilon. As epsilon increases the bounds deteriorate, but we
are most interested in the case of small values of epsilon, where privacy guarantees are meaningfully
strong. Figure 2b shows the shape of the two bounds, and the true density between, for epsilon =
0.5. This large value was chosen as it is in the region for which the bounds are less tight and the
difference between the bounds and the truth can be seen.
5
The upper bound is defined in terms of a variational distribution $q$. For these experiments $q$ was approximately minimized by setting $q(z) \propto \exp\left( n \int dx\, p(x|\theta) \log \phi(z, x) \right)$. In general, however, these (and other) tests show that both bounds are equally good for reasonable values of $\epsilon$, and we therefore use the lower bound for the experiments in this paper, since it is simpler to compute.
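A small sketch (ours) of the lower bound (5a) for this Bernoulli example; the per-record "integral" is a two-term sum over $x \in \{0, 1\}$. We clamp the log-likelihood to $[-1, 1]$, matching the admissibility requirement $\phi \in [e^{-1}, e^{+1}]$; the exact clamp constants used in the paper's experiment may differ.

```python
# Sketch (ours): the lower bound (5a) for the coin-flipping model.
import numpy as np

Z = np.arange(0.1, 1.0, 0.1)     # the nine candidate outputs 0.1, ..., 0.9

def log_phi(z, x):
    # clamped Bernoulli log-likelihood of a single record x under parameter z
    return np.clip(x * np.log(z) + (1 - x) * np.log(1 - z), -1.0, 1.0)

def lower_bound(z, theta, n):
    """p(z | theta) >= 1 / sum_{z'} [ E_x phi(z', x) / phi(z, x) ]^n."""
    total = 0.0
    for zp in Z:
        ratio = sum(p_x * np.exp(log_phi(zp, x) - log_phi(z, x))
                    for x, p_x in [(1, theta), (0, 1.0 - theta)])
        total += ratio ** n
    return 1.0 / total
```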
4 Experiments
We consider two scenarios for the experimental validation of the utility of probabilistic inference.
First, we consider applying probabilistic inference to an existing differentially-private computation,
specifically a logistic regression heuristic taken from a suite of differentially-private algorithms. The
heuristic is not representable in the factored exponential mechanism, and as such we must attempt
to approximate the full integral over the space of data sets directly. In our second experiment, we
choose a problem and measurement process appropriate for the factored exponential mechanism,
principal components analysis, previously only ever addressed through noisy observation of the
covariance matrix.
4.1 Logistic Regression
To examine the potential of probabilistic inference to improve the quality of existing differentially-private computations, we consider a heuristic algorithm for logistic regression included in the Privacy Integrated Queries distribution [10]. This heuristic uses a noisy sum primitive to repeatedly
compute and step in the direction of an approximate gradient. When the number of records is large
compared to the noise introduced, the approximate gradient is relatively accurate, and the algorithm
performs well. When the records are fewer or the privacy requirements demand more noise, its
performance suffers. Probabilistic inference has the potential to improve performance by properly
integrating the information extracted from the data across the multiple gradient measurements and
managing the uncertainty associated with the noisy measurements.
We test our proposals against three synthetic data sets (CM1 and CM2 from [5] and one of our own:
SYNTH) and two data sets from the UCI repository (PIMA and ADULT) [11]. Details of these data
sets appear in Table 1. The full ADULT data set was split into training and test sets, chosen so as
to force the marginal frequency of positive and negative examples to 50%.
                    SYNTH   CM1     CM2     PIMA   ADULT
Records             1000    17500   17500   691    16000
Dimensions          4       10      10      8      6
Positive examples   497     8770    8694    237    7841
Test set records    1000    17500   17500   767*   8000
Table 1: Data sets used and their statistics. Attribute values in SYNTH are sampled uniformly from a hypercube of unit volume, centered at the origin. CM1 and CM2 are both sampled uniformly at random from the surface of the unit hypersphere in 10 dimensions; CM1 is linearly separable, whereas CM2 is not (see [5]). PIMA and ADULT are standard data sets [11] containing diabetes records, and census data respectively, both of which correspond to the types of data one might expect to be protected by differential privacy. The total PIMA data set is so small that we reused the full data set as test data (indicated by *).
4.1.1 Error Rates and Log-Likelihood
Tables 2 and 3 report the classification accuracy of several approaches when the privacy parameter $\epsilon$ is set to 0.1 and 1.0, respectively. These results are computed from 50 executions of the heuristic gradient descent algorithm.
We can see a trend of general improvement from the heuristic approach to the probabilistic inference,
both in terms of the average error rate and the standard deviation. For the CM1 and CM2 data sets
at epsilon = 0.1, we see substantial improvement over the reported results of [5]. Please note that
              SYNTH           CM1             CM2             PIMA           ADULT
Heuristic     37.40 ± 15.75   3.93 ± 1.57     9.32 ± 1.18     44.26 ± 8.50   43.15 ± 7.85
Inference     29.14 ± 5.54    2.72 ± 0.84     8.84 ± 0.79     45.70 ± 6.31   36.07 ± 6.32
Benchmark     16.40           0.00            5.40            19.48          26.09
NIPS 08 [5]   --              14.26 ± 12.84   19.03 ± 11.05   --             --
Table 2: Error rates with $\epsilon = 0.1$. All measurements are in per cent; errors are reported as the mean ± one standard deviation computed from 50 independent executions with random starting points. Heuristic corresponds to the last estimate made by noisy gradient ascent. Inference entries correspond to the expected error, computed over the approximate posterior for $\theta$ found via MCMC. Benchmark is the best maximum likelihood solution found by gradient ascent when the data are directly observable, and forms a baseline for expected performance. NIPS08 corresponds to the results given in [5]; these values were copied from that paper and are provided for comparison.
            SYNTH          CM1           CM2           PIMA           ADULT
Heuristic   17.31 ± 1.12   0.00 ± 0.00   5.67 ± 0.19   35.67 ± 6.45   31.30 ± 4.16
Inference   17.16 ± 0.94   0.01 ± 0.02   5.69 ± 0.13   36.47 ± 8.56   29.36 ± 1.31
Benchmark   16.40          0.00          5.40          19.48          26.09
Table 3: Error rates with $\epsilon = 1.0$. All measurements are in per cent; see the caption of Table 2.
the experiments were run on different data than in [5] drawn from the same distribution, and that
different numbers of repetitions were used in [5] for the computation of the standard deviation and
mean.
4.1.2 Exchanging Iterations for Accuracy
The heuristic gradient ascent algorithm has an important configuration parameter determining the
number of iterations of ascent, and consequently the accuracy permitted in each round (which must
be lower if more rounds are to be run, to keep the cumulative privacy cost constant). The performance of the algorithm can be very sensitive to this parameter, as too few iterations indicate too little
about the data, and too many render each iteration meaningless. In Figure 3a we consider several
parameterizations of the heuristic, taking varying numbers of steps with varying degrees of accuracy in each step. Each colored path describes an execution with a fixed level of accuracy in each
iteration, and all are plotted on the common scale of total privacy consumption. All of these paths
roughly describe a common curve, suggesting that careful configuration is not required for these approaches: probabilistic inference appears to extract an amount of information that depends mainly
on the total privacy consumption, and less on the specific details of its collection. This experiment
was performed on the CM2 data set, and the corresponding result from [5] is indicated by the "X".
4.1.3 Integrating Auxiliary Information
To further demonstrate the power of the probabilistic inference approach, we consider the plausible
scenario in which we are provided with a limited number of additional data points, obtained without
privacy protection (for example, if we independently run a small survey of our own). These additional samples are easily incorporated into the graphical model by adding them as descendants of $\theta$ in Figure 1b. Figure 3b shows how the performance on SYNTH (which contains 1000 data points)
improves, as the quantity of additional examples increases. Even with very few additional examples,
probabilistic inference is capable of exploiting this information and performance improves dramatically.
4.2 Principal components

To demonstrate inference on another model, and to highlight the applicability of the factored exponential mechanism, we consider the problem of probabilistically finding the first principal
Figure 3: (a) Paths of varying $\epsilon$. (b) Incorporating non-private observations. A compelling benefit of probabilistic inference is how easily alternate sources of information are added. The horizontal line indicates the performance of the benchmark maximum likelihood solution computed from the data without privacy.
[Figure 4 shows three panels of posterior samples, for $\epsilon = 0.003$, $\epsilon = 0.01$, and $\epsilon = 0.1$.]
Figure 4: Posterior distribution as a function of $\epsilon$. The same synthetic data set under differentially-private measurements with varying epsilon. For each measurement, 1000 samples of the full posterior over $\theta$ are drawn and overlaid on this figure to indicate the modes and concentration of the density. The posterior is noticeably more concentrated and accurate as epsilon increases.
component of a data set, where we model the data as iid draws from a Gaussian
$$p(x|\theta) = \mathcal{N}(0,\ \theta\theta^{T} + \sigma^{2} I). \qquad (8)$$
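Under this model, a candidate direction $\theta$ can be scored record-by-record with a clamped Gaussian log-likelihood and fed to the factored exponential mechanism; the sketch below is ours, with an assumed noise level `sigma2`.

```python
# Sketch (ours): a per-record phi for private PCA under model (8).
import numpy as np

def log_phi_pca(theta, x, sigma2=1.0):
    d = len(x)
    cov = np.outer(theta, theta) + sigma2 * np.eye(d)
    _, logdet = np.linalg.slogdet(cov)
    ll = -0.5 * (logdet + x @ np.linalg.solve(cov, x) + d * np.log(2 * np.pi))
    return np.clip(ll, -1.0, 1.0)   # keep phi in [e^-1, e^+1] for admissibility
```

Summing `log_phi_pca` over the records then gives the score of each candidate $\theta$ for the mechanism of Section 2.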
An important advantage of our approach is its ability to capture uncertainty in the parameters and
act accordingly. Figure 4 demonstrates three instances of inference applied to the same data set with
three different values of $\epsilon$. As $\epsilon$ increases, the concentration of the posterior over the parameters
increases. We stress that the posterior and its concentration are returned to the analyst; each image
is the result of a single differentially-private measurement, rather than a visualization of multiple
runs. The measurement associated with $\epsilon = 0.003$ is revealing, as it corresponds to the off-axis
mode of the posterior. Although centered on this incorrect answer, the posterior indicates lack of
confidence, and there is non-negligible mass over the correct answer.
5 Conclusions
Most work in the area of learning from private data forms an intrinsic analysis. That is, a complex
algorithm is run by the owner of the data, directly on that data, and a single output is produced which
appropriately indicates the desired parameters (modulo noise). In contrast, this paper has shown that
it is possible to do a great deal with an extrinsic analysis, where standard, primitive measurements
are made against the data, and a posterior over model parameters is inferred post hoc.
This paper brings together two complementary lines of research: the design and analysis of
differentially-private algorithms, and probabilistic inference. Our primary goal is not to weigh in on new differentially-private algorithms, nor to find new methods for probabilistic inference; it is to present the observation that the two approaches are complementary in a way that can be mutually
enriching.
References
[1] A. Smith. Efficient, differentially private point estimators. 2008. arXiv:0809.4794.
[2] A. Slavkovic and D. Vu. Differential privacy for clinical trial data: Preliminary evaluations. In Proceedings of the International Workshop on Privacy Aspects of Data Mining, PADM09, 2009.
[3] L. Wasserman and S. Zhou. A statistical framework for differential privacy. Journal of the American Statistical Association, 105(489):375–389, 2010.
[4] C. Dwork and J. Lei. Differential privacy and robust statistics. In STOC, 2009.
[5] K. Chaudhuri and C. Monteleoni. Privacy-preserving logistic regression. In NIPS, pages 289–296, 2008.
[6] K. Chaudhuri, C. Monteleoni, and A.D. Sarwate. Differentially private empirical risk minimization. 2010.
[7] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, pages 265–284, 2006.
[8] F. McSherry and K. Talwar. Differential privacy via mechanism design. In FOCS, 2007.
[9] M.I. Jordan, Z. Ghahramani, T. Jaakkola, and L.K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[10] F. McSherry. Privacy integrated queries. In ACM SIGMOD, 2009.
[11] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007.
Basis Construction from Power Series Expansions of
Value Functions
Bo Liu
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Sridhar Mahadevan
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Abstract
This paper explores links between basis construction methods in Markov decision processes and power series expansions of value functions. This perspective
provides a useful framework to analyze properties of existing bases, as well as
provides insight into constructing more effective bases. Krylov and Bellman error bases are based on the Neumann series expansion. These bases incur very
large initial Bellman errors, and can converge rather slowly as the discount factor approaches unity. The Laurent series expansion, which relates discounted and
average-reward formulations, provides both an explanation for this slow convergence as well as suggests a way to construct more efficient basis representations.
The first two terms in the Laurent series represent the scaled average-reward and
the average-adjusted sum of rewards, and subsequent terms expand the discounted
value function using powers of a generalized inverse called the Drazin (or group
inverse) of a singular matrix derived from the transition matrix. Experiments
show that Drazin bases converge considerably more quickly than several other
bases, particularly for large values of the discount factor. An incremental variant of Drazin bases called Bellman average-reward bases (BARBs) is described,
which provides some of the same benefits at lower computational cost.
1 Introduction
Markov decision processes (MDPs) are a well-studied model of sequential decision-making under
uncertainty [11]. Recently, there has been growing interest in automatic basis construction methods for constructing a problem-specific low-dimensional representation of an MDP. Functions on
the original state space, such as the reward function or the value function, are ?compressed? by
projecting them onto a basis matrix ?, whose column space spans a low-dimensional subspace of
the function space on the states of the original MDP. Among the various approaches proposed are
reward-sensitive bases, such as Krylov bases [10] and an incremental variant called Bellman error
basis functions (BEBFs) [9]. These approaches construct bases by dilating the (sampled) reward
function by geometric powers of the (sampled) transition matrix of a policy. An alternative approach, called proto-value functions, constructs reward-invariant bases by finding the eigenvectors
of the symmetric graph Laplacian matrix induced by the neighborhood topology of the state space
under the given actions [7].
A fundamental dilemma that is revealed by these prior studies is that neither reward-sensitive nor
reward-invariant eigenvector bases by themselves appear to be fully satisfactory. A Chebyshev polynomial bound for the error due to approximation using Krylov bases was derived in [10], extending
a known similar result for general Krylov approximation [12]. This bound shows that performance
of Krylov bases (and BEBFs) tends to degrade as the discount factor γ → 1. Intuitively, the initial basis vectors capture short-term transient behavior near rewarding regions, and tend to poorly approximate the value function over the entire state space until a sufficiently large time scale is reached.
A straightforward geometrical analysis of approximation errors using least-squares fixed point approximation onto a basis shows that the Bellman error decomposes into the sum of two terms: a
reward error and a second term involving the feature prediction error [1, 8] (see Figure 1). This
analysis helps reveal sources of error: Krylov bases and BEBFs tend to have low reward error (or
zero in the non-sampled case), and hence a large component of the error in using these bases tends to
be due to the feature prediction error. In contrast, PVFs tend to have large reward error since typical
spiky goal reward functions are poorly approximated by smooth low-order eigenvectors; however,
their feature prediction error can be quite low as the eigenvectors often capture invariant subspaces
of the model transition matrix.
A hybrid approach that combined low-order eigenvectors of the transition matrix (or PVFs) with
higher-order Krylov bases was proposed in [10], which empirically resulted in a better approach.
This paper demonstrates a more principled approach to address this problem, by constructing new
bases that emerge from investigating the links between basis construction methods and different
power series expansions of value functions. In particular, instead of using the eigenvectors of the
transition matrix, the proposed approach uses the average-reward or gain as the first basis vector, and
dilates the reward function by powers of the average-adjusted transition matrix. It turns out that the
gain is an element of the space of eigenvectors associated with the eigenvalue λ = 1 of the transition
matrix. The relevance of power series expansion to approximations of value functions was hinted
at in early work by Schwartz [13] on undiscounted optimization, although he did not discuss basis
construction.
Krylov and Bellman error basis functions (BEBFs) [10, 9, 12], as well as proto-value functions
[7], can be related to terms in the Neumann series expansion. Ultimately, the performance of these
bases is limited by the speed of convergence of the Neumann expansion, and of course, other errors
arising due to reward and feature prediction error. The key insight underlying this paper is to exploit
connections between average-reward and discounted formulations. It is well-known that discounted
value functions can be written in the form of a Laurent series expansion, where the first two terms
correspond to the average-reward term (scaled by 1/(1 − γ)), and the average-adjusted sum of rewards
(or bias). Higher order terms involve powers of the Drazin (or group) inverse of a singular matrix
related to the transition matrix. This expansion provides a mathematical framework for analyzing the
properties of basis construction methods and developing newer representations. In particular, Krylov
bases converge slowly for high discount factors since the value function is dominated by the scaled
average-reward term, which is poorly approximated by the initial BEBF or Krylov basis vectors as
it involves the long-term limiting matrix P ? . The Laurent series expansion leads to a new type of
basis called a Drazin basis [6]. An approximation of Drazin bases called Bellman average-reward
bases (BARBs) is described and compared with BEBFs, Krylov bases, and PVFs.
2 MDPs and Their Approximation
A Markov decision process M is formally defined by the tuple (S, A, P, R), where S is a discrete state space, A is the set of actions (which could be conditioned on the state s, so that A_s is the set of legal actions in s), P(s′|s, a) is the transition matrix specifying the effect of executing action a in state s, and R(s, a) : S × A → ℝ is the (expected) reward for doing action a in state s. The value function V associated with a deterministic policy π : S → A is defined as the long-term expected sum of rewards received starting from a state, and following the policy π indefinitely.¹ The value function V associated with a fixed policy π can be determined by solving the Bellman equation

V = T(V) = R + γPV,

where T(·) is the Bellman backup operator, R(s) = R(s, π(s)), P(s, s′) = P(s′|s, π(s)), and the discount factor 0 ≤ γ < 1. For a fixed policy π, the induced discounted Markov reward process is defined as (P, R, γ).
A popular approach to approximating V is to use a linear combination of basis functions V ≈ V̂ = Φw, where the basis matrix Φ is of size |S| × k, and k ≪ |S|. The Bellman error for a given basis Φ, denoted BE(Φ), is defined as the difference between the two sides of the Bellman equation, when
¹ In what follows, we suppress the dependence of P, R, and V on the policy π to avoid clutter.
[Figure 1: geometric diagram of the Bellman error decomposition, showing R, γPΦw*, the projections Π_Φ R and Π_Φ γPΦw*, and the residuals (I − Π_Φ)R and (I − Π_Φ)γPΦw*.]
Figure 1: The Bellman error due to a basis and its decomposition. See text for explanation.
V is approximated by Φw. As Figure 1 illustrates, simple geometrical considerations can be used to show that the Bellman error can be decomposed into two terms: a reward error term and a weighted feature error term [1, 8]:²

BE(Φ) = R + γPΦw* − Φw* = (I − Π_Φ)R + (I − Π_Φ)γPΦw*,

where Π_Φ is the weighted orthogonal projector onto the column space spanned by the basis Φ. w* is the weight vector associated with the fixed point Φw* = Π_Φ(T(Φw*)). If the Markov chain defined by P is irreducible and aperiodic, then Π_Φ = Φ(ΦᵀD_ρΦ)⁻¹ΦᵀD_ρ, where D_ρ is a diagonal matrix whose entries contain the stationary distribution of the Markov chain. In the experiments shown below, for simplicity we will use the unweighted projection Π_Φ = Φ(ΦᵀΦ)⁻¹Φᵀ, in which case the fixed point is given by w* = (ΦᵀΦ − γΦᵀPΦ)⁻¹ΦᵀR.
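To make the decomposition concrete, here is a minimal numerical sketch (ours, not the authors' code; the 5-state chain and the random basis are arbitrary choices) that computes the unweighted fixed point w* and the two error components:

```python
import numpy as np

def fixed_point_weights(Phi, P, R, gamma):
    # w* = (Phi^T Phi - gamma Phi^T P Phi)^{-1} Phi^T R
    A = Phi.T @ Phi - gamma * Phi.T @ (P @ Phi)
    return np.linalg.solve(A, Phi.T @ R)

def bellman_error_terms(Phi, P, R, gamma):
    # Unweighted orthogonal projector onto span(Phi)
    Pi = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    w = fixed_point_weights(Phi, P, R, gamma)
    I = np.eye(len(R))
    reward_err = (I - Pi) @ R                          # (I - Pi) R
    feature_err = (I - Pi) @ (gamma * P @ (Phi @ w))   # (I - Pi) gamma P Phi w*
    return reward_err, feature_err, reward_err + feature_err

np.random.seed(0)
n, gamma = 5, 0.9
P = np.diag(np.full(n - 1, 0.9), 1)
P[:, 0] += 1.0 - P.sum(axis=1)                         # make rows stochastic
R = np.zeros(n); R[2] = 1.0                            # single reward in state 2
Phi = np.random.randn(n, 2)                            # two arbitrary basis vectors
r_err, f_err, be = bellman_error_terms(Phi, P, R, gamma)
print(np.linalg.norm(be))                              # overall Bellman error BE(Phi)
```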
3 Neumann Expansions and Krylov/BEBF Bases
The most familiar expansion of the value function V is in terms of the Neumann series, where

V = (I − γP)⁻¹R = (I + γP + γ²P² + ...)R.

Krylov bases correspond to successive terms in the Neumann series [10, 12]. The jth Krylov subspace K_j is defined as the space spanned by the vectors K_j = {R, PR, P²R, ..., P^{j−1}R}. Note that K₁ ⊆ K₂ ⊆ ..., such that for some m, K_m = K_{m+1} = K (where m is the degree of the minimal polynomial of A = I − γP). Thus, K is the P-invariant Krylov space generated by P and R. An incremental variant of the Krylov-based approach is called Bellman error basis functions (BEBFs) [9]. In particular, given a set of basis functions Φ_k (where the first one is assumed to equal R), the next basis is defined to be φ_{k+1} = T(Φ_k w_{Φ_k}) − Φ_k w_{Φ_k}. In the model-free reinforcement learning setting, φ_{k+1} can be approximated by the temporal-difference (TD) error δ_{k+1} = r + γQ̂_k(s′, π_k(s′)) − Q̂_k(s, a), given a set of stored transitions of the form (s, a, r, s′). Here, Q̂_k is the fixed-point least-squares approximation to the action-value function Q(s, a) on the basis Φ_k. It can be easily shown that BEBFs and Krylov bases define the same space [8].
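A Krylov basis is simple to construct when the model is available; the following sketch is our own illustration (not the authors' code) and uses Gram–Schmidt for numerical stability:

```python
import numpy as np

def krylov_basis(P, R, m):
    """Return an orthonormal basis for K_m = span{R, PR, ..., P^{m-1}R}."""
    vectors, v = [], R.astype(float)
    for _ in range(m):
        for u in vectors:                 # Gram-Schmidt against earlier vectors
            v = v - (u @ v) * u
        norm = np.linalg.norm(v)
        if norm < 1e-12:                  # Krylov space has stopped growing
            break
        vectors.append(v / norm)
        v = P @ vectors[-1]               # dilate by the transition matrix
    return np.column_stack(vectors)
```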
3.1 Convergence Analysis
A key issue in evaluating the effectiveness of Krylov bases (and BEBFs) is the speed of convergence of the Neumann series. As γ → 1, Krylov bases and BEBFs converge rather slowly, owing to a large increase in the weighted feature error. In practice, this problem can be shown to be acute even for values of γ = 0.9 or γ = 0.99, which are quite common in experiments. Petrik [10] derived a bound for the error due to Krylov approximation, which depends on the condition number of I − γP, and the ratio of two Chebyshev polynomials on the complex plane. The condition number of I − γP can significantly increase as γ → 1 (see Figure 2).
It has been shown that BEBFs reduce the Bellman error at a rate bounded by value iteration [9]. Iterative solution methods for solving linear systems Ax = b can broadly be categorized as different ways of decomposing A = S − T, giving rise to the iteration Sx_{k+1} = Tx_k + b. The convergence of this iteration depends on the spectral structure of B = S⁻¹T, in particular its largest eigenvalue. For standard value iteration, A = I − γP, and consequently the natural decomposition is to set
² The two components of the Bellman error may partially (or fully) cancel each other out: the Bellman error of V itself is 0, but it generates non-zero reward and feature prediction errors.
[Figure 2: plot titled 'Chain MDP with 50 states and rewards at state 10 and 41'; y-axis: condition number (0–200); x-axis: γ varied from 0.75 to 0.99.]
Figure 2: Condition number of I − γP as γ → 1, where P is the transition matrix of the optimal policy in a chain MDP of 50 states with rewards in states 10 and 41 [9].
S = I and T = γP. Thus, the largest eigenvalue of S⁻¹T is γ, and as γ → 1, convergence of value iteration is progressively decelerated. For Krylov bases, the following bound can be shown.
Theorem 1: The Bellman error in approximating the value function for a discounted Markov reward process (P, R, γ) using m BEBF or Krylov bases is bounded by

‖BE(Φ)‖₂ = ‖(I − Π_Φ)γPΦw*‖₂ ≤ κ(X) · C_m(a/d) / C_m(c/d),

where the (Jordan form) diagonalization of I − γP = XSX⁻¹, and κ(X) = ‖X‖₂‖X⁻¹‖₂ is the condition number of I − γP. C_m is the Chebyshev polynomial of degree m of the first kind, where a, c and d are chosen such that E(a, c, d) is an ellipse on the set of complex numbers that covers all the eigenvalues of I − γP with center c, focal distance d, and major semi-axis a.
Proof: This result follows directly from standard Krylov space approximation results [12], and past results on approximation using BEBFs and Krylov spaces for MDPs [10, 8]. First, note that the overall Bellman error can be reduced to the weighted feature error, since the reward error is 0 as R is in the span of both BEBFs and Krylov bases:

‖BE(Φ)‖₂ = ‖T(Φw*) − Φw*‖₂ = ‖R − (I − γP)V̂‖₂ = ‖(I − Π_Φ)γPΦw*‖₂.

Next, setting A = (I − γP), we have

‖R − Aw‖₂ = ‖R − Σ_{i=1}^{m} w_i AⁱR‖₂ = ‖Σ_{i=0}^{m} −w(i)AⁱR‖₂,

assuming w(0) = −1. A standard result in Krylov space approximation [12] shows that

min_{p∈P_m} ‖p(A)‖₂ ≤ min_{p∈P_m} κ(X) max_{i=1,...,n} |p(λᵢ)| ≤ κ(X) · C_m(a/d) / C_m(c/d),

where P_m is the set of polynomials of degree m.
Figure 2 shows empirically that one reason for the slow convergence of BEBFs and Krylov bases is that as γ → 1, the condition number of I − γP significantly increases. Figure 3 compares the weighted feature error of BEBF bases (the performance of Krylov bases is identical and not shown) on a 50 state chain domain with a single goal reward of 1 in state 25. The dynamics of the chain are identical to those in [9]. Notice that as γ increases, the feature error increases dramatically.
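The growth in Figure 2 is easy to reproduce; the sketch below is ours and uses a simple random-walk chain as a stand-in for the optimal policy's transition matrix:

```python
import numpy as np

n = 50
P = np.zeros((n, n))
for s in range(n):                       # simple random walk on a chain
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

for gamma in [0.75, 0.9, 0.95, 0.99]:
    A = np.eye(n) - gamma * P
    print(gamma, np.linalg.cond(A))      # condition number blows up as gamma -> 1
```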
4 Laurent Series Expansion and Drazin Bases
A potential solution to the slow convergence of BEBFs and Krylov bases is suggested by a different
power series called the Laurent expansion. It is well known from the classical theory of MDPs
that the discounted value function can be written in a form that relates it to the average-reward
formulation [11]. This connection uses the following Laurent series expansion of V in terms of the
average reward ρ of the policy π, the average-adjusted sum of rewards h, and higher order terms that involve the generalized spectral inverse (Drazin or group inverse) of I − P:

V = (γ/(1 − γ)) · [ (1/γ)ρ + h + Σ_{n=1}^{∞} ((γ − 1)/γ)ⁿ ((I − P)ᴰ)^{n+1} R ].   (1)
[Figure 3: three columns of panels (legend: DBF, BEBF, PVF, PVF-MP); top row titled 'Optimal VF', bottom row titled 'BEBF Feature Error in Chain Domain'; x-axis: # basis functions.]
Figure 3: Weighted feature error of BEBF bases on a 50 state chain MDP with a single reward of 1 in state 25. Top: optimal value function for γ = 0.5, 0.9, 0.99. Bottom: weighted feature error. As γ increases, the weighted feature error grows much larger as the value function becomes progressively smoother and less like the reward function. Note the difference in scale between the three plots.
As γ → 1, the first term in the Laurent series expansion grows quite large, causing the slow convergence of Krylov bases and BEBFs. (I − P)ᴰ is a generalized inverse of the singular matrix I − P called the Drazin inverse [2, 11]. For any square matrix A ∈ ℂⁿˣⁿ, the Drazin inverse X of A satisfies the following properties: (1) XAX = X, (2) XA = AX, (3) A^{k+1}X = Aᵏ. Here, k is the index of matrix A, which is the smallest nonnegative integer k such that R(Aᵏ) = R(A^{k+1}). R(A) is the range (or column space) of matrix A. For example, a nonsingular (square) matrix A has index 0, because R(A⁰) = R(I) = R(A). The matrix I − P of a Markov chain has index k = 1. For index 1 matrices, the Drazin inverse turns out to be the same as the group inverse, which is defined as

(I − P)ᴰ = (I − P + P*)⁻¹ − P*,

where the long-term limiting matrix P* = lim_{n→∞} (1/n) Σ_{k=0}^{n} Pᵏ = I − (I − P)(I − P)ᴰ. The matrix (I − P + P*)⁻¹ is often referred to as the fundamental matrix of a Markov chain. Note that for index 1 matrices, the Drazin (or group) inverse satisfies the additional property AXA = A. Also, P* and I − P* are orthogonal projection matrices, since they are both idempotent and furthermore PP* = P*P = P*, and P*(I − P*) = 0.³ The gain and bias can be expressed in terms of P*. In particular, the gain g = P*R, and the bias or average-adjusted value function is given by:

h = (I − P)ᴰR = ((I − P + P*)⁻¹ − P*)R = Σ_{t=0}^{∞} (Pᵗ − P*)R,
where the last equality holds for aperiodic Markov chains. If we represent the coefficients in the
Laurent series as y₋₁, y₀, ..., they can be shown to be solutions to the following set of equations (for n = 1, 2, ...). In terms of the expansion above, y₋₁ is the gain of the policy, y₀ is its bias, and so on:

(I − P)y₋₁ = 0,   y₋₁ + (I − P)y₀ = R,   ...,   y_{n−1} + (I − P)y_n = 0.
Analogous to the Krylov bases, the successive terms of the Laurent series expansion can be viewed as basis vectors. More formally, the Drazin basis is defined as the space spanned by the vectors [6]:

D_m = {P*R, (I − P)ᴰR, ((I − P)ᴰ)²R, ..., ((I − P)ᴰ)^{m−1}R}.   (2)

The first basis vector is the average-reward or gain g = P*R of policy π. The second basis vector is the bias, or average-adjusted sum of rewards h. Subsequent basis vectors correspond to higher-order terms in the Laurent series.
³ Several methods are available to compute Drazin inverses, as described in [2]. An iterative method called Successive Matrix Squaring (SMS) has also been developed for efficient parallel implementation [15].
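For small chains, the gain, bias and Drazin basis can be computed directly from the group-inverse formula above. The sketch below is our own illustration (not the paper's code) and assumes an irreducible, aperiodic chain so that P* is the rank-one matrix whose rows equal the stationary distribution:

```python
import numpy as np

def limiting_matrix(P):
    """P* for an irreducible aperiodic chain: every row is the stationary dist."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    return np.tile(pi, (len(P), 1))

def drazin_terms(P, R, m):
    """Gain, bias, and the first m Drazin basis vectors D_m."""
    n = len(P)
    Pstar = limiting_matrix(P)
    H = np.linalg.inv(np.eye(n) - P + Pstar) - Pstar   # (I - P)^D, group inverse
    gain = Pstar @ R
    bias = H @ R
    basis, v = [gain], R.copy()
    for _ in range(m - 1):
        v = H @ v                                      # ((I - P)^D)^k R
        basis.append(v)
    return gain, bias, np.column_stack(basis)
```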
5 Bellman Average-Reward Bases
To get further insight into methods for approximating Drazin bases, it is helpful to note that the (i, j)th element in the Drazin or group inverse matrix is the difference between the expected number of visits to state j starting in state i following the transition matrix P versus the expected number of visits to j following the long-term limiting matrix P*. Building on this insight, an approximate Drazin basis called Bellman average-reward bases (BARBs) can be defined as follows. First, the approximate Drazin basis is defined as the space spanned by the vectors

A_m = {P*R, (P − P*)R, (P − P*)²R, ..., (P − P*)^{m−1}R}
    = {P*R, PR − P*R, P²R − P*R, ..., P^{m−1}R − P*R}.

BARBs are similar to Krylov bases, except that the reward function is being dilated by the average-adjusted transition matrix P − P*, and the first basis element is the gain. Defining ρ = P*R, BARBs can be defined as follows:

φ₁ = ρ = P*R,
φ_{k+1} = R − ρ + PΦ_k w_{Φ_k} − Φ_k w_{Φ_k}.

The cost of computing BARBs is essentially that of computing BEBFs (or Krylov bases), except for the term involving the gain ρ. Analogous to BEBFs, in the model-free reinforcement learning setting, BARBs can be computed using the average-adjusted TD error

δ_{k+1}(s) = r − ρ_k(s) + Q̂_k(s′, π_k(s′)) − Q̂_k(s, a).
There are a number of incremental algorithms for computing ρ (such as the scheme used in R-learning [13], or simply averaging the sample rewards). Several methods for computing P* are discussed in [14].
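A direct way to build the approximate Drazin basis A_m, whose span matches the BARBs by Theorem 2 below, is to repeatedly dilate R by P − P*. This sketch is our own illustration and reuses limiting_matrix from the previous sketch:

```python
import numpy as np

def approximate_drazin_basis(P, R, m):
    """A_m = {P*R, (P - P*)R, ..., (P - P*)^{m-1} R}; spans match BARBs (Thm. 2)."""
    Pstar = limiting_matrix(P)          # helper defined in the sketch above
    Pbar = P - Pstar                    # average-adjusted transition matrix
    basis, v = [Pstar @ R], R.copy()
    for _ in range(m - 1):
        v = Pbar @ v                    # (P - P*)^k R = P^k R - P* R for k >= 1
        basis.append(v)
    return np.column_stack(basis)
```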
5.1 Expressivity Properties
Some results concerning the expressivity of approximate Drazin bases and BARBs are now discussed. Due to space, detailed proofs are not included.

Theorem 2: For any k > 1, the following hold:

span{A_k(R)} ⊆ span{BARB_{k+1}(R)},
span{BARB_{k+1}(R)} = span{{R} ∪ A_k(R)},
span{BEBF_k(R)} ⊆ span{BARB_{k+1}(R)},
span{BARB_{k+1}(R)} = span{{ρ} ∪ BEBF_k(R)}.

Proof: The proof follows by induction. For k = 1, both approximate Drazin bases and BARBs contain the gain ρ. For k = 2, BARB₂(R) = R − ρ, whereas A₂ = PR − ρ (which is included in BARB₃(R)). For general k > 2, the new basis vector in A_k is P^{k−1}R, which can be shown to be part of BARB_{k+1}(R). The other results can be derived through similar analysis.
There is a similar decomposition of the average-adjusted Bellman error into a component that depends on the average-adjusted reward error and an undiscounted weighted feature error.

Theorem 3: Given a basis Φ, for any average reward Markov reward process (P, R), the Bellman error can be decomposed as follows:

T(V̂) − V̂ = R − ρ + PΦw* − Φw*
         = (I − Π_Φ)(R − ρ) + (I − Π_Φ)PΦw*
         = (I − Π_Φ)R − (I − Π_Φ)ρ + (I − Π_Φ)PΦw*.

Proof: The three terms represent the reward error, the average-reward error, and the undiscounted weighted feature error. The proof follows immediately from the geometry of the Bellman error, similar to that shown in Figure 1, and using the linearity of orthogonal projectors.

A more detailed convergence analysis of BARBs is given in [4], based on the relationship between the approximation error and the mixing rate of the Markov chain defined by P.
[Figure 4: 3×3 grid of plots (rows: reward error, weighted feature error, Bellman error; columns: γ = 0.7, 0.9, 0.99) comparing BARB, BEBF and PVF-MP; x-axis: # basis functions.]
Figure 4: Experimental comparison on a 50 state chain MDP with rewards in states 10 and 41. Left column: γ = 0.7. Middle column: γ = 0.9. Right column: γ = 0.99.
6 Experimental Comparisons
Figure 4 compares the performance of Bellman average-reward basis functions (BARBs) vs. Bellman-error basis functions (BEBFs), and a variant of proto-value functions (PVF-MP) on a 50 state chain MDP. This problem was previously studied by [3]. The two actions (go left, or go right) succeed with probability 0.9. When the actions fail, they result in movement in the opposite direction with probability 0.1. The two ends of the chain are treated as "dead ends". Rewards of +1 are given in states 10 and 41. The PVF-MP algorithm selects basis functions incrementally based upon the Bellman error, where basis function k+1 is the PVF that has the largest inner product with the Bellman error resulting from the previous k basis functions.
PVFs have a high reward error, since the reward function is a set of two delta functions that is poorly approximated by the eigenvectors of the combinatorial Laplacian on the chain graph. However, PVFs have very low weighted feature error. The overall Bellman error remains large due to the high reward error. The reward error for BEBFs is by definition 0, as R is a basis vector itself. However, the weighted feature error for BEBFs grows quite large as γ increases from 0.7 to 0.99, particularly initially, until around 15 bases are used. Consequently, the Bellman error for BEBFs remains large initially. BARBs have the best overall performance at this task, particularly for γ = 0.9 and 0.99.
The plots in Figure 5 compare BARBs vs. Drazin and Krylov bases in the two-room gridworld MDP [7]. Drazin bases perform the best, followed by BARBs, and then Krylov bases. At higher discount factors, the differences are more noticeable. Figure 6 compares BARBs vs. BEBFs on a 10 × 10 grid world MDP with a reward placed in the upper left corner state. The advantage of using BARBs over BEBFs is significant as γ → 1. The policy is a random walk on the grid. Finally, similar results were also obtained in experiments conducted on random MDPs, where the states were decomposed into communicating classes of different block sizes (not shown).
7 Conclusions and Future Work
The Neumann and Laurent series lead to different ways of constructing problem-specific bases. The Neumann series, which underlies Bellman error and Krylov bases, tends to converge slowly as γ → 1. To address this shortcoming, the Laurent series was used to derive a new approach called the Drazin basis, which expands the discounted value function in terms of the average-reward, the bias, and higher order terms representing powers of the Drazin inverse of a singular matrix derived from
[Figure 5: two columns of plots (reward error, weighted feature error, Bellman error) comparing BARB, Drazin and Krylov bases; x-axis: # basis functions.]
Figure 5: Comparison of BARBs vs. Drazin and Krylov bases in a 100 state two-room MDP [7]. All bases were evaluated on the optimal policy. The reward was set at +100 for reaching a corner goal state in one of the rooms. Left: γ = 0.9. Right: γ = 0.99.
[Figure 6: three columns of plots (reward error, weighted feature error, Bellman error) comparing BEBF and BARB; x-axis: # basis functions.]
Figure 6: Experimental comparison of BARBs and BEBFs on a 10 × 10 grid world MDP with a reward in the upper left corner. Left: γ = 0.7. Middle: γ = 0.99. Right: γ = 0.999.
the transition matrix. An incremental version of Drazin bases called Bellman average-reward bases
(BARBs) was investigated. Numerical experiments on simple MDPs show superior performance of
Drazin bases and BARBs to BEBFs, Krylov bases, and PVFs. Scaling BARBs and Drazin bases
to large MDPs requires addressing sampling issues, and exploiting structure in transition matrices,
such as using factored representations. BARBs are computationally more tractable than Drazin
bases, and merit further study. Reinforcement learning methods to estimate the first few terms of
the Laurent series were proposed in [5], and can be adapted for basis construction. The Schultz
expansion provides a way of rewriting the Neumann series using a multiplicative series of dyadic
powers of the transition matrix, which is useful for multiscale bases [6].
Acknowledgements
This research was supported in part by the National Science Foundation (NSF) under grants NSF
IIS-0534999 and NSF IIS-0803288, and Air Force Office of Scientific Research (AFOSR) under
grant FA9550-10-1-0383. Any opinions, findings, and conclusions or recommendations expressed
in this material are those of the authors and do not necessarily reflect the views of the AFOSR or the
NSF.
References
[1] D. Bertsekas and D. Castañon. Adaptive aggregation methods for infinite horizon dynamic programming. IEEE Transactions on Automatic Control, 34:589–598, 1989.
[2] S. Campbell and C. Meyer. Generalized Inverses of Linear Transformations. Pitman, 1979.
[3] M. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[4] B. Liu and S. Mahadevan. An investigation of basis construction from power series expansions of value functions. Technical report, University of Massachusetts, Amherst, 2010.
[5] S. Mahadevan. Sensitive-discount optimality: Unifying discounted and average reward reinforcement learning. In Proceedings of the International Conference on Machine Learning, 1996.
[6] S. Mahadevan. Learning representation and control in Markov Decision Processes: New frontiers. Foundations and Trends in Machine Learning, 1(4):403–565, 2009.
[7] S. Mahadevan and M. Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov Decision Processes. Journal of Machine Learning Research, 8:2169–2231, 2007.
[8] R. Parr, L. Li, G. Taylor, C. Painter-Wakefield, and M. Littman. An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), 2008.
[9] R. Parr, C. Painter-Wakefield, L. Li, and M. Littman. Analyzing feature generation for value function approximation. In Proceedings of the International Conference on Machine Learning (ICML), pages 737–744, 2007.
[10] M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2574–2579, 2007.
[11] M. L. Puterman. Markov Decision Processes. Wiley Interscience, New York, USA, 1994.
[12] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM Press, 2003.
[13] A. Schwartz. A reinforcement learning method for maximizing undiscounted rewards. In Proc. 10th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA, 1993.
[14] W. J. Stewart. Numerical methods for computing stationary distributions of finite irreducible Markov chains. In Advances in Computational Probability. Kluwer Academic Publishers, 1997.
[15] Y. Wei. Successive matrix squaring algorithm for computing the Drazin inverse. Applied Mathematics and Computation, 108:67–75, 2000.
Exact inference and learning for cumulative
distribution functions on loopy graphs
Jim C. Huang, Nebojsa Jojic and Christopher Meek
Microsoft Research
One Microsoft Way, Redmond, WA 98052
Abstract
Many problem domains including climatology and epidemiology require models
that can capture both heavy-tailed statistics and local dependencies. Specifying
such distributions using graphical models for probability density functions (PDFs)
generally leads to intractable inference and learning. Cumulative distribution networks (CDNs) provide a means to tractably specify multivariate heavy-tailed models as a product of cumulative distribution functions (CDFs). Existing algorithms
for inference and learning in CDNs are limited to those with tree-structured (nonloopy) graphs. In this paper, we develop inference and learning algorithms for
CDNs with arbitrary topology. Our approach to inference and learning relies on
recursively decomposing the computation of mixed derivatives based on a junction tree over the cumulative distribution functions. We demonstrate that our systematic approach to utilizing the sparsity represented by the junction tree yields significant performance improvements over the general symbolic differentiation programs Mathematica and D*. Using two real-world datasets, we demonstrate that
non-tree structured (loopy) CDNs are able to provide significantly better fits to the
data as compared to tree-structured and unstructured CDNs and other heavy-tailed
multivariate distributions such as the multivariate copula and logistic models.
1 Introduction
The last two decades have been marked by significant advances in modeling multivariate probability
density functions (PDFs) on graphs. Various inference and learning algorithms have been successfully developed that take advantage of known variable dependence which can be used to simplify
computations and avoid overtraining. A major source of difficulty for such algorithms is the need to
compute a normalization term, as graphical models generally assume a factorized form for the joint
PDF. To make these models tractable, the factors themselves can be chosen to have tractable forms
such as Gaussians. Such choices may then make the model unsuitable for many types of data, such
as data with heavy-tailed statistics that are a quintessential feature in many application areas such as
climatology and epidemiology. Recently, a number of techniques have been proposed to allow for
both heavy-tailed/non-Gaussian distributions with a specifiable variable dependence structure. Most
of these methods are based on transforming the data to make it more easily modeled by Gaussian
PDF-fitting techniques, an example of which is the Gaussian copula [11] parameterized as a CDF
defined on nonlinearly transformed variables. In addition to copula models, many non-Gaussian
distributions are conveniently parameterized as CDFs [2]. Most existing CDF models, however,
do not allow the specification of local dependence structures and thus can only be applied to very
low-dimensional problems.
Recently, a class of multiplicative CDF models has been proposed as a way of modeling structured
CDFs. The cumulative distribution networks (CDNs) model a multivariate CDF as a product over
functions, each dependent on a small subset of variables and each having a CDF form [6, 7]. One
of the key advantages of this approach is that it eliminates the need to enforce normalization constraints that complicate inference and learning in graphical models of PDFs. An example of a CDN
is shown in Figure 1(a), where diamonds correspond to CDN functions and circles represent variables. In a CDN, inference and learning involves computation of derivatives of the joint CDF with
respect to model variables and parameters. The graphical model then allows us to efficiently perform
inference and learning for non-loopy CDNs using message-passing [6, 8]. Models of this form have
1
been applied to multivariate heavy-tailed data in climatology and epidemiology where they have
demonstrated improved predictive performance as compared to several graphical models for PDFs
despite the restriction to tree-structured CDNs. Non-loopy CDNs may however be limited models
and adding functions to the CDN may provide significantly more expressive models, with the caveat
that the resulting CDN may become loopy and previous algorithms for inference and learning in CDNs then cease to be exact.
Our aim in this paper is to provide an effective algorithm for learning and inference in loopy CDNs,
thus improving on previous approaches which were limited to CDNs with non-loopy dependencies.
In principle, symbolic differentiation algorithms such as Mathematica [16] and D* [4] could be used
for inference and learning for loopy CDNs. However, as we demonstrate, such generic algorithms
quickly become intractable for larger models. In this paper, we develop the JDiff algorithm which
uses the graphical structure to simplify the computation of the derivative and enables both inference
and learning for CDNs of arbitrary topology. In addition, we provide an analysis of the time and
space complexity of the algorithm and provide experiments comparing JDiff to Mathematica and
D*, in which we show that JDiff runs in less time and can handle significantly larger graphs. We
also provide an empirical comparison of several methods for modeling multivariate distributions as
applied to rainfall data and H1N1 data. We show that loopy CDNs provide significantly better model
fits for multivariate heavy-tailed data than non-loopy CDNs. Furthermore, these models outperform
models based on Gaussian copulas [11], as well as multivariate heavy tailed models that do not allow
for structure specification.
2 Cumulative distribution networks
In this section we establish preliminaries about learning and inference for CDNs [6, 7, 8]. Let x be a vector of observed values for random variables in the set V and let x_α, x_A denote the observed values for variable node α ∈ V and variable set A ⊆ V. Let N(s) be the set of neighboring variable nodes for function node s. Define the operator ∂_{x_A}[·] as the mixed derivative operator with respect to variables in set A. For example, ∂_{x_{1,2,3}}[F(x₁, x₂, x₃)] ≡ ∂³F / (∂x₁∂x₂∂x₃). Throughout the paper we will be dealing primarily with continuous random variables and so we will generally deal with PDFs, with probability mass functions (PMFs) as a special case. We also assume in the sequel that all derivatives of a CDF with respect to any and all arguments exist and are continuous, and as a result any mixed derivative of the CDF is invariant to the order of differentiation (Schwarz's theorem).
Definition 2.1. The cumulative distribution network (CDN) consists of (1) an undirected bipartite graphical model consisting of a bipartite graph G = (V, F, E), where V denotes variable nodes and F denotes function nodes, with edges in E connecting function nodes to variable nodes, and (2) a specification of functions φ_s(x_s) for each function node s ∈ F, where x_s ≡ x_{N(s)}, ∪_{s∈F} N(s) = V and each function φ_s : ℝ^{|N(s)|} ↦ [0, 1] satisfies the properties of a CDF. The joint CDF over the variables in the CDN is then given by the product of CDFs φ_s, or F(x) = ∏_{s∈F} φ_s(x_s), where each CDF φ_s is defined over neighboring variable nodes N(s).

For example, in the CDN of Figure 1(a), each diamond corresponds to a function φ_s defined over neighboring pairs of variable nodes, such that the product of functions satisfies the properties of a CDF. In the sequel we will assume that both F and the CDN functions φ_s are parametric functions of a parameter vector θ, and so F ≡ F(x) ≡ F(x|θ) and φ_s ≡ φ_s(x_s) ≡ φ_s(x_s; θ). In a CDN, the marginal CDF for any subset A ⊆ V is obtained simply by taking limits such that F(x_A) = lim_{x_{V∖A}→∞} F(x), which can be done in constant time for each variable.
2.1 Inference and learning in CDNs as differentiation
For a joint CDF, the problems of inference and likelihood evaluation, or computing conditional CDFs and marginal PDFs, both correspond to mixed differentiation of the joint CDF [6]. In particular, the conditional CDF F(x_A | x_B) is related to the mixed derivative ∂_{x_B}[F(x_A, x_B)] by

F(x_A | x_B) = ∂_{x_B}[F(x_A, x_B)] / ∂_{x_B}[F(x_B)].

In the case of evaluating the likelihood corresponding to the model, we note that for CDF F(x|θ), the PDF is defined as P(x|θ) = ∂_x[F(x|θ)]. In order to perform maximum-likelihood estimation, we require the gradient vector ∇_θ log P(x|θ) = (1/P(x|θ)) ∇_θ P(x|θ), which requires us to compute a vector of single derivatives ∂_{θᵢ}[P(x|θ)] of the differentiated joint CDF with respect to the parameters in the model.
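As a concrete illustration of likelihood evaluation by differentiation, the sketch below is entirely our own: the Gumbel-type factor is an arbitrary choice of a valid bivariate CDF, not a model used in the paper, and the variable names are hypothetical.

```python
import sympy as sp

x1, x2, theta = sp.symbols('x1 x2 theta', positive=True)

def phi(a, b, t):
    # A simple bivariate CDF factor (Gumbel-type); any valid CDF form would do.
    return sp.exp(-(sp.exp(-t * a) + sp.exp(-t * b)) / t)

# CDN with two factors sharing both variables: F(x) = phi_1 * phi_2
F = phi(x1, x2, theta) * phi(x1, x2, 2 * theta)

P = sp.diff(F, x1, x2)                    # density P(x|theta) = d^2 F / dx1 dx2
dlogP = sp.diff(sp.log(P), theta)         # gradient of the log-likelihood

vals = {x1: 1.0, x2: 2.0, theta: 1.0}
print(P.subs(vals))                        # likelihood of the observation
print(dlogP.subs(vals))                    # gradient used for ML fitting
```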
2.2 Message-passing algorithms for differentiation in non-loopy graphs
As described above, inference and learning in a CDN corresponds to computing derivatives of the CDF with respect to subsets of variables and/or model parameters. For inference in non-loopy CDNs, computing mixed derivatives of the form ∂_{x_A}[F(x)] for some subset of nodes A ⊆ V can be solved efficiently by the derivative-sum-product (DSP) algorithm of [6]. In analogy to the way in which marginalization in graphical models for PDFs can be decomposed into a series of local computations, the DSP algorithm decomposes the global computation of the total mixed derivative ∂_x[F(x)] into a series of local computations by the passing of messages that correspond to mixed derivatives of F(x) with respect to subsets of variables in the model. To evaluate the model likelihood, messages are passed from leaf nodes to the root variable node and the product of incoming root messages is differentiated. This procedure provably produces the correct likelihood P(x|θ) = ∂_x[F(x|θ)] for non-loopy CDNs [6].

To estimate model parameters θ for which the likelihood over i.i.d. data samples x₁, ..., x_N is optimized, we can further make use of the gradient of the log-likelihood ∇_θ log P(x|θ) within a gradient-based optimization algorithm. As in the DSP inference algorithm, the computation of the gradient can also be broken down into a series of local gradient computations. The gradient-derivative-product (GDP) algorithm [8] updates the gradients of the messages from the DSP algorithm and passes these from leaf nodes to the root variable node in the CDN, provably obtaining the correct gradient of the log-likelihood of a particular set of observations x for a non-loopy CDN.
3 Differentiation in loopy graphs
For loopy graphs, the DSP and GDP algorithms are not guaranteed to yield the correct derivative computations. For the general case of differentiating a product of CDFs, computing the total mixed derivative requires time and space exponential in the number of variables. To see this, consider the simple example of the derivative of a product of two functions f, g, both of which are functions of x = [x₁, ..., x_K]. The mixed derivative of the product is then given by [5]

∂_x[f(x)g(x)] = Σ_{A⊆{1,...,K}} ∂_{x_A}[f(x)] · ∂_{x_{{1,...,K}∖A}}[g(x)],   (1)

a summation that contains 2^K terms. As computing the mixed derivative of a product of more functions will entail even greater complexity, the naïve approach will in general be intractable.
However, as we show in this paper, a CDN's sparse graphical structure may often point to ways of computing these derivatives efficiently, with non-loopy graphs being special, previously-studied cases. To motivate our approach, consider the following lemma that follows in straightforward fashion from the product rule of differentiation:

Lemma 3.1. Let G = (V, F, E) be a CDN and let F(x) = ∏_{s∈F} φ_s(x_s) be defined over variables in V. Let F₁, F₂ be a partition of the function nodes F and let g₁(x_{V₁}) = ∏_{s∈F₁} φ_s(x_s) and g₂(x_{V₂}) = ∏_{s∈F₂} φ_s(x_s), where V₁ = ∪_{s∈F₁} N(s) and V₂ = ∪_{s∈F₂} N(s) are the variables that are arguments to g₁, g₂. Let V₁,₂ = V₁ ∩ V₂. Then

∂_x[g₁(x_{V₁}) g₂(x_{V₂})] = Σ_{A⊆V₁,₂} ∂_{x_{V₁∖V₁,₂}}[∂_{x_A}[g₁(x_{V₁})]] · ∂_{x_{V₂∖V₁,₂}}[∂_{x_{V₁,₂∖A}}[g₂(x_{V₂})]].   (2)
Proof. Define C = V₁ ∖ V₁,₂ and D = V₂ ∖ V₁,₂. Then

∂_x[F(x)] = ∂_x[g₁(x_{V₁}) g₂(x_{V₂})] = Σ_{A⊆V} ∂_{x_A}[g₁(x_{V₁})] ∂_{x_{V∖A}}[g₂(x_{V₂})]
          = Σ_{B⊆V₁,₂} Σ_{C′⊆C} Σ_{D′⊆D} ∂_{x_{B,C′,D′}}[g₁(x_{V₁})] ∂_{x_{(V₁,₂∖B),(C∖C′),(D∖D′)}}[g₂(x_{V₂})]
          = Σ_{B⊆V₁,₂} ∂_{x_{B,C}}[g₁(x_{V₁})] ∂_{x_{(V₁,₂∖B),D}}[g₂(x_{V₂})].   (3)

The last step follows from identifying all derivatives that are zero, as we note that in the above, ∂_{x_{D′}}[g₁(x_{V₁})] = 0 for D′ ≠ ∅ and similarly, ∂_{x_{C∖C′}}[g₂(x_{V₂})] = 0 for C ∖ C′ ≠ ∅.
The number of individual steps needed to complete the differentiation in (2) depends on the size of the variable intersection set V₁,₂ = V₁ ∩ V₂. When the two factors g₁, g₂ depend on two variable sets that do not intersect, then the differentiation can be simplified by independently computing derivatives for each factor and multiplying. For example, for the CDN in Figure 1(a), partitioning the problem such that V₁ = {2, 3, 4, 6}, V₂ = {1, 2, 5, 7} yields a more efficient computation than the brute force approach. Significant computational advantages exist even when V₁,₂ ≠ ∅, provided |V₁,₂| is small. This suggests that we can recursively decompose the total mixed derivative and gradient computations into a series of simpler computations so that ∂_x[F(x)] reduces to a sum that contains far fewer terms than that required by brute force. In such a recursion, the total product of factors is always broken into parts that share as few variables as possible. This is efficient for most CDNs of interest that consist of a large number of factors that each depend on a small subset of variables. Such a recursive decomposition is naturally represented using a junction tree [12] for the CDN in which we will pass messages corresponding to local derivative computations.
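The decomposition in Lemma 3.1 is easy to check symbolically on a toy example. In this sketch (ours; the smooth factors are arbitrary and chosen only to exercise the calculus), the two factors share only x₂, so the sum in (2) has just two terms:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
g1 = (1 - sp.exp(-x1)) * (1 - sp.exp(-x1 * x2))    # factor over V1 = {1, 2}
g2 = (1 - sp.exp(-x2 * x3)) * (1 - sp.exp(-x3))    # factor over V2 = {2, 3}

brute = sp.diff(g1 * g2, x1, x2, x3)               # total mixed derivative

# Lemma 3.1 with V_{1,2} = {2}: the sum over A has only two terms.
lemma = (sp.diff(g1, x1) * sp.diff(g2, x2, x3)     # A = {}
         + sp.diff(g1, x1, x2) * sp.diff(g2, x3))  # A = {2}

print(sp.simplify(brute - lemma))                  # prints 0
```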
3.1 Differentiation in junction trees
??a CDN ? = (?, ?, ?), let {?1 , ? ? ? , ?? } be a set of ? subsets of variable nodes in ? , where
?=1 ?? = ? . Let ? = {1, ? ? ? , ?} and ? = (?, ?) be a tree where ? is the set of undirected edges
so that for any pair
? ?, ? ? ? there is a unique path from ? to ?. Then ? is a junction tree for ? if any
intersection ?? ?? is contained in the subset ?? corresponding to a node
? ? on the path from ? to ?.
For each directed edge (?, ?) we define the separator set as ??,? = ?? ?? . An example of a CDN
and a corresponding junction tree are shown in Figures 1(a), 1(b).
Figure 1: a) An example of a CDN with 7 variable nodes (circles) and 15 function nodes (diamonds); b) A
junction tree obtained from the CDN of a). Separating sets are shown for each edge connecting nodes in the
junction tree, each corresponding to a connected subset of variables in the CDN; c), d) CDNs used to model
the rainfall and H1N1 datasets. Nodes and edges in the non-loopy CDNs of [8] are shown in blue and function
nodes/edges that were added to the trees are shown in red.
Since J is a tree, we can root the tree at some node in T, say r. Given r, denote by T_{ji} the subset of elements of T that are in the subtree of J rooted at j and containing j (relative to neighbor i). Also, let N_i be the set of neighbors of i in J, such that N_i = {j | (i, j) ∈ E_J}. Finally, let D_j = ∪_{k∈T_{ji}} C_k. Suppose F₁, ..., F_P is a partition of F such that for any i = 1, ..., P, F_i consists of all s ∈ F whose neighbors in G are contained in C_i and there is no j > i such that all neighbors of s ∈ F_i are included in C_j. Define the potential function ψ_i(x_{C_i}) = ∏_{s∈F_i} φ_s(x_s) for subset C_i. We can then
write the joint CDF as

F(x) = ψ_r(x_{C_r}) ∏_{j∈N_r} T_j(x),   (4)

where T_j(x) = ∏_{k∈T_{jr}} ψ_k(x_{C_k}), with T_{jr} defined as above. Computing the probability P(x) then corresponds to computing
∂_x[ψ_r(x_{C_r}) ∏_{j∈N_r} T_j(x)] = ∂_{x_{C_r}}[ψ_r(x_{C_r}) ∏_{j∈N_r} ∂_{x_{D_j∖S_{r,j}}}[T_j(x)]] = ∂_{x_{C_r}}[ψ_r(x_{C_r}) ∏_{j∈N_r} μ_{j→r}(∅)],   (5)

where we have defined messages μ_{j→i}(A) ≡ ∂_{x_A}[∂_{x_{D_j∖S_{i,j}}}[T_j(x)]], with μ_{j→i}(∅) = ∂_{x_{D_j∖S_{i,j}}}[T_j(x)]. It remains to determine how we can efficiently compute messages in the above expression. We notice that for any given i ∈ T with A ⊆ C_i and N′ ⊆ N_i, we can define the
quantity λ_i(A, N′) ≡ ∂_{x_A}[ψ_i(x_{C_i}) ∏_{j∈N′} μ_{j→i}(∅)]. Now select j ∈ N′ for the given i: we can recursively re-write the above as

λ_i(A, N′) = ∂_{x_A}[(ψ_i(x_{C_i}) ∏_{k∈N′∖j} μ_{k→i}(∅)) μ_{j→i}(∅)] = ∂_{x_A}[λ_i(∅, N′∖j) μ_{j→i}(∅)]
           = Σ_{B⊆A} μ_{j→i}(B) λ_i(A∖B, N′∖j) = Σ_{B⊆A∩S_{i,j}} μ_{j→i}(B) λ_i(A∖B, N′∖j),   (6)
where in the last step we note that whenever B ∖ S_{i,j} ≠ ∅, μ_{j→i}(B) = 0, since by definition message μ_{j→i}(B) does not depend on variables in C_i ∖ S_{i,j}. From the definition of message μ_{j→i}(B), for any B ⊆ S_{i,j} we also have

μ_{j→i}(B) = ∂_{x_B}[∂_{x_{D_j∖S_{i,j}}}[T_j(x)]] = ∂_{x_{B,(C_j∖S_{i,j})}}[ψ_j(x_{C_j}) ∏_{k∈N_j∖i} ∂_{x_{D_k∖S_{j,k}}}[T_k(x)]]
           = λ_j(B ∪ (C_j ∖ S_{i,j}), N_j ∖ i),   (7)

where T_{ji} is the subtree of J rooted at j and containing j. Thus, we can recursively compute the functions λ_i, μ_{j→i} by applying the above updates for each node in T, starting from leaf nodes of J and up to the root node r. At the root node, the correct mixed derivative is then given by P(x) = ∂_x[F(x)] = λ_r(C_r, N_r). Note that the messages can be kept in a symbolic form as functions over appropriate variables, or, as is the case in the experiments section, they can simply be evaluated for the given data x. In the latter case, each message reduces to a scalar, as we can evaluate derivatives of the functions in the model for fixed x, θ and so we do not need to store increasingly complex symbolic terms.
3.2 Maximum-likelihood learning in junction trees
While computing P(x|θ) = ∂_x[F(x|θ)], we can in parallel obtain the gradient of the likelihood function. The likelihood is equal to the message λ_r(C_r, N_r) at the root node r ∈ T. The computation of its gradient ∇_θ λ_r(C_r, N_r) can be decomposed in a similar fashion to the decomposition of the mixed derivative computation. The gradient of each message λ_i, μ_{j→i} in the junction tree decomposition is updated in parallel with the likelihood messages through the use of gradient messages g_i ≡ ∇_θ λ_i and g_{j→i} ≡ ∇_θ μ_{j→i}.

The algorithm for computing both the likelihood and its gradient, which we call JDiff for junction tree differentiation, is shown in Algorithm 1. Thus by recursively computing the messages and their gradients starting from leaf nodes of J to the root node r, we can obtain the exact likelihood and gradient vector for the CDF modelled by G.
3.3 Running time analysis
The space and time complexity of JDiff is dominated by Steps 1-3 in Algorithm 1: we quantify this
in the next Theorem.
Theorem 3.2. The time and space complexity of the JDiff algorithm is

O( max_i (|F_i| + 1)^{|C_i|} + max_{(i,j)∈E_J} (|N_i| − 1) · 2^{|C_i| − |S_{i,j}|} · 3^{|S_{i,j}|} ).   (8)

Proof. The complexity of Step 1 in Algorithm 1 is given by

Σ_{k=0}^{|C_i|} C(|C_i|, k) |F_i|^k = (|F_i| + 1)^{|C_i|},

which is the total number of terms in the expanded sum-of-products form for computing the mixed derivatives ∂_{x_A}[ψ_i] for all A ⊆ C_i. Step 2 has complexity bounded by

(|N_i| − 1) · max_{j∈N_i} Σ_{k=0}^{|S_{i,j}|} C(|S_{i,j}|, k) 2^{|C_i| − |S_{i,j}|} 2^k = (|N_i| − 1) · O( max_{j∈N_i} 2^{|C_i| − |S_{i,j}|} · 3^{|S_{i,j}|} ),   (9)

since the cost of computing derivatives for each A ⊆ C_i is a function of the size of the intersection with S_{i,j}: we have the number of ways that an intersection can be of size k, times the number of ways that we can choose the variables not in the separator S_{i,j}, times the cost for that size of overlap. Finally, Step 3 has complexity bounded by O(2^{|S_{i,j}|}). The total time and space complexity is then of the order given in (8).
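Since the bound depends only on clique sizes, factor counts and separator sizes, it can be used to score candidate junction trees before any differentiation is done. The helper below is our own utility, not part of the paper's MATLAB implementation:

```python
def jdiff_cost_bound(clique_sizes, factor_counts, neighbors, separators):
    """Evaluate the Theorem 3.2 bound.

    clique_sizes[i]  = |C_i|, factor_counts[i] = |F_i|,
    neighbors[i]     = |N_i|, separators[(i, j)] = |S_{i,j}|.
    """
    step1 = max((f + 1) ** c for f, c in zip(factor_counts, clique_sizes))
    step2 = max((neighbors[i] - 1) * 2 ** (clique_sizes[i] - s) * 3 ** s
                for (i, j), s in separators.items())
    return step1 + step2

# Example: a 3-clique chain with cliques of size 3 sharing single variables.
print(jdiff_cost_bound([3, 3, 3], [2, 3, 2], [1, 2, 1],
                       {(0, 1): 1, (1, 2): 1}))
```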
Algorithm 1: JDiff: A junction tree algorithm for computing the likelihood ∂_x[F(x|θ)] and its gradient ∇_θ ∂_x[F(x|θ)] for a CDN G. The lines marked Step 1, 2, 3 dominate the space and time complexity.

Input: A CDN G = (V, S, E), a junction tree T ∈ T(G) = (V_T, E_T) with node set V_T = {1, …, N} and edge set E_T, where each i ∈ V_T indexes a subset C_i ⊆ V. Let r ∈ V_T be the root of T, denote by T_i the subtree of T rooted at i, and let S_1, …, S_N be a partition of S such that S_i = {s ∈ S | N(s) ⊆ C_i and N(s) ⊄ C_j for all j < i}.
Data: Observations and parameters (x, θ)
Output: Likelihood and gradient (∂_x[F(x; θ)], ∇_θ ∂_x[F(x; θ)])

foreach Node i ∈ V_T, visited from the leaves of T towards the root r, do
    φ_i ← ∏_{s ∈ S_i} φ_s;
    foreach Subset A ⊆ C_i do                                      // Step 1
        μ_i(A) ← ∂_{x_A}[φ_i];
        g_i(A) ← ∇_θ ∂_{x_A}[φ_i];
    end
    foreach Neighbor j ∈ N_i that has already sent its message to i do
        S_{i,j} ← C_i ∩ C_j;
        foreach Subset A ⊆ C_i do                                  // Step 2
            μ_i(A) ← Σ_{B ⊆ A ∩ S_{i,j}} μ_{j→i}(B) μ_i(A ∖ B);
            g_i(A) ← Σ_{B ⊆ A ∩ S_{i,j}} ( μ_{j→i}(B) g_i(A ∖ B) + g_{j→i}(B) μ_i(A ∖ B) );
        end
        (the right-hand sides use the values of μ_i, g_i from before absorbing j's message)
    end
    if i ≠ r then
        Let p ∈ N_i be the unique neighbor of i on the path to r; S_{i,p} ← C_i ∩ C_p;
        foreach Subset B ⊆ S_{i,p} do                              // Step 3
            μ_{i→p}(B) ← μ_i( (C_i ∖ S_{i,p}) ∪ B );
            g_{i→p}(B) ← g_i( (C_i ∖ S_{i,p}) ∪ B );
        end
    else
        return ( μ_r(C_r), g_r(C_r) )
    end
end
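The core of Step 2 is a subset convolution combined with the product rule. A compact sketch of that inner update follows (our own illustrative Python, with messages stored as dictionaries keyed by frozensets and, for brevity, a single scalar parameter so that gradients are scalars):

from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return map(frozenset, chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1)))

# Sketch: absorb an incoming message (mu_in, g_in), defined on subsets
# of the separator, into the local messages (mu, g), defined on subsets
# of the clique; the old tables are read while the new ones are built.
def absorb(mu, g, mu_in, g_in, sep):
    new_mu, new_g = {}, {}
    for A in mu:
        m_tot, g_tot = 0.0, 0.0
        for B in subsets(A & sep):
            m_tot += mu_in[B] * mu[A - B]
            # product rule: d(uv) = u dv + v du
            g_tot += mu_in[B] * g[A - B] + g_in[B] * mu[A - B]
        new_mu[A], new_g[A] = m_tot, g_tot
    return new_mu, new_g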
Note that JDiff reduces to the algorithms of [6, 8] for non-loopy CDNs and its complexity then
becomes linear in the number of variables. For other types of graphs, the complexity grows exponentially with the tree-width.
4 Experiments
The experiments are divided into two parts. The first part evaluates the computational efficiency of
the JDiff algorithm for various graph topologies. The second set of experiments uses rainfall and
H1N1 epidemiology data to demonstrate the practical value of loopy CDNs, which JDiff makes tractable to learn from data for the first time.
4.1 Symbolic differentiation
As a first test, we compared the runtime of JDiff to that of commonly-used symbolic differentiation
tools such as Mathematica [16] and D* [4]. The task here was to symbolically compute ∂_x[F(x)] for a variety of CDNs. All three algorithms were run on a machine with a 2.66 GHz CPU and 16
for a variety of CDNs. All three algorithms were run on a machine with a 2.66 GHz CPU and 16
GB of RAM. The JDiff algorithm was implemented in MATLAB. A junction tree was constructed
by greedily eliminating the variables with the minimal fill-in algorithm and then constructing elimination subsets for nodes in the junction tree [10] using the MATLAB implementation of [14]. For
square grid-structured CDNs with CDN functions defined over pairs of adjacent variables, Mathematica and D* ran out of memory for grids larger than 3 ? 3. For the 3 ? 3 grid, JDiff took less
than 1 second to compute the symbolic derivative, whereas Mathematica and D* took 6.2 s. and 9.2
6
15
10
Log?likelihood
5
0
?5
NPN?BDG
NPN?MRF
GBDG?log
GMRF?log
MVlogistic
CDN?disc
CDN?tree
CDN?loopy
?10
?15
?20
?25
?30
(a)
0
?10
(c)
Log?likelihood
?20
?30
?40
NPN?BDG
NPN?MRF
GBDG?log
GMRF?log
MVlogistic
CDN?disc
CDN?tree
CDN?loopy
?50
?60
?70
?80
(b)
CDN
NPN-BDG
GBDG-log
(d)
Figure 2: Both a), b) report average test log-likelihoods achieved for the CDNs, the nonparanormal bidirected
and Markov models (NPN-BDG,NPN-MRF), Gaussian bidirected and Markov models for log-transformed
data (GBDG-log,GMRF-log) and the multivariate logistic distribution (MVlogistic) on leave-one-out crossvalidation of the a) rainfall and b) H1N1 datasets; c) Contour plots of log-bivariate densities under the CDN
model of Figure 1(c) for rainfall with observed measurements shown. Each panel shows the marginal PDF
? (?? , ?? ) = ???,? [? (?? , ?? )] under the CDN model for each CDN function ? and its neighbors ?, ?.
Each marginal PDF can be computed analytically by taking limits followed by differentiation; d) Graphs for
the H1N1 datasets with edges weighted according to mutual information under the CDN, nonparanormal and
Gaussian BDGs for log-transformed data. Dashed edges correspond to information of less than 1 bit.
s. each. We also found that JDiff could tractably (i.e.: in less than 20 min. of CPU time) compute
derivatives for graphs as large as 9 × 9. We also compared the time to compute mixed derivatives in loops of length ℓ = 10, 11, …, 20. The time required by JDiff varied from 0.81 s. to 2.83 s. to compute the total mixed derivative, whereas the time required by Mathematica varied from 1.2 s. to 580 s., and that required by D* from 6.7 s. to 12.7 s.
4.2 Learning models for rainfall and H1N1 data
The JDiff algorithm allows us to compute mixed derivatives of a joint CDF for applications in
which we may need to learn multivariate heavy-tailed distributions defined on loopy graphs. The
graphical structures in our examples are based on geographical location of variables that impose
dependence constraints based on spatial proximity. To model pairs of heavy-tailed variables, we
used the bivariate logistic distribution with Gumbel margins [2], given by
φ_s(x, y) = exp( −( (e^{−x/σ_{s,x}})^{1/θ_s} + (e^{−y/σ_{s,y}})^{1/θ_s} )^{θ_s} ),    σ_{s,x} > 0, σ_{s,y} > 0, 0 < θ_s < 1.    (10)
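As a sketch of evaluating such a factor (using our reconstruction of Eq. (10); the parameter names and the finite-difference check are ours), both the CDF value and its mixed derivative can be computed pointwise:

import math

# Sketch: the bivariate Gumbel-margin logistic factor of Eq. (10).
def phi(x, y, sx, sy, theta):
    a = math.exp(-x / (sx * theta))   # (e^{-x/sx})^{1/theta}
    b = math.exp(-y / (sy * theta))
    return math.exp(-((a + b) ** theta))

# mixed derivative d^2 phi / (dx dy) via central finite differences
def phi_xy(x, y, sx, sy, theta, h=1e-5):
    return (phi(x + h, y + h, sx, sy, theta) - phi(x + h, y - h, sx, sy, theta)
            - phi(x - h, y + h, sx, sy, theta) + phi(x - h, y - h, sx, sy, theta)) / (4 * h * h)

print(phi(0.5, 1.0, 1.0, 1.0, 0.7), phi_xy(0.5, 1.0, 1.0, 1.0, 0.7))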
Models constructed by computing products of functions of the above type have the properties of
both being heavy-tailed multivariate distributions and satisfying marginal independence constraints
between variables that share no function nodes [8]. Here we examined the data studied in [8], which
consisted of spatial measurements for rainfall and for H1N1 mortality. The rainfall dataset consists
of 61 daily measurements of rainfall at 22 sites in China and the H1N1 dataset consists of 29 weekly
mortality rates in 11 cities in the Northeastern US during the 2008-2009 epidemic. Starting from the
non-loopy CDNs used in [8] (Figures 1(c) and 1(d), shown in blue), we added function nodes and edges to obtain loopy CDNs (shown in red in Figures 1(c) and 1(d)) capable of expressing many more marginal dependencies at the cost of creating numerous loops in the graph.
All CDN models (non-loopy and loopy) were learned from data using stochastic gradients to update
model parameters using settings described in the Supplemental Information.
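Schematically, each update uses the likelihood and gradient returned by JDiff for one observation. In the sketch below, jdiff stands in for Algorithm 1, and the step size, epoch count and (omitted) projection onto valid parameters are illustrative assumptions:

import random

# Sketch: stochastic gradient ascent on the log-likelihood, where
# jdiff(x, theta) returns (likelihood, gradient) for one observation;
# the log-likelihood gradient is g / mu.
def sgd_fit(theta, data, jdiff, lr=0.01, epochs=50):
    for _ in range(epochs):
        random.shuffle(data)
        for x in data:
            mu, g = jdiff(x, theta)
            theta = [t + lr * gi / mu for t, gi in zip(theta, g)]
        # a projection enforcing e.g. sigma > 0, 0 < theta < 1 would go here
    return theta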
The loopy CDN model was compared via leave-one-out cross-validation to non-loopy CDNs of [8]
and disconnected CDNs corresponding to independence models. To compare with other multivariate
approaches for modelling heavy-tailed data, we also tested the following:
• Gaussian bidirected (BDG) and Markov (MRF) models with the same topology as the loopy CDNs, fit to log-transformed data x̃ = log(x + ε_k) for ε_k = 10^{−k}, k = 1, 2, 3, 4, 5, where we show the results for the k that yielded the best test likelihood. Models were fitted using the algorithms of [3] and [15]. For the Gaussian BDGs, the covariance matrices Σ were constrained so that (Σ)_{u,v} = 0 whenever there is no edge connecting variable nodes u, v. For the Gaussian MRFs, the constraints were (Σ^{−1})_{u,v} = 0.
• Structured nonparanormal distributions [11], which use a Gaussian copula model, where the structure was specified by the same BDG and MRF graphs and estimation of the covariance was performed using the algorithms for Gaussian MRFs and BDGs on nonlinearly transformed data. The nonlinear transformation is given by f_u(x_u) = μ̂_u + σ̂_u Φ^{−1}(F̂_u(x_u)), where Φ is the standard normal CDF, F̂_u is the Winsorized estimator [11] of the CDF for random variable x_u, and μ̂_u, σ̂_u are the empirical mean and standard deviation for x_u (a sketch of this transform appears after this list). Although the nonparanormal allows for structure learning as part of model fitting, for the sake of comparison the structure of the model was set to be the same as that of the BDG and MRF models.
• The multivariate logistic CDF [13], which is heavy-tailed but does not model local dependencies.
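A minimal sketch of the nonparanormal transform referenced above (the Winsorization threshold follows [11]; the empirical-CDF construction is our simplification):

import numpy as np
from scipy.stats import norm

# Sketch: f_u(x) = mean + std * Phi^{-1}(F_hat(x)), with the empirical
# CDF Winsorized away from 0 and 1 so the probit stays finite.
def nonparanormal_transform(col):
    n = col.size
    delta = 1.0 / (4.0 * n ** 0.25 * np.sqrt(np.pi * np.log(n)))  # [11]
    ranks = np.argsort(np.argsort(col)) + 1.0
    f_hat = np.clip(ranks / (n + 1.0), delta, 1.0 - delta)
    return col.mean() + col.std() * norm.ppf(f_hat)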
Here we designed the BDG and MRF models to have the same graphical structure as the loopy
CDN model such that all three model classes represent the same set of local dependencies even
though the set of global dependencies is different for a BDG, MRF and CDN of the same connectivity. Additional details about these comparisons are provided in the Supplemental Information.
The resulting average test log-likelihoods on leave-one-out cross-validation achieved by the above
models are shown in Figures 2(a) and 2(b). Here, capturing the additional local dependencies and
heavy-tailedness using loopy CDNs leads to significantly better fits (p < 10^{−8}, two-sided sign test).
To further explore the loopy CDN model, we can visualize its log-bivariate densities for the rainfall data in tandem with the observed data (Figure 2(c)).
The marginal bivariate density for each pair of neighboring variables is obtained by taking limits
of the learned multivariate CDF and differentiating the resulting bivariate CDF. We can also examine the resulting models by comparing the mutual information (MI) between pairs of neighboring
variables in the graphical models for the H1N1 dataset. This is shown in Figure 2(d) in the form
of undirected weighted graphs where edges are weighted proportional to the MI between the two
variable nodes connected by that edge. For the CDN, MI was computed by drawing 50,000 samples from the resulting density model via the Metropolis algorithm; for Gaussian models, the MI
was obtained analytically. As can be seen, the loopy CDN model differs significantly from the
nonparanormal and Gaussian BDGs for log-transformed data in the MI between pairs of variables
(Figure 2(d)). Not only are the MI values under the loopy CDN model significantly higher as compared to those under the Gaussian models, but also high MI is assigned to the edge corresponding
to the Newark, NJ / Philadelphia, PA air corridor, which is a likely source of H1N1 transmission between cities [1] (edge shown in black in Figure 2(d)). In contrast, this edge is largely missed by the
nonparanormal and log-transformed Gaussian BDGs.
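For reference, a hedged sketch of the per-edge MI estimate from samples (the histogram binning is our assumption; base-2 logarithms match the 1-bit threshold above):

import numpy as np

# Sketch: mutual information (in bits) between two variables from
# samples, e.g. 50,000 Metropolis draws, via histogram binning.
def mutual_information(xs, ys, bins=20):
    joint, _, _ = np.histogram2d(xs, ys, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))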
5 Discussion
The above results for the rainfall and H1N1 datasets, combined with the lower runtime of JDiff
compared to standard symbolic differentiation algorithms, highlight A) the usefulness of JDiff as an
algorithm for exact inference and learning for loopy CDNs and B) the usefulness of loopy CDNs
in which multiple local functions can be used to model local dependencies between variables in
the model. Future work could include learning the structure of compact probability models in the
sense of graphs with bounded treewidth, with practical applications to other problem domains (e.g., finance, seismology) in which data are heavy-tailed and high-dimensional, and comparisons to existing techniques for doing this [11]. Another line of research would be to further study the connection between CDNs and other copula-based models (e.g., [9]). Finally, given the demonstrated value of
adding dependency constraints to CDNs, further development of faster approximate algorithms for
loopy CDNs will also be of practical value.
References
[1] Colizza, V., Barrat, A., Barthelemy, M. and Vespignani, A. (2006) Prediction and predictability
of global epidemics: the role of the airline transportation network. Proceedings of the National
Academy of Sciences USA (PNAS) 103, 2015-2020.
[2] de Haan, L. and Ferreira, A. (2006) Extreme value theory. Springer.
[3] Drton, M. and Richardson, T.S. (2004) Iterative conditional fitting for Gaussian ancestral graph
models. Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI),
130-137.
[4] Guenter, B. (2007) Efficient symbolic differentiation for graphics applications. ACM Transactions on Graphics 26(3).
[5] Hardy, M. (2006) Combinatorics of partial derivatives. Electronic Journal of Combinatorics 13.
[6] Huang, J.C. and Frey, B.J. (2008) Cumulative distribution networks and the derivative-sumproduct algorithm. Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial
Intelligence (UAI), 290-297.
[7] Huang, J.C. (2009) Cumulative distribution networks: Inference, estimation and applications
of graphical models for cumulative distribution functions. University of Toronto Ph.D. thesis.
http://hdl.handle.net/1807/19194
[8] Huang, J.C. and Jojic, N. (2010) Maximum-likelihood learning of cumulative distribution functions on graphs. Journal of Machine Learning Research W&CP Series 9, 342-349.
[9] Kirschner, S. (2007) Learning with tree-averaged densities and distributions. Advances in Neural
Information Systems Processing (NIPS) 20, 761-768.
[10] Koller, D. and Friedman, N. (2009). Probabilistic Graphical Models: Principles and Techniques, MIT Press.
[11] Liu, H., Lafferty, J. and Wasserman, L. (2009) The nonparanormal: Semiparametric estimation
of high dimensional undirected graphs. Journal of Machine Learning Research (JMLR) 10, 2295-2328.
[12] Lauritzen, S.L. and Spiegelhalter, D.J. (1988) Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society
Series B (Methodological) 50(2), 157-224.
[13] Malik, H.J. and Abraham, B. (1978) Multivariate logistic distributions. Annals of Statistics
1(3), 588-590.
[14] Murphy, K.P. (2001) The Bayes Net Toolbox for MATLAB. Computing science and statistics.
[15] Speed, T.S. and Kiiveri, H.T. (1986) Gaussian Markov distributions over finite graphs. Annals
of Statistics 14(1), 138-150.
[16] Wolfram Research, Inc. (2008) Mathematica, Version 7.0. Champaign, IL.